From libMesh Wiki

Revision as of 03:26, 25 November 2010

There are official installation instructions, but if you are reading this, you probably want more detail than that. If you have problems installing libMesh, I suggest using exactly the procedure below. Once it works, you can experiment with other possibilities (changing the compiler, playing with configure options, etc.)

Please add your own experience here, if you use a different configuration.

Tips for building PETSc 2.3.3 on GNU/Linux

In this version of PETSc, both the configure script and the Makefile exit if they detect that you are trying to build PETSc as root. Unfortunately, the 'make install' target provided by PETSc 2.3.3 will not produce a working PETSc installation after building the library in your home directory. I resorted to commenting out the checks for root in config/ (around line 95) and in the 'all' target of the Makefile. I also did a monolithic build, having PETSc download and build MPI and BLAS/LAPACK during its configuration process. The following are the configure options used for debug mode:


Note: you will need ~1 GB of disk space to build everything.

<pre>
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux-gnu-dbg
./config/ --with-cc=gcc --with-fc=gfortran --with-mpi-compilers=0 \
   --with-shared=1 \
   --with-debugging=1 \
   --with-superlu=1 --download-superlu=1 \
   --with-superlu_dist=1 --download-superlu_dist=1 \
   --with-umfpack=1 --download-umfpack=1 \
   --with-spooles=1 --download-spooles=1 \
   --with-hypre=1 --download-hypre=1 \
   --with-mpi --download-mpich=1
</pre>


And, for optimized mode:



<pre>
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux-gnu-opt
./config/ --with-cc=gcc --with-fc=gfortran --with-mpi-compilers=0 \
   --with-shared=1 \
   --with-debugging=0 \
   --with-superlu=1 --download-superlu=1 \
   --with-superlu_dist=1 --download-superlu_dist=1 \
   --with-umfpack=1 --download-umfpack=1 \
   --with-spooles=1 --download-spooles=1 \
   --with-hypre=1 --download-hypre=1 \
   --with-mpi --download-mpich=1
</pre>


Note that PETSc does not correctly set the MPI_LIB variable in the ${PETSC_DIR}/bmake/${PETSC_ARCH}/petscconf file. libMesh currently relies on this variable to discover how to link MPI applications, but this should probably change, as PETSc seems to support this variable less and less with each release. You can either set MPI_LIB by hand, build MPICH *outside* of PETSc instead of having PETSc download it (in which case MPI_LIB should still be set), or rework the libMesh Makefile to use the PCC_* variables, which seem to be better supported (at least according to one PETSc developer I spoke with).
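To see whether your build is affected, you can inspect petscconf directly and extract the variable the same way a Makefile include would. The sketch below fabricates a tiny petscconf purely for illustration; the real file lives at ${PETSC_DIR}/bmake/${PETSC_ARCH}/petscconf and its contents depend on your build.

<pre>
# Fabricated stand-in for ${PETSC_DIR}/bmake/${PETSC_ARCH}/petscconf:
mkdir -p demo/bmake/linux-gnu-dbg
cat > demo/bmake/linux-gnu-dbg/petscconf <<'EOF'
MPI_LIB = -L/opt/mpich/lib -lmpich
EOF
# Extract the value as libMesh would consume it; if this prints an
# empty string for your real file, set MPI_LIB by hand before building.
MPI_LIB=$(sed -n 's/^MPI_LIB *= *//p' demo/bmake/linux-gnu-dbg/petscconf)
echo "MPI_LIB=${MPI_LIB}"
</pre>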

--Jwpeterson 13:45, 2 July 2007 (PDT)

libmesh from cvs, PETSc 2.3.1, CentOS 4.2


  • My system is running CentOS 4.2
  • I have previously built mpich-1.2.7 from source
  • Intel MKL v7.2.1 is installed and provides BLAS/LAPACK
  • The goal is to install several 'flavors' of PETSc
    • optimized & debug builds with the native gcc compilers
    • optimized & debug builds with the Intel 9.0.021 series compilers
  • For the gcc build I will install a number of optional packages that extend PETSc

PETSc Installation

Download and unpack PETSc-2.3.1 from the PETSc web site.

gcc build

Note that in this case I let the PETSc installer download & compile UMFPACK, SPOOLES, SUPERLU, SUPERLU_DIST, and HYPRE, which are all optional packages that provide enhanced linear solver/preconditioner functionality.

<pre>
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux_gcc_dbg
./config/ --with-cc=gcc --with-fc=g77 --with-mpi-compilers=0 \
   --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-gcc \
   --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
   --with-shared=1 \
   --with-debugging=1 \
   --with-superlu=1 --download-superlu=1 \
   --with-superlu_dist=1 --download-superlu_dist=1 \
   --with-umfpack=1 --download-umfpack=1 \
   --with-spooles=1 --download-spooles=1 \
   --with-hypre=1 --download-hypre=1
make
make testexamples_uni
export PETSC_ARCH=linux_gcc_opt
./config/ --with-cc=gcc --with-fc=g77 --with-mpi-compilers=0 \
   --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-gcc \
   --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
   --with-shared=1 \
   --with-debugging=0 \
   --with-superlu=1 --download-superlu=1 \
   --with-superlu_dist=1 --download-superlu_dist=1 \
   --with-umfpack=1 --download-umfpack=1 \
   --with-spooles=1 --download-spooles=1 \
   --with-hypre=1 --download-hypre=1
make
make testexamples_uni
</pre>

icc build

<pre>
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux_icc_dbg
./config/ --with-cc=icc --with-fc=ifort --with-mpi-compilers=0 \
   --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-intel \
   --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
   --with-shared=1
make
make testexamples_uni
export PETSC_ARCH=linux_icc_opt
./config/ --with-cc=icc --with-fc=ifort --with-mpi-compilers=0 \
   --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-intel \
   --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
   --with-shared=1
make
make testexamples_uni
</pre>

Note that I specified the compilers, MPI, and BLAS/LAPACK installations directly. Also, I told PETSc to build shared libraries and not to use the MPI compiler wrappers (mpicc, mpif77). 'make' builds the libraries and places them in $PETSC_DIR/lib/$PETSC_ARCH, and 'make testexamples_uni' runs all the uniprocessor examples.
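Because each flavor above depends on PETSC_DIR and PETSC_ARCH being set consistently, a small guard in your build script can save confusion. This is only a sketch of a hypothetical helper; neither PETSc nor libMesh ships such a check.

<pre>
# Hypothetical helper: refuse to build when the PETSc environment
# variables are unset (paths below are example values only).
check_petsc_env() {
  for v in PETSC_DIR PETSC_ARCH; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "error: $v is not set" >&2
      return 1
    fi
  done
  echo "PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH"
}
PETSC_DIR=/my/petsc/dir PETSC_ARCH=linux_gcc_dbg check_petsc_env
</pre>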

libMesh Installation

Download the latest CVS version of libMesh:

<pre>
cvs co libmesh
</pre>

gcc with debugging

<pre>
cd /my/libmesh/dir
export PETSC_DIR=/my/petsc/dir
export PETSC_ARCH=linux_gcc_dbg
export METHOD=dbg
make
</pre>

gcc optimized

<pre>
cd /my/libmesh/dir
export PETSC_DIR=/my/petsc/dir
export PETSC_ARCH=linux_gcc_opt
export METHOD=opt
make
</pre>

--Benkirk 07:41, 24 February 2006 (PST)

libmesh from cvs, Ubuntu Linux (Debian)

This should work for any Debian-based distro, but was specifically tested on Ubuntu Dapper.

To obtain the libraries from the repositories, add the universe component to /etc/apt/sources.list. (Instead of dapper you should choose your own edition: breezy was the stable release before April 2006, dapper after its release in April 2006.)

<pre>
deb dapper universe
deb-src dapper universe
</pre>

Now install petsc (libraries and headers). This will also install g77, blas/lapack/atlas and mpich.

<pre> apt-get install libpetsc2.3.0-dev </pre>

Then set the environment variables (bash or csh syntax):

<pre>
# bash:
export PETSC_DIR=/usr/lib/petsc
export PETSC_ARCH=linux-gnu
# csh:
setenv PETSC_DIR /usr/lib/petsc
setenv PETSC_ARCH linux-gnu
</pre>

Next, switch to the libmesh directory, then configure and build:

<pre>
cd /my/path/to/libmesh
./configure
make
</pre>

For me, everything works fine now.

libmesh from cvs, petsc 2.3.0, slepc 2.3.0 with MPI enabled, debian

This works on Debian sid, but I believe it would work similarly on other distributions. You need the mpich-bin and libmpich1.0-dev packages. Tested on the i386 and amd64 architectures with the gcc compilers. (Last tested on Jan 20, 2006.)

Note (M. Truffer): The following instructions worked very well for me; only the examples will not link without installing zlib1g-dev. I am running Ubuntu on an AMD64 processor.

<pre>
cvs co libmesh
cd libmesh
cd contrib
wget
tar xzf petsc-lite.tar.gz
cd petsc-2.3.0
export PETSC_ARCH=linux
export PETSC_DIR=`pwd`
config/ --with-cc=gcc --with-cxx=g++ --with-fc=g77 --with-mpi-dir=/usr/lib/mpich/ \
 --with-debugging=1 --with-shared
make
cd ..
wget
tar xzf slepc.tgz
cd slepc-2.3.0
export SLEPC_DIR=`pwd`
config/
make
cd ../..
./configure --enable-slepc
make
make run_examples
</pre>

All examples should run after executing 'make run_examples' except ex15, which requires the library to be compiled with second-derivative support (we didn't pass --enable-second to configure).

Compilation times on an AMD Athlon(tm) 64 Processor 3800+, for reference (min:sec):

petsc configure 0:20
petsc make 2:48
slepc configure 0:04
slepc make 0:16
libmesh configure 0:07
libmesh make 5:33

libmesh from cvs, petsc 2.3.0, slepc 2.3.0 with MPI disabled, debian

The procedure is basically the same; configure petsc with

<pre>
config/ --with-mpi=0 --with-debugging=1 --with-shared
</pre>

But libMesh is unfortunately not yet prepared for this, and you will get compilation errors. From one of libMesh's developers:

"LibMesh will try to get its MPI configuration from PETSc if you have petsc. We need a way of disabling MPI in libmesh if it is disabled in PETSc..."

libmesh Installation on Debian Lenny/Testing

Summary: Things have changed in Lenny compared to Etch; mainly, the open-source MPI implementation OpenMPI is now used for PETSc by default instead of MPICH. Those who recently upgraded from Etch (rather than using a fresh Lenny install) should make sure to purge all remnants of MPICH and LAM using "aptitude purge XXX"; this will save you hours of assuming your machine is a clean slate:

1) I used the latest version of LibMesh (0.6.2).

2) Installing the package petsc-dev takes care of installing OpenMPI and a slew of other dependencies for you. I also had to include zlib or there would be a linking error.

<pre> aptitude install petsc-dev zlib1g </pre>

You'll have to figure out how to configure OpenMPI for multiple machines, but by default it works fine on a multi-core machine... this is an improvement over Etch with MPICH!
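For the multi-machine case, OpenMPI takes a hostfile listing the nodes and their slot counts. The sketch below writes one; the node names and slot counts are hypothetical, and on a single multi-core box no hostfile is needed at all.

<pre>
# Hypothetical OpenMPI hostfile for two 4-core nodes:
cat > hostfile.demo <<'EOF'
node01 slots=4
node02 slots=4
EOF
cat hostfile.demo
# Then, e.g.:  mpiexec --hostfile hostfile.demo -np 8 ./ex9-opt
</pre>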

3) Unpack the LibMesh tarball and do the following (bash syntax):

<pre>
export PETSC_ARCH="linux-gnu-c-opt"
export PETSC_DIR="/usr/lib/petscdir/2.3.3/"
./configure
</pre>

The whole mess you had to go through on Etch to point to the right MPI compiler wrappers has been resolved in Lenny. Nonetheless, check the configure output carefully to make sure everything went well; Lenny is a moving target, so things might change.

4) Compile and test (if you have multiple processors, use the -j option to let make know):

<pre>
make -j 8
cd examples/ex9
mpiexec -np 8 ./ex9-opt
</pre>

Note that with OpenMPI you use "mpiexec" instead of "mpirun".

--Nasser Mohieddin Abukhdeir 12AUG2008

libmesh Installation on Debian Etch/Stable

1) I used the latest version of LibMesh (0.6.2).

2) Installing the package petsc-dev takes care of installing MPICH for you:

<pre> aptitude install petsc-dev </pre>

Edit the file /etc/mpich/machines.linux to describe your network topology. I just want to use my SMP machine, so I added this line:

<pre> localhost:8 </pre>

where the ":8" reflects the number of processors my system has.
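Rather than hard-coding the count, you can derive it from the system. A sketch, writing to a demo file since /etc/mpich/machines.linux needs root:

<pre>
# Count online processors and emit a matching machines-file line.
NPROCS=$(getconf _NPROCESSORS_ONLN)
echo "localhost:${NPROCS}" > machines.linux.demo
cat machines.linux.demo
</pre>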

3) Unfortunately, petsc-dev does not install rsh, which is required by MPICH:

<pre> aptitude install rsh-server rsh-client </pre>

4) Create a file ~/.rhosts and add a line:

<pre> localhost USERNAME </pre>

where you substitute in your user name. You might also have to add "localhost" to your /etc/hosts.allow file for this to work. At this point you should be able to run "tstmachines.mpich" to verify that MPI is working.

5) Unpack the LibMesh tarball and do the following (depending on your shell):

<pre>
export PETSC_ARCH="linux-gnu-c-opt"
export PETSC_DIR="/usr/lib/petscdir/2.3.2/"
./configure --with-mpi=/usr/lib/mpich --with-cxx=mpicxx.mpich \
  --with-cc=mpicc.mpich --with-f77=mpif77.mpich --with-CC=mpiCC.mpich
</pre>

This took me a while to figure out: you need to make sure LibMesh knows you are using MPICH, because without the first option it will be configured for LAM-MPI. The other options are intuitive, except that you need to use the suffix ".mpich" on all of the compilers so that you do not end up using the LAM wrapper versions.
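A quick way to see which wrapper variants actually exist on your PATH before configuring (the ".mpich"/".lam" suffixed names are the Debian-packaged variants; your system may have neither installed):

<pre>
# List which MPI compiler wrappers are available.
for w in mpicc mpicc.mpich mpicc.lam; do
  if command -v "$w" >/dev/null 2>&1; then
    echo "$w -> $(command -v "$w")"
  else
    echo "$w: not found"
  fi
done
</pre>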

6) Compile and test (if you have multiple processors use the -j option to let make know)

<pre>
make -j 8
cd examples/ex9
mpirun -np 8 ./ex9-opt
</pre>

--Nasser Mohieddin Abukhdeir

"long long" Compilation errors with MPICH

In MPICH version 1.2.5 and later (and possibly earlier), the mpio.h file declares the MPI_Offset type as long long. This is not ISO C++98, and so GCC rejects it under the flags we use to compile libmesh. You will see an error message similar to this:

<pre> "/usr/local/mpich-1.2.7/include/mpio.h:42: error: ISO C++ does not support `long long'" </pre>

The fix we have employed in the past is to edit the mpio.h file directly at line 42, changing MPI_Offset to simply a long. We realize that not everyone has write access to the MPICH headers, so a better fix is still needed. We are also aware that changing this type may create problems with the code, but we have not noticed any yet.
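If you lack write access to the installed header, the same edit can be applied to a private copy placed earlier on the include path. A sketch; the one-line header below is a stand-in for the real mpio.h, not its actual contents:

<pre>
# Copy the offending declaration into a private include dir and apply
# the fix; then compile with -I"$PWD/myinclude" ahead of the MPICH dir.
mkdir -p myinclude
cat > myinclude/mpio.h <<'EOF'
typedef long long MPI_Offset;
EOF
sed -i 's/long long MPI_Offset/long MPI_Offset/' myinclude/mpio.h
cat myinclude/mpio.h
</pre>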

Note: You won't get this error if you use the installation instructions above (at least on debian with mpich 1.2.7). (O. Certik)

Note: The error message can also be suppressed with the -Wno-long-long compiler switch for the GNU compiler. Add this switch to the debug-mode section in Make.common. In my installation this looks like:

<pre>
ifeq ($(debug-mode),on)
  CXXFLAGS += -DDEBUG [cut more options] -Wno-long-long
endif
</pre>

libmesh from cvs on TACC's em64t cluster

The following instructions show how to build libMesh for this machine. They assume you use bash as your shell.

<pre>
cvs co libmesh
cd libmesh
export PETSC_DIR=/home/utexas/ti/benkirk/petsc/2.3.2
export PETSC_ARCH=em64t
export CXXFLAGS=-DMPICH_SKIP_MPICXX
unset INCLUDE
CXX=mpiCC CC=mpicc F77=mpif77 ./configure
make
</pre>

You may ignore any warnings about overriding -O3 with -O2. These occur because mpiCC specifies -O3, which is then followed by the libMesh compiler flags, which specify -O2.

To test your install and ability to run in parallel you can test one of the examples:

<pre>
cd examples/ex4
make
bsub -I -n 4 -W 0:05 -q development ibrun "./ex4-opt -d 3 -n 20 -log_summary"
</pre>

--Benkirk Sat Jul 7 21:10:08 CDT 2007

libmesh from svn on Ranger

Ranger is TACC's latest supercomputer; check out the Ranger user guide for more information. The software on Ranger is in a relatively rapid state of flux, so these instructions may become outdated or superseded by new information in a relatively short amount of time... We may eventually be able to build a libmesh module on Ranger that would simplify this process.

First, unload/load the appropriate system software modules.

<pre>
module unload pgi
module unload mvapich2
module load intel    # Intel 10.1 at the time of this writing
module load mvapich
module load petsc
module load slepc    # optional module
</pre>

Next, pull down libmesh from the SourceForge site and configure it:

<pre>
svn checkout
CXX=mpicxx CC=mpicc F77=mpif77 ./configure --enable-everything \
                                           --enable-second     \
                                           --disable-perflog   \
                                           --disable-bzip2
</pre>
Intel 10.1 has a difficult time with the variational smoother code for some reason. Here we've decided to simply disable it to speed up compiling; you'll need to recompile later if you decide you need the variational smoother capability.

Finally, build! <pre> nice make -j 4 </pre>

There is a known issue with ICC that may prevent the utility programs in src/apps from building correctly. You can either ignore the error message about not being able to find '-L/some/random/dir' or change the "bin/%" Makefile rule from:

<pre> bin/% : src/apps/ $(mesh_library)

	@echo "Building $@"

@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(libmesh_INCLUDE) $< -o $@ $(libmesh_LIBS) $(libmesh_LDFLAGS) $(libmesh_DLFLAGS) </pre>

to a two-part rule like:


<pre>
bin/% : src/apps/%.C $(mesh_library)
	@echo "Building $@"
	@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(libmesh_INCLUDE) -c $< -o $(patsubst %.C,%.o,$<)
	@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(patsubst %.C,%.o,$<) -o $@ $(libmesh_LIBS) $(libmesh_DLFLAGS) $(libmesh_LDFLAGS)
</pre>
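The effect of the compile-then-link split can be seen with a stripped-down stand-in for the real rule (the $(mesh_library) prerequisite, libMesh variables, and compiler flags are omitted; file names here are hypothetical):

<pre>
# Build a tiny source tree and a Makefile containing only the pattern
# rule: compile src/apps/%.C to an object file, then link it to bin/%.
mkdir -p demo2/src/apps demo2/bin
cat > demo2/src/apps/hello.C <<'EOF'
#include <iostream>
int main() { std::cout << "hello" << std::endl; return 0; }
EOF
cat > demo2/Makefile <<'EOF'
bin/% : src/apps/%.C
	@echo "Building $@"
	@g++ -c $< -o $(patsubst %.C,%.o,$<)
	@g++ $(patsubst %.C,%.o,$<) -o $@
EOF
make -C demo2 bin/hello
./demo2/bin/hello
</pre>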

Coming soon: A sample job submission script/command and output.

--Jwpeterson 12:25, 27 May 2008 (PDT)

