Installation

There are installation instructions at http://libmesh.sourceforge.net/installation.php, but if you are reading this, you want to hear more than that. If you have problems installing libMesh, I suggest using exactly the procedure below. If it works, you can then experiment with other possibilities (changing the compiler, playing with configure options, etc.)

Please add your own experience here, if you use a different configuration.

Tips for building PETSc 2.3.3 on GNU/Linux

In this version of PETSc, the configure script and the Makefile both exit if they detect that you are trying to build PETSc as root. Unfortunately the 'make install' target provided by PETSc 2.3.3 will not produce a working PETSc installation after building the library in your home directory. I resorted to commenting out the checks for root in config/configure.py (around line 95) and in the 'all' target of the Makefile. I also did a monolithic build, which included having PETSc download and build MPI and BLAS/LAPACK during its configuration process. Following are the configure options used for debug mode:

# (DEBUG MODE)
# You will need ~1Gb to build everything
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux-gnu-dbg
./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi-compilers=0 \
    --with-shared=1 \
    --with-debugging=1 \
    --with-superlu=1 --download-superlu=1 \
    --with-superlu_dist=1 --download-superlu_dist=1 \
    --with-umfpack=1 --download-umfpack=1 \
    --with-spooles=1 --download-spooles=1 \
    --with-hypre=1 --download-hypre=1 \
    --with-mpi --download-mpich=1 \
    --download-f-blas-lapack=yes

And, for optimized mode:

# (OPTIMIZED MODE)
cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux-gnu-opt
./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi-compilers=0 \
    --with-shared=1 \
    --with-debugging=0 \
    --with-superlu=1 --download-superlu=1 \
    --with-superlu_dist=1 --download-superlu_dist=1 \
    --with-umfpack=1 --download-umfpack=1 \
    --with-spooles=1 --download-spooles=1 \
    --with-hypre=1 --download-hypre=1 \
    --with-mpi --download-mpich=1 \
    --download-f-blas-lapack=yes
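After configure finishes, build and test the library as in the 2.3.1 section further down (a hedged sketch; configure usually prints the exact make command to run at the end, so prefer that if it differs):

make
make testexamples_uni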

Note that PETSc does not correctly set the MPI_LIB variable in the ${PETSC_DIR}/bmake/${PETSC_ARCH}/petscconf file. libMesh currently relies on this variable to discover how to link MPI applications, but this should probably change, as PETSc seems to be supporting this variable less and less with each release. You can either set MPI_LIB by hand, build MPICH *outside* of PETSc instead of having PETSc download it (in which case MPI_LIB should still be set), or rework the libMesh Makefile to use the PCC_* variables, which seem to be better supported (at least according to one PETSc developer I spoke with).
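For reference, setting MPI_LIB by hand amounts to adding (or fixing) a line like the following in the petscconf file; the path here is a hypothetical placeholder, so substitute the directory where MPICH actually ended up:

# in ${PETSC_DIR}/bmake/${PETSC_ARCH}/petscconf
MPI_LIB = -L/path/to/mpich/lib -lmpich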


--Jwpeterson 13:45, 2 July 2007 (PDT)


libmesh from cvs, PETSc 2.3.1, CentOS 4.2

Preliminaries

  • My system is running CentOS 4.2
  • I have previously built mpich-1.2.7 from source
  • Intel MKL v7.2.1 is installed and provides BLAS/LAPACK
  • The goal is to install several 'flavors' of PETSc
    • optimized & debug builds with the native gcc compilers
    • optimized & debug builds with the Intel 9.0.021-series compilers
  • In the case of the gcc build I will install a number of optional packages that extend PETSc

PETSc Installation

Download and unpack PETSc-2.3.1 from http://www-unix.mcs.anl.gov/petsc/petsc-2/download/index.html

gcc build

Note that in this case I let the PETSc installer download & compile UMFPACK, SPOOLES, SUPERLU, SUPERLU_DIST, and HYPRE, which are all optional packages that provide enhanced linear solver/preconditioner functionality.

cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux_gcc_dbg
./config/configure.py --with-cc=gcc --with-fc=g77 --with-mpi-compilers=0 \
    --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-gcc \
    --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
    --with-shared=1 \
    --with-debugging=1 \
    --with-superlu=1 --download-superlu=1 \
    --with-superlu_dist=1 --download-superlu_dist=1 \
    --with-umfpack=1 --download-umfpack=1 \
    --with-spooles=1 --download-spooles=1 \
    --with-hypre=1 --download-hypre=1
make
make testexamples_uni
export PETSC_ARCH=linux_gcc_opt
./config/configure.py --with-cc=gcc --with-fc=g77 --with-mpi-compilers=0 \
    --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-gcc \
    --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
    --with-shared=1 \
    --with-debugging=0 \
    --with-superlu=1 --download-superlu=1 \
    --with-superlu_dist=1 --download-superlu_dist=1 \
    --with-umfpack=1 --download-umfpack=1 \
    --with-spooles=1 --download-spooles=1 \
    --with-hypre=1 --download-hypre=1
make
make testexamples_uni

icc build

cd /my/petsc/dir
export PETSC_DIR=`pwd`
export PETSC_ARCH=linux_icc_dbg
./config/configure.py --with-cc=icc --with-fc=ifort --with-mpi-compilers=0 \
    --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-intel \
    --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
    --with-shared=1 \
    --with-debugging=1
make
make testexamples_uni
export PETSC_ARCH=linux_icc_opt
./config/configure.py --with-cc=icc --with-fc=ifort --with-mpi-compilers=0 \
    --with-mpi-dir=/software/ia32/mpi/mpich-1.2.7-intel \
    --with-blas-lapack-dir=/software/ia32/intel/mkl-7.2.1 \
    --with-shared=1 \
    --with-debugging=0
make
make testexamples_uni

Note that I specified the compilers, MPI, and BLAS/LAPACK installations directly. Also, I told PETSc to build shared libraries and not to use the MPI compiler wrappers (mpicc, mpif77). 'make' builds the libraries and places them in $PETSC_DIR/lib/$PETSC_ARCH, and 'make testexamples_uni' tests all the uniprocessor examples.
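As a quick sanity check (a hedged suggestion; the exact library names depend on your configure options), you can confirm that the shared libraries ended up where libMesh will look for them:

ls $PETSC_DIR/lib/$PETSC_ARCH    # should list libpetsc.so, libpetscksp.so, etc.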

libMesh Installation

Download the latest CVS version of libMesh

cvs -d:pserver:anonymous@libmesh.cvs.sourceforge.net:/cvsroot/libmesh co libmesh 

gcc with debugging

cd /my/libmesh/dir
export PETSC_DIR=/my/petsc/dir 
export PETSC_ARCH=linux_gcc_dbg
export METHOD=dbg
make

gcc optimized

cd /my/libmesh/dir
export PETSC_DIR=/my/petsc/dir 
export PETSC_ARCH=linux_gcc_opt
export METHOD=opt
make
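To verify the optimized build, you can build and run one of the bundled examples; a hedged sketch (the example number is arbitrary, and the -opt suffix follows the naming used elsewhere on this page):

cd examples/ex4
make
mpirun -np 2 ./ex4-opt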

--Benkirk 07:41, 24 February 2006 (PST)

libmesh from cvs, Ubuntu Linux (Debian)

This should work for any Debian-based distro, but was specifically tested on Ubuntu Dapper.

To obtain the libraries from the repositories, add the universe repository to /etc/apt/sources.list. (Substitute your own release for dapper below: breezy was the stable release before April 2006, dapper after its release in April 2006.)

deb http://us.archive.ubuntu.com/ubuntu dapper universe
deb-src http://us.archive.ubuntu.com/ubuntu dapper universe

Now install PETSc (libraries and headers). This will also install g77, BLAS/LAPACK/ATLAS, and MPICH.

apt-get install libpetsc2.3.0-dev

Then set the environment variables (bash syntax on the left, csh syntax on the right):

export PETSC_DIR=/usr/lib/petsc        setenv PETSC_DIR /usr/lib/petsc
export PETSC_ARCH=linux-gnu            setenv PETSC_ARCH linux-gnu

Next, switch to the libmesh directory, configure, and build:

cd /my/path/to/libmesh
./configure
make

For me, everything works fine now.
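As an optional sanity check (a hedged suggestion; this is the same target used in the next section), you can build and run the bundled examples:

make run_examples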

libmesh from cvs, petsc 2.3.0, slepc 2.3.0 with MPI enabled, debian

This works on Debian sid, but I believe it would work similarly on other distributions. You need the mpich-bin and libmpich1.0-dev packages. Tested on i386 and amd64 architectures. Use the gcc compilers. (Last tested on Jan 20, 2006.)

Note (M. Truffer): The following instructions worked very well for me, except that the examples will not link without installing zlib1g-dev. I am running Ubuntu on an AMD64 processor.

cvs -d:pserver:anonymous@libmesh.cvs.sourceforge.net:/cvsroot/libmesh co libmesh
cd libmesh

cd contrib
wget ftp://ftp.mcs.anl.gov/pub/petsc/petsc-lite.tar.gz
tar xzf petsc-lite.tar.gz
cd petsc-2.3.0
export PETSC_ARCH=linux
export PETSC_DIR=`pwd`
config/configure.py --with-cc=gcc --with-cxx=g++ --with-fc=g77 --with-mpi-dir=/usr/lib/mpich/ \
  --with-debugging=1 --with-shared
make
cd ..

wget http://www.grycap.upv.es/slepc/download/distrib/slepc.tgz
tar xzf slepc.tgz
cd slepc-2.3.0
export SLEPC_DIR=`pwd`
config/configure.py
make
cd ../..

./configure --enable-slepc
make
make run_examples

All examples should run after executing 'make run_examples' except ex15, which requires the library to be compiled with second derivative support (we did not pass --enable-second to configure).
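If you also want ex15, a hedged sketch of reconfiguring with second derivative support and rebuilding:

./configure --enable-slepc --enable-second
make
make run_examples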

Compilation times (minutes:seconds) on an AMD Athlon 64 3800+ processor, for reference:

petsc configure 0:20
petsc make 2:48
slepc configure 0:04
slepc make 0:16
libmesh configure 0:07
libmesh make 5:33

libmesh from cvs, petsc 2.3.0, slepc 2.3.0 with MPI disabled, debian

The procedure is basically the same; configure PETSc with

config/configure.py --with-mpi=0 --with-debugging=1 --with-shared

But libMesh is unfortunately not prepared for this yet; you will get compilation errors. From one of libMesh's developers (http://sourceforge.net/mailarchive/forum.php?thread_id=9360953&forum_id=35501):

"LibMesh will try to get its MPI configuration from PETSc if you have petsc. We need a way of disabling MPI in libmesh if it is disabled in PETSc..."

libmesh Installation on Debian Lenny/Testing

Summary: Things have changed in Lenny compared to Etch; most notably, the open source MPI implementation OpenMPI is now used by default for PETSc instead of MPICH. If you recently upgraded from Etch (rather than doing a fresh Lenny install), make sure to purge all remnants of MPICH and LAM using "aptitude purge XXX"; this will save you hours of headache. So, assuming your machine is a clean slate:
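A hedged way to find the leftover packages before purging them (package names vary between releases, so list them first rather than guessing):

dpkg -l | grep -Ei 'mpich|lam'    # list any remaining MPICH/LAM packages
# then, for each leftover package listed:
# aptitude purge <packagename>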

1) I used the latest version of LibMesh (0.6.2).

2) Installing the package petsc-dev takes care of installing OpenMPI and a slew of other dependencies for you. I also had to install zlib1g, or there would be a linking error:

aptitude install petsc-dev zlib1g

You'll have to figure out how to configure OpenMPI for multiple machines, but by default it works fine on a multi-core machine; this is an improvement over Etch with MPICH!
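For what it's worth, a hedged sketch of running across several machines with OpenMPI (the hostnames and slot counts below are made up; adjust them to your own network):

cat > ~/hostfile <<EOF
node1 slots=4
node2 slots=4
EOF
mpiexec --hostfile ~/hostfile -np 8 ./ex9-opt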

3) Unpack the LibMesh tarball and do the following (depending on your shell):

export PETSC_ARCH="linux-gnu-c-opt"
export PETSC_DIR="/usr/lib/petscdir/2.3.3/"
./configure 

The whole mess you have to go through with Etch to point to the right MPI compiler wrappers has been resolved in Lenny; nonetheless, check the configure output carefully to make sure everything went well. Lenny is a moving target, so things might change.

4) Compile and test (if you have multiple processors, use the -j option to tell make):

make -j 8
cd examples/ex9
mpiexec -np 8 ./ex9-opt

Note that with OpenMPI you use "mpiexec" instead of "mpirun".

--Nasser Mohieddin Abukhdeir 12AUG2008

libmesh Installation on Debian Etch/Stable

1) I used the latest version of LibMesh (0.6.2).

2) Installing the package petsc-dev takes care of installing MPICH for you:

aptitude install petsc-dev

Edit the file /etc/mpich/machines.linux to describe your network topology. I just want to use my SMP machine, so I added this line:

localhost:8

where the ":8" reflects the number of processors my system has.

3) Unfortunately it does not install rsh, which is required by MPICH:

aptitude install rsh-server rsh-client

4) Create a file ~/.rhosts and add a line:

localhost USERNAME

where you substitute in your own user name. You also might have to add "localhost" to your /etc/hosts.allow file for this to work. At this point you should be able to run "tstmachines.mpich" to verify that MPI is working.
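A hedged sketch of the hosts.allow addition and the MPI check mentioned above (your TCP wrappers policy may need to be stricter than this):

echo "ALL: localhost" >> /etc/hosts.allow    # as root; allows local rsh connections
tstmachines.mpich                            # should report that the hosts in machines.linux are reachable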

5) Unpack the LibMesh tarball and do the following (depending on your shell):

export PETSC_ARCH="linux-gnu-c-opt"
export PETSC_DIR="/usr/lib/petscdir/2.3.2/"
./configure --with-mpi=/usr/lib/mpich --with-cxx=mpicxx.mpich \
    --with-cc=mpicc.mpich --with-f77=mpif77.mpich --with-CC=mpiCC.mpich

This took me a while to figure out: you need to make sure LibMesh knows you are using MPICH, because without the first option it will be configured for LAM-MPI. The other options are intuitive, except that you need to use the ".mpich" suffix on all of the compilers so that you do not end up using the LAM wrapper versions.
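To double-check that the MPICH wrappers (and not the LAM ones) are being picked up, a hedged sketch using the wrappers' -show option, which prints the underlying compiler command line:

mpicxx.mpich -show    # prints the g++ command the MPICH wrapper will run
which mpicxx          # shows whether the bare name resolves to the LAM or the MPICH wrapper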

6) Compile and test (if you have multiple processors, use the -j option to tell make):

make -j 8
cd examples/ex9
mpirun -np 8 ./ex9-opt

--Nasser Mohieddin Abukhdeir

"long long" Compilation errors with MPICH

In MPICH version 1.2.5 and later (and possibly earlier) the mpio.h file declares the MPI_Offset type as long long. This is apparently not ISO C++ and so the GCC compiler rejects it based on the flags we use to compile libmesh. You will see an error message similar to this:

"/usr/local/mpich-1.2.7/include/mpio.h:42: error: ISO C++ does not support `long long'"

The fix we have employed in the past is to directly edit the mpio.h file at line 42, changing MPI_Offset to simply a long. We realize that not everyone may have write access to the mpich headers and so a better fix is still needed. We are also aware of the fact that changing this type may create problems with the code, but have not noticed any yet.
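A hedged sketch of that edit (back up the header first, and check the file before running this, since the exact typedef text differs between MPICH releases):

cp /usr/local/mpich-1.2.7/include/mpio.h /usr/local/mpich-1.2.7/include/mpio.h.orig
sed -i 's/long long MPI_Offset/long MPI_Offset/' /usr/local/mpich-1.2.7/include/mpio.h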

Note: You won't get this error if you use the installation instructions above (at least on debian with mpich 1.2.7). (O. Certik)

Note: The error message can be suppressed with the -Wno-long-long compiler switch for the GNU compiler. Add this switch to the debug-mode section in Make.common. In my installation this looks like

ifeq ($(debug-mode),on)
  CXXFLAGS += -DDEBUG  [cut more options] -Wno-long-long
  CFLAGS   += -DDEBUG -g -fPIC
endif


libmesh from cvs on lonestar.tacc.utexas.edu

lonestar.tacc.utexas.edu is TACC's latest em64t cluster. The following instructions show how to build libMesh for this machine. These instructions assume you use bash as your shell.

cvs -d:pserver:anonymous@libmesh.cvs.sourceforge.net:/cvsroot/libmesh co libmesh 
cd libmesh
export PETSC_DIR=/home/utexas/ti/benkirk/petsc/2.3.2
export PETSC_ARCH=em64t
export CXXFLAGS=-DMPICH_SKIP_MPICXX
unset INCLUDE
CXX=mpiCC CC=mpicc F77=mpif77 ./configure
make

You may ignore any warnings related to overriding -O3 with -O2. This occurs because mpiCC specifies -O3, and the libMesh compiler flags, which specify -O2, are appended after it.

To test your installation and your ability to run in parallel, you can try one of the examples:

cd examples/ex4
make
bsub -I -n 4 -W 0:05 -q development ibrun "./ex4-opt -d 3 -n 20 -log_summary"


--Benkirk Sat Jul 7 21:10:08 CDT 2007



libmesh from svn on ranger.tacc.utexas.edu

Ranger is TACC's latest supercomputer. Check out the Ranger user guide for more information. The software on Ranger is in a relatively rapid state of flux. These instructions may become outdated or superseded by new information in a relatively short amount of time... We may eventually be able to build a libmesh module on ranger that would simplify this process.

First: unload/load appropriate system software modules.

module unload pgi
module unload mvapich2

module load intel    # Intel 10.1 at the time of this writing
module load mvapich  
module load petsc
module load slepc    # optional module


Next, pull down libmesh from the sourceforge site and configure it

svn checkout https://libmesh.svn.sourceforge.net/svnroot/libmesh/trunk/libmesh
cd libmesh
CXX=mpicxx CC=mpicc F77=mpif77 ./configure --enable-everything \
                                           --enable-second     \
                                           --disable-perflog   \
                                           --disable-bzip2     \
                                           --disable-vsmoother

Intel 10.1 has a difficult time with the variational smoother code for some reason. Here, we've decided to simply disable it to speed up compiling. You'll need to reconfigure and recompile later if you decide you need the variational smoother capability (see the sketch after the build step below).


Finally, build!

nice make -j 4
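If you later decide you need the variational smoother after all, a hedged sketch of re-enabling it is to reconfigure without the --disable-vsmoother flag and rebuild:

CXX=mpicxx CC=mpicc F77=mpif77 ./configure --enable-everything \
                                           --enable-second     \
                                           --disable-perflog   \
                                           --disable-bzip2
nice make -j 4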

There is a known issue with ICC which may prevent the utility programs in src/apps from building correctly. You can either ignore the error message about not being able to find '-L/some/random/dir' or change the "bin/%" Makefile rule from

bin/% : src/apps/%.cc $(mesh_library)
	@echo "Building $@"
	@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(libmesh_INCLUDE) $< -o $@ $(libmesh_LIBS) $(libmesh_LDFLAGS) $(libmesh_DLFLAGS)

to a two-part rule like:

bin/% : src/apps/%.cc $(mesh_library)
	@echo "Building $@"
	@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(libmesh_INCLUDE) -c $< -o $(patsubst %.cc,%.o,$<)
	@$(libmesh_CXX) $(libmesh_CXXFLAGS) $(patsubst %.cc,%.o,$<) -o $@ $(libmesh_LIBS) $(libmesh_DLFLAGS) $(libmesh_LDFLAGS)


Coming soon: A sample job submission script/command and output.

--Jwpeterson 12:25, 27 May 2008 (PDT)
