# Example 3 - Solving a Poisson Problem

This is the third example program. It builds on the second example program by showing how to solve a simple Poisson system. This example also introduces the notion of customized matrix assembly functions, working with an exact solution, and using element iterators. We will not comment on things that were already explained in the second example.

C++ include files that we need
        #include <iostream>
#include <algorithm>
#include <math.h>


Basic include files needed for the mesh functionality.
        #include "libmesh.h"
#include "mesh.h"
#include "mesh_generation.h"
#include "exodusII_io.h"
#include "linear_implicit_system.h"
#include "equation_systems.h"


Define the Finite Element object.
        #include "fe.h"


Define Gauss quadrature rules.
        #include "quadrature_gauss.h"


Define useful datatypes for finite element matrix and vector components.
        #include "sparse_matrix.h"
#include "numeric_vector.h"
#include "dense_matrix.h"
#include "dense_vector.h"
#include "elem.h"


Define the DofMap, which handles degree of freedom indexing.
        #include "dof_map.h"


Bring in everything from the libMesh namespace
        using namespace libMesh;


Function prototype. This is the function that will assemble the linear system for our Poisson problem. Note that the function will take the EquationSystems object and the name of the system we are assembling as input. From the EquationSystems object we have access to the Mesh and other objects we might need.
        void assemble_poisson(EquationSystems& es,
const std::string& system_name);


Function prototype for the exact solution.
        Real exact_solution (const Real x,
const Real y,
const Real z = 0.);
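The body of exact_solution() lives in a separate source file and is not shown here. As a purely hypothetical stand-in (the real example may use a different function), any smooth function of (x, y) would do, because the assembly routine below derives its forcing term from it by finite differences:

```cpp
#include <cmath>

// Hypothetical stand-in for exact_solution() -- illustration only, not
// the function shipped with the example. Any smooth function works,
// since assemble_poisson() computes the forcing term from it with a
// finite-difference Laplacian.
double exact_solution (const double x,
                       const double y,
                       const double z = 0.)
{
  (void)z; // unused in this 2D illustration
  const double pi = std::acos(-1.);
  return std::sin(pi*x) * std::sin(pi*y);
}
```

Note this particular choice vanishes on the boundary of [-1,1]^2, so the Dirichlet data it supplies would be homogeneous.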

int main (int argc, char** argv)
{

Initialize libraries, like in example 2.
          LibMeshInit init (argc, argv);


Brief message to the user regarding the program name and command line arguments.
          std::cout << "Running " << argv[0];

for (int i=1; i<argc; i++)
std::cout << " " << argv[i];

std::cout << std::endl << std::endl;


Skip this 2D example if libMesh was compiled as 1D-only.
          libmesh_example_assert(2 <= LIBMESH_DIM, "2D support");
Mesh mesh;


Use the MeshTools::Generation mesh generator to create a uniform 2D grid on the square [-1,1]^2. We instruct the mesh generator to build a mesh of 15x15 QUAD9 elements. Building QUAD9 elements instead of the default QUAD4's we used in example 2 allows us to use higher-order approximation.
          MeshTools::Generation::build_square (mesh,
15, 15,
-1., 1.,
-1., 1.,
QUAD9);


Print information about the mesh to the screen. Note that a 15x15 grid of QUAD9 elements actually has 31x31 = 961 nodes, so this mesh is significantly larger than the one in example 2.
          mesh.print_info();


Create an equation systems object.
          EquationSystems equation_systems (mesh);


Declare the Poisson system and its variables. The Poisson system is another example of a steady system.
          equation_systems.add_system<LinearImplicitSystem> ("Poisson");


Adds the variable "u" to "Poisson". "u" will be approximated using second-order approximation.
          equation_systems.get_system("Poisson").add_variable("u", SECOND);


Give the system a pointer to the matrix assembly function. This will be called when needed by the library.
          equation_systems.get_system("Poisson").attach_assemble_function (assemble_poisson);


Initialize the data structures for the equation system.
          equation_systems.init();


Prints information about the system to the screen.
          equation_systems.print_info();


Solve the system "Poisson". Note that calling this member will assemble the linear system and invoke the default numerical solver. With PETSc the solver can be controlled from the command line. For example, you can invoke conjugate gradient with:

./ex3 -ksp_type cg

You can also get a nice X-window that monitors the solver convergence with:

./ex3 -ksp_xmonitor

if you linked against the appropriate X libraries when you built PETSc.
          equation_systems.get_system("Poisson").solve();

#ifdef LIBMESH_HAVE_EXODUS_API

After solving the system, write the solution to an ExodusII-formatted plot file.
          ExodusII_IO (mesh).write_equation_systems ("out.exd", equation_systems);
#endif // #ifdef LIBMESH_HAVE_EXODUS_API


All done.
          return 0;
}


We now define the matrix assembly function for the Poisson system. We need to first compute element matrices and right-hand sides, and then take into account the boundary conditions, which will be handled via a penalty method.
        void assemble_poisson(EquationSystems& es,
const std::string& system_name)
{


It is a good idea to make sure we are assembling the proper system.
          libmesh_assert (system_name == "Poisson");


Get a constant reference to the mesh object.
          const MeshBase& mesh = es.get_mesh();


The dimension of the mesh we are running on.
          const unsigned int dim = mesh.mesh_dimension();


Get a reference to the LinearImplicitSystem we are solving
          LinearImplicitSystem& system = es.get_system<LinearImplicitSystem> ("Poisson");


A reference to the DofMap object for this system. The DofMap object handles the index translation from node and element numbers to degree of freedom numbers. We will talk more about the DofMap in future examples.
          const DofMap& dof_map = system.get_dof_map();


Get a constant reference to the Finite Element type for the first (and only) variable in the system.
          FEType fe_type = dof_map.variable_type(0);


Build a Finite Element object of the specified type. Since the FEBase::build() member dynamically allocates memory we will store the object as an AutoPtr. This can be thought of as a pointer that will clean up after itself. Example 4 describes some advantages of AutoPtr's in the context of quadrature rules.
          AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));


A 5th order Gauss quadrature rule for numerical integration.
          QGauss qrule (dim, FIFTH);


Tell the finite element object to use our quadrature rule.
          fe->attach_quadrature_rule (&qrule);
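As a quick aside, "fifth order" means the rule integrates polynomials up to degree five exactly; in 1D that takes only three Gauss points. A minimal standalone check, with hard-coded Gauss-Legendre nodes (independent of libMesh):

```cpp
#include <cmath>

// Three-point Gauss-Legendre rule on [-1,1]: exact for polynomials up
// to degree 5. Nodes are 0 and +/-sqrt(3/5); weights 8/9 and 5/9.
double gauss3 (double (*f)(double))
{
  const double x = std::sqrt(3./5.);
  return 5./9.*f(-x) + 8./9.*f(0.) + 5./9.*f(x);
}

// A degree-4 test integrand; its exact integral over [-1,1] is 2/5.
double quartic (double t) { return t*t*t*t; }
```

Three points suffice here because an n-point Gauss rule is exact through degree 2n-1.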


Declare a special finite element object for boundary integration.
          AutoPtr<FEBase> fe_face (FEBase::build(dim, fe_type));


Boundary integration requires a quadrature rule with dimensionality one less than the dimensionality of the element.
          QGauss qface(dim-1, FIFTH);


Tell the finite element object to use our quadrature rule.
          fe_face->attach_quadrature_rule (&qface);


Here we define some references to cell-specific data that will be used to assemble the linear system.

The element Jacobian * quadrature weight at each integration point.
          const std::vector<Real>& JxW = fe->get_JxW();


The physical XY locations of the quadrature points on the element. These might be useful for evaluating spatially varying material properties at the quadrature points.
          const std::vector<Point>& q_point = fe->get_xyz();


The element shape functions evaluated at the quadrature points.
          const std::vector<std::vector<Real> >& phi = fe->get_phi();


The element shape function gradients evaluated at the quadrature points.
          const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi();


Define data structures to contain the element matrix and right-hand-side vector contribution. Following basic finite element terminology we will denote these "Ke" and "Fe". These datatypes are templated on Number, which allows the same code to work for real or complex numbers.
          DenseMatrix<Number> Ke;
DenseVector<Number> Fe;


This vector will hold the degree of freedom indices for the element. These define where in the global system the element degrees of freedom get mapped.
          std::vector<unsigned int> dof_indices;
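To make the index translation concrete, here is a toy (dense, non-libMesh) version of the scatter that SparseMatrix::add_matrix() performs later with these indices; the names are illustrative only:

```cpp
#include <vector>

// Toy local-to-global scatter: add the entries of a dense element
// matrix Ke into a global matrix K (dense here for simplicity) at the
// rows and columns named by the element's dof_indices.
void scatter_add (std::vector<std::vector<double> >& K,
                  const std::vector<std::vector<double> >& Ke,
                  const std::vector<unsigned int>& dofs)
{
  for (unsigned int i=0; i<dofs.size(); i++)
    for (unsigned int j=0; j<dofs.size(); j++)
      K[dofs[i]][dofs[j]] += Ke[i][j];
}
```

Entries from neighboring elements that map to the same global degree of freedom simply accumulate, which is exactly how the global system is built.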


Now we will loop over all the elements in the mesh. We will compute the element matrix and right-hand-side contribution.

Element iterators are a nice way to iterate through all the elements, or all the elements that have some property. The iterator el will iterate from the first to the last element on the local processor. The iterator end_el tells us when to stop. It is smart to make this one const so that we don't accidentally mess it up! In case users later modify this program to include refinement, we will be safe and will only consider the active elements; hence we use the active local element iterators.
          MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();


Loop over the elements. Note that ++el is preferred to el++ since the latter requires an unnecessary temporary object.
          for ( ; el != end_el ; ++el)
{

Store a pointer to the element we are currently working on. This allows for nicer syntax later.
              const Elem* elem = *el;


Get the degree of freedom indices for the current element. These define where in the global matrix and right-hand-side this element will contribute to.
              dof_map.dof_indices (elem, dof_indices);


Compute the element-specific data for the current element. This involves computing the location of the quadrature points (q_point) and the shape functions (phi, dphi) for the current element.
              fe->reinit (elem);


Zero the element matrix and right-hand side before summing them. We use the resize member here because the number of degrees of freedom might have changed from the last element. Note that this will be the case if the element type is different (i.e. the last element was a triangle, now we are on a quadrilateral).

The DenseMatrix::resize() and the DenseVector::resize() members will automatically zero out the matrix and vector.
              Ke.resize (dof_indices.size(),
dof_indices.size());

Fe.resize (dof_indices.size());


Now loop over the quadrature points. This handles the numeric integration.
              for (unsigned int qp=0; qp<qrule.n_points(); qp++)
{


Now we will build the element matrix. This involves a double loop to integrate the test functions (i) against the trial functions (j).
                  for (unsigned int i=0; i<phi.size(); i++)
for (unsigned int j=0; j<phi.size(); j++)
{
Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp]);
}
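For intuition, the same JxW*(dphi_i . dphi_j) accumulation on a 1D linear element of length h (a non-libMesh sketch, with the constant shape-function gradients -1/h and +1/h) produces the familiar element stiffness matrix (1/h)*[[1,-1],[-1,1]]:

```cpp
// 1D analogue of the stiffness accumulation above: linear element of
// length h, constant gradients dphi = {-1/h, +1/h}, and a single
// quadrature point with JxW = h (weight times element length).
void element_stiffness_1d (double h, double Ke[2][2])
{
  const double dphi[2] = { -1./h, 1./h };
  const double JxW = h;
  for (int i=0; i<2; i++)
    for (int j=0; j<2; j++)
      Ke[i][j] = JxW * dphi[i] * dphi[j];
}
```

One point suffices in this sketch only because the gradients are constant; the QUAD9 elements above need the full fifth-order rule.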


This is the end of the matrix summation loop. Now we build the element right-hand-side contribution. This involves a single loop in which we integrate the "forcing function" in the PDE against the test functions.
                  {
const Real x = q_point[qp](0);
const Real y = q_point[qp](1);
const Real eps = 1.e-3;


"fxy" is the forcing function for the Poisson equation. In this case we set fxy to be a finite difference Laplacian approximation to the (known) exact solution.

We will use the second-order accurate FD Laplacian approximation, which in 2D is

u_xx + u_yy = (u(i,j-1) + u(i,j+1) + u(i-1,j) + u(i+1,j) - 4*u(i,j))/h^2

Since the value of the forcing function depends only on the location of the quadrature point (q_point[qp]) we will compute it here, outside of the i-loop
                    const Real fxy = -(exact_solution(x,y-eps) +
exact_solution(x,y+eps) +
exact_solution(x-eps,y) +
exact_solution(x+eps,y) -
4.*exact_solution(x,y))/eps/eps;

for (unsigned int i=0; i<phi.size(); i++)
Fe(i) += JxW[qp]*fxy*phi[i][qp];
}
}


We have now reached the end of the RHS summation and the end of the quadrature point loop, so the interior element integration has been completed. However, we have not yet addressed boundary conditions. For this example we will only consider simple Dirichlet boundary conditions.

There are several ways Dirichlet boundary conditions can be imposed. A simple approach, which works for interpolatory bases like the standard Lagrange polynomials, is to assign function values directly to the degrees of freedom living on the domain boundary. This works well for interpolatory bases, but is more difficult when non-interpolatory (e.g. Legendre or Hierarchic) bases are used.

Dirichlet boundary conditions can also be imposed with a "penalty" method. In this case essentially the L2 projection of the boundary values is added to the matrix. The projection is multiplied by some large factor so that, in floating point arithmetic, the existing (smaller) entries in the matrix and right-hand-side are effectively ignored.

This amounts to adding terms of the form (in latex notation)

\frac{1}{\epsilon} \int_{\partial \Omega} \phi_i \phi_j to the matrix, and \frac{1}{\epsilon} \int_{\partial \Omega} u \phi_i to the right-hand side,

where

\frac{1}{\epsilon} is the penalty parameter, defined such that \epsilon \ll 1, and u is the prescribed boundary value.
              {


The following loop is over the sides of the element. If the element has no neighbor on a side then that side MUST live on a boundary of the domain.
                for (unsigned int side=0; side<elem->n_sides(); side++)
if (elem->neighbor(side) == NULL)
{

The value of the shape functions at the quadrature points.
                      const std::vector<std::vector<Real> >&  phi_face = fe_face->get_phi();


The Jacobian * Quadrature Weight at the quadrature points on the face.
                      const std::vector<Real>& JxW_face = fe_face->get_JxW();


The XYZ locations (in physical space) of the quadrature points on the face. This is where we will interpolate the boundary value function.
                      const std::vector<Point >& qface_point = fe_face->get_xyz();


Compute the shape function values on the element face.
                      fe_face->reinit(elem, side);


Loop over the face quadrature points for integration.
                      for (unsigned int qp=0; qp<qface.n_points(); qp++)
{


The location on the boundary of the current face quadrature point.
                          const Real xf = qface_point[qp](0);
const Real yf = qface_point[qp](1);


The penalty value. \frac{1}{\epsilon} in the discussion above.
                          const Real penalty = 1.e10;


The boundary value.
                          const Real value = exact_solution(xf, yf);


Matrix contribution of the L2 projection.
                          for (unsigned int i=0; i<phi_face.size(); i++)
for (unsigned int j=0; j<phi_face.size(); j++)
Ke(i,j) += JxW_face[qp]*penalty*phi_face[i][qp]*phi_face[j][qp];


Right-hand-side contribution of the L2 projection.
                          for (unsigned int i=0; i<phi_face.size(); i++)
Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
}
}
}


We have now finished the quadrature point loop, and have therefore applied all the boundary conditions.

If this assembly program were to be used on an adaptive mesh, we would have to apply any hanging node constraint equations here.
              dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);


The element matrix and right-hand-side are now built for this element. Add them to the global matrix and right-hand-side vector. The SparseMatrix::add_matrix() and NumericVector::add_vector() members do this for us.
              system.matrix->add_matrix (Ke, dof_indices);
system.rhs->add_vector    (Fe, dof_indices);
}


All done!
        }


# The program without comments:



#include <iostream>
#include <algorithm>
#include <math.h>

#include "libmesh.h"
#include "mesh.h"
#include "mesh_generation.h"
#include "exodusII_io.h"
#include "linear_implicit_system.h"
#include "equation_systems.h"

#include "fe.h"

#include "quadrature_gauss.h"

#include "sparse_matrix.h"
#include "numeric_vector.h"
#include "dense_matrix.h"
#include "dense_vector.h"
#include "elem.h"

#include "dof_map.h"

using namespace libMesh;

void assemble_poisson(EquationSystems& es,
const std::string& system_name);

Real exact_solution (const Real x,
const Real y,
const Real z = 0.);

int main (int argc, char** argv)
{
LibMeshInit init (argc, argv);

std::cout << "Running " << argv[0];

for (int i=1; i<argc; i++)
std::cout << " " << argv[i];

std::cout << std::endl << std::endl;

libmesh_example_assert(2 <= LIBMESH_DIM, "2D support");
Mesh mesh;

MeshTools::Generation::build_square (mesh,
15, 15,
-1., 1.,
-1., 1.,
QUAD9);

mesh.print_info();

EquationSystems equation_systems (mesh);

equation_systems.add_system<LinearImplicitSystem> ("Poisson");

equation_systems.get_system("Poisson").add_variable("u", SECOND);

equation_systems.get_system("Poisson").attach_assemble_function (assemble_poisson);

equation_systems.init();

equation_systems.print_info();

equation_systems.get_system("Poisson").solve();

#ifdef LIBMESH_HAVE_EXODUS_API
ExodusII_IO (mesh).write_equation_systems ("out.exd", equation_systems);
#endif // #ifdef LIBMESH_HAVE_EXODUS_API

return 0;
}

void assemble_poisson(EquationSystems& es,
const std::string& system_name)
{

libmesh_assert (system_name == "Poisson");

const MeshBase& mesh = es.get_mesh();

const unsigned int dim = mesh.mesh_dimension();

LinearImplicitSystem& system = es.get_system<LinearImplicitSystem> ("Poisson");

const DofMap& dof_map = system.get_dof_map();

FEType fe_type = dof_map.variable_type(0);

AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));

QGauss qrule (dim, FIFTH);

fe->attach_quadrature_rule (&qrule);

AutoPtr<FEBase> fe_face (FEBase::build(dim, fe_type));

QGauss qface(dim-1, FIFTH);

fe_face->attach_quadrature_rule (&qface);

const std::vector<Real>& JxW = fe->get_JxW();

const std::vector<Point>& q_point = fe->get_xyz();

const std::vector<std::vector<Real> >& phi = fe->get_phi();

const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi();

DenseMatrix<Number> Ke;
DenseVector<Number> Fe;

std::vector<unsigned int> dof_indices;

MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

for ( ; el != end_el ; ++el)
{
const Elem* elem = *el;

dof_map.dof_indices (elem, dof_indices);

fe->reinit (elem);

Ke.resize (dof_indices.size(),
dof_indices.size());

Fe.resize (dof_indices.size());

for (unsigned int qp=0; qp<qrule.n_points(); qp++)
{

for (unsigned int i=0; i<phi.size(); i++)
for (unsigned int j=0; j<phi.size(); j++)
{
Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp]);
}

{
const Real x = q_point[qp](0);
const Real y = q_point[qp](1);
const Real eps = 1.e-3;

const Real fxy = -(exact_solution(x,y-eps) +
exact_solution(x,y+eps) +
exact_solution(x-eps,y) +
exact_solution(x+eps,y) -
4.*exact_solution(x,y))/eps/eps;

for (unsigned int i=0; i<phi.size(); i++)
Fe(i) += JxW[qp]*fxy*phi[i][qp];
}
}

{

for (unsigned int side=0; side<elem->n_sides(); side++)
if (elem->neighbor(side) == NULL)
{
const std::vector<std::vector<Real> >&  phi_face = fe_face->get_phi();

const std::vector<Real>& JxW_face = fe_face->get_JxW();

const std::vector<Point >& qface_point = fe_face->get_xyz();

fe_face->reinit(elem, side);

for (unsigned int qp=0; qp<qface.n_points(); qp++)
{

const Real xf = qface_point[qp](0);
const Real yf = qface_point[qp](1);

const Real penalty = 1.e10;

const Real value = exact_solution(xf, yf);

for (unsigned int i=0; i<phi_face.size(); i++)
for (unsigned int j=0; j<phi_face.size(); j++)
Ke(i,j) += JxW_face[qp]*penalty*phi_face[i][qp]*phi_face[j][qp];

for (unsigned int i=0; i<phi_face.size(); i++)
Fe(i) += JxW_face[qp]*penalty*value*phi_face[i][qp];
}
}
}

dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);

system.matrix->add_matrix (Ke, dof_indices);
system.rhs->add_vector    (Fe, dof_indices);
}

}


# The console output of the program:

Compiling C++ (in optimized mode) ex3.C...
Linking ex3-opt...
***************************************************************
* Running Example  mpirun -np 2 ./ex3-opt -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4 -sub_pc_factor_zeropivot 0 -ksp_right_pc -log_summary
***************************************************************

Running ./ex3-opt -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4 -sub_pc_factor_zeropivot 0 -ksp_right_pc -log_summary

Mesh Information:
mesh_dimension()=2
spatial_dimension()=3
n_nodes()=961
n_local_nodes()=495
n_elem()=225
n_local_elem()=112
n_active_elem()=225
n_subdomains()=1
n_processors()=2
processor_id()=0

EquationSystems
n_systems()=1
System "Poisson"
Type "LinearImplicit"
Variables="u"
Finite Element Types="LAGRANGE"
Approximation Orders="SECOND"
n_dofs()=961
n_local_dofs()=495
n_constrained_dofs()=0
n_vectors()=1

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex3-opt on a gcc-4.5-l named daedalus with 2 processors, by roystgnr Tue Feb 22 12:24:38 2011
Using Petsc Release Version 3.1.0, Patch 5, Mon Sep 27 11:51:54 CDT 2010

Max       Max/Min        Avg      Total
Time (sec):           1.886e-02      1.06184   1.831e-02
Objects:              4.300e+01      1.00000   4.300e+01
Flops:                2.405e+06      1.05767   2.340e+06  4.679e+06
Flops/sec:            1.281e+08      1.00395   1.278e+08  2.556e+08
MPI Messages:         2.350e+01      1.00000   2.350e+01  4.700e+01
MPI Message Lengths:  1.432e+04      1.01647   6.045e+02  2.841e+04
MPI Reductions:       7.400e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
0:      Main Stage: 1.8277e-02  99.8%  4.6792e+06 100.0%  4.700e+01 100.0%  6.045e+02      100.0%  5.800e+01  78.4%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase         %F - percent flops in this phase
%M - percent messages in this phase     %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecMDot               11 1.0 1.1349e-04 1.6 6.53e+04 1.1 0.0e+00 0.0e+00 1.1e+01  1  3  0  0 15   1  3  0  0 19  1117
VecNorm               13 1.0 8.6069e-05 1.1 1.29e+04 1.1 0.0e+00 0.0e+00 1.3e+01  0  1  0  0 18   0  1  0  0 22   290
VecScale              12 1.0 1.8358e-05 1.0 5.94e+03 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   628
VecCopy                3 1.0 3.0994e-06 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                17 1.0 7.6294e-06 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 3.3660e-03 1.0 1.98e+03 1.1 0.0e+00 0.0e+00 0.0e+00 18  0  0  0  0  18  0  0  0  0     1
VecMAXPY              12 1.0 2.5749e-05 1.3 7.62e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0  5748
VecAssemblyBegin       3 1.0 4.3154e-05 1.0 0.00e+00 0.0 2.0e+00 2.9e+02 9.0e+00  0  0  4  2 12   0  0  4  2 16     0
VecAssemblyEnd         3 1.0 1.1921e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin       14 1.0 3.7432e-05 1.0 0.00e+00 0.0 2.6e+01 4.2e+02 0.0e+00  0  0 55 38  0   0  0 55 38  0     0
VecScatterEnd         14 1.0 2.0695e-0410.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecNormalize          12 1.0 1.1301e-04 1.0 1.78e+04 1.1 0.0e+00 0.0e+00 1.2e+01  1  1  0  0 16   1  1  0  0 21   306
MatMult               12 1.0 3.8266e-04 2.0 1.77e+05 1.1 2.4e+01 4.0e+02 0.0e+00  2  7 51 33  0   2  7 51 33  0   888
MatSolve              12 1.0 4.2725e-04 1.1 8.15e+05 1.1 0.0e+00 0.0e+00 0.0e+00  2 34  0  0  0   2 34  0  0  0  3707
MatLUFactorNum         1 1.0 1.2059e-03 1.0 1.25e+06 1.1 0.0e+00 0.0e+00 0.0e+00  6 52  0  0  0   6 52  0  0  0  2024
MatILUFactorSym        1 1.0 2.7699e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 15  0  0  0  1  15  0  0  0  2     0
MatAssemblyBegin       2 1.0 1.1206e-04 1.1 0.00e+00 0.0 3.0e+00 2.3e+03 4.0e+00  1  0  6 24  5   1  0  6 24  7     0
MatAssemblyEnd         2 1.0 2.5177e-04 1.0 0.00e+00 0.0 4.0e+00 1.0e+02 8.0e+00  1  0  9  1 11   1  0  9  1 14     0
MatGetRowIJ            1 1.0 9.5367e-07 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 3.1948e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  3   0  0  0  0  3     0
MatZeroEntries         3 1.0 3.0041e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog        11 1.0 1.5020e-04 1.4 1.31e+05 1.1 0.0e+00 0.0e+00 1.1e+01  1  5  0  0 15   1  5  0  0 19  1688
KSPSetup               2 1.0 5.0068e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 8.7571e-03 1.0 2.41e+06 1.1 2.4e+01 4.0e+02 2.7e+01 48100 51 33 36  48100 51 33 47   534
PCSetUp                2 1.0 4.2188e-03 1.1 1.25e+06 1.1 0.0e+00 0.0e+00 3.0e+00 22 52  0  0  4  22 52  0  0  5   578
PCSetUpOnBlocks        1 1.0 4.0600e-03 1.1 1.25e+06 1.1 0.0e+00 0.0e+00 3.0e+00 22 52  0  0  4  22 52  0  0  5   601
PCApply               12 1.0 5.2595e-04 1.1 8.15e+05 1.1 0.0e+00 0.0e+00 0.0e+00  3 34  0  0  0   3 34  0  0  0  3012
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

Vec    23             23       102992     0
Vec Scatter     3              3         2604     0
Index Set     8              8        10620     0
IS L to G Mapping     1              1         2648     0
Matrix     4              4       526628     0
Krylov Solver     2              2        18880     0
Preconditioner     2              2         1408     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 1.19209e-06
Average time for zero size MPI_Send(): 4.52995e-06
#PETSc Option Table entries:
-ksp_right_pc
-log_summary
-pc_type bjacobi
-sub_pc_factor_levels 4
-sub_pc_factor_zeropivot 0
-sub_pc_type ilu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Fri Oct 15 13:01:23 2010
Configure options: --with-debugging=false --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-clanguage=C++ --with-shared=1 --with-mpi-dir=/org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid --with-mumps=true --download-mumps=ifneeded --with-parmetis=true --download-parmetis=ifneeded --with-superlu=true --download-superlu=ifneeded --with-superludir=true --download-superlu_dist=ifneeded --with-blacs=true --download-blacs=ifneeded --with-scalapack=true --download-scalapack=ifneeded --with-hypre=true --download-hypre=ifneeded --with-blas-lib="[/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t/libmkl_intel_lp64.so,/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t/libmkl_sequential.so,/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t/libmkl_core.so]" --with-lapack-lib=/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t/libmkl_solver_lp64_sequential.a
-----------------------------------------
Libraries compiled on Fri Oct 15 13:01:23 CDT 2010 on atreides
Machine characteristics: Linux atreides 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 20:05:27 UTC 2010 x86_64 GNU/Linux
Using PETSc directory: /org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5
Using PETSc arch: gcc-4.5-lucid-mpich2-1.2.1-cxx-opt
-----------------------------------------
Using C compiler: /org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -O3   -fPIC
Using Fortran compiler: /org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O3
-----------------------------------------
Using include paths: -I/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/include -I/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/include -I/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/include -I/org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/include
------------------------------------------
Using C linker: /org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -O3
Using Fortran linker: /org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/bin/mpif90 -fPIC -Wall -Wno-unused-variable -O3
Using libraries: -Wl,-rpath,/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/lib -L/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/lib -lpetsc       -lX11 -Wl,-rpath,/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/lib -L/org/centers/pecos/LIBRARIES/PETSC3/petsc-3.1-p5/gcc-4.5-lucid-mpich2-1.2.1-cxx-opt/lib -lHYPRE -lsuperlu_dist_2.4 -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lscalapack -lblacs -lsuperlu_4.0 -Wl,-rpath,/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t -L/org/centers/pecos/LIBRARIES/MKL/mkl-10.0.3.020-gcc-4.5-lucid/lib/em64t -lmkl_solver_lp64_sequential -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lm -Wl,-rpath,/org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/lib -L/org/centers/pecos/LIBRARIES/MPICH2/mpich2-1.2.1-gcc-4.5-lucid/lib -Wl,-rpath,/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib/gcc/x86_64-unknown-linux-gnu/4.5.1 -L/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib/gcc/x86_64-unknown-linux-gnu/4.5.1 -Wl,-rpath,/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib64 -L/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib64 -Wl,-rpath,/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib -L/org/centers/pecos/LIBRARIES/GCC/gcc-4.5.1-lucid/lib -ldl -lmpich -lopa -lpthread -lrt -lgcc_s -lmpichf90 -lgfortran -lm -lm -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lpthread -lrt -lgcc_s -ldl
------------------------------------------

***************************************************************
* Done Running Example  mpirun -np 2 ./ex3-opt -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4 -sub_pc_factor_zeropivot 0 -ksp_right_pc -log_summary
***************************************************************

