Tuesday 24 June 2014

A quick note on compilers

crayc++ and icpc vs g++

I was just on ARCHER again this afternoon, and I thought I’d
try recompiling dolfin with the Intel or Cray compilers.
Unfortunately, neither of them is very C++11 compliant.
So, for icpc, there are lots of complaints about std::unique_ptr etc.

rc/dolfin-1.4.0/dolfin/adaptivity/ErrorControl.cpp(479): error: namespace "std" has no member "unique_ptr"
      std::unique_ptr<DirichletBC> e_bc;
           ^

Well, this has now been fixed in icpc 14.0.2, apparently, so I’ll just have to
wait for that to appear on ARCHER, I guess. It is already on Darwin, so we can test Intel there…

crayc++ is considerably further behind the curve…

CC-135 crayc++: ERROR File = rc/dolfin-1.4.0/dolfin/common/NoDeleter.h, Line = 42
  The namespace "std" has no member "shared_ptr".

    std::shared_ptr<T> reference_to_no_delete_pointer(T& r)
         ^

Someone from Cray once told me not to bother with crayc++;
it is always playing catch-up…
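
A quick way to check this sort of thing before sinking time into a full build is to feed
the loaded compiler a tiny C++11 test. This is just a sketch: CC is the Cray compile wrapper,
and the -std flag spelling below is the g++/icpc one, which the native Cray compiler may
spell differently.

# Minimal C++11 smoke test: will the current compiler accept the smart
# pointers dolfin uses? (Sketch only; swap CC for icpc or g++ as needed.)
cat > cxx11_check.cpp <<'EOF'
#include <memory>

int main()
{
  std::unique_ptr<int> u(new int(1));
  std::shared_ptr<int> s = std::make_shared<int>(2);
  return (*u + *s == 3) ? 0 : 1;   // exit 0 if everything behaves
}
EOF
CC -std=c++11 cxx11_check.cpp -o cxx11_check && ./cxx11_check \
  && echo "C++11 smart pointers look OK"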


FEniCS 1.4.0 on ARCHER - Part 2

Running a job with FEniCS on ARCHER

Yesterday, I compiled FEniCS 1.4.0 on ARCHER, and it built
without a problem once I’d set up the PETSc library and
disabled Trilinos. The configuration looks like this:

-- The following optional packages were found:
-- -------------------------------------------
-- (OK) OPENMP
-- (OK) MPI
-- (OK) PETSC
-- (OK) SCOTCH
-- (OK) PARMETIS
-- (OK) ZLIB
-- (OK) PYTHON
-- (OK) HDF5
-- (OK) QT
-- 
-- The following optional packages were not found:
-- -----------------------------------------------
-- (**) PETSC4PY
-- (**) SLEPC
-- (**) TRILINOS
-- (**) UMFPACK
-- (**) CHOLMOD
-- (**) PASTIX
-- (**) CGAL
-- (**) SPHINX
-- (**) VTK
-- 

I should probably add SLEPc and petsc4py support, but that can wait until I’ve done some initial testing.
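
When I do get round to it, the likely route (untested on ARCHER, so very much a sketch, with
placeholder paths and versions) is to build petsc4py against the same Cray PETSc and point
the dolfin configure at a SLEPc install via SLEPC_DIR:

# Untested sketch. petsc4py builds against whatever PETSC_DIR points to;
# the SLEPc location and the petsc4py version here are placeholders.
export PETSC_DIR=$CRAY_PETSC_PREFIX_DIR
cd petsc4py-3.4                  # tarball fetched separately; version assumed
python setup.py install --prefix=/work/e319/shared/packages/fenics-1.4.0

export SLEPC_DIR=/work/e319/shared/packages/slepc-3.4   # a SLEPc built against the same PETSc; path assumed
# ...then re-run the dolfin cmake so it picks up SLEPC and PETSC4PY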

> module load fenics/1.4.0
> aprun -n 12 python demo_poisson.py 
Number of global vertices: 9261
Number of global cells: 48000
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling DOLFIN just-in-time (JIT) compiler, this may take some time.
Calling DOLFIN just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.
Solving linear variational problem.

Summary of timings                                |  Average time  Total time  Reps
-----------------------------------------------------------------------------------
Apply (PETScMatrix)                               |    0.00047588  0.00095177     2
Apply (PETScVector)                               |    0.00021639   0.0010819     5
Assemble cells                                    |     0.0019845    0.003969     2
Assemble exterior facets                          |    0.00033498  0.00033498     1
Build mesh number mesh entities                   |    4.0531e-06  4.0531e-06     1
Build sparsity                                    |      0.021424    0.042848     2
Compute local dual graph                          |       0.01228     0.01228     1
Compute non-local dual graph                      |     0.0087841   0.0087841     1
Delete sparsity                                   |    2.1458e-06  4.2915e-06     2
DirichletBC apply                                 |      0.002965    0.002965     1
DirichletBC compute bc                            |     0.0021579   0.0021579     1
DirichletBC init facets                           |     0.0020189   0.0020189     1
Generate Box mesh                                 |       0.78396     0.78396     1
HDF5: reorder vertex values                       |     0.0002284  0.00045681     2
HDF5: write mesh to file                          |       0.13826     0.13826     1
Init MPI                                          |     0.0020611   0.0020611     1
Init PETSc                                        |       0.10989     0.10989     1
Init dof vector                                   |       0.11122     0.11122     1
Init dofmap                                       |       0.10349     0.10349     1
Init dofmap from UFC dofmap                       |      0.061792    0.061792     1
Init tensor                                       |    0.00020909  0.00041819     2
LU solver                                         |       0.12104     0.12104     1
PARALLEL 2: Distribute mesh (cells and vertices)  |     0.0057669   0.0057669     1
PARALLEL 3: Build mesh (from local mesh data)     |      0.061399    0.061399     1
PETSc LU solver                                   |       0.12081     0.12081     1
Partition graph (calling SCOTCH)                  |       0.28882     0.28882     1
SCOTCH graph ordering                             |    0.00022697  0.00022697     1
build LocalMeshData                               |       0.11821     0.11821     1
compute connectivity 0 - 2                        |    0.00058699  0.00058699     1
compute connectivity 2 - 3                        |    0.00032806  0.00032806     1
compute entities dim = 2                          |      0.021483    0.021483     1
Application 8828173 resources: utime ~664s, stime ~24s, Rss ~275324, inblocks ~3158559, outblocks ~423320

Well, that seems to work.
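
For anything longer than this quick interactive test, the same run goes into a batch script.
A minimal sketch: the job name, walltime and budget code are placeholders, and I’m assuming
the fenics/1.4.0 module lives in the shared module directory used below.

#!/bin/bash --login
#PBS -N poisson_demo
#PBS -l select=1              # one 24-core ARCHER node
#PBS -l walltime=00:20:00
#PBS -A e319                  # budget code - placeholder, use your own

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

module use /work/e319/shared/modules   # assumed location of the fenics module
module load fenics/1.4.0

aprun -n 12 python demo_poisson.py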


Monday 23 June 2014

Compiling FEniCS 1.4.0 on ARCHER

So, today, I decided: why not? Let’s compile FEniCS 1.4.0 on ARCHER.
What can possibly go wrong?

Before attempting anything, I switched to the GNU compilers. I suppose some day I should
try the Cray compiler – but why risk it, when I know gcc works?

module swap PrgEnv-cray PrgEnv-gnu

Then I thought: maybe we should try with Cray PETSc etc. – after all, it is there,
nicely set up, and maybe even optimised for the machine. I need a load of other things,
some of which I have compiled myself, and some of which are already there…

module use /work/e319/shared/modules
module load eigen/3.2.0
module load python/2.7.6
module load numpy/1.8.0
module load ply/3.4
module load boost/1.55
module load swig/2.0.10           # needed on the compute nodes
module load cmake/2.8.12.2        # yes, my own version - available on the compute nodes
module load scientific-python/2.8 # needed by FIAT
module load cray-hdf5-parallel/1.8.12
module load cray-petsc/3.4.3.1
export PETSC_DIR=$CRAY_PETSC_PREFIX_DIR
module load cray-trilinos/11.6.1.0
export TRILINOS_DIR=$CRAY_TRILINOS_PREFIX_DIR
module load cray-tpsl/1.4.0
export SCOTCH_DIR=$CRAY_TPSL_PREFIX_DIR
export PARMETIS_DIR=$CRAY_TPSL_PREFIX_DIR

Now, to get it all working, first of all we need ffc working. That is mostly
Python, so fairly easy to fix up:-

wget https://bitbucket.org/fenics-project/ffc/downloads/ffc-1.4.0.tar.gz
tar xf ffc-1.4.0.tar.gz
cd ffc-1.4.0
python setup.py install --prefix=/work/e319/shared/packages/fenics-1.4.0

The same can be repeated for instant, ufl and fiat, along the lines sketched below.
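
A sketch of that loop, for the record: I’m assuming the other components also carry 1.4.0
version numbers and follow the same download URL pattern as ffc above, and that this Python
puts distutils installs under lib/python2.7/site-packages.

# Sketch: install the remaining pure-Python FEniCS components the same way
# as ffc. Version numbers and the URL pattern are assumptions based on ffc.
PREFIX=/work/e319/shared/packages/fenics-1.4.0

for pkg in instant ufl fiat; do
    wget https://bitbucket.org/fenics-project/${pkg}/downloads/${pkg}-1.4.0.tar.gz
    tar xf ${pkg}-1.4.0.tar.gz
    ( cd ${pkg}-1.4.0 && python setup.py install --prefix=${PREFIX} )
done

# Make sure Python can see what was just installed
export PYTHONPATH=${PREFIX}/lib/python2.7/site-packages:${PYTHONPATH}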

Now, on to dolfin. In CMake, it is essential to disable the build tests with
-DDOLFIN_SKIP_BUILD_TESTS=true, and MPI autodetection with -DDOLFIN_AUTO_DETECT_MPI=false.
CMake doesn’t know that Cray PETSc has some crazy names for its libraries, so we have to
tell it somewhere. I just hack a line into cmake/modules/FindPETSc.cmake:-

set(PETSC_LIB_BASIC "-lcraypetsc_gnu_real")
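
Putting the flags together, the configure-and-build step looks roughly like this. The build
directory, install prefix and -j level are my assumptions; cc and CC are the standard Cray
compiler wrappers.

# Sketch of the dolfin configure/build; apply the FindPETSc.cmake hack first.
cd dolfin-1.4.0
mkdir build && cd build
cmake .. \
    -DCMAKE_C_COMPILER=cc \
    -DCMAKE_CXX_COMPILER=CC \
    -DCMAKE_INSTALL_PREFIX=/work/e319/shared/packages/fenics-1.4.0 \
    -DDOLFIN_SKIP_BUILD_TESTS=true \
    -DDOLFIN_AUTO_DETECT_MPI=false
make -j8 && make install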

But now, there is a problem with Trilinos. This looks more serious.

CMake Error at /opt/cray/trilinos/11.6.1.0/GNU/48/sandybridge/lib/cmake/Trilinos/TrilinosTargets.cmake:412 (message):
  The imported target "teuchoscore" references the file

     "/opt/cray/trilinos/11.6.1.0/GNU/48/sandybridge/lib/libteuchoscore.so.11.6.1"

OK, maybe we can manage without Trilinos for the moment…

module rm cray-trilinos

Now it compiles… but does it work? More later…