References
OpenFOAM (Open Field Operation and Manipulation)

A package Duane has been looking into for fluid flow solution. It started out as a university research project, went commercial, and is now open source under the GPL licence.

The OpenFOAM CFD Toolbox can simulate anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics, electromagnetics and the pricing of financial options.

The core technology of OpenFOAM is a flexible set of efficient C++ modules. These are used to build a wealth of: solvers, to simulate specific problems in engineering mechanics; utilities, to perform pre- and post-processing tasks ranging from simple data manipulations to visualisation and mesh processing; libraries, to create toolboxes that are accessible to the solvers/utilities, such as libraries of physical models.

OpenFOAM is supplied with numerous pre-configured solvers, utilities and libraries and so can be used like any typical simulation package. However, it is open, not only in terms of source code, but also in its structure and hierarchical design, so that its solvers, utilities and libraries are fully extensible.

OpenFOAM uses finite volume numerics to solve systems of partial differential equations ascribed on any 3D unstructured mesh of polyhedral cells. The fluid flow solvers are developed within a robust, implicit, pressure-velocity, iterative solution framework, although alternative techniques are applied to other continuum mechanics solvers. Domain decomposition parallelism is fundamental to the design of OpenFOAM and integrated at a low level so that solvers can generally be developed without the need for any 'parallel-specific' coding.
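As a toy illustration of the finite-volume idea (nothing like OpenFOAM's actual 3D polyhedral machinery; the function and its names are made up for this sketch), a 1D diffusion problem can be advanced by balancing the fluxes through each cell's faces:

```python
# Illustrative only: a 1D finite-volume discretisation of the heat
# equation dT/dt = alpha * d2T/dx2, the simplest instance of the kind
# of problem OpenFOAM solves on 3D unstructured polyhedral meshes.

def diffuse_fv(T, alpha, dx, dt, steps):
    """Explicit finite-volume update with zero-flux boundaries."""
    T = list(T)
    for _ in range(steps):
        # diffusive flux through each interior face (between cells i and i+1)
        flux = [alpha * (T[i + 1] - T[i]) / dx for i in range(len(T) - 1)]
        new = T[:]
        for i in range(len(T)):
            left = flux[i - 1] if i > 0 else 0.0        # no flux at boundaries
            right = flux[i] if i < len(T) - 1 else 0.0
            new[i] = T[i] + dt * (right - left) / dx    # balance of face fluxes
        T = new
    return T

# heat spreads out while the total is conserved exactly (telescoping fluxes)
T = diffuse_fv([0.0, 0.0, 10.0, 0.0, 0.0], alpha=1.0, dx=1.0, dt=0.1, steps=200)
```

Because each face flux appears with opposite signs in its two neighbouring cells, the scheme conserves the total quantity exactly, which is the property that makes finite volumes attractive for continuum mechanics.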

CARP (Cardiac Arrhythmias Research Package)

  • Computational tools for modeling electrical activity in cardiac tissue. Vigmond EJ, Hughes M, Plank G, Leon LJ. 2003. J. Electrocardiol. 36(Suppl 1):69-74.
  • Runs on both shared and distributed memory platforms with low-level wrappers to switch memory handling
  • Uses OpenMP and PETSc, and is interfaced to SCIRun


PETSc

The licence for most of PETSc is not at all restrictive, merely requiring that the "notice is retained thereon and on all copies or modifications." Some code appears to be under the LGPL, and PETSc can be built without this code if necessary.

NERSC comments: "NERSC has conducted an initial evaluation of PETSc, and results from this testing were generally positive. Because of the size of the PETSc package and the fact that it strongly promotes a design methodology, some effort is required to learn the tool, but the payback for this effort is substantial. PETSc is one of the most established and broadly useful items in the ACTS Collection. We believe that as the Collection progresses toward tool interoperability, the PETSc design style will propagate to other tools, especially those in the numerical methods group. For this reason, we highly recommend that users of the ACTS Collection explore PETSc."

PETSc is not threadsafe: it does not use threads, and its developers do not see threads as the appropriate model for PETSc.
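To convey the kind of implicit, iterative linear solve that PETSc's Krylov solver component provides, here is a plain conjugate-gradient sketch in Python. This is emphatically not PETSc's API (which operates on distributed sparse matrices in C); the function and its names are invented for illustration:

```python
# Sketch of a Krylov iteration (conjugate gradients), the class of
# iterative solver PETSc's KSP component offers for A x = b with
# symmetric positive-definite A. Dense Python lists stand in for
# PETSc's distributed sparse matrices and vectors.

def cg(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A x, with x = 0
    p = r[:]                                  # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                      # converged on the residual norm
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# a small SPD system; exact solution is (1/11, 7/11)
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```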


The SciDAC Terascale Simulation Tools and Technology (TSTT) Center

"The primary objective of the TSTT center is to develop technologies that enable application scientists to easily use multiple mesh and discretization strategies within a single simulation on terascale computers."

Uses FMDB for its mesh management infrastructure.

Makes use of SIDL/Babel.

An interface definition effort specifies a mesh API and refers to a field API for field data on the mesh, but the field API does not appear to be published (or developed?) yet (2005-12-21).


The libMesh library "is a C++ framework for the numerical simulation of partial differential equations on serial and parallel platforms." "Currently the library supports 2D and 3D steady and transient finite element simulations", as demonstrated in a set of examples.

Uses PETSc.

A set of PDF slides from a talk by a libMesh developer is also available.


Cactus "is an open source problem solving environment designed for scientists and engineers."

Provides access to PETSc, HDF5 and MPI. Mainly written in C, but some components are in Fortran.

Looks grid (rather than element) oriented.


Sundance "is a system for rapid development of high-performance parallel finite-element solutions of partial differential equations."

It is a C++ template based system. LGPL licence. Uses Trilinos.

CUBIT Mesh Generation Toolkit

CUBIT is available free to academic institutions, but there is a one-off $300 distribution fee to download it.


Designing and Building Parallel Programs

An online reference book by Ian Foster which covers:

* Compositional C++

Parallel Programming with MPI

Parallel Programming with MPI is an elementary introduction to programming parallel systems that use the MPI 1.1 library of extensions to C and Fortran. It is intended for use by students and professionals with some knowledge of programming conventional, single-processor systems, but who have little or no experience programming multiprocessor systems. It is an extensive revision and expansion of A User's Guide to MPI.

Unfortunately it is not available online, but example code (c. 2000) can be downloaded for both C and Fortran.

Data Access

Common Component Architecture (CCA)

The objective of the CCA Forum "is to define a minimal set of standard interfaces that a high-performance component framework has to provide to components, and can expect from them, in order to allow disparate components to be composed together to build a running application. Such a standard will promote interoperability between components developed by different teams across different institutions."

Uses Chasm, Babel/SIDL, Ccaffeine.
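The wiring idea behind CCA (components declare ports they provide and ports they use, and a framework connects them) can be conveyed in a few lines of Python. This is a hypothetical toy, not the real CCA, SIDL, or Ccaffeine interfaces; every name here is invented:

```python
# Toy illustration of component composition via a framework (not the
# actual CCA API): components never import each other; they only
# exchange services through named ports held by the framework.

class Framework:
    """Connects 'uses' ports to registered 'provides' ports by name."""
    def __init__(self):
        self._ports = {}

    def add_provides(self, name, port):
        self._ports[name] = port

    def get_uses(self, name):
        return self._ports[name]            # resolve a dependency late

class DoublerPort:
    """A trivial service some other component can consume."""
    def apply(self, x):
        return 2 * x

class Driver:
    """A component that uses, but does not implement, DoublerPort."""
    def __init__(self, framework):
        self.port = framework.get_uses("DoublerPort")

    def run(self, x):
        return self.port.apply(x)

fw = Framework()
fw.add_provides("DoublerPort", DoublerPort())
result = Driver(fw).run(21)
```

The point of the standard is that `Driver` could be swapped for any component using the same port, regardless of who wrote it or in what language (via Babel/SIDL).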


Ccaffeine "is a Common Component Architecture prototype framework for distributed memory message passing High Performance Computing."


MOAB, "a Mesh-Oriented datABase, is a software component for creating, storing and accessing finite element mesh data."

Developed in collaboration with TSTT and CUBIT.

Written in C++.


The last release was in 2004. The mailing list received three emails in 2005, two of which were spam; the other received no answer.


HDF5 "is a general purpose library and file format for storing scientific data."

"HDF5 was created to address the data management needs of scientists and engineers working in high performance, data intensive computing environments."


The HDF5 Mesh API (prototype) "provides a standard higher-level API for storing and retrieving structured and unstructured 'mesh' data typical of applications such as computational fluid dynamics, finite element analysis, and visualization."

The latest version available is January 2003.


The FMDB (Flexible distributed Mesh DataBase) provides "a distributed mesh data management infrastructure that satisfies the needs of distributed domain of applications." It is operable with the TSTT Mesh API.

Written in C++ with C/C++ API.

Uses MPI and Zoltan.


Zoltan provides "dynamic redistribution of data", and includes an "unstructured communication package that greatly simplifies interprocessor communication", with C and Fortran interfaces.



Global Arrays

The Global Arrays (GA) toolkit "is a library for writing parallel programs that use large arrays distributed across processing nodes. The library has both Fortran and C interfaces."

"Contrary to MPI, GA does not require cooperation between sender and receiver to transfer data."



Autopack "is a message-passing library which transparently packs small messages into fewer larger ones for more efficient transport by MPI."

Written in C.
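The packing idea can be sketched as follows. This is a hypothetical illustration, not Autopack's actual API or wire format: many small messages bound for the same destination are concatenated with length headers so that MPI sees one large send instead of many tiny ones:

```python
import struct

# Sketch of message aggregation (not Autopack itself): pack small
# messages into one length-prefixed buffer, then recover them on the
# receiving side. Per-message overhead is a 4-byte big-endian header.

def pack_messages(messages):
    """Pack a list of bytes objects into a single buffer."""
    buf = b""
    for m in messages:
        buf += struct.pack("!I", len(m)) + m
    return buf

def unpack_messages(buf):
    """Split the buffer back into the original small messages."""
    out, offset = [], 0
    while offset < len(buf):
        (length,) = struct.unpack_from("!I", buf, offset)
        offset += 4
        out.append(buf[offset:offset + length])
        offset += length
    return out
```

The win comes from paying MPI's per-message latency once per buffer rather than once per small message.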


memcached "is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load."

Although this is used in quite a different application, something similar could be useful to our application.


Metis is a software package that allows users to easily partition a discrete problem that has a connectivity matrix, dividing the problem into subdomains for a distributed computing environment. ParMetis is a parallel version of Metis.

Note that this is used by Jon Pearce.
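For illustration only, here is a crude partitioner that grows subdomains over a connectivity graph by breadth-first search. Metis itself uses a far more sophisticated multilevel scheme that minimises the edge cut; this sketch (all names invented) only shows what "dividing a problem into subdomains" means:

```python
from collections import deque

# Naive domain partitioning sketch (not the Metis algorithm): grow one
# subdomain per seed vertex by breadth-first search, alternating
# between seeds to keep subdomain sizes roughly balanced.

def bfs_partition(adjacency, seeds):
    """adjacency: dict vertex -> list of neighbours; seeds: one vertex per subdomain."""
    part = {s: k for k, s in enumerate(seeds)}
    queues = [deque([s]) for s in seeds]
    while any(queues):
        for k, q in enumerate(queues):
            if q:
                v = q.popleft()
                for w in adjacency[v]:
                    if w not in part:        # claim unassigned neighbours
                        part[w] = k
                        q.append(w)
    return part

# a path graph 0-1-2-3-4-5 split from its two ends
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
parts = bfs_partition(path, seeds=[0, 5])
```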

Cluster OpenMP

Intel's Cluster OpenMP implements distributed shared memory (DSM) with a derivative of TreadMarks.

A new "sharable" directive is introduced (which can be automated to some extent) to determine which variables need to be handled by DSM. The DSM handles changes to each page of memory and synchronization.

Interfaces Between Languages

SciDAC comments that "SciDAC applications are often written in Fortran90/95, whereas for SciDAC math and CS libraries C/C++ is more common. The technical difficulties in connecting the two often preclude effective sharing. We currently have two tools that cover the depth and breadth of the language interoperability problem: Chasm and Babel."

"These two tools with their differing approaches complement each other effectively. Chasm is applying its Fortran90 and C++ automation techniques to produce SIDL. Babel is adopting Chasm's Fortran90 array infrastructure to augment its own Fortran90 bindings."


SWIG is a "software development tool that connects programs written in C and C++ with a variety of high-level programming languages" (not including Fortran).

"SWIG parses ANSI C++ that has been extended with a number of special directives."

With SWIG, "you don't have to add an extra layer of IDL specifications to your application."


Chasm "parses Fortran and C/C++ source code and automatically generates bridging code that can be used to make calls to routines in the foreign language."


Babel includes tools for a "Scientific Interface Definition Language (SIDL) that addresses the unique needs of parallel scientific computing". It provides wrappers between Fortran 77, Fortran 90, C, C++, Python, and Java (client only). It is available under the LGPL.

This may one day also support "Parallel Data


Used in the TSTT.


Pyrex "is Python with C data types." This "lets you write code to convert between arbitrary Python data structures and arbitrary C data structures" in (almost) Python rather than C.

Inline for Perl

Inline "saves you from the hassle of having to write and compile your own glue code using facilities like XS or SWIG", and therefore may be useful for quickly implementing computationally intensive Perl functions in C to try out the optimization.

Inline for Python

SciPy has a comparison of different methods of optimizing Python through implementation of components in compiled languages. Methods include weave (inline), f2py, and Pyrex.

PyInline "is the Python equivalent of Brian Ingerson's Inline module for Perl".

Ruby Inline

Ruby Inline "is an analog to Perl's Inline::C. Out of the box, it allows you to embed C/C++ external module code in your Ruby script directly. By writing simple builder classes, you can teach it how to cope with new languages (Fortran, Perl, whatever)."

Mixing Dynamic Languages

Mixing dynamic languages is more challenging than mixing compiled languages, as interpreters for each of the languages need to communicate. Inline::Python seems to have achieved this and lets you "Write Perl subs and classes in Python". It also provides a Python module (called "perl") that "exposes Perl packages and subs" to Python.

Ruby/Python "is a Ruby extension library to embed Python interpreter in Ruby. With this library, you can use the libraries written for Python in your Ruby scripts. The most powerful feature of Ruby/Python is its transparency." This project hasn't been updated since 2000-09-11.

Tools for Dynamic Languages


SciPy "is a set of open source scientific and numeric tools for Python. It currently supports special functions, integration, ordinary differential equation (ODE) solvers, gradient optimization, genetic algorithms, parallel programming tools, an expression-to-C++ compiler for fast execution, and others."

BSD licence.
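As a sketch of what an ODE solver like those in SciPy does internally, here is a classical fourth-order Runge-Kutta integrator in plain Python. This is an illustration of the technique, not SciPy's actual implementation, and the names are invented:

```python
import math

# Pure-Python sketch of ODE integration: classical 4th-order
# Runge-Kutta for dy/dt = f(t, y), the kind of stepping an ODE
# solver library performs (with adaptive step control on top).

def rk4(f, y0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)                          # slope at the start of the step
        k2 = f(t + h / 2, y + h * k1 / 2)     # two midpoint estimates
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)             # slope at the end of the step
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = y with y(0) = 1 has the exact solution e^t
approx = rk4(lambda t, y: y, 1.0, 0.0, 1.0, 100)
```

With 100 steps the result matches e to well under 1e-6, illustrating the method's fourth-order accuracy.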

SciPy Core "contains a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra".


ScientificPython is a collection of Python modules that are useful for scientific computing, including interfaces to MPI and BSP parallel programming.

Perl Data Language

PDL "gives standard Perl the ability to compactly store and speedily manipulate the large N-dimensional data arrays".

Numerical Ruby

Numerical Ruby "incorporates fast calculation and easy manipulation of large numerical arrays into the Ruby language."


Parallel-MPI for Perl is Perl's interface to MPI, which I believe will be needed if we use Perl as our scripting language. Correct me if I am wrong, but Sarah Healy may have a better understanding of this.


Pypar is a Python module that allows interfacing to MPI. The link also discusses other options for allowing Python to interface to MPI.

MPI Ruby

Interfaces are available from Python to MPI, PETSc and ParMETIS.

Software Development

This list originated from Peter's email (2005-12-17 11:08am) and was placed into the wiki for reference.

Continuous Integration (CI) tools:

A CI system automatically builds the project every time a change is made to some set of resources, such as the source code repository, then tells you the results of the build.
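The core CI loop can be sketched in a few lines of Python. This is a toy (the file contents and the build callback are hypothetical); real CI tools watch a repository and run a full build script:

```python
import hashlib

# Toy continuous-integration loop: fingerprint the sources, and when
# the fingerprint changes, run the build and report the result.

def snapshot(sources):
    """Fingerprint a set of (name, content) source files."""
    h = hashlib.sha1()
    for name, content in sorted(sources.items()):
        h.update(name.encode() + b"\0" + content.encode() + b"\0")
    return h.hexdigest()

def ci_step(sources, last_hash, build):
    """Run `build` only when the sources changed; return (hash, result)."""
    current = snapshot(sources)
    if current == last_hash:
        return current, "unchanged"
    return current, ("build ok" if build(sources) else "build FAILED")

srcs = {"main.c": "int main(void){return 0;}"}   # hypothetical source tree
h1, r1 = ci_step(srcs, None, lambda s: True)     # change detected -> build runs
h2, r2 = ci_step(srcs, h1, lambda s: True)       # no change -> build skipped
```

A real CI system adds the two pieces this sketch omits: triggering on repository commits, and telling the team the result.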

Code checking:

Bug tracking:

Testing tools:

  • Cobertura: 'cobertura' is Spanish for 'coverage'
  • Some test categories are: _unit tests_ (standalone tests of individual objects like subroutines); _functional tests_ (entire code function); _performance tests_ (how fast); _load tests_ (performance when there are many users); _smoke tests_ (lightweight tests designed to exercise key parts of the software: does it 'smoke', i.e. fail, when you invoke basic functions?); _integration tests_ (how the software works with other things like databases)
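A minimal example of the unit-test category, using Python's standard unittest module (the function under test is invented for illustration):

```python
import unittest

# A standalone unit test of one small piece of code, the simplest
# of the test categories listed above.

def midpoint(a, b):
    """Function under test (hypothetical): midpoint of two numbers."""
    return (a + b) / 2.0

class MidpointTest(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(midpoint(2, 4), 3.0)

    def test_negative(self):
        self.assertEqual(midpoint(-1, 1), 0.0)

# run with: python -m unittest <module name>
```

Functional, smoke, and integration tests use the same assertion machinery but exercise progressively larger slices of the system.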

Architecture for GUIs:

Here are some of the s/w development practices advocated by the Ship It! book:

  1. Maintain a 'List': publicly available, prioritised, on a time line, living, measurable, targeted
  2. Have one tech lead.
  3. Hold brief daily meetings: one person should always lead the meeting, 2 mins per person, be sure everyone knows the format, everyone must answer the questions, record on a white board, scale to 15 people, deal with snipers
  4. Always have code reviews: never work for more than 2 days without a code review with another developer (include the reviewer's name), rotate reviewers, one review for each added feature, review before submitting
  5. Always use code change notifications: notification emails should include the reviewer's name, the purpose of the change or addition, and the difference between old and new.
  6. Good interfaces (APIs) give good team interactions.