Getting started

If you are new to HPC, please read How do High Performance Computers (HPC) differ from Desktop PCs?

To get access to the MaRC2 HPC Cluster, please email the MaRC2 team at marc[at] for further assistance.

There are various ways of getting started:

  • See the overview and workshop materials below
  • Visit a workshop (as announced below and on the linux-cluster mailing list)
  • Read the tutorial, FAQ and other chapters from this MaRC2 User Guide (see table of contents)
  • Just ask the MaRC2 team by email: marc[at]
  • See the language-specific example files, located on MaRC2's file system at /home/examples

The MaRC2 HPC Cluster - overview

The following picture shows a schematic overview of the MaRC2 HPC Cluster:

(Also see the German overview.)

A Tutorial as well as a Frequently asked questions section are also available within this wiki.

First login and running jobs:

After account activation, you may log in to the head nodes via SSH and use them for medium-performance tasks, e.g. editing your files or compiling your programs:

ssh -XC -p 223

Several pre-installed compilers and software versions can be selected through Environment Modules.
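As a sketch, a typical Environment Modules session on a head node might look like the following (the module name "gcc" is an assumed example; run `module avail` to see what is actually installed on MaRC2):

```shell
# Guarded so the snippet also runs outside the cluster, where the
# "module" command does not exist:
if command -v module >/dev/null 2>&1; then
    module avail          # list all installed software modules
    module load gcc       # load a compiler module (name assumed)
    module list           # show the currently loaded modules
    module unload gcc     # unload it again
else
    echo "The module command is only available on the cluster."
fi
```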

For high-performance execution on the compute nodes, a job must be submitted to the Batch system (Sun Grid Engine). The batch system keeps track of the cluster's resources and ensures that everyone's jobs get executed in good time. You may also set the Parallel environment of your job (e.g. multiple cores on one node, or cores distributed over multiple nodes).
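As an illustration, a minimal Sun Grid Engine job script might look like this sketch (the parallel environment name "orte" and the slot count of 8 are assumptions; see the Parallel environment chapter for the values valid on MaRC2):

```shell
# Write a minimal SGE job script; the "#$" lines are directives for the
# batch system:
cat > myjob.sh <<'EOF'
#!/bin/bash
#$ -S /bin/bash      # interpret the job with bash
#$ -cwd              # start in the directory of submission
#$ -N example_job    # job name shown in the queue
#$ -pe orte 8        # request 8 slots in a parallel environment (name assumed)
echo "Job running on $(hostname)"
EOF
# Submit it to the batch system with:  qsub myjob.sh
# Check its status afterwards with:    qstat
```

Because SGE directives are ordinary bash comments, the script also runs as a plain shell script, which is handy for a quick local test.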

Memory and CPUs:

Memory (RAM) is shared among the CPUs of a compute node, but access times may differ (so-called NUMA architecture). Each compute node has four CPU sockets; internally, each physical CPU is composed of two logical CPUs with eight cores each. Furthermore, every two cores of a logical CPU share a single FPU.
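You can inspect the actual socket, core and NUMA layout of a node from the shell (the output naturally depends on the machine you run it on):

```shell
# Print socket, core and NUMA information for the current machine:
lscpu | grep -E 'Socket|Core|NUMA' || true
# numactl gives a more detailed NUMA picture, if installed:
command -v numactl >/dev/null 2>&1 && numactl --hardware || true
```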

Jobs which are distributed over multiple compute nodes, however, have no shared memory and must therefore be programmed to communicate via the network, e.g. by using OpenMPI or the ParaStation MPI.
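As a minimal sketch of such a distributed program, the following writes a classic MPI "hello world" in C and notes how it would be compiled and launched (the `mpicc`/`mpirun` commands assume an MPI module is loaded on the cluster):

```shell
# Create a minimal MPI program; each process prints its rank:
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's number */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
# On the cluster (with an MPI module loaded), compile and run with e.g.:
#   mpicc hello_mpi.c -o hello_mpi
#   mpirun -np 4 ./hello_mpi
```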

Disk storage:

Files can be stored in three places (see also Environment variables):

  • the /home fileserver with nightly backups: in your personal home directory /home/username or your workgroup directory /home/ag_xyz
  • the performance-optimized /scratch fileserver with no backups: in your personal directory /scratch/username
  • the node-local temporary directory: available as $TMPDIR or $HPC_LOCAL during runtime only; it disappears after the job finishes
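A common pattern is to stage I/O-heavy work through the node-local directory inside a job script. A sketch (file names are illustrative; $TMPDIR is only set by the batch system at runtime, so this falls back to /tmp elsewhere):

```shell
# Use the fast node-local directory as working space:
WORK="${TMPDIR:-/tmp}/mywork.$$"      # fall back to /tmp outside a job
mkdir -p "$WORK"
echo "some input" > "$WORK/input.dat" # stand-in for copying real input data
# ... run the actual computation against the files in $WORK here ...
cp "$WORK/input.dat" ./result.dat     # copy results back before the job ends
rm -rf "$WORK"                        # node-local data vanishes after the job
```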

MaRC2 workshops and training materials for download

User meetings are planned for the start of each semester. Each meeting contains an introduction to the cluster aimed at new users, followed by an open discussion which experienced users can and should also join to discuss current topics and issues concerning high performance computing in Marburg. Upcoming and past workshops are summarized below.

Providing HPC User Support at Hessian Universities (Workgroup seminar for theoretical chemistry, groups Berger and Tonner, 07-Jul-2017)

MaRC2 New User Meeting (30-Mar-2017)
MaRC2 reference sheet:

HiPerCH 2 workshop (High-Performance Computing in Hessen) (22-Sep-2014 through 24-Sep-2014):
3-day workshop at TU Darmstadt
More information at HKHLR

Iterative linear solvers and parallelization (in German) (24-Mar-2014 through 28-Mar-2014):
1-week compact course, provided by the High Performance Computing Center Stuttgart (HLRS)

Parallel Programming Concepts (starting 03-Feb-2014):
Free 6-week online course, hosted by Hasso Plattner Institute (HPI) Potsdam

MaRC2 Matlab Meeting (06-Nov-2013):

MaRC2 R Meeting (06-Nov-2013):

MaRC2 Introductory Workshop (19-Jun-2013):
High Performance Computing in general:
Using the MaRC2 cluster:

MaRC2 User Meeting (23-May-2013):
(no materials)

MaRC2 Introductory Workshop (06-Mar-2012):
(no materials)

Obtaining a PDF version of this document

A PDF version of this User's Guide is available for download.