.. _Cumulus1_Overview:

Cumulus
=======

System Overview
^^^^^^^^^^^^^^^

Cumulus is a Cray® XC40™ cluster with Intel Broadwell processors and Cray’s Aries interconnect in a network topology called Dragonfly. Each node has (128) GB of DDR3 SDRAM and (2) sockets with (18) physical cores each. The system has (2) external login nodes and (112) compute nodes.

Login
^^^^^

To log into Cumulus, use your XCAMS/UCAMS username and password.

**NOTE**: If you do not have an account yet, you can follow the steps on this page: :ref:`Account Application`

Data Storage and Filesystems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Cumulus mounts the OLCF open enclave file systems. The available filesystems are summarized in the table below.

.. list-table::
    :widths: 25 25 25 25 25 25
    :header-rows: 1

    * - Filesystem
      - Mount Points
      - Backed Up?
      - Purged?
      - Quota?
      - Comments
    * - Home Directories (NFS)
      - /ccsopen/home/$USER
      - Yes
      - No
      - 50 GB
      -
    * - Project Space (NFS)
      - /ccsopen/proj/$PROJECT_ID
      - Yes
      - No
      - 50 GB
      -
    * - Parallel Scratch (GPFS)
      - | /gpfs/wolf/proj-shared/$PROJECT_ID
        | /gpfs/wolf/scratch/$USER/$PROJECT_ID
        | /gpfs/wolf/world-shared/$PROJECT_ID
      - No
      - Yes
      -
      -

Programming Environments
^^^^^^^^^^^^^^^^^^^^^^^^

Software Modules
----------------

The software environment is managed through the **Environment Modules** tool.

.. list-table::
    :widths: 25 25
    :header-rows: 1

    * - Command
      - Description
    * - ``module list``
      - Lists modules currently loaded in a user’s environment
    * - ``module avail``
      - Lists all available modules on a system in condensed format
    * - ``module avail -l``
      - Lists all available modules on a system in long format
    * - ``module display``
      - Shows environment changes that will be made by loading a given module
    * - ``module load``
      - Loads a module
    * - ``module help``
      - Shows help for a module
    * - ``module swap``
      - Swaps a currently loaded module for an unloaded module

After logging in, you can see the modules loaded by default with ``module list``.

Compilers
---------

The following compilers are available on Cumulus:

* Intel Composer XE (default)
* Portland Group Compiler Suite
* GNU Compiler Collection
* Cray Compiling Environment

**NOTE**: Upon login, the default versions of the Intel compiler and associated Message Passing Interface (MPI) libraries are added to each user’s environment through a programming environment module. Users do not need to make any environment changes to use the default version of Intel and MPI.

Compiler Environments
---------------------

If a different compiler is required, it is important to use the correct environment for that compiler. To help users pair the correct compiler and environment, programming environment modules are provided. These modules load the correct pairing of compiler version, message passing libraries, and other items required to build and run. We highly recommend using the programming environment modules when changing compiler vendors.

The following programming environment modules are available:

* PrgEnv-intel (default)
* PrgEnv-pgi
* PrgEnv-gnu
* PrgEnv-cray

To change from the default Intel environment to the default GCC environment, use:

``module unload PrgEnv-intel``

``module load PrgEnv-gnu``

Or, alternatively, use the swap command:

``module swap PrgEnv-intel PrgEnv-gnu``

Managing Compiler Versions
--------------------------

To use a specific compiler version, you must first ensure the compiler’s PrgEnv module is loaded, and then swap to the correct compiler version.
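To see which versions of a compiler are installed before swapping, the compiler name can be passed to ``module avail`` (the available versions change over time, so rely on the output on Cumulus itself rather than any listing in this document):

.. code-block:: bash

    # List the GCC compiler versions provided as modules on the system
    $ module avail gcc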
For example, the following will configure the environment to use the GCC compilers, then load a non-default GCC compiler version:

``module swap PrgEnv-intel PrgEnv-gnu``

``module swap gcc gcc/4.6.1``

Compiler Commands
-----------------

As is the case with our other Cray systems, the C, C++, and Fortran compilers are invoked with the following commands:

* For the C compiler: ``cc``
* For the C++ compiler: ``CC``
* For the Fortran compiler: ``ftn``

These are compiler wrappers that automatically link in appropriate libraries (such as MPI and math libraries) and build code that targets the compute-node processor architecture. These wrappers should be used regardless of the underlying compiler (Intel, PGI, GNU, or Cray).

**NOTE**: You should not call the vendor compilers (e.g., pgf90, icpc, gcc) directly. Commands such as mpicc, mpiCC, and mpif90 are not available on Cray systems; use cc, CC, and ftn instead.

General Guidelines
------------------

We recommend the following general guidelines for using the programming environment modules:

* Do not purge all modules; rather, use the default module environment provided at the time of login, and modify it.
* Do not swap or unload any of the Cray-provided modules (those with names like ``xt-*``, ``xe-*``, ``xk-*``, or ``cray-*``).

Threaded Codes
--------------

When building threaded codes on Cray machines, you may need to take additional steps to ensure a proper build.

For Intel, use the ``-openmp`` option:

.. code-block:: bash

    $ cc -openmp test.c -o test.x
    $ export OMP_NUM_THREADS=2

For PGI, add ``-mp`` to the build line:

.. code-block:: bash

    $ module swap PrgEnv-intel PrgEnv-pgi
    $ cc -mp test.c -o test.x
    $ export OMP_NUM_THREADS=2

For GNU, add ``-fopenmp`` to the build line:

.. code-block:: bash

    $ module swap PrgEnv-intel PrgEnv-gnu
    $ cc -fopenmp test.c -o test.x
    $ export OMP_NUM_THREADS=2

For Cray, no additional flags are required:

.. code-block:: bash

    $ module swap PrgEnv-intel PrgEnv-cray
    $ cc test.c -o test.x
    $ export OMP_NUM_THREADS=2

Running Jobs
^^^^^^^^^^^^

In High Performance Computing (HPC), computational work is performed by jobs. Individual jobs produce data that lend relevant insight into grand challenges in science and engineering. As such, the timely, efficient execution of jobs is the primary concern in the operation of any HPC system.

A job on a commodity cluster typically comprises a few different components:

* A batch submission script.
* A binary executable.
* A set of input files for the executable.
* A set of output files created by the executable.

And the process for running a job, in general, is to:

#. Prepare executables and input files.
#. Write a batch script.
#. Submit the batch script to the batch scheduler.
#. Optionally monitor the job before and during execution.

The following sections describe in detail how to create, submit, and manage jobs for execution on commodity clusters.

Login vs Compute Nodes
----------------------

When you log into an OLCF cluster, you are placed on a login node. Login node resources are shared by all users of the system. Because of this, users should be mindful when performing tasks on a login node.

Login nodes should be used for basic tasks such as file editing, code compilation, data backup, and job submission. Login nodes should not be used for memory- or compute-intensive tasks. Users should also limit the number of simultaneous tasks performed on the login resources. For example, a user should not run (10) simultaneous tar processes on a login node.
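Work like that belongs in a batch job on a compute node instead. A minimal sketch is shown below; batch scripts are covered in the following sections, and the project ID, job name, and file names here are placeholders:

.. code-block:: bash

    #!/bin/bash
    #SBATCH -A XXXYYY        # placeholder project ID
    #SBATCH -J archive       # placeholder job name
    #SBATCH -N 1
    #SBATCH -t 0:30:00

    cd $SLURM_SUBMIT_DIR
    # The script body runs on the allocated compute node, not on a login node
    tar -czf results.tar.gz results/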
**NOTE**: Compute-intensive, memory-intensive, or otherwise disruptive processes running on login nodes may be killed without warning.

Batch Scheduler
---------------

Cumulus uses the Slurm batch scheduler. The following sections look at Slurm interaction in more detail.

Writing Batch Scripts
---------------------

Batch scripts, or job submission scripts, are the mechanism by which a user configures and submits a job for execution. A batch script is simply a shell script that also includes commands to be interpreted by the batch scheduling software (e.g., Slurm).

Batch scripts are submitted to the batch scheduler, where they are parsed for scheduling configuration options. The batch scheduler then places the script in the appropriate queue, where it is designated as a batch job. Once the batch job makes its way through the queue, the script is executed on the primary compute node of the allocated resources.

Example Batch Script
--------------------

.. code-block:: bash
    :linenos:

    #!/bin/bash
    #SBATCH -A XXXYYY
    #SBATCH -J test
    #SBATCH -N 2
    #SBATCH -t 1:00:00

    cd $SLURM_SUBMIT_DIR
    date
    srun -n 8 ./a.out

**Interpreter Line**

* Line 1: This line is optional and can be used to specify a shell to interpret the script. In this example, the bash shell will be used.

**Slurm Options**

* Line 2: The job will be charged to the "XXXYYY" project.
* Line 3: The job will be named test.
* Line 4: The job will request (2) nodes.
* Line 5: The job will request (1) hour of walltime.

**Shell Commands**

* Line 6: This line is blank, so it will be ignored.
* Line 7: This command changes the current directory to the directory from which the script was submitted.
* Line 8: This command runs the date command.
* Line 9: This command runs (8) MPI instances of the executable a.out on the compute nodes allocated by the batch system.

Submitting a Batch Script
-------------------------

Batch scripts can be submitted for execution using the ``sbatch`` command. For example, the following will submit the batch script named test.slurm:

``sbatch test.slurm``

If successfully submitted, a Slurm job ID will be returned. This ID can be used to track the job. It is also helpful in troubleshooting a failed job; make a note of the job ID for each of your jobs in case you must contact the OLCF User Assistance Center for support.

Common Batch Options for Slurm
------------------------------

.. list-table::
    :widths: 25 25 15
    :header-rows: 1

    * - Option
      - Use
      - Description
    * - ``-A``
      - #SBATCH -A <account>
      - Causes the job time to be charged to <account>.
    * - ``-N``
      - #SBATCH -N <value>
      - Number of compute nodes to allocate. Jobs cannot request partial nodes.
    * - ``-t``
      - #SBATCH -t <time>
      - Requested walltime for the job, e.g. ``1:00:00`` for one hour.
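Slurm also accepts these options on the ``sbatch`` command line rather than as ``#SBATCH`` directives. A brief sketch, reusing the placeholder project ID and script name from the example above:

.. code-block:: bash

    # Submit with options given on the command line; the job ID is printed on success
    $ sbatch -A XXXYYY -N 2 -t 1:00:00 test.slurm

    # List your queued and running jobs to track the submission
    $ squeue -u $USER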