Revision as of 10:20, 27 March 2019

Welcome to the TheoChem Cluster

The cluster consists of the following compute nodes:

  • 10 standard nodes (node01 to node10)
    • 2 CPUs (18 cores / 36 threads per CPU)
    • 1 TB disk
    • 256 GB RAM

Disk space and management

  • The drive /home contains 500 GB of space, of which 10 GB is assigned to each user.
  • The drive /data contains 36 TB of space, of which 1 TB is assigned to each user.

The /home drive is intended for important files, scripts, and your own programs under development. The /data drive is intended for all data files.
It is recommended that you check your disk quota (with the 'quota' command) before submitting any job, to make sure that the output will fit.
You should also regularly check the available disk space with the 'df -h' command to make sure that the disk systems are not full.

Make sure you backup important data from /home and /data on a regular basis. We do have redundancy, but we don't make backups.

Full disks result in crashing jobs.

When cleaning up your directories, you may find the 'du -sh' command useful, as it shows how much space a particular directory uses.
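For example, the checks above might look like the following (the 'quota -s' flag prints sizes in human-readable units; your home directory is used here as the example path):

```shell
# Show your quota on the shared drives ('|| true' keeps this sketch from
# failing on machines where the quota tool is not installed)
quota -s || true

# Free space on the filesystem holding your home directory
df -h "$HOME"

# Disk usage of one particular directory (here: your home directory)
du -sh "$HOME"
```

On the cluster you would typically point df and du at /home and /data instead.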

Submitting a job

The queuing system on TheoChem is SLURM, and it works similarly to Peregrine.

  • sbatch to submit a job;
  • scancel to cancel a job;
  • squeue to see what is in the queue (use 'man sbatch' etc. to see the manuals).
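A typical workflow with these commands looks like this (job.sh and the job id 12345 are placeholders; these commands only work on the cluster itself):

```shell
# Submit a job script; SLURM replies with "Submitted batch job <id>"
sbatch job.sh

# List only your own jobs in the queue
squeue -u "$USER"

# Cancel a job by the id reported at submission
scancel 12345
```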

We have the following queues (partitions) on the TheoChem Cluster:

  • ultrashort: special queue for ultra short jobs (< 30 minutes);
  • short: default queue for short jobs (< 1 day);
  • medium: special queue for medium jobs (< 3 days);
  • long: special queue for long jobs (< 10 days).

All queues have:

  • a default wall time of 00:05:00 (5 minutes);
  • a default memory per CPU of 2048 MB.
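Because these defaults are low, any job needing more than five minutes or more than 2048 MB per core must request resources explicitly, either in the job script or on the command line. For example (job.sh is a placeholder for your own script):

```shell
# Request 2 days of wall time and 4000 MB per core on the medium partition,
# overriding the queue defaults for this one submission
sbatch --time=2-00:00:00 --mem-per-cpu=4000 --partition=medium job.sh
```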

Job example

A typical job looks like:

#!/bin/bash
#SBATCH --time=0:30:00
#SBATCH --partition=short
#SBATCH --mem-per-cpu=3GB
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
#SBATCH -o adf.inp.o%j

adf -n 56 <<EOF
ADF input
EOF

  • A wall time of 30 minutes is chosen in this example;
  • The job runs on 2 nodes;
  • The job uses 28 cores on each node;
  • The short partition is chosen;
  • The output file will be called adf.inp.o(job_id).

Other programs can be run in a similar way.
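As a sketch of such a "similar way" for another installed package, a single-node Gromacs job could look like the following (the input file topol.tpr is a placeholder, and whether 'gmx' is on the default PATH on the cluster is an assumption; adapt to the actual installation):

```shell
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --partition=short
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH -o gromacs.o%j

# Run mdrun with one thread-MPI rank per requested core
# (topol.tpr is a placeholder run-input file)
gmx mdrun -ntmpi 28 -s topol.tpr
```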

Available Software

  • Gamess-uk
  • Gromacs
  • GronOR

Getting started with the Wiki

Consult the User's Guide for information on using the wiki software.