Welcome to the TheoChem Cluster
The cluster consists of the following compute nodes:
* 10 Standard Nodes (node01 to node10)
** 2 CPUs (18 cores / 36 threads per CPU)
** 1 TB Disk
** 256 GB RAM
=Disk space and management=
*The /home drive contains 500 GB of space, of which 10 GB is assigned to each user.
*The /data drive contains 36 TB of space, of which 1 TB is assigned to each user.
The /home drive is intended for important files, scripts, and your own programs under development. The /data drive is intended for all data files.
It is recommended that you check your disk quota (with the 'quota' command) before submitting any job, to make sure the output will fit. You should also regularly check the available disk space with the 'df -h' command to verify that the disk systems are not full.
Make sure you back up important data from /home and /data on a regular basis. We do have redundancy, but we don't make backups.
Full disks result in crashing jobs.
When cleaning up your directories, you may find the 'du -sh' command useful: it shows how much space a particular folder uses.
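The checks above can be sketched as a short shell snippet (runnable on any Linux machine; the 'quota' command itself is only meaningful on the cluster):

```shell
#!/bin/bash
# Pre-submission disk check, using the commands described above.

# Summarize how much space the current folder uses.
du -sh .

# Show free space on all mounted filesystems; full disks crash jobs.
df -h

# On the cluster, also run 'quota' to see your personal /home and /data limits.
```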
=Submitting a job=
The queuing system on TheoChem is SLURM, and it works similarly to Peregrine:
* sbatch to submit a job;
* scancel to cancel a job;
* squeue to see what is in the queue (use 'man sbatch' etc. to see the manuals).
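A hypothetical submit/monitor/cancel round trip with these commands might look as follows ('job.sh' is a placeholder batch script; on a machine without SLURM the sketch only reports that):

```shell
#!/bin/bash
# Sketch of the SLURM commands listed above; only meaningful on the cluster.
if command -v sbatch >/dev/null 2>&1; then
    jobid=$(sbatch --parsable job.sh)   # --parsable prints just the job ID
    echo "Submitted job $jobid"
    squeue -u "$USER"                   # show your jobs in the queue
    scancel "$jobid"                    # cancel the job again
    status="submitted"
else
    echo "SLURM is not available on this machine."
    status="no-slurm"
fi
```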
We have the following queues (partitions) on TheoChem:
* ultrashort: special queue for ultra short jobs (< 30 minutes);
* short: default queue for short jobs (< 1 day);
* medium: special queue for medium jobs (< 3 days);
* long: special queue for long jobs (< 10 days);
All queues have:
* a default wall time of 00:05:00;
* a default memory per CPU of 2048 MB (--mem-per-cpu=2048).
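To request more than these defaults, override them in the #SBATCH header of your job script; the values below are only illustrative:

```shell
#SBATCH --time=02:00:00       # override the 5-minute default wall time
#SBATCH --mem-per-cpu=4096    # override the 2048 MB per-CPU default (in MB)
#SBATCH --partition=medium    # pick the queue that matches the requested time
```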
A typical job looks like:
<pre>
#!/bin/bash
#SBATCH --time=0:30:00
#SBATCH -N 2
#SBATCH --partition=short
#SBATCH --mem-per-cpu=3GB
#SBATCH --ntasks-per-node=28
#SBATCH -o adf.inp.o%j

module purge
module load shared

cd $TMPDIR

adf -n 56 <<EOF
ADF input
EOF
</pre>
* A wall time of 30 minutes is chosen in this example;
* The job is supposed to run on 2 nodes;
* The job uses 28 cores on each node;
* The short partition is chosen;
* The output will be written to adf.inp.o<job_id> (%j expands to the job ID).
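The resource arithmetic in this example can be checked with a small shell sketch (numbers copied from the script above):

```shell
#!/bin/bash
# Resource arithmetic for the example job script above.
nodes=2
tasks_per_node=28
mem_per_cpu_gb=3

total_tasks=$((nodes * tasks_per_node))               # matches 'adf -n 56'
mem_per_node_gb=$((tasks_per_node * mem_per_cpu_gb))  # must fit in 256 GB RAM

echo "total tasks: $total_tasks"                # 2 x 28 = 56
echo "memory per node: ${mem_per_node_gb} GB"   # 28 x 3 GB = 84 GB, under 256 GB
```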
Other programs can be run in a similar way.