Difference between revisions of "Main Page"

From TheoChem Cluster
When you clean up your directories, you may find the 'du -sh' command useful, as it shows how much space a given directory and its contents take up.<br>

<br>
=Submitting a job=
 
<br>
 
The queuing system on TheoChem is SLURM and it works similarly to [https://redmine.hpc.rug.nl/redmine/projects/peregrine/wiki peregrine].<br>
 
* [https://slurm.schedmd.com/sbatch.html sbatch] to submit a job;<br>
 
* [https://slurm.schedmd.com/scancel.html scancel] to cancel a job;<br>
 
* [https://slurm.schedmd.com/squeue.html squeue] to see what is in the queue (use 'man sbatch', etc., to see the manual pages).<br>
 
<br>
 
We have the following queues (partitions) on the TheoChem Cluster:<br>
 
* ultrashort: special queue for ultra-short jobs (< 30 minutes);<br>

* <strong>short: default queue for short jobs (< 1 day);</strong><br>

* medium: special queue for medium-length jobs (< 3 days);<br>

* long: special queue for long jobs (< 10 days).<br>
 
<br>
 
All queues have:<br>

* a default wall time of 00:05:00 (5 minutes);<br>

* a default memory per CPU of 2048 MB (--mem-per-cpu=2048).<br>
 
<br>
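These defaults apply only when a job does not request its own values; they can be overridden per job with #SBATCH lines in the script (or the equivalent sbatch command-line flags). A minimal sketch of such a header, with illustrative values:

```shell
#!/bin/bash
# Request the medium partition instead of the short default, two days of
# wall time (days-hours:minutes:seconds) instead of the 5-minute default,
# and 4096 MB per CPU instead of the 2048 MB default.
# All values here are illustrative, not recommendations.
#SBATCH --partition=medium
#SBATCH --time=2-00:00:00
#SBATCH --mem-per-cpu=4096
```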
 
=Job example=
 
<br>
 
A typical job looks like:<br>
<pre>
#!/bin/bash
#SBATCH --time=0:30:00
#SBATCH -N 2
#SBATCH --partition=short
#SBATCH --mem-per-cpu=3GB
#SBATCH --ntasks-per-node=28
#SBATCH -o out.o%j

mpirun -n 56 ./a.out
</pre>
 
<br>
 
* A wall time of 30 minutes is chosen in this example;<br>
 
* The job is supposed to run on 2 nodes;<br>
 
* The job uses 28 cores on each node (2 &times; 28 = 56 MPI tasks in total);<br>
 
* The short partition is chosen;<br>
 
* The output will be called out.o(job_id).<br>
 
<br>
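Saved as, say, job.sh (a placeholder name), the script is submitted and monitored with the commands listed above; these require a SLURM cluster, and the job ID 12345 below is illustrative:

```shell
# Submit the script; sbatch replies with the assigned job ID,
# e.g. "Submitted batch job 12345"
sbatch job.sh

# List only your own jobs in the queue
squeue -u $USER

# Cancel the job by ID if something went wrong
scancel 12345
```

Once the job starts, its standard output accumulates in out.o12345 in the submission directory.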
 
 
=Available Software=
 
* [[Gamess-uk]]
 
* [[Gromacs]]
 
* [[GronOR]]
 
* [[IQmol]]
 
* [[MCTDH]]
 
* [[Molden]]
 
* [[ORCA]]
 
* [[QChem]]
 
* [[Quantics]]
 
* [[VMD]]
 
<br>
 
 
=Getting started with the Wiki=
 
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.<br><br>
 
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]<br>
 
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]<br>
 
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]<br>
 
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]<br>
 
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki].<br>
 

Latest revision as of 13:28, 27 March 2019

=Welcome to the TheoChem Cluster=

The cluster consists of different compute nodes:

* 10 standard nodes (node01 to node10), each with:
** 2 CPUs (18 cores each, 36 threads per CPU)
** 1 TB disk
** 256 GB RAM


=Disk space and management=

* The /home drive contains 500 GB, of which 10 GB is assigned to each user.
* The /data drive contains 36 TB, of which 1 TB is assigned to each user.


The /home drive is intended for important files, scripts, and your own programs under development. The /data drive is intended for all data files.
It is recommended that you check your disk quota (with the 'quota' command) before submitting any job, to make sure the output will fit.
You should also regularly check the available disk space with the 'df -h' command to verify that the disk systems are not full.

Make sure you back up important data from /home and /data on a regular basis. We do have redundancy, but we do not make backups.

Full disks result in crashing jobs.
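The commands mentioned above can be combined into a quick disk check; the directory used with du here is just an example:

```shell
# Show your quota usage, if the 'quota' tool is installed on this system
command -v quota >/dev/null && quota -s || true

# Show free space on all mounted file systems, human readable
df -h

# Show the total size of one directory tree (here: your home directory)
du -sh "$HOME" 2>/dev/null
```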
