Here you can find our most frequently asked questions. Please check whether your question is answered here before submitting a support request. For your convenience, the questions are sorted by topic. As some questions relate to two or more topic areas, you might see them more than once in the listing.

If your program crashes with an illegal instruction error (such errors are sometimes buried among many other errors), this is usually due to differences in processor architecture between the machine the code was compiled on and the machine the program ran on. Specifically: if you compiled your code on a newer system, such as our frontends gwdu101 and gwdu102, and try to run it on one of the older nodes, such as the dmp or dfa nodes, it will crash with exactly this kind of error. To mitigate this, either add #SBATCH -C cascadelake to your jobscript to limit it to nodes with a Cascade Lake processor, or compile it on our older frontend gwdu103.
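
A minimal jobscript sketch restricting the job to Cascade Lake nodes (the partition, time limit, and program name are placeholders, not taken from this FAQ):

```bash
#!/bin/bash
#SBATCH --partition=medium      # placeholder partition
#SBATCH --time=01:00:00         # placeholder time limit
#SBATCH -C cascadelake          # only schedule on Cascade Lake nodes

# run the binary that was compiled on the newer frontends
./my_program
```
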
If you want to know when a pending job will start, run scontrol show job $JOBID | grep StartTime to get an estimate for the start time of your job. However, this information is not always available.
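
For example (the job ID below is a placeholder):

```bash
# list your own jobs to find the job ID
squeue -u $USER

# ask Slurm for its current start-time estimate for that job
scontrol show job 1234567 | grep StartTime
```
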
If your job log contains a message like slurmstepd: error: Detected 1 oom-kill event(s) in StepId=[JOBID].batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler. then your job ran out of memory, i.e. your program used more memory/RAM than you requested. Please request more memory.
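
A minimal sketch of raising the memory request in a jobscript (the value and program name are placeholders); --mem-per-cpu is an alternative if you prefer to request memory per core:

```bash
#!/bin/bash
#SBATCH --time=01:00:00      # placeholder time limit
#SBATCH --mem=16G            # example: request 16 GiB of RAM per node

./my_program
```
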
If your pending job shows the reason (QOSGrpCpuLimit), it means that all the global job slots for the QoS are currently used. It has nothing to do with your user being limited. We have a global limit of 1024 cores being used simultaneously in the long QoS. Your job has to wait until enough cores are available.

If you have exceeded your quota, check whether you can move some files to /scratch, the archive, or a different system. If this is not possible, write a mail to support@gwdg.de requesting more quota for your $home. You can always check your quota with Quota on the frontend nodes.

The Quota command shows your current usage. If the number underneath used is larger than the number underneath softlimit, you have exceeded the softlimit, but (maybe) not reached the hardlimit yet. Check whether you can move some files to /scratch, the archive, or move them off the HPC systems completely. If this is not possible, you can also request more quota by writing a mail to support.
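
A sketch of freeing up space by moving data to scratch; note that the scratch path below is an assumption, not taken from this FAQ, so check the correct path for your account first:

```bash
# move a large directory from $HOME to your scratch space
# NOTE: /scratch/users/$USER is an assumed path
mv ~/large_dataset /scratch/users/$USER/
```
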
More information about conda can be found here. First, you need to load the corresponding module: module load conda. Then you create a new environment with conda create --name <environment> and activate it with conda activate <environment>. Within the environment you can then install the software you want with conda install <software>. If you want this environment to be activated by default, you need to add the module load conda and conda activate <environment> commands to your .bashrc or your .profile.
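
A sketch of the whole workflow (the environment name myenv and the package numpy are placeholders):

```bash
module load conda

# create and activate a new environment
conda create --name myenv
conda activate myenv

# install software into the active environment
conda install numpy
```
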
To use a Docker image, first run module load singularity to get access to Singularity. Then, you convert the Docker image to a Singularity image with singularity build SINGULARITY_CONTAINER_NAME docker://DOCKER_URL_TAG_ETC where SINGULARITY_CONTAINER_NAME is the file name you want for the converted image, which traditionally uses the extension .sif. If you need to overwrite an existing image, add the -F option. Then, to run the Singularity image, do singularity run SINGULARITY_CONTAINER_NAME followed by any command line arguments you want to pass.
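
For example, to convert and run an image from Docker Hub (the image python:3.12 and the file name python.sif are placeholders):

```bash
module load singularity

# convert the Docker image to a Singularity image; -F overwrites an existing file
singularity build -F python.sif docker://python:3.12

# run the image; everything after the image name is passed as command line arguments
singularity run python.sif --version
```
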
With the spack-user module, you can install software that is available for the currently installed Spack version. You need to first load the module with module load spack-user, then source the setup environment with source $SPACK_USER_ROOT/share/spack/setup-env.sh. You can then install your desired software with spack install <software> and use it with spack load <software>. If you want this module to be loaded by default, you need to add the module load spack-user, source $SPACK_USER_ROOT/share/spack/setup-env.sh and spack load <software> commands to your .bashrc or .profile.
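
A sketch of the workflow (the package gromacs is just an example):

```bash
module load spack-user
source $SPACK_USER_ROOT/share/spack/setup-env.sh

# install a package into your user Spack installation and load it
spack install gromacs
spack load gromacs
```
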