The operation of local HPC resources, and especially of the Scientific Compute Cluster (SCC) at the GWDG, is achieved by the transparent integration of different systems into a joint operating concept for the basic supply of the Max Planck Institutes and the university. This includes uniform software management, a shared batch management environment, cross-system monitoring and accounting, and cross-system file systems. Synergies are thus achieved through the integration of different system generations and special-purpose systems (e.g. GPU clusters). Users find a uniform environment on all HPC systems, while individual application environments are supported at the same time. Nonetheless, the result is a highly heterogeneous cluster, which requires good knowledge of the architectural differences and carefully tuned run scripts.
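For orientation, the following is a minimal sketch of such a run script, assuming the shared batch environment is Slurm and targeting the `medium` partition from the node table below; the job name, module name, and resource values are illustrative placeholders, not verified settings.

```bash
#!/bin/bash
#SBATCH --job-name=example-job      # placeholder name
#SBATCH --partition=medium          # CPU partition from the node table below
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24        # adjust to the core count of the targeted node type
#SBATCH --time=01:00:00
#SBATCH --output=job-%j.out

# Load software from the uniform module environment (module name is an assumption)
module load openmpi

# Launch the (hypothetical) application through the scheduler
srun ./my_application
```

Because node types differ considerably (see the hardware overview and node table below), such scripts are typically adapted per partition, e.g. to the core count and memory of the target nodes.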
The extensive documentation, the FAQ, and a first-steps guide can be found online. If you are using our systems for your research, please also refer to the acknowledgement guidelines.
7 Racks
Four racks at the Faßberg are cold-water cooled, the two GPU racks at the MDC are air cooled, and one CPU rack at the MDC is warm-water cooled.
410 Compute Nodes
The SCC cluster contains a mixture of Xeon Platinum 9242, Broadwell Xeon E5-2650 v4, Haswell Xeon E5-4620 v3, and Xeon Gold 6252 CPUs.
18,376 CPU Cores
Distributed over all compute and GPU nodes.
100 Gbit/s & 56 Gbit/s Interconnect
The system at the Faßberg is interconnected with 56 Gbit/s FDR InfiniBand, while the MDC system runs 100 Gbit/s Omni-Path.
1.4 TiB GPU RAM
Across all GPU nodes, 1.4 TiB of GPU memory is available; see the GPU job script sketch after this overview.
99 TB RAM
Across all 410 nodes, 99 TB of memory is available.
5.2 PiB Storage
The BeeGFS storage consists of 2 PiB HDD and 100 TiB SSD in the MDC system and 130 TiB HDD in the Faßberg system. The StorNext home file system is around 3 PiB in size.
22+ PiB Tape Storage
Backup storage is provided by Quantum Scalar Tape Libraries. To ensure reliable backups, these are stored at two different locations.
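As referenced in the GPU RAM note above, a GPU job can be sketched as follows, again assuming Slurm with generic GRES scheduling on the `gpu` partition; the GPU count, CPU count, and module name are assumptions for illustration.

```bash
#!/bin/bash
#SBATCH --partition=gpu             # GPU partition from the node table below
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4           # illustrative value
#SBATCH --gres=gpu:1                # request one GPU (generic GRES syntax; exact names may differ)
#SBATCH --time=02:00:00

# Module name is an assumption; check `module avail` for what is actually installed
module load cuda

srun ./my_gpu_application
```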
Name | Number of nodes | CPU & GPU | Cores per CPU | Memory per node [GB] | Partition |
---|---|---|---|---|---|
amp | 95 | 2 x Xeon Platinum 9242 | 48 | 384 | medium |
amp | 1 | 2 x Xeon Platinum 9242 | 48 | 384 | gailing |
dmp | 68 | 2 x Xeon E5-2650 v4 | 12 | 128 | medium |
dmp | 4 | 2 x Xeon E5-2650 v4 | 12 | 128 | int |
dmp | 10 | 2 x Xeon E5-2650 v4 | 12 | 128 | medium-upsw |
dfa | 15 | 2 x Xeon E5-2650 v4 | 12 | 512 | fat |
dsu | 5 | 4 x Xeon E5-4620 v3 | 10 | 1536 | fat, fat+ |
gwde | 1 | 4 x Xeon E7-4809 v3 | 8 | 2048 | fat, fat+ |
dge | 7 | 2 x Xeon E5-2650 v4 | 12 | 128 | gpu |
dge | 8 | 2 x Xeon E5-2650 v4 | 12 | 128 | gpu |
dge | 30 | 2 x Xeon E5-2650 v4 | 10 | 64 | gpu-hub |
gwdo | 20 | 1 x Xeon E3-1270 v2 | 4 | 32 | gpu-hub |
dte | 10 | 2 x Xeon E5-2650 v4 | 12 | 128 | gpu |
agt | 2 | 2 x Xeon Gold 6252 | 24 | 384 | gpu |
agq | 14 | 2 x Xeon Gold 6242 | 16 | 192 | gpu |
em | 32 | 2 x Xeon E5-2640 v3 | 8 | 128 | em |
sa | 32 | 2 x Xeon E5-2680 v3 | 12 | 256 | sa |
hh | 7 | 2 x Epyc 7742 | 64 | 1024 | hh |
sgiz | 13 | 2 x Xeon Gold 6130 | 16 | 96 | sgiz |
gwdd | 8 | 2 x Xeon E5-2650 v3 | 10 | 64 | |
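Resource requests should match the node types listed above. As a final sketch, again assuming Slurm, a large-memory job might target the `fat` partition; the memory and core values are illustrative and chosen to fit within a 512 GB dfa node.

```bash
#!/bin/bash
#SBATCH --partition=fat             # large-memory nodes (dfa/dsu, see the table above)
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24          # two full E5-2650 v4 sockets on a dfa node
#SBATCH --mem=400G                  # stays within the 512 GB of a dfa node
#SBATCH --time=04:00:00

srun ./my_memory_intensive_application
```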