Comparable to the HLRN system, CARO serves an already existing user community. Embedding the system therefore required additional infrastructural measures, such as connecting it to the DLR (intra-)network and to the DLR user administration.
As CARO is exclusive to DLR employees, its documentation is only accessible via the DLR intranet. If you have any problems or questions concerning the system, please do not hesitate to contact the User-Helpdesk with the subject “HPC-CARO: <your subject>”. Your request will then be handled by GWDG colleagues.
18 Racks
16 warm-water-cooled racks accommodate most of CARO's compute nodes, while 2 cold-water-cooled racks house the login, administration, big-memory, and GPU nodes.
1,364 Compute Nodes
The compute nodes are each equipped with 2 AMD EPYC 7702 “Rome” 64-core processors and, in general, 256 GB of memory. 20 of these nodes have 1 TB of memory.
175,744 CPU Cores
These cores are distributed across all compute and GPU nodes.
6 GPU Nodes
The dedicated GPU nodes each feature 1 TB of memory and 4 Nvidia Quadro RTX5000 GPUs.
100 Gbit/s Interconnect
A 100 Gbit/s InfiniBand network with Mellanox switches provides low latency and high bandwidth in a fat-tree topology.
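
A common way to exercise this interconnect from user code is MPI. The following minimal C sketch is only illustrative: it assumes an MPI implementation such as Open MPI or MPICH is installed, with the usual `mpicc` compiler wrapper and a launcher like `mpirun` or `srun`; the actual module names and launcher on CARO may differ. Every rank reports the node it runs on:

```c
/* Minimal MPI example: each rank prints its ID and the node it runs on.
 * Hypothetical usage sketch -- site-specific modules and launchers may differ.
 * Compile:  mpicc -o mpi_hello mpi_hello.c
 * Run:      mpirun -n 256 ./mpi_hello   (or the site's equivalent launcher)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);    /* node this rank runs on */

    printf("Rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```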
3.46 PetaFlop/s
This result was achieved with the LINPACK benchmark (run on 1,300 nodes), placing CARO 135th on the November 2021 TOP500 list.
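
As a rough plausibility check (assuming the 2.0 GHz base clock of the EPYC 7702 and 16 double-precision FLOP per core per cycle on the Zen 2 microarchitecture), the theoretical peak of the 1,300 benchmark nodes is approximately

$$R_{\text{peak}} \approx 1300 \times 128 \times 2.0\,\text{GHz} \times 16\,\tfrac{\text{FLOP}}{\text{cycle}} \approx 5.3\ \text{PFlop/s},$$

so the measured 3.46 PFlop/s would correspond to roughly 65 % LINPACK efficiency.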
364 TB RAM
Across all 1,364 compute nodes, 364 TB of memory are available.
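
This figure follows directly from the node configuration listed above:

$$1344 \times 256\,\text{GB} + 20 \times 1024\,\text{GB} = 364{,}544\,\text{GB} \approx 364\,\text{TB}.$$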
8.4 PiB Storage
A total of 8.4 PiB of storage capacity is available on the global parallel file system (DDN Lustre).
| Name | Number of nodes | CPU & GPU | CPU cores per node | Memory [GB] | Partition |
|---|---|---|---|---|---|
| n[0001-1344] | 1344 | 2 x EPYC 7702 | 128 | 256 | medium |
| bigmem[01-20] | 20 | 2 x EPYC 7702 | 128 | 1024 | bigmem |
| vis[01-06] | 6 | 1 x EPYC 7702 + 4 x Quadro RTX5000 | 64 | 1024 | vis |