Hardware Information

The following hardware summaries may be useful for grant proposal writing. If any information that would be helpful to you is missing, please contact us or create an issue on our tracker.

Tip

The tables in this section are wide and can be scrolled horizontally to display more information.

Cheaha HPC Cluster

The HPC cluster comprises 8192 compute cores connected by low-latency Fourteen Data Rate (FDR) and Enhanced Data Rate (EDR) InfiniBand networks. In addition to the compute cores, 72 NVIDIA Tesla P100 GPUs are available. There is a total of just under 70 TB of memory across the cluster. The available hardware generations are summarized in the following table.

| Fabric | Generation | Compute Type | Partition | Total Cores | Total Memory (GB) | Total GPUs | Cores Per Node | Memory Per Node (GB) | Nodes | CPU Info | GPU Info |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| hpc | 7 | gpu | pascalnodes | 504 | 4608 | 72 | 28 | 256 | 18 | Intel Xeon E5-2680 v4 2.40 GHz | NVIDIA Tesla P100 16 GB |
| hpc | 8 | cpu | cpu | 504 | 4032 | | 24 | 192 | 21 | Intel Xeon E5-2680 v4 2.50 GHz | |
| hpc | 8 | high memory | largemem | 240 | 7680 | | 24 | 768 | 10 | Intel Xeon E5-2680 v4 2.50 GHz | |
| hpc | 8 | high memory | largemem | 96 | 6144 | | 24 | 1536 | 4 | Intel Xeon E5-2680 v4 2.50 GHz | |
| hpc | 9 | cpu | cpu | 2496 | 30056 | | 48 | 578 | 52 | Intel Xeon Gold 6248R 3.00 GHz | |
| hpc | 10 | cpu | cpu | 4352 | 17408 | | 128 | 512 | 34 | AMD Epyc 7713 Milan 2.00 GHz | |
| **Total** | | | | 8192 | 69928 | 72 | | | 139 | | |

The full table can be downloaded here.

The table below is a theoretical analysis of FLOPS (floating point operations per second) based on processor instructions and core counts, and is not a reflection of efficiency in practice.

| Generation | CPU TFLOPS Per Node | GPU TFLOPS Per Node | TFLOPS Per Node | Nodes | TFLOPS |
| --- | --- | --- | --- | --- | --- |
| 7 | 1.08 | 17.06 | 18.14 | 18 | 326.43 |
| 8 | 0.96 | | 0.96 | 21 | 20.16 |
| 8 | 0.96 | | 0.96 | 10 | 9.60 |
| 8 | 0.96 | | 0.96 | 4 | 3.84 |
| 9 | 2.30 | | 2.30 | 52 | 119.81 |
| 10 | 4.10 | | 4.10 | 34 | 139.26 |
| **Total** | | | | | 619.10 |

The full table can be downloaded here.
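The per-node CPU figures above follow from cores per node, clock frequency, and floating point operations per cycle. The table's values are consistent with an assumed 16 double-precision FLOPs per core per cycle for every listed CPU; that factor is an inference from the table, not an official specification. A minimal sketch of the arithmetic:

```python
# Theoretical peak CPU TFLOPS per node = cores * GHz * FLOPs-per-cycle / 1000.
# The factor of 16 FLOPs per core per cycle is an assumption that reproduces
# the table's figures; real sustained performance will be lower.
FLOPS_PER_CYCLE = 16

def cpu_tflops_per_node(cores_per_node: int, ghz: float) -> float:
    """Theoretical peak double-precision TFLOPS for one node."""
    return cores_per_node * ghz * FLOPS_PER_CYCLE / 1000

# Generation 9: 48 cores per node at 3.00 GHz
print(round(cpu_tflops_per_node(48, 3.00), 2))   # 2.3, matching the table
# Generation 10: 128 cores per node at 2.00 GHz
print(round(cpu_tflops_per_node(128, 2.00), 2))  # 4.1, matching the table
```

Multiplying by the node count for a generation gives that generation's aggregate TFLOPS column.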

For information on using Cheaha, see our dedicated section.

Partitions

| Partition | Nodes | Nodes Per Researcher | Time Limit | Priority Tier |
| --- | --- | --- | --- | --- |
| interactive | 52 | 1 | 2 hours | 20 |
| express | 52 | UNLIMITED | 2 hours | 20 |
| short | 52 | 44 | 12 hours | 16 |
| pascalnodes | 18 | UNLIMITED | 12 hours | 16 |
| pascalnodes-medium | 7 | UNLIMITED | 2 days, 0 hours | 15 |
| medium | 52 | 44 | 2 days, 2 hours | 12 |
| long | 52 | 5 | 6 days, 6 hours | 8 |
| intel-dcb | 21 | 5 | 6 days, 6 hours | 8 |
| amd-hdr100 | 33 | 5 | 6 days, 6 hours | 8 |
| largemem | 14 | 10 | 2 days, 2 hours | 6 |
| largemem-long | 5 | 10 | 6 days, 6 hours | 6 |

The full table can be downloaded here.
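Jobs are submitted to these partitions through Slurm. A minimal batch script sketch follows; the job name, resource amounts, and workload line are placeholders, and the requested time must stay within the chosen partition's time limit from the table above:

```shell
#!/bin/bash
#SBATCH --job-name=example      # placeholder job name
#SBATCH --partition=short       # any partition from the table above
#SBATCH --time=12:00:00         # must not exceed the partition's time limit
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# Replace with your actual workload.
echo "Running on $(hostname)"
```

Submitting with `sbatch` queues the job on the named partition at that partition's priority tier.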

Quality of Service (QoS) Limits

Quality of Service (QoS) limits allow us to balance usage across the cluster so that no single researcher can consume all of the resources. Each set of QoS limits is applied to one or more partitions according to the table below, and each limit applies to every researcher on Cheaha individually. Partitions within a group share the same limits: for example, with the 3072 GB memory quota below, a researcher can run jobs using 1.5 TB on express and 1.5 TB on short at the same time, but not 2 TB on each, because combined usage across the group may not exceed the quota.

| Partitions | Core Count Quota | Memory Quota (GB) | GPU Count Quota |
| --- | --- | --- | --- |
| express, short, medium, long, intel-dcb, amd-hdr100 | 264 | 3072 | |
| interactive | 48 | | |
| largemem, largemem-long | 290 | 7168 | |
| pascalnodes, pascalnodes-medium | 56 | 500 | 8 |

The full table can be downloaded here.
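The shared-group accounting can be illustrated with a toy sketch. This is not the actual scheduler logic, only an illustration of the rule that usage is summed across all partitions in a group per researcher, using the 3072 GB memory quota from the table above:

```python
# Toy illustration of shared QoS limits: partitions in a group draw from one
# per-researcher pool, so usage is summed across the group, not per partition.
GROUP_MEMORY_QUOTA_GB = 3072  # express/short/medium/long/intel-dcb/amd-hdr100

def fits(running_jobs_gb: dict) -> bool:
    """True if the combined memory of all running jobs stays within the quota."""
    return sum(running_jobs_gb.values()) <= GROUP_MEMORY_QUOTA_GB

print(fits({"express": 1536, "short": 1536}))  # True: 3 TB total is allowed
print(fits({"express": 2048, "short": 2048}))  # False: 4 TB exceeds the quota
```

Jobs that would push the group total past a quota are held in the queue until earlier jobs release resources.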

Cloud Service at cloud.rc

The Cloud service hardware consists of 5 Intel nodes and 4 DGX-A100 nodes. The available hardware is summarized in the following table.

| Fabric | Generation | Compute Type | Partition | Total Cores | Total Memory (GB) | Total GPUs | Cores Per Node | Memory Per Node (GB) | Nodes | CPU Info | GPU Info |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cloud | 1 | cpu | | 240 | 960 | | 48 | 192 | 5 | Intel Xeon Gold 6248R 3.00 GHz | |
| cloud | 1 | gpu | | 512 | 4096 | 32 | 128 | 1024 | 4 | AMD Epyc 7742 Rome 2.25 GHz | NVIDIA A100 40 GB |
| **Total** | | | | 752 | 5056 | 32 | | | 9 | | |

The full table can be downloaded here.

The table below is a theoretical analysis of FLOPS (floating point operations per second) based on processor instructions and core counts, and is not a reflection of efficiency in practice.

| Generation | CPU TFLOPS Per Node | GPU TFLOPS Per Node | TFLOPS Per Node | Nodes | TFLOPS |
| --- | --- | --- | --- | --- | --- |
| 1 | 2.30 | | 2.30 | 5 | 11.52 |
| 1 | 4.61 | 77.97 | 82.58 | 4 | 330.30 |
| **Total** | | | | | 341.82 |

The full table can be downloaded here.

For information on using our Cloud service at cloud.rc, see our dedicated section.

Kubernetes Container Service

Important

The Kubernetes fabric is still in deployment and not ready for researcher use. We will be sure to inform you when the service is ready. The following information is planned hardware.

The Kubernetes container service hardware consists of 3 Intel nodes and 4 DGX-A100 nodes. The available hardware is summarized in the following table.

| Fabric | Generation | Compute Type | Partition | Total Cores | Total Memory (GB) | Total GPUs | Cores Per Node | Memory Per Node (GB) | Nodes | CPU Info | GPU Info |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| container | 1 | cpu | | 144 | 576 | | 48 | 192 | 3 | Intel Xeon Gold 6248R 3.00 GHz | |
| container | 1 | gpu | | 512 | 4096 | 32 | 128 | 1024 | 4 | AMD Epyc 7742 Rome 2.25 GHz | NVIDIA A100 40 GB |
| **Total** | | | | 656 | 4672 | 32 | | | 7 | | |

The full table can be downloaded here.

The table below is a theoretical analysis of FLOPS (floating point operations per second) based on processor instructions and core counts, and is not a reflection of efficiency in practice.

| Generation | CPU TFLOPS Per Node | GPU TFLOPS Per Node | TFLOPS Per Node | Nodes | TFLOPS |
| --- | --- | --- | --- | --- | --- |
| 1 | 2.30 | | 2.30 | 3 | 6.91 |
| 1 | 4.61 | 77.97 | 82.58 | 4 | 330.30 |
| **Total** | | | | | 337.21 |

The full table can be downloaded here.

Full Hardware Details

Detailed hardware information, including processor and GPU makes and models, core clock frequencies, and other details for current hardware, is given in the table below.

| Generation | Compute Type | Partition | Total Cores | Total Memory (GB) | Total GPUs | Cores Per Node | Cores Per Die | Dies Per Node | Die Brand | Die Name | Die Frequency (GHz) | Memory Per Node (GB) | GPUs Per Node | GPU Brand | GPU Name | GPU Memory (GB) | Nodes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | cpu | | 128 | 1024 | | 2 | 1 | 2 | AMD | Opteron 242 | 1.6 | 16 | | | | | 64 |
| 2 | cpu | | 192 | 1152 | | 8 | 4 | 2 | Intel | Xeon E5450 | 3 | 48 | | | | | 24 |
| 3 | cpu | | 384 | 1536 | | 12 | 6 | 2 | Intel | Xeon X5650 | 2.66 | 48 | | | | | 32 |
| 3 | cpu | | 192 | 1536 | | 12 | 6 | 2 | Intel | Xeon X5650 | 2.66 | 96 | | | | | 16 |
| 4 | cpu | | 48 | 1152 | | 16 | 8 | 2 | Intel | Xeon X5650 | 2.7 | 384 | | | | | 3 |
| 5 | cpu | | 192 | 1152 | | 16 | 8 | 2 | Intel | Xeon E2650 | 2 | 96 | | | | | 12 |
| 6 | cpu | cpu | 336 | 5376 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v3 | 2.5 | 384 | | | | | 14 |
| 6 | cpu | cpu | 912 | 9728 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v3 | 2.5 | 256 | | | | | 38 |
| 6 | cpu | cpu | 1056 | 5632 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v3 | 2.5 | 128 | | | | | 44 |
| 7 | gpu | pascalnodes | 504 | 4608 | 72 | 28 | 14 | 2 | Intel | Xeon E5-2680 v4 | 2.4 | 256 | 4 | NVIDIA | Tesla P100 | 16 | 18 |
| 8 | cpu | cpu | 504 | 4032 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v4 | 2.5 | 192 | | | | | 21 |
| 8 | high memory | largemem | 240 | 7680 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v4 | 2.5 | 768 | | | | | 10 |
| 8 | high memory | largemem | 96 | 6144 | | 24 | 12 | 2 | Intel | Xeon E5-2680 v4 | 2.5 | 1536 | | | | | 4 |
| 9 | cpu | cpu | 2496 | 30056 | | 48 | 24 | 2 | Intel | Xeon Gold 6248R | 3 | 578 | | | | | 52 |
| 10 | cpu | cpu | 4352 | 17408 | | 128 | 64 | 2 | AMD | Epyc 7713 Milan | 2 | 512 | | | | | 34 |
| 1 | cpu | | 240 | 960 | | 48 | 12 | 4 | Intel | Xeon Gold 6248R | 3 | 192 | | | | | 5 |
| 1 | gpu | | 512 | 4096 | 32 | 128 | 64 | 2 | AMD | Epyc 7742 Rome | 2.25 | 1024 | 8 | NVIDIA | A100 | 40 | 4 |
| 1 | cpu | | 144 | 576 | | 48 | 12 | 4 | Intel | Xeon Gold 6248R | 3 | 192 | | | | | 3 |
| 1 | gpu | | 512 | 4096 | 32 | 128 | 64 | 2 | AMD | Epyc 7742 Rome | 2.25 | 1024 | 8 | NVIDIA | A100 | 40 | 4 |

The full table can be downloaded here.


Last update: March 29, 2022