Cyberinfrastructure
HPC Architecture
The Montana Tech HPC cluster contains 1 management node, 22 compute nodes, and two NFS storage systems: 25 TB (nfs0) and 66 TB (nfs1). There is an additional compute server (copper). Each compute node contains two 8-core Intel Xeon 2.2 GHz processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are GPU Nodes that each contain three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on the Xeon CPUs alone. A 40 Gbps InfiniBand (IB) network interconnects the nodes and the storage system.
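As a cross-check on that figure, the 704 threads follow directly from the node counts: 22 nodes × 2 sockets × 8 cores × 2 hyperthreads per core. A minimal Python sketch of the arithmetic:

 # Hardware threads available on the Xeon CPUs of the compute nodes,
 # using the counts given above.
 nodes = 22             # compute nodes
 sockets_per_node = 2   # two E5-2660 processors per node
 cores_per_socket = 8
 threads_per_core = 2   # hyperthreading enabled
 print(nodes * sockets_per_node * cores_per_socket * threads_per_core)  # 704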
The system has a theoretical peak performance of 6.2 TFLOPS without the GPUs, and the GPUs alone have a theoretical peak of 7.0 TFLOPS for double-precision floating-point operations, so the entire cluster has a theoretical peak of over 13 TFLOPS. Without the two GPU nodes, the cluster has been benchmarked at 4.6 TFLOPS using the LINPACK benchmark.
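Those peak figures can be reproduced from the hardware counts. The sketch below assumes 8 double-precision FLOPs per cycle per core for the Sandy Bridge E5-2660 (AVX) and roughly 1.17 double-precision TFLOPS per Tesla K20; neither number is stated on this page, so treat both as assumptions.

 # Theoretical double-precision peak, reconstructed from the node counts.
 # Assumed (not from this page): 8 DP FLOPs/cycle/core on the E5-2660,
 # ~1.17 DP TFLOPS per Tesla K20.
 cores = 22 * 2 * 8                        # 352 Xeon cores
 cpu_tflops = cores * 2.2e9 * 8 / 1e12     # ~6.2 TFLOPS
 gpu_tflops = 2 * 3 * 1.17                 # six K20s, ~7.0 TFLOPS
 print(round(cpu_tflops, 1), round(gpu_tflops, 1),
       round(cpu_tflops + gpu_tflops, 1))  # 6.2 7.0 13.2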
The operating system is CentOS 7.3, and Penguin Computing's Scyld ClusterWare is used to maintain and provision the compute nodes.
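For reference, the installed release can be confirmed from any node; a minimal Python sketch, assuming the standard /etc/centos-release file is present:

 # Print the OS release string of the node this runs on; on this cluster
 # it should report CentOS Linux release 7.3.
 with open("/etc/centos-release") as f:
     print(f.read().strip())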
Management node:
CPU  | Dual E5-2660 (2.2 GHz, 8 cores)
RAM  | 64 GB
Disk | 500 GB

Compute server (copper):
CPU  | Dual E5-2643 v3 (3.4 GHz, 6 cores)
RAM  | 128 GB
Disk | 1 TB

Storage and network:
NFS storage | nfs0: 25 TB
            | nfs1: 66 TB
Network     | Ethernet
            | 40 Gbps InfiniBand
Compute nodes (64 GB):
CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
RAM   | 64 GB
Disk  | 500 GB
Nodes | n0~n11, n13, n14

Compute nodes (128 GB):
CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
RAM   | 128 GB
Disk  | 500 GB
Nodes | n12, n15~n19

GPU nodes:
CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
RAM   | 128 GB
Disk  | 500 GB
GPU   | Three NVIDIA Tesla K20
Nodes | n20, n21
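The node groups above can be cross-checked against the overview at the top of the page (22 compute nodes, 704 hardware threads). A small Python sketch that encodes the groups; the dictionary keys are illustrative labels, not cluster hostnames:

 # Compute-node groups from the tables above; totals should match the
 # overview: 22 nodes, each with 2 sockets x 8 cores x 2 threads = 32 threads.
 groups = {
     "64 GB":  ["n%d" % i for i in range(0, 12)] + ["n13", "n14"],
     "128 GB": ["n12"] + ["n%d" % i for i in range(15, 20)],
     "GPU":    ["n20", "n21"],
 }
 total_nodes = sum(len(members) for members in groups.values())
 print(total_nodes, total_nodes * 32)  # 22 704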