Cyberinfrastructure
HPC Architecture
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and a total of 91 TB of NFS storage. There is an additional compute server (copper). Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are GPU Nodes, each with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on the Xeon CPUs alone. The remaining four nodes feature 2nd Generation Intel Xeon Scalable processors (48 CPU cores and 192 GB of RAM per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the storage system.
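The 704-thread figure follows from 22 nodes × 2 sockets × 8 cores per socket × 2 hardware threads per core. As a minimal sketch of spreading work across these nodes, the example below uses mpi4py; this is an illustration only (mpi4py, an MPI launcher, and the script name are assumptions, not documented cluster software):

 # hello_mpi.py -- minimal sketch; assumes mpi4py and an MPI launcher are available.
 # Example launch: mpirun -np 64 python hello_mpi.py
 from mpi4py import MPI
 import socket

 comm = MPI.COMM_WORLD        # communicator spanning every rank in the job
 rank = comm.Get_rank()       # this process's rank, 0..size-1
 size = comm.Get_size()       # total number of ranks launched

 # Each rank reports the node it landed on. With hyperthreading enabled,
 # up to 32 ranks can share one dual E5-2660 node (2 sockets x 8 cores x 2 threads).
 print(f"rank {rank} of {size} on {socket.gethostname()}")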
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double precision floating point operations. So the entire cluster has a theoretical peak performance of over 21 TFLOPS.
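These figures are consistent with NVIDIA's published double-precision peak of 1.17 TFLOPS per Tesla K20, of which the cluster has six (three in each of the two GPU nodes):

 2 \times 3 \times 1.17\,\text{TFLOPS} \approx 7.0\,\text{TFLOPS}, \qquad 14.2\,\text{TFLOPS} + 7.0\,\text{TFLOPS} \approx 21.2\,\text{TFLOPS}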
The operating system is CentOS 7.6, and Penguin's Scyld ClusterWare is used to maintain and provision the compute nodes.
Management node
 CPU  | Dual E5-2660 (2.2 GHz, 8 cores)
 RAM  | 64 GB
 Disk | 450 GB

copper
 CPU  | Dual E5-2643 v3 (3.4 GHz, 6 cores)
 RAM  | 128 GB
 Disk | 1 TB

NFS storage
 nfs0 | 25 TB
 nfs1 | 66 TB

Network
 Ethernet
 40 Gbps InfiniBand
Compute nodes
 CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
 RAM   | 64 GB
 Disk  | 450 GB
 Nodes | n0~n11, n13, n14

 CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
 RAM   | 128 GB
 Disk  | 450 GB
 Nodes | n12, n15~n19

 CPU   | Dual E5-2660 (2.2 GHz, 8 cores)
 RAM   | 128 GB
 Disk  | 450 GB
 GPU   | Three NVIDIA Tesla K20
 Nodes | n20, n21

 CPU   | Dual Xeon Platinum 8260 (2.40 GHz, 24 cores)
 RAM   | 192 GB
 Disk  | 256 GB SSD
 Nodes | TBD
3D Visualization System
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (i.e., virtual reality) through 3D stereoscopic imagery and user tracking. These systems allow scientists to interact directly with their data and help them gain a better understanding of data generated by modeling on the HPC cluster or collected in the field. Remote data visualization is possible by running VisIt from the cluster's login node, as sketched below.
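As an illustration of that remote workflow, the sketch below uses VisIt's Python command-line interface (started with visit -cli). The host name, dataset path, and variable name are hypothetical placeholders, not actual cluster paths:

 # Run inside VisIt's Python CLI (visit -cli) on a local workstation.
 # "oredigger.mtech.edu", the file path, and "temperature" are placeholders.
 OpenDatabase("oredigger.mtech.edu:/home/user/run/output.silo")  # remote host:path form
 AddPlot("Pseudocolor", "temperature")  # color plot of an example scalar field
 DrawPlots()                            # render the plot in the local viewer
 SaveWindow()                           # save the rendered image locally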
 CPU      | Dual E5-2643 v4 (3.4 GHz, 6 cores)
 RAM      | 64 GB
 Disk     | 512 GB SSD + 1 TB HD
 GPU      | Dual NVIDIA Quadro K5000
 OS       | Windows 7
 Display  | 108" 3D projector screen
 Tracking | ART SMARTTRACK