Welcome to Montana Tech's High Performance Computing Cluster

From Montana Tech High Performance Computing
<div class="row">
 
<br>

  <div class="large-12 column">

==<span style="color:#925223">Supporting the Computational Science and Research Needs of Montana </span>==

<p align="justify">Montana Tech's High Performance Computing (HPC) architecture debuted as the first HPC system in the Montana University System (MUS) and is designed to support collaborative research and instruction across the MUS. Funded by the Montana Department of Commerce as a MUS-wide initiative, this computing cluster is available to faculty, students, researchers, and public/private industry collaborators.</p>

  </div>

  <div class="large-4 column">

<br>

[[File:Logo-web.png|600px|link=http://www.mtech.edu]]

  </div>

</div>
  
<div class="row">

  <div class="large-6 columns">

<h3 class="subheader"><span class="fa fa-th fa-lg" style="display:inline;"></span> HPC Cluster</h3>

<p align="justify">Montana Tech's HPC is a small cluster consisting of 26 nodes with 544 cores. Two of the nodes are GPU nodes, adding 7488 CUDA cores. The nodes are connected with 40Gbps InfiniBand and have access to 91 TB of storage. The theoretical peak performance of the entire cluster is about 21 TFLOPS.</p>

  </div>

  <div class="large-6 columns">

<h3 class="subheader"> Data Visualization</h3>

<p align="justify">Associated with the HPC are two 3D data visualization systems with a variety of visualization software packages. Both 3D visualization systems are equipped with either a 108" stereo projection wall or a 70" 3D TV, shutter glasses, and a tracking system that enables researchers to interact directly with the 3D imagery.</p>

  </div>

</div>
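As a sanity check on the "about 21 TFLOPS" figure, peak throughput is cores × clock rate × FLOPs per cycle. The 2.4 GHz clock and 16 double-precision FLOPs/cycle (AVX2 with two FMA units) used below are illustrative assumptions, not published specifications of this cluster:

```python
# Back-of-the-envelope peak FLOPS: cores * clock * FLOPs-per-cycle.
# Clock rate and FLOPs/cycle are assumed values for illustration only.
cores = 544              # total CPU cores in the cluster
clock_hz = 2.4e9         # assumed 2.4 GHz clock
flops_per_cycle = 16     # assumed: AVX2, two FMA units, double precision

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")  # ~20.9, consistent with "about 21 TFLOPS"
```

With these assumptions the estimate lands at roughly 20.9 TFLOPS, matching the quoted figure; different clock or vector-width assumptions would shift it.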
  
<div class="row">

  <div class="large-8 columns">

<h3 class="subheader"><span class="fa fa-handshake-o fa-lg"></span> Collaborations </h3>

<p align="justify">Montana Tech currently pays for the system support. We hope current and future researchers will [[Grant_information|incorporate]] the facilities into their grant proposals to fund system expansion and future support. Researchers can also propose infrastructure expansions, if funding is available.</p>

  </div>

</div>
  
<div class="row">

  <div class="large-6 columns">

<h3 class="subheader"><span class="fa fa-bolt fa-lg"></span> Current Uses </h3>

* Multiphysics Simulations
* Molecular Dynamics
* Statistical Simulations
* Gene Analysis
* Teaching

  </div>

  <div class="large-6 columns">

<h3 class="subheader"><span class="fa fa-newspaper-o fa-lg"></span> What's New </h3>

* [https://hpc.mtech.edu/ganglia/?c=Oredigger&m=load_one&r=hour&s=by%20name&hc=4&mc=2 Current Cluster Status (Ganglia)]
* 04/01/2023 Software will be reinstalled. User accounts will be recovered upon request.
* 03/31/2023 HPC system upgraded to Rocky Linux 8 + Warewulf (stateless) from CentOS 7 + xCAT (stateful).
* 03/05/2020 We have migrated to SLURM from Torque/Moab. Documentation will be updated.
* 08/20/2019 Four new compute nodes featuring the latest Xeon Platinum processors have arrived.

  </div>

</div>
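Since the cluster migrated from Torque/Moab to SLURM (see the 03/05/2020 note), a minimal batch job looks like the sketch below. The job name, resource counts, and the `srun hostname` payload are placeholders, not this cluster's actual configuration; consult the cluster's SLURM documentation for the correct partition and module setup.

```shell
#!/bin/bash
#SBATCH --job-name=example       # job name shown in squeue
#SBATCH --nodes=1                # number of nodes to allocate
#SBATCH --ntasks=4               # total number of tasks
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out       # stdout file: jobname-jobid.out

# Placeholder payload: prints the node the job ran on.
# Replace with your own program, e.g. "srun ./my_simulation".
srun hostname
```

Save as `job.sh`, submit with `sbatch job.sh`, and monitor with `squeue -u $USER` (the Torque equivalents were `qsub` and `qstat`).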
  
  

Latest revision as of 15:19, 1 April 2023

