Welcome to Montana Tech's High Performance Computing Cluster
 
<div class="row">
<br>

  <div class="large-12 column">
==<span style="color:#925223">Supporting the Computational Science and Research Needs of Montana </span>==
<p align="justify">Montana Tech's High Performance Computing (HPC) cluster debuted as the first HPC system in the Montana University System (MUS) and is designed to support collaborative research and instruction across the MUS. Funded by the Montana Department of Commerce as a MUS-wide initiative, the cluster is available to faculty, students, researchers, and public/private industry collaborators.</p>
 
 
  </div>
  
  <div class="large-4 column">
 
[[File:Logo-web.png|600px|link=http://www.mtech.edu]]
 
  </div>
 
 
</div>
  
<div class="row">

  <div class="large-6 columns">
 
<h3 class="subheader"><span class="fa fa-th fa-lg" style="display:inline;"></span> HPC Cluster</h3>
<p align="justify">Montana Tech's HPC is a small cluster consisting of 26 nodes with 544 cores. Two of the nodes are GPU nodes, adding 7,488 CUDA cores. The nodes are connected by 40 Gbps InfiniBand and have access to 91 TB of storage. The theoretical peak performance of the entire cluster is about 21 TFLOPS.</p>
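The quoted peak figure follows from the standard formula: peak FLOPS = cores × clock rate × FLOPs per cycle. A minimal sketch of that arithmetic is below; the 2.4 GHz clock and 16 double-precision FLOPs/cycle (AVX2 with FMA) are illustrative assumptions, not the cluster's published per-node specifications.

```shell
# Theoretical peak = cores x clock (GHz) x FLOPs/cycle, reported in TFLOPS.
peak_tflops() {  # args: cores ghz flops_per_cycle
  awk -v c="$1" -v g="$2" -v f="$3" 'BEGIN { printf "%.1f", c * g * f / 1000 }'
}

# 544 cores from the cluster description; clock and FLOPs/cycle are assumed.
peak_tflops 544 2.4 16   # prints 20.9, close to the ~21 TFLOPS quoted above
```

The GPU nodes would add their own peak on top of this CPU estimate, which is why vendor-quoted totals can differ from a cores-only calculation.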
 
  </div>
 
  <div class="large-6 columns">
 
<h3 class="subheader"><span class="fa fa-bar-chart fa-lg" style="display:inline;"></span> Data Visualization</h3>
<p align="justify">Associated with the HPC are two 3D data visualization systems with a variety of visualization software packages. Each system is equipped with either a 108" stereo projection wall or a 70" 3D TV, shutter glasses, and a tracking system that enables researchers to interact directly with the 3D imagery.</p>
 
  </div>
 
</div>
  
<div class="row">
  <div class="large-8 columns">
<h3 class="subheader"><span class="fa fa-handshake-o fa-lg"></span> Collaborations </h3>
<p align="justify">Montana Tech currently pays for system support. We hope current and future researchers will [[Grant_information|incorporate]] the facilities into their grant proposals to fund system expansion and future support. Researchers can also propose infrastructure expansions if funding is available.</p>
</div>
</div>
 
  
 
<div class="row">

  <div class="large-6 columns">
<h3 class="subheader"><span class="fa fa-bolt fa-lg"></span> Current Uses </h3>
* Multiphysics Simulations
* Molecular Dynamics
* Statistical Simulations
* Gene Analysis
* Teaching
 
</div>
  
 
  <div class="large-6 columns">
<h3 class="subheader"><span class="fa fa-newspaper-o fa-lg"></span> What's New </h3>
* [https://hpc.mtech.edu/ganglia/?c=Oredigger&m=load_one&r=hour&s=by%20name&hc=4&mc=2 Current Cluster Status (Ganglia)]
* 04/01/2023 Software will be reinstalled. User accounts will be recovered upon request.
* 03/31/2023 HPC system upgraded to Rocky Linux 8 + Warewulf (stateless) from CentOS 7 + xCAT (stateful).
* 03/05/2020 We have migrated to SLURM from Torque/Moab. Documentation will be updated.
* 08/20/2019 Four new compute nodes featuring the latest Xeon Platinum processors have arrived.
 
</div>
 
</div>
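Since the migration to SLURM noted above, jobs are submitted with <code>sbatch</code> rather than <code>qsub</code>. A minimal batch-script sketch follows; the partition name <code>compute</code> and the job parameters are illustrative assumptions, so check <code>sinfo</code> on the cluster for the real partition names.

```shell
#!/bin/bash
#SBATCH --job-name=demo          # name shown in the queue
#SBATCH --nodes=1                # run on a single node
#SBATCH --ntasks=8               # 8 MPI tasks / cores
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --partition=compute      # ASSUMED partition name; see `sinfo`
#SBATCH --output=demo-%j.out     # stdout/stderr file, %j = job ID

# Each task prints the node it landed on; replace with your application.
srun hostname
```

Submit with <code>sbatch demo.sh</code> and monitor with <code>squeue -u $USER</code>; rough Torque equivalents are <code>qsub</code> and <code>qstat -u $USER</code>.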
 
__NOTOC__
 
__NOEDITSECTION__

Latest revision as of 15:19, 1 April 2023

