http://copper.mtech.edu/api.php?action=feedcontributions&user=Bdeng&feedformat=atomMontana Tech High Performance Computing - User contributions [en]2024-03-29T12:18:35ZUser contributionsMediaWiki 1.29.1http://copper.mtech.edu/index.php?title=Python&diff=731Python2023-07-12T00:55:22Z<p>Bdeng: </p>
<hr />
<div>The default Python versions installed are 2.7.18 and 3.6.8, compiled with GCC 8.5.0.<br />
<br />
We also have Anaconda3 and Miniconda3 installed so that you can create your own Python development environments. The difference between Miniconda and Anaconda is that Anaconda has many more numeric and scientific libraries installed by default.<br />
==Loading the anaconda module==<br />
Once logged into HPC, you can use the module command to load the anaconda module.<br />
: <code>module load anaconda</code><br />
For miniconda<br />
: <code>module load miniconda</code><br />
Please note that you can only use one of the "conda" modules at a time. If you have anaconda loaded but need to switch to miniconda, you will first need to unload the anaconda module:<br />
: <code style=display:block>module unload anaconda<br>module load miniconda</code><br />
==Creating Anaconda or Miniconda Python environment==<br />
You can use the <code>conda create</code> command to create a new Python environment. <br />
For example, to create a Python environment named mypy38 with Python 3.8:<br />
: <code>conda create --name mypy38 python=3.8</code><br />
You'll then be shown the location of the new environment and the list of packages to be installed. Type ''y'' to confirm and start the installation. <br />
By default, this will create the environment in your home directory under <code>~/.conda/envs/</code>. If you would like to save it to another location, you can use the <code>-p</code> option:<br />
: <code>conda create -p /PATH/mypy38 python=3.8</code><br />
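If you created the environment at a custom path with <code>-p</code>, you can generally activate it by path instead of by name (an assumption based on standard conda behavior, matching the activation commands shown below):<br />
: <code>source activate /PATH/mypy38</code><br />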
<br />
Please note that after the environment is created, you may be told to use "conda activate mypy38" to activate it. If you run that command, you'll then be told to run "conda init". '''You probably don't want to do that''', as it will alter your .bashrc file and load one of the conda versions by default.<br />
<br />
==Using a Python Environment in miniconda or anaconda==<br />
To get a list of the available conda environments, use the command <code>conda env list</code>, which produces output such as:<br />
: <code style=display:block># conda environments:<br>#<br>mypy38 /home/mtech/bdeng/.conda/envs/mypy38<br>base * /opt/ohpc/pub/apps/anaconda3</code><br />
===Activating a Python environment===<br />
To use the mypy38 environment, use the command:<br />
: <code>source activate mypy38</code><br />
You'll then notice the environment name prepended to the command prompt, for example:<br />
: <code>(mypy38) [username@oredigger ~]$</code><br />
Now you can check your python version:<br />
: <code style=display:block>(mypy38) [username@oredigger ~]$ python -V<br>Python 3.8.5</code><br />
===Installing new Python packages===<br />
Once your Python environment is activated, you can install additional packages for your project.<br />
For example, to install the <code>scipy</code> package, use the command:<br />
<code>(mypy38) [username@oredigger ~]$ conda install scipy</code><br />
<br />
For some packages, you may need to specify the channel using the <code>-c</code> option. Refer to the documentation of the package you need.<br />
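For example, a package published on the ''conda-forge'' channel could be installed with (the package name here is illustrative; substitute the one your project needs):<br />
: <code>(mypy38) [username@oredigger ~]$ conda install -c conda-forge numpy</code><br />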
To get a list of conda packages installed, use:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda list</code><br />
===Deactivating a Python environment===<br />
When you are finished with your environment, or you need to switch to a different one, you can deactivate the current environment with:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda deactivate</code><br />
<br />
You'll see the environment name removed from the command prompt:<br />
<br />
<code>[username@oredigger ~]$ </code><br />
==Additional conda commands==<br />
To remove a conda environment:<br />
<code>conda env remove --name environment_name</code><br />
<br />
(If the environment is not in the default path, use <code>conda env remove -p /PATH/environment_name</code>.)<br />
<br />
To export a conda environment, first activate the environment, then use: <code>conda env export > environment.yml </code><br />
<br />
This will export a list of your environment's packages to the file ''environment.yml''<br />
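If you later need to rebuild the environment from that file, for example on another system or after a cleanup, conda can recreate it with:<br />
: <code>conda env create -f environment.yml</code><br />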
<br />
For more conda commands: <br />
'''[https://docs.conda.io/projects/conda/en/latest/_downloads/843d9e0198f2a193a3484886fa28163c/conda-cheatsheet.pdf Conda Cheat Sheet]'''<br />
and<br />
'''[https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html Conda User Guide]'''<br />
<br />
==Use Python environment in a job submission script==<br />
Below is a sample Slurm job submission script that uses one core with the mypy38 Python environment:<br />
<code style=display:block>#!/bin/sh<br>#SBATCH -J pythontest #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 1 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load anaconda<br>source activate mypy38<br>python mypython.py</code></div>Bdenghttp://copper.mtech.edu/index.php?title=Contacts&diff=730Contacts2023-06-14T22:07:03Z<p>Bdeng: /* The HPC Team */</p>
<hr />
<div>=== The HPC Team ===<br />
'''[https://sites.google.com/view/bdeng/home Bowen Deng]''', HPC Application Scientist, bdeng at mtech dot edu<br />
<br />
=== New Account Request ===<br />
Please fill this [https://docs.google.com/forms/d/e/1FAIpQLSdgFqxKFekaSGj7yUVNKABxt8z-vmxP1oNYcB7eHQBCnzE9Zw/viewform?usp=sf_link questionnaire] if you need a new account.</div>Bdenghttp://copper.mtech.edu/index.php?title=Cyberinfrastructure&diff=729Cyberinfrastructure2023-04-25T19:48:23Z<p>Bdeng: </p>
<hr />
<div>== HPC Architecture ==<br />
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and a total of 91 TB NFS storage systems. There is an additional computing server (copper).<br />
Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz Processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are [[GPU Nodes]], with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on just the XEON CPUs. The remaining four nodes feature the Intel 2nd Generation Xeon Scalable Processors (48 CPU Cores and 192 GB Ram per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the [[storage]] system. <br />
<br />
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double precision floating point operations. So the entire cluster has a theoretical peak performance of over 21 TFLOPS.<br />
<br />
The operating system is Rocky Linux 8.6 and Warewulf is used to maintain and provision the compute nodes (stateless).<br />
<br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Head Node </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 1 TB SSD<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Copper Server </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v3 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 1 TB<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Other Specs </span><br />
|-<br />
| '''NFS storage'''|| nfs0 - 25 TB<br />
|-<br />
| || nfs1 - 66 TB<br />
|-<br />
| '''Network''' || Ethernet<br />
|-<br />
| || 40 Gbps InfiniBand<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 4 Compute Nodes (NEW) </span><br />
|-<br />
| '''CPU'''|| Dual Xeon Platinum 8260 (2.40 GHz, 2x 24-cores)<br />
|-<br />
| '''RAM'''|| 192 GB<br />
|-<br />
| '''Disk''' || 256 GB SSD<br />
|-<br />
| '''Nodes''' || cn31~cn34<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 14 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn0~cn11, cn13, cn14<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 6 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn12, cn15~cn19<br />
|}<br />
</div><br />
<br />
<div class="large-3 columns"><br />
[[File:Cluster2.jpg|280px|"maintain"]]<br />
</div><br />
<br />
<br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 2 GPU Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''GPU''' || Three nVidia Tesla K20<br />
|-<br />
| '''Nodes''' || cn20, cn21<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
== 3D Visualization System ==<br />
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (aka virtual reality) through 3D stereoscopic imagery and user tracking systems. These systems allow scientists to directly interact with their data and help them gain a better understanding of data generated by modeling on the HPC Cluster or collected in the field. Remote data visualization is possible by running [[VisIt]] from the cluster's login node.<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Windows Immersive 3D Visualization System </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v4 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 512 GB SSD + 1TB HD<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro K5000<br />
|-<br />
| '''OS''' || Windows 7<br />
|-<br />
| '''Display''' || 108" 3D projector screen<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz.PNG|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Linux IQ Station </span><br />
|-<br />
| '''CPU'''|| Dual E5-2670 (2.60 GHz, 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 4 TB<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro 5000<br />
|-<br />
| '''OS''' || CentOS 6.4<br />
|-<br />
| '''Display''' || 70" 3D TV<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz1.jpg|400px]]<br />
</div><br />
</div></div>Bdenghttp://copper.mtech.edu/index.php?title=Available_Software&diff=728Available Software2023-04-18T18:06:17Z<p>Bdeng: </p>
<hr />
<div>Below is some of the software installed in the system-wide location. You can install software in your own home directory, and you are also welcome to [[Contacts|contact]] us to request the installation of other software or packages.<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''System'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[compilers|Compilers]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Modules]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[CUDA]]'''<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Science & Engineering'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[ANSYS]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[COMSOL]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[GATK]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[LAMMPS]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[MATLAB]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Mothur]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[R]]'''<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Computer/Data Science'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[Julia]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Python]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Tensorflow]]'''<br />
<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Visualizations'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[LidarViewer]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Visit]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Vrui]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
|}</div>Bdenghttp://copper.mtech.edu/index.php?title=Modules&diff=726Modules2023-04-18T17:57:35Z<p>Bdeng: </p>
<hr />
<div>Our HPC system also offers a variety of development tools and libraries to facilitate efficient and effective programming. These resources are designed to work seamlessly with the available compilers.<br />
<br />
To take full advantage of these tools and libraries, it is essential to familiarize yourself with the module command. This command allows you to manage the development environment effectively by loading, unloading, and swapping modules as needed.<br />
<br />
===Example Commands===<br />
<br />
Some common module commands include:<br />
<br />
: <code>module list</code> - display currently loaded modules<br />
: <code>module avail</code> - show a list of available modules<br />
: <code>module load <module_name></code> - load a specific module<br />
: <code>module unload <module_name></code> - unload a specific module<br />
: <code>module help <module_name></code> - access detailed information on a specific module<br />
<br />
===Module Types===<br />
Two types of modules are installed: modules with compiler and/or MPI runtime dependencies, and modules without such dependencies. When you log on to the HPC, issue the <code>module avail</code> command, and you'll see a list of available modules grouped into three sections:<br />
: <code style=display:block><br />
---------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12-openmpi4 ----------------------------------------------<br />
boost/1.80.0 extrae/3.8.3 omb/6.1 scorep/7.1 tau/2.31.1<br />
dimemas/5.4.2 imb/2021.3 scalasca/2.5 sionlib/1.7.7<br />
-------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12 ---------------------------------------------------<br />
impi/2021.9.0 mpich/3.4.3-ofi mvapich2/2.3.7 pdtoolkit/3.25.1<br />
likwid/5.2.2 mpich/3.4.3-ucx (D) openmpi4/4.1.4 (L)<br />
----------------------------------------------------- /opt/ohpc/pub/modulefiles -----------------------------------------------------<br />
EasyBuild/4.6.2 cmake/3.24.2 hwloc/2.7.0 (L) ohpc (L) prun/2.2 (L) valgrind/3.19.0<br />
MATLAB/R2022b gnu12/12.2.0 (L) intel/2023.1.0 os singularity/3.7.1<br />
autotools (L) gnu9/9.4.0 libfabric/1.13.0 (L) papi/6.0.0 ucx/1.11.2 (L)</code><br />
<br />
In this example, the modules in the first section depend on the GNU12 + OPENMPI4 toolchain. The second section's modules are dependent on GNU12 only. The modules listed in the third section have no compiler or MPI dependencies. <br />
When selecting modules, ensure that you choose the appropriate ones based on your compiler and MPI runtime dependencies. Keep in mind that some libraries and tools may have unique dependencies or configurations that need to be considered when setting up your environment.<br />
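For example, with the default GNU12 + OPENMPI4 toolchain loaded, one of the MPI-dependent libraries from the first section can be brought into your environment and then verified (a minimal sketch using a module from the listing above):<br />
: <code style=display:block>module load boost/1.80.0<br>module list</code><br />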
<br />
===Swapping Compilers / MPI===<br />
To change to a different compiler / MPI toolchain, use the <code>module swap</code> command, for example:<br />
* <code>module swap openmpi4 mpich</code> - switch from OPENMPI to the default mpich build (compiler won't change)<br />
If you issue the above command after logging on, you'll see the output of the <code>module avail</code> command change to:<br />
: <code style=display:block><br />
----------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12-mpich ------------------------------------------------<br />
boost/1.80.0 extrae/3.8.3 omb/6.1 scorep/7.1 tau/2.31.1<br />
dimemas/5.4.2 imb/2021.3 scalasca/2.5 sionlib/1.7.7<br />
-------------------------------------------------- /opt/ohpc/pub/moduledeps/gnu12 ---------------------------------------------------<br />
impi/2021.9.0 mpich/3.4.3-ofi mvapich2/2.3.7 pdtoolkit/3.25.1<br />
likwid/5.2.2 mpich/3.4.3-ucx (L,D) openmpi4/4.1.4<br />
----------------------------------------------------- /opt/ohpc/pub/modulefiles -----------------------------------------------------<br />
EasyBuild/4.6.2 cmake/3.24.2 hwloc/2.7.0 ohpc (L) prun/2.2 (L) valgrind/3.19.0<br />
MATLAB/R2022b gnu12/12.2.0 (L) intel/2023.1.0 os singularity/3.7.1<br />
autotools (L) gnu9/9.4.0 libfabric/1.13.0 papi/6.0.0 ucx/1.11.2 (L)</code><br />
<br />
* <code>module swap gnu12 intel</code> - switch from GNU12 to Intel compilers/MPI<br />
Similarly, after issuing the above command, you'll see the output of the <code>module avail</code> command change to:<br />
:<code style=display:block><br />
-------------------------------------------------- /opt/ohpc/pub/moduledeps/oneapi --------------------------------------------------<br />
compiler-rt/2023.1.0 (L) compiler32/2023.1.0 icc/2023.1.0 mkl/2023.1.0 (L) oclfpga/2023.1.0 (L)<br />
compiler-rt32/2023.1.0 debugger/2023.1.0 icc32/2023.1.0 mkl32/2023.1.0 tbb/2021.9.0 (L)<br />
compiler/2023.1.0 (L) dev-utilities/2021.9.0 init_opencl/2023.1.0 mpi/2021.9.0<br />
<br />
-------------------------------------------------- /opt/ohpc/pub/moduledeps/intel ---------------------------------------------------<br />
impi/2021.9.0<br />
<br />
----------------------------------------------------- /opt/ohpc/pub/modulefiles -----------------------------------------------------<br />
EasyBuild/4.6.2 cmake/3.24.2 hwloc/2.7.0 ohpc (L) prun/2.2 (L) valgrind/3.19.0<br />
MATLAB/R2022b gnu12/12.2.0 intel/2023.1.0 (L) os singularity/3.7.1<br />
autotools (L) gnu9/9.4.0 libfabric/1.13.0 papi/6.0.0 ucx/1.11.2 (L)</code></div>Bdenghttp://copper.mtech.edu/index.php?title=Compilers&diff=725Compilers2023-04-18T17:51:25Z<p>Bdeng: </p>
<hr />
<div>[[Category:System]]<br />
Our HPC system features pre-installed, widely-used compilers, development tools, and libraries. Presently, there are three primary compilers available:<br />
* GNU9<br />
* GNU12<br />
* Intel<br />
For each compiler, multiple builds of tools and libraries are provided. Upon logging in to the HPC system, the default development environment is set to GNU12 + OPENMPI4.<br />
To change to a different compiler, use the <code>module swap</code> command, for example:<br />
: <code>module swap gnu12 gnu9</code> - switch from GNU12 to GNU9<br />
: <code>module swap gnu12 intel</code> - switch from GNU12 to Intel<br />
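To confirm which compiler is active after a swap, you can query the compiler's version, for example (shown for the GNU toolchain; an analogous check applies after swapping to Intel):<br />
: <code style=display:block>module swap gnu12 gnu9<br>gcc --version</code><br />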
<br />
In addition to the compilers, our HPC system also offers a variety of development tools and libraries to facilitate efficient and effective programming. These resources are designed to work seamlessly with the available compilers. You can conveniently use the [[modules]] command to manage your development environment.</div>Bdenghttp://copper.mtech.edu/index.php?title=MATLAB&diff=716MATLAB2023-04-18T17:03:22Z<p>Bdeng: </p>
<hr />
<div>MATLAB (R2022b) and the Parallel Computing Toolbox are installed. The Distributed Computing Server is not supported, so calculations are limited to a single compute node. <br />
<br />
==Submitting MATLAB jobs==<br />
<br />
MATLAB jobs that do and do not use the Parallel Computing Toolbox can be submitted to [[Slurm]] via a script containing:<br />
: <code style=display:block>#!/bin/sh<br>#SBATCH -J MatlabJob #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load MATLAB<br>matlab -nodesktop -nosplash -r "your_matlab_program(input_parameters);quit;"</code><br />
<br />
Since MATLAB is multithreaded, you can request multiple tasks per node (for example, 12) even if you are not using the Parallel Computing Toolbox. For parallel MATLAB jobs, the pool of workers is limited to the physical cores only, that is, 16 workers per compute node. <br />
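For example, a parallel job that uses all 16 physical cores of one of the older compute nodes might look like the following sketch (the program name and pool size are placeholders to adapt to your own code; <code>parpool</code> is the Parallel Computing Toolbox's pool interface in recent MATLAB releases):<br />
: <code style=display:block>#!/bin/sh<br>#SBATCH -J MatlabParallel #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 16 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br><br>module load MATLAB<br>matlab -nodesktop -nosplash -r "parpool(16);your_parallel_program;quit;"</code><br />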
<br />
Use <code>sbatch</code> to submit your job script to Slurm:<br />
: <code>sbatch matlabjob.sh</code><br />
where matlabjob.sh contains the above script updated with your program and username info.<br />
<br />
==Running MATLAB interactively in command line==<br />
If you wish to run MATLAB interactively without the Desktop GUI, start an interactive job on a compute node with:<br />
:<code>srun -N 1 -n 12 --pty /bin/bash</code><br />
<br />
This will return with a command prompt on a compute node, for example:<br />
<br />
:<code style=display:block>[USER@oredigger ~]$ srun -N 1 -n 12 --pty /bin/bash<br>[USER@cn0 ~]$</code><br />
<br />
Then you can start MATLAB in command-line mode with:<br />
:<code style=display:block>module load MATLAB<br>matlab</code><br />
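When you are done, quit MATLAB and then exit the compute-node shell to release the interactive allocation, for example:<br />
:<code style=display:block>>> quit<br>[USER@cn0 ~]$ exit</code><br />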
<br />
==MATLAB desktop on headnode==<br />
The MATLAB Desktop GUI is currently limited to the management node. Please respect other users and avoid long computational runs on the management node if other users are on the system.</div>Bdenghttp://copper.mtech.edu/index.php?title=Welcome_to_Montana_Tech%27s_High_Performance_Computing_Cluster&diff=715Welcome to Montana Tech's High Performance Computing Cluster2023-04-01T22:19:27Z<p>Bdeng: </p>
<hr />
<div><div class="row"><br />
<br><br />
<div class="large-12 column"><br />
==<span style="color:#925223">Supporting the Computational Science and Research Needs of Montana </span>==<br />
<p align="justify">Montana Tech's High Performance Computing (HPC) architecture debuted as the first HPC in Montana University System (MUS) and it has been designed to support collaborative research and instruction within Montana MUS. Funded by the Montana Department of commerce as a MUS-wide initiative, this computing cluster is available to faculty, students, researchers, and public/private industry collaborators.</p><br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-th fa-lg" style="display:inline;"></span> HPC Cluster</h3><br />
<p align="justify">Montana Tech's HPC is a small cluster consists of 26 nodes with 544 cores. Two of the nodes are GPU nodes adding 7488 CUDA cores. The nodes are connected with 40Gbps InfiniBand and have access to 91 TB storage sytems. The theoretical peak performance of the entire cluster is about 21 TFLOPS. </p><br />
</div><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bar-chart fa-lg" style="display:inline;"></span> Data Visualization</h3><br />
<p align="justify">Associated with the HPC are two 3D data visualization systems with a variety of visualization software packages. Both 3D visualization systems are equipped with either 108" stereo projection wall or 70" 3D TV, shutter glasses, and a tracking system to enable researcher to directly interact with the 3D imagery.</p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-8 columns"><br />
<h3 class="subheader"><span class="fa fa-handshake-o fa-lg"></span> Collaborations </h3><br />
<p align="justify">Montana Tech currently pays for the system support. We hope current and future researchers will [[Grant_information|incorporate]] the facilities into their grant proposals to fund system expansion and future support. Researchers can also propose infrastructure expansions, if funding is available. </p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bolt fa-lg"></span> Current Uses </h3><br />
* Multiphysics Simulations<br />
* Molecular Dynamics<br />
* Statistical Simulations<br />
* Gene Analysis<br />
* Teaching<br />
</div><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-newspaper-o fa-lg"></span> What's New </h3><br />
* [https://hpc.mtech.edu/ganglia/?c=Oredigger&m=load_one&r=hour&s=by%20name&hc=4&mc=2 Current Cluster Status (Ganglia)]<br />
* 04/01/2023 Software will be reinstalled. User accounts will be recovered upon request.<br />
* 03/31/2023 HPC system upgraded to Rocky Linux 8 + Warewulf (stateless) from CentOS7 + xCAT (stateful).<br />
* 03/05/2020 We have migrated to SLURM from Torque/Moab. Documentation will be updated.<br />
* 08/20/2019 Four new compute nodes featuring the latest Xeon Platinum processors have arrived<br />
</div><br />
</div><br />
<br />
<br />
<br />
<br />
__NOTOC__<br />
__NOEDITSECTION__</div>Bdenghttp://copper.mtech.edu/index.php?title=Welcome_to_Montana_Tech%27s_High_Performance_Computing_Cluster&diff=714Welcome to Montana Tech's High Performance Computing Cluster2023-04-01T22:17:10Z<p>Bdeng: </p>
<hr />
<div><div class="row"><br />
<br><br />
<div class="large-12 column"><br />
==<span style="color:#925223">Supporting the Computational Science and Research Needs of Montana </span>==<br />
<p align="justify">Montana Tech's High Performance Computing (HPC) architecture debuted as the first HPC in Montana University System (MUS) and it has been designed to support collaborative research and instruction within Montana MUS. Funded by the Montana Department of commerce as a MUS-wide initiative, this computing cluster is available to faculty, students, researchers, and public/private industry collaborators.</p><br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-th fa-lg" style="display:inline;"></span> HPC Cluster</h3><br />
<p align="justify">Montana Tech's HPC is a small cluster consists of 26 nodes with 544 cores. Two of the nodes are GPU nodes adding 7488 CUDA cores. The nodes are connected with 40Gbps InfiniBand and have access to 91 TB storage sytems. The theoretical peak performance of the entire cluster is about 21 TFLOPS. </p><br />
</div><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bar-chart fa-lg" style="display:inline;"></span> Data Visualization</h3><br />
<p align="justify">Associated with the HPC are two 3D data visualization systems with a variety of visualization software packages. Both 3D visualization systems are equipped with either 108" stereo projection wall or 70" 3D TV, shutter glasses, and a tracking system to enable researcher to directly interact with the 3D imagery.</p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-8 columns"><br />
<h3 class="subheader"><span class="fa fa-handshake-o fa-lg"></span> Collaborations </h3><br />
<p align="justify">Montana Tech currently pays for the system support. We hope current and future researchers will [[Grant_information|incorporate]] the facilities into their grant proposals to fund system expansion and future support. Researchers can also propose infrastructure expansions, if funding is available. </p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bolt fa-lg"></span> Current Uses </h3><br />
* Multiphysics Simulations<br />
* Molecular Dynamics<br />
* Statistical Simulations<br />
* Gene Analysis<br />
* Teaching<br />
</div><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-newspaper-o fa-lg"></span> What's New </h3><br />
* [https://hpc.mtech.edu/ganglia/?c=Oredigger&m=load_one&r=hour&s=by%20name&hc=4&mc=2 Current Cluster Status (Ganglia)]<br />
* 03/31/2023 HPC system upgraded to Rocky Linux 8 + Warewulf (stateless) from CentOS7 + xCAT (stateful).<br />
* 03/05/2020 We have migrated to SLURM from Torque/Moab. Documentation will be updated.<br />
* 08/20/2019 Four new compute nodes featuring the latest Xeon Platinum processors has arrived<br />
</div><br />
</div><br />
<br />
<br />
<br />
<br />
__NOTOC__<br />
__NOEDITSECTION__</div>Bdenghttp://copper.mtech.edu/index.php?title=Cyberinfrastructure&diff=713Cyberinfrastructure2023-04-01T22:13:35Z<p>Bdeng: /* HPC Architecture */</p>
<hr />
<div>== HPC Architecture ==<br />
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and a total of 91 TB NFS storage systems. There is an additional computing server (copper).<br />
Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz Processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are [[GPU Nodes]], with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on just the XEON CPUs. The remaining four nodes feature the Intel 2nd Generation Xeon Scalable Processors (48 CPU Cores and 192 GB Ram per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the [[storage]] system. <br />
<br />
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double precision floating point operations. So the entire cluster has a theoretical peak performance of over 21 TFLOPS.<br />
<br />
The operating system is Rocky Linux 8.6 and Warewulf is used to maintain and provision the compute nodes (stateless).<br />
<br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Head Node </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 1 TB SSD<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Copper Server </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v3 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 1 TB<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Other Specs </span><br />
|-<br />
| '''NFS storage'''|| nfs0 - 25 TB<br />
|-<br />
| || nfs1 - 66 TB<br />
|-<br />
| '''Network''' || Ethernet<br />
|-<br />
| || 40 Gbps InfiniBand<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 14 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn0~cn11, cn13, cn14<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 6 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn12, cn15~cn19<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 2 GPU Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''GPU''' || Three NVIDIA Tesla K20<br />
|-<br />
| '''Nodes''' || cn20, cn21<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
[[File:Cluster2.jpg|280px|"maintain"]]<br />
</div><br />
<br />
<br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 4 Compute Nodes (NEW) </span><br />
|-<br />
| '''CPU'''|| Dual Xeon Platinum 8260 (2.40 GHz, 2x 24-cores)<br />
|-<br />
| '''RAM'''|| 192 GB<br />
|-<br />
| '''Disk''' || 256 GB SSD<br />
|-<br />
| '''Nodes''' || cn31~cn34<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
== 3D Visualization System ==<br />
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (aka virtual reality) through 3D stereoscopic imagery and user tracking systems. These systems allow scientists to directly interact with their data and help them gain a better understanding of data generated by modeling on the HPC Cluster or collected in the field. Remote data visualization is possible by running [[VisIt]] from the cluster's login node.<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Windows Immersive 3D Visualization System </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v4 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 512 GB SSD + 1TB HD<br />
|-<br />
| '''GPU''' || Dual NVIDIA Quadro K5000<br />
|-<br />
| '''OS''' || Windows 7<br />
|-<br />
| '''Display''' || 108" 3D projector screen<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz.PNG|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Linux IQ Station </span><br />
|-<br />
| '''CPU'''|| Dual E5-2670 (2.60 GHz, 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 4 TB<br />
|-<br />
| '''GPU''' || Dual NVIDIA Quadro 5000<br />
|-<br />
| '''OS''' || CentOS 6.4<br />
|-<br />
| '''Display''' || 70" 3D TV<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz1.jpg|400px]]<br />
</div><br />
</div></div>Bdenghttp://copper.mtech.edu/index.php?title=Welcome_to_Montana_Tech%27s_High_Performance_Computing_Cluster&diff=712Welcome to Montana Tech's High Performance Computing Cluster2022-12-27T22:41:13Z<p>Bdeng: </p>
<hr />
<div><div class="row"><br />
<br><br />
<div class="large-12 column"><br />
==<span style="color:#925223">Supporting the Computational Science and Research Needs of Montana </span>==<br />
<p align="justify">Montana Tech's High Performance Computing (HPC) architecture debuted as the first HPC in Montana University System (MUS) and it has been designed to support collaborative research and instruction within Montana MUS. Funded by the Montana Department of commerce as a MUS-wide initiative, this computing cluster is available to faculty, students, researchers, and public/private industry collaborators.</p><br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-th fa-lg" style="display:inline;"></span> HPC Cluster</h3><br />
<p align="justify">Montana Tech's HPC is a small cluster consists of 26 nodes with 544 cores. Two of the nodes are GPU nodes adding 7488 CUDA cores. The nodes are connected with 40Gbps InfiniBand and have access to 91 TB storage sytems. The theoretical peak performance of the entire cluster is about 21 TFLOPS. </p><br />
</div><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bar-chart fa-lg" style="display:inline;"></span> Data Visualization</h3><br />
<p align="justify">Associated with the HPC are two 3D data visualization systems with a variety of visualization software packages. Both 3D visualization systems are equipped with either 108" stereo projection wall or 70" 3D TV, shutter glasses, and a tracking system to enable researcher to directly interact with the 3D imagery.</p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-8 columns"><br />
<h3 class="subheader"><span class="fa fa-handshake-o fa-lg"></span> Collaborations </h3><br />
<p align="justify">Montana Tech currently pays for the system support. We hope current and future researchers will [[Grant_information|incorporate]] the facilities into their grant proposals to fund system expansion and future support. Researchers can also propose infrastructure expansions, if funding is available. </p><br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-bolt fa-lg"></span> Current Uses </h3><br />
* Multiphysics Simulations<br />
* Molecular Dynamics<br />
* Statistical Simulations<br />
* Gene Analysis<br />
* Teaching<br />
</div><br />
<br />
<div class="large-6 columns"><br />
<h3 class="subheader"><span class="fa fa-newspaper-o fa-lg"></span> What's New </h3><br />
* [https://hpc.mtech.edu/ganglia/?c=Oredigger&m=load_one&r=hour&s=by%20name&hc=4&mc=2 Current Cluster Status (Ganglia)]<br />
* 12/27/2022 We will be migrating the system from CentOS 7 to Rocky 8 over the winter break.<br />
* 03/05/2020 We have migrated to SLURM from Torque/Moab. Documentation will be updated.<br />
* 08/20/2019 Four new compute nodes featuring the latest Xeon Platinum processors have arrived<br />
* 03/06/2018 New Visualization workstation online<br />
* 02/15/2018 Tensorflow with GPU support is now on HPC<br />
* 02/13/2018 New server ordered for upgrading the Visualization workstation!<br />
</div><br />
</div><br />
<br />
<br />
<br />
<br />
__NOTOC__<br />
__NOEDITSECTION__</div>Bdenghttp://copper.mtech.edu/index.php?title=Cyberinfrastructure&diff=711Cyberinfrastructure2022-12-15T05:42:37Z<p>Bdeng: </p>
<hr />
<div>== HPC Architecture ==<br />
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and NFS storage systems totaling 91 TB. There is an additional computing server (copper).<br />
Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are [[GPU Nodes]], with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on the Xeon CPUs alone. The remaining four nodes feature Intel 2nd Generation Xeon Scalable processors (48 CPU cores and 192 GB RAM per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the [[storage]] system. <br />
<br />
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double-precision floating-point operations. Combined, the entire cluster therefore has a theoretical peak performance of over 21 TFLOPS (14.2 + 7.0 = 21.2 TFLOPS).<br />
<br />
The operating system is CentOS 7.6, and OpenHPC is used to maintain and provision the compute nodes.<br />
<br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Head Node </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 1 TB SSD<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Copper Server </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v3 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 1 TB<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Other Specs </span><br />
|-<br />
| '''NFS storage'''|| nfs0 - 25 TB<br />
|-<br />
| || nfs1 - 66 TB<br />
|-<br />
| '''Network''' || Ethernet<br />
|-<br />
| || 40 Gbps InfiniBand<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 14 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn0~cn11, cn13, cn14<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 6 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn12, cn15~cn19<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 2 GPU Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''GPU''' || Three nVidia Tesla K20<br />
|-<br />
| '''Nodes''' || cn20, cn21<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
[[File:Cluster2.jpg|280px|"maintain"]]<br />
</div><br />
<br />
<br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 4 Compute Nodes (NEW) </span><br />
|-<br />
| '''CPU'''|| Dual Xeon Platinum 8260 (2.40 GHz, 2x 24-cores)<br />
|-<br />
| '''RAM'''|| 192 GB<br />
|-<br />
| '''Disk''' || 256 GB SSD<br />
|-<br />
| '''Nodes''' || cn31~cn34<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
== 3D Visualization System ==<br />
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (aka virtual reality) through 3D stereoscopic imagery and user tracking systems. These systems allow scientists to directly interact with their data and help them gain a better understanding of data generated by modeling on the HPC Cluster or collected in the field. Remote data visualization is possible by running [[VisIt]] from the cluster's login node.<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Windows Immersive 3D Visualization System </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v4 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 512 GB SSD + 1TB HD<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro K5000<br />
|-<br />
| '''OS''' || Windows 7<br />
|-<br />
| '''Display''' || 108" 3D projector screen<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz.PNG|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Linux IQ Station </span><br />
|-<br />
| '''CPU'''|| Dual E5-2670 (2.60 GHz, 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 4 TB<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro 5000<br />
|-<br />
| '''OS''' || CentOS 6.4<br />
|-<br />
| '''Display''' || 70" 3D TV<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz1.jpg|400px]]<br />
</div><br />
</div></div>Bdenghttp://copper.mtech.edu/index.php?title=Cyberinfrastructure&diff=710Cyberinfrastructure2022-12-14T20:46:16Z<p>Bdeng: /* HPC Architecture */</p>
<hr />
<div>== HPC Architecture ==<br />
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and NFS storage systems totaling 91 TB. There is an additional computing server (copper).<br />
Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are [[GPU Nodes]], with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on the Xeon CPUs alone. The remaining four nodes feature Intel 2nd Generation Xeon Scalable processors (48 CPU cores and 192 GB RAM per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the [[storage]] system. <br />
<br />
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double-precision floating-point operations. Combined, the entire cluster therefore has a theoretical peak performance of over 21 TFLOPS (14.2 + 7.0 = 21.2 TFLOPS).<br />
<br />
The operating system is CentOS 7.6, and OpenHPC is used to maintain and provision the compute nodes.<br />
<br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Head Node </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 1 TB SSD<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Copper Server </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v3 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 1 TB<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Other Specs </span><br />
|-<br />
| '''NFS storage'''|| nfs0 - 25 TB<br />
|-<br />
| || nfs1 - 66 TB<br />
|-<br />
| '''Network''' || Ethernet<br />
|-<br />
| || 40 Gbps InfiniBand<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 14 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn0~cn11, cn13, cn14<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 6 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn12, cn15~cn19<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 2 GPU Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''GPU''' || Three nVidia Tesla K20<br />
|-<br />
| '''Nodes''' || cn20, cn21<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
[[File:Cluster2.jpg|280px|"maintain"]]<br />
</div><br />
<br />
<br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 4 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual Xeon Platinum 8260 (2.40 GHz, 2x 24-cores)<br />
|-<br />
| '''RAM'''|| 192 GB<br />
|-<br />
| '''Disk''' || 256 GB SSD<br />
|-<br />
| '''Nodes''' || cn31~cn34<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
== 3D Visualization System ==<br />
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (aka virtual reality) through 3D stereoscopic imagery and user tracking systems. These systems allow scientists to directly interact with their data and help them gain a better understanding of data generated by modeling on the HPC Cluster or collected in the field. Remote data visualization is possible by running [[VisIt]] from the cluster's login node.<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Windows Immersive 3D Visualization System </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v4 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 512 GB SSD + 1TB HD<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro K5000<br />
|-<br />
| '''OS''' || Windows 7<br />
|-<br />
| '''Display''' || 108" 3D projector screen<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz.PNG|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Linux IQ Station </span><br />
|-<br />
| '''CPU'''|| Dual E5-2670 (2.60 GHz, 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 4 TB<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro 5000<br />
|-<br />
| '''OS''' || CentOS 6.4<br />
|-<br />
| '''Display''' || 70" 3D TV<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz1.jpg|400px]]<br />
</div><br />
</div></div>Bdenghttp://copper.mtech.edu/index.php?title=Contacts&diff=709Contacts2022-07-27T18:47:40Z<p>Bdeng: </p>
<hr />
<div>=== The HPC Team ===<br />
'''[https://sites.google.com/view/bdeng/home Bowen Deng]''', HPC Application Scientist, bdeng at mtech dot edu<br />
<br />
'''[https://cs.mtech.edu/main/index.php/component/content/article/97 Jeff Braun]''', Computer Science Professor, jbraun at mtech dot edu<br />
<br />
=== New Account Request ===<br />
Please fill out this [https://docs.google.com/forms/d/e/1FAIpQLSdgFqxKFekaSGj7yUVNKABxt8z-vmxP1oNYcB7eHQBCnzE9Zw/viewform?usp=sf_link questionnaire] if you need a new account.</div>Bdenghttp://copper.mtech.edu/index.php?title=Connecting_to_HPC&diff=708Connecting to HPC2022-01-05T20:16:39Z<p>Bdeng: </p>
<hr />
<div>You can use Secure Shell(SSH) to connect to HPC. Depending on the operating system of your computer, you have different options to get connected.<br />
<br />
==For Mac/Linux==<br />
You can directly use the Terminal application that comes with your system to connect.<br />
<br />
In your terminal, type the following command to connect via ssh:<br />
<br />
<code>ssh YourUserName@hpc.mtech.edu</code><br />
<br />
You will then receive a prompt to enter your password, similar to the following line.<br />
<br />
<code>YourUserName@hpc.mtech.edu's password:</code><br />
<br />
You can then enter your password. Note: when you enter your password, nothing will display on the screen.<br />
<br />
If you intend to use any applications with a GUI (e.g. MATLAB, COMSOL), you will need to add the '-X' option when connecting:<br />
<br />
<code>ssh -X YourUserName@hpc.mtech.edu</code><br />
<br />
<br />
== For Windows ==<br />
You will need to install a terminal emulator program to connect. There are many such programs[https://en.wikipedia.org/wiki/List_of_terminal_emulators]; below are examples using '''MobaXterm''' and '''Xshell'''.<br />
<br />
'''In Windows 10, with the addition of native OpenSSH support, you can now use the Command Prompt that comes with Windows to connect to HPC. (Use the same command as in the section above.)'''<br />
<br />
=== Using MobaXterm (Recommended) ===<br />
MobaXterm[http://mobaxterm.mobatek.net/] is a single application that integrates several tools, e.g. SSH, X11, and FTP. <br />
<br />
MobaXterm has a free Home Edition, and you can download it [http://mobaxterm.mobatek.net/download.html here]. Either the Portable version or the Installer version is fine.<br />
<br />
* To connect to the HPC, you can either start a local terminal, and use the command <code>ssh YourUserName@hpc.mtech.edu</code> as detailed in the previous Mac/Linux section.<br />
<br />
[[File:Moba_1.png|border|500px]]<br />
<br />
* Or you can click the '''Session''' button on the top left corner and choose '''SSH''' in the pop-up window. Then enter the HPC's address and your username as shown below:<br />
<br />
[[File:Moba_2.png|border|500px]]<br />
<br />
Then click '''Ok''' and you'll get a prompt to enter your password.<br />
<br />
<br />
==Graphical Display (optional)==<br />
Within the Montana University System, the graphical display from hpc.mtech.edu can be redirected to your local PC if you install an X server. The Cygwin X server (http://x.cygwin.com/) is free and works well. You will need to enable X11 forwarding before you log in to hpc.mtech.edu - in PuTTY, this is done by selecting Connection-SSH-Tunnels and checking the Enable X11 forwarding box. Start the X server before logging on and test by running "xeyes".<br />
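For example, once your local X server is running and X11 forwarding is enabled, a quick end-to-end test (substituting your own username) might look like:<br />
: <code style=display:block>ssh -X YourUserName@hpc.mtech.edu<br>xeyes</code><br />
If a pair of eyes that follows your mouse pointer appears on your local screen, the forwarding works.<br />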
<br />
<br />
==Transferring files between your computer and HPC==<br />
<br />
After you log in to HPC, the files you create or save are stored on HPC. You will need some tools to transfer files between your computer and HPC.<br />
<br />
===For Mac/Linux===<br />
You can directly use the <code>scp</code> command in the terminal to transfer files, or you can use any FTP application.<br />
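For example, a minimal <code>scp</code> transfer in each direction (substituting your own username and file names) might look like:<br />
: <code style=display:block>scp myfile.txt YourUserName@hpc.mtech.edu:~/   # upload a file to your HPC home directory<br>scp YourUserName@hpc.mtech.edu:~/results.txt .   # download a file to the current local directory</code><br />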
<br />
===For Windows===<br />
You can use any FTP program to transfer files. Just use hpc.mtech.edu as the host address, and provide your username and password. One FTP program is Xftp (free for home/school use). You can get it at https://www.netsarang.com/products/xfp_overview.html<br />
<br />
If you are logged in using Xshell, you can use the '''New File Transfer''' button in the toolbar (or the keyboard shortcut '''Ctrl'''+'''Alt'''+'''F''') to open the file transfer. <br />
By default, in the Xftp window that pops up, you'll see your Desktop directory on the left side and the directory you are in within Xshell on the right. You can then right-click any files or folders to transfer them.<br />
<br />
For MobaXterm, the FTP function is integrated. If you are logged in to HPC, you can choose the '''Sftp''' tab in the left sidebar to download/upload files. Or you can start an '''SFTP''' session similar to the MobaXterm tutorial above.<br />
<br />
<br />
==First Login==<br />
Once you log in, you'll see some text, including the logo and a notice.<br /><br />
At the bottom, you'll see your cursor after a Unix prompt (possibly a dollar sign):<br /><br />
<code>[YourUserName@oredigger ~]$ &#10074;</code><br />
The text before the dollar sign includes your username, the computer name, and your current directory name. For brevity, this text will be omitted in the following examples, like <code>$ &#10074;</code><br />
<br />
===Change your initial password===<br />
It's best to change your initial password the first time you log in. To do so, simply use the passwd command:<br />
<br />
<code>$ passwd</code><br />
<br />
You will then get the following texts:<br />
<br />
<code>Changing password for user YourUserName.<br />
<br />
Changing password for YourUserName.<br />
<br />
(current) UNIX password: &#10074;</code><br />
<br />
Enter your initial (current) password and press Enter. You'll then be prompted to enter and retype a new password:<br />
<br />
<code>New password:<br />
<br />
Retype new password:</code><br />
<br />
If the change is successful, you'll get:<br />
<br />
<code>passwd: all authentication tokens updated successfully.</code><br />
<br />
===Your Home Directory===<br />
When you log in to HPC, you are placed in your default home directory, which generally has the following format:<br />
<br />
<code>/data1/YourAffiliationInstitution/YourUserName</code></div>Bdenghttp://copper.mtech.edu/index.php?title=Cyberinfrastructure&diff=707Cyberinfrastructure2021-08-23T17:43:49Z<p>Bdeng: </p>
<hr />
<div>== HPC Architecture ==<br />
The Montana Tech HPC (oredigger cluster) contains 1 management node, 26 compute nodes, and NFS storage systems totaling 91 TB. There is an additional computing server (copper).<br />
Twenty-two compute nodes contain two 8-core Intel Xeon 2.2 GHz processors (E5-2660) and either 64 or 128 GB of memory. Two of these nodes are [[GPU Nodes]], with three NVIDIA Tesla K20 accelerators and 128 GB of memory. Hyperthreading is enabled, so 704 threads can run simultaneously on the Xeon CPUs alone. The remaining four nodes feature Intel 2nd Generation Xeon Scalable processors (48 CPU cores and 192 GB RAM per node). Internally, a 40 Gbps InfiniBand (IB) network interconnects the nodes and the [[storage]] system. <br />
<br />
The system has a theoretical peak performance of 14.2 TFLOPS without the GPUs. The GPUs alone have a theoretical peak performance of 7.0 TFLOPS for double-precision floating-point operations. Combined, the entire cluster therefore has a theoretical peak performance of over 21 TFLOPS (14.2 + 7.0 = 21.2 TFLOPS).<br />
<br />
The operating system is CentOS 7.6, and Penguin's Scyld ClusterWare is used to maintain and provision the compute nodes.<br />
<br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Head Node </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 1 TB SSD<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Copper Server </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v3 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 1 TB<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> Other Specs </span><br />
|-<br />
| '''NFS storage'''|| nfs0 - 25 TB<br />
|-<br />
| || nfs1 - 66 TB<br />
|-<br />
| '''Network''' || Ethernet<br />
|-<br />
| || 40 Gbps InfiniBand<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 14 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn0~cn11, cn13, cn14<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 6 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''Nodes''' || cn12, cn15~cn19<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 2 GPU Nodes </span><br />
|-<br />
| '''CPU'''|| Dual E5-2660 (2.2 GHz, 2x 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 450 GB<br />
|-<br />
| '''GPU''' || Three nVidia Tesla K20<br />
|-<br />
| '''Nodes''' || cn20, cn21<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
[[File:Cluster2.jpg|280px|"maintain"]]<br />
</div><br />
<br />
<br />
<br />
</div><br />
<br />
<div class="row"><br />
<div class="large-3 columns"><br />
{|<br />
|+<span style="color:#925223"> 4 Compute Nodes </span><br />
|-<br />
| '''CPU'''|| Dual Xeon Platinum 8260 (2.40 GHz, 2x 24-cores)<br />
|-<br />
| '''RAM'''|| 192 GB<br />
|-<br />
| '''Disk''' || 256 GB SSD<br />
|-<br />
| '''Nodes''' || cn31~cn34<br />
|}<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<div class="large-3 columns"><br />
<br />
</div><br />
<br />
</div><br />
<br />
== 3D Visualization System ==<br />
Montana Tech is developing two 3D data visualization systems. Both systems provide an immersive visualization experience (aka virtual reality) through 3D stereoscopic imagery and user tracking systems. These systems allow scientists to directly interact with their data and help them gain a better understanding of data generated by modeling on the HPC Cluster or collected in the field. Remote data visualization is possible by running [[VisIt]] from the cluster's login node.<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Windows Immersive 3D Visualization System </span><br />
|-<br />
| '''CPU'''|| Dual E5-2643 v4 (3.4 GHz, 2x 6-cores)<br />
|-<br />
| '''RAM'''|| 64 GB<br />
|-<br />
| '''Disk''' || 512 GB SSD + 1TB HD<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro K5000<br />
|-<br />
| '''OS''' || Windows 7<br />
|-<br />
| '''Display''' || 108" 3D projector screen<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz.PNG|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<div class="large-6 columns"><br />
{| style="width: 80%"<br />
|+<span style="color:#925223"> Linux IQ Station </span><br />
|-<br />
| '''CPU'''|| Dual E5-2670 (2.60 GHz, 8-cores)<br />
|-<br />
| '''RAM'''|| 128 GB<br />
|-<br />
| '''Disk''' || 4 TB<br />
|-<br />
| '''GPU''' || Dual nVidia Quadro 5000<br />
|-<br />
| '''OS''' || CentOS 6.4<br />
|-<br />
| '''Display''' || 70" 3D TV<br />
|-<br />
| '''Tracking''' || ART SMARTTRACK<br />
|}<br />
</div><br />
<div class="large-6 columns">[[File:Viz1.jpg|400px]]<br />
</div><br />
</div></div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=706ANSYS2021-07-21T19:25:57Z<p>Bdeng: </p>
<hr />
<div>To use ANSYS on HPC, you can either start a GUI session on a compute node or submit batch jobs to the compute nodes. Follow the instructions below or refer to [[Running_Jobs_on_HPC]].<br />
==Running ANSYS GUI on a compute node==<br />
You can start ANSYS GUI on a compute node by creating an interactive job with Slurm.<br />
#Start an interactive job on a compute node<br />
#: <code>srun -N 1 -n 4 -t 01:00:00 --x11 --pty /bin/bash</code><br />
#: The above command will start an interactive job with 1 node and 4 processors in the normal partition for 1 hour. (Refer to the following sample script for more details on the options.)<br />
#Load the ANSYS module<br />
#: <code>module load ANSYS</code><br />
#Start the ANSYS workbench or FLUENT<br />
#: <code>runwb2</code> or <code>fluent</code><br />
<br />
==Submitting batch jobs through Slurm==<br />
ANSYS Fluent can also be used in batch mode and jobs can be submitted to the compute nodes through Slurm.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J JOB_NAME # Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 4 # Total number of tasks requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br> <br>module load ANSYS<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Slurm<br />
#: <code>sbatch fluentjob.sh</code><br />
#Check status with the <code>squeue</code> command, as shown below<br />
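The <code>fluent</code> line in the sample script is still a placeholder. As a rough sketch only (the exact options depend on your case, and ''run.jou'' is a hypothetical journal file you would replace with your own), a typical non-interactive invocation looks like:<br />
: <code style=display:block>fluent 3d -g -t4 -i run.jou > fluent.log 2>&1   # 3D solver, no GUI, 4 processes, commands read from run.jou</code><br />
Here <code>-g</code> suppresses the GUI, <code>-t4</code> matches the 4 tasks requested from Slurm, and <code>-i</code> reads solver commands from the journal file. Once the job is submitted, <code>squeue -u YourUserName</code> lists only your own jobs.<br />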
<br />
<br />
==Running ANSYS Desktop GUI on copper==<br />
To start ANSYS workbench GUI on Copper Server<br />
: <code style=display:block>module load ANSYS<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above.</div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=705ANSYS2021-07-21T19:10:49Z<p>Bdeng: /* Running ANSYS Desktop GUI on copper */</p>
<hr />
<div>==Running ANSYS Desktop GUI==<br />
The ANSYS desktop GUI can be run on the management node, but long simulations should be executed on the compute nodes. <br />
To start ANSYS workbench GUI on HPC or Copper Server<br />
: <code style=display:block>module load ANSYS<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above<br />
<br />
==Submitting batch jobs through Slurm==<br />
To avoid overloading the management node, ANSYS Fluent should be used in batch mode, and jobs should be submitted to the compute nodes through Slurm.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J JOB_NAME #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br> <br>module load ANSYS<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Slurm<br />
#: <code>sbatch fluentjob.sh</code><br />
#Check status with <code>squeue</code> command</div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=704ANSYS2021-07-21T19:10:34Z<p>Bdeng: /* Running ANSYS Desktop GUI */</p>
<hr />
<div>==Running ANSYS Desktop GUI on copper==<br />
The ANSYS desktop gui can be run on the management node, but long simulations should be executed on the compute nodes. <br />
To start ANSYS workbench GUI on HPC or Copper Server<br />
: <code style=display:block>module load ANSYS<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above<br />
<br />
==Submitting batch jobs through Slurm==<br />
To avoid overloading the management node, ANSYS Fluent should be used in batch mode, and jobs should be submitted to the compute nodes through Slurm.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J JOB_NAME #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br> <br>module load ANSYS<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Slurm<br />
#: <code>sbatch fluentjob.sh</code><br />
#Check status with <code>squeue</code> command</div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=703ANSYS2021-07-21T19:08:42Z<p>Bdeng: /* Running ANSYS Desktop GUI */</p>
<hr />
<div>==Running ANSYS Desktop GUI==<br />
The ANSYS desktop GUI can be run on the management node, but long simulations should be executed on the compute nodes. <br />
To start ANSYS workbench GUI on HPC or Copper Server<br />
: <code style=display:block>module load ANSYS<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above<br />
<br />
==Submitting batch jobs through Slurm==<br />
To avoid overloading the management node, ANSYS Fluent should be used in batch mode, and jobs should be submitted to the compute nodes through Slurm.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J JOB_NAME #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br> <br>module load ANSYS<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Slurm<br />
#: <code>sbatch fluentjob.sh</code><br />
#Check status with <code>squeue</code> command</div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=702ANSYS2021-07-21T19:08:21Z<p>Bdeng: /* Sample Script (UNDER DEVELOPMENT) */</p>
<hr />
<div>==Running ANSYS Desktop GUI==<br />
The ANSYS desktop GUI can be run on the management node, but long simulations should be executed on the compute nodes. <br />
To start ANSYS workbench GUI on HPC or Copper Server<br />
: <code style=display:block>module load ansys<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above<br />
<br />
==Submitting batch jobs through Slurm==<br />
To avoid overloading the management node, ANSYS Fluent should be used in batch mode, and jobs should be submitted to the compute nodes through Slurm.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J JOB_NAME #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested<br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested<br> <br>module load ANSYS<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Slurm<br />
#: <code>sbatch fluentjob.sh</code><br />
#Check status with <code>squeue</code> command</div>Bdenghttp://copper.mtech.edu/index.php?title=Available_Software&diff=701Available Software2021-01-25T18:10:04Z<p>Bdeng: </p>
<hr />
<div>Below is some of the software installed in system-wide locations. You can install software in your own home directory, and you are also welcome to [[Contacts|contact]] us to request installation of other software or packages.<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''System'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[compilers|Compilers]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[CUDA]]'''<br />
| style="width:100px; text-align:center;" |<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[Modules]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[MPI]]'''<br />
| style="width:100px; text-align:center;" |<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
| style="width:100px; text-align:center;" |<br />
| style="width:100px; text-align:center;" |<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Science & Engineering'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[BLAST+]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[COMSOL]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[GATK]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[LAMMPS]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[MATLAB]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Mothur]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[NAMD]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[R]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[ANSYS]]'''<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Computer/Data Science'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[Julia]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Python]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Tensorflow]]'''<br />
<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
|}<br />
<br />
{| style="width: 90%;font-size: 110%;" | class="wikitable"<br />
!colspan="3" style="color:#925223"| '''Visualizations'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
'''[[LidarViewer]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[VisIt]]'''<br />
| style="width:100px; text-align:center;" |<br />
'''[[Vrui]]'''<br />
|- style="vertical-align:middle;"<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
<br />
| style="width:100px; text-align:center;" |<br />
|}</div>Bdenghttp://copper.mtech.edu/index.php?title=Python&diff=700Python2020-11-04T16:43:31Z<p>Bdeng: </p>
<hr />
<div>The default Python installed is 2.7.5 and 3.4.10, compiled by GCC 4.8.5.<br />
<br />
We have Python 3.8 installed as a module. You can use <code>module load python/3.8</code> to load it.<br />
<br />
We also have Anaconda3 and Miniconda3 installed so that you can create your own Python development environments. The difference between Miniconda and Anaconda is that Anaconda has many more numeric and scientific libraries installed by default.<br />
==Loading the anaconda module==<br />
Once logged into HPC, you can use the module command to load the anaconda module.<br />
: <code>module load anaconda</code><br />
For miniconda<br />
: <code>module load miniconda</code><br />
Please note that you can only use one of the "conda" modules at a time. If you have anaconda loaded but need to switch to miniconda, you will first need to unload the anaconda module:<br />
: <code style=display:block>module unload anaconda<br>module load miniconda</code><br />
==Creating Anaconda or Miniconda Python environment==<br />
You can use the <code>conda create</code> command to create a new Python environment. <br />
For example, to create a Python environment named mypy38 with Python 3.8:<br />
: <code>conda create --name mypy38 python=3.8</code><br />
You'll then be shown the location where the environment will be created and the packages to be installed. Type ''y'' to confirm the installation. <br />
By default, this will create the environment in your home directory under <code>~/.conda/envs/</code>. If you would like to save it to another location, you can use the <code>-p</code> option:<br />
: <code>conda create -p /PATH/mypy38 python=3.8</code><br />
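Note that an environment created with <code>-p</code> is activated by its full path rather than by its name, for example (using the hypothetical path above):<br />
: <code>source activate /PATH/mypy38</code><br />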
<br />
Please note: after the creation, you may be told to use "conda activate mypy38" to activate the environment. If you use that command, you'll then be told to run "conda init". '''You probably don't want to do that''', as it will alter your .bashrc file and load one of the conda versions by default.<br />
<br />
==Using a Python Environment in miniconda or anaconda==<br />
To get a list of available conda environments, use the command <code>conda env list</code>. You will get output like the following:<br />
: <code style=display:block># conda environments:<br>#<br>mypy38 /home/mtech/bdeng/.conda/envs/mypy38<br>base * /opt/ohpc/pub/apps/anaconda3</code><br />
===Activating a Python environment===<br />
To use the mypy38 environment, use the command:<br />
: <code>source activate mypy38</code><br />
You'll then notice the environment name prepended to the command prompt, for example:<br />
: <code>(mypy38) [username@oredigger ~]$</code><br />
Now you can check your python version:<br />
: <code style=display:block>(mypy38) [username@oredigger ~]$ python -V<br>Python 3.8.5</code><br />
===Installing new Python packages===<br />
Once your Python environment is activated, you can install additional packages for your project.<br />
For example, to install the <code>scipy</code> package, use the command:<br />
<code>(mypy38) [username@oredigger ~]$ conda install scipy</code><br />
<br />
For some packages, you may need to specify the channel using the <code>-c</code> option. Refer to the documentations of the package you'll need.<br />
To get a list of conda packages installed, use:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda list</code><br />
===Deactivating a Python environment===<br />
When you are finished with your environment or need to switch to a different one, you can deactivate the current environment with:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda deactivate</code><br />
<br />
You'll see the environment name removed from the command prompt:<br />
<br />
<code>[username@oredigger ~]$ </code><br />
==Additional conda commands==<br />
To remove a conda environment:<br />
<code>conda env remove --name environment_name</code><br />
<br />
(if the environment is not in the default path, use: <code>conda env remove -p /PATH/environment_name</code>)<br />
<br />
To export a conda environment, first activate the environment, then use: <code>conda env export > environment.yml </code><br />
<br />
This will export a list of your environment's packages to the file ''environment.yml''.<br />
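To rebuild the environment later, or on another system, you can then create it from the exported file:<br />
: <code>conda env create -f environment.yml</code><br />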
<br />
For more conda commands: <br />
'''[https://docs.conda.io/projects/conda/en/latest/_downloads/843d9e0198f2a193a3484886fa28163c/conda-cheatsheet.pdf Conda Cheat Sheet]'''<br />
and<br />
'''[https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html Conda User Guide]'''<br />
<br />
==Use Python environment in a job submission script==<br />
To use a Python environment in a Slurm job submission script, below is a sample using 1 core with the mypy38 Python environment:<br />
<code style=display:block>#!/bin/sh<br>#SBATCH -J pythontest #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 1 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load anaconda<br>source activate mypy38<br>python mypython.py</code></div>Bdenghttp://copper.mtech.edu/index.php?title=Python&diff=699Python2020-09-18T18:47:24Z<p>Bdeng: </p>
<hr />
<div>The default Python installed is 2.7.5 and 3.4.10, compiled by GCC 4.8.5.<br />
<br />
We also have Anaconda3 and Miniconda3 installed so that you can create your own Python development environments. The difference between Miniconda and Anaconda is that Anaconda has many more numeric and scientific libraries installed by default.<br />
==Loading the anaconda module==<br />
Once logged into HPC, you can use the module command to load the anaconda module.<br />
: <code>module load anaconda</code><br />
For miniconda<br />
: <code>module load miniconda</code><br />
Please note that you can only use one of the "conda" modules at a time. If you have anaconda loaded but need to switch to miniconda, you will first need to unload the anaconda module:<br />
: <code style=display:block>module unload anaconda<br>module load miniconda</code><br />
==Creating Anaconda or Miniconda Python environment==<br />
You can use the <code>conda create</code> command to create a new Python environment. <br />
For example, to create a Python environment named mypy38 with Python 3.8:<br />
: <code>conda create --name mypy38 python=3.8</code><br />
You'll then be shown the location where the environment will be created and the packages to be installed. Type ''y'' to confirm the installation. <br />
By default, this will create the environment in your home directory under <code>~/.conda/envs/</code>. If you would like to save it to another location, you can use the <code>-p</code> option:<br />
: <code>conda create -p /PATH/mypy38 python=3.8</code><br />
<br />
Please note: after the creation, you may be told to use "conda activate mypy38" to activate the environment. If you use that command, you'll then be told to run "conda init". '''You probably don't want to do that''', as it will alter your .bashrc file and load one of the conda versions by default.<br />
<br />
==Using a Python Environment in miniconda or anaconda==<br />
To get a list of available conda environments, use the command <code>conda env list</code>. You will get output like the following:<br />
: <code style=display:block># conda environments:<br>#<br>mypy38 /home/mtech/bdeng/.conda/envs/mypy38<br>base * /opt/ohpc/pub/apps/anaconda3</code><br />
===Activating a Python environment===<br />
To use the mypy38 environment, use the command:<br />
: <code>source activate mypy38</code><br />
You'll then notice the environment name prepended to the command prompt, for example:<br />
: <code>(mypy38) [username@oredigger ~]$</code><br />
Now you can check your python version:<br />
: <code style=display:block>(mypy38) [username@oredigger ~]$ python -V<br>Python 3.8.5</code><br />
===Installing new Python packages===<br />
Once your Python environment is activated, you can install additional packages for your project.<br />
For example, to install the <code>scipy</code> package, use the command:<br />
<code>(mypy38) [username@oredigger ~]$ conda install scipy</code><br />
<br />
For some packages, you may need to specify the channel using the <code>-c</code> option. Refer to the documentations of the package you'll need.<br />
To get a list of conda packages installed, use:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda list</code><br />
===Deactivating a Python environment===<br />
When you are finished with your environment or need to switch to a different one, you can deactivate the current environment with:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda deactivate</code><br />
<br />
You'll see the environment name removed from the command prompt:<br />
<br />
<code>[username@oredigger ~]$ </code><br />
==Additional conda commands==<br />
To remove a conda environment:<br />
<code>conda env remove --name environment_name</code><br />
<br />
(if the environment is not in the default path, use: <code>conda env remove -p /PATH/environment_name</code>)<br />
<br />
To export a conda environment, first activate the environment, then use: <code>conda env export > environment.yml </code><br />
<br />
This will export a list of your environment's packages to the file ''environment.yml''<br />
<br />
For more conda commands: <br />
'''[https://docs.conda.io/projects/conda/en/latest/_downloads/843d9e0198f2a193a3484886fa28163c/conda-cheatsheet.pdf Conda Cheat Sheet]'''<br />
and<br />
'''[https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html Conda User Guide]'''<br />
<br />
==Use Python environment in a job submission script==<br />
To use a Python environment in a Slurm job submission script, below is a sample using 1 core with the mypy38 Python environment:<br />
<code style=display:block>#!/bin/sh<br>#SBATCH -J pythontest #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 1 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load anaconda<br>source activate mypy38<br>python mypython.py</code></div>Bdenghttp://copper.mtech.edu/index.php?title=Python&diff=698Python2020-09-18T18:47:09Z<p>Bdeng: </p>
<hr />
<div>The default Python installed is 2.7.5 and 3.4.10, compiled by GCC 4.8.5.<br />
<br />
We also have Anaconda3 and Miniconda3 installed so that you can create your own Python development environments. The difference between Miniconda and Anaconda is that Anaconda has many more numeric and scientific libraries installed by default.<br />
==Loading the anaconda module==<br />
Once logged into HPC, you can use the module command to load the anaconda module.<br />
: <code>module load anaconda</code><br />
For miniconda<br />
: <code>module load miniconda</code><br />
Please note that you can only use one of the "conda" modules at a time. If you have anaconda loaded but need to switch to miniconda, you will first need to unload the anaconda module:<br />
: <code style=display:block>module unload anaconda<br>module load miniconda</code><br />
==Creating Anaconda or Miniconda Python environment==<br />
You can use the <code>conda create</code> command to create a new Python environment. <br />
For example, to create a Python environment named mypy38 with Python 3.8:<br />
: <code>conda create --name mypy38 python=3.8</code><br />
You'll then be shown the location where the environment will be created and the packages to be installed. Type ''y'' to confirm the installation. <br />
By default, this will create the environment in your home directory under <code>~/.conda/envs/</code>. If you would like to save it to another location, you can use the <code>-p</code> option:<br />
: <code>conda create -p /PATH/mypy38 python=3.8</code><br />
<br />
Please note: after the creation, you may be told to use "conda activate mypy38" to activate the environment. If you use that command, you'll then be told to run "conda init". '''You probably don't want to do that''', as it will alter your .bashrc file and load one of the conda versions by default.<br />
<br />
==Using a Python Environment in miniconda or anaconda==<br />
To get a list of available conda environments, use the command <code>conda env list</code>. You will get output like the following:<br />
: <code style=display:block># conda environments:<br>#<br>mypy38 /home/mtech/bdeng/.conda/envs/mypy38<br>base * /opt/ohpc/pub/apps/anaconda3</code><br />
===Activating a Python environment===<br />
To use the mypy38 environment, use the command:<br />
: <code>source activate mypy38</code><br />
You'll then notice the environment name prepended to the command prompt, for example:<br />
: <code>(mypy38) [username@oredigger ~]$</code><br />
Now you can check your python version:<br />
: <code style=display:block>(mypy38) [username@oredigger ~]$ python -V<br>Python 3.8.5</code><br />
===Installing new Python packages===<br />
Once your Python environment is activated, you can install additional packages for your project.<br />
For example, to install the <code>scipy</code> package, use the command:<br />
<code>(mypy38) [username@oredigger ~]$ conda install scipy</code><br />
<br />
For some packages, you may need to specify the channel using the <code>-c</code> option. Refer to the documentations of the package you'll need.<br />
To get a list of conda packages installed, use:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda list</code><br />
===Deactivating a Python environment===<br />
When you are finished with your environment or need to switch to a different one, you can deactivate the current environment with:<br />
<br />
<code>(mypy38) [username@oredigger ~]$ conda deactivate</code><br />
<br />
You'll see the environment name removed from the command prompt:<br />
<br />
<code>[username@oredigger ~]$ </code><br />
==Additional conda commands==<br />
To remove a conda environment:<br />
<code>conda env remove --name environment_name</code><br />
<br />
(if the environment is not in the default path, use: <code>conda env remove -p /PATH/environment_name</code>)<br />
<br />
To export a conda environment, first activate the environment, then use: <code>conda env export > environment.yml </code><br />
<br />
This will export a list of your environment's packages to the file ''environment.yml''<br />
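The exported file can later be used to recreate the environment, for example on another system or after removing it (assuming the ''environment.yml'' produced above):<br />
<br />
<code>conda env create -f environment.yml</code><br />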
<br />
For more conda commands: <br />
'''[https://docs.conda.io/projects/conda/en/latest/_downloads/843d9e0198f2a193a3484886fa28163c/conda-cheatsheet.pdf Conda Cheat Sheet]'''<br />
'''[https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html Conda User Guide]'''<br />
<br />
==Using a Python environment in a job submission script==<br />
To use a Python environment in a Slurm job submission script, load the module and activate the environment inside the script. Below is a sample using 1 core with the mypy38 Python environment:<br />
<code style=display:block>#!/bin/sh<br>#SBATCH -J pythontest #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 1 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load anaconda<br>source activate mypy38<br>python mypython.py</code></div>Bdenghttp://copper.mtech.edu/index.php?title=ANSYS&diff=690ANSYS2020-08-27T23:50:47Z<p>Bdeng: /* Running ANSYS Desktop GUI */</p>
<hr />
<div>==Running ANSYS Desktop GUI==<br />
The ANSYS desktop GUI can be run on the management node, but long simulations should be executed on the compute nodes. <br />
To start the ANSYS Workbench GUI on HPC or the Copper server:<br />
: <code style=display:block>module load ansys<br>runwb2</code><br />
To start the FLUENT GUI, use <code>fluent</code> instead of <code>runwb2</code> above.<br />
<br />
==Submitting batch jobs through Moab==<br />
To avoid overloading the management node, ANSYS Fluent should be used in batch mode and jobs should be submitted to the compute nodes through Torque.<br />
==== Sample Script (UNDER DEVELOPMENT) ====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called fluentjob.sh<br />
#: <code style=display:block>#!/bin/sh<br>#PBS -l nodes=1:ppn=4<br>#PBS -j oe<br>#PBS -N Fluent_test<br>#PBS -S /bin/bash<br>#PBS -l walltime=02:00:00<br> <br>module load ansys<br>fluent ### OPTIONS? INPUT FILES? </code><br />
#Submit to Moab<br />
#: <code>msub fluentjob.sh</code><br />
#Check status with showq or qstat.</div>Bdenghttp://copper.mtech.edu/index.php?title=Contacts&diff=689Contacts2020-08-19T17:43:04Z<p>Bdeng: </p>
<hr />
<div>=== The HPC Team ===<br />
'''[https://cs.mtech.edu/?profiles=phillip-j-curtiss Phillip Curtiss]''', Computer Science Professor, pcurtiss at mtech dot edu<br />
<br />
'''[https://sites.google.com/view/bdeng/home Bowen Deng]''', HPC Application Scientist, bdeng at mtech dot edu<br />
<br />
'''[https://cs.mtech.edu/main/index.php/component/content/article/97 Jeff Braun]''', Computer Science Professor, jbraun at mtech dot edu<br />
<br />
=== New Account Request ===<br />
Please fill out this [https://docs.google.com/forms/d/e/1FAIpQLSdgFqxKFekaSGj7yUVNKABxt8z-vmxP1oNYcB7eHQBCnzE9Zw/viewform?usp=sf_link questionnaire] if you need a new account.</div>Bdenghttp://copper.mtech.edu/index.php?title=Connecting_to_HPC&diff=687Connecting to HPC2020-06-12T20:40:17Z<p>Bdeng: </p>
<hr />
<div>You can use Secure Shell(SSH) to connect to HPC. Depending on the operating system of your computer, you have different options to get connected.<br />
<br />
==For Mac/Linux==<br />
You can directly use the Terminal application that comes with your system to connect.<br />
<br />
In your terminal, type the following command to connect via ssh:<br />
<br />
<code>ssh YourUserName@hpc.mtech.edu</code><br />
<br />
You will then be prompted to enter your password with a line similar to the following:<br />
<br />
<code>YourUserName@hpc.mtech.edu's password:</code><br />
<br />
You can then enter your password. Note: when you enter your password, nothing will display on the screen.<br />
<br />
If you intend to use any applications with GUI interfaces (e.g. MATLAB, COMSOL), you will need to add the '-X' option when connecting:<br />
<br />
<code>ssh -X YourUserName@hpc.mtech.edu</code><br />
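Once connected with the '-X' option (and with an X server running on your local machine), you can verify that the graphical display works by running a simple X client such as xeyes:<br />
<br />
<code>xeyes</code><br />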
<br />
<br />
== For Windows ==<br />
You will need to install a terminal emulator program to connect. There are many such programs[https://en.wikipedia.org/wiki/List_of_terminal_emulators]; below are examples using '''MobaXterm''' and '''Xshell'''.<br />
<br />
'''In Windows 10, with the addition of native OpenSSH support, you can use the Command Prompt that comes with Windows to connect to HPC. (Use the same command as in the section above.)'''<br />
<br />
=== Using MobaXterm (Recommended) ===<br />
MobaXterm[http://mobaxterm.mobatek.net/] is a single application that integrates several tools, e.g., SSH, X11, and FTP. <br />
<br />
MobaXterm has a free Home Edition, and you can download it [http://mobaxterm.mobatek.net/download.html here]. Either the Portable version or the Installer version is fine.<br />
<br />
* To connect to the HPC, you can either start a local terminal and use the command <code>ssh YourUserName@hpc.mtech.edu</code> as detailed in the Mac/Linux section above.<br />
<br />
[[File:Moba_1.png|border|500px]]<br />
<br />
* Or you can click the '''Session''' button on the top left corner and choose '''SSH''' in the pop-up window. Then enter the HPC's address and your username as shown below:<br />
<br />
[[File:Moba_2.png|border|500px]]<br />
<br />
Then click '''Ok''' and you'll get a prompt to enter your password.<br />
<br />
=== Using Xshell ===<br />
You can download Xshell at https://www.netsarang.com/products/xsh_overview.html It's free for School/Home use.<br />
<br />
You'll also need to install an X Server program on your computer if you want to use any graphical applications (e.g. MATLAB, COMSOL). One free X Server is Xming and you can download it at https://sourceforge.net/projects/xming/<br />
<br />
After the installation of Xshell/Xming, you can follow the steps below to set up the connection.<br />
<br />
<div class="row"><br />
<br><br />
<div class="large-4 column"><br />
* 1. Open Xshell, select '''New''' under the '''File''' Tab, and click the '''Connection''' category.<br />In the '''Host''' field, enter our HPC address (hpc.mtech.edu).<br />You can also name this connection in the '''Name''' field (e.g., TechHpc).<br />
</div><br />
<div class="large-8 column"><br />
[[File:Xshell_1.png|border|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<br><br />
<div class="large-4 column"><br />
* 2. Click the '''Authentication''' category, make sure '''Password''' is selected for the '''Method''' category.<br />Then enter your username and password in the '''User Name''' and '''Password''' fields respectively.<br />
</div><br />
<div class="large-8 column"><br />
[[File:Xshell_2.png|border|400px]]<br />
</div><br />
</div><br />
<br />
<div class="row"><br />
<br><br />
<div class="large-4 column"><br />
* 3. (Optional for GUI Applications) Click the '''Tunneling''' category, select '''Forward X11 connection to''' and choose '''X DISPLAY:'''.<br />
</div><br />
<div class="large-8 column"><br />
[[File:Xshell_3.png|border|400px]]<br />
</div><br />
</div><br />
Once the above setup is complete, you can select '''Open''' under the '''File''' tab (or just use the '''Open''' button) to connect to HPC.<br />
Remember to run Xming first if you need to use graphical interfaces; you'll see its icon in the Taskbar.<br />
<br />
==Graphical Display (optional)==<br />
Within the Montana University System, the graphical display from hpc.mtech.edu can be redirected to your local PC if you install an X server. The Cygwin X server (http://x.cygwin.com/ ) is free and works well. You will need to enable X11 forwarding before you log in to hpc.mtech.edu - in PuTTY this is done by selecting Connection-SSH-X11 and checking the Enable X11 forwarding box. Start the X server before logging on and test by running "xeyes".<br />
<br />
<br />
==Transferring files between your computer and HPC==<br />
<br />
After you log in to HPC, the files you create/save are stored on HPC. You will need some tools to transfer files between your computer and HPC.<br />
<br />
===For Mac/Linux===<br />
You can directly use the <code>scp</code> command in the terminal to transfer files, or you can use any FTP application.<br />
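For example, to upload a file to your home directory on HPC, or download one from it (the file names here are placeholders):<br />
<br />
<code style=display:block>scp myfile.txt YourUserName@hpc.mtech.edu:~/<br>scp YourUserName@hpc.mtech.edu:~/results.txt .</code><br />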
<br />
===For Windows===<br />
You can use any FTP programs to transfer files. Just use hpc.mtech.edu as the host address, and provide your username and password. One FTP program is Xftp (free for home/school). You can get it at https://www.netsarang.com/products/xfp_overview.html<br />
<br />
If you are logged in using Xshell, you can use the '''New File Transfer''' button in the toolbar (or the keyboard shortcut '''Ctrl'''+'''Alt'''+'''F''') to open the file transfer. <br />
By default, in the Xftp window that pops up, you'll see your Desktop directory on the left side and your current Xshell directory on the right. You can then right-click any file or folder to transfer it.<br />
<br />
For MobaXterm, the FTP function is integrated. If you are logged in to HPC, you can choose the '''Sftp''' tab in the left sidebar to download/upload files, or you can start an '''SFTP''' session similar to the MobaXterm tutorial above.<br />
<br />
<br />
==First Login==<br />
Once you are logged in, you'll see some text, including the logo and notices.<br /><br />
At the bottom, you'll see your cursor after a Unix prompt (typically a dollar sign):<br /><br />
<code>[YourUserName@oredigger ~]$ &#10074;</code><br />
The text before the dollar sign includes your username, the computer name, and your current directory. For brevity, this text will be omitted in the following examples, like <code>$ &#10074;</code><br />
<br />
===Change your initial password===<br />
It's best to change your initial password the first time you log in. To do it, simply use the passwd command:<br />
<br />
<code>$ passwd</code><br />
<br />
You will then get the following texts:<br />
<br />
<code>Changing password for user YourUserName.<br />
<br />
Changing password for YourUserName.<br />
<br />
(current) UNIX password: &#10074;</code><br />
<br />
Enter your initial (current) password and press Enter. You'll then be prompted to enter and retype your new password:<br />
<br />
<code>New password:<br />
<br />
Retype new password:</code><br />
<br />
If the change is successful, you'll see:<br />
<br />
<code>passwd: all authentication tokens updated successfully.</code><br />
<br />
===Your Home Directory===<br />
When you log in to HPC, you are directed to your default home directory, which generally has the following format:<br />
<br />
<code>/data1/YourAffiliationInstitution/YourUserName</code></div>Bdenghttp://copper.mtech.edu/index.php?title=MATLAB&diff=686MATLAB2020-06-10T17:40:33Z<p>Bdeng: </p>
<hr />
<div>MATLAB (R2019a) and the Parallel Computing Toolbox is installed. The Distributed Computing Server is not installed, so calculations are limited to single compute nodes. <br />
<br />
==Submitting MATLAB jobs==<br />
<br />
MATLAB jobs that do and do not use the Parallel Computing Toolbox can be submitted to [[Slurm]] via a script containing:<br />
: <code style=display:block>#!/bin/sh<br>#SBATCH -J MatlabJob #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load MATLAB<br>matlab -nodesktop -nosplash -r "your_matlab_program(input_parameters);quit;"</code><br />
<br />
Since MATLAB is multithreaded, you can request 12 ppn even if you are not using the Parallel Computing Toolbox. For parallel MATLAB jobs, the matlabpool is limited to the physical cores only, that is, 16 workers per compute node. <br />
<br />
Use sbatch to submit your job script to Slurm.<br />
: <code>sbatch matlabjob.sh</code><br />
where matlabjob.sh contains the above script updated with your program and username info.<br />
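After submitting, you can check the status of your job with Slurm's <code>squeue</code> command:<br />
: <code>squeue -u YourUserName</code><br />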
<br />
==Running MATLAB interactively in command line==<br />
If you wish to run MATLAB interactively without the Desktop GUI, start an interactive job on a compute node with:<br />
:<code>srun -N 1 -n 12 --pty /bin/bash</code><br />
<br />
This will return with a command prompt on a compute node, for example:<br />
<br />
:<code style=display:block>[USER@oredigger ~]$ srun -N 1 -n 12 --pty /bin/bash<br>[USER@cn0 ~]$</code><br />
<br />
Then you can start MATLAB with the commands:<br />
:<code style=display:block>module load MATLAB<br>matlab</code><br />
<br />
==MATLAB desktop on headnode==<br />
The MATLAB Desktop GUI is currently limited to the management node. Please respect other users and avoid long computational runs on the management node if other users are on the system.</div>Bdenghttp://copper.mtech.edu/index.php?title=COMSOL&diff=683COMSOL2020-06-10T17:32:11Z<p>Bdeng: </p>
<hr />
<div>==COMSOL General Documentation==<br />
COMSOL[http://www.comsol.com] documentation is available in both html and pdf on hpc.mtech.edu/comsol[http://hpc.mtech.edu/comsol]. Navigate the file directory structure to locate the desired document. Here is a direct link to the Introduction to COMSOL<br />
[http://hpc.mtech.edu/comsol/pdf/COMSOL_Multiphysics/IntroductionToCOMSOLMultiphysics.pdf]. There is also documentation available through the COMSOL desktop. The remaining documentation in the Wiki below describes how to run COMSOL on the cluster.<br />
<br />
==Available COMSOL Modules==<br />
Our COMSOL license comes with several modules including HEATTRANSFER, OPTIMIZATION, STRUCTURALMECHANICS, NONLINEARSTRUCTMATERIALS, ACOUSTICS, ACDC, etc. (For a complete list of modules, refer to the command at the end of this page.)<br />
Our COMSOL license allows 3 concurrent instances of COMSOL (this also includes COMSOL running on office computers) and the modules allow 1~3 concurrent uses. As a result, please remember to exit COMSOL when you are done with your calculations, so that other users are not affected.<br />
<br />
==Running desktop gui from Management Node==<br />
The COMSOL desktop gui can be run on the management node, but long simulations should be executed on the compute nodes (see below). Once logged in (make sure you have X server running on your local machine), type the following:<br />
: <code style=display:block>module load COMSOL<br>comsol</code><br />
<br />
==Submitting batch jobs through Slurm==<br />
To avoid overloading the management node, COMSOL should be used in batch mode and jobs should be submitted to the compute nodes through Slurm.<br />
====Single Node====<br />
#Create a job script for using 4 processors (cores) - put the following in a file called comsoljob.sh (be sure to update with your input/output file names)<br />
#: <code style=display:block>#!/bin/sh<br>#SBATCH -J ComsolJob #Name of the computation<br>#SBATCH -N 1 # Total number of nodes requested <br>#SBATCH -n 4 # Total number of tasks per node requested<br>#SBATCH -t 01:00:00 # Total run time requested - 1 hour<br>#SBATCH -p normal # compute nodes partition requested <br><br>module load COMSOL<br>comsol batch -np 4 -inputfile mycomsol_file.mph -outputfile mycomsol_out.mph</code><br />
#Submit to Slurm<br />
#: <code>sbatch comsoljob.sh</code><br />
#Check status with squeue or the log/status file.<br />
#: While your job is running, you can monitor the status of your job from the <code>mycomsol_out.status</code> file:<br />
#: <code style=display:block>1591807213426<br>Running</code><br />
#: Once it's finished, it'll change to something like:<br />
#: <code style=display:block>1591807541285<br>Done</code><br />
<br />
::: You can also find the Comsol output from the Slurm log file:<br />
::: <code style=display:block>*******************************************<br>***COMSOL 5.3.0.223 progress output file***<br>*******************************************<br>Wed Jun 10 10:40:13 MDT 2020<br>COMSOL 5.3 (Build: 223) starting in batch mode<br>Opening file: /data1/mtech/test/testcomsol/mycomsol_file.mph<br>Open time: 66 s.<br>Running: Study 1<br><---- Compile Equations: Time Dependent in Study 1/Solution 1 (sol1) -----------<br>Started at 10-Jun-2020 10:41:19.<br>Geometry shape order: Linear<br>Running on Intel(R) Xeon(R) CPU E5-2660 0 at 2.20 GHz.<br>Using 4 cores on 2 sockets.<br>Available memory: 64.24 GB.<br> Current Progress: 0 % - Free triangular<br>Memory: 548/548 5771/5771<br>Number of vertex elements: 8<br>Number of boundary elements: 982<br> Current Progress: 1 % - Inserting interior points<br>Memory: 555/555 6042/6042</code><br />
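::: To follow the Slurm log in real time while the job runs, you can use <code>tail -f</code> on the log file (assuming Slurm's default log naming; 12345 is a placeholder job ID):<br />
::: <code>tail -f slurm-12345.out</code><br />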
<br />
==Performance==<br />
COMSOL's performance generally improves as more processors (i.e., cores) are used, up to a certain point. This varies with the type of simulation and model being used. The "sweet spot" might be just 4 or 8 processors. For example, one test gave:<br />
<table><br />
<tr><td>np</td><td>1</td><td>2</td><td>4</td><td>8</td><td>16</td><td>24</td><td>32</td></tr><br />
<tr><td>time(s)</td><td>51</td><td>33</td><td>24</td><td>18</td><td>12</td><td>19</td><td>15</td></tr><br />
</table><br />
<br />
While hyperthreading is currently enabled, don't expect performance to improve for np > 16 as there are 16 cores per node (32 threads run simultaneously with hyperthreading and COMSOL does not take advantage of hyperthreading).<br />
<br />
==Checking for license availability==<br />
Montana Tech has a floating network license, so COMSOL could be in use on another system. To check which licenses are available:<br />
<br />
<code style=display:block>/opt/ohpc/pub/apps/comsol53/license/glnxa64/lmstat -a -c /opt/ohpc/pub/apps/comsol53/license/license.dat</code></div>Bdeng