COMSOL

From Montana Tech High Performance Computing

Revision as of 11:03, 10 June 2020

COMSOL General Documentation

COMSOL[1] documentation is available in both HTML and PDF on hpc.mtech.edu/comsol[2]. Navigate the file directory structure to locate the desired document. Here is a direct link to the Introduction to COMSOL [3]. Documentation is also available through the COMSOL desktop. The remainder of this page describes how to run COMSOL on the cluster.

Available COMSOL Modules

Our COMSOL license comes with several modules, including HEATTRANSFER, OPTIMIZATION, STRUCTURALMECHANICS, NONLINEARSTRUCTMATERIALS, ACOUSTICS, ACDC, etc. (for a complete list of modules, refer to the command at the end of this page). Our COMSOL license allows 3 concurrent instances of COMSOL (this also includes COMSOL running on office computers), and the individual modules allow 1-3 concurrent uses. As a result, please remember to exit COMSOL when you are done with your calculations, so that other users are not blocked.

Running the desktop GUI from the Management Node

The COMSOL desktop GUI can be run on the management node, but long simulations should be executed on the compute nodes (see below). Once logged in (make sure you have an X server running on your local machine), type the following:

module load COMSOL
comsol

Submitting batch jobs through Slurm

To avoid overloading the management node, COMSOL should be used in batch mode, with jobs submitted to the compute nodes through Slurm.

Single Node

  1. Create a job script for using 4 processors (cores) - put the following in a file called comsoljob.sh (be sure to update with your input/output file names)
    #!/bin/sh
    #SBATCH -J ComsolJob #Name of the computation
    #SBATCH -N 1 # Total number of nodes requested
    #SBATCH -n 4 # Total number of tasks requested
    #SBATCH -t 01:00:00 # Total run time requested - 1 hour
    #SBATCH -p normal # compute nodes partition requested

    module load COMSOL
    comsol batch -np 4 -inputfile mycomsol_file.mph -outputfile mycomsol_out.mph
  2. Submit to Slurm
    sbatch comsoljob.sh
  3. Check status with squeue or the log/status file.
    While your job is running, you can monitor the status of your job from the mycomsol_out.status file:
    1591807213426
    Running
    Once it has finished, it will change to something like:
    1591807541285
    Done
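The two-line status file (epoch timestamp in milliseconds, then the job state) is easy to check from the shell. A minimal sketch, assuming the `mycomsol_out.status` name from the script above; the sample file is created here only so the example is self-contained - a real run writes it itself:

```shell
# Report the current state of a COMSOL batch job from its .status file.
# Line 1 is an epoch timestamp (ms); line 2 is the job state.
status_file=mycomsol_out.status

# Sample status file for illustration only (COMSOL writes the real one).
printf '1591807213426\nRunning\n' > "$status_file"

state=$(sed -n '2p' "$status_file")
echo "Job state: $state"
```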
You can also find the COMSOL output in the Slurm log file:
*******************************************
***COMSOL 5.3.0.223 progress output file***
*******************************************
Wed Jun 10 10:40:13 MDT 2020
COMSOL 5.3 (Build: 223) starting in batch mode
Opening file: /data1/mtech/test/testcomsol/mycomsol_file.mph
Open time: 66 s.
Running: Study 1
<---- Compile Equations: Time Dependent in Study 1/Solution 1 (sol1) -----------
Started at 10-Jun-2020 10:41:19.
Geometry shape order: Linear
Running on Intel(R) Xeon(R) CPU E5-2660 0 at 2.20 GHz.
Using 4 cores on 2 sockets.
Available memory: 64.24 GB.
Current Progress: 0 % - Free triangular
Memory: 548/548 5771/5771
Number of vertex elements: 8
Number of boundary elements: 982
Current Progress: 1 % - Inserting interior points
Memory: 555/555 6042/6042
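To watch progress while the job runs, you can filter the Slurm log for the progress lines shown above. A sketch, assuming the default Slurm log name `slurm-<jobid>.out`; the sample log content is written here only so the command has something to match:

```shell
# Extract COMSOL progress lines from a Slurm log file.
log=slurm-12345.out   # hypothetical job id; Slurm writes slurm-<jobid>.out by default

# Sample log content for illustration (a real run appends this as COMSOL prints it).
printf 'Running: Study 1\nCurrent Progress: 1 %% - Inserting interior points\n' > "$log"

grep 'Current Progress' "$log"
```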

LiveLink for MATLAB

To use COMSOL with the LiveLink for MATLAB:

comsol server matlab

This will start up the MATLAB Desktop.


To run a COMSOL batch job through Slurm using a .m file (e.g., myprog.m), put the following in a job script:

#!/bin/sh
#SBATCH -J ComsolJob # Name of the computation
#SBATCH -N 1 # Total number of nodes requested
#SBATCH -n 32 # Total number of tasks requested
#SBATCH -t 02:00:00 # Total run time requested - 2 hours
#SBATCH -p normal # Compute nodes partition requested
#SBATCH -D /home/mtech/YOURNAME/COMSOL_PROJECT_DIR # Working directory
#SBATCH --mail-type=BEGIN,END # Email when the job begins and ends
#SBATCH --mail-user=YOURNAME@mtech.edu

module load COMSOL
comsol server < /dev/null > comsolserver.log &
matlab -nodesktop -nosplash -r "addpath /opt/COMSOL43b/mli, mphstart, myprog, exit"

COMSOL will automatically use the 32 available processors in the node. This job would run for up to 2 hours and then be cancelled; if you expect a longer run, make sure to increase the walltime.

Performance

COMSOL's performance generally improves as more processors (i.e., cores) are used, up to a point that varies with the type of simulation and model being used. The "sweet spot" might be just 4 or 8 processors. For example, one test gave:

np        1    2    4    8    16   24   32
time (s)  51   33   24   18   12   19   15

While hyperthreading is currently enabled, don't expect performance to improve for np > 16: each node has 16 physical cores (32 threads run simultaneously with hyperthreading, but COMSOL does not take advantage of hyperthreading).
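Because the sweet spot varies by model, it can be worth timing a short version of your study at several core counts before committing to long runs. A sketch that simply prints the candidate batch commands for such a scaling test (file names follow the single-node example above; submit each command as its own job and compare the run times):

```shell
# Print one comsol batch command per candidate core count for a scaling test.
# Input/output file names are placeholders from the single-node example above.
cmds=$(for np in 1 2 4 8 16; do
    echo "comsol batch -np $np -inputfile mycomsol_file.mph -outputfile out_np${np}.mph"
done)
echo "$cmds"
```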

Checking for license availability

Montana Tech has a floating network license, so COMSOL may be in use on other systems. To check which licenses are available:

/opt/comsol53/license/glnxa64/lmstat -a -c /opt/comsol53/license/license.dat
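The interesting lines in the lmstat output are the "Users of ..." summaries, which report how many licenses are issued and in use per feature. A sketch of filtering for them; the sample line below is illustrative only - in practice, pipe the lmstat command above into the grep:

```shell
# Filter license-usage summary lines from lmstat output.
# Sample line for illustration; real output comes from the lmstat command above.
lmstat_out='Users of COMSOL:  (Total of 3 licenses issued;  Total of 1 license in use)'
echo "$lmstat_out" | grep 'Users of'
```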