GPU Nodes
From Montana Tech High Performance Computing
Revision as of 16:11, 9 February 2018
Nodes 20 and 21 have three NVIDIA Tesla K20 Graphics Processing Unit (GPU) accelerators and 128 GB of RAM.
Normal jobs will be assigned to these nodes only when all other compute nodes are in use.
==CUDA==
CUDA is NVIDIA's programming language for its GPUs.
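As a minimal sketch of what CUDA code looks like, the following program adds two vectors on the GPU. The file name and compile line are illustrative; it assumes the CUDA toolkit (nvcc) is available on the GPU node.

```cuda
// vecadd.cu -- minimal CUDA example: add two vectors on the GPU.
// Compile on a GPU node with: nvcc vecadd.cu -o vecadd
#include <stdio.h>

// Each GPU thread computes one element of the result.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified memory is accessible from both host and device (CUDA 6+,
    // supported on the K20's compute capability 3.5).
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[10] = %f\n", c[10]);  // a[10] + b[10] = 10 + 20 = 30
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```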
==Accessing GPU nodes==
A GPU node can be requested via the resource list flag (-l [resource list]).
One option is to request the specific node:
msub -l nodes=n21 ....
The preferred method is to request a gpunode:
msub -l nodes=1:ppn=16,feature=gpunode
(Use ppn=16 if all 16 processors on the node are needed; recall the default is 1.)
In a job script, the request would look something like:
#!/bin/sh
#PBS -l nodes=1:ppn=2
#PBS -l feature=gpunode
#PBS -N GPUJob
#PBS -d /home/mtech/username/working_dir
#PBS -S /bin/bash
...
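The trailing ... stands for whatever commands should actually run once the job starts. A hypothetical tail for the script, assuming the CUDA toolkit is provided as an environment module (the module name and the vecadd binary are assumptions, not this cluster's confirmed setup):

```
# Load the CUDA toolkit if the cluster provides it as a module
# (module name "cuda" is an assumption).
module load cuda

# Compile the CUDA source and run the resulting binary on the GPU node.
nvcc vecadd.cu -o vecadd
./vecadd
```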
If interactive access is needed to test and debug your CUDA code:
msub -I -l feature=gpunode
(Note: -I is an upper-case i and -l is a lower-case L.)