GPU nodes

Available GPUs

The following NVIDIA GPUs are currently available as part of the DCC-managed HPC clusters:

# GPUs | Name | Year | Architecture | CUDA cap. | CUDA cores | Clock MHz | Mem GiB | SP peak GFlops | DP peak GFlops | Peak GB/s
8 | Tesla M2050 | 2012 | GF100 (Fermi) | 2.0 | 448 | 575 | 2.62 | 1030 | 515 | 148.4
6 | Tesla M2070Q | 2012 | GF100 (Fermi) | 2.0 | 448 | 575 | 5.25 | 1030 | 515 | 150.3
2* | GeForce GTX 680 | 2012 | GK104-400 (Kepler) | 3.0 | 1536 | 1058 | 1.95 | 3090 | 128 | 192.2
3 | Tesla K20c | 2013 | GK110 (Kepler) | 3.5 | 2496 | 745 | 4.63 | 3524 | 1175 | 208
5 | Tesla K40c | 2013 | GK110B (Kepler) | 3.5 | 2880 | 745 / 875 | 11.17 | 4291 / 5040 | 1430 / 1680 | 288
8 | Tesla K80c (dual) | 2014 | GK210 (Kepler) | 3.7 | 2496 | 562 / 875 | 11.17 | 2796 / 4368 | 932 / 1456 | 240
1* | GeForce GTX TITAN X | 2015 | GM200-400 (Maxwell) | 5.2 | 3072 | 1076 | 11.92 | 6144 | 192 | 336
8* | TITAN X | 2016 | GP102 (Pascal) | 6.1 | 3584 | 1417 / 1531 | 11.90 | 10157 / 10974 | 317.4 / 342.9 | 480
14 | Tesla V100 | 2017 | GV100 (Volta) | 7.0 | 5120 | – | 16 | – | – | –

*Please note that the NVIDIA consumer GPUs (GeForce GTX 680, GeForce GTX TITAN X, and TITAN X, marked with * above) do not support ECC.
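
If you want to check from your own code which of these GPUs are visible on the node you are running on, the CUDA runtime API can report each device's model name, compute capability, memory, and clock. The following is a minimal sketch using the standard cudaGetDeviceProperties() call (a generic illustration, not a DTU-provided tool); compile it with nvcc, e.g. nvcc list_gpus.cu -o list_gpus (the file name is just an example):

#include <stdio.h>
#include <cuda_runtime.h>

/* Print every CUDA device on the node together with its compute
   capability, memory, and clock, mirroring the columns of the table above. */
int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA device found on this node.\n");
        return 1;
    }
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, CUDA cap. %d.%d, %.2f GiB, %d MHz\n",
               d, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.clockRate / 1000);
    }
    return 0;
}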

In addition, we have 1 Xeon Phi node with 2×Intel Xeon Phi 5110P accelerators (60 cores and 8 GB memory each), which can be used for testing purposes.


Running interactively on GPUs

There are currently two nodes available for running interactive jobs on NVIDIA GPUs.

Node n-62-17-44 is equipped with 2×NVIDIA Tesla M2070Q GPUs, which are based on the Fermi architecture (the same as the NVIDIA Tesla M2050).

To run interactively on this node, you can use the following command:

hpclogin1: $ gpush

This command executes a bash script that submits an interactive job to the gpushqueue queue.

Node n-62-18-47 is equipped with 1×NVIDIA GeForce GTX TITAN X (Maxwell architecture) as well as 2×NVIDIA Tesla K20c and 1×NVIDIA Tesla K40c, which are based on the Kepler architecture (the same as the NVIDIA Tesla K80c and the NVIDIA GeForce GTX 680).

To run interactively on this node, you can use the following command:

hpclogin1: $ k40sh

This command executes a bash script that submits an interactive job to the k40_interactive queue.
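
Once you have an interactive shell on one of these nodes, a quick way to verify that you can actually launch work on a GPU is to compile and run a trivial kernel with nvcc, e.g. nvcc vecadd.cu -o vecadd && ./vecadd (the file name is just an example). The sketch below is a generic CUDA vector addition, not a DTU-specific test:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Trivial element-wise vector addition, just to confirm that a kernel
   can be launched on the GPU you were given. */
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}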

Please note that multiple users are allowed on these nodes, and every user can access all the GPUs on a node. The GPUs are set to the “Exclusive process” compute mode, which means that you will encounter a “device not available” (or similar) error if someone else is already using the GPU you are trying to access.

To avoid too many conflicts, we ask you to follow this code of conduct:

  • Please monitor which GPUs are currently occupied using the command nvidia-smi and preferably select unoccupied GPUs (e.g., using cudaSetDevice()) for your application; see the sketch after this list.
  • If you need to run on all CPU cores, e.g., for performance profiling, please make sure that you are not disturbing other users.
  • We kindly ask you to use the interactive nodes mainly for development, profiling, and short test jobs.
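
Because the GPUs run in the “Exclusive process” mode, one way to follow the first point programmatically is to try the devices one by one and keep the first GPU for which a CUDA context can actually be created; context creation is exactly what fails when another process already owns a device. The sketch below illustrates the idea; it is only an assumption about how you might structure this in your own code, not a DTU-provided utility, and the helper name pick_free_gpu() is hypothetical:

#include <stdio.h>
#include <cuda_runtime.h>

/* Return the index of the first GPU whose context can be created,
   or -1 if every GPU is currently occupied by another process. */
int pick_free_gpu(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;

    for (int d = 0; d < count; ++d) {
        if (cudaSetDevice(d) != cudaSuccess)
            continue;
        /* Force context creation; in exclusive-process mode this is the
           call that fails if someone else is already using the device. */
        if (cudaFree(0) == cudaSuccess)
            return d;
        cudaDeviceReset();  /* clean up before trying the next device */
    }
    return -1;
}

int main(void)
{
    int dev = pick_free_gpu();
    if (dev < 0) {
        fprintf(stderr, "No free GPU found - check nvidia-smi.\n");
        return 1;
    }
    printf("Running on GPU %d\n", dev);
    /* ... launch your kernels as usual ... */
    return 0;
}

Alternatively, you can restrict which devices your process sees at all by setting the CUDA_VISIBLE_DEVICES environment variable to the index of an unoccupied GPU reported by nvidia-smi before starting your application.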

If you have further questions or issues using the GPUs, please write to support@hpc.dtu.dk.

Requesting GPUs under LSF10

The syntax for requesting GPUs in our setup has changed from LSF9 to LSF10.
To submit jobs to the LSF10 setup, please follow these instructions:
Using GPUs under LSF10
