Computational Research Center

CUDA Programs on Shale


CUDA 3.0
Each of the CUDA nodes has two NVIDIA Tesla C1060 GPU cards installed. They use the CUDA 3.0 toolkit for Linux.

The CUDA compiler is available only on nodes 24-27 of the Shale Production cluster. To develop CUDA code, you must first move to one of these nodes using ssh, for example: ssh node24


CUDA Environment Variables
Add the environment variable settings shown below to your account's ".bash_profile" file, which is located in your home directory.

NVCC=/usr/local/cuda/bin/nvcc
export NVCC

LD_LIBRARY_PATH=/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/usr/local/cuda/lib:/usr/local/cuda/lib64
export LD_LIBRARY_PATH

INCLUDES="-I ~/NVIDIA_GPU_Computing_SDK/C/common/inc -I /usr/include -I /usr/local/include "
export INCLUDES
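
Note that nvcc does not read the INCLUDES variable automatically; it is presumably intended to be passed on the compile line whenever your program uses headers from the SDK or the system include directories, for example:

nvcc $INCLUDES nameOfYourSourceFile.cu -o nameOfYourExecutableFile.x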


CUDA Compiler Alias
Add the following alias to your account's ".bash_profile" file.

alias nvcc=$NVCC

Once you have updated your ".bash_profile" file as described above, log out, and then log back into any of the Shale CUDA nodes (node24, node25, node26, or node27). Then proceed to the CUDA SDK installation.
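
As a quick check that the alias and environment variables were picked up by your new login session, you can ask the compiler for its version information:

nvcc --version

With the CUDA 3.0 toolkit installed on the CUDA nodes, this should report release 3.0.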


Installing NVidia CUDA SDK
Once you have updated your ".bash_profile" file as described above, you may install the CUDA SDK in your home directory by typing the "sh" command followed by the full path of the SDK installer file on the command line. Because the SDK installs into your home directory, you can run this command from any node in the cluster:

Example: sh /d/clusterprograms/cudasdk/gpusdk_3.0_linux.run

The SDK will then be installed in the following location: ~/NVIDIA_GPU_Computing_SDK

You can compile all of the SDK examples by typing the following commands:

cd ~/NVIDIA_GPU_Computing_SDK/C
make

The compiled SDK examples will be placed in the following directory: ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release
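
For example, the SDK's deviceQuery sample is a convenient way to confirm that the two Tesla C1060 cards in a CUDA node are visible to your programs:

cd ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release
./deviceQuery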


Compiling Your Own CUDA Programs
Assuming that you have set up your environment variables properly and are logged into a CUDA node, you can compile your CUDA program with the following command line:

nvcc nameOfYourSourceFile.cu -o nameOfYourExecutableFile.x
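
If you are new to CUDA, the following is a minimal sketch of what such a source file might contain. The file name "mycuda.cu" and its contents are only an illustration, not part of the Shale setup; the program adds two small vectors on the GPU and prints one element of the result so you can confirm the card did the work.

// mycuda.cu -- hypothetical example, not an existing Shale file:
// adds two small vectors on one of the Tesla GPUs.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void addVectors(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 256;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];

    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    /* Allocate device memory and copy the inputs to the GPU. */
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* One block of 256 threads covers the whole array. */
    addVectors<<<1, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[10] = %f (expected %f)\n", hc[10], ha[10] + hb[10]);

    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
    return 0;
}

Saved as "mycuda.cu", this file would be compiled with the same command pattern shown above, for example: nvcc mycuda.cu -o mycuda.x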


Executing Your CUDA Program
The CUDA nodes are not controlled by the cluster scheduler. In fact, if you attempt to submit a CUDA-based job to the scheduler, the job will fail because the scheduler will attempt to run it on a non-CUDA computation node.

To execute your CUDA program, you must first log into one of the four CUDA nodes listed above. Once on the appropriate node, simply type the full path, or the "./" path, to your compiled CUDA executable. For instance, if you compiled a CUDA program named "mycuda.exe" and placed it in the "~/cuda/" directory, you could execute it using either of the following two examples.

CUDA Command Line Example 1:

~/cuda/mycuda.exe -myprogramflags

CUDA Command Line Example 2:

cd ~/cuda
./mycuda.exe -myprogramflags

You must NOT use a PBS script to run your CUDA job.