Computational Research Center

Linux Clusters

Cluster Software

The CRC maintains the standard GNU compilers, OpenMPI, and MVAPICH2 version 1.9, along with a two-seat floating license for Intel Cluster Studio XE 2013.
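
As a quick way to verify that the compilers and an MPI stack are working together, a minimal MPI program can be compiled with the mpicc wrapper supplied by OpenMPI or MVAPICH2 and launched across a node's cores. The sketch below is illustrative only; the file name hello_mpi.c and the process count are not CRC-specific.

    /* hello_mpi.c - minimal MPI sanity check (illustrative example) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count    */
        MPI_Get_processor_name(name, &len);      /* host this rank runs on */
        printf("Rank %d of %d running on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Built with "mpicc hello_mpi.c -o hello_mpi" and run with "mpirun -np 8 ./hello_mpi", each rank should report the compute node it landed on.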

Various software libraries and applications are maintained for the Linux cluster environments, including FFTW3, GROMACS, Schrödinger, NWChem, and Quantum ESPRESSO. Additional software can be installed upon request, provided the user is able to supply the appropriate licensing.
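
For the numerical libraries listed above, user codes typically only need the right compile and link flags. As an example of what a small FFTW3 test might look like (the file name and transform length are illustrative, and the include/library paths depend on where the CRC installation lives), a one-dimensional complex transform can be built with something like "gcc fftw_demo.c -o fftw_demo -lfftw3 -lm":

    /* fftw_demo.c - illustrative 1-D complex DFT using FFTW3 */
    #include <fftw3.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 8;
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

        /* Create the plan first, then fill the input signal. */
        fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        for (int i = 0; i < n; i++) {
            in[i][0] = (double)i;  /* real part      */
            in[i][1] = 0.0;        /* imaginary part */
        }

        fftw_execute(plan);        /* run the forward transform */
        for (int i = 0; i < n; i++)
            printf("out[%d] = %f + %fi\n", i, out[i][0], out[i][1]);

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }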

Windows Users: You will need additional software, typically an SSH client, to connect to the CRC Linux HPC cluster environments.

Cluster Hardware

Hodor - Sandy Bridge Linux Cluster

  • 32 Dell PowerEdge R720 compute nodes
    • PCIe 3.0 expansion bus
    • Dual 64-bit Intel E5-2643 3.3GHz Sandy Bridge processors (8 cores total)
    • 64GB of RAM per node.
    • Dual 146GB 15K RPM drives in a mirrored RAID configuration
    • Private 1Gbit Ethernet Administration Network
    • Private 56Gbit FDR 1-to-1 InfiniBand Research Network
  • Single Dell PowerEdge R720 head node
    • PCIe 3.0 expansion bus
    • Dual 64-bit Intel E5-2650 2.0GHz Sandy Bridge processors (16 cores total)
    • 64GB of RAM
    • Dual 1TB 7200 RPM drives in a mirrored RAID configuration
    • Private 1Gbit Ethernet Administration Network
    • Private 56Gbit FDR 1-to-1 InfiniBand Research Network
    • Public 10Gbit Ethernet Network
  • Rocks+ for Cluster Management
  • Dell NSS-HA storage appliance
    • Mounted to HPC resources via NFS over private InfiniBand
    • Provides 110TB of usable storage.
    • XFS File System
  • MOAB Cluster Suite version 6.2
  • RHEL 6.2 OS

Bran - Sandy Bridge Linux Experimental Cluster

  • 4 Dell PowerEdge R720 compute nodes
    • PCIe 3.0 expansion bus
    • Dual 64-bit Intel E5-2643 3.3GHz Sandy Bridge processors (8 cores total)
    • 64GB of RAM per node.
    • Dual 146GB 15K RPM drives in a mirrored RAID configuration
    • Private 1Gbit Ethernet Administration Network
    • Private 56Gbit FDR 1-to-1 InfiniBand Research Network
  • Single Dell PowerEdge R720 head node
    • PCIe 3.0 expansion bus
    • Dual 64-bit Intel E5-2650 2.0GHz Sandy Bridge processors (16 cores total)
    • 64GB of RAM
    • Dual 1TB 7200 RPM drives in a mirrored RAID configuration
    • Private 1Gbit Ethernet Administration Network
    • Private 56Gbit FDR 1-to-1 InfiniBand Research Network
    • Public 10Gbit Ethernet Network
  • Rocks+ for Cluster Management
  • Dell NSS-HA storage appliance
    • Mounted to HPC resources via NFS over private InfiniBand
    • Provides 110TB of usable storage.
    • XFS File System
  • MOAB Cluster Suite version 5
  • RHEL 6.2 OS