R2

R2 is a heterogeneous compute cluster provided by the Vice President of Research. It consists of 31 compute nodes and 5 GPU nodes, each with dual Intel Xeon E5-2680 v4 CPUs. The GPU nodes each also have dual Nvidia Tesla P100 GPUs.

Access

To request an account, please email researchcomputing@boisestate.edu with the following information:

  • Request account on the HPC system: r2.boisestate.edu
  • Project: Project Name or Grant Information
  • Project PI name: PI's Name
  • Email address of requestor: user@boisestate.edu

Before granting access, the HPC administrator will verify that you are a valid user by contacting the project's PI.

Policies

Storage Policies

There are two storage technologies utilized on the cluster:

  • NFS (network file system): relatively slow, persistent storage; holds nightly incremental backups of /home and the network research storage locations
  • DAS (direct attached storage): hosts /home and the fast scratch file system, meant for fast disk I/O over Mellanox InfiniBand; no backups, and not intended for long-term storage

NFS storage provides the /home backups and the research/xxx file systems. These are designed for persistent, valuable data, not for direct cluster job I/O.

Home Directories

All user home directories (/home/$USER) reside on direct attached shared storage. Home directories are not visible to other users and are intended for your own personal storage, not for collaborative storage.

Scratch File System

Scratch is mounted under /scratch, with a symbolic link in each home directory at /home/$USER/scratch. Scratch should be the target of all job and other heavy I/O on the cluster. Unlike the home and research file systems, data on scratch has no guarantee of persistence: it will live as long as the file system does, but there are no backups, as the file system is too large to back up realistically.
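For example, a job can stage its inputs from home onto scratch, do its heavy I/O there, and copy only the results back to backed-up storage. Below is a minimal Python sketch of that pattern; the per-user /scratch/$USER layout and all file and directory names are hypothetical, for illustration only.

    import getpass
    import shutil
    from pathlib import Path

    user = getpass.getuser()
    home = Path.home()
    job_dir = Path("/scratch") / user / "my_job"   # hypothetical job directory
    job_dir.mkdir(parents=True, exist_ok=True)

    # Stage input data from persistent (backed-up) home storage onto fast scratch.
    shutil.copy2(home / "inputs" / "data.csv", job_dir / "data.csv")

    # ... run the heavy-I/O computation against files in job_dir ...

    # Copy results back to home once the job finishes; scratch is not backed up.
    (home / "results").mkdir(exist_ok=True)
    shutil.copy2(job_dir / "results.csv", home / "results" / "results.csv")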

Files in /scratch older than 30 days are automatically removed on a daily basis. If /scratch reaches 95% capacity or higher, the automated process will also begin removing files younger than four weeks old. This is necessary to keep the file system from filling completely.
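Since the purge is age-based, it can be useful to scan your scratch area for files nearing the 30-day limit before they disappear. A minimal Python sketch follows; the per-user /scratch/$USER path is an assumption, and the purge may key on a different timestamp than the modification time checked here.

    import getpass
    import time
    from pathlib import Path

    CUTOFF_DAYS = 30
    cutoff = time.time() - CUTOFF_DAYS * 24 * 3600
    scratch = Path("/scratch") / getpass.getuser()

    # Print files whose modification time falls outside the purge window.
    for path in scratch.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            print(path)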

Quotas

Currently, all R2 users have a home directory quota of 75 GB. This can be increased if necessary, but increases require approval by the HPC admin team. There are currently no size quotas on scratch.
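To estimate how close you are to the 75 GB limit, you can total the size of the files under your home directory. The Python sketch below is only an approximation of quota accounting (block allocation and sparse files can make the real figure differ), but it serves as a quick sanity check:

    from pathlib import Path

    QUOTA_BYTES = 75 * 1024**3  # 75 GB quota, approximated here as 75 GiB

    # Sum the apparent size of every regular file under $HOME.
    used = sum(p.stat().st_size for p in Path.home().rglob("*") if p.is_file())
    print(f"Home usage: {used / 1024**3:.1f} GiB of {QUOTA_BYTES / 1024**3:.0f} GiB")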

Specifications

  • Head Node(s) – High-Availability Failover
    • Motherboard: Dell PE R730/xd
    • CPU: Dual Intel Xeon E5-2623, 4 cores, 2.6 GHz
    • Memory: 64 GB
    • Ethernet: Dual 10 GigE, dual GigE
    • InfiniBand: Mellanox ConnectX-3 VPI FDR, QSFP+ 40/56 GbE
    • Data storage: Dell MD3460 12G SAS [60 TB], RAID-6, XFS
    • Data storage: Dell MD3000 6G SAS [360 TB], RAID-6, XFS
  • Compute Nodes
    • Motherboard: Dell PE R630
    • CPU: Dual Intel Xeon E5-2680 v4, 14 cores, 2.4 GHz
    • Memory: 192 GB
    • Ethernet: Quad-port GigE
    • InfiniBand: Mellanox ConnectX-3 VPI FDR, QSFP+ 40/56 GbE
  • GPU Nodes
    • Motherboard: Dell PE R730/xd
    • CPU: Dual Intel Xeon E5-2680 v4, 14 cores, 2.4 GHz
    • GPU: Dual Nvidia Tesla P100 (NVLink), 3584 CUDA cores each
    • Memory: 256 GB
    • Ethernet: Quad-port GigE
    • InfiniBand: Mellanox ConnectX-3 VPI FDR, QSFP+ 40/56 GbE
  • Computational Capacity (arithmetic reproduced in the sketch below)
    • Total CPU cores: 868
    • Total GPU cores: 35840
    • 62 CPUs @ ~570 GFLOPS = ~35.4 TFLOPS
    • 10 GPUs @ ~10.6 TFLOPS = ~106 TFLOPS
    • Total theoretical peak = ~141 TFLOPS
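The peak figures above are simple products of device counts and per-device peaks. This short sketch reproduces the arithmetic, using the ~570 GFLOPS per CPU and ~10.6 TFLOPS per GPU values stated in the list:

    # Theoretical peak: device count times per-device peak, in TFLOPS.
    cpu_tflops = 62 * 0.570   # 62 Xeon E5-2680 v4 CPUs at ~570 GFLOPS each
    gpu_tflops = 10 * 10.6    # 10 Tesla P100 GPUs at ~10.6 TFLOPS each
    print(f"CPU ~{cpu_tflops:.1f} TFLOPS + GPU ~{gpu_tflops:.1f} TFLOPS "
          f"= ~{cpu_tflops + gpu_tflops:.0f} TFLOPS total")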

Acknowledging Use

Please use the appropriate DOIs when acknowledging use of R2 in publications.