
High-Performance Computing

Saint Louis University has a campus-wide high-performance computing (HPC) cluster (Aries) run by the Research Computing Group for all SLU faculty and students. Data in Science Technologies manages all account requests and technical and support questions.

Send account requests and technical and support questions to hpc@slu.edu. Information about the cluster is available at aries.ds.slu.edu/docs/. Please note this link can only be accessed on campus or via VPN.

Aries Specs

Aries has 43 CPU nodes, a mixture of Dell PowerEdge C6320, C6220 and C6220 II servers with 20-32 cores and 62-512 GB of RAM each, for a total of 1,208 cores. It also has two V100 GPU nodes. The cluster is deployed using Rocky Linux 8 and Bright Cluster Manager 9.1, with job scheduling handled by SLURM v20. Aries uses a "Type 1" network in which only the head node is accessible from connections external to the cluster; a flat network is used for internalnet and IPMI. There are five 40 Gbps InfiniBand switches in a leaf/spine arrangement, with four leaf switches connected to the spine.

Storage is served over InfiniBand and consists of a Dell ME4084 (220 TB), a SuperMicro SuperChassis JBOD (600 TB HDD) running ZFS, and a SuperMicro SuperStorage NVMe JBOF (240 TB SSD) running BeeGFS for scratch. SLU provides access over SSH as well as Open OnDemand (Jupyter Notebooks and RStudio). Users can access software from shared modules installed using Spack, or deploy their custom code in local conda environments.
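
Because jobs on Aries are scheduled through SLURM and software is typically loaded from shared modules or a local conda environment, a common pattern is to size a program to the cores SLURM actually allocates rather than to the whole node. The following is a minimal, hypothetical Python sketch (not taken from the cluster documentation) of a script meant to run inside a SLURM job; SLURM_CPUS_PER_TASK is a standard SLURM environment variable, and the fallback value of 1 is an assumption for running the script interactively outside the scheduler.

    import os
    from multiprocessing import Pool

    def square(x):
        # Trivial stand-in for a real per-core workload.
        return x * x

    if __name__ == "__main__":
        # SLURM exports the allocated core count to the job;
        # fall back to a single core when run outside the scheduler.
        n_cores = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))
        with Pool(processes=n_cores) as pool:
            results = pool.map(square, range(100))
        print(f"Ran on {n_cores} core(s); first results: {results[:5]}")

In practice such a script would be launched with sbatch from a short batch file that requests the desired cores; partition names on Aries are not listed here, so none are assumed.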

SLU recently received funding from the NSF Campus Cyberinfrastructure program (award #2430236) for "CC* Compute-Campus: Modernizing Campus Cyberinfrastructure for AI-Enhanced Research and Education (ModernCARE)." This new HPC system will have three GPU nodes, each with two NVIDIA L40S 48 GB GPUs, two Intel Xeon Gold 6526Y (2.8 GHz, 16C/32T) CPUs and 512 GB of RAM; three CPU nodes, each with two Intel Xeon Gold 6526Y (2.8 GHz, 16C/32T) CPUs and 1,024 GB of RAM; and two H100 80 GB GPU nodes, each with two Intel Xeon Gold 6526Y (2.8 GHz, 16C/32T) CPUs and 1,024 GB of RAM. The cluster is deployed using Rocky Linux 8 and Bright Cluster Manager 9.1, with job scheduling handled by SLURM v20. The spine-leaf InfiniBand network is based on NVIDIA's Quantum HDR InfiniBand 200 Gbps switches, and internal management networking is provided by an NVIDIA MSN2201 switch. GPU-based AI/ML workflows will be able to access 500 TB of BeeGFS storage. This system will be accessible to all SLU users over SSH as well as Open OnDemand (Jupyter Notebooks and RStudio), and it provides the infrastructure to be readily expanded with additional compute nodes via a condominium model.
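
On the new GPU nodes, a quick sanity check after a job starts is to confirm which GPUs the scheduler has actually exposed to the job. The sketch below is a minimal, hypothetical example, assuming the standard NVIDIA driver tooling (nvidia-smi with its --query-gpu flags) is on the PATH and that the scheduler sets CUDA_VISIBLE_DEVICES for GPU allocations, as SLURM's GPU support conventionally does; partition and GRES names for the new cluster are not assumed.

    import os
    import subprocess

    def visible_gpus():
        # Query the name and total memory of each GPU visible to this job
        # via the standard nvidia-smi CSV query interface.
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in out.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        # CUDA_VISIBLE_DEVICES is conventionally set by the scheduler for GPU jobs.
        print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))
        for gpu in visible_gpus():
            print(gpu)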