Computing Resources#

Imperial HPC#

All students will be granted access to Imperial College’s High-Performance Computing (HPC) cluster CX3. This powerful computing resource is designed to handle large-scale, non-interactive tasks that require high throughput and parallel processing, including compute- and data-intensive projects as well as AI/Machine Learning applications with GPU acceleration.

The CX3 HPC cluster comprises:

  • 325 compute nodes, each with dual AMD EPYC 7742 processors (128 cores, 1 TB RAM);

  • 53 compute nodes with dual Intel Icelake Xeon Platinum 8358 (64 cores, 500 GB RAM);

  • 12 high-memory nodes with dual AMD EPYC 7742 processors (128 cores, 4 TB RAM);

  • 11 GPU nodes featuring dual AMD EPYC 7742 and 8 Quadro RTX 6000 GPUs per node;

  • 7 GPU nodes with dual Intel Icelake Xeon Platinum 8358 and 8 L40S 48 GB GPUs per node;

  • 2 GPU nodes with dual Intel Icelake Xeon Platinum 8358 and 2 A100 40 GB GPUs per node;

  • 2 GPU nodes with dual Intel Icelake Xeon Platinum 8358 and 4 A40 48 GB GPUs per node;

  • Network interconnect at 100GbE and direct access to the Research Data Store.

In total, the CX3 cluster offers 412 compute nodes, encompassing 48,640 cores, 421.5 TB RAM, 88 RTX 6000 GPUs, 4 A100 40 GB GPUs, 56 L40S 48 GB GPUs and 8 A40 48 GB GPUs.

Documentation and Tutorials#

The user documentation for Imperial’s HPC can be found in the RCS User Guide. A variety of helpful tips for using RCS resources and related tools and services can be found here. For a fairly comprehensive introduction to the HPC capabilities at Imperial, please refer to the Presentation by Katerina Michalickova.
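Work on CX3 is run through a batch scheduler rather than interactively. As a rough sketch of what a submission looks like (assuming the PBS Pro scheduler described in the RCS User Guide; the resource figures, module name, and script name below are illustrative placeholders, not prescribed values), a job script might resemble:

```shell
#!/bin/bash
# Illustrative PBS job script -- the resource figures and module name are
# placeholders; consult the RCS User Guide for the values valid on CX3.
#PBS -l select=1:ncpus=8:mem=32gb   # one node, 8 cores, 32 GB RAM
#PBS -l walltime=02:00:00           # maximum run time of two hours

cd "$PBS_O_WORKDIR"                 # start in the directory the job was submitted from

module load anaconda3/personal      # hypothetical module name -- check `module avail`
python my_analysis.py               # your own program
```

A script like this would be submitted with `qsub job.sh`, and `qstat` shows its position in the queue. GPU nodes are requested with additional options in the `select` statement; see the RCS User Guide for the exact syntax and the queue limits that apply.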

Departmental GPU resources#

The Department of Earth Science and Engineering offers restricted access to its departmental cluster, which currently includes 16 Nvidia A100 40GB GPUs, with 2 more nodes expected to be available by December 2025.

While the Imperial HPC facilities and your laptop will likely meet most requirements, you can submit a request for review if you have a compelling case for using the departmental GPU cluster. To request access, please:

  1. Discuss your needs with your main supervisor, who will assess whether the departmental GPU cluster is necessary for your project.

  2. If your main supervisor agrees that you need access to the departmental GPU cluster, submit a request for review to Marijan Beg and CC your main supervisor. Please ensure you include a brief justification for your request, explaining in particular why the Imperial HPC facilities and your laptop are insufficient for your project.

  3. After Marijan reviews your request, he will forward it to the computing team.

Please note that the computing team needs at least a week’s notice before providing you access once your request is approved.

Other resources#

If your IRP requires anything beyond the aforementioned resources, please communicate your specific needs to your main supervisor as soon as possible. For instance, if you need access to one or more desktops or workstations, or any specialist hardware, software, or cloud resources, discuss it with your main supervisor, who will raise it with the computing team at the earliest opportunity. The computing team will require a reasonable amount of notice to respond to these requests.

Hardware#

There is a limited stock of spare desktops and workstations that can be used for the IRP. Some customisation options are available, but the computing team may not be able to accommodate all requests.

  • Please give at least a week’s notice if you require access to a standard desktop (i7 CPU, 16 GB RAM); please specify Windows or Linux and the version required.

  • Please give at least a month’s notice if you need anything more than a standard desktop. Please specify the minimum and recommended CPU, GPU, RAM and disk, and the computing team will do their best to match these. They cannot put more than 128 GB of RAM in a machine, and GPU choice is limited to older 2-4 GB cards. Be aware it may not be possible to accommodate the request.

Software#

  • Please give at least a couple of days’ notice if you need specialist software that the department already has access to. (Your supervisor will be aware of any relevant software and whether it is something we usually have access to.)

  • Please give at least a month’s notice if we do not have the software. Be aware it may not be possible to accommodate the request.

Cloud#

If you need access to Azure (or other) cloud resources, submit a request for review to Marijan Beg and CC your main supervisor. Arrangements would need to be made to cover any costs. Please give at least a week’s notice, but it may not be possible to accommodate all requests.

Accessing HPC and Other Resources Off-Campus#

To access resources from outside Imperial’s network, such as when connecting to HPC systems remotely, you must use a service called Unified Access, which operates through a client application known as Zscaler.

Please follow the instructions provided in the link above. MSc student accounts are typically configured to use Zscaler by default. However, if you encounter any issues with your account, you will need to contact ICT for further assistance.
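Once Zscaler is connected, the CX3 login nodes can be reached over SSH like any other remote host. As a minimal sketch, a `~/.ssh/config` entry might look like the following (the host name is an assumption to verify against the RCS User Guide, and `abc123` is a placeholder for your own college username):

```
# ~/.ssh/config -- illustrative entry; verify the host name in the RCS User Guide
Host cx3
    HostName login.hpc.imperial.ac.uk   # assumed RCS login address
    User abc123                         # replace with your college username
```

With an entry like this in place, `ssh cx3` opens a session on a login node without retyping the full address each time.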

Computing Team contact#

  • After you receive an approval from both your supervisor and Marijan Beg, please forward it to Francois van Schalkwyk (john.van-schalkwyk00@imperial.ac.uk) if you have a query regarding departmental GPU and cloud resources.

  • After an agreement with your supervisor, please contact Gareth Oliver (w.oliver@imperial.ac.uk) if you have a query regarding hardware or software resources.