Resource-efficient AI model parallelisation on LUMI supercomputer
Event details
Location: Online
Date: 22.1.2026
Time: 11.00-12.30 (CET)
This webinar explores how to harness the full potential of the LUMI supercomputer for large-scale AI model training through efficient use of HPC resources. Participants will learn how thoughtful neural network architecture design and the right use of parallelisation techniques, such as model, data, and tensor parallelism, can significantly improve performance and resource efficiency.
The session will demonstrate how frameworks like PyTorch and TensorFlow can be leveraged to distribute training workloads effectively across multiple GPUs and nodes on LUMI. Attendees will gain practical insights into balancing computational loads, minimising communication overhead, and achieving scalability for advanced AI workloads in an HPC environment.
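To give a feel for the data-parallel idea mentioned above, here is a minimal pure-Python sketch (a hypothetical illustration, not LUMI-specific or framework code): each worker computes the gradient on its own shard of the batch, and the gradients are averaged across workers (the all-reduce step) before a single shared weight update.

```python
# Minimal data-parallelism sketch (pure Python, hypothetical example):
# each "worker" holds a replica of the weights, computes the gradient
# of a squared-error loss on its batch shard, and the gradients are
# averaged (the all-reduce step) before one shared update is applied.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x
    # on this worker's shard of (x, target) pairs.
    n = len(shard)
    return sum(2 * (w * x - t) * x for x, t in shard) / n

def all_reduce_mean(grads):
    # Stand-in for the collective that averages gradients across workers.
    return sum(grads) / len(grads)

def data_parallel_step(w, batch, num_workers, lr=0.1):
    # Split the global batch into one shard per worker (round-robin).
    shards = [batch[i::num_workers] for i in range(num_workers)]
    grads = [local_gradient(w, s) for s in shards]
    return w - lr * all_reduce_mean(grads)

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # targets fit w = 2
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch, num_workers=2)
print(round(w, 3))  # → 2.0
```

With equal-sized shards, the averaged gradient here equals the full-batch gradient, which is why data parallelism preserves the training dynamics while spreading the compute; frameworks such as PyTorch perform the same averaging with a GPU-aware all-reduce.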
The speaker of this webinar is Dr. Vijeta Sharma.
Who is the webinar for?
This webinar is designed for AI practitioners, computational scientists, and HPC users who aim to train large-scale machine learning models efficiently on modern supercomputing infrastructures. It is ideal for professionals seeking to optimise their deep learning workflows by leveraging advanced parallelisation techniques and maximising GPU performance on systems like LUMI. Participants with a background in AI, data analytics, or scientific computing who wish to scale their models and improve training efficiency in high-performance environments will particularly benefit from this session.
Key takeaways
- Understand the fundamentals of model, data, and tensor parallelisation.
- Learn strategies for efficient AI training on HPC systems like LUMI.
- Explore practical examples using PyTorch and TensorFlow.
- Gain insights into optimising GPU utilisation for scalable AI workloads.
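Complementing the data-parallel takeaway, model parallelism splits the network itself across devices. The sketch below is a hypothetical pure-Python illustration (device placement is simulated with labels, not real GPU code): two stages of a tiny network live on different "GPUs", and the activation is handed from one stage to the next, mimicking how a model too large for a single GPU can be partitioned.

```python
# Minimal model-parallelism sketch (hypothetical; devices are labels,
# not real GPUs): the network's layers are partitioned across two
# "devices", and the activation flows from stage to stage.

class Linear:
    def __init__(self, weight, bias, device):
        # `device` records where this stage would live in a real setup.
        self.weight, self.bias, self.device = weight, bias, device

    def forward(self, x):
        return self.weight * x + self.bias

# Stage 0 is placed on "gpu0", stage 1 on "gpu1".
stage0 = Linear(weight=2.0, bias=1.0, device="gpu0")
stage1 = Linear(weight=3.0, bias=-1.0, device="gpu1")

def forward(x):
    h = stage0.forward(x)      # computed on gpu0
    # In a real setup the activation `h` is transferred gpu0 -> gpu1 here,
    # which is the communication cost model parallelism must amortise.
    return stage1.forward(h)   # computed on gpu1

print(forward(1.0))  # (2*1 + 1) = 3, then (3*3 - 1) → 8.0
```

The inter-stage transfer marked in the comment is exactly the communication overhead the webinar discusses minimising; tensor parallelism applies the same idea within a single layer by sharding its weight matrix across devices.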
Registration for the webinar
This webinar is free of charge.