Fundamentals of Deep Learning for Multi-GPUs

Offered by NVIDIA Deep Learning Institute

Overview

This workshop introduces strategies for accelerating deep neural network training using multi-GPU systems. By leveraging distributed deep learning frameworks and workflows, participants learn how to scale training across GPUs efficiently—reducing training time for data-intensive applications while preserving model accuracy.

A central focus is simplifying distributed software development with Horovod, enabling a seamless transition from single-GPU to multi-GPU training, as sketched below.


Learning Outcomes

By the end of the course, you will be able to:

  • Explain and apply stochastic gradient descent (SGD) in parallelized training.
  • Understand how batch size choices impact both performance and model accuracy (see the scaling sketch after this list).
  • Convert a single-GPU training pipeline into a Horovod-enabled multi-GPU implementation.
  • Apply best practices for maintaining accuracy and stability during large-scale training.

Technologies Covered

  • TensorFlow
  • Keras
  • Horovod
