Commit 664ee96

Add gsoc contributor info and intro blog (#303)
* Add personal, project details and intro blog
* updated names.txt and terms.txt
* compressed pdf and changed banner
* Update _data/contributors.yml
* update mentor list order
* compressed banner and added photo

Co-authored-by: Vassil Vassilev <[email protected]>
1 parent 03558fa commit 664ee96

File tree

8 files changed: +111 −0 lines changed


.github/actions/spelling/allow/names.txt

Lines changed: 6 additions & 0 deletions
@@ -2,6 +2,7 @@ Abdulrasool
 Abdelrhman
 Abhigyan
 Abhinav
+Aditi
 Alexandru
 Alja
 Anandh
@@ -32,6 +33,7 @@ Ilieva
 Isemann
 JLange
 Jomy
+Joshi
 Jurgaityt
 Kyiv
 LBNL
@@ -43,6 +45,7 @@ Mabille
 Manipal
 Matevz
 Mihaly
+Milind
 Militaru
 Mircho
 Mozil
@@ -99,6 +102,7 @@ abhi
 abhinav
 acherjan
 acherjee
+aditi
 aditya
 adityapand
 adityapandeycn
@@ -142,6 +146,7 @@ isaacmoralessantana
 izvekov
 jacklqiu
 jeaye
+joshi
 junaire
 kausik
 kchristin
@@ -163,6 +168,7 @@ mfoco
 mihail
 mihailmihov
 mihov
+milind
 mizvekov
 mozil
 mvassilev

.github/actions/spelling/allow/terms.txt

Lines changed: 3 additions & 0 deletions
@@ -1,4 +1,5 @@
 AARCH
+AIML
 BGZF
 Caa
 CINT
@@ -16,9 +17,11 @@ JIT'd
 Jacobians
 LLMs
 LLVM
+LULESH
 NVIDIA
 NVMe
 PTX
+SBO
 Slib
 Softsusy
 Superbuilds

_data/contributors.yml

Lines changed: 25 additions & 0 deletions
@@ -405,6 +405,31 @@
     proposal: /assets/docs/Abhinav_Kumar_Proposal_GSoC_2025.pdf
     mentors: Anutosh Bhat, Vipul Cariappa, Aaron Jomy, Vassil Vassilev
 
+- name: Aditi Milind Joshi
+  photo: Aditi.jpeg
+  info: "Google Summer of Code 2025 Contributor"
+  github: "https://github.com/aditimjoshi"
+  linkedin: "https://www.linkedin.com/in/aditi-joshi-149280309/"
+  education: "B.Tech in Computer Science and Engineering (AIML), Manipal Institute of Technology, Manipal, India"
+  active: 1
+  projects:
+    - title: "Implement and improve an efficient, layered tape with prefetching capabilities"
+      status: Ongoing
+      description: |
+        Automatic Differentiation (AD) is a computational technique that enables
+        efficient and precise evaluation of derivatives for functions expressed in code.
+        Clad is a Clang-based automatic differentiation tool that transforms C++ source
+        code to compute derivatives efficiently. A crucial component for AD in Clad is the
+        tape, a stack-like data structure that stores intermediate values for reverse mode AD.
+        While benchmarking, it was observed that the tape operations of the current implementation
+        were significantly slowing down the program. This project aims to optimize and generalize
+        the Clad tape to improve its efficiency, introduce multilayer storage, enhance thread safety,
+        and enable CPU-GPU transfer.
+      proposal: /assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf
+      mentors: Aaron Jomy, David Lange, Vassil Vassilev
+
 - name: "This could be you!"
   photo: rock.jpg
   info: See <a href="/careers">openings</a> for more info

_pages/team/aditi-milind-joshi.md

Lines changed: 10 additions & 0 deletions
New file:

---
title: "Compiler Research - Team - Aditi Milind Joshi"
layout: gridlay
excerpt: "Compiler Research: Team members"
sitemap: false
permalink: /team/AditiMilindJoshi

---

{% include team-profile.html %}
Lines changed: 67 additions & 0 deletions
New file:

---
title: "Implement and improve an efficient, layered tape with prefetching capabilities"
layout: post
excerpt: "A GSoC 2025 project focusing on optimizing Clad's tape data structure for reverse-mode automatic differentiation, introducing slab-based memory, thread safety, multilayer storage, and future support for CPU-GPU transfers."
sitemap: true
author: Aditi Milind Joshi
permalink: blogs/gsoc25_aditi_introduction_blog/
banner_image: /images/blog/gsoc-clad-banner.png
date: 2025-05-22
tags: gsoc clad clang c++
---

### Introduction

I'm Aditi Joshi, a third-year B.Tech undergraduate in Computer Science and Engineering (AIML) at Manipal Institute of Technology, Manipal, India. This summer I will be contributing to Clad as part of Google Summer of Code 2025, working on the project "Implement and improve an efficient, layered tape with prefetching capabilities."

**Mentors:** Aaron Jomy, David Lange, Vassil Vassilev

### Briefly about Automatic Differentiation and Clad

Automatic Differentiation (AD) is a computational technique that enables efficient and precise evaluation of derivatives for functions expressed in code. Unlike numerical differentiation, which suffers from approximation errors, or symbolic differentiation, which can be computationally expensive, AD systematically applies the chain rule to compute gradients with minimal overhead.

Clad is a Clang-based automatic differentiation tool that transforms C++ source code to compute derivatives efficiently. By leveraging Clang’s compiler infrastructure, Clad performs source code transformations to generate derivative code for given functions, enabling users to compute gradients without manually rewriting their implementations. It supports both forward-mode and reverse-mode differentiation, making it useful for a range of applications.

### Understanding the Problem

In reverse-mode automatic differentiation (AD), we compute gradients efficiently for functions with many inputs and a single output. To do this, we need to store intermediate results during the forward pass for use during the backward (gradient) pass. This is where the tape comes in — a stack-like data structure that records the order of operations and their intermediate values.

Currently, Clad uses a monolithic memory buffer as the tape. While this approach is lightweight for small problems, it becomes inefficient and non-scalable for larger applications or parallel workloads. Frequent memory reallocations, lack of thread safety, and the absence of support for offloading make it a limiting factor in Clad’s usability in complex scenarios.

### Project Goals

The aim of this project is to design a more efficient, scalable, and flexible tape. Some of the key enhancements include:

- Replacing dynamic reallocation with a slab-based memory structure to minimize copying overhead.
- Introducing Small Buffer Optimization (SBO) for short-lived tapes.
- Making the tape thread-safe by using locks or atomic operations.
- Implementing multi-layer storage, where parts of the tape are offloaded to disk to manage memory better.
- (Stretch Goal) Supporting CPU-GPU memory transfers for future heterogeneous computing use cases.
- (Stretch Goal) Introducing checkpointing for optimal memory-computation trade-offs.

### Implementation Plan

The first phase of the project will focus on redesigning Clad’s current tape structure to use a slab-based memory model instead of a single contiguous buffer. This change will reduce memory reallocation overhead by linking fixed-size slabs dynamically as the tape grows. To improve performance in smaller workloads, I’ll also implement Small Buffer Optimization (SBO) — a lightweight buffer embedded directly in the tape object that avoids heap allocation for short-lived tapes. These improvements are aimed at making the tape more scalable, efficient, and cache-friendly.

Once the core memory model is in place, the next step will be to add thread safety to enable parallel usage. The current tape assumes single-threaded execution, which limits its applicability in multi-threaded scientific workflows. I’ll introduce synchronization mechanisms such as std::mutex to guard access to tape operations and ensure correctness in concurrent scenarios. Following this, I will implement a multi-layered tape system that offloads older tape entries to disk when memory usage exceeds a certain threshold — similar to LRU-style paging — enabling Clad to handle much larger computation graphs.

As stretch goals, I plan to explore CPU-GPU memory transfer support for the slabbed tape and introduce basic checkpointing functionality to recompute intermediate values instead of storing them all, trading memory usage for computational efficiency. Throughout the project, I’ll use benchmark applications like LULESH to evaluate the performance impact of each feature and ensure that the redesigned tape integrates cleanly into Clad’s AD workflow. The final stages will focus on extensive testing, documentation, and contributing the changes back to the main repository.

### Why I Chose This Project

My interest in AD started when I was building a neural network from scratch using CUDA C++. That led me to Clad, where I saw the potential of compiler-assisted differentiation. I’ve since contributed to the Clad repo by investigating issues and raising pull requests, and I’m looking forward to pushing the limits of what Clad’s tape can do.

This project aligns perfectly with my interests in memory optimization, compiler design, and parallel computing. I believe the enhancements we’re building will make Clad significantly more powerful for real-world workloads.

### Looking Ahead

By the end of the summer, I hope to deliver a robust, feature-rich tape that enhances Clad’s reverse-mode AD performance across CPU and GPU environments. I’m excited to contribute to the scientific computing community and gain deeper insights into the world of compilers.

---

### Related Links

- [Clad Repository](https://github.com/vgvassilev/clad)
- [Project Description](https://hepsoftwarefoundation.org/gsoc/2025/proposal_Clad-ImproveTape.html)
- [GSoC Project Proposal](/assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf)
- [My GitHub Profile](https://github.com/aditimjoshi)

images/blog/gsoc-clad-banner.png

99.5 KB

images/team/Aditi.jpeg

73.4 KB
