
Commit 8881c10

Add personal, project details and intro blog

Parent: d5ffb0b

File tree

4 files changed: +100 -0 lines


_data/contributors.yml

Lines changed: 23 additions & 0 deletions
@@ -296,6 +296,29 @@
       proposal: /assets/docs/de_la_torre_gonzalez_salvador_proposal_gsoc_2025.pdf
       mentors: Vassil Vassilev, Lukas Breitwieser
 
+- name: Aditi Milind Joshi
+  info: "Google Summer of Code 2025 Contributor"
+
+  github: "https://github.com/aditimjoshi"
+  linkedin: "https://www.linkedin.com/in/aditi-joshi-149280309/"
+  education: "B.Tech in Computer Science and Engineering (AIML), Manipal Institute of Technology, Manipal, India"
+  active: 1
+  projects:
+    - title: "Implement and improve an efficient, layered tape with prefetching capabilities"
+      status: Ongoing
+      description: |
+        Automatic Differentiation (AD) is a computational technique that enables
+        efficient and precise evaluation of derivatives for functions expressed in code.
+        Clad is a Clang-based automatic differentiation tool that transforms C++ source
+        code to compute derivatives efficiently. A crucial component for AD in Clad is the
+        tape, a stack-like data structure that stores intermediate values for reverse mode AD.
+        While benchmarking, it was observed that the tape operations of the current implementation
+        were significantly slowing down the program. This project aims to optimize and generalize
+        the Clad tape to improve its efficiency, introduce multilayer storage, enhance thread safety,
+        and enable CPU-GPU transfer.
+      proposal: /assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf
+      mentors: Vassil Vassilev, David Lange, Aaron Jomy
+
 - name: "This could be you!"
   photo: rock.jpg
   info: See <a href="/careers">openings</a> for more info

_pages/team/aditi-milind-joshi.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
---
title: "Compiler Research - Team - Aditi Milind Joshi"
layout: gridlay
excerpt: "Compiler Research: Team members"
sitemap: false
permalink: /team/AditiMilindJoshi

---

{% include team-profile.html %}
Lines changed: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
---
title: "Implement and improve an efficient, layered tape with prefetching capabilities"
layout: post
excerpt: "A GSoC 2025 project focusing on optimizing Clad's tape data structure for reverse-mode automatic differentiation, introducing slab-based memory, thread safety, multilayer storage, and future support for CPU-GPU transfers."
sitemap: true
author: Aditi Milind Joshi
permalink: blogs/gsoc25_aditi_introduction_blog/
banner_image: /images/blog/gsoc-banner.png
date: 2025-05-22
tags: gsoc clad clang c++
---

### Introduction

I'm Aditi Joshi, a third-year B.Tech undergraduate student in Computer Science and Engineering (AIML) at Manipal Institute of Technology, Manipal, India. This summer, I will be contributing to the Clad repository as part of Google Summer of Code 2025, working on the project "Implement and improve an efficient, layered tape with prefetching capabilities."

**Mentors:** Vassil Vassilev, David Lange, Aaron Jomy

### Briefly about Automatic Differentiation and Clad

Automatic Differentiation (AD) is a computational technique that enables efficient and precise evaluation of derivatives for functions expressed in code. Unlike numerical differentiation, which suffers from approximation errors, or symbolic differentiation, which can be computationally expensive, AD systematically applies the chain rule to compute gradients with minimal overhead.

Clad is a Clang-based automatic differentiation tool that transforms C++ source code to compute derivatives efficiently. By leveraging Clang’s compiler infrastructure, Clad performs source code transformations to generate derivative code for given functions, enabling users to compute gradients without manually rewriting their implementations. It supports both forward-mode and reverse-mode differentiation, making it useful for a range of applications.
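
To give a feel for how this looks from the user's side, here is a minimal sketch of differentiating a small function with Clad's reverse mode. It follows the usage shown in Clad's public examples (compiled with Clang and the Clad plugin); exact header paths and call signatures can vary between releases.

```cpp
#include "clad/Differentiator/Differentiator.h"
#include <cstdio>

// A function we want to differentiate: f(x, y) = x^2 * y + y^2.
double f(double x, double y) { return x * x * y + y * y; }

int main() {
  // clad::gradient generates reverse-mode derivative code for f at compile time.
  auto f_grad = clad::gradient(f);

  double dx = 0.0, dy = 0.0;
  // One backward pass fills in all partial derivatives at (x, y) = (3, 4).
  f_grad.execute(3.0, 4.0, &dx, &dy);

  std::printf("df/dx = %.1f, df/dy = %.1f\n", dx, dy);  // 24.0 and 17.0
  return 0;
}
```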

### Understanding the Problem

In reverse-mode automatic differentiation, we compute gradients efficiently for functions with many inputs and a single output. To do this, we need to store intermediate results during the forward pass for use during the backward (gradient) pass. This is where the tape comes in — a stack-like data structure that records the order of operations and their intermediate values.

Currently, Clad uses a monolithic memory buffer as the tape. While this approach is lightweight for small problems, it becomes inefficient and non-scalable for larger applications or parallel workloads. Frequent memory reallocations, lack of thread safety, and the absence of support for offloading make it a limiting factor in Clad’s usability in complex scenarios.
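
To make the cost concrete, here is a deliberately simplified model of such a tape (not Clad's actual implementation): a single contiguous buffer that must be reallocated, with every existing entry copied, each time it fills up.

```cpp
#include <cstddef>
#include <utility>

// Simplified illustration of a monolithic tape: one contiguous buffer that is
// reallocated (and fully copied) every time it runs out of room. In reverse-mode
// AD the forward pass pushes intermediate values; the backward pass pops them
// in LIFO order.
template <typename T>
class MonolithicTape {
  T* data_ = nullptr;
  std::size_t size_ = 0, capacity_ = 0;

public:
  void push(const T& value) {
    if (size_ == capacity_) {
      std::size_t new_cap = capacity_ ? capacity_ * 2 : 64;
      T* bigger = new T[new_cap];
      for (std::size_t i = 0; i < size_; ++i)  // full copy on every growth step
        bigger[i] = std::move(data_[i]);
      delete[] data_;
      data_ = bigger;
      capacity_ = new_cap;
    }
    data_[size_++] = value;
  }

  T pop() { return data_[--size_]; }           // backward pass consumes in reverse
  std::size_t size() const { return size_; }

  ~MonolithicTape() { delete[] data_; }
};
```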

### Project Goals

The aim of this project is to design a more efficient, scalable, and flexible tape. Some of the key enhancements include:

- Replacing dynamic reallocation with a slab-based memory structure to minimize copying overhead.
- Introducing Small Buffer Optimization (SBO) for short-lived tapes.
- Making the tape thread-safe by using locks or atomic operations.
- Implementing multi-layer storage, where parts of the tape are offloaded to disk to manage memory better.
- (Stretch Goal) Supporting CPU-GPU memory transfers for future heterogeneous computing use cases.
- (Stretch Goal) Introducing checkpointing for optimal memory-computation trade-offs.

### Implementation Plan

The first phase of the project will focus on redesigning Clad’s current tape structure to use a slab-based memory model instead of a single contiguous buffer. This change will reduce memory reallocation overhead by linking fixed-size slabs dynamically as the tape grows. To improve performance in smaller workloads, I’ll also implement Small Buffer Optimization (SBO) — a lightweight buffer embedded directly in the tape object that avoids heap allocation for short-lived tapes. These improvements are aimed at making the tape more scalable, efficient, and cache-friendly.
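
A rough sketch of that shape is shown below. The class name, slab size, and inline-buffer capacity are illustrative assumptions rather than Clad's final interface.

```cpp
#include <cstddef>
#include <memory>

// Illustrative slab-based tape with a small inline buffer (SBO).
// Values first go into the inline buffer; once it is full, fixed-size slabs are
// heap-allocated and linked together, so growth never copies existing entries.
template <typename T, std::size_t InlineCap = 64, std::size_t SlabCap = 1024>
class SlabTape {
  struct Slab {
    T values[SlabCap];
    std::size_t used = 0;
    std::unique_ptr<Slab> prev;   // link back to the previous (older) slab
  };

  T inline_buf_[InlineCap];       // SBO: no heap allocation for short-lived tapes
  std::size_t inline_used_ = 0;
  std::unique_ptr<Slab> top_;     // newest slab, or null while the inline buffer suffices

public:
  void push(const T& v) {
    if (!top_ && inline_used_ < InlineCap) {
      inline_buf_[inline_used_++] = v;
      return;
    }
    if (!top_ || top_->used == SlabCap) {
      auto s = std::make_unique<Slab>();
      s->prev = std::move(top_);  // link a fresh slab in front; old data stays in place
      top_ = std::move(s);
    }
    top_->values[top_->used++] = v;
  }

  T pop() {
    if (top_) {
      T v = top_->values[--top_->used];
      if (top_->used == 0)
        top_ = std::move(top_->prev);  // drop the empty slab, fall back to the older one
      return v;
    }
    return inline_buf_[--inline_used_];
  }
};
```

One useful property of this layout is that existing entries never move as the tape grows, unlike the reallocating buffer shown earlier.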

Once the core memory model is in place, the next step will be to add thread safety to enable parallel usage. The current tape assumes single-threaded execution, which limits its applicability in multi-threaded scientific workflows. I’ll introduce synchronization mechanisms such as `std::mutex` to guard access to tape operations and ensure correctness in concurrent scenarios. Following this, I will implement a multi-layered tape system that offloads older tape entries to disk when memory usage exceeds a certain threshold — similar to LRU-style paging — enabling Clad to handle much larger computation graphs.
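
For the locking step specifically, the simplest correct starting point is one mutex guarding each tape operation, as in the hypothetical wrapper below; finer-grained locks, per-thread tapes, or atomic operations are natural refinements.

```cpp
#include <mutex>
#include <vector>

// Minimal thread-safe wrapper: a single mutex serializes all tape operations.
// This is the easiest option to reason about, at the cost of serializing all
// tape traffic across threads.
template <typename T>
class LockedTape {
  std::vector<T> storage_;
  std::mutex m_;

public:
  void push(const T& v) {
    std::lock_guard<std::mutex> guard(m_);  // held only for the duration of the push
    storage_.push_back(v);
  }

  bool try_pop(T& out) {
    std::lock_guard<std::mutex> guard(m_);
    if (storage_.empty())
      return false;
    out = storage_.back();
    storage_.pop_back();
    return true;
  }
};
```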

As stretch goals, I plan to explore CPU-GPU memory transfer support for the slabbed tape and introduce basic checkpointing functionality to recompute intermediate values instead of storing them all, trading memory usage for computational efficiency. Throughout the project, I’ll use benchmark applications like LULESH to evaluate the performance impact of each feature and ensure that the redesigned tape integrates cleanly into Clad’s AD workflow. The final stages will focus on extensive testing, documentation, and contributing the changes back to the main repository.
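
The checkpointing idea itself is easy to picture with a toy loop: keep every K-th state and replay the cheap forward steps in between whenever the backward pass needs an intermediate value. The snippet below is only that illustration, not Clad's design.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Toy illustration of checkpointing for a loop x_{i+1} = g(x_i): instead of
// taping every x_i, store only every K-th state and recompute the rest on
// demand during the backward pass (memory traded for recomputation).
double g(double x) { return std::sin(x) + 0.5 * x; }

struct Checkpointed {
  std::vector<double> snapshots;  // x_0, x_K, x_2K, ...
  std::size_t stride;

  Checkpointed(double x0, std::size_t n, std::size_t k) : stride(k) {
    double x = x0;
    for (std::size_t i = 0; i < n; ++i) {
      if (i % stride == 0) snapshots.push_back(x);  // checkpoint instead of full tape
      x = g(x);
    }
  }

  // Recover x_i by replaying forward from the nearest earlier checkpoint.
  double state_at(std::size_t i) const {
    double x = snapshots[i / stride];
    for (std::size_t j = (i / stride) * stride; j < i; ++j)
      x = g(x);
    return x;
  }
};
```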

### Why I Chose This Project

My interest in AD started when I was building a neural network from scratch using CUDA C++. That led me to Clad, where I saw the potential of compiler-assisted differentiation. I’ve since contributed to the Clad repository by investigating issues and raising pull requests, and I’m looking forward to pushing the limits of what Clad’s tape can do.

This project aligns perfectly with my interests in memory optimization, compiler design, and parallel computing. I believe the enhancements we’re building will make Clad significantly more powerful for real-world workloads.

### Looking Ahead

By the end of the summer, I hope to deliver a robust, feature-rich tape that enhances Clad’s reverse-mode AD performance across CPU and GPU environments. I’m excited to contribute to the scientific computing community and gain deeper insights into the world of compilers.

---

### Related Links

- [Clad Repository](https://github.com/vgvassilev/clad)
- [Project Description](https://hepsoftwarefoundation.org/gsoc/2025/proposal_Clad-ImproveTape.html)
- [GSoC Project Proposal](/assets/docs/Aditi_Milind_Joshi_Proposal_2025.pdf)
- [My GitHub Profile](https://github.com/aditimjoshi)
Binary file (1010 KB) not shown.
