Add intro blog and contributor info for GSoC 2025 #300

Merged
merged 6 commits into from May 22, 2025
2 changes: 2 additions & 0 deletions .github/actions/spelling/allow/names.txt
@@ -74,6 +74,7 @@ Svrin
Tadel
Taras
Thessaloniki
Timmaraju
Universitat
Unveristy
Uppili
@@ -196,6 +197,7 @@ tapaswenipathak
tfransham
thakkar
tharun
timmaraju
tlattner
vaibhav
vassil
3 changes: 3 additions & 0 deletions .github/actions/spelling/allow/terms.txt
@@ -5,11 +5,13 @@ CMSSW
Cppyy
Debian
GPGPU
GPT
GSo
GSoC
HSF
JIT'd
Jacobians
LLMs
LLVM
NVIDIA
NVMe
@@ -30,6 +32,7 @@ gitlab
gridlay
gsoc
gpu
llm
llvm
pushforward
linkedin
28 changes: 28 additions & 0 deletions _data/contributors.yml
@@ -310,6 +310,34 @@
      proposal: /assets/docs/de_la_torre_gonzalez_salvador_proposal_gsoc_2025.pdf
      mentors: Vassil Vassilev, Lukas Breitwieser

- name: Rohan Timmaraju
  photo: Rohan_Timmaraju.jpg
  info: "Google Summer of Code 2025 Contributor"
  email: [email protected]
  education: "B.S. Computer Science, Columbia University"
  github: "https://github.com/Rohan-T144"
  active: 1
  linkedin: "https://www.linkedin.com/in/rohan-timmaraju-650ba3221/"
  projects:
    - title: "Enhancing LLM Training Efficiency with Clad for Automatic Differentiation"
      status: Ongoing
      description: |
        Training Large Language Models is computationally expensive, often
        constrained by the performance limitations of Python-based frameworks.
        This project addresses this challenge by enhancing LLM training
        efficiency within a C++ environment through the integration of Clad, a
        Clang/LLVM compiler plugin for automatic differentiation (AD). We will
        develop a custom C++ tensor library specifically designed for optimal
        interaction with Clad. The core objective is to replace traditional
        runtime or manual gradient computations with Clad's efficient
        compile-time differentiation for key LLM operations within a GPT-2
        training pipeline. This involves investigating effective strategies to
        bridge Clad's static analysis with dynamic neural network computations,
        benchmarking the resulting performance gains in speed and memory usage
        against a non-Clad baseline, and leveraging OpenMP for further
        parallelization.
      proposal: /assets/docs/Rohan_Timmaraju_Proposal_2025.pdf
      mentors: Vassil Vassilev, David Lange, Jonas Rembser, Christina Koutsou

- name: Abdelrhman Elrawy
  photo: Abdelrhman.jpg
  info: "Google Summer of Code 2025 Contributor"
10 changes: 10 additions & 0 deletions _pages/team/rohan-timmaraju.md
@@ -0,0 +1,10 @@
---
title: "Compiler Research - Team - Rohan Timmaraju"
layout: gridlay
excerpt: "Compiler Research: Team members"
sitemap: false
permalink: /team/RohanTimmaraju
email: [email protected]
---

{% include team-profile.html %}
52 changes: 52 additions & 0 deletions _posts/2025-05-21-enhancing-llm-training.md
@@ -0,0 +1,52 @@
---
title: "Enhancing LLM Training Efficiency with Clad for Automatic Differentiation"
layout: post
excerpt: "This GSoC project leverages Clad to optimize LLM training in C++, aiming to boost efficiency by developing a custom tensor library and integrating Clad for compiler-level gradient calculations."
sitemap: true
author: Rohan Timmaraju
permalink: blogs/gsoc25_rohan_introduction_blog/
banner_image: /images/blog/LLM_project_banner.jpg
date: 2025-05-21
tags: gsoc c++ clang clad llm
---

### Introduction

I am Rohan Timmaraju, a Computer Science student at Columbia University. During Google Summer of Code 2025, I will be working on the "Enhancing LLM Training Efficiency with Clad for Automatic Differentiation" project with the Compiler Research group.

**Mentors**: Vassil Vassilev, David Lange, Jonas Rembser, Christina Koutsou

### About LLM Training

Large Language Models (LLMs) like ChatGPT have revolutionized AI, but their training is incredibly computationally intensive. Currently, Python-based frameworks such as PyTorch and TensorFlow are the go-to tools. While they offer excellent flexibility and a rich ecosystem, their reliance on interpreted execution and dynamic computation graphs can lead to performance bottlenecks and high memory consumption. This is particularly noticeable when we consider deploying or training these models in resource-constrained environments or within C++-centric high-performance computing (HPC) setups, which are common in scientific research.

While C++ provides the tools for fine-grained control over system resources and has proven its capabilities in efficient LLM inference (as seen with projects like [llama.cpp](https://github.com/ggml-org/llama.cpp)), the critical component for *training* – flexible and efficient Automatic Differentiation (AD) – presents an ongoing challenge for C++ solutions.

### Why Use Clad?

This project proposes to tackle this challenge by integrating Clad, an Automatic Differentiation plugin for the Clang compiler. Unlike traditional AD libraries that often operate at runtime, Clad performs source-to-source transformation. It analyzes the C++ Abstract Syntax Tree (AST) at compile time and generates optimized C++ code for computing derivatives. This compiler-level approach has the potential to reduce runtime overhead and improve memory efficiency compared to dynamic methods.
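
To make this concrete, the snippet below is a minimal sketch (not taken from the project) of how Clad's reverse-mode API is typically invoked; the toy `loss` function and its parameter values are purely illustrative.

```cpp
// Build with Clang and the Clad plugin, e.g.:
//   clang++ -fplugin=/path/to/clad.so -I<clad>/include toy.cpp
#include "clad/Differentiator/Differentiator.h"
#include <cstdio>

// Toy squared-error loss in a single weight w, input x, and target t.
double loss(double w, double x, double t) {
  double y = w * x;
  return (y - t) * (y - t);
}

int main() {
  // Clad transforms the AST of `loss` at compile time and emits C++ code
  // that computes all partial derivatives in a single reverse pass.
  auto dloss = clad::gradient(loss);
  double dw = 0, dx = 0, dt = 0;
  dloss.execute(/*w=*/0.5, /*x=*/2.0, /*t=*/3.0, &dw, &dx, &dt);
  std::printf("dL/dw = %f\n", dw); // 2*(w*x - t)*x = -8
  return 0;
}
```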

To facilitate this integration, I am developing a custom C++ tensor library to be used in neural network training. Inspired by the powerful approaches of libraries such as [llm.c](https://github.com/karpathy/llm.c) and [PyTorch](https://docs.pytorch.org/cppdocs/), this library is being designed from the ground up with Clad compatibility in mind. The core idea is to replace manual or internally managed gradient computations with Clad's reverse-mode AD (as in `clad::gradient`) for key LLM operations like matrix multiplications, activation functions, normalization layers, and the final loss function.
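
As a hypothetical illustration of the design direction (the `Tensor` struct and `gelu_forward` kernel below are my own sketch, not the project's actual code), kernels can be written as free functions over contiguous buffers so that Clad sees plain loops and scalar arithmetic rather than opaque library calls:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical tensor: a flat, contiguous float buffer plus a shape.
// Simple storage keeps differentiated kernels working on raw pointers.
struct Tensor {
  std::vector<float> data;
  std::vector<std::size_t> shape;
};

// Forward GELU (tanh approximation) as a plain loop over raw buffers:
// no virtual dispatch and no allocation inside the kernel, so the whole
// body is visible to Clad's source-to-source transformation.
void gelu_forward(const float* in, float* out, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    float x = in[i];
    float cube = 0.044715f * x * x * x;
    out[i] = 0.5f * x * (1.0f + std::tanh(0.7978845608f * (x + cube)));
  }
}
```

Whether gradients for such kernels are best obtained by differentiating each kernel individually or by differentiating a loss function that calls them end-to-end is exactly the kind of integration question the project will explore.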

### Implementation Plan
1. **Foundation & Baseline:** We will start by implementing a complete GPT-2 training loop in C++ *without* Clad, which will serve as our performance baseline. GPT-2 is chosen as a relatively simple open-source LLM architecture that can be trained on local hardware, and the approach could later be extended to other architectures like Llama or Mistral.
2. **Core Clad Integration Strategy:** We will investigate and evaluate different strategies for applying Clad to the tensor library's gradient calculations, while also identifying areas where Clad itself could be enhanced for deep learning workloads.
3. **Expanding Integration:** Once a promising strategy is identified and validated on simpler operations, we'll systematically integrate Clad into more complex components of the GPT-2 architecture.
4. **Benchmarking & Optimization:** Benchmarking against our baseline will be crucial to quantify the performance gains in speed and memory. We'll also use profiling tools to identify bottlenecks and optimize the Clad-integrated tensor library. OpenMP may be employed for parallelization to further boost performance (a sketch of such a parallelized kernel follows this list).
5. **Documentation & Potential Extensions:** Thorough documentation of the tensor library, the Clad integration process, and our findings will also be a primary focus. Time permitting, we'll explore extending this work to other LLM architectures like Llama.
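
To illustrate the kind of parallelization step 4 refers to, here is a minimal, hypothetical sketch of a row-parallel matrix multiply of the sort the tensor library could expose (build with `-fopenmp`); the function name and layout are illustrative rather than the project's actual code.

```cpp
#include <vector>

// Hypothetical row-major matmul: C (m x n) = A (m x k) * B (k x n).
// Output rows are independent, so they are distributed across threads.
void matmul(const float* A, const float* B, float* C, int m, int k, int n) {
  #pragma omp parallel for
  for (int i = 0; i < m; ++i) {
    for (int j = 0; j < n; ++j) {
      float acc = 0.0f;
      for (int p = 0; p < k; ++p)
        acc += A[i * k + p] * B[p * n + j];
      C[i * n + j] = acc;
    }
  }
}

int main() {
  const int m = 64, k = 128, n = 32;
  std::vector<float> A(m * k, 1.0f), B(k * n, 1.0f), C(m * n, 0.0f);
  matmul(A.data(), B.data(), C.data(), m, k, n);
  return 0; // every entry of C should equal k, i.e. 128.0f
}
```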


### Conclusion
By successfully integrating Clad into a C++ LLM training pipeline, we aim to:
* **Demonstrate Performance Gains:** Show tangible improvements in training speed and memory efficiency.
* **Advance Clad for ML:** Provide a significant real-world use case for Clad, potentially identifying areas where its support for ML tasks can be improved.
* **Offer a C++ Alternative:** Provide a foundation for more efficient, compiler-driven LLM training within the C++ ecosystem.
* **Learn and Share:** Gain insights into the practicalities of applying compiler-based AD to complex ML problems and share these learnings with the community.

I believe this project has the potential to make a valuable contribution to both the compiler research field and the ongoing efforts to make powerful AI models more accessible and efficient to train.

### Related Links

- [Project Description](https://hepsoftwarefoundation.org/gsoc/2025/proposal_Clad-LLM.html)
- [Clad Repository](https://github.com/vgvassilev/clad)
- [My GitHub Profile](https://github.com/Rohan-T144)
Binary file added assets/docs/Rohan_Timmaraju_Proposal_2025.pdf
Binary file not shown.
Binary file added images/blog/LLM_project_banner.jpg
Binary file added images/team/Rohan_Timmaraju.jpg