Coffee please, no milk.
debashishc/README.md

Hi there 👋

I'm a Machine Learning Research Fellow at Johns Hopkins University, teaching computers to learn across many different modalities.

🔬 Current Focus

  • Multi-Modal ML: Building systems that understand text, video, and beyond
  • Model Efficiency: Compression, quantization, and distillation techniques for production-ready AI
  • RAG Systems: Advancing Retrieval Augmented Generation for textual and video domains at JHU
  • GPU Programming: Leveraging CUDA and Triton for high-performance ML implementations (spare time)

🛠️ Tech Stack

Languages & Frameworks

  • Python | C++ | CUDA
  • PyTorch | Triton

Specializations

  • Model Compression (Quantization, Pruning, Knowledge Distillation)
  • Distributed Training & Inference
  • Edge Deployment Optimization
  • Multi-Modal Architecture Design
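
To give a flavor of the quantization side of model compression, here is a minimal sketch of a symmetric int8 round-trip. The weight values and function names are hypothetical illustrations, not taken from any repository below:

```python
def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with one symmetric scale.

    Real quantizers also clamp outliers and pick scales per channel;
    this sketch uses a single absmax scale for clarity.
    """
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]       # hypothetical weights
q, scale = quantize_int8(weights)        # int8 codes + scale
approx = dequantize_int8(q, scale)       # reconstruction, error <= scale/2
```

The storage win is the usual one: each weight shrinks from 32 bits to 8, at the cost of a bounded rounding error per value.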

🎯 Research Interests

  • Scalable Multi-Modal Architectures: Developing models that efficiently process diverse data types in distributed environments
  • Cloud-to-Edge ML Pipeline: Streamlining the entire ML lifecycle from training to deployment across cloud and edge devices
  • Hardware-Aware Optimization: Implementing compression techniques that leverage specific hardware capabilities for maximum inference efficiency

🤝 Open to Collaborate On

  • Model distillation and quantization projects
  • Efficient training strategies for large-scale models
  • Multi-modal ML applications
  • Edge deployment optimization

Languages and Tools

Python, Git, AWS, Bash, Bitbucket, CMake, C++, Docker, FastAPI, GCC, GitLab, Grafana, Haskell, LaTeX, Markdown, Neovim, NumPy, PyTorch, Streamlit, Vim

GitHub Stats

Pinned

  1. kernelheim (Public)

    KernelHeim – a development ground for custom Triton and CUDA kernels designed to optimize and accelerate machine learning workloads on NVIDIA GPUs. Inspired by the mythical stronghold of the …

    Python 3

  2. semantic-search (Public)

    Implementation of semantic search using Sentence-BERT (SBERT) for a workshop. It demonstrates how to generate sentence embeddings and perform search based on cosine similarity, allowing for meaning…

    Jupyter Notebook
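
    The workflow that repo demonstrates can be sketched in a few lines: embed sentences as vectors, then rank documents by cosine similarity to the query. The 2-D vectors below are hypothetical stand-ins for real SBERT embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 2-D embeddings standing in for SBERT sentence vectors.
corpus = {
    "the cat sat on the mat": [1.0, 0.1],
    "stocks fell sharply today": [0.1, 1.0],
}
query = [0.9, 0.2]  # hypothetical embedding of a cat-related query

# Rank documents by similarity to the query, most similar first.
ranked = sorted(corpus, key=lambda doc: cosine(query, corpus[doc]), reverse=True)
```

    Because cosine similarity compares vector directions rather than surface strings, the cat sentence ranks first even with no exact word overlap.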

  3. classification-real-fake-text (Public)

    Classifying real and fake text using metrics measuring human-written and machine-generated text

    Jupyter Notebook 1

  4. deep-learning-project-template (Public template)

    Forked from Lightning-AI/deep-learning-project-template

    PyTorch Lightning code guidelines for conferences

    Python

  5. cuda-mode-lectures (Public)

    Forked from gpu-mode/lectures

    Material for cuda-mode lectures

    Jupyter Notebook 1