
gpu-training

Here are 15 public repositories matching this topic...

This repository features an image-sharpening pipeline built on knowledge distillation: a high-capacity Restormer serves as the teacher model, while a lightweight Mini-UNet is trained as the student to mimic the teacher's outputs.

  • Updated Oct 30, 2025
  • Jupyter Notebook
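As a rough illustration of the teacher–student setup described above, the sketch below shows one common formulation of a distillation objective for image restoration: the student is penalized both for deviating from the teacher's restored output and for deviating from the ground-truth sharp image. The function names, the L1 losses, and the `alpha` blending weight are assumptions for illustration, not taken from the repository.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, target, alpha=0.7):
    """Blend of an imitation term (match the teacher's restored image)
    and a supervision term (match the ground-truth sharp image).

    All inputs are float arrays of the same shape, e.g. (H, W) or (H, W, C).
    alpha weights imitation vs. supervision; 0.7 is an illustrative default.
    """
    imitation = np.abs(student_out - teacher_out).mean()   # L1 to teacher output
    supervision = np.abs(student_out - target).mean()      # L1 to ground truth
    return alpha * imitation + (1.0 - alpha) * supervision
```

In training, the teacher's weights would be frozen and only the lightweight student would receive gradients from this loss, which is what lets the Mini-UNet approximate the Restormer at a fraction of the inference cost.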

This repo documents my participation in the Kaggle red-teaming competition focused on probing OpenAI's newly released gpt-oss-20b model for previously undiscovered vulnerabilities and harmful behaviors. The goal is to identify, document, and report up to five distinct issues, contributing to the safety and alignment of open-source AI models.

  • Updated Aug 18, 2025
  • Jupyter Notebook

Developed an end-to-end LLM pipeline that extracts Python code from GitHub, builds a high-quality dataset, fine-tunes CodeGen, and performs advanced code generation with DeepSeek. Demonstrates strong capabilities in LLM training, data engineering, and model optimization.

  • Updated Dec 8, 2025
  • Python
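The "builds a high-quality dataset" step in a pipeline like the one above typically involves filtering raw scraped code before fine-tuning. A minimal sketch of such a filter is shown below, keeping only snippets that parse as valid Python and fall within a length window; the function name and thresholds are illustrative assumptions, not details from the repository.

```python
import ast

def keep_snippet(source, min_lines=3, max_lines=200):
    """Dataset-quality filter for scraped Python code.

    Keeps a snippet only if it (a) has a line count within
    [min_lines, max_lines] and (b) parses as syntactically valid
    Python. Thresholds are illustrative, not taken from the repo.
    """
    lines = source.strip().splitlines()
    if not (min_lines <= len(lines) <= max_lines):
        return False
    try:
        ast.parse(source)          # reject files that do not compile
    except SyntaxError:
        return False
    return True
```

Filters like this are cheap to run over millions of files and substantially improve fine-tuning quality, since syntactically broken or trivially short snippets teach the model little.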
