Hi there, I'm Abhinandan Samal! πŸ‘‹

πŸš€ AI & Machine Learning Engineer | Data Scientist

πŸ“ Based in Berlin, Germany

I bridge the gap between robust Data Engineering (6+ years at IBM/TCS) and cutting-edge Generative AI (M.Sc. research). My passion lies in building scalable, production-grade AI systems, from real-time Kafka pipelines to fine-tuned LLMs.


πŸ› οΈ Tech Stack & Arsenal

  • Generative AI & NLP: Transformers (Hugging Face), PEFT/LoRA, LangChain, RAG, OpenAI API, Google Gemini
  • Machine Learning: PyTorch, Scikit-learn, XGBoost, LightGBM, MLflow
  • Cloud & MLOps: AWS (EMR, S3), GCP, Docker, Kubernetes, Terraform
  • Data Engineering: Apache Kafka, KSQL, PySpark, Airflow, SQL/NoSQL

🌟 Featured Repositories

Research on optimizing Multilingual Transformers using PEFT (LoRA) vs. Full Fine-Tuning.

  • Tech: PyTorch, Hugging Face, NLLB-200, BLEU/TER Metrics.
  • Result: LoRA delivered superior fluency (BLEU 16.33, TER 73.42) while training only 0.4% of the parameters, whereas full fine-tuning retained a slight edge in morphological precision (chrF 48.54) on the challenging German-to-Odia direction. For Odia-to-German, LoRA was the stronger strategy across all metrics (BLEU 74.62, chrF 81.33, TER 39.39).
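The parameter-efficiency claim above is easy to see with a back-of-the-envelope sketch of the LoRA update (hypothetical layer dimensions and rank, not the actual NLLB-200 configuration):

```python
import numpy as np

# LoRA sketch: instead of updating a frozen weight matrix W (d x k),
# train a low-rank pair B (d x r) and A (r x k); the effective weight
# at inference is W' = W + B @ A.
d, k, r = 1024, 1024, 4                  # illustrative dims and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero init,
                                         # so training starts from W unchanged)

W_adapted = W + B @ A                    # identical to W before any training

# Fraction of parameters that are actually trainable per layer:
trainable = r * (d + k)
total = d * k
print(f"trainable fraction: {trainable / total:.2%}")  # → trainable fraction: 0.78%
```

With a smaller rank or larger hidden size the fraction shrinks further, which is how single-digit-per-mille trainable-parameter counts like the 0.4% above arise.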

An Agentic AI tool that autonomously retrieves, analyzes, and synthesizes research papers.

  • Tech: Google Gemini, Vector Search, MLOps (ROUGE Metrics).
  • Highlights: Implemented automated drift monitoring and agentic retrieval workflows.
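The drift-monitoring idea can be sketched with a simplified unigram ROUGE recall and a threshold check (illustrative helper names and tolerance, not the project's actual evaluation pipeline):

```python
# Simplified ROUGE-1 recall: fraction of reference unigrams that
# appear anywhere in the candidate summary.
def rouge1_recall(reference: str, candidate: str) -> float:
    ref = reference.lower().split()
    cand = set(candidate.lower().split())
    if not ref:
        return 0.0
    return sum(1 for tok in ref if tok in cand) / len(ref)

def drifted(scores: list[float], baseline: float, tolerance: float = 0.1) -> bool:
    """Flag drift when the mean score drops more than `tolerance` below baseline."""
    return (baseline - sum(scores) / len(scores)) > tolerance

score = rouge1_recall("transformers improve translation quality",
                      "translation quality improves with transformers")
print(round(score, 2))  # → 0.75 ("improve" != "improves" at the token level)
```

A monitoring job would compute such scores on a rolling window of agent outputs and alert when `drifted` fires against a held-out baseline.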

Full-stack GenAI application for querying unstructured technical documents.

  • Tech: LangChain, OpenAI, ChromaDB, Flask, AWS S3.
  • Highlights: Features context-aware memory buffers for multi-turn reasoning.
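A context-aware memory buffer for multi-turn reasoning can be sketched in a few lines (a hypothetical helper for illustration, not the LangChain component the project uses):

```python
from collections import deque

class ConversationBuffer:
    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)  # keep only the most recent turns

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def as_prompt(self, question: str) -> str:
        """Fold prior turns into the prompt so retrieval and generation
        can resolve follow-up references like 'it' or 'that section'."""
        history = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
        return f"{history}\nUser: {question}" if history else f"User: {question}"

buf = ConversationBuffer(max_turns=2)
buf.add("What is ChromaDB?", "A vector database for embeddings.")
prompt = buf.as_prompt("How does it scale?")
```

The `maxlen` bound keeps the prompt within the model's context window; older turns fall out automatically as the conversation grows.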

Robust Data Science lifecycle project from warehousing to ensemble modeling.

  • Tech: Python, SQL, Redshift, Stacking/Blending Regressors.
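Blending base regressors with a linear meta-learner can be sketched on synthetic data (illustrative only; the project works on warehouse data with richer scikit-learn ensembles):

```python
import numpy as np

# Synthetic regression task: y is linear in X plus noise.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 5.0 + rng.normal(0, 0.5, size=200)

# Base learners: a linear fit and a quadratic fit.
lin = np.poly1d(np.polyfit(X, y, 1))
quad = np.poly1d(np.polyfit(X, y, 2))

# Meta-learner: least-squares weights over the base predictions.
P = np.column_stack([lin(X), quad(X)])
w, *_ = np.linalg.lstsq(P, y, rcond=None)

blended = P @ w
rmse = np.sqrt(np.mean((blended - y) ** 2))
print(f"blend weights={w.round(2)}, RMSE={rmse:.3f}")
```

Proper stacking would fit the meta-learner on out-of-fold base predictions to avoid leakage; this sketch only shows the blending mechanics.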

πŸ’Ό Professional Experience (Highlights)

  • Cloud Data Engineer @ IBM (2022-2023): Architected real-time data flywheels using Kafka/KSQL and Terraform on Azure.
  • ML Engineer / Data Scientist @ TCS (2020-2022): Developed & deployed Churn Prediction models on Kubernetes and AWS, automating MLOps pipelines. Developed a hybrid system (Collaborative & Content-based filtering) using CMFRec to detect financial events and recommend offers.
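The per-key ordering guarantee that real-time Kafka pipelines rely on can be illustrated with a tiny in-memory sketch (conceptual only; real Kafka hashes keys with murmur2, and KSQL runs on top of the brokers):

```python
# Toy keyed partitioner: messages with the same key always land on the
# same partition, so a consumer sees that key's events in produce order.
def partition_for(key: str, num_partitions: int) -> int:
    return sum(key.encode()) % num_partitions  # stand-in for Kafka's key hash

topic = {p: [] for p in range(3)}  # a "topic" with 3 partitions

for key, value in [("user-1", "login"), ("user-2", "click"), ("user-1", "purchase")]:
    topic[partition_for(key, 3)].append((key, value))

# All of user-1's events share one partition, preserving their order.
p = partition_for("user-1", 3)
print([v for k, v in topic[p] if k == "user-1"])  # → ['login', 'purchase']
```

This per-key ordering is what lets downstream stream processors maintain consistent per-user state, e.g. for sessionization or event detection.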

πŸ“« Connect with Me

Abhinandan's GitHub stats

Pinned Repositories

  1. Advanced_Chatbot_with_Memory (Jupyter Notebook)
  2. AI_Research_Agent_and_Evaluation_Pipeline (Jupyter Notebook)
  3. BigMart_Sales_Prediction (Jupyter Notebook)
  4. Enterprise_RAG_Knowledge_System (Python)