This repository is a collection of the machine learning and artificial intelligence algorithms I have experimented with. Each notebook is self-contained and includes the relevant background.
- [✅] Softmax (NumPy sketches of these functions follow this list)
- [✅] Sigmoid
- [✅] MinMax Scaling
- [✅] Euclidean Distance
- [✅] Mahalanobis Distance
- [✅] Cross-Entropy Loss
- [✅] Gradient Descent
- [ ] ReLU
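For quick reference, here is a minimal NumPy sketch of several of the functions checked off above (softmax, sigmoid, ReLU, min-max scaling, the distance metrics, cross-entropy loss, and a generic gradient-descent loop). The function names and signatures are illustrative, not the notebooks' actual code.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def minmax_scale(x):
    # Rescale values to the [0, 1] range.
    return (x - x.min()) / (x.max() - x.min())

def euclidean_distance(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def mahalanobis_distance(x, mu, cov):
    # Distance of x from mean mu under covariance matrix cov.
    diff = x - mu
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true is one-hot, y_pred is a predicted probability distribution.
    return -np.sum(y_true * np.log(y_pred + eps))

def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Generic gradient descent: repeatedly step against the gradient.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x
```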
- [✅] Using NumPy
- [✅] Using TensorFlow
- [✅] Using scikit-learn
- [✅] Using PyTorch
- [✅] For Hypothesis testing
- [ ] Using Python
- [✅] Using NumPy
- [ ] Using TensorFlow
- [ ] Using scikit-learn
- [ ] Using PyTorch
- [ ] Using Python
- [ ] Using NumPy
- [ ] Using TensorFlow
- [ ] Using scikit-learn
- [ ] Using PyTorch
- [ ] For classification
- [✅] AdaBoost for Binary Classification (see the sketch after this list)
- [ ] Extreme Gradient Boosting
- [ ] Using Python
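Below is a minimal from-scratch sketch of AdaBoost for binary classification using one-level decision stumps, assuming labels in {-1, +1}. All names here are illustrative; the notebook remains the reference implementation.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=20):
    """Binary AdaBoost with decision stumps; y must take values in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights, initially uniform
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively search stumps over (feature, threshold, polarity).
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = max(err, 1e-10)            # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    score = np.zeros(X.shape[0])
    for alpha, j, thr, pol in ensemble:
        score += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(score)
```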
- [ ] K-Means (see the sketch after this list)
- [ ] Density-Based
- [ ] Hierarchical
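A minimal sketch of K-Means (Lloyd's algorithm) in NumPy; the function name, random seeding strategy, and convergence check are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Lloyd's algorithm: assign points to the nearest centroid, then recompute."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels
```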
- [✅] Demonstration of a Bayesian Network
- [✅] Model-Based RL: Value Iteration, Policy Iteration (see the value-iteration sketch after this list)
- [ ] Model-Free RL: Monte Carlo, Bootstrapping, Temporal Difference
- [ ] Model-Free RL: Q-Learning
- [ ] RL on a Dataset
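A minimal value-iteration sketch for a tabular MDP. The array conventions (`P[s, a, s']` for transition probabilities, `R[s, a]` for expected rewards) are assumptions made for illustration, not the notebook's actual layout.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s, a, s'] = transition probabilities, R[s, a] = expected reward.

    Returns the optimal state values and a greedy policy."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)
```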
Diffusion Modelling - Link
- [✅] Zero-Shot Classification using CLIP (see the CLIP sketch after this list)
- [✅] Zero-Shot Classification on Distorted Image using CLIP
- [✅] Text to Image using Stable Diffusion Pipeline
- [✅] Custom Noise Schedule for Text-to-Image Generation
- [✅] Interpolation of Latent Space to transition between two prompts
- [✅] Manipulation of Latent Space to impact specific attributes of a generated image
- [✅] Tuning the guidance scale parameter
- [✅] Guided Image Generation using ControlNet
- [✅] Improving an image by refining gradients
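To give a flavor of the CLIP zero-shot notebooks above, here is a minimal sketch using the Hugging Face `transformers` CLIP classes. The checkpoint, image path, and candidate labels are illustrative placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint and labels are illustrative choices, not the notebook's.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores;
# softmax turns them into probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```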
- [ ] Next Word Prediction From Scratch
- [✅] Next Word Prediction Using Fine-tuning on GPT-2 (see the sketch after this list)
- [✅] Contrastive Language Image Pretraining (CLIP)
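A minimal sketch of next-word prediction using the base `gpt2` checkpoint from Hugging Face `transformers`; a fine-tuned model would be loaded the same way. The prompt is a placeholder.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Base GPT-2 checkpoint; the repository's fine-tuned weights would load identically.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Machine learning is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits

# The logits at the last position score every vocabulary token as the next word.
next_token_id = logits[0, -1].argmax().item()
print(prompt + tokenizer.decode(next_token_id))
```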
- [ ] Deep Imbalanced Regression (DIR)
- [ ] Generative Adversarial Networks
- [ ] Attention Is All You Need (see the attention sketch below)
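Ahead of that notebook, here is a minimal NumPy sketch of the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The shapes in the toy usage are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in the paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    # Row-wise softmax with max-subtraction for numerical stability.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 3 queries/keys/values of dimension 4.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```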