This repository contains the code and documentation for my bachelor project on bandit algorithms for recommendation systems. The project explores the application of bandit algorithms to optimize the recommendation process and improve user engagement and satisfaction.
In this project, we explore various bandit algorithms and their applications in recommendation systems. We aim to build a recommendation model that adapts and learns over time to provide personalized recommendations to users. The main goals of the project are:
- Understand the concepts and principles of bandit algorithms in the context of recommendation systems.
- Implement and compare different bandit algorithms to evaluate their effectiveness.
- Develop a recommendation system that leverages bandit algorithms to optimize user recommendations.
- Evaluate the performance and user satisfaction of the recommendation system through experiments and metrics.
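To make the exploration-exploitation idea behind these goals concrete, here is a minimal epsilon-greedy sketch on a simulated Bernoulli bandit. This is an illustrative example only, not code from this repository; all names and parameters are made up for the sketch.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=1000, seed=0):
    """Run epsilon-greedy on a simulated Bernoulli bandit.

    true_means: hidden click probability of each arm (e.g. each item).
    Returns total reward collected and the pull count per arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms     # how often each arm was recommended
    values = [0.0] * n_arms   # running mean reward per arm
    total = 0
    for _ in range(horizon):
        if rng.random() < epsilon:          # explore: random arm
            arm = rng.randrange(n_arms)
        else:                               # exploit: best empirical arm
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1 if rng.random() < true_means[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

Varying `epsilon` in a loop like this is one simple way to study how the exploration-exploitation trade-off affects cumulative reward.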
To run the code and reproduce the experiments in this project, first clone the repository:
git clone https://github.com/Thosam1/BachelorProject.git
To use the code in this repository, follow the steps below:
- Open any Jupyter Notebook to access the main codebase and experiments.
- Execute the code cells sequentially to replicate the experiments and observe the results.
- Customize the code and experiment setups according to your requirements.
The repository includes several examples and experiments to showcase the functionality and effectiveness of the bandit algorithms for recommendation systems. Some of the examples available are:
- Comparison of different bandit algorithms in a simulated environment, covering both multi-armed bandits and linear bandits.
- Evaluating the impact of exploration-exploitation trade-offs on recommendation performance.
- Linear bandits applied to real-world recommendation datasets.
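For the linear-bandit experiments, the core idea is that each item's expected reward is a linear function of a context (feature) vector. The sketch below shows a disjoint-model LinUCB round and update on a small simulated problem; it is a generic illustration under assumed names and dimensions, not the exact implementation used in the notebooks.

```python
import numpy as np

def linucb_choose(A, b, contexts, alpha=1.0):
    """Pick the arm with the highest upper confidence bound (LinUCB)."""
    scores = []
    for a, x in enumerate(contexts):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]                         # ridge-regression estimate
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    return int(np.argmax(scores))

def linucb_update(A, b, arm, x, reward):
    """Update the chosen arm's statistics with the observed reward."""
    A[arm] += np.outer(x, x)
    b[arm] += reward * x

# tiny simulated run: 3 arms, 2-d contexts, hidden linear reward
rng = np.random.default_rng(0)
d, n_arms, horizon = 2, 3, 500
theta_star = np.array([0.6, -0.4])                   # unknown true parameter
A = [np.eye(d) for _ in range(n_arms)]               # per-arm d x d statistics
b = [np.zeros(d) for _ in range(n_arms)]             # per-arm d-vectors
total = 0.0
for _ in range(horizon):
    contexts = rng.normal(size=(n_arms, d))          # fresh features each round
    arm = linucb_choose(A, b, contexts)
    reward = contexts[arm] @ theta_star + 0.1 * rng.normal()
    linucb_update(A, b, arm, contexts[arm], reward)
    total += reward
```

The `alpha` parameter plays the same role as `epsilon` in simpler algorithms: larger values widen the confidence bonus and push the policy toward exploration.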
Contributions to this project are welcome! If you find any bugs, issues, or have suggestions for improvements, please open an issue or submit a pull request. We appreciate your feedback and contributions.
This project is licensed under the MIT License. See the LICENSE file for more details.