The core of the color blindness system is a color correction sub-system for people with color blindness.
This is done in two steps: first, simulate the condition and gather data; second, build a deep learning model that performs the color correction.
First, we worked with 9,500 images from different environments, collected both by capturing street videos ourselves and from online ADAS datasets (called the Original Images).
Second, we applied the custom simulator to this dataset to generate a dataset with corrected colors (called the Target Images).
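The project's custom simulator is not reproduced here; as an illustration of how color-vision-deficiency simulation is commonly done, the sketch below applies a 3x3 protanopia matrix per pixel. The matrix values follow a published model (Machado et al., severity 1.0) and are an assumption for illustration, not the project's actual simulator.

```python
import numpy as np

# Illustrative protanopia simulation matrix (Machado et al., 2009,
# severity 1.0). NOT the project's custom simulator.
PROTANOPIA = np.array([
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

def simulate_protanopia(rgb):
    """Simulate protanopia on an H x W x 3 float image in [0, 1]."""
    out = rgb @ PROTANOPIA.T        # apply the 3x3 matrix to every pixel
    return np.clip(out, 0.0, 1.0)   # keep values in the valid range
```

Applied to a pure-red image, this strongly attenuates the red channel, which is the characteristic confusion of protanopia.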
Third, we fed these two datasets to the Pix2Pix model, an image-to-image translation architecture. The dataset has to be prepared as paired images (each sample contains the original image and its corrected version side by side), as shown in the figure below.
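The paired-image preparation described above can be sketched as a simple side-by-side concatenation, which is the format the standard Pix2Pix data pipeline expects (e.g. two 256x256 images become one 256x512 sample). The helper names here are illustrative, not taken from the project's code:

```python
import numpy as np

def make_pair(original, target):
    """Concatenate original and target side by side into one
    paired training sample (H x W x 3 each -> H x 2W x 3)."""
    assert original.shape == target.shape
    return np.hstack([original, target])

def split_pair(pair):
    """Recover (original, target) halves from a paired sample,
    as done when loading the dataset for training."""
    w = pair.shape[1] // 2
    return pair[:, :w], pair[:, w:]
```

At training time each pair is split back into its two halves: the left half is the generator's input and the right half is the ground-truth target.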
All datasets and results were uploaded to Google Drive as we worked and are available at this link: https://drive.google.com/drive/folders/1c6FK3TvBSq8YqB46kcuSzHV-8pH25Rew?usp=sharing
The image correction solution for people with color blindness is a GAN. We used a deep learning model called Pix2Pix, which consists of two sub-models: a Generator and a Discriminator.
The diagram below shows how Pix2Pix works.
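The two sub-models are trained against each other: the Discriminator learns to distinguish real target images from the Generator's outputs, while the Generator minimizes a combined objective of an adversarial term plus an L1 distance to the target (weighted by lambda = 100 in the original Pix2Pix paper). A minimal NumPy sketch of the generator's loss, standing in for the framework's implementation:

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy on discriminator probabilities."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

def generator_loss(d_on_fake, fake_img, target_img, lam=100.0):
    """Pix2Pix generator objective: fool the discriminator
    (adversarial term) while staying close to the target (L1 term)."""
    adversarial = bce(d_on_fake, 1.0)            # want D to output "real"
    l1 = np.mean(np.abs(fake_img - target_img))  # pixel-wise closeness
    return adversarial + lam * l1
```

The L1 term is what keeps the corrected output structurally faithful to the input scene; the adversarial term pushes it toward realistic colors.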
The following Colab notebook contains the latest progress on data preparation and GAN training: https://colab.research.google.com/drive/1OOHw38WpO8D9OmEwvK-Tl9OvZWvgIrXB?usp=sharing