Real-time translation of sign language gestures into English text using computer vision and deep learning.
- Python - Backend logic and AI integration
- OpenCV - Computer vision for gesture detection (see the minimal capture sketch below)
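To give a feel for the OpenCV side, here is a minimal, self-contained sketch that grabs webcam frames and overlays a predicted label. The `predict_gesture` function is a hypothetical placeholder for the project's actual deep-learning model, which is not described in detail in this README.

```python
import cv2

def predict_gesture(frame):
    """Hypothetical placeholder for the project's deep-learning model."""
    return "HELLO"

cap = cv2.VideoCapture(0)              # open the default webcam
try:
    while True:
        ok, frame = cap.read()         # grab one BGR frame
        if not ok:
            break
        label = predict_gesture(frame)
        # overlay the predicted label on the frame
        cv2.putText(frame, label, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        cv2.imshow("Gesture preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```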
- Clone the repository:

      git clone https://github.com/BENi-Aditya/Include.AI.git

- Navigate to the project directory:

      cd Include.AI

- Create and activate a virtual environment:

      python3 -m venv venv
      source venv/bin/activate  # For Windows use `venv\Scripts\activate`

- Install the required libraries:

      pip install -r requirements.txt

- Run the application:

      python app.py

- Open your web browser and navigate to http://localhost:5000 (a minimal sketch of how such an app might serve the camera feed follows this list).
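The project's actual `app.py` is not reproduced in this README. As a rough idea of how a Flask application could stream the live camera feed on port 5000, consider the sketch below; it assumes Flask and OpenCV are listed in `requirements.txt`, and the `/video_feed` route name is an illustrative assumption rather than the project's real endpoint.

```python
# Sketch only: route names and helpers are illustrative, not the project's actual app.py.
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)   # default webcam

def generate_frames():
    """Yield JPEG-encoded frames as a multipart (MJPEG) stream."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        encoded, buffer = cv2.imencode(".jpg", frame)
        if not encoded:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buffer.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

In the real application the predicted text would presumably be drawn onto each frame, or sent to the page separately, before the browser displays it.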
- Start the application and allow camera access for real-time gesture detection.
- Perform sign language gestures in front of the camera.
- See the gestures translated into English text on the interface in real time (a sketch of how per-frame predictions could be turned into stable text follows this list).
- Enjoy seamless communication between sign language users and non-users.
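One detail the steps above gloss over is that a model classifies every frame, so the raw output can flicker between labels. A common way to turn that stream into readable text is majority voting over a short window of frames; the class below is a minimal sketch of that idea, with the window size and vote threshold chosen as arbitrary assumptions rather than taken from the project.

```python
from collections import deque, Counter

class GestureSmoother:
    """Turn noisy per-frame gesture predictions into stable English text.

    Illustrative only: the window size and vote threshold are assumptions,
    not values taken from the project.
    """
    def __init__(self, window=15, min_votes=10):
        self.history = deque(maxlen=window)   # most recent per-frame labels
        self.min_votes = min_votes
        self.sentence = []

    def update(self, label):
        self.history.append(label)
        word, votes = Counter(self.history).most_common(1)[0]
        # emit a word only when it clearly dominates the recent frames
        if votes >= self.min_votes and (not self.sentence or self.sentence[-1] != word):
            self.sentence.append(word)
        return " ".join(self.sentence)
```

Each frame's predicted label would be fed to `update()`, and the returned string is what the interface would display.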
- Real-time translation of sign language gestures into English text.
- Integration of computer vision for accurate gesture detection.
- Immediate response with low latency for natural conversations (a rough timing check follows this list).
- Enhanced accessibility for users of sign language.
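The latency claim is easy to check on your own machine: the snippet below times a single capture-plus-inference step, with `predict_gesture` again standing in as a hypothetical placeholder for the project's model call.

```python
import time
import cv2

def predict_gesture(frame):
    return "HELLO"   # hypothetical stand-in for the real model inference

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    start = time.perf_counter()
    predict_gesture(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # With the real model loaded, this time (plus capture time) bounds how
    # quickly translated text can appear on screen.
    print(f"Per-frame inference latency: {elapsed_ms:.1f} ms")
cap.release()
```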
Contributions that improve this project or expand its functionality are welcome!
To contribute:
- Fork the repository: https://github.com/BENi-Aditya/Include.AI
- Create a new branch:

      git checkout -b feature-branch-name

- Make your changes and commit them:

      git commit -m 'Description of your changes'

- Push to the branch:

      git push origin feature-branch-name

- Submit a pull request