Cyberbullying is the use of digital platforms to harass, intimidate, or harm others. It can occur anonymously, spread rapidly, and have serious emotional and psychological effects.
This project is an AI-powered system designed to detect online harassment and cyberbullying in text, providing tools to analyze messages, flag likely harassment, and help prevent further harm. Key features:
- Detects offensive or harmful content in messages and social media posts
- Uses Natural Language Processing (NLP) and Machine Learning algorithms
- Provides real-time alerts and analysis for potential cyberbullying
- Supports text classification, sentiment analysis, and context-aware detection (see the sketch after this list)
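
The text-classification step above could be prototyped in a few lines. The following is a minimal sketch assuming a scikit-learn setup; the inline training examples, labels, and model choice are illustrative assumptions, not the project's actual data or architecture.

```python
# Minimal text-classification sketch (assumed stack: scikit-learn).
# The tiny inline dataset below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = cyberbullying, 0 = safe (hypothetical examples).
texts = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "great game last night, congrats!",
    "thanks for the help, really appreciate it",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# predict_proba gives a probability usable as a crude severity score.
print(model.predict_proba(["you should just disappear"])[0][1])
```

A real system would swap the toy data for a labeled corpus and likely a stronger model (e.g. a fine-tuned transformer), but the pipeline shape stays the same: text in, features, classifier, score out.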
How it works:
- Users submit text from messages or social media posts.
- The text is preprocessed (cleaned and tokenized) and converted into features.
- The AI model classifies the text as safe or as potential cyberbullying.
- Results are displayed with alerts and severity scores; a scoring sketch follows this list.
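
To illustrate the final step, here is a hypothetical helper that turns a classifier probability into an alert with a severity band. The thresholds and labels are assumptions for illustration, not values defined by this project.

```python
# Hypothetical mapping from a classifier probability to an alert with a
# severity band; the thresholds below are illustrative assumptions.
def severity_alert(text: str, score: float) -> str:
    if score >= 0.9:
        level = "HIGH"
    elif score >= 0.6:
        level = "MEDIUM"
    elif score >= 0.3:
        level = "LOW"
    else:
        return f"SAFE (score {score:.2f}): no action needed"
    return f"ALERT [{level}] (score {score:.2f}): review \"{text}\""

# Example usage with a score like the one from the classifier sketch above.
print(severity_alert("you should just disappear", 0.87))
```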
