In the digital age, cyberbullying has become a widespread and dangerous problem, causing serious emotional and psychological harm, especially to teenagers and young adults. Reliably detecting and minimizing instances of cyberbullying across multiple online communication channels therefore requires an automated, accurate method. Current detection techniques frequently rely on classifiers such as SVM and Naive Bayes, which perform poorly on large, noisy datasets, or they perform only binary classification (bullying or not) without identifying the type of offence. Furthermore, existing systems often depend on slow, manual user reporting for intervention and lack real-time analysis. To overcome these limitations, this study proposes a cyberbullying detection system based on Long Short-Term Memory (LSTM) networks, a type of Recurrent Neural Network (RNN) well suited to analysing sequential text input. The main goal is to build a reliable model that can accurately recognize and categorize instances of hate speech and cyberbullying in text. Importantly, the proposal goes beyond purely predictive methods by incorporating an automated user-blocking mechanism driven by a dynamic reputation score: each detected offence lowers the user's score, and when the score drops below a predetermined threshold the system automatically blocks the user from the platform.
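The reputation-score mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the initial score, the per-offence penalty, and the blocking threshold are all illustrative assumptions, since the abstract does not specify concrete values.

```python
# Sketch of a dynamic reputation score: each offence flagged by the
# classifier lowers the user's score, and the user is automatically
# blocked once the score falls below a threshold. All names and
# numeric values here are assumed, not taken from the paper.

BLOCK_THRESHOLD = 40   # assumed cutoff
PENALTY = 20           # assumed per-offence penalty

class ReputationTracker:
    def __init__(self, initial_score=100):
        self.scores = {}        # user_id -> current reputation score
        self.blocked = set()    # user_ids blocked from the platform
        self.initial_score = initial_score

    def report_offence(self, user_id):
        """Called when the detection model flags a user's message."""
        score = self.scores.get(user_id, self.initial_score) - PENALTY
        self.scores[user_id] = score
        if score < BLOCK_THRESHOLD:
            self.blocked.add(user_id)   # instantaneous automated block
        return score

    def is_blocked(self, user_id):
        return user_id in self.blocked
```

Under these assumed values, a user starting at 100 survives three flagged messages (score 40, not below the threshold) and is blocked on the fourth.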
Kesanakurthi Naga Siddhartha, K. Raj Kumar, K. Jayanth Varma, M. Amogh, Mamatha Samson
Gaurav Singh, Shubham Kumar, Surya Vijayan, Thinagaran Perumal, Mithileysh Sathiyanarayanan