Ganga Gudi, Mallamma V. Reddy, M. Hanumanthappa
Advancements in assistive technology have greatly improved accessibility for visually impaired individuals, enabling seamless interaction with textual content. This research introduces an approach that converts Kannada text into both speech and Braille, promoting multimodal accessibility. The proposed system combines a support vector machine (SVM) for Kannada text-to-Braille conversion with a deep learning-based text-to-speech (TTS) model for speech synthesis. The Braille translation module maps Kannada characters to their corresponding Braille representations using SVM classifiers, ensuring accurate conversion. In parallel, the speech synthesis component uses Tacotron2 to convert Kannada text into mel-spectrograms, which a WaveNet or HiFi-GAN vocoder then renders as high-quality Kannada speech. A dataset of 2,000 Kannada text-Braille pairs and corresponding text-speech samples is used for training and evaluation. Experimental results confirm that the proposed system translates Braille accurately while generating clear, natural Kannada speech. The integration of machine learning and deep learning techniques improves efficiency, scalability, and usability, making the system a reliable assistive tool for visually impaired Kannada-speaking individuals.
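The SVM-based text-to-Braille mapping described in the abstract can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the feature encoding, the four-character subset, and the Braille cells (which assume Bharati Braille's reuse of English letter patterns, e.g. ಕ → ⠅) are placeholder assumptions for demonstration, and the actual system is trained on 2,000 text-Braille pairs.

```python
# Illustrative sketch of SVM-based Kannada character -> Braille classification.
# Character subset, feature encoding, and Braille labels are placeholders,
# not the paper's dataset or features.
import numpy as np
from sklearn.svm import SVC

# Toy training pairs: Kannada characters mapped to Bharati-Braille-style cells.
pairs = {"ಅ": "\u2801", "ಕ": "\u2805", "ಗ": "\u281b", "ಮ": "\u280d"}

def features(ch):
    """Encode a character as a small numeric vector from its codepoint
    (hexadecimal digits of the Unicode value) -- a placeholder feature scheme."""
    cp = ord(ch)
    return [cp % 16, (cp // 16) % 16, (cp // 256) % 16]

X = np.array([features(c) for c in pairs])
labels = list(pairs.values())          # Braille cell for each class index
y = np.arange(len(labels))             # one class per character

# Linear SVM classifier over the toy character set.
clf = SVC(kernel="linear", C=10.0).fit(X, y)

def to_braille(text):
    """Predict the Braille cell for each (known) Kannada character."""
    return "".join(labels[clf.predict([features(c)])[0]] for c in text)
```

In practice, a multiclass SVM over such character features reduces to a lookup-like decision once trained; the value of the classifier lies in generalizing over richer features (e.g. conjunct consonants and vowel signs) than this sketch includes.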