EmotiSense: Enhancing Information Accessibility and User Experience through Multimodal Emotion Recognition for Individuals with Disabilities


Dhruvil Shah, Isha Shah, Smriti Raman, Sahil Shah, Komal Patil, Aruna Gawade, Nilesh Rathod, Angelin Florence

Abstract

Effective emotion recognition is critical for improving information access and user experience, particularly for individuals with disabilities who face challenges in verbal communication. This study presents EmotiSense, a multimodal deep learning approach that integrates audio and visual data to enhance emotion recognition for individuals with special needs. By leveraging Long Short-Term Memory (LSTM) networks for speech patterns and Convolutional Neural Networks (CNNs) for facial expressions, EmotiSense offers an innovative way to monitor emotional well-being in non-verbal interactions. The system aims to support librarians, educators, and caregivers by automating the detection of emotional shifts, thereby improving accessibility to digital library services and educational resources. The preliminary results (LSTM: 73.6%, CNN: 65.28%) demonstrate the potential of this approach to enhance user experience and provide tailored support, contributing to more inclusive information environments.
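To make the described architecture concrete, the following is a minimal Python/Keras sketch of a two-branch audio-visual emotion classifier of the kind the abstract outlines: an LSTM branch over speech features and a CNN branch over face images, combined by late fusion. All shapes, layer sizes, and the fusion strategy are assumptions for illustration only; the abstract reports the LSTM and CNN results separately and does not specify the authors' exact implementation.

# Hypothetical two-branch audio-visual emotion model (not the authors' code).
# Assumes TensorFlow/Keras; input shapes and class count are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_EMOTIONS = 7  # assumed number of emotion classes

# Audio branch: LSTM over a sequence of MFCC vectors (e.g., 100 frames x 40 coefficients).
audio_in = layers.Input(shape=(100, 40), name="audio_mfcc")
a = layers.LSTM(128)(audio_in)
a = layers.Dense(64, activation="relu")(a)

# Visual branch: small CNN over a grayscale face crop (e.g., 48x48 pixels).
face_in = layers.Input(shape=(48, 48, 1), name="face_image")
v = layers.Conv2D(32, 3, activation="relu")(face_in)
v = layers.MaxPooling2D()(v)
v = layers.Conv2D(64, 3, activation="relu")(v)
v = layers.MaxPooling2D()(v)
v = layers.Flatten()(v)
v = layers.Dense(64, activation="relu")(v)

# Late fusion: concatenate the two modality embeddings and classify.
fused = layers.Concatenate()([a, v])
out = layers.Dense(NUM_EMOTIONS, activation="softmax")(fused)

model = Model(inputs=[audio_in, face_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()

In practice the two branches could also be trained and evaluated independently, which would match the separately reported LSTM and CNN figures; the concatenation-based fusion shown here is only one common design choice.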
