Accomplishments

Enhancing Accessibility with LSTM-Based Sign Language Detection Systems


Category: Articles
Publisher: IAES
Publishing Date: 01-Apr-2025
Volume: 14
Issue: 2
Pages: 1355-1362

Individuals who are deaf or have hearing and speech difficulties rely predominantly on sign language to communicate, but sign language is not universally understood, which creates barriers to effective communication. Recent advances in deep learning have enabled systems that automatically interpret sign-language gestures and translate them into spoken language. This paper introduces a deep-learning-based sign language recognition system that uses a long short-term memory (LSTM) architecture to distinguish and classify both static and dynamic hand gestures. The system comprises three primary components: dataset collection, neural network evaluation, and a sign detection component that covers hand gesture extraction and sign language classification. The hand gesture extraction module uses recurrent neural networks (RNNs) to detect and track hand movements in video sequences, and a second RNN in the classification module assigns these gestures to established sign language classes. Evaluated on a custom dataset, the proposed system attains an accuracy of 99.42%, demonstrating its promise as an assistive technology for individuals with hearing impairments. The system can be further enhanced by adding more sign language classes and supporting real-time gesture interpretation.
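To make the pipeline concrete, below is a minimal sketch of an LSTM gesture classifier in Keras. It is not the paper's implementation: the sequence length, feature size (e.g., per-frame hand keypoints from a landmark detector), layer widths, and class count are all illustrative assumptions, and random arrays stand in for a real gesture dataset.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

SEQ_LEN = 30      # frames per gesture clip (assumed)
N_FEATURES = 126  # e.g., 2 hands x 21 landmarks x 3 coordinates (assumed)
N_CLASSES = 10    # number of sign classes (assumed)

model = Sequential([
    Input(shape=(SEQ_LEN, N_FEATURES)),
    # A sequence-returning LSTM feeds a second LSTM that summarizes the
    # whole clip into one vector, covering both static and dynamic gestures.
    LSTM(64, return_sequences=True),
    LSTM(128),
    Dense(64, activation="relu"),
    Dense(N_CLASSES, activation="softmax"),  # one probability per sign class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Random data standing in for a collected keypoint dataset.
X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.eye(N_CLASSES)[np.random.randint(0, N_CLASSES, size=8)]
model.fit(X, y, epochs=1, batch_size=4, verbose=0)

# Classify a new clip.
print("predicted class:", int(model.predict(X[:1], verbose=0).argmax()))
```

The design point the sketch illustrates is that stacking a sequence-returning LSTM under a summarizing LSTM lets a single network handle gestures that unfold over time as well as ones that are effectively static frames repeated across the clip.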
