Universal Access in the Information Society, 2025 (SCI-Expanded)
The development of a communication system for individuals with hearing impairment is of major importance for facilitating connection with the community through their primary communication tools, Sign Languages (SLs). This study therefore leverages advances in Artificial Intelligence to implement a Human–Computer Interaction Sign Language System (HCISLS) comprising four integral stages: the construction of a novel Graphical User Interface (GUI) built on an innovative segmentation technique, the creation of a unique Turkish Sign Language Alphabet (TSLA) dataset, the development of a new learning methodology, and the real-time deployment of the system. First, the GUI of the Sign Language System (G-SLS) relies on a new segmentation technique to delineate objects for both dataset collection and real-time recognition; the proposed GUI outperforms the models used in existing SLS research. Next, a dataset of 106,170 TSLA frames is collected; it not only fills a gap in SLS resources but also surpasses existing datasets. Furthermore, this study proposes TSLAnet, a Deep Learning model that achieves approximately 100% accuracy, F1-score, precision, and recall across the dataset classes, with a low loss of 0.0089 and without overfitting. Finally, an exhaustive comparison of TSLAnet variants is undertaken, along with comparisons to common deep learning architectures such as AlexNet, VGG16, VGG19, DenseNet, ResNet, EfficientNet, MobileNet, and other architectures employed in prior studies. These comparisons confirm the efficacy of the proposed model, which delivers high performance metrics, reduced computation time, and fewer parameters than other state-of-the-art models.
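As a brief illustration of the per-class evaluation metrics reported above (precision, recall, and F1-score), the following sketch computes them directly from predicted and true labels. The toy labels are hypothetical and are not drawn from the TSLA dataset; this is not the paper's evaluation code, only the standard metric definitions.

```python
def per_class_metrics(y_true, y_pred):
    """Compute (precision, recall, F1) for each class label.

    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall.
    """
    classes = sorted(set(y_true) | set(y_pred))
    metrics = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1 = 2 * precision * recall / denom if denom else 0.0
        metrics[c] = (precision, recall, f1)
    return metrics

# Toy example with three hypothetical sign classes "A", "B", "C"
y_true = ["A", "A", "B", "B", "C", "C"]
y_pred = ["A", "A", "B", "C", "C", "C"]
print(per_class_metrics(y_true, y_pred))
```

In a real sign-language evaluation, `y_true` and `y_pred` would hold the alphabet labels for every test frame, and near-identical per-class scores (as reported for TSLAnet) indicate that no class is systematically confused with another.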