Sign language recognition using deep neural networks - comparison of different models and hyperparameters
The aim of this study was to compare different deep learning models for the recognition of sign language gestures. One of the key challenges was to investigate the impact of hyperparameter changes on the accuracy of classifying sign gestures from images. The main goal was to write a program that classifies 24 categories of static signs, each representing an individual letter of the American Sign Language (ASL) alphabet (the letters J and Z are excluded because they require motion), with an accuracy above 90%. The scope of the work includes preparing a dataset, creating models, training them, and comparing them with previous iterations and with models pre-trained on large datasets. The work consists of five chapters. Chapter 2 presents the history of deep neural networks, explains the key concepts of the field, and describes how such networks operate and the main problems encountered when training them. Chapter 3 is devoted to the construction of the dataset and of a program for training, validation, and experiment tracking. Chapter 4 presents the course of the research together with a description of the results. The last chapter summarizes the results of the research and relates them to the theoretical part.
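To make the classification task concrete, the sketch below shows a minimal convolutional classifier for the 24 static ASL letters. It is not the program developed in this work: the framework (PyTorch), the input format (28×28 grayscale images, as in the public Sign Language MNIST dataset), and all layer sizes and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of a 24-class static-gesture classifier.
# Input size, architecture, and hyperparameters are assumptions,
# not the configuration used in this thesis.
import torch
import torch.nn as nn

class SignNet(nn.Module):
    def __init__(self, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 1x28x28 -> 32x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> 64x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64x7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Dropout(0.5),                  # dropout rate: one tunable hyperparameter
            nn.Linear(128, num_classes),      # 24 static ASL letters (no J or Z)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SignNet()
logits = model(torch.randn(8, 1, 28, 28))  # dummy batch of 8 grayscale images
print(logits.shape)                        # torch.Size([8, 24])
```

Hyperparameters such as the dropout rate, layer widths, and learning rate are exactly the kind of settings whose influence on classification accuracy the study compares across models.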