SUMMARY
Most interpersonal communication takes place through spoken language. People with hearing or speech impairments must instead rely on alternative forms of communication, such as sign language. While sign language enables communication among people with special needs, it restricts their communication with the general public, and this gap can lead to misinterpretation of information or tone. To address this problem, this work implements a method for translating Indian Sign Language (ISL) to text and speech. The method records input as serially rendered ISL alphabets from a webcam or a video file. The frames are then processed by a machine learning model to recognize the individual alphabets, and the recognized text can subsequently be translated into other languages, such as regional Indian languages. The workflow is built on OpenCV for image processing and Keras for the machine learning model.
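As a minimal illustration of the capture-and-classify step described above, the sketch below reads frames with OpenCV and classifies each one as an ISL alphabet with a Keras model. This is an assumed pipeline rather than the authors' exact implementation: the model file `isl_alphabet_model.h5`, the 64x64 input size, and the A-Z label list are hypothetical placeholders.

```python
# Minimal sketch (assumed pipeline): webcam capture -> Keras classifier -> letter overlay.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed A-Z alphabet classes
model = load_model("isl_alphabet_model.h5")               # hypothetical trained model

cap = cv2.VideoCapture(0)  # webcam; pass a filename instead to read a video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize to the assumed model input size and scale pixels to [0, 1]
    x = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Overlay the predicted alphabet on the frame for live feedback
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ISL to text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

In a full system, the per-frame letters would be accumulated into words and sentences before being passed to the translation and text-to-speech stages mentioned in the summary.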