Speech Emotion Recognition using ANN
Academic Project
Overview
Developed a system to recognize emotions from speech using Artificial Neural Networks (ANN). The model was trained on the RAVDESS dataset, using Mel-frequency cepstral coefficients (MFCCs) as features for emotion classification. Preprocessing involved noise reduction and normalization to enhance speech quality. The model achieved high accuracy in classifying emotions such as happiness, sadness, and anger. Technologies used include Python, TensorFlow, Keras, and Flask/Django for the UI. This project showcases expertise in AI, speech processing, and neural networks.
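As an illustration of the preprocessing and feature-extraction step described above, the sketch below loads an audio clip, peak-normalizes it, and computes time-averaged MFCCs. It assumes librosa for audio handling; the function name extract_mfcc and the parameter choices (40 coefficients, 22050 Hz sample rate) are illustrative assumptions, not the project's exact settings.

```python
# Minimal sketch: MFCC feature extraction with basic amplitude normalization,
# assuming librosa is used for audio loading and feature computation.
import numpy as np
import librosa


def extract_mfcc(path, n_mfcc=40, sr=22050):
    """Load an audio clip, normalize its amplitude, and return mean MFCCs."""
    signal, sr = librosa.load(path, sr=sr)
    # Peak-normalize the waveform so loudness differences between
    # recordings do not dominate the learned features.
    signal = signal / (np.max(np.abs(signal)) + 1e-9)
    # Compute MFCCs and average over time to get a fixed-length vector.
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.mean(mfcc.T, axis=0)


# Example: build a feature matrix from a list of RAVDESS file paths.
# features = np.array([extract_mfcc(p) for p in wav_paths])
```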
Problem Statement
In human communication, emotions play a critical role in expressing intent and context. However, machines and automated systems often fail to understand or respond to the emotional state of users, limiting their ability to provide meaningful interactions. This project aims to address the challenge of developing a system that can accurately recognize and classify human emotions from speech signals using Artificial Neural Networks (ANN). The solution will enable emotion-aware applications for more personalized and effective human-computer interactions.
Proposed Solution
To address the problem of emotion recognition in speech, this project implements an Artificial Neural Network (ANN) model that processes speech signals to classify emotions accurately. Using the RAVDESS dataset, speech features such as Mel-frequency cepstral coefficients (MFCCs) are extracted to capture the emotional characteristics of the audio. The ANN model is trained to recognize emotional states such as happiness, sadness, and anger from these features. The trained model is integrated into a user-friendly Flask/Django interface for real-time emotion detection. This system enhances human-computer interaction by enabling machines to respond more empathetically and contextually.
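As a rough sketch of how such an ANN classifier and a real-time interface could fit together, the example below defines a small Keras feed-forward network over 40-dimensional MFCC vectors and exposes it through a minimal Flask endpoint. The layer sizes, the EMOTIONS label list, and the /predict route are assumptions for illustration, not the project's actual configuration.

```python
# Sketch of a small Keras ANN over MFCC vectors plus a minimal Flask
# endpoint for inference. Layer sizes, the emotion label list, and the
# /predict route are illustrative assumptions.
import numpy as np
from tensorflow import keras
from flask import Flask, request, jsonify

EMOTIONS = ["happy", "sad", "angry"]  # subset used for illustration


def build_model(input_dim=40, num_classes=len(EMOTIONS)):
    model = keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


app = Flask(__name__)
model = build_model()  # in practice, load trained weights instead


@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body of the form {"mfcc": [... 40 floats ...]}.
    mfcc = np.array(request.json["mfcc"], dtype=np.float32).reshape(1, -1)
    probs = model.predict(mfcc, verbose=0)[0]
    return jsonify({"emotion": EMOTIONS[int(np.argmax(probs))]})
```

In a deployed version, the endpoint would accept an uploaded audio file, run the same MFCC extraction used at training time, and return the predicted emotion to the UI.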