Empath AI Can Detect Emotions from Voice in Real-Time
One of the biggest challenges AI has faced is its inability to reliably detect and respond to human emotions. Tech giants, including Amazon, have been investing heavily in adding emotional intelligence to their AI-based systems. Now a Tokyo-based startup named Empath has entered the scene, claiming it can detect emotions from your voice.
Empath uses algorithms trained on voice samples provided by Smartmedical, a Japanese health tech company. The startup has developed a platform it calls “Emotion AI”. Given an audio sample, Emotion AI classifies the speaker’s emotion as one of four categories – joy, anger, calmness, or sorrow.
The classification happens in real time and works even in environments with high ambient noise. In fact, Emotion AI determines characteristics of the input voice sample such as pitch, speed, and tone, and uses these characteristics to judge the emotional state of the speaker.
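To make the idea concrete, here is a minimal sketch of extracting the kinds of acoustic features mentioned above (pitch, loudness, and speaking rate) using the open-source librosa library in Python. This is purely illustrative and is not Empath’s actual pipeline; the file name `sample.wav` and the frequency range are assumptions.

```python
# Illustrative only: rough pitch, energy, and speaking-rate features from a voice clip.
# This is NOT Empath's implementation; it just shows the kind of signals involved.
import librosa
import numpy as np

y, sr = librosa.load("sample.wav", sr=16000)  # hypothetical input file

# Pitch (fundamental frequency) track via the YIN estimator
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)

# Loudness proxy: root-mean-square energy per frame
rms = librosa.feature.rms(y=y)[0]

# Speaking-rate proxy: detected onsets (roughly syllable bursts) per second
onsets = librosa.onset.onset_detect(y=y, sr=sr)
duration = len(y) / sr
rate = len(onsets) / duration if duration > 0 else 0.0

print(f"mean pitch: {np.nanmean(f0):.1f} Hz")
print(f"mean energy: {rms.mean():.4f}")
print(f"onsets per second: {rate:.2f}")
```

Features like these would typically be fed to a trained classifier that maps them to emotion labels; Empath has not disclosed the details of its model.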
Empath offers a Web API that can be incorporated into Windows, iOS, and Android apps, effectively adding emotion-detection capabilities to existing software. There is also an app from Empath named My Mood forecast, which employees can use to keep track of their emotions at work.
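For developers, the integration would presumably look like an ordinary HTTP call. The sketch below, in Python, shows how an app might post a short WAV clip to an emotion-analysis Web API and read back per-emotion scores. The endpoint URL, the `apikey` parameter, the `wav` field name, and the response keys are assumptions for illustration, not documented details of Empath’s API.

```python
# Hypothetical sketch of calling a voice-emotion Web API with a short WAV clip.
# Endpoint, parameter names, and response fields are assumptions, not Empath's documented API.
import requests

API_URL = "https://api.example-emotion.net/analyzeWav"  # placeholder endpoint
API_KEY = "your-api-key"                                # placeholder credential

def analyze_clip(path: str) -> dict:
    """Upload a WAV file and return the emotion scores from the JSON response."""
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            params={"apikey": API_KEY},
            files={"wav": f},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    scores = analyze_clip("sample.wav")
    # Assumed response shape: {"joy": 0.7, "anger": 0.1, "calm": 0.15, "sorrow": 0.05}
    print(scores)
```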
Given this potential, Empath’s solutions have been deployed in many call centers. The company claims that supervisor overtime in those call centers has dropped by 20%, and that sales have increased by as much as 400% – a result it points to as evidence of the efficiency of its AI.
The startup’s future plans include focusing on fields such as video games, robots, and vehicles. So, what are your thoughts on Empath AI? Share with us in the comments.