User behavioral predictive analytics through deep learning-based emotion recognition
This session introduces how deep-learning-based vision software that reads facial micro-expressions in real time can be embedded into the camera-enabled devices we use every day.
Today’s advanced computer vision technologies rely primarily on image and pattern recognition through vision sensors; facial expression recognition, however, relies for the most part on a hybrid of both. A large number of applications across the IoT, automotive, and consumer electronics ecosystems can benefit from adding artificially intelligent emotion recognition and vision sensing to the mix.
Using deep learning with convolutional neural networks (CNNs) alongside advanced machine learning techniques, embedded vision algorithms can now turn today’s and tomorrow’s camera-enabled devices and machines into “affective devices” that recognize the full range of facial emotions (joy, surprise, anger, fear, disgust, sadness, and neutral), along with mood indicators and other facial behavioral metrics, within a 30th of a second, fast enough to keep pace with a 30 fps video stream.
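To illustrate the final classification step such a system performs, here is a minimal Python sketch. The CNN itself is omitted; the function names and logit values are hypothetical. It maps a network's seven output scores to the emotion labels listed above via a softmax and argmax:

```python
import math

# The seven facial-emotion classes named in this session description.
EMOTIONS = ["joy", "surprise", "anger", "fear", "disgust", "sadness", "neutral"]

def softmax(logits):
    """Convert raw CNN output scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_emotion(logits):
    """Map a 7-way logit vector (one score per emotion) to (label, confidence)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

# Hypothetical logits, as a CNN's final layer might emit for a smiling face.
label, confidence = classify_emotion([4.2, 0.8, -1.0, -0.5, -2.0, -1.5, 1.1])
```

In a real pipeline this step follows face detection, alignment, and a forward pass through the network; running that full chain within a 30th of a second per frame is what the session's real-time claim refers to.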
A large number of applications in a wide range of industries, including gaming, healthcare, smart avatars, robotics, and automotive, can benefit from embedding this next generation of emotional intelligence, enabling a fully immersive context-awareness experience through ambient intelligence.
This session also includes a highly rated one-minute live demo of the technology on stage.