Speaker "JR Alaoui" Details

Topic

Adaptive interfaces for Ambient Intelligence (AmI) powered by big data from facial emotion recognition.

Abstract

This session will reveal a novel use of Convolutional Neural Networks (CNNs) as a Deep Learning architecture for building a facial expression recognition vocabulary. The first half of the session will cover how this approach enables vision algorithms to read micro-expressions in real time and generate a wealth of facial big-data analytics with a high level of accuracy, speed and customization. The second half will survey the industry verticals, namely embedded systems and video analytics, that already benefit from integrating emotion recognition technology into their commercial applications to amplify Human Computer Interaction and context awareness, and thereby enhance user experiences through better Ambient Intelligence. The session will include a highly rated live demo on stage!
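For readers unfamiliar with the technique named above, the following is a minimal sketch of a CNN-based facial expression classifier. The architecture, the 48x48 grayscale face crops and the seven emotion labels are illustrative assumptions (FER-style conventions), not details of EmoVu or the speaker's system.

# Minimal CNN emotion classifier sketch (PyTorch), illustrating the general idea
# of mapping a face crop to one of several expression classes.
import torch
import torch.nn as nn

# Assumed label set; real systems may use a different or finer-grained vocabulary.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        # Two convolutional blocks: conv -> ReLU -> max-pool, halving spatial size each time.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 24x24 -> 12x12
        )
        # Fully connected head mapping pooled features to emotion logits.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = EmotionCNN()
    # A batch of four grayscale 48x48 face crops; random data stands in for real video frames.
    faces = torch.randn(4, 1, 48, 48)
    logits = model(faces)
    predictions = [EMOTIONS[i] for i in logits.argmax(dim=1)]
    print(predictions)

In a real-time pipeline of the kind described in the abstract, a face detector would first crop faces from each video frame before a classifier along these lines produces per-frame expression estimates.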

Profile

Modar is a serial entrepreneur and expert in Artificially Intelligent vision technologies, Deep Learning and Ambient Intelligence (AmI). He is currently founder and CEO of Eyeris, developer of the world's leading Deep Learning-based emotion recognition software. The company's flagship product, EmoVu, reads facial micro-expressions in real time and uses Convolutional Neural Networks (CNNs) as a Deep Learning architecture to train and deploy its algorithms across a myriad of today's commercial applications. Modar brings a decade of combined experience in Human Machine Interaction (HMI) and Audience Behavioral Measurement. He is a frequent speaker and keynote presenter on "Ambient Intelligence", has won several technology and innovation awards, and has been featured in many major publications for his work.