
Speaker "Siddha Ganju" Details


Topic

Embedded Deep Learning: deep learning for embedded systems

Abstract

Deep neural networks (DNNs), as shown in recent computer science competitions and conferences, are the go-to solution for many problems. DNN algorithms use artificial neural networks (ANNs), which are modeled loosely on the biological neural networks in our brains, yet ANNs cannot compute at the speed or accuracy that our brains can. Being extremely computation- and memory-intensive makes them difficult to deploy on everyday lightweight, power-aware devices: there is a limit to the amount of computation that can be effectively packed into the memory of such devices without sacrificing battery life or overall power. We will talk about the constraints that current devices impose and how to significantly reduce the memory footprint while running a DL-powered inference engine.

Meanwhile, the growing popularity of mobile computing platforms means that mobile devices must deliver high performance. Deep learning is radically changing how sensor data is interpreted to extract the high-level information needed by mobile apps. Executing DL algorithms locally allows data to stay on the device, avoiding the latency of transmitting data to the cloud and potentially alleviating privacy concerns. However, DL algorithms are by nature computationally expensive and memory-intensive, making them challenging to deploy on a mobile device. Nonetheless, it is critical that the gains in inference and accuracy that deep models afford become embedded in future generations of embedded and handheld everyday devices. Recent experimental results also make a strong case for wider adoption of deep learning in wearables and IoT. We will present DL applications and demos for everyday mobile devices such as smartphones, security cameras, IoT devices, smartwatches, drones, and self-driving cars. Utilizing deep learning algorithms in these systems can make everyday life easier.
Many developers have felt that the deep learning tooling landscape is fragmented by a growing gap between the generic, productivity-oriented tools that optimize for algorithm development and the task-specific ones that optimize for speed and scale. This creates an artificial barrier to bringing new innovations into real-world applications. Developing new hardware is a problem at the heart of deep learning. Following Alan Kay's famous quote, 'People who are really serious about software should make their own hardware,' we developed DeepVision's processor, which offers better performance and energy efficiency than current GPUs: our performance per watt is 50 times better than the Nvidia Tegra X1, and the processor caters to all software developers. Attendees will leave knowing DeepVision's solution for energy-efficient deep learning, at the intersection of machine learning and computer architecture. We will also cover general lessons applicable to all deep learning systems running on compute-constrained devices.
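As a flavor of the memory-footprint reduction discussed in the abstract, one widely used technique is post-training weight quantization. The sketch below is a minimal NumPy illustration of 8-bit linear quantization, not the speakers' or DeepVision's actual method; all function names here are hypothetical:

```python
import numpy as np

def quantize_int8(weights):
    """Linearly quantize float32 weights to int8, returning the
    quantized tensor and the scale needed to dequantize."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

# A float32 weight matrix uses 4 bytes per value; the int8 version
# uses 1 byte, so storage shrinks by 4x at a small accuracy cost.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes // q.nbytes)  # 4
```

Production inference engines combine ideas like this with pruning and hardware-aware kernels, but the storage arithmetic is the same: fewer bits per weight means a smaller model in memory.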

Profile

Siddha is an experienced data scientist specializing in machine learning and deep learning, with a background in extracting insights from CERN's petabyte-scale data. She has been an invited speaker at the Strata + Hadoop Conference and the Grace Hopper Conference. She was the Youth Women's Representative for India, 2013, at the IET. Siddha is also a member of the Open Leadership Cohort at the Mozilla Science Lab. Currently, she is working as a Deep Learning Data Scientist at Deep Vision Inc.