
Speaker "Bharadwaj Pudipeddi" Details Back


Topic

Large-Scale Modular Acceleration for AI, Search, and In-Memory Computing

Abstract

We propose a new modular FPGA-based server for specialized cloud computing in AI and search with extreme scalability, flexibility, and integrated performance. One of the key features of the server is a low-level software abstraction layer that partitions a large modular array of FPGAs into groups allocated to accelerating various workloads such as computer vision inference, elastic search, and in-memory computing. The FPGAs are configured dynamically with high-abstraction bitstreams for complete offload acceleration. The software layer provides a low-latency conduit between and within the FPGA groups, enabling real-time performance for connected applications such as taking a captured camera image, searching through documents, and retrieving geoinformatics data from an in-memory database for recommendations and security.
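
To illustrate the kind of partitioning the abstraction layer performs, here is a minimal, hypothetical Python sketch. The names (FpgaArray, allocate_group, load_bitstream) and the workload/bitstream labels are assumptions for illustration, not NVXL's actual API: the sketch only shows how an array of FPGAs might be divided into workload groups, with each group configured by loading a bitstream onto its members.

```python
# Hypothetical sketch of an abstraction layer that partitions an FPGA array
# into workload-specific groups and configures each group with a bitstream.
# All names and behavior are illustrative assumptions, not NVXL's actual API.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Fpga:
    """One FPGA card in the modular array."""
    device_id: int
    bitstream: Optional[str] = None

    def load_bitstream(self, path: str) -> None:
        # A real implementation would program the device; this sketch
        # only records which bitstream the device is configured with.
        self.bitstream = path


@dataclass
class FpgaGroup:
    """A set of FPGAs allocated to one workload (e.g. vision inference)."""
    workload: str
    members: List[Fpga] = field(default_factory=list)

    def configure(self, bitstream_path: str) -> None:
        # Dynamically (re)configure every member of the group.
        for fpga in self.members:
            fpga.load_bitstream(bitstream_path)


class FpgaArray:
    """Software layer that partitions the array into workload groups."""

    def __init__(self, num_devices: int) -> None:
        self._free: List[Fpga] = [Fpga(device_id=i) for i in range(num_devices)]
        self._groups: Dict[str, FpgaGroup] = {}

    def allocate_group(self, workload: str, count: int) -> FpgaGroup:
        # Carve a group out of the pool of unallocated FPGAs.
        if count > len(self._free):
            raise RuntimeError(
                f"only {len(self._free)} FPGAs free, {count} requested"
            )
        group = FpgaGroup(workload=workload, members=self._free[:count])
        self._free = self._free[count:]
        self._groups[workload] = group
        return group


if __name__ == "__main__":
    array = FpgaArray(num_devices=16)
    # Partition the array among the three example workloads from the abstract.
    array.allocate_group("vision-inference", 8).configure("cnn_inference.bit")
    array.allocate_group("elastic-search", 4).configure("search_offload.bit")
    array.allocate_group("in-memory-db", 4).configure("kv_store.bit")
```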

Profile

Bharadwaj is the co-founder and CTO of NVXL, which is building a highly dense clustered acceleration platform for AI, Big Data, and transcoding workloads. He is a product entrepreneur and enterprise cloud architect who previously worked for many years at Intel and later at startups in the areas of CPU design, high-performance fabrics, flash memory storage, and scalable computing. He holds a Master’s degree in Computer Engineering from Virginia Tech and has published several patents and papers in the areas of CPU design, flash memory, and scalable computing. He is currently interested in large-scale data processing for AI, combining acceleration with storage for low latencies and extreme scalability. In his off time, he likes to read, travel, and hike.