Center for Signal and Information Processing (CSIP) Seminar Series presents:
Deep Learning-Based Analog Joint Source-Channel Coding for Distributed Functional Computation
Introducing Research from Yashas Saidutta
Date: Friday, February 26, 2021
Bluejeans link: https://bluejeans.com/621436705
Yashas Saidutta is a PhD student working with Professor Faramarz Fekri, with a focus on developing machine learning methods for applications in communication systems. His other research interests include Bayesian optimization and reinforcement learning. He was a research intern at Halliburton Oilfield Services during the summers of 2018 and 2019. Before coming to Georgia Tech, he obtained his bachelor's degree from the National Institute of Technology Karnataka, India.
ABSTRACT: With the number of IoT devices projected to exceed 75 billion by 2025, it is important to design systems in which distributed sensors communicate efficiently with the information receiver in a target-aware manner. In this talk, we consider joint source-channel coding for distributed analog functional computation over both Gaussian multiple-access and AWGN channels. In particular, we study deep-neural-network-based solutions in which the encoders and decoders are learned, and we examine three methods of training them. The first is based on autoencoders; the second on a Lagrangian formulation of the optimization objective that incorporates the power constraint; and the third on the information bottleneck principle. We discuss theoretical connections of these methods to the indirect rate-distortion problem, along with the theoretical basis for the superiority of the third method over the first two. Finally, we present empirical performance results for image classification on the CIFAR-10 dataset.
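To make the second training method concrete, the following is a minimal, hypothetical sketch of a Lagrangian objective for learned joint source-channel coding over an AWGN channel: distortion at the decoder plus a multiplier times the transmit-power constraint violation. The linear `encode`/`decode` maps, the function names, and all parameter values are illustrative assumptions, not the speaker's actual architecture, which uses deep neural networks trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Hypothetical linear encoder standing in for a learned neural encoder:
    # maps a source sample to a channel input.
    return W @ x

def decode(y, V):
    # Hypothetical linear decoder: estimates the target from the noisy output.
    return V @ y

def lagrangian_objective(X, W, V, noise_std=0.1, power_budget=1.0, lam=0.5):
    """Distortion + lam * (average transmit power - power budget).

    Minimizing this over (W, V) trades reconstruction error against the
    average-power constraint, as in a Lagrangian relaxation.
    """
    Z = np.stack([encode(x, W) for x in X])            # channel inputs
    Y = Z + noise_std * rng.standard_normal(Z.shape)   # AWGN channel
    Xhat = np.stack([decode(y, V) for y in Y])         # decoder estimates
    distortion = np.mean((X - Xhat) ** 2)              # mean squared error
    avg_power = np.mean(np.sum(Z ** 2, axis=1))        # average transmit power
    return distortion + lam * (avg_power - power_budget)
```

In an actual training loop, this scalar would be minimized by gradient descent over the encoder and decoder parameters, with the multiplier `lam` controlling how strictly the power budget is enforced.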