Learning to Compress and Communicate
Authors: Dragotti, Gündüz, Kontoyiannis, Johnson and Jaggi
Computation, communication and coordination lie at the core of intelligence; all three, in turn, rely on the transmission and transformation of information.
Information theory has played a prominent role in identifying the fundamental limits of compression and communication, as well as in guiding the design of practical algorithms and codes that achieve these limits. In parallel, there has been significant recent progress on data-driven approaches to compression and communication problems.
In this WP, we will use information-theoretic principles to design efficient architectures and training strategies for learning under communication constraints.
First, exploiting the natural sparsity present in most common information sources, we will establish precise achievability and converse coding theorems for lossless and lossy compression. Then, building on recent advances in generative models (GANs, VAEs, diffusion models), we will design learning algorithms for joint source-channel coding.
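To make the second step concrete, the sketch below shows one minimal instance of learned joint source-channel coding: an encoder network maps source samples directly to power-normalized channel symbols, an additive Gaussian noise layer models the channel, and a decoder network reconstructs the source, with the whole pipeline trained end-to-end on a distortion loss. The architecture, dimensions, SNR and the toy Gaussian source are illustrative assumptions, not the designs to be developed in this WP.

```python
# Minimal sketch of a learned joint source-channel coding autoencoder
# over an AWGN channel (illustrative assumptions throughout).
import torch
import torch.nn as nn

class JSCCAutoencoder(nn.Module):
    def __init__(self, source_dim=64, channel_uses=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(source_dim, 128), nn.ReLU(),
            nn.Linear(128, channel_uses),
        )
        self.decoder = nn.Sequential(
            nn.Linear(channel_uses, 128), nn.ReLU(),
            nn.Linear(128, source_dim),
        )

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)
        # Enforce an average transmit-power constraint of 1 per channel use.
        z = z / z.norm(dim=1, keepdim=True) * (z.shape[1] ** 0.5)
        noise_std = 10.0 ** (-snr_db / 20.0)
        y = z + noise_std * torch.randn_like(z)  # AWGN channel
        return self.decoder(y)

model = JSCCAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(32, 64)  # toy i.i.d. Gaussian source batch (assumed)
    loss = nn.functional.mse_loss(model(x), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the channel layer is differentiable, the encoder and decoder are optimized jointly for end-to-end distortion rather than through separate compression and error-correction stages.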
Equipped with these tools, we will tackle problems such as distributed compression and multi-user communication with feedback, for which structured model-driven approaches have failed.
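As a reference point for the distributed-compression task, the classical Slepian-Wolf theorem characterizes the lossless rate region for two correlated sources encoded separately and decoded jointly; the notation below is standard and not specific to this WP:

\[
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y).
\]

Constructing practical structured codes that operate near the corners of this region has proved difficult, which is precisely the gap the learned approaches above are intended to close.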
Finally, we note that machine learning algorithms are highly vulnerable both to data manipulation by malicious actors and to leakage of sensitive private information, problems that are exacerbated in multi-agent systems. We will address these privacy and security challenges by significantly extending current robustness frameworks based on error-correcting codes.
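To illustrate the coding-based robustness idea in its simplest form, the sketch below uses a repetition-style redundancy scheme: each gradient is computed by several workers and the coordinator takes a coordinate-wise median over the replicas, so a bounded number of corrupted reports cannot arbitrarily bias the update. The replication factor, the median rule and the numpy setup are illustrative assumptions, not the schemes to be developed here.

```python
# Illustrative sketch only: repetition-coded gradient aggregation.
# With r replicas per gradient, a coordinate-wise median tolerates up to
# floor((r - 1) / 2) corrupted copies.
import numpy as np

def robust_aggregate(replicas: np.ndarray) -> np.ndarray:
    """replicas: array of shape (r, d) holding r reported copies of a d-dim gradient."""
    return np.median(replicas, axis=0)

rng = np.random.default_rng(0)
true_grad = rng.normal(size=10)
r = 5  # replication factor (assumed)
replicas = np.tile(true_grad, (r, 1)) + 0.01 * rng.normal(size=(r, 10))
replicas[0] = 1e6  # one malicious worker reports garbage
print(np.allclose(robust_aggregate(replicas), true_grad, atol=0.1))  # expected: True
```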