
Seminar: Capacity upper bounds for deletion-type channels

November 28, 2018 @ 4:00 pm - 5:00 pm

Title: Capacity upper bounds for deletion-type channels
Speaker: Dr Mahdi Cheraghchi
Affiliation: Dept of Computing, Imperial College London
Location: 218 Huxley Building
Time: 16:00 – 17:00

Abstract. We develop a systematic approach, based on convex programming and real analysis, for obtaining upper bounds on the capacity of the binary deletion channel and, more generally, channels with i.i.d. insertions and deletions. Beyond the classical deletion channel, we give special attention to the Poisson-repeat channel introduced by Mitzenmacher and Drinea (IEEE Transactions on Information Theory, 2006). Our framework can be applied to obtain capacity upper bounds for any repetition distribution (the deletion and Poisson-repeat channels corresponding to the special cases of Bernoulli and Poisson distributions). Our techniques essentially reduce the task of proving capacity upper bounds to maximizing a univariate, real-valued, and often concave function over a bounded interval. We show the following:

1. The capacity of the binary deletion channel with deletion probability $d$ is at most $(1-d)\log\phi$ for $d > 1/2$ and, assuming the capacity function is convex, at most $1 - d\log(4/\phi)$ for $d < 1/2$, where $\phi = (1+\sqrt{5})/2$ is the golden ratio. This is the first nontrivial capacity upper bound for any value of $d$ outside the limiting case $d \to 0$ that is fully explicit and proved without computer assistance.
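The two explicit bounds above can be evaluated directly; a minimal sketch in Python, assuming logarithms are base 2 (capacity in bits per channel use) and that the $d < 1/2$ branch carries the stated convexity assumption:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def deletion_capacity_upper_bound(d: float) -> float:
    """Explicit capacity upper bounds for the binary deletion channel
    with deletion probability d, as stated in the abstract."""
    if not 0 < d < 1:
        raise ValueError("deletion probability must lie in (0, 1)")
    if d >= 0.5:
        # unconditional bound for d > 1/2
        return (1 - d) * math.log2(PHI)
    # bound for d < 1/2, valid under the convexity assumption
    return 1 - d * math.log2(4 / PHI)
```

Note that the two expressions agree at $d = 1/2$: $1 - \tfrac{1}{2}\log(4/\phi) = \tfrac{1}{2}\log\phi$, so the bound is continuous there.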

2. We derive the first set of capacity upper bounds for the Poisson-repeat channel.

3. We derive several novel upper bounds on the capacity of the deletion channel. All upper bounds are maxima of efficiently computable, concave, univariate real functions over a bounded domain. In turn, we upper-bound these functions in terms of explicit elementary and standard special functions, whose maxima can be found even more efficiently (and sometimes analytically, for example for $d = 1/2$).

Along the way, we develop several new techniques of potentially independent interest in information theory, probability, and mathematical analysis.

[Based on an article published in proceedings of ACM STOC 2018]

Biography. Dr Mahdi Cheraghchi is a Lecturer in the Department of Computing at Imperial College London. Previously, he was a Qualcomm Research Fellow at the Simons Institute for the Theory of Computing at U.C. Berkeley and held post-doctoral researcher positions at the MIT Computer Science and Artificial Intelligence Laboratory, the Computer Science Department of Carnegie Mellon University, and the University of Texas at Austin. He received his PhD in Computer Science from EPFL, Switzerland, where he worked as a research assistant in the Laboratory of Algorithms (ALGO) under the supervision of Amin Shokrollahi. Dr Cheraghchi's main research interests are in theoretical computer science.
