Abstract. A connection between the distributed alternating direction method of multipliers (ADMM) and lifted Markov chains was recently proposed by França et al. (2016) for a non-strictly-convex consensus problem parametrized by a graph G. This was followed by a conjecture that ADMM is faster than gradient descent by a square-root factor in its convergence time, in close analogy to the mixing speedup achieved by lifting several Markov chains. Nevertheless, a proof was lacking. In this talk, I will start by revisiting França's conjecture. I will then fully characterize the distributed over-relaxed ADMM for this consensus problem in terms of the topology of the graph G. To be specific, I will relate its convergence rate to the mixing time of random walks on the graph G. A consequence of this result is a proof of the aforementioned conjecture, which, interestingly, holds for any graph, even those whose random walks cannot be accelerated via Markov chain lifting.
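To fix ideas before the talk, the flavor of the algorithm in the abstract can be sketched on a toy instance. The following is a minimal Python sketch of decentralized over-relaxed ADMM for a quadratic consensus problem on a graph G, with one auxiliary variable per edge; the specific objective (each node i fits a local value b_i subject to agreement along edges), the parameter values (rho, alpha), and all function names are illustrative assumptions, not details taken from the talk or the cited paper.

```python
import numpy as np

def overrelaxed_consensus_admm(edges, b, rho=1.0, alpha=1.5, iters=200):
    """Toy decentralized over-relaxed ADMM for
        min sum_i (1/2)(x_i - b_i)^2   s.t.  x_i = x_j for every edge (i, j).
    Each edge e = (i, j) carries an auxiliary variable z[e] and two dual
    variables u[e, 0], u[e, 1], one per endpoint constraint x_i = z_e, x_j = z_e.
    With a connected graph the minimizer is x_i = mean(b) for all i.
    """
    b = np.asarray(b, dtype=float)
    n = len(b)
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    z = np.zeros(len(edges))
    u = np.zeros((len(edges), 2))
    x = b.copy()
    for _ in range(iters):
        # x-update: closed form for the quadratic local objectives
        s = np.zeros(n)
        for e, (i, j) in enumerate(edges):
            s[i] += rho * z[e] - u[e, 0]
            s[j] += rho * z[e] - u[e, 1]
        x = (b + s) / (1.0 + rho * deg)
        for e, (i, j) in enumerate(edges):
            # over-relaxation: mix the fresh x-values with the previous z[e]
            xi = alpha * x[i] + (1.0 - alpha) * z[e]
            xj = alpha * x[j] + (1.0 - alpha) * z[e]
            # z-update (closed form), then the dual ascent step
            z[e] = (u[e, 0] + u[e, 1] + rho * (xi + xj)) / (2.0 * rho)
            u[e, 0] += rho * (xi - z[e])
            u[e, 1] += rho * (xj - z[e])
    return x
```

For instance, on a 4-cycle with b = [0, 1, 2, 3] every node converges to the average 1.5. The over-relaxation parameter alpha in (0, 2) is the knob the abstract's "over-relaxed" refers to; alpha = 1 recovers plain ADMM.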
About the speaker. José Bento completed his Ph.D. in Electrical Engineering at Stanford University, where he worked with Professor Andrea Montanari on statistical inference and structural learning of graphical models. After his Ph.D., he moved to Disney Research's Boston lab, where he worked with Dr. Jonathan Yedidia on algorithms for distributed optimization, robotics, and computer vision. He is now with the Computer Science department at Boston College. His current research lies at the intersection of distributed algorithms and machine learning. In 2014 he received a Disney Inventor Award for his work on distributed optimization, which recently led to an approved patent. In 2016 he was awarded a $10M NIH joint grant to study the emergence of antibiotic resistance, and in 2017 a $2M NSF joint grant to study measures of distance between large graphs.