Data Science and Analytics Seminar | Adventures in Distributed Asynchronous Optimization and Learning.
Supporting the below United Nations Sustainable Development Goals:
Many tasks in networked systems, such as federated learning, economic dispatch in power systems, and multi-robot coordination, can be formulated as distributed optimization or learning problems. When solving these problems, asynchronous algorithms often offer greater efficiency, implementation flexibility, and robustness against single-node failures than their synchronous counterparts. However, existing asynchronous algorithms typically use an upper bound on the information delays in the system to determine step-sizes. Not only are such delay bounds hard to obtain in advance, but they also result in unnecessarily small step-sizes and slow convergence. In this talk, I will share our recent efforts to address this issue. We first show that the actual delays in the system can be easily measured, and we adapt the step-sizes to these actual delays to accelerate convergence. We also propose a class of asynchronous methods that converge under a delay-free step-size condition. Compared to step-sizes relying on delay bounds, our delay-adaptive and delay-free step-sizes are easier to determine, less conservative, and yield much faster convergence. These are significant departures from the state of the art. Moreover, the ideas of adapting step-sizes to the actual delays and of developing asynchronous schemes that converge with delay-free step-size conditions are general, and they may apply to a broad range of asynchronous algorithms.
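To make the contrast concrete, the sketch below (not the speaker's actual algorithm) simulates asynchronous gradient descent on a simple quadratic, where each applied gradient may be stale. Instead of a fixed step-size tuned to a worst-case delay bound, the step-size adapts to the measured delay of each update via the illustrative rule eta_t = eta0 / (1 + tau_t); the rule, the objective, and all parameter values are assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch, NOT the speaker's method: asynchronous gradient
# descent on f(x) = 0.5 * ||x||^2, whose gradient at x is simply x.
# Each update may use a stale iterate x_{t - tau_t}; the step-size is
# adapted to the *measured* delay tau_t rather than a worst-case bound.

rng = np.random.default_rng(0)
x = np.array([10.0, -10.0])
eta0 = 0.5                     # base step-size (assumed value)
history = [x.copy()]           # past iterates, so stale gradients can be drawn

for t in range(200):
    tau = int(rng.integers(0, 4))                    # actual delay observed
    stale_x = history[len(history) - 1 - min(tau, len(history) - 1)]
    grad = stale_x                                   # gradient of 0.5*||x||^2
    eta = eta0 / (1 + tau)                           # delay-adaptive step-size
    x = x - eta * grad
    history.append(x.copy())

print(np.linalg.norm(x))       # should be close to 0 after 200 iterations
```

With a fixed step-size one would have to set eta = eta0 / (1 + tau_max) for a conservative bound tau_max, shrinking every step even when most updates arrive with little or no delay; adapting to the observed tau keeps the step-size large whenever the actual delay is small.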
Xuyang Wu received the B.S. degree in Information and Computing Science from Northwestern Polytechnical University, Xi’an, China, in 2015, and the Ph.D. degree in Communication and Information Systems from the University of Chinese Academy of Sciences, China, in 2020. He is currently a postdoctoral researcher at the Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, working with Prof. Mikael Johansson and Prof. Sindri Magnússon. His research interests include distributed optimization and learning.