Accelerated Multi-Agent Optimization Method over Stochastic Networks


Abstract

We propose a distributed method to solve a multi-agent optimization problem with a strongly convex cost function and equality coupling constraints. The method is based on Nesterov's accelerated gradient approach and operates over stochastically time-varying communication networks. Under the standard assumptions of Nesterov's method, we show that the sequence of expected dual values converges to the optimal value at a rate of $\mathcal{O}(1/k^2)$. Furthermore, we provide a simulation study in which the method solves an optimal power flow problem on a well-known benchmark case.
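The following is a minimal, centralized sketch of the underlying idea referenced in the abstract (Nesterov-accelerated gradient ascent on the dual of an equality-constrained, strongly convex problem), not the paper's distributed algorithm over stochastic networks; the quadratic cost, constraint data, and step-size choice are illustrative assumptions.

```python
import numpy as np

# Hypothetical problem data:  min_x 0.5*x'Qx + c'x  s.t.  Ax = b,
# with Q positive definite (strongly convex cost).
rng = np.random.default_rng(0)
n, m = 20, 5
Q = np.diag(rng.uniform(1.0, 5.0, n))        # strongly convex quadratic cost
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))              # equality coupling constraints Ax = b
b = rng.standard_normal(m)

Q_inv = np.linalg.inv(Q)
L = np.linalg.norm(A @ Q_inv @ A.T, 2)       # Lipschitz constant of the dual gradient

def dual_grad(lam):
    # Primal minimizer of the Lagrangian: x*(lam) = argmin_x f(x) + lam'(Ax - b)
    x = -Q_inv @ (c + A.T @ lam)
    return A @ x - b                          # gradient of the (concave) dual function

lam = np.zeros(m)
y = lam.copy()
t = 1.0
for k in range(500):
    lam_next = y + (1.0 / L) * dual_grad(y)   # gradient ascent step on the dual
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
    y = lam_next + ((t - 1.0) / t_next) * (lam_next - lam)  # Nesterov extrapolation
    lam, t = lam_next, t_next

x_opt = -Q_inv @ (c + A.T @ lam)
print("constraint residual ||Ax - b|| =", np.linalg.norm(A @ x_opt - b))
```

With acceleration, the gap between the optimal dual value and the dual iterates shrinks at the $\mathcal{O}(1/k^2)$ rate; the paper establishes an analogous rate for the expected dual values in the distributed, stochastic-network setting.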
