Abstract
This article proposes a distributed stochastic projection-free algorithm for large-scale constrained finite-sum optimization in which the constraint set is complicated, so that projecting onto it can be expensive. The global cost function is distributed among multiple agents, each of which computes its local stochastic gradients and communicates with its neighbors to solve the global problem. Stochastic gradient methods offer low computational complexity, but the variance introduced by random sampling makes them slow to converge. To construct a convergent distributed stochastic projection-free algorithm, this article incorporates variance reduction and gradient tracking techniques into the Frank–Wolfe (FW) update. We develop a novel sampling rule for the variance reduction technique to reduce the variance introduced by stochastic gradients. Complete and rigorous proofs show that the proposed distributed projection-free algorithm converges at a sublinear rate and enjoys superior complexity guarantees for both convex and nonconvex objective functions. Comparative simulations demonstrate the convergence and computational efficiency of the proposed algorithm.
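To illustrate the kind of update the abstract describes, the following is a minimal, deterministic sketch of a decentralized Frank–Wolfe iteration with gradient tracking over a probability-simplex constraint. It is not the paper's algorithm: the stochastic variance-reduced sampling rule is omitted, the quadratic local costs, the mixing matrix `W`, and all function names are illustrative assumptions. The key points it shows are that FW needs only a linear minimization oracle (no projection) and that each agent mixes its iterate and gradient tracker with its neighbors'.

```python
import numpy as np

def simplex_lmo(grad):
    # Linear minimization oracle over the probability simplex:
    # argmin_{s in simplex} <grad, s> is the vertex at the smallest gradient entry.
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def decentralized_fw(As, bs, W, T=200):
    """Decentralized FW with gradient tracking (illustrative sketch).

    As, bs: per-agent data for local costs f_i(x) = 0.5*||A_i x - b_i||^2
    W: doubly stochastic mixing matrix over the agent network
    """
    m = len(As)                      # number of agents
    d = As[0].shape[1]               # decision dimension
    x = np.full((m, d), 1.0 / d)     # all agents start at the simplex center
    g = np.array([A.T @ (A @ xi - b) for A, b, xi in zip(As, bs, x)])
    y = g.copy()                     # gradient trackers, initialized at local gradients
    for t in range(T):
        gamma = 2.0 / (t + 2)        # standard diminishing FW step size
        s = np.array([simplex_lmo(yi) for yi in y])
        x = W @ x + gamma * (s - W @ x)          # mix with neighbors, step toward LMO vertex
        g_new = np.array([A.T @ (A @ xi - b) for A, b, xi in zip(As, bs, x)])
        y = W @ y + g_new - g                    # gradient-tracking recursion
        g = g_new
    return x
```

Because each iterate is a convex combination of simplex points, feasibility is maintained for free, which is exactly why projection-free methods are attractive when projections are expensive.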
| Original language | English |
| --- | --- |
| Pages (from–to) | 2479–2494 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Automatic Control |
| Volume | 70 |
| Issue | 4 |
| DOI | |
| Publication status | Published - Apr. 2025 |