Research Article
Distributed Stochastic Optimization for Constrained and Unconstrained Optimization
@INPROCEEDINGS{10.4108/icst.valuetools.2011.245895, author={Pascal Bianchi and J\'{e}r\'{e}mie Jakubowicz}, title={Distributed Stochastic Optimization for Constrained and Unconstrained Optimization}, proceedings={5th International ICST Conference on Performance Evaluation Methodologies and Tools}, publisher={ICST}, proceedings_a={VALUETOOLS}, year={2012}, month={6}, keywords={Stochastic approximation, distributed optimization, differential inclusion}, doi={10.4108/icst.valuetools.2011.245895} }
Pascal Bianchi
Jérémie Jakubowicz
Year: 2012
Distributed Stochastic Optimization for Constrained and Unconstrained Optimization
VALUETOOLS
ICST
DOI: 10.4108/icst.valuetools.2011.245895
Abstract
In this paper, we analyze the convergence of a distributed Robbins-Monro algorithm for both constrained and unconstrained optimization in multi-agent systems. The algorithm seeks local minima of a (nonconvex) objective function that is assumed to be a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. It is proved that i) an agreement is achieved between agents on the value of the estimate, and ii) the algorithm converges to the set of Kuhn-Tucker points of the optimization problem. The proof relies on recent results about differential inclusions. In the context of unconstrained optimization, intelligible sufficient conditions are provided to ensure the stability of the algorithm. In this unconstrained case, we also provide a central limit theorem that governs the asymptotic fluctuations of the estimate. We illustrate our results in the case of distributed power allocation for ad hoc wireless networks.
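The two-step structure described in the abstract (a local stochastic gradient step at each agent followed by a gossip averaging step toward consensus) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the quadratic local utilities, the noise model, the 1/t step-size schedule, and the doubly stochastic ring gossip matrix are all assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch of a distributed stochastic gradient + gossip scheme.
# Illustrative only: the local utilities, noise, step sizes and gossip
# matrix below are assumptions, not the paper's setting.

rng = np.random.default_rng(0)
n_agents, dim, n_iter = 5, 3, 5000

# Hypothetical local utilities f_i(x) = 0.5 * ||x - a_i||^2,
# so the global objective (their sum) is minimized at mean(a_i).
targets = rng.normal(size=(n_agents, dim))

# Doubly stochastic gossip matrix for a ring of agents (assumption).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))          # one local estimate per agent
for t in range(1, n_iter + 1):
    gamma = 1.0 / t                    # decreasing Robbins-Monro step size
    # 1) local stochastic gradient step: noisy gradient of each f_i
    noisy_grad = (x - targets) + 0.1 * rng.normal(size=x.shape)
    x_half = x - gamma * noisy_grad
    # 2) gossip step: each agent averages with its neighbours
    x = W @ x_half

print("agents' estimates (near consensus):\n", x)
print("minimizer of the sum of utilities:", targets.mean(axis=0))
```

In this toy setting the agents' estimates agree asymptotically and approach the minimizer of the sum of the local utilities, mirroring the two claims (consensus and convergence) stated in the abstract.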