
Journal article

Policy gradient methods find the Nash equilibrium in N-player general-sum linear-quadratic games

Abstract:
We consider a general-sum N-player linear-quadratic game with stochastic dynamics over a finite horizon and prove the global convergence of the natural policy gradient method to the Nash equilibrium. To prove convergence of the method we require a certain amount of noise in the system, and we give a condition, essentially a lower bound on the covariance of the noise in terms of the model parameters, that guarantees convergence. We illustrate our results with numerical experiments showing that even in situations where the policy gradient method may not converge in the deterministic setting, the addition of noise leads to convergence.
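As a minimal, hypothetical illustration of the method described in the abstract, the sketch below runs natural policy gradient on a single-player scalar finite-horizon linear-quadratic problem (not the paper's N-player game; all parameter values are made up), and checks the learned gains against the exact Riccati solution. In this single-player exact-gradient setting the additive noise only shifts the cost by a constant, so it plays no role here; the paper's noise condition matters in the multi-player game.

```python
import numpy as np

# Hypothetical single-player scalar analogue (not the paper's N-player game):
# dynamics x_{t+1} = a*x_t + b*u_t + w_t, stage cost q*x_t^2 + r*u_t^2,
# terminal cost q*x_T^2, linear feedback policy u_t = -k_t * x_t.
a, b, q, r, T = 0.9, 0.5, 1.0, 1.0, 5

def riccati_gains():
    """Exact optimal time-varying gains from the backward Riccati recursion."""
    p = q  # terminal cost-to-go coefficient
    gains = []
    for _ in range(T):
        k = a * b * p / (r + b * b * p)
        gains.append(k)
        p = q + k * k * r + (a - b * k) ** 2 * p
    return np.array(gains[::-1])  # gains[t] for t = 0, ..., T-1

def natural_pg(iters=500, lr=0.2):
    """Natural policy gradient with exact policy evaluation."""
    k = np.zeros(T)  # initial policy
    for _ in range(iters):
        # Policy evaluation: cost-to-go coefficients p[t] under the current
        # gains (additive noise only adds a constant to the cost here).
        p = np.empty(T + 1)
        p[T] = q
        for t in reversed(range(T)):
            p[t] = q + k[t] ** 2 * r + (a - b * k[t]) ** 2 * p[t + 1]
        # Natural gradient direction: preconditioning by the state covariance
        # cancels it, leaving e_t = (r + b^2 p_{t+1}) k_t - a*b*p_{t+1}.
        e = (r + b * b * p[1:]) * k - a * b * p[1:]
        k = k - lr * e
    return k
```

With exact gradients the learned gains coincide with the Riccati gains; the paper's contribution is proving an analogous convergence for the N-player general-sum game, where sufficient noise in the dynamics is what makes the argument go through.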
Publication status:
Published
Peer review status:
Peer reviewed

Publication website:
http://jmlr.org/papers/v24/21-0842.html

Authors


Institution:
University of Oxford
Division:
MPLS
Department:
Mathematical Institute
Role:
Author
ORCID:
0000-0003-0086-0695


Publisher:
Journal of Machine Learning Research
Journal:
Journal of Machine Learning Research
Volume:
24
Issue:
139
Pages:
1-56
Article number:
21-0842
Publication date:
2023-04-01
Acceptance date:
2023-03-24
EISSN:
1533-7928
ISSN:
1532-4435


Language:
English
Pubs id:
1334894
Local pid:
pubs:1334894
Deposit date:
2023-03-29
