Conference item

Combining Policy Search with Planning in Multi-agent Cooperation

Abstract:
It is cooperation that essentially differentiates multi-agent systems (MASs) from single-agent intelligence. In realistic MAS applications such as RoboCup, repeated studies have shown that traditional machine learning (ML) approaches have difficulty mapping directly from cooperative behaviours to actuator outputs. To overcome this problem, vertical layered architectures are commonly used to break cooperation down into behavioural layers; ML has then been used to generate the low-level skills, and a planning mechanism is added on top to create high-level cooperation. We propose a novel method called Policy Search Planning (PSP), in which policy search is used to find an optimal policy for selecting plans from a plan pool. PSP extends an existing gradient-search method (GPOMDP) to the MAS domain. We demonstrate how PSP can be applied in RoboCup Simulation, and our experimental results demonstrate its robustness, adaptivity, and superior performance compared with other methods. © 2009 Springer Berlin Heidelberg.
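The abstract describes using a GPOMDP-style policy-gradient search to learn a policy that selects plans from a plan pool. The paper's own formulation is not reproduced in this record, but the general idea can be illustrated with a minimal sketch: a softmax policy over plan preferences, updated with a discounted eligibility trace of score functions, as in GPOMDP. The plan pool size, reward values, and hyperparameters below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy setting: a pool of 3 candidate plans, where plan k has an
# unknown expected reward mu[k]. The policy is a softmax over learnable
# preferences theta; a GPOMDP-style estimator accumulates an eligibility
# trace of score functions, discounted by beta, and scales it by the reward.
mu = np.array([0.2, 0.5, 0.9])   # assumed per-plan expected rewards
theta = np.zeros(3)              # policy parameters (plan preferences)
alpha, beta = 0.1, 0.9           # step size and trace-discount factor

for episode in range(2000):
    z = np.zeros(3)              # eligibility trace, reset each episode
    for t in range(5):           # short episode: pick plans, observe rewards
        p = softmax(theta)
        k = rng.choice(3, p=p)
        r = mu[k] + 0.1 * rng.standard_normal()
        grad_log = -p
        grad_log[k] += 1.0       # gradient of log pi(k | theta)
        z = beta * z + grad_log  # discounted eligibility trace (GPOMDP)
        theta += alpha * r * z   # stochastic gradient ascent on return

print(softmax(theta))            # mass should concentrate on the best plan
```

In the paper's setting the "reward" would come from match outcomes in RoboCup Simulation rather than a fixed vector, but the update rule has the same shape: score-function traces weighted by observed reward.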
Publication status:
Published

Publisher copy:
10.1007/978-3-642-02921-9_46

Host title:
RoboCup 2008: Robot Soccer World Cup XII
Volume:
5399
Pages:
532-543
Publication date:
2009-01-01
DOI:
10.1007/978-3-642-02921-9_46
EISSN:
1611-3349
ISSN:
0302-9743
ISBN:
9783642029202


Pubs id:
pubs:58045
UUID:
uuid:217ce326-dce5-4423-adbb-946bd20e04dc
Local pid:
pubs:58045
Source identifiers:
58045
Deposit date:
2012-12-19
