Conference item

Active Policy Learning for Robot Planning and Exploration under Uncertainty

Abstract:

This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially observed sequential decision processes. The algorithm is tested in the domain of robot navigation and exploration under uncertainty. In this setting, the expected cost, which must be minimized, is a function of the belief state (the filtering distribution). The filtering distribution is in turn nonlinear and depends on an observation model with discontinuities. These discontinuities arise because ...
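
The abstract centres on the interaction between the filtering distribution and a discontinuous observation model. As a rough illustration only (not the authors' code, and with the range-limited sensor purely an assumed example of where such a discontinuity might come from), the sketch below shows a particle-filter weight update whose likelihood has a hard visibility cutoff; any expected cost computed from this belief then inherits that discontinuity with respect to the poses a candidate policy drives the robot through.

```python
# Hypothetical sketch: a particle-filter belief update with a range-limited
# sensor. The hard visibility cutoff is an assumed example of the kind of
# discontinuity the abstract refers to.
import numpy as np

def observation_likelihood(particle, landmark, z, max_range=5.0, noise=0.5):
    """Likelihood of a range measurement z for one particle hypothesis.

    The sensor only returns a reading when the landmark lies within
    max_range; outside that radius the likelihood jumps, making the
    updated belief discontinuous in the robot pose.
    """
    dist = np.linalg.norm(particle[:2] - landmark)
    if dist > max_range:                  # landmark not visible from this pose
        return 1e-12 if z is not None else 1.0
    if z is None:                         # visible but no reading was observed
        return 1e-12
    return np.exp(-0.5 * ((z - dist) / noise) ** 2)

def belief_update(particles, weights, landmark, z):
    """One importance-weighting step of the filtering distribution."""
    w = np.array([observation_likelihood(p, landmark, z) for p in particles])
    w = weights * w
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, size=(100, 3))   # x, y, heading
    weights = np.full(100, 1.0 / 100)
    landmark = np.array([2.0, 1.0])
    new_weights = belief_update(particles, weights, landmark, z=2.2)
```

Under this reading, the expected cost of a policy would be estimated by Monte Carlo simulation: roll out the policy, update the belief at each step, and accumulate a cost such as the uncertainty of the final belief, which is why the abstract describes the objective as non-differentiable and expensive to simulate.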



Authors


Ruben Martinez-Cantin
Nando de Freitas
Arnaud Doucet
Jose Castellanos
Publisher:
Atlanta, GA, USA
Publication date:
2007-06-01
URN:
uuid:0ecfa6b4-a9c5-4730-8338-815fdefc8b79
Local pid:
cs:7481
