Internet publication
Learning feed-forward one-shot learners
- Abstract:
- One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
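The abstract above describes the learnet construction only at a high level: a second network ingests one exemplar and emits the parameters of a pupil network, with the pupil's parameters factorized to keep the prediction tractable. Below is a minimal, illustrative sketch of that idea, assuming PyTorch; the class name, layer sizes, and the particular diagonal factorization shown are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch of a learnet with a factorized pupil layer (assumes PyTorch).
# All names and shapes here are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class FactorizedLearnet(nn.Module):
    """Predicts one factorized conv layer of a 'pupil' network from a single exemplar.

    Rather than predicting a full C_out x C_in x k x k filter bank (far too many
    outputs for one forward pass), the learnet predicts only a per-channel
    diagonal factor m, and the pupil layer is computed as B * diag(m) * A,
    where the projections A and B are exemplar-independent and learned
    end-to-end, in the spirit of the factorizations the abstract mentions.
    """

    def __init__(self, channels=64, k=3, embed_dim=128):
        super().__init__()
        # Embed the single labelled exemplar z into a compact vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        # Exemplar-specific part: one predicted scale per channel.
        self.to_diag = nn.Linear(embed_dim, channels)
        # Exemplar-independent projections A and B, shared across episodes.
        self.A = nn.Conv2d(channels, channels, k, padding=k // 2)
        self.B = nn.Conv2d(channels, channels, 1)

    def forward(self, exemplar, x_features):
        # exemplar:   (N, 3, H, W)           one labelled example per episode
        # x_features: (N, channels, h, w)    pupil activations to be filtered
        m = self.to_diag(self.encoder(exemplar))   # (N, channels)
        h = self.A(x_features)                     # fixed projection
        h = h * m[:, :, None, None]                # exemplar-specific diagonal
        return self.B(h)                           # fixed projection back

# Usage: one exemplar per episode conditions the pupil's filtering of x.
net = FactorizedLearnet()
z = torch.randn(2, 3, 64, 64)       # exemplars
x = torch.randn(2, 64, 32, 32)      # pupil feature maps
out = net(z, x)                     # (2, 64, 32, 32)
```

In a learning-to-learn setup, a module like this would be trained end-to-end by minimizing a one-shot classification objective over many episodes, so that the predicted diagonal factor adapts the pupil to each new exemplar in a single feed-forward pass.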
Access Document
- Files:
- Version of record (PDF, 689.8KB)
- Publisher copy:
- 10.48550/arXiv.1606.05233
Authors
- Bertinetto, L; Henriques, JF; Valmadre, J; Torr, PHS; Vedaldi, A
- Host title:
- arXiv
- Publication date:
- 2016-06-16
- DOI:
- 10.48550/arXiv.1606.05233
- EISSN:
- 2331-8422
- Language:
- English
- Pubs id:
- 1771203
- Local pid:
- pubs:1771203
- Deposit date:
- 2024-12-11
Terms of use
- Copyright holder:
- Bertinetto et al.
- Copyright date:
- 2016
- Rights statement:
- ©2016 The Authors
- Licence:
- Other