Conference item
Understanding in-context learning in transformers and LLMs by learning to learn discrete functions
- Abstract:
- In order to understand the in-context learning phenomenon, recent works have adopted a stylized experimental framework and demonstrated that Transformers can match the performance of gradient-based learning algorithms for various classes of real-valued functions. However, the limitations of Transformers in implementing learning algorithms, and their ability to learn other forms of algorithms are not well understood. Additionally, the degree to which these capabilities are confined to attention-based models is unclear. Furthermore, it remains to be seen whether the insights derived from these stylized settings can be extrapolated to pretrained Large Language Models (LLMs). In this work, we take a step towards answering these questions by demonstrating the following: (a) On a test-bed with a variety of Boolean function classes, we find that Transformers can nearly match the optimal learning algorithm for ‘simpler’ tasks, while their performance deteriorates on more ‘complex’ tasks. Additionally, we find that certain attention-free models perform (almost) identically to Transformers on a range of tasks. (b) When provided a teaching sequence, i.e. a set of examples that uniquely identifies a function in a class, we show that Transformers learn more sample-efficiently. Interestingly, our results show that Transformers can learn to implement two distinct algorithms to solve a single task, and can adaptively select the more sample-efficient algorithm depending on the sequence of in-context examples. (c) Lastly, we show that extant LLMs, e.g. LLaMA-2, GPT-4, can compete with nearest-neighbor baselines on prediction tasks that are guaranteed to not be in their training set.
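The abstract describes the in-context learning test-bed only at a high level. As a rough illustration, not taken from the paper, the sketch below shows one way such a task and baseline might look: a random conjunction over a few input bits as the target Boolean function, a sequence of labelled examples playing the role of the in-context prompt, and a Hamming-distance nearest-neighbour predictor of the kind the LLMs are compared against. All specifics (input dimension, function class, distance measure) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_examples = 8, 32

# Illustrative target: a conjunction over a random subset of the input bits
# (one example of a "simple" Boolean function class; details are assumed).
relevant = rng.choice(n_dims, size=3, replace=False)

def target(x):
    return int(x[relevant].all())

# In-context examples: (input, label) pairs that would form the prompt
# a sequence model is trained/asked to continue with the next label.
xs = rng.integers(0, 2, size=(n_examples, n_dims))
ys = np.array([target(x) for x in xs])

# Nearest-neighbour baseline: predict the label of the closest
# in-context example under Hamming distance.
def nearest_neighbor_predict(query, xs, ys):
    dists = (xs != query).sum(axis=1)
    return ys[dists.argmin()]

query = rng.integers(0, 2, size=n_dims)
print("NN prediction:", nearest_neighbor_predict(query, xs, ys),
      "ground truth:", target(query))
```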
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 2.2 MB)
- Publication website:
- https://openreview.net/forum?id=ekeyCgeRfC
- Publisher:
- OpenReview
- Host title:
- Proceedings of the 12th International Conference on Learning Representations (ICLR 2024)
- Publication date:
- 2024-03-15
- Acceptance date:
- 2024-01-16
- Event title:
- 12th International Conference on Learning Representations (ICLR 2024)
- Event location:
- Vienna, Austria
- Event website:
- https://iclr.cc/
- Event start date:
- 2024-05-07
- Event end date:
- 2024-05-11
- Language:
- English
- Pubs id:
- 2001852
- Local pid:
- pubs:2001852
- Deposit date:
- 2024-05-30
Terms of use
- Copyright date:
- 2024
- Notes:
- This paper was presented at the 12th International Conference on Learning Representations (ICLR 2024), 7th-11th May 2024, Vienna, Austria.
- Licence:
- CC Attribution (CC BY)