Conference item
Chained generalisation bounds
- Abstract:
- This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique. By developing a general theoretical framework, we establish a duality between generalisation bounds based on the regularity of the loss function, and their chained counterparts, which can be obtained by lifting the regularity assumption from the loss onto its gradient. This allows us to re-derive the chaining mutual information bound from the literature, and to obtain novel chained information-theoretic generalisation bounds, based on the Wasserstein distance and other probability metrics. We show on some toy examples that the chained generalisation bound can be significantly tighter than its standard counterpart, particularly when the distribution of the hypotheses selected by the algorithm is very concentrated.
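For context on the abstract, here is a minimal LaTeX sketch of the standard mutual-information generalisation bound (Xu & Raginsky, 2017) that the chained bounds described above refine. The symbols W, S, n, and sigma are illustrative notation, not taken from this record.

% A minimal sketch, assuming the loss \ell(w, Z) is \sigma-subgaussian
% for every hypothesis w; W denotes the algorithm's output on the
% n-sample training set S. This is the standard (unchained) bound that
% the chained bounds refine.
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(W, S)]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)} .
\]
% Chaining lifts the regularity assumption from the loss onto its
% gradient and replaces I(W; S) by a sum of mutual-information terms
% between S and successively finer quantisations of W, which can be
% much smaller when the distribution of W is concentrated.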
- Publication status:
- Published
- Peer review status:
- Peer reviewed
Access Document
- Files:
- Version of record (PDF, 527.2 KB)
- Publication website:
- https://proceedings.mlr.press/v178/clerico22a.html
Authors
- E. Clerico, A. Shidani, G. Deligiannidis, A. Doucet
- Publisher:
- Proceedings of Machine Learning Research
- Pages:
- 4212-4257
- Series:
- Proceedings of Machine Learning Research
- Series number:
- 178
- Publication date:
- 2022-06-28
- Acceptance date:
- 2022-05-07
- Event title:
- 35th Annual Conference on Learning Theory (COLT 2022)
- Event location:
- London
- Event website:
- http://learningtheory.org/colt2022/
- Event start date:
- 2022-07-02
- Event end date:
- 2022-07-05
- ISSN:
- 2640-3498
- Language:
- English
- Pubs id:
- 1282361
- Local pid:
- pubs:1282361
- Deposit date:
- 2022-10-11
Terms of use
- Copyright holder:
- Clerico et al.
- Copyright date:
- 2022
- Rights statement:
- © 2022 E. Clerico, A. Shidani, G. Deligiannidis & A. Doucet.