Conference item

Do deep generative models know what they don't know?

Abstract:

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we c...
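As a rough illustration of the density-based detection idea the abstract refers to (not the paper's own code), the per-example log-likelihood from a trained generative model can be thresholded to flag out-of-distribution inputs. This is a minimal sketch; the `model.log_prob` accessor, the validation split, and the 5th-percentile threshold are all assumptions for illustration.

```python
import numpy as np

def flag_ood(log_likelihoods: np.ndarray, threshold: float) -> np.ndarray:
    """Flag inputs whose log-likelihood under a trained generative model
    falls below a threshold chosen on held-out in-distribution data."""
    return log_likelihoods < threshold

# Hypothetical usage, assuming `model.log_prob(x)` returns per-example
# log p(x) under a density model such as a normalizing flow:
#   threshold = np.percentile(model.log_prob(validation_inputs), 5)
#   is_ood = flag_ood(model.log_prob(test_inputs), threshold)
```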

Publication status:
Published
Peer review status:
Peer reviewed

Authors


Institution:
University of Oxford
Division:
MPLS
Department:
Statistics
Oxford college:
University College
Role:
Author
ORCID:
0000-0001-5365-6933
Journal:
International Conference on Learning Representations
Host title:
International Conference on Learning Representations
Publication date:
2019-05-06
Acceptance date:
2018-12-21
Event location:
New Orleans, USA
Source identifiers:
936925
Pubs id:
pubs:936925
UUID:
uuid:1e34f829-8de6-4fc9-954d-089b0f4950f0
Local pid:
pubs:936925
Deposit date:
2019-02-06
