Conference item

Do deep generative models know what they don't know?

Abstract:

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed as robust to such mistaken confidence, as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we c...

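The detection recipe the abstract alludes to, scoring each input with the model's log-density and flagging low-scoring inputs as out-of-distribution, can be sketched in a few lines. The snippet below is an illustration only, not code from the paper: the DiagonalGaussianDensity class, the ood_flags helper, and the threshold choice are invented stand-ins, and a real detector would plug in a deep generative model (flow, VAE, or PixelCNN) wherever log_prob is called.

    # Minimal sketch of density-thresholding OOD detection (illustrative names only).
    import numpy as np

    class DiagonalGaussianDensity:
        """Stand-in density model; the paper's setting would use a deep generative model."""
        def fit(self, x):
            self.mean = x.mean(axis=0)
            self.var = x.var(axis=0) + 1e-6  # small floor for numerical stability
            return self

        def log_prob(self, x):
            # Log-density of each row under an independent Gaussian per feature.
            return -0.5 * np.sum(
                np.log(2 * np.pi * self.var) + (x - self.mean) ** 2 / self.var, axis=1
            )

    def ood_flags(model, x, threshold):
        """Flag inputs whose log-density falls below a threshold set on held-out in-distribution data."""
        return model.log_prob(x) < threshold

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(5000, 16))      # in-distribution training data
    held_out = rng.normal(0.0, 1.0, size=(1000, 16))   # in-distribution validation data
    shifted = rng.normal(3.0, 1.0, size=(1000, 16))    # shifted, out-of-distribution data

    model = DiagonalGaussianDensity().fit(train)
    threshold = np.percentile(model.log_prob(held_out), 5)  # accept ~95% of in-distribution data

    print("in-distribution flagged:", ood_flags(model, held_out, threshold).mean())
    print("out-of-distribution flagged:", ood_flags(model, shifted, threshold).mean())

Choosing the threshold as a low percentile of held-out in-distribution log-densities is one common convention; the paper's finding is that, for deep generative models on image data, the resulting scores can fail to separate in-distribution from out-of-distribution inputs.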
Publication status: Published
Peer review status: Peer reviewed
Version: Accepted Manuscript

Authors:
Nalisnick, E
Matsukawa, A
Institution: University of Oxford
Division: MPLS Division
Department: Statistics
Oxford college: University College
ORCID: 0000-0001-5365-6933
Lakshminarayanan, B
Publication date: 2019-05-06
Acceptance date: 2018-12-21
Pubs id: pubs:936925
URN: uri:1e34f829-8de6-4fc9-954d-089b0f4950f0
UUID: uuid:1e34f829-8de6-4fc9-954d-089b0f4950f0
Local pid: pubs:936925
