Internet publication
HoloDiffusion: training a 3D diffusion model using 2D images
- Abstract:
- Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not billions of images with a stable learning objective. However, extending these models to 3D remains difficult for two reasons. First, finding a large quantity of 3D training data is much more complex than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in memory and compute complexity makes this infeasible. We address the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision; and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset which has not been used to train 3D generative models before. We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
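The abstract describes a diffusion model over a 3D representation that is trained end to end with only posed 2D images for supervision, via an image formation (rendering) model. The toy PyTorch sketch below illustrates that supervision pattern only: a 3D feature volume is noised, a denoiser predicts the clean volume, the prediction is rendered to the known camera views, and a photometric loss against the ground-truth images drives training. Every name and shape here (the `Denoiser3D` module, the `render_volume` projection, the linear noising schedule, the toy tensors) is an assumption for illustration, not the authors' architecture or the CO3D pipeline.

```python
# Minimal, illustrative sketch (NOT the HoloDiffusion code) of training a 3D
# denoiser with only posed 2D images as supervision, through a differentiable
# rendering step. All modules and shapes are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Denoiser3D(nn.Module):
    """Placeholder 3D convolutional denoiser over a small feature volume."""
    def __init__(self, channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, channels, 3, padding=1),
        )

    def forward(self, v_noisy: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # A real denoiser would condition on the timestep t; omitted for brevity.
        return self.net(v_noisy)


def render_volume(volume: torch.Tensor, cameras: torch.Tensor) -> torch.Tensor:
    """Placeholder 'renderer': ignores camera geometry, averages the volume
    along depth and treats the first 3 channels as RGB, repeated per view."""
    b, c, d, h, w = volume.shape
    projected = volume.mean(dim=2)            # (B, C, H, W)
    rgb = projected[:, :3]                    # pretend channels 0..2 are color
    n_views = cameras.shape[1]                # cameras only set the view count here
    return rgb.unsqueeze(1).expand(b, n_views, 3, h, w)


def training_step(denoiser, volume, images, cameras, optimizer):
    """One diffusion training step supervised only by posed 2D images."""
    t = torch.rand(volume.shape[0], device=volume.device)   # noise level in [0, 1]
    noise = torch.randn_like(volume)
    sigma = t.view(-1, 1, 1, 1, 1)
    v_noisy = (1 - sigma) * volume + sigma * noise           # simple linear noising

    v_denoised = denoiser(v_noisy, t)                        # predict clean volume
    renders = render_volume(v_denoised, cameras)             # differentiable rendering
    loss = F.mse_loss(renders, images)                       # photometric supervision

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    denoiser = Denoiser3D()
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
    volume = torch.randn(2, 8, 16, 16, 16)    # toy (B, C, D, H, W) feature volume
    cameras = torch.randn(2, 4, 4, 4)         # toy per-view camera matrices
    images = torch.rand(2, 4, 3, 16, 16)      # posed ground-truth views (B, V, 3, H, W)
    print("loss:", training_step(denoiser, volume, images, cameras, optimizer))
```

In the full method the clean volume would itself be built from posed input views rather than supplied directly; here a random tensor stands in for it so the supervision path (noise, denoise, render, photometric loss) stays visible.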
Access Document
- Files:
- Version of record (PDF, 30.0 MB)
- Publisher copy:
- 10.48550/arXiv.2303.16509
Authors
- Host title:
- arXiv
- Publication date:
- 2023-03-29
- DOI:
- 10.48550/arXiv.2303.16509
- Language:
- English
- Pubs id:
- 1771115
- Local pid:
- pubs:1771115
- Deposit date:
- 2024-11-20
Terms of use
- Copyright holder:
- Karnewar et al.
- Copyright date:
- 2023
- Rights statement:
- ©2023 The Authors
- Licence:
- Other