Internet publication
Free3D: consistent novel view synthesis without 3D representation
- Abstract:
- We introduce Free3D, a simple, accurate method for monocular open-set novel view synthesis (NVS). Similar to Zero-1-to-3, we start from a pre-trained 2D image generator for generalization and fine-tune it for NVS. Compared to other works that took a similar approach, we obtain significant improvements without resorting to an explicit 3D representation, which is slow and memory-consuming, and without training an additional network for 3D reconstruction. Our key contribution is to improve the way the target camera pose is encoded in the network, which we do by introducing a new ray conditioning normalization (RCN) layer. The latter injects pose information into the underlying 2D image generator by telling each pixel its viewing direction. We further improve multi-view consistency by using lightweight multi-view attention layers and by sharing generation noise between the different views. We train Free3D on the Objaverse dataset and demonstrate excellent generalization to new categories in new datasets, including OmniObject3D and GSO. The project page is available at https://chuanxiaz.com/free3d/.
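To make the abstract's description of the RCN layer concrete, below is a minimal PyTorch sketch of one plausible realization: per-pixel scale and shift parameters, predicted from a ray embedding of the target camera (e.g. Plücker coordinates), modulate group-normalized features in a FiLM-style fashion. All names, dimensions, and the exact modulation scheme are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class RayConditioningNorm(nn.Module):
    """Hypothetical sketch of a ray conditioning normalization (RCN) layer.

    Assumption: a per-pixel ray embedding of the target camera is mapped by
    a small MLP to a per-pixel scale and shift that modulate group-normalized
    features, so every pixel is conditioned on its own viewing direction.
    """

    def __init__(self, channels: int, ray_dim: int = 6, hidden: int = 128):
        super().__init__()
        # Normalize without learned affine terms; the ray MLP supplies the
        # (spatially varying) scale and shift instead.
        self.norm = nn.GroupNorm(num_groups=32, num_channels=channels, affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(ray_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, x: torch.Tensor, rays: torch.Tensor) -> torch.Tensor:
        # x:    (B, C, H, W) feature map inside the 2D generator
        # rays: (B, H, W, ray_dim) per-pixel ray embedding of the target view
        scale, shift = self.mlp(rays).chunk(2, dim=-1)  # each (B, H, W, C)
        scale = scale.permute(0, 3, 1, 2)               # -> (B, C, H, W)
        shift = shift.permute(0, 3, 1, 2)
        return self.norm(x) * (1.0 + scale) + shift


# Hypothetical usage: modulate a 256-channel feature map at 32x32 resolution.
if __name__ == "__main__":
    rcn = RayConditioningNorm(channels=256)
    feats = torch.randn(2, 256, 32, 32)
    rays = torch.randn(2, 32, 32, 6)
    print(rcn(feats, rays).shape)  # torch.Size([2, 256, 32, 32])
```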
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
Access Document
- Files:
- Version of record (PDF, 5.9 MB)
- Publisher copy:
- 10.48550/arXiv.2312.04551
Authors
- Chuanxia Zheng
- Andrea Vedaldi
- Host title:
- arXiv
- Publication date:
- 2023-12-07
- DOI:
- 10.48550/arXiv.2312.04551
- Language:
- English
- Pubs id:
- 1771100
- Local pid:
- pubs:1771100
- Deposit date:
- 2024-09-05
Terms of use
- Copyright holder:
- Zheng and Vedaldi
- Copyright date:
- 2023
- Rights statement:
- ©2023 The Authors. This paper is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0) license (https://creativecommons.org/licenses/by-nc-sa/4.0/)