Internet publication
Training-free layout control with cross-attention guidance
- Abstract:
- Recent diffusion-based generators can produce high-quality images from textual prompts. However, they often disregard textual instructions that specify the spatial layout of the composition. We propose a simple approach that achieves robust layout control without training or fine-tuning the image generator. Our technique manipulates the cross-attention layers through which the model interfaces textual and visual information, steering the generation toward a desired layout, e.g., one specified by the user. To determine how best to guide attention, we study the role of attention maps and explore two alternative strategies, forward and backward guidance. We thoroughly evaluate our approach on three benchmarks and provide several qualitative examples and a comparative analysis of the two strategies, demonstrating the superiority of backward guidance over both forward guidance and prior work. We further demonstrate the versatility of layout guidance by extending it to applications such as editing the layout and context of real images.
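The backward-guidance idea summarized in the abstract can be sketched in a toy, self-contained form: define an energy that is low when a token's cross-attention mass falls inside a user-specified region, then descend that energy. This is only an illustrative sketch with hypothetical names (`layout_energy`, `backward_guidance`); the paper's method backpropagates such an energy through a diffusion model's cross-attention layers to update the latent, whereas here, for a runnable example, gradients are taken directly on a standalone attention map via finite differences.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def layout_energy(attn, mask):
    """Energy that is low when the token's attention mass lies inside `mask`.

    attn: (HW,) attention weights for one text token (sums to 1).
    mask: (HW,) binary map of the region where the token should attend.
    """
    inside = (attn * mask).sum()
    return (1.0 - inside) ** 2

def backward_guidance(logits, mask, lr=0.5, steps=50, eps=1e-4):
    """Toy backward guidance: gradient descent on attention logits,
    using finite-difference gradients of the layout energy."""
    logits = logits.copy()
    for _ in range(steps):
        base = layout_energy(softmax(logits), mask)
        grad = np.zeros_like(logits)
        for i in range(logits.size):
            bumped = logits.copy()
            bumped[i] += eps
            grad[i] = (layout_energy(softmax(bumped), mask) - base) / eps
        logits -= lr * grad
    return logits

rng = np.random.default_rng(0)
logits = rng.normal(size=16)            # attention logits over a 4x4 grid
mask = np.zeros(16)
mask[:4] = 1.0                          # target region: the top row

before = layout_energy(softmax(logits), mask)
after = layout_energy(softmax(backward_guidance(logits, mask)), mask)
print(before > after)                   # energy drops as attention shifts into the region
```

Forward guidance, by contrast, would overwrite the attention maps directly with a layout-shaped bias rather than optimizing an energy; the paper's comparison favors the optimization-based (backward) route.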
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
Access Document
- Files:
- Preview, Version of record, pdf, 29.0 MB, Terms of use
- Publisher copy:
- 10.48550/arxiv.2304.03373
Authors
- Host title:
- arXiv
- Publication date:
- 2023-04-06
- DOI:
- 10.48550/arxiv.2304.03373
- Language:
- English
- Pubs id:
- 1771113
- Local pid:
- pubs:1771113
- Deposit date:
- 2024-11-20
Terms of use
- Copyright holder:
- Chen et al.
- Copyright date:
- 2023
- Rights statement:
- ©2023 The Authors. This paper is an open access article distributed under the terms of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/)
- Licence:
- CC Attribution (CC BY)