Internet publication
Rethinking visual prompting for multimodal large language models with external knowledge
- Abstract:
- In recent years, multimodal large language models (MLLMs) have made significant strides by training on vast, high-quality image-text datasets, enabling them to understand images well in general. However, the inherent difficulty of explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs, limiting their ability to answer questions that require an understanding of detailed or localized visual elements. Drawing inspiration from the Retrieval-Augmented Generation (RAG) concept, this paper proposes a new visual prompting approach to integrate fine-grained external knowledge, gleaned from specialized vision models (e.g., instance segmentation/OCR models), into MLLMs. This is a promising yet underexplored direction for enhancing MLLMs' performance. Our approach diverges from concurrent works, which transform external knowledge into additional text prompts, requiring the model to indirectly learn the correspondence between visual content and text coordinates. Instead, we propose embedding fine-grained knowledge directly into a spatial embedding map that serves as a visual prompt. This design can be effortlessly incorporated into various MLLMs, such as LLaVA and Mipha, considerably improving their visual understanding performance. Through rigorous experiments, we demonstrate that our method can enhance MLLM performance across nine benchmarks, strengthening their fine-grained, context-aware capabilities.
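- To make the core idea concrete: the abstract describes rasterizing the outputs of specialized vision models (e.g., instance masks or OCR boxes) into a spatial embedding map that is fused with the MLLM's visual features, rather than serializing them as text coordinates. The snippet below is only a minimal PyTorch sketch of that idea under assumed details; the `SpatialKnowledgeEmbedder` name, the 24×24 patch grid, the per-label embedding table, and the simple additive fusion are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SpatialKnowledgeEmbedder(nn.Module):
    """Hypothetical sketch: turn a dense label map from an external vision
    model into a spatial embedding map and add it to the MLLM's patch
    features as a visual prompt."""

    def __init__(self, num_labels: int, embed_dim: int, patch_grid: int = 24):
        super().__init__()
        # One learnable embedding per label produced by the vision expert; 0 = background.
        self.label_embed = nn.Embedding(num_labels + 1, embed_dim, padding_idx=0)
        self.patch_grid = patch_grid  # e.g. 24x24 patches for a 336px ViT-L/14 encoder

    def forward(self, label_map: torch.Tensor, patch_features: torch.Tensor) -> torch.Tensor:
        # label_map: (B, H, W) integer map from a segmentation/OCR model.
        # patch_features: (B, patch_grid * patch_grid, embed_dim) from the vision encoder.
        coarse = nn.functional.interpolate(
            label_map.unsqueeze(1).float(),
            size=(self.patch_grid, self.patch_grid),
            mode="nearest",  # nearest-neighbour keeps discrete label ids intact
        ).long().squeeze(1)                              # (B, grid, grid)
        prompt = self.label_embed(coarse)                # (B, grid, grid, embed_dim)
        prompt = prompt.flatten(1, 2)                    # (B, grid * grid, embed_dim)
        # Fuse by simple addition; the paper may use a different fusion scheme.
        return patch_features + prompt
```

- Because the prompt lives on the same token grid as the patch features, the language model receives localized cues directly instead of having to map textual coordinates back onto image regions, which is the contrast the abstract draws with text-prompt-based approaches.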
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
Access Document
- Files:
- Version of record (PDF, 23.9 MB)
- Publisher copy:
- 10.48550/arXiv.2407.04681
Authors
- Host title:
- arXiv
- Publication date:
- 2024-07-05
- DOI:
- 10.48550/arXiv.2407.04681
- Language:
- English
- Pubs id:
- 2036946
- Local pid:
- pubs:2036946
- Deposit date:
- 2024-10-07
Terms of use
- Copyright holder:
- Lin et al.
- Copyright date:
- 2024
- Rights statement:
- © 2024 The Authors.