Internet publication
Energy-latency manipulation of multi-modal large language models via verbose samples
- Abstract:
- Despite the exceptional performance of multi-modal large language models (MLLMs), their deployment requires substantial computational resources. If malicious users induce high energy consumption and latency (energy-latency cost), the resulting resource exhaustion harms service availability. In this paper, we investigate this vulnerability for MLLMs, particularly image-based and video-based ones, and aim to induce high energy-latency cost during inference by crafting an imperceptible perturbation. We find that energy-latency cost can be manipulated by maximizing the length of generated sequences, which motivates us to propose verbose samples, including verbose images and videos. Concretely, two modality-non-specific losses are proposed: a loss that delays the end-of-sequence (EOS) token and an uncertainty loss that increases the uncertainty over each generated token. In addition, improving diversity encourages longer responses by increasing output complexity, which inspires the following modality-specific losses. For verbose images, a token diversity loss is proposed to promote diverse hidden states. For verbose videos, a frame feature diversity loss is proposed to increase feature diversity among frames. To balance these losses, we propose a temporal weight adjustment algorithm. Experiments demonstrate that our verbose samples can largely extend the length of generated sequences.
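The abstract does not give the exact loss formulations; a minimal sketch of the two modality-non-specific objectives, assuming the EOS-delay loss is the softmax probability assigned to the EOS token at a decoding step and the uncertainty loss is the negative entropy of the next-token distribution (both forms are illustrative assumptions, not the paper's definitions):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def eos_delay_loss(logits, eos_index):
    # Probability assigned to the EOS token at this decoding step;
    # minimizing it discourages the model from ending the sequence early.
    return softmax(logits)[eos_index]

def uncertainty_loss(logits):
    # Negative entropy of the next-token distribution; minimizing it
    # maximizes entropy, i.e. increases uncertainty over each token.
    probs = softmax(logits)
    return sum(p * math.log(p) for p in probs if p > 0)
```

In an attack loop, a weighted sum of such per-step losses (with weights adjusted over optimization steps, as the temporal weight adjustment algorithm suggests) would be backpropagated to the input perturbation.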
- Publication status:
- Published
- Peer review status:
- Not peer reviewed
- Publisher copy:
- 10.48550/arXiv.2404.16557
Authors
- Host title:
- arXiv
- Publication date:
- 2024-04-25
- DOI:
- 10.48550/arXiv.2404.16557
- Language:
- English
- Pubs id:
- 2005416
- Local pid:
- pubs:2005416
- Deposit date:
- 2024-06-07
Terms of use
- Copyright holder:
- Gao et al.
- Copyright date:
- 2024
- Rights statement:
- © 2024 The Author(s).