Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting

Tianyidan Xie1, Rui Ma2, Qian Wang*3,
Xiaoqian Ye3, Feixuan Liu4,
Ying Tai1, Zhenyu Zhang1, and Zili Yi*1
1Nanjing University 2Jilin University
3China Mobile Communications Group
4Larkagent AI
*Corresponding Authors
[Teaser figure]

Our approach enables any object to be placed in diverse, suitable locations.

Abstract

Recent advances in image inpainting, particularly through diffusion modeling, have yielded promising results. However, when tasked with completing an image conditioned on a given foreground object, current end-to-end inpainting methods encounter challenges such as “over-imagination”, inconsistency between foreground and background, and limited diversity. In response, we introduce Anywhere, a multi-agent framework designed to address these issues. Anywhere employs a pipeline of agents of different modalities, including a visual language model (VLM), a large language model (LLM), and image generation models, organized into three principal components: the prompt generation module, the image generation module, and the outcome analyzer. The prompt generation module conducts a semantic analysis of the input foreground image, leveraging the VLM to predict relevant language descriptions and the LLM to recommend optimal language prompts. In the image generation module, a text-guided canny-to-image generation model creates a template image from the edge map of the foreground image and the language prompts, and an image refiner produces the outcome by blending the input foreground with the template image. The outcome analyzer employs the VLM to evaluate image content rationality, aesthetic score, and foreground-background relevance, triggering prompt and image regeneration as needed. Extensive experiments demonstrate that our Anywhere framework excels in foreground-conditioned image inpainting, mitigating “over-imagination”, resolving foreground-background discrepancies, and enhancing diversity. It successfully elevates foreground-conditioned image inpainting to produce more reliable and diverse results.

Method

Anywhere is a multi-agent image generation framework comprising agents of various modalities, including a large language model, a visual language model, a controlled image generation model, and an inpainting model. Its workflow encompasses three modules: the prompt generation module, the image generation module, and the outcome analyzer, as illustrated in the figure below. Anywhere achieves background generation by passing images through these modules, each utilizing different agents.

[Figure: overview of the Anywhere pipeline]
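At a high level, the workflow is a generate-and-verify loop: propose a prompt, synthesize and refine an image, then let the analyzer either accept the result or trigger regeneration. The following minimal Python sketch illustrates that control flow only; the agent callables, the edge extractor, and the round limit and acceptance threshold are hypothetical stand-ins, not the paper's actual interfaces.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Feedback:
    rationality: float  # is the generated scene content plausible?
    aesthetics: float   # overall visual quality score
    relevance: float    # foreground-background consistency

def run_anywhere(
    foreground,
    describe: Callable,        # VLM: foreground image -> text description
    suggest_prompt: Callable,  # LLM: description -> background prompt
    extract_edges: Callable,   # foreground image -> canny edge map
    canny_to_image: Callable,  # (edge map, prompt) -> template image
    refine: Callable,          # (foreground, template) -> blended result
    analyze: Callable,         # VLM: result -> Feedback
    max_rounds: int = 3,       # assumed retry budget, for illustration
    threshold: float = 0.5,    # assumed acceptance threshold
):
    """Generate-and-verify loop over the three Anywhere modules (sketch)."""
    description = describe(foreground)               # prompt generation module
    edges = extract_edges(foreground)
    result = None
    for _ in range(max_rounds):
        prompt = suggest_prompt(description)
        template = canny_to_image(edges, prompt)     # image generation module
        result = refine(foreground, template)
        fb = analyze(result)                         # outcome analyzer
        if min(fb.rationality, fb.aesthetics, fb.relevance) >= threshold:
            break                                    # accept the outcome
        # otherwise the feedback triggers prompt and image regeneration
    return result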

Comparison with Previous Works

[Figure: qualitative comparison with previous works]

Comparison with Commercial Products

[Figure: comparison with commercial products]

The Influence of Different Modules in Our Anywhere Framework

[Figure: ablation of the prompt generation module]

The prompt generation module (PG). Without the prompt generation module, our system tends to produce less diverse results with uniform, empty backgrounds.

[Figure: ablation of the repainting agent]

The repainting agent (RA). The red circles highlight regions exhibiting “over-imagination”. As shown, the repainting agent helps mitigate the “over-imagination” issue.

[Figure: ablation of the outcome analyzer]

The outcome analyzer. As shown, feedback-based regeneration significantly improves the quality of the final outcomes. Rows 1 and 4 show view inconsistency without the feedback loop; rows 2, 3, and 6 show foreground-background irrelevance without the feedback loop; row 5 shows content irrationality, due to erroneous relative size, without the regeneration mechanism.
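These failure modes correspond to the analyzer's three criteria (content rationality, aesthetics, and foreground-background relevance). Below is a hedged sketch of how such feedback could be mapped to regeneration decisions, reusing the Feedback type from the earlier sketch; the thresholds, action names, and the criterion-to-failure mapping are illustrative assumptions, not the paper's exact policy.

def regeneration_action(fb: Feedback, threshold: float = 0.5) -> str:
    """Map analyzer feedback to a regeneration decision (illustrative only)."""
    if fb.relevance < threshold:
        # foreground-background irrelevance (e.g. rows 2, 3, and 6)
        return "regenerate prompt and image"
    if fb.rationality < threshold:
        # content irrationality, e.g. erroneous relative size (row 5)
        return "regenerate prompt and image"
    if fb.aesthetics < threshold:
        # quality defects such as view inconsistency (rows 1 and 4)
        return "regenerate image"
    return "accept"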

More Comparisons

[Figures: additional qualitative comparisons]

BibTeX

@misc{xie2024anywhere,
      title={Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting}, 
      author={Tianyidan Xie and Rui Ma and Qian Wang and Xiaoqian Ye and Feixuan Liu and Ying Tai and Zhenyu Zhang and Zili Yi},
      year={2024},
      eprint={2404.18598},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}