InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition

[Figure: Overall framework]

Abstract

We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) Interleaved Text-Image Composition: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. Simply provide a title, and our system will generate the corresponding manuscript. It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates. 2) Comprehension with Rich Multilingual Knowledge: The text-image comprehension is empowered by training on extensive multi-modal multilingual concepts with carefully crafted strategies, resulting in a deep understanding of visual content. 3) State-of-the-art Performance: Our model consistently achieves state-of-the-art results across various mainstream benchmarks for vision-language foundational models, including MME Benchmark, MMBench, MMBench-CN, SEED-Bench, and CCBench (Chinese Cultural Benchmark). Collectively, InternLM-XComposer seamlessly blends advanced text-image comprehension and composition, revolutionizing vision-language interaction and offering new insights and opportunities.
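
For reference, below is a minimal sketch of how a released checkpoint of this kind can be queried through Hugging Face Transformers. The repository id `internlm/internlm-xcomposer-7b` and the `generate`/`chat` entry points are assumptions drawn from common release conventions for models loaded with custom code, not a confirmed API; please consult the official model card before use.

```python
# A minimal usage sketch, assuming the model is released on Hugging Face
# with custom code. The repo id and the `generate`/`chat` method names
# below are assumptions; verify them against the official model card.
import torch
from transformers import AutoModel, AutoTokenizer

ckpt = "internlm/internlm-xcomposer-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModel.from_pretrained(
    ckpt, torch_dtype=torch.float16, trust_remote_code=True
).cuda().eval()
model.tokenizer = tokenizer  # attach tokenizer for the custom generation code

# Interleaved composition: provide a title, receive a manuscript in which
# image slots are filled with the most appropriate visual candidates.
article = model.generate("A Travel Guide to the Swiss Alps")

# Text-image comprehension: visual question answering on a single image.
answer = model.chat(text="What is unusual about this image?", image="example.jpg")
print(answer)
```
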

Publication
arXiv 2023
Jiaqi Wang 王佳琦
Research Scientist
Shanghai AI Laboratory

Jiaqi Wang is a Research Scientist at Shanghai AI Laboratory. His research interests focus on Multimodal Learning, Visual Perception, and AI Content Creation in both 2D and 3D open worlds.
