
InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

Publication venue: arXiv

Zhe Chen2,1†, Jiannan Wu3,1†, Wenhai Wang1,4, Weijie Su6,1†, Guo Chen2,1†, Sen Xing5, Muyan Zhong5, Qinglong Zhang1, Xizhou Zhu5,7,1, Lewei Lu7,1, Bin Li6, Ping Luo3, Tong Lu2, Yu Qiao1, Jifeng Dai5

1OpenGVLab, Shanghai AI Laboratory  2Nanjing University

3The University of Hong Kong  4The Chinese University of Hong Kong  5Tsinghua University

6University of Science and Technology of China  7SenseTime Research

 

Abstract

The exponential growth of large language models (LLMs) has opened up numerous possibilities for multi-modal AGI systems. However, the progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has not kept pace with LLMs. In this work, we design a large-scale vision-language foundation model (InternVL), which scales up the vision foundation model to 6 billion parameters and progressively aligns it with the LLM, using web-scale image-text data from various sources. This model can be broadly applied to, and achieves state-of-the-art performance on, 32 generic visual-linguistic benchmarks, including visual perception tasks such as image-level or pixel-level recognition and vision-language tasks such as zero-shot image/video classification and zero-shot image/video-text retrieval; it can also be linked with LLMs to create multi-modal dialogue systems. It has powerful visual capabilities and can serve as a good alternative to ViT-22B. We hope that our research could contribute to the development of multi-modal large models.
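The alignment step described above, pairing a large vision encoder with a language model on web-scale image-text data, is commonly trained with a CLIP-style contrastive objective. Below is a minimal PyTorch sketch of such a contrastive alignment loss; the function name, feature dimensions, and encoder stand-ins are illustrative assumptions, not the authors' implementation.

# Minimal sketch of CLIP-style contrastive image-text alignment,
# the kind of objective used to align a vision encoder with a language model.
# The helper name and dimensions below are illustrative assumptions,
# not the InternVL implementation.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats: torch.Tensor,
                          text_feats: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_feats: (B, D) embeddings from the vision encoder
    text_feats:  (B, D) embeddings from the text encoder / LLM projection
    """
    # L2-normalize so dot products become cosine similarities.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (B, B) similarity matrix; diagonal entries are the matched pairs.
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast images against texts and texts against images, then average.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Example usage with random features standing in for encoder outputs.
if __name__ == "__main__":
    img = torch.randn(8, 768)   # batch of 8 image embeddings
    txt = torch.randn(8, 768)   # batch of 8 paired text embeddings
    print(clip_contrastive_loss(img, txt).item())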
