code: https://github.com/infly-ai/INF-MLLM
3-stage training: Pretraining, Multitask finetuning, Instruction Tuning
The LLM is not adapted with LoRA; instead, the Q and V projections inside self-attention are finetuned directly (see the sketch at the end of this section).
Resources: 32×A800 GPUs, batch size 1024

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality
paper: https://arxiv.org/...
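As a concrete reading of the INF-MLLM note above ("no LoRA, directly finetune Q/V in self-attention"), here is a minimal PyTorch sketch: freeze the whole LLM, then re-enable gradients only for the attention Q/V projection weights. The base model name and the q_proj/v_proj module names follow the HuggingFace LLaMA convention and are assumptions for illustration, not taken from the INF-MLLM repo.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical base LLM for illustration; INF-MLLM's actual backbone may differ.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Freeze every parameter first ...
for param in model.parameters():
    param.requires_grad = False

# ... then unfreeze only the Q and V projections in each self-attention block.
# "q_proj" / "v_proj" are the HF LLaMA module names (an assumption here).
trainable = []
for name, param in model.named_parameters():
    if "q_proj" in name or "v_proj" in name:
        param.requires_grad = True
        trainable.append(name)

print(f"trainable tensors: {len(trainable)}")
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```

Unlike LoRA, this trains the full Q/V weight matrices in every layer rather than low-rank factors, which is consistent with the relatively heavy resource budget noted above (32×A800 GPUs, batch size 1024).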