Multimodal Interaction & World Model
The Seed Multimodal Interaction and World Model team is dedicated to developing models with human-level multimodal understanding and interaction capabilities. The team also aims to advance the exploration and development of multimodal assistant products.
Research topics
Foundations and applications of multimodal understanding models
Develop integrated models that understand audio-visual and linguistic inputs; strengthen fundamental image and video understanding, including text, layout, grounding, and spatial relations, as well as multimodal reasoning; improve the efficiency of model training and inference; achieve long-term memory retention for users; and optimize model performance across various devices for a better experience.
Multimodal · Foundation
Multimodal agent and inference
Achieve advanced capabilities for multimodal models, including multimodal RAG, visual CoT, and agentic behavior, and build general multimodal agents for GUIs and games in the virtual world.
Multimodal · Foundation · Agent
Unified models for generation and understanding
Explore unified representation and training methods for both continuous and discrete signals, and develop models that interleave generation and understanding.
Multimodal · World Model
World Model
Employ pre-training and simulation technologies to model diverse environments in the virtual and physical worlds, providing foundational capabilities for multimodal interactive exploration (a minimal rollout sketch follows below).
Multimodal · World Model
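The World Model direction above treats a pretrained model as a learned simulator of virtual and physical environments. A minimal sketch of the kind of rollout interface such a simulator could expose is shown below; the class and method names (WorldModel, predict_next, rollout) and the tensor shapes are illustrative assumptions, not the team's actual code.

```python
# Illustrative sketch only: a generic learned-simulator interface for
# world-model-style rollouts. Names and shapes are assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class Transition:
    observation: np.ndarray   # e.g. an RGB frame, shape (H, W, 3)
    action: np.ndarray        # e.g. a continuous control vector
    reward: float


class WorldModel:
    """A learned simulator: given the current observation and an action,
    predict the next observation (and optionally a reward estimate)."""

    def predict_next(self, observation: np.ndarray, action: np.ndarray) -> Transition:
        # In a real system this would call a pretrained video/dynamics model.
        # Here we return the observation unchanged as a placeholder.
        return Transition(observation=observation, action=action, reward=0.0)

    def rollout(self, observation: np.ndarray, actions: list[np.ndarray]) -> list[Transition]:
        """Imagine a trajectory by feeding predicted frames back in."""
        trajectory = []
        for action in actions:
            step = self.predict_next(observation, action)
            trajectory.append(step)
            observation = step.observation
        return trajectory
```

In practice the placeholder prediction would be replaced by a pretrained video or dynamics model, and an agent could plan by scoring such imagined trajectories.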

Selected Papers

May 20, 2025
Emerging Properties in Unified Multimodal Pretraining
Unifying multimodal understanding and generation has shown impressive capabilities in cutting-edge proprietary systems. In this work, we introduce BAGEL, an open-source foundational model that natively supports multimodal understanding and generation. BAGEL is a unified, decoder-only model pretrained on trillions of tokens curated from large-scale interleaved text, image, video, and web data. When scaled with such diverse multimodal interleaved data, BAGEL exhibits emerging capabilities in complex multimodal reasoning. As a result, it significantly outperforms open-source unified models in both multimodal generation and understanding across standard benchmarks, while exhibiting advanced multimodal reasoning abilities such as free-form image manipulation, future frame prediction, 3D manipulation, and world navigation. In the hope of facilitating further opportunities for multimodal research, we share the key findings, pretraining details, data creation protocol, and release our code and checkpoints to the community.
Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, Guang Shi, Haoqi Fan
Computer Vision
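The BAGEL abstract above highlights pretraining a single decoder-only model on large-scale interleaved text, image, video, and web data. The sketch below illustrates, in generic terms, how interleaved text and image segments can be packed into one token stream for such a model; the special-token IDs, the byte-level text tokenizer, and the toy patch quantizer are assumptions for illustration, not BAGEL's actual tokenization.

```python
# Illustrative sketch: packing interleaved text and image segments into a
# single token stream for a unified decoder-only model. All IDs and the
# patch-quantization scheme are assumptions, not BAGEL's actual format.
from typing import Union

import numpy as np

BOI, EOI = 100001, 100002   # hypothetical begin/end-of-image markers
IMG_BASE = 256              # offset so image codes do not collide with text bytes


def tokenize_text(text: str) -> list[int]:
    # Stand-in for a real text tokenizer (plain byte-level for simplicity).
    return list(text.encode("utf-8"))


def tokenize_image(image: np.ndarray, patch: int = 16, codebook: int = 8192) -> list[int]:
    # Stand-in for a visual tokenizer (e.g. a VQ encoder): one discrete code
    # per patch, derived here from mean patch intensity.
    h, w = image.shape[:2]
    codes = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            codes.append(IMG_BASE + (int(image[y:y + patch, x:x + patch].mean()) % codebook))
    return codes


def pack_interleaved(segments: list[Union[str, np.ndarray]]) -> list[int]:
    """Flatten alternating text/image segments into one decoder-only token stream."""
    sequence: list[int] = []
    for seg in segments:
        if isinstance(seg, str):
            sequence.extend(tokenize_text(seg))
        else:
            sequence.append(BOI)
            sequence.extend(tokenize_image(seg))
            sequence.append(EOI)
    return sequence


tokens = pack_interleaved(["A photo of a bagel:", np.zeros((64, 64, 3), dtype=np.uint8)])
print(len(tokens))  # 19 text bytes + 2 image markers + 16 patch codes
```

With a stream like this, the same next-token objective covers both understanding (text conditioned on images) and generation (image tokens conditioned on text).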
May 13, 2025
Seed1.5-VL Technical Report
We present Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning. Seed1.5-VL is composed of a 532M-parameter vision encoder and a Mixture-of-Experts (MoE) LLM with 20B active parameters. Despite its relatively compact architecture, it delivers strong performance across a wide spectrum of public VLM benchmarks and internal evaluation suites, achieving state-of-the-art performance on 38 out of 60 public benchmarks. Moreover, in agent-centric tasks such as GUI control and gameplay, Seed1.5-VL outperforms leading multimodal systems, including OpenAI CUA and Claude 3.7. Beyond visual and video understanding, it also demonstrates strong reasoning abilities, making it particularly effective for multimodal reasoning challenges such as visual puzzles. We believe these capabilities will empower broader applications across diverse tasks. In this report, we mainly provide a comprehensive review of our experiences in building Seed1.5-VL across model design, data construction, and training at various stages, hoping that this report can inspire further research. Seed1.5-VL is now accessible at this https URL (Volcano Engine Model ID: doubao-1-5-thinking-vision-pro-250428).
Seed Multimodal Team
LLM
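The report notes that Seed1.5-VL is served on Volcano Engine under the model ID doubao-1-5-thinking-vision-pro-250428. A minimal sketch of querying it through an OpenAI-compatible chat-completions client follows; the base URL, the ARK_API_KEY variable name, and the assumption that the endpoint accepts the standard image_url message format are mine, so check the Volcano Engine documentation before relying on them.

```python
# Hedged sketch: querying Seed1.5-VL through an OpenAI-compatible endpoint.
# The base URL and ARK_API_KEY variable are assumptions based on Volcano
# Engine's Ark service; consult the official docs for authoritative values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3",  # assumed Ark endpoint
    api_key=os.environ["ARK_API_KEY"],                    # assumed variable name
)

response = client.chat.completions.create(
    model="doubao-1-5-thinking-vision-pro-250428",  # model ID from the report
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text",
                 "text": "Summarize the trend shown in this chart."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```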
Jan 21, 2025
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
This paper introduces UI-TARS, a native GUI agent model that solely perceives the screenshots as input and performs human-like interactions (e.g., keyboard and mouse operations). Unlike prevailing agent frameworks that depend on heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts and workflows, UI-TARS is an end-to-end model that outperforms these sophisticated frameworks. Experiments demonstrate its superior performance: UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating perception, grounding, and GUI task execution. Notably, in the OSWorld benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15 steps, outperforming Claude (22.0 and 14.9 respectively). In AndroidWorld, UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several key innovations: (1) Enhanced Perception: leveraging a large-scale dataset of GUI screenshots for context-aware understanding of UI elements and precise captioning; (2) Unified Action Modeling, which standardizes actions into a unified space across platforms and achieves precise grounding and interaction through large-scale action traces; (3) System-2 Reasoning, which incorporates deliberate reasoning into multi-step decision making, involving multiple reasoning patterns such as task decomposition, reflection thinking, milestone recognition, etc. (4) Iterative Training with Reflective Online Traces, which addresses the data bottleneck by automatically collecting, filtering, and reflectively refining new interaction traces on hundreds of virtual machines. Through iterative training and reflection tuning, UI-TARS continuously learns from its mistakes and adapts to unforeseen situations with minimal human intervention. We also analyze the evolution path of GUI agents to guide the further development of this domain.
Yujia Qin, Yining Ye, Junjie Fang, Haoming Wang, Shihao Liang, Shizuo Tian, Junda Zhang, Jiahao Li, Yunxin Li, Shijue Huang, Wanjun Zhong, Kuanye Li, Jiale Yang, Yu Miao, Woyu Lin, Longxiang Liu, Xu Jiang, Qianli Ma, Jingyu Li, Xiaojun Xiao, Kai Cai, Chuang Li, Yaowei Zheng, Chaolin Jin, Chen Li, Xiao Zhou, Minchao Wang, Haoli Chen, Zhaojian Li, Haihua Yang, Haifeng Liu, Feng Lin, Tao Peng, Xin Liu, Guang Shi
Computer Vision
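UI-TARS's Unified Action Modeling standardizes GUI interactions into a shared action space across platforms, driven by a screenshot-in, action-out loop. The sketch below shows what such a loop could look like in generic form; the action names, the propose_action stub, and the loop structure are illustrative assumptions rather than UI-TARS's actual schema or inference code.

```python
# Illustrative sketch of a unified GUI action space and a perceive-act loop.
# Action names and the model stub are assumptions, not UI-TARS's real schema.
from dataclasses import dataclass, field


@dataclass
class Action:
    kind: str                      # e.g. "click", "type", "scroll", "finish"
    args: dict = field(default_factory=dict)


def propose_action(screenshot_png: bytes, instruction: str, history: list[Action]) -> Action:
    """Stand-in for the agent model: screenshot + instruction -> next action."""
    # A real agent would run a vision-language model here; we stop immediately.
    return Action(kind="finish")


def run_episode(instruction: str, take_screenshot, execute, max_steps: int = 15) -> list[Action]:
    """Perceive-act loop: screenshot in, unified action out, until 'finish'."""
    history: list[Action] = []
    for _ in range(max_steps):
        action = propose_action(take_screenshot(), instruction, history)
        history.append(action)
        if action.kind == "finish":
            break
        execute(action)            # e.g. dispatch a click at args["x"], args["y"]
    return history
```

The 15- and 50-step OSWorld settings quoted in the abstract roughly correspond to the step budget (max_steps) of a loop of this kind.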
Learn More
Technical capability demonstration
Seed1.5-VL
A vision-language multimodal large model that demonstrates outstanding performance in tasks such as visual reasoning, image question answering, chart understanding and question answering, visual grounding and counting, video understanding, and GUI agent tasks.
UI-TARS
An open-source multimodal agent built upon a powerful vision-language model. It is capable of effectively performing diverse tasks within virtual worlds.
BAGEL
An open-source unified multimodal model offering capabilities such as image generation, image editing, style transfer, and image expansion, and delivering precise, accurate, and photorealistic outputs.

Featured Jobs

Research Scientist/Engineer - Multimodal Interaction & World Model
Singapore
Experienced Hiring
Apply Now
Research Scientist- Foundation Model, Vision and Language
San Jose / Seattle
Experienced Hiring
Apply Now
Research Scientist, Multimodal Interaction & World Model - 2025 Start
Singapore
Campus Recruitment
Apply Now
Research Scientist Graduate- (Foundation Model, Vision and Language) - 2025 Start (PhD)
San Jose / Seattle
Campus Recruitment
Apply Now
Research Scientist Intern - Multimodal Interaction & World Model - 2025 Start
Singapore
Internship
Apply Now
Student Researcher (Seed - Foundation Model - Vision and Language) - 2025 Start (PhD)
San Jose / Seattle
Internship
Apply Now