
Multimodal Interaction & World Model
The Seed Multimodal Interaction and World Model team is dedicated to developing models with human-level multimodal understanding and interaction capabilities, and to advancing the exploration and development of multimodal assistant products.
Latest advancements
Selected papers
May 20, 2025
Emerging Properties in Unified Multimodal Pretraining
Computer Vision
May 13, 2025
Seed1.5-VL Technical Report
LLM
Jan 21, 2025
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
Computer Vision
Featured roles
Research Scientist - Seed Multimodal Interaction and World Model
San Jose
Experienced Hiring
Research Scientist Graduate (Multimodal Interaction and World Model) - 2026 Start (PhD)
San Jose
Campus Recruitment
Student Researcher [Seed – Multimodal Interaction & World Model - RL Focused] – 2026 Start (PhD)
San Jose
Internship