
Multimodal Interaction & World Model
The Seed Multimodal Interaction and World Model team is dedicated to developing models with human-level multimodal understanding and interaction capabilities, and to advancing the exploration and development of multimodal assistant products.
Latest advancements
Selected papers
Apr 22, 2026
Seed3D 2.0: Advancing High-Fidelity Simulation-Ready 3D Content Generation
Computer Vision
May 20, 2025
Emerging Properties in Unified Multimodal Pretraining
Computer Vision
May 13, 2025
Seed1.5-VL Technical Report
LLM
Featured roles
Research Scientist - Seed Multimodal Interaction and World Model
San Jose
Experienced Hiring
Research Scientist Graduate - (Multimodal Interaction and World Model) - 2026 Start (PhD)
San Jose
Campus Recruitment
Student Researcher [Seed - Multimodal Interaction & World Model - RL Focused] - 2026 Start (PhD)
San Jose
Internship