Top Seed Talent Program
About the Program
The Top Seed Talent Program is an exclusive initiative launched by the ByteDance Seed team to attract top-tier university research talent. The program includes full-time positions for recent Ph.D. graduates and research internships for outstanding current students.
We are committed to identifying and recruiting the world's leading AI researchers globally to join us in pushing the boundaries of AI.
What We Are Looking For
Firm technical conviction and passion
A willingness to tackle the industry's most challenging problems and explore uncharted technical paths.
Exceptional research capabilities
Demonstrate deep technical expertise in a specific area of AI, with high-quality, impactful publications or significant open-source contributions.
Curiosity and drive
Possess refined technical taste and intuition, coupled with curiosity, drive, and a proven track record. While past achievements matter, we equally value your research potential.
Research Topics
LLMs
Generalization of reward models and reinforcement learning models
Self-learning of reward and reinforcement learning models
Interpretability of large language models
Factuality and truthfulness of large language models
Next-generation reinforcement learning algorithms
Large language models based on self-play
Machine Learning Algorithms and Systems
Develop efficient LLM architectures to optimize performance while minimizing training and inference costs.
Research massive training clusters to enhance training stability and Model FLOPs Utilization (MFU), facilitating effective cross-cluster training.
Address memory-bound issues during inference, investigate multi-machine inference, and develop parallel inference strategies.
Integrate next-generation computing systems to advance model architectures, training methods, and inference techniques.
Research algorithmic challenges in scaling up large models.
Multimodal Generation
Multimodal Generative Foundation Models: Developing controllable and interactive visual generation models integrating vision, text, and speech, and exploring their generalization across multiple tasks.
Visual World Models: Building generative visual perception and physical world modeling for 3D/4D generation, and exploring generation-driven rendering and physics engines toward physical AI.
Multimodal Generative Architectures: Investigating and improving Transformer and diffusion model architectures for efficient scaling, including visual tokenizers, reinforcement learning, visual reasoning, model distillation, and inference acceleration.
Multimodal Understanding
Multimodal Understanding Foundation Models: Developing general models integrating language, vision, and audio, and strengthening fundamental capabilities such as text, layout, and spatial relationships in images and videos.
Multimodal Reasoning and Agent Breakthroughs: Advancing multimodal retrieval-augmented generation, visual chain-of-thought reasoning, and the construction of general-purpose agents for GUI and game scenarios.
Unified Generative-Understanding Modeling: Developing joint representation and training methods for continuous and discrete signals to enable dynamic interaction.
Multimodal World Model Construction: Modeling virtual and real environments using simulation and pre-training to explore multimodal interaction.
Speech
Multimodal Interaction: Developing unified generation and understanding capabilities through multimodal joint learning.
Personalized Dialogue: Improving dialogue systems with memory, learning mechanisms, and online learning.
Reinforcement Learning: Developing and optimizing RL algorithms and systems for speech/audio multimodal tasks and creating speech agents.
Campus Recruitment
PhD candidates graduating between September 2025 and August 2026
01 We Provide a Research-Friendly Environment for Top Seed
No background limits: we value your research potential.
No technical complacency: we push the boundaries of AI.
We offer a top-tier research environment with abundant real-world application opportunities.
We fully recognize the value of your research, and your contributions will be fairly and fully rewarded.
02 Recent Graduates Are Making a Significant Impact at Seed
Z built and open-sourced Multi-SWE-bench, the first multilingual code-repair benchmark, covering 7 programming languages and 1,632 real-world bug-fixing tasks. It enables significantly more accurate evaluation of large models' advanced programming capabilities.
Z, Seed-LLM
Q spearheaded the development and open-sourcing of UI-TARS, a multimodal agent capable of efficiently executing a wide range of tasks in virtual worlds. The project gained strong traction among developers, with its desktop version surpassing 10,000 GitHub stars.
Q, Seed-Multimodal Interaction & World Model
H first-authored the UltraMem architecture, an ultra-sparse model design that effectively addresses the high memory-access cost of MoE inference. UltraMem achieves 2–6× faster inference than traditional MoE architectures, with up to an 83% reduction in inference cost.
H, Seed-LLM
Research Internship
Candidates graduating in or after September 2025
01 We Prioritize the Experience of Top Seed Interns
Interns are Highly Valued
Interns can receive the same access and resources as full-time employees.
High Degree of Research Freedom
Interns can independently choose their research topics and benefit from flexible internship arrangements, including remote work and university-industry collaborations.
Open and Collaborative Culture
We encourage the publication of research findings and foster a culture of open knowledge-sharing within the team.
Competitive Compensation
We offer top-tier internship compensation far exceeding the industry average.
02 Interns Are Conducting Globally Impactful Research in the Seed Team
Over a three-year internship with the Seed team, R first-authored VideoWorld, which was accepted to CVPR 2025 and has garnered significant attention. R was invited to present the work to top academic teams, including DeepMind.
R, Seed-Vision
During his internship, M led, as first author, the development and open-sourcing of HybridFlow (VeRL), a reinforcement learning framework built on a hybrid programming model. HybridFlow achieves a 1.5–20x improvement in training throughput over state-of-the-art baselines. The project has garnered 4.3K stars on GitHub, and the accompanying paper was accepted to EuroSys 2025.
M, Seed-Infra
As a co-first author, S published and open-sourced COMET, a key optimization technique for Mixture-of-Experts (MoE) architectures. COMET has been deployed in ByteDance's large-scale multi-GPU cluster training, saving millions of GPU hours in total. The work was accepted to MLSys 2025 with high scores.
S, Seed-Infra
Q&A
Q: What is the difference between the Top Seed Talent Program and the ByteDance Soaring Star Talent Program?
A: Both talent programs are aimed at PhD candidates, but they cover different research directions. If you aspire to work in fields such as LLMs, speech, vision, world models, foundational architectures, AI infrastructure, and next-generation AI interaction, choose the Top Seed Talent Program. If you are keen on diving into fields such as AI applications, search, recommendation, advertising, AI safety, privacy and security, hardware, video architecture, and engineering architecture, go for the ByteDance Soaring Star Program.
Q: What is the application mechanism like for the two talent programs?
Q: If I am a candidate for the class of 2026 and have already received an internship offer from another team, can I still apply for a position under the Top Seed Talent Program?
If you have any questions, feel free to contact us at topseed@bytedance.com.
Apply Now