Systems Research Engineer
Job description
· Distributed Systems Research & Development: Architect, implement, and evaluate distributed system components for emerging AI and data-centric workloads. Drive modular design and scalability across CPU, GPU, and NPU clusters, building highly efficient serving and scheduling systems.
· Performance Optimization & Profiling: Conduct in-depth profiling and performance tuning of large-scale inference and data pipelines, focusing on KV cache management, heterogeneous memory scheduling, and high-throughput inference serving using frameworks such as vLLM, Ray Serve, and PyTorch Distributed.
· Scalable Model Serving Infrastructure: Develop and evaluate frameworks that enable efficient multi-tenant, low-latency, and fault-tolerant AI serving across distributed environments. Research and prototype new techniques for cache sharing, data locality, and resource orchestration and scheduling within AI clusters.
· Research & Publications: Translate innovative research ideas into publishable contributions at leading venues (e.g., OSDI, NSDI, EuroSys, SoCC, MLSys, NeurIPS, ICML, ICLR) while driving internal adoption of novel methods and architectures.
· Cross-Team Collaboration: Communicate technical insights, research progress, and evaluation outcomes effectively to multidisciplinary stakeholders and global Huawei research teams.
Requirements
We are seeking Systems Research Engineers with a strong interest in computer systems, distributed AI infrastructure, and performance optimization. These roles are ideal for recent PhD graduates or exceptional BSc/MSc engineers looking to build research-driven engineering experience in areas such as operating systems, distributed systems, AI model serving, and machine learning infrastructure. You will work closely with senior architects on real-world projects, helping to prototype and optimize next-generation AI infrastructure.
· Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
· Strong knowledge of distributed systems, operating systems, machine learning systems architecture, inference serving, and AI infrastructure.
· Hands-on experience with LLM serving frameworks (e.g., vLLM, Ray Serve, TensorRT-LLM, TGI) and distributed KV cache optimization.
· Proficiency in C/C++, with additional experience in Python for research prototyping.
· Solid grounding in systems research methodology, distributed algorithms, and profiling tools.
· Team-oriented mindset with effective technical communication skills.
Desired Qualifications and Experience:
· PhD in systems, distributed computing, or large-scale AI infrastructure.
· Publications in top-tier systems or ML conferences (NSDI, OSDI, EuroSys, SoCC, MLSys, NeurIPS, ICML, ICLR).
· Understanding of load balancing, state management, fault tolerance, and resource scheduling in large-scale AI inference clusters.
· Prior experience designing, deploying, and profiling high-performance cloud or AI infrastructure systems.