Staff AI Infra Engineer (serving API)

Salary undisclosed


Job Description

We are Genmo, a research lab dedicated to building open, state-of-the-art models for video generation towards unlocking the right brain of AGI. Join us in shaping the future of AI and pushing the boundaries of what's possible in video generation.

Role Overview

We are looking for a senior/staff software engineer to join our inference team. In this role, you will design and scale our inference systems as they grow to support millions of users across more than 20 data centers.

Key Responsibilities
  • Develop high-throughput, low-latency, and resource-efficient inference pipelines.

  • Design, develop, and maintain scalable backend services that support our AI-powered content creation platform.

  • Implement and optimize model serving infrastructure using Kubernetes and other cloud-native technologies.

  • Collaborate with ML engineers to transition models from research to production.

  • Design APIs for integrating our AI capabilities into our partner ecosystem.

  • Implement monitoring, logging, and alerting systems for backend services and model inference.

  • Develop monitoring infrastructure for our ML serving pipeline and apply advanced model compression and optimization techniques (quantization, pruning, distillation) to improve inference performance.

Qualifications
  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field

  • 5+ years of experience in software engineering, with at least 3 years focusing on backend systems and ML infrastructure

  • Must Have:

    • Strong experience with Ray or Kubernetes

    • Strong proficiency in Python and at least one systems programming language (Rust, C++, or Go)

    • Solid understanding of model serving frameworks (e.g., TensorFlow Serving, NVIDIA Triton)

    • Experience with an ML framework such as TensorFlow, PyTorch, or JAX

    • Experience with model compression and optimization techniques

    • Strong knowledge of cloud platforms (AWS, GCP, or Azure) and their ML-specific services

    • Familiarity with distributed systems and microservices architectures

    • Experience with high-performance, low-latency systems

  • The ideal candidate will also have:

    • Experience with GPU programming

Additional Information

The role is based in the Bay Area (San Francisco). Candidates are expected to be located near the Bay Area or open to relocation.

Genmo is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. Genmo, Inc. is an E-Verify company and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.