Member of Technical Staff - Model Serving / API Backend Engineer

Published: 2025-11-16

At Black Forest Labs, we’re on a mission to advance the state of the art in generative deep learning for media, building powerful, creative, and open models that push what’s possible. Born from foundational research, we continuously create advanced infrastructure to transform ideas into images and videos. Our team pioneered Latent Diffusion, Stable Diffusion, and FLUX.1 – milestones in the ...

Job details

San Francisco, United States
$180k–$300k
On-site
Full-time

What if the gap between a breakthrough research model and something millions of people can actually use comes down to making inference fast enough, and APIs reliable enough, to matter?

We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1—models with 400M+ downloads. But here's what doesn't show up in download numbers: transforming research models into production systems that serve millions of requests reliably. Models that generate in seconds, not minutes. APIs that don't fall over under load. Demos that convince people this technology is real. That's the infrastructure you'll build.

What You'll Pioneer

You'll be the bridge between research breakthroughs and production reality. This isn't about maintaining existing APIs—it's about taking models fresh from research, optimizing them for inference, wrapping them in robust serving infrastructure, and shipping demos that show the world what's possible.

You'll be the person who:

  • Develops and maintains robust APIs for serving machine learning models at scale—because reliability matters when millions depend on your endpoints (see the sketch after this list)
  • Transforms research models into polished demos and MVPs that showcase capabilities honestly, without pretending research prototypes are production systems
  • Optimizes model inference for improved performance and scalability using whatever techniques work—batching, quantization, custom kernels, compiler optimizations
  • Implements and manages user preference data acquisition systems that help us understand what actually works in production
  • Ensures high availability and reliability of model serving infrastructure—because downtime means users can't create
  • Collaborates with ML researchers to rapidly prototype and deploy new models, moving from research checkpoint to API endpoint faster than seems reasonable
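
To ground the first of those bullets in code, here is a minimal sketch of the kind of serving endpoint the role owns: a FastAPI service that validates requests, pushes the blocking model call off the event loop so health checks stay responsive, and returns an encoded image. The pipeline stub, route, and field names are illustrative assumptions, not a description of our actual stack.

```python
# Hypothetical sketch: a minimal serving endpoint for an image-generation
# model. The pipeline is a stub; in practice a real diffusion pipeline
# would be loaded once at startup and called here.
import asyncio
import base64

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str = Field(min_length=1, max_length=1000)
    width: int = Field(default=1024, ge=256, le=2048)
    height: int = Field(default=1024, ge=256, le=2048)

def run_pipeline(req: GenerateRequest) -> bytes:
    """Placeholder for the real model call (e.g. a diffusion pipeline)."""
    return b"png-bytes-placeholder"

@app.post("/v1/generate")
async def generate(req: GenerateRequest):
    try:
        # Offload the blocking GPU call so the event loop keeps serving
        # health checks and new requests while inference runs.
        png = await asyncio.to_thread(run_pipeline, req)
    except Exception:
        raise HTTPException(status_code=503, detail="inference failed")
    return {"image_b64": base64.b64encode(png).decode()}

@app.get("/healthz")
def healthz():
    # Load balancers and orchestrators poll this to decide routing.
    return {"ok": True}
```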

Questions We're Wrestling With

  • How do you make diffusion model inference fast enough for interactive experiences without sacrificing quality?
  • What's the right serving architecture for models that don't fit standard deployment patterns?
  • How do you balance inference optimization techniques (quantization, batching, compilation) against model quality?
  • Where should optimization happen—in the model, in the serving layer, or in custom CUDA code?
  • How do you build APIs that are reliable enough for production but flexible enough for research iteration?
  • What does "production-ready" actually mean for models that are still actively being improved?
  • How do you instrument systems to understand not just that they're working, but how well they're working for users?

These aren't theoretical—they're tradeoffs you'll navigate daily as research becomes product.

Who Thrives Here

You've built ML serving infrastructure and understand the gap between research checkpoints and production APIs. You know how to optimize inference without making models worse. You're comfortable in Python for ML and backend development, but you also know when to drop down to custom CUDA or use compiler optimizations to hit performance targets.

You likely have:

  • Strong proficiency in Python and its ecosystem for machine learning, data analysis, and web development
  • Extensive experience with RESTful API development and deployment for ML tasks—you've built APIs that real products depend on
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes) for deploying ML services at scale
  • Knowledge of cloud platforms (AWS, GCP, or Azure) for deploying and scaling ML services in production
  • Proven track record in rapid ML model prototyping using tools like Streamlit or Gradio—because demos matter for showing what's possible
  • Experience with distributed task queues and scalable model serving architectures that handle variable load
  • Understanding of monitoring, logging, and observability best practices for ML systems—because you can't fix what you can't see (a minimal sketch follows)
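
On that last point, here is a hedged sketch of baseline instrumentation using the standard prometheus_client library; the metric names and the toy inference call are invented for illustration.

```python
# Hypothetical sketch: wiring latency and error metrics around an
# inference call with prometheus_client; all names are made up.
import time

from prometheus_client import Counter, Histogram, start_http_server

INFER_LATENCY = Histogram(
    "inference_latency_seconds", "End-to-end inference latency", ["model"]
)
INFER_ERRORS = Counter(
    "inference_errors_total", "Failed inference requests", ["model"]
)

def timed_inference(model_name, fn, *args, **kwargs):
    # Record latency on every call and count failures separately, so
    # dashboards can show both "how fast" and "how often it breaks".
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    except Exception:
        INFER_ERRORS.labels(model=model_name).inc()
        raise
    finally:
        INFER_LATENCY.labels(model=model_name).observe(
            time.perf_counter() - start
        )

if __name__ == "__main__":
    start_http_server(9090)  # exposes /metrics for a Prometheus scraper
    print(timed_inference("toy-model", lambda x: x * 2, 21))  # prints 42
```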

We'd be especially excited if you:

  • Have experience with frontend development frameworks (Vue.js, Angular, React) for building compelling demos
  • Bring familiarity with MLOps practices and tools
  • Know database systems and data streaming technologies
  • Have experience with A/B testing and feature flagging in production environments
  • Understand security best practices for API development and ML model serving
  • Have built real-time inference systems with low-latency optimizations
  • Know CI/CD pipelines and automated testing for ML systems
  • Bring expertise in ML inference optimizations (two of these are sketched after this list), including:
    • Reducing initialization time and memory requirements
    • Implementing dynamic batching
    • Utilizing reduced precision and weight quantization
    • Applying TensorRT optimizations
    • Performing layer fusion and model compilation
    • Writing custom CUDA code for performance enhancements
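
Two of those techniques compose naturally, and a toy sketch shows how: the snippet below (PyTorch, with a small linear layer standing in for a diffusion model) accumulates concurrent requests for a few milliseconds, runs them as one reduced-precision forward pass, and fans the results back out. The batch size, timeout, and model are assumptions chosen for illustration, not production values.

```python
# Hypothetical sketch of dynamic batching with reduced precision:
# queue incoming requests, wait briefly to accumulate a batch, run one
# bf16 forward pass, then fan results back out to each caller.
import asyncio

import torch

MAX_BATCH = 8
MAX_WAIT_S = 0.005  # extra time to wait for more requests before running

model = torch.nn.Linear(16, 16).eval()  # stand-in for a real network
queue: asyncio.Queue = asyncio.Queue()

async def infer(x: torch.Tensor) -> torch.Tensor:
    # Callers enqueue their input plus a future to await the result on.
    fut = asyncio.get_running_loop().create_future()
    await queue.put((x, fut))
    return await fut

async def batch_worker():
    while True:
        items = [await queue.get()]  # block until at least one request
        loop = asyncio.get_running_loop()
        deadline = loop.time() + MAX_WAIT_S
        while len(items) < MAX_BATCH and loop.time() < deadline:
            try:
                items.append(
                    await asyncio.wait_for(queue.get(), deadline - loop.time())
                )
            except asyncio.TimeoutError:
                break
        xs = torch.stack([x for x, _ in items])
        with torch.inference_mode(), torch.autocast("cpu", dtype=torch.bfloat16):
            ys = model(xs)  # one forward pass for the whole batch
        for (_, fut), y in zip(items, ys):
            fut.set_result(y)

async def main():
    worker = asyncio.create_task(batch_worker())
    outs = await asyncio.gather(*(infer(torch.randn(16)) for _ in range(20)))
    print(len(outs), outs[0].dtype)  # 20 torch.bfloat16
    worker.cancel()

asyncio.run(main())
```

The pattern matters more than the numbers: the worker amortizes fixed per-call overhead across the batch, and the bf16 autocast trades a small amount of numeric precision for throughput.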

What We're Building Toward

We're not just serving models—we're building the infrastructure that makes frontier AI research usable at scale. Every optimization you ship makes inference faster. Every API you build enables new applications. Every demo you create shows the world what's possible. If that sounds more compelling than maintaining existing systems, we should talk.

Base Annual Salary: $180,000–$300,000 USD

We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
