Forward Deployed Machine Learning Engineer

Published: 2025-11-16

At Black Forest Labs, we’re on a mission to advance the state of the art in generative deep learning for media, building powerful, creative, and open models that push what’s possible. Born from foundational research, we continuously create advanced infrastructure to transform ideas into images and videos. Our team pioneered Latent Diffusion, Stable Diffusion, and FLUX.1 – milestones in the ...

Job details

Germany (Western Europe)
$180k - $300k
On-site
Full-time

What if the hardest part of generative AI isn't training the model, but making it work in production under constraints no one anticipated?

We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1—models with 400M+ downloads. But here's what keeps us up at night: the gap between a model that works beautifully in research and one that serves millions of requests with sub-second latency while maintaining quality. That's not a solved problem. That's your problem to help us solve.

What You'll Pioneer

You'll live at the intersection of cutting-edge research and brutal production reality. Your customers won't just want FLUX to work—they'll need it optimized for their specific hardware, fine-tuned for their unique use cases, and integrated into systems that weren't designed for diffusion models in the first place.

You'll be the person who:

  • Ensures FLUX models perform optimally in customer environments—whether that's on-premise GPU clusters or BFL-hosted infrastructure—balancing the eternal tension between latency and output quality
  • Architects deep product integrations that go far beyond "here's an API endpoint"—helping customers with everything from model hosting and deployment to inference optimization techniques that haven't made it into textbooks yet
  • Customizes our foundation models for visual media to solve problems customers couldn't articulate until you helped them understand what's possible
  • Sits in technical deep-dives with customers to diagnose performance bottlenecks, then translates those findings into solutions (and sometimes into research questions for our core team)
  • Discovers where generative visual AI should go next by understanding which industries are struggling with problems we could solve

Questions We're Wrestling With

  • What does "optimal performance" actually mean when one customer needs 100ms latency and another needs photorealistic quality at any cost?
  • How do you fine-tune a foundation model for a customer's specific use case without losing what made it powerful in the first place?
  • When should a customer run FLUX on their own infrastructure versus use our hosted solution—and how do we help them make that decision honestly?
  • What inference optimizations work in theory but break in production, and vice versa?
  • Which industries don't yet realize they have a generative visual AI problem we could solve?

We're figuring these out together, at the edge of what's technically possible.
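The first question above, trading latency against quality, can be sketched with a toy denoising loop. This is purely illustrative (the function, numbers, and per-step cost are hypothetical, not from the posting or from FLUX itself): each sampling step cuts the remaining error roughly geometrically, while latency grows linearly with step count, which is the core tension any diffusion-style sampler faces in production.

```python
def toy_denoise(steps, target=1.0, per_step_ms=20.0):
    """Toy denoiser: each step moves the estimate halfway toward the target.

    Returns (estimated latency in ms, absolute error vs. the target).
    Illustrative only -- real samplers (DDIM, flow matching, etc.) behave
    differently, but the step-count/quality tension is the same shape.
    """
    x = 0.0
    for _ in range(steps):
        x += 0.5 * (target - x)  # halve the remaining error each step
    latency_ms = steps * per_step_ms
    error = abs(target - x)
    return latency_ms, error

# Sweep step counts: latency grows linearly, error shrinks geometrically.
for steps in (4, 8, 16, 32):
    latency, err = toy_denoise(steps)
    print(f"{steps:>2} steps -> ~{latency:5.0f} ms, error {err:.6f}")
```

A customer who needs 100ms budgets lives on the left of that sweep; one who needs photorealism at any cost lives on the right, and most real deployments land somewhere in between.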

Who Thrives Here

You understand diffusion models not just conceptually, but viscerally—you've debugged them, optimized them, served them at scale. You've been in the room when a customer's integration goes wrong and you need to diagnose whether it's a model issue, an infrastructure issue, or a fundamental misunderstanding of what the model can do.

You likely have:

  • Direct experience working with customers on generative AI deployment—the kind where you're iterating on solutions in real-time, not just following a playbook
  • Hands-on expertise with generative modeling approaches, particularly fine-tuning, optimizing, and serving deep learning models in production environments
  • A proven track record as an ML engineer who's shipped models that real systems depend on
  • Strong Python skills and intuitive understanding of API design (because demos and prototypes are how you communicate what's possible)
  • The ability to explain why a diffusion model is slow to an executive and how to fix it to an engineer—in the same meeting

We'd be especially excited if you:

  • Have deep knowledge of diffusion models and/or flow matching, including fine-tuning and distillation techniques that go beyond standard tutorials
  • Know the FLUX ecosystem intimately—ComfyUI, common training frameworks, the tools practitioners actually use
  • Have battle-tested experience optimizing inference for transformer-based models (and the scars to prove it)
  • Can architect solutions in complex enterprise environments where "just add more GPUs" isn't an option
  • Contribute to open-source projects in the diffusion model space and understand the community
  • Have deployed models on cloud platforms using state-of-the-art serving infrastructure

What We're Building Toward

We're not just supporting customers—we're learning what it actually takes to bring frontier generative AI into production at scale. Every customer integration teaches us something we didn't know. Every optimization challenge reveals gaps in our understanding. If that sounds more compelling than having all the answers documented, we should talk.

Base Annual Salary: $180,000–$300,000 USD

We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
