Member of technical staff - image / video applications

Freiburg (Elbe)
Black Forest Labs
Posted 2 hours ago
Description

What if the gap between a research breakthrough and a tool creators actually use is giving them the right controls, not just better outputs? We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1, models with 400M downloads. But here's what we've learned: raw generation power isn't enough. Creators need precise control (hex color palettes, transparency channels, custom aspect ratios), the practical mechanisms that turn experimental models into production tools. That's the bridge you'll build.

What You'll Pioneer

You'll develop control mechanisms that make our models genuinely useful for real-world creative workflows. This isn't about academic novelty; it's about training large-scale diffusion models with the practical controls that designers, filmmakers, and developers actually need to ship work they're proud of. You'll be the person who:

  • Trains large-scale diffusion transformer models with advanced control mechanisms: hex color control, transparency generation, custom aspect ratios, and other production-ready features that bridge research and practice
  • Develops conditioning mechanisms for practical production requirements in image and video generation, understanding what creators need before they can articulate it
  • Rigorously ablates design choices for applied controls, running experiments that tell us not just what works but what works well enough to ship, and communicating those insights to shape our research direction
  • Reasons about the speed-quality tradeoffs of control architectures for real-world applications, where both matter and neither can be sacrificed completely

Questions We're Wrestling With

  • How do you give users precise color control without breaking what makes diffusion models powerful in the first place?
  • What's the right way to condition models on transparency requirements, and why is it harder than it looks?
  • Where do practical controls help creativity, and where do they constrain it?
  • How do you evaluate whether a control mechanism actually works for real creators versus just working in ablation studies?
  • What's the speed-quality tradeoff for production control architectures, and how do we make it acceptable for real workflows?
  • Which controls matter most to creators, and which are we building because they're technically interesting?

These aren't theoretical questions; they determine whether our models get used or abandoned.

Who Thrives Here

You've trained large-scale diffusion models and understand the gap between research capabilities and production requirements. You know that shipping a useful control mechanism is harder than publishing a paper about one. You've fine-tuned models for real applications and developed intuition for what actually matters to users. You likely have:

  • Hands-on experience training large-scale diffusion models on image and video data, the kind where you've debugged training instabilities and understood why controls sometimes break generation quality
  • Experience fine-tuning diffusion models for image and video applications such as upscalers, inpainting/outpainting models, or other applied tasks where constraints matter
  • A deep understanding of how to evaluate image and video generative models effectively, knowing the difference between metrics that correlate with quality and metrics that just look good on paper
  • Strong proficiency in PyTorch, transformer architectures, and the full ecosystem of modern deep learning
  • A solid understanding of distributed training techniques (FSDP, low-precision training, model parallelism), because our models don't fit on one GPU

We'd be especially excited if you:

  • Have experience writing forward and backward Triton kernels and verifying their correctness while accounting for floating-point error
  • Bring proficiency with profiling, debugging, and optimizing single- and multi-GPU operations using tools like Nsight or stack trace viewers
  • Understand the performance characteristics of different control mechanisms at scale
  • Have shipped features that real users depend on, not just research prototypes

What We're Building Toward

We're not just adding features; we're figuring out how to give creators the controls they need without sacrificing what makes generative models powerful. Every control mechanism we ship enables new creative workflows. Every ablation study teaches us what users actually care about versus what we think they should care about. If that sounds more compelling than pure research, we should talk.

We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
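To make the "conditioning mechanisms" above concrete, here is a minimal PyTorch sketch of one way hex color control could be wired in: embed the requested color and feed it to the model alongside the other conditioning signals, with random condition dropout during training to enable classifier-free guidance at sampling time. Every name and design choice here (the `HexColorConditioner` class, dimensions, the add-to-conditioning-stream approach) is a hypothetical illustration, not Black Forest Labs' actual architecture.

```python
import torch
import torch.nn as nn


class HexColorConditioner(nn.Module):
    """Illustrative sketch: map a hex palette color to a conditioning
    embedding that a diffusion transformer could consume. Hypothetical,
    not any production model's real API."""

    def __init__(self, dim: int = 256, drop_prob: float = 0.1):
        super().__init__()
        # Small MLP lifting normalized RGB into the conditioning space.
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.SiLU(), nn.Linear(dim, dim))
        # Learned "null" embedding substituted when the condition is
        # dropped during training (the classifier-free guidance trick).
        self.null_embed = nn.Parameter(torch.zeros(dim))
        self.drop_prob = drop_prob

    @staticmethod
    def hex_to_rgb(hex_code: str) -> torch.Tensor:
        """Parse '#rrggbb' into a float tensor in [0, 1]."""
        h = hex_code.lstrip("#")
        return torch.tensor([int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4)])

    def forward(self, hex_codes: list[str]) -> torch.Tensor:
        rgb = torch.stack([self.hex_to_rgb(h) for h in hex_codes])
        emb = self.mlp(rgb)  # (batch, dim)
        if self.training:
            # Randomly replace some conditions with the null embedding.
            drop = torch.rand(len(hex_codes)) < self.drop_prob
            emb[drop] = self.null_embed
        return emb
```

At sampling time, the same module would be run twice (once with the real color, once with the null embedding) and the two predictions blended with a guidance scale, which is the standard classifier-free guidance recipe.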
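On the evaluation question (does a color control actually work, or only in ablations?), one starting point is an automatic adherence check before any human study. The toy metric below is purely an assumption for illustration, not an established benchmark or anything the posting describes: it scores how closely the image's best-matching pixels approach the requested color, so a generation that contains the requested color somewhere scores near zero.

```python
import torch


def palette_adherence(image: torch.Tensor, target_rgb: torch.Tensor,
                      top_frac: float = 0.1) -> float:
    """Toy color-adherence score (hypothetical, for illustration only).

    image: (3, H, W) tensor with values in [0, 1].
    target_rgb: (3,) tensor with values in [0, 1].
    Returns the mean Euclidean RGB distance over the `top_frac` of
    pixels closest to the target color; lower is better, 0 means the
    requested color appears exactly.
    """
    pixels = image.reshape(3, -1).T                      # (H*W, 3)
    dists = torch.linalg.norm(pixels - target_rgb, dim=1)
    k = max(1, int(top_frac * dists.numel()))
    return torch.topk(dists, k, largest=False).values.mean().item()
```

A metric like this only correlates loosely with what creators perceive, which is exactly the "metrics that look good on paper" trap the posting warns about; it belongs next to, not instead of, human evaluation.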

