What if the next breakthrough in generative AI isn't a new architecture, but understanding the tradeoffs well enough to make better training decisions? We're the ~50-person team behind Stable Diffusion, Stable Video Diffusion, and FLUX.1: models with 400M downloads. But here's what keeps us at the frontier: relentlessly questioning every design choice, ablating rigorously, and understanding not just what works, but why, and at what cost. That's the research you'll do.

What You'll Pioneer

You'll train large-scale diffusion models for image and video generation, pushing the boundaries of what's possible while maintaining the rigor that separates real progress from incremental tweaks. This isn't about following established recipes; it's about running the experiments that reveal which architectural choices matter and which are just folklore.

You'll be the person who:

- Trains large-scale diffusion transformer models on image and video data, working at the scale where intuitions break and empirical evidence matters
- Rigorously ablates design choices, running experiments that isolate variables, control for confounds, and produce insights you can actually trust, then communicates those results to shape our research direction
- Reasons about the speed-quality tradeoffs of neural network architectures in production settings where both constraints matter simultaneously
- Fine-tunes diffusion models for specialized applications such as image and video upscalers, inpainting/outpainting models, and other tasks where general-purpose models aren't enough

Questions We're Wrestling With

- Which architectural choices actually matter for image and video quality, and which are just expensive distractions?
- How do you design ablation studies that isolate the signal from the noise at billion-parameter scale?
- What are the real speed-quality tradeoffs for different architectures, and how do they change with scale?
- When does fine-tuning a foundation model work better than training from scratch, and why?
- How do you evaluate generative models in ways that correlate with what users actually care about?
- Which training techniques (FSDP configurations, precision strategies, parallelism approaches) matter for model quality versus just training speed?

These aren't solved problems; they're questions we're actively working out through rigorous experimentation.

Who Thrives Here

You've trained large-scale diffusion models and developed strong intuitions about what matters. You understand that at research scale every design choice has tradeoffs, and that the only way to know which ones are worth making is through careful ablation (a toy sketch of what we mean follows). You're as comfortable debugging distributed training failures at 3 a.m. as you are presenting research findings to the team.
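To make "careful ablation" concrete, here's a minimal, hypothetical sketch of the discipline we mean: one variable changes at a time, seeds are controlled, and variance is reported next to the mean. The tiny MLP and synthetic regression task are illustrative stand-ins, not our actual pipeline.

```python
# Toy single-variable ablation: everything fixed except the activation,
# each variant run across several seeds so noise is measured, not ignored.
import statistics
import torch
import torch.nn as nn

def run_trial(activation: type[nn.Module], seed: int) -> float:
    torch.manual_seed(seed)                      # fix init and data noise
    x = torch.randn(512, 16)
    y = x.sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)
    model = nn.Sequential(nn.Linear(16, 64), activation(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):                         # identical budget per variant
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()                           # real runs: held-out eval

for act in (nn.ReLU, nn.GELU, nn.SiLU):          # the one variable under test
    losses = [run_trial(act, seed) for seed in range(5)]
    print(f"{act.__name__}: {statistics.mean(losses):.4f} "
          f"± {statistics.stdev(losses):.4f} over 5 seeds")
```

The same shape scales up: a fixed compute budget per variant, multiple seeds, and an honest error bar before anyone draws a conclusion.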
You likely have:

- Hands-on experience training large-scale diffusion models on image and video data, the kind where you've hit every failure mode and learned what actually matters
- Experience fine-tuning diffusion models for specialized applications (upscalers, inpainting, outpainting, or other tasks) where understanding the domain matters as much as understanding the architecture
- A deep understanding of how to evaluate image and video generative models effectively, knowing which metrics correlate with quality and which are just convenient proxies
- Strong proficiency in PyTorch, transformer architectures, and the broader ecosystem of modern deep learning
- A solid understanding of distributed training techniques (FSDP, low-precision training, model parallelism), because our models don't fit on one GPU and training decisions shape research outcomes

We'd be especially excited if you:

- Have experience writing forward and backward Triton kernels and verifying their correctness with floating-point error in mind (see the sketch at the end of this posting)
- Bring proficiency with profiling, debugging, and optimizing single- and multi-GPU operations using tools like Nsight or stack trace viewers
- Understand the performance characteristics of different architectural choices at scale
- Have published research that changed how people think about generative models

What We're Building Toward

We're not just training models; we're figuring out what actually matters in generative AI through rigorous experimentation. Every ablation study reveals assumptions we didn't know we were making. Every architecture decision teaches us about the tradeoffs that matter. Every training run at scale surfaces insights that don't show up at smaller scales.

If that sounds more compelling than following established approaches, we should talk. We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
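A footnote for the kernel-curious: the Triton bullet above, made concrete. This is a hedged, minimal sketch (a vector add standing in for a real fused kernel, assuming a CUDA device is available), not our production harness; real forward and backward kernels get checked the same way, against a higher-precision reference with explicit tolerances.

```python
# Minimal correctness check for a Triton kernel against a PyTorch reference.
# The pattern, not the kernel, is the point: trusted reference, explicit
# floating-point tolerances, and a problem size that exercises the masking.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(4097, device="cuda", dtype=torch.float16)  # not a block multiple
y = torch.randn_like(x)
# Compute the reference in float64 so reference rounding error can't mask
# kernel bugs; tolerances reflect what float16 arithmetic may legitimately do.
ref = (x.double() + y.double()).half()
torch.testing.assert_close(add(x, y), ref, rtol=1e-3, atol=1e-3)
print("kernel matches reference within float16 tolerance")
```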