Join a growing European SaaS company building AI-powered tools that help lawyers, consultants, and financial professionals process large volumes of complex documents in seconds.
Their flagship product uses LLMs, retrieval-augmented generation (RAG), and custom fine-tuning to extract key insights from contracts, filings, and internal documentation — streamlining research, risk analysis, and compliance work.
Why this company?
* Backed by top-tier investors and scaling fast across Europe and the US
* They’ve built their own fine-tuned LLM pipelines — no overreliance on off-the-shelf APIs
* Product is in production with hundreds of enterprise users
* Research and engineering work is deeply intertwined (not siloed)
What you’ll do:
* Fine-tune and deploy transformer models for multilingual legal NLP tasks
* Build and monitor real-time document classification, summarisation, and QA pipelines
* Work with MLOps engineers to scale model performance
* Collaborate with legal domain experts and product managers
What you’ll need:
* 3+ years in ML/data science with strong Python skills
* Experience working with Hugging Face, PyTorch, or similar
* Prior work on NLP projects, ideally including LLMs or RAG architectures
* Bonus: AWS/GCP experience, exposure to LangChain, Pinecone, or Weaviate
If you’re excited by real-world LLM deployment — not just prototypes — let’s talk.