Sherpany is the Swiss market leader for meeting management software. Since its founding in 2011, it has pursued the goal of creating a world in which every meeting counts. Over 400 European companies and 12,000 decision-makers already use Sherpany to make their business-relevant meetings more productive and thereby increase corporate success. With headquarters in Zurich and 130 Sherpanees of 27 different nationalities, we are an international company with a flat hierarchy, in which you can take on a lot of responsibility and your ideas are always welcome.
Your mission as a Machine Learning Engineer is to:
Develop, deploy, and optimize self-hosted ML pipelines, focusing on LLM and NLP model performance and reliability
Collaborate closely with backend, security and DevOps teams to align LLM model architecture with infrastructure and security protocols
Design workflows for prompt templating, orchestration and optimization of LLMs to power Sherpany’s product features
Manage model performance monitoring, logging and fine-tuning for continuous improvements
Develop backend solutions that securely interface with ML models, ensuring efficient API performance and scaling
Play a pivotal role in translating data science insights into actionable products
Stay current with AI/ML industry developments and apply best practices to Sherpany’s platform
What we will love about you:
Strong background in ML and NLP with proven experience, preferably with self-hosted LLMs (e.g. LLaMA)
Proficiency in backend engineering, especially in Python and building scalable API integrations
Skilled in Docker and Kubernetes (or similar) for model deployment within on-prem infrastructure
Experience with data pipelines, logging and monitoring for large-scale ML models
Solution-oriented team player with a proactive, autonomous work style and strong verbal and written communication skills in English
How you can imagine us:
You are part of an international company with a flat hierarchy, in which you can take on a lot of responsibility and your ideas are always welcome
To maintain your work-life balance, we offer flexible working hours and remote working
Your personal and professional development is important to us, which is why we offer financial support for further education, training and more
Last but not least: our corporate culture means a lot to us, which is why we organize regular team events and cultivate value-driven collaboration
Recruiting process:
Interview with our Talent Acquisition Specialist
Show us your skills in a Challenge
Meet our Squad Lead and the Product Manager
Technical Interview with our Experts
Meet the Team members & our VP Engineering
Job offer
Milestones
1-3 months
Understand Sherpany as a product and technology, the Product Team organization, and how we work within the lab squad and backend/data science chapters
Gain familiarity with Sherpany’s on-prem infrastructure, technology stack, and self-hosted LLM systems
Schedule coffee chats with every squad member and your Sherpybuddy, and start building connections with the backend members
3-6 months
Get feedback on how you are doing as an ML engineer within the squad/chapter
Actively contribute to the squad’s AI-feature projects and initiatives (ML model deployment and backend integration)
Gain solid knowledge of our technology stack, tooling used in Sherpany, AI architecture and infrastructure
Have your first interactions with the colleagues across Sherpany you will work with on a regular basis, including Tech Leads, Jedis (our backend chapter), and our Security and DevOps teams
6+ months
Take ownership of LLM optimization initiatives and propose improvements for efficiency and scalability
Contribute to the roadmap for on-prem LLM implementations and model tuning
Are you ready for the challenge? We look forward to receiving your application!
Tagged as: 3-5 Years, 5+ Years, docker, Kubernetes, machine learning, Python