Evaluations Program Leader
Uber
About the Role
The Evaluations Program Leader will own the end-to-end strategy, design, and execution of human evaluations for one or more of Uber’s GenAI-powered products at a time. These include conversational AI, voice AI, agent workflows, and auto-evaluation systems. This role sits within the Global Digital Experience team, the operational arm of Uber’s customer support tech organization, and is a critical driver of quality, safety, and performance across Uber’s next-generation AI solutions.
This leader will build and scale Uber’s Manual Evaluation framework: defining methodologies, creating evaluation rubrics, ensuring annotation quality, and generating the insights that shape model tuning, product improvements, and release decisions. They will partner closely with Product, Engineering, Data Science, and Product Ops to understand product goals, build the Manual Evaluation pipeline needed to achieve them, and translate evaluation outcomes into clear technical and operational actions.
The role includes both strategic leadership and operational execution. This leader will be responsible for setting the quality bar for evaluations, ensuring consistent delivery at scale, and driving continuous improvement of the evaluation pipeline.
The ideal candidate brings strong technical literacy in GenAI systems, exceptional program design and operational skills, and the ability to lead high-impact cross-functional initiatives. They are comfortable navigating ambiguity, building strong partnerships across Uber, and influencing product direction through rigorous evaluation insights. This is a rare opportunity to play a leading role in one of Uber’s most transformative technology programs and help shape the future of Uber’s AI-driven experiences.
What the Candidate Will Do
- Own the end-to-end strategy, design, and execution of Manual Evaluations for one or more of Uber’s GenAI-powered products (chatbots, voice AI, automated workflows, and auto-evaluation systems)
- Develop and continuously improve evaluation methodologies, including rubrics, taxonomies, annotation guidelines, quality standards and success metrics
- Partner with Product, Engineering, Data Science, and Product Operations to understand product goals and ensure human evaluations directly inform model tuning, safety improvements, product design changes, and release decisions; partner with scaled operations teams to deliver on time, at short notice, and to a high quality standard
- Package insights into clear, actionable narratives and present them to cross-functional leaders, influencing product and operational strategy
- Lead evaluation projects across multiple AI products simultaneously, ensuring timelines, quality and delivery expectations are met
- Establish processes and tools that scale, including workflow optimization, evaluator training, QA systems, and feedback loops
- Oversee a global manual evaluations operation, including indirect leadership of evaluators at multiple business sites and ongoing assessment of internal vs external resources to deliver the best evaluation outcomes
- Serve as a subject-matter expert in human evaluation for GenAI, staying current with best practices in safety testing, multimodal evaluation, and human-in-the-loop systems
---- Basic Qualifications ----
- Bachelor's degree in Engineering.
- 5+ years of experience in program management, product operations, quality operations, research operations, or technical program leadership in an AI-related environment.
- First-hand experience with GenAI systems, LLM evaluation, model safety, failure pattern analysis, prompt evaluation, or AI product quality.
- Experience designing or running structured evaluation or quality frameworks, such as human labeling, annotation, audit workflows or manual review processes.
- Familiarity with evaluation methodologies (rubric design, taxonomies, annotation guidelines, reliability scoring, inter-rater agreement, etc.).
- Strong project management abilities, with experience running multiple complex programs simultaneously.
- Proven experience managing outsourced teams to execute high-quality manual evaluation processes.
---- Preferred Qualifications ----
- Demonstrated ability to work cross-functionally with Product, Engineering, Data Science, and Operations teams.
- Knowledge of automated evaluation systems, LLM-as-judge frameworks, or hybrid human+machine evaluation pipelines.
- Background in service design, conversational AI, voice UX, or agent workflows.
- Strong analytical and problem-solving skills, with experience turning ambiguous data into clear insights.
- Excellent written and verbal communication skills, capable of translating technical evaluation outputs into business-relevant insights.
- Experience in global operations, including scaling teams, training processes, and quality management across regions.
Uber's mission is to reimagine the way the world moves for the better. Here, bold ideas create real-world impact, challenges drive growth, and speed fuels progress. What moves us, moves the world - let’s move it forward, together.
Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role.
*Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to accommodations@uber.com.