Remote

Principal Applied Scientist - Responsible AI

MultiPlan
United States
Mar 30, 2026

At Claritev, we pride ourselves on being a dynamic team of innovative professionals. Our purpose is simple: we strive to bend the cost curve in healthcare for all. Our dedication to service excellence extends to all our stakeholders, internal and external, driving us to consistently exceed expectations. We are intentionally bold, we foster innovation, we nurture accountability, we champion diversity, and we empower each other to illuminate our collective potential.

Be part of our amazing transformational journey as we seize the opportunity to become a leading technology, data, and innovation voice in healthcare. Onward and upward!

JOB SUMMARY:

The Principal Applied Scientist - Responsible AI is responsible for the design, development, and technical evaluation of AI/ML systems, with a focus on production readiness, model reliability, and technical risk validation. This role serves as the technical authority within the Responsible AI program, ensuring that all AI systems meet defined standards for performance, observability, and risk mitigation prior to production deployment.

Your work directly impacts audit readiness, risk mitigation, and the trustworthiness of AI systems used across the business. This role does not own governance frameworks, policy, or enterprise AI architecture, but ensures that AI systems are technically compliant, measurable, and defensible within those frameworks.

JOB ROLES AND RESPONSIBILITIES:

1. Model Evaluation & Technical Risk Assessment

  • Develop and implement evaluation frameworks for model performance, reliability, and robustness
  • Assess risks including bias, hallucination, data leakage, and system failure modes
  • Provide technical validation for AI systems prior to governance approval

2. Production Readiness & Deployment Standards

  • Define and validate technical criteria for "production-ready AI"
  • Ensure systems meet required thresholds for deployment (performance, monitoring, controls)
  • Support go/no-go decisions with objective, evidence-based analysis

3. Observability & Monitoring

  • Design and implement model monitoring and evaluation pipelines
  • Define metrics for performance tracking, drift detection, and system health
  • Ensure systems produce traceable, audit-ready outputs

4. Cross-Functional Technical Leadership

  • Partner across Core AI, Engineering, Product, Legal, and Compliance teams
  • Serve as the technical reviewer in governance and production approval workflows
  • Provide guidance to engineering teams on aligning with governance requirements

5. Innovation & Applied Research

  • Research and evaluate emerging AI technologies, including generative AI and agentic systems
  • Prototype and test new approaches aligned with production and governance constraints
  • Contribute to continuous improvement of AI capabilities and standards

6. Mentorship & Technical Standards

  • Mentor applied scientists and data scientists
  • Promote best practices in experimentation, validation, and reproducibility
  • Elevate technical rigor across AI development teams

7. Governance Alignment (Non-Ownership)

  • Ensure AI systems comply with defined Responsible AI and governance standards
  • Partner with the AI Governance Architect to operationalize technical requirements
  • Support audit readiness through well-documented technical evidence

8. Technical Planning, Prioritization & Pipeline Management

  • Manage a portfolio of AI initiatives aligned to Responsible AI and business objectives
  • Provide transparency into pipeline status, technical dependencies, and risks
  • Balance research, development, and production support across multiple concurrent efforts
  • Ensure disciplined execution aligned with governance and production milestones
