Director, Data and AI Governance
Job Description
Role Overview
As the Director of Data & AI Governance, you will establish and lead enterprise-wide data management programs that ensure safe, compliant, high-quality data and AI. You will oversee Data Governance, AI Governance, and Data & AI Stewardship, serving as the central authority on policies, forums, and controls across the R&D, Lab Operations, Commercial, and SG&A domains. You will lead the Data & AI Governance Council through advocacy and a well-thought-out data management strategy.
This role goes beyond policy into technical governance: it requires hands-on experience building frameworks, deploying controls in code, and integrating governance into engineering delivery. The role spans all dimensions of governance, including quality, privacy, security, agentic automation, AI risk management, bias/fairness testing, evals, and vendor AI evaluation.
Key Responsibilities
Governance Council & Operating Model
Define enterprise data management strategy and operating model and ensure that it is executed consistently across the enterprise.
Chair and operationalize the Data & AI Governance Council, driving decision-making and accountability across legal, regulatory, compliance, IT, security, engineering, and product.
Lead a federated stewardship model, ensuring business units own data while governance enforces consistency and compliance.
Establish governance forums (steering committees, working groups, architecture boards) with clear outcomes.
Data Governance & Quality
Build and drive adoption of 360° master/reference datasets (e.g., Case360, Patient 360, Provider 360, Billing 360) and ensure they are maintained as sources of truth for analytics and AI.
Partner with engineering teams to build interoperable standards that connect domain datasets into longitudinal data products.
Define and enforce enterprise data governance policies, ensuring consistency in data definitions, lineage, and stewardship across all domains.
Build and manage enterprise data catalogs and metadata services to make data discoverable, trustworthy, and reusable across the organization.
Establish and operate data quality frameworks with validation rules, anomaly detection, and automated testing to ensure accuracy, completeness, and timeliness.
Embed data quality checks and lineage tracking directly into data and AI pipelines so that governance guardrails can be adopted without friction.
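The responsibilities above call for quality checks that run inside the pipeline itself rather than as a separate review step. As a purely illustrative sketch (all names and rules here are hypothetical, not a mandated implementation), an inline validation rule might look like:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a minimal data-quality rule that a pipeline step
# can apply inline, so governance checks travel with the data flow.
@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True when the record passes

def validate_records(records: list[dict], rules: list[QualityRule]) -> dict:
    """Apply every rule to every record; report pass/fail counts per rule."""
    report = {rule.name: {"passed": 0, "failed": 0} for rule in rules}
    for record in records:
        for rule in rules:
            key = "passed" if rule.check(record) else "failed"
            report[rule.name][key] += 1
    return report

# Example rules (invented for illustration only).
rules = [
    QualityRule("patient_id_present", lambda r: bool(r.get("patient_id"))),
    QualityRule("age_in_range", lambda r: 0 <= r.get("age", -1) <= 120),
]
report = validate_records(
    [{"patient_id": "P1", "age": 42}, {"patient_id": "", "age": 200}],
    rules,
)
```

A report like this can feed anomaly detection or block a pipeline stage, which is one way the "guardrails without friction" goal could be realized.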
AI Policy Engineering & Implementation
Develop an AI use case risk management framework (RMF) to evaluate AI use cases from governance, regulatory, medical, privacy, security, and risk standpoints.
Build and maintain an AI risk register and incident response plan for all AI use cases.
Develop governance policies (privacy, security, quality, fairness, integrity) aligned to HIPAA, CLIA, FDA, GDPR, and emerging AI regulations.
Translate policies into technical implementations by embedding controls into:
ETL pipelines, feature stores, and model registries
CI/CD workflows for ML/GenAI models
Prompt orchestration and output logging for LLMs
Bias/fairness testing, drift detection, explainability dashboards
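To make "prompt orchestration and output logging" concrete, here is a hedged sketch of what an audit-logging wrapper around an LLM call could look like. Every name here (`logged_completion`, `AUDIT_LOG`, the stub model) is hypothetical; a real deployment would use the organization's actual LLM client and log store.

```python
import hashlib
import time

# Hypothetical sketch: wrap an LLM call so every prompt/output pair is
# captured with a timestamp and content hash for later audit and review.
AUDIT_LOG: list[dict] = []

def hashed(text: str) -> str:
    """Short content hash, useful for deduplication and tamper checks."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

def logged_completion(prompt: str, model_call) -> str:
    """Invoke `model_call` (any callable returning text) with audit logging."""
    output = model_call(prompt)
    AUDIT_LOG.append({
        "ts": time.time(),
        "prompt_hash": hashed(prompt),
        "output_hash": hashed(output),
        "prompt": prompt,
        "output": output,
    })
    return output

# Stub model for illustration; in practice this would be a real LLM client.
result = logged_completion("Summarize the policy.", lambda p: f"ECHO: {p}")
```

The same wrapper pattern is a natural hook for bias testing and drift detection, since every prompt/output pair becomes available for offline evaluation.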
AI Risk & Automation
Build and execute agentic automation processes and associated guardrails to enable business process automation.
Build documentation and processes to ensure agent accountability through change history, audit logs, and versioning.
Track external regulatory trends and industry standards (e.g., NIST AI RMF, EU AI Act, FDA AI/ML guidance).
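Agent accountability through change history, audit logs, and versioning can be sketched as an append-only trail of agent actions. This is an illustrative assumption of one possible shape, not a prescribed design; the class and field names are invented.

```python
from datetime import datetime, timezone

# Hypothetical sketch: an append-only audit trail recording each agent
# action with an actor, a version, and a timestamp, to support review.
class AgentAuditTrail:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, agent: str, action: str, version: str) -> dict:
        entry = {
            "seq": len(self._entries) + 1,  # monotonically increasing sequence
            "agent": agent,
            "action": action,
            "version": version,            # which agent version acted
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        self._entries.append(entry)
        return entry

    def history(self, agent: str) -> list[dict]:
        """Change history for one agent, in order of occurrence."""
        return [e for e in self._entries if e["agent"] == agent]

trail = AgentAuditTrail()
trail.record("invoice-bot", "approved_invoice_batch", "v1.2.0")
trail.record("invoice-bot", "flagged_anomaly", "v1.2.0")
history = trail.history("invoice-bot")
```

Because entries are only appended, the trail doubles as a change history: version bumps and incident-relevant actions remain reconstructible after the fact.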
AI Change Management & Vendor Governance
Lead AI change management initiatives, including training programs, awareness campaigns, and a network of governance champions to drive adoption of best practices.
Partner with Corporate Communications to cascade governance updates, AI guardrails, and usage guidelines across all levels of the organization.
Develop and enforce vendor and third-party AI evaluation frameworks, assessing external AI tools for governance, data security, model risk, and compliance posture before integration.
Track and manage vendor AI risks through standardized assessments, approvals, and monitoring processes.
Qualifications
Required:
15+ years in data or AI governance, with at least 7 in leadership roles.
Excellent stakeholder management and executive communication skills.
Proven track record of building and leading enterprise-wide governance programs, councils, and stewardship networks that span both data governance (catalogs, lineage, quality) and AI/ML governance (model risk, bias/fairness, explainability, monitoring).
Deep understanding of healthcare regulatory frameworks impacting AI: HIPAA, CLIA, FDA, and GDPR, plus emerging AI regulations and frameworks (EU AI Act, NIST AI RMF, Japan AI Promotion Act).
Hands-on experience implementing LLM/GenAI governance controls aligned to common regulatory frameworks: prompt logging, bias testing, guardrails, and explainability.
Experience in vendor AI/ML evaluation and governance, including due diligence of third-party AI/ML tools and platforms.
Preferred:
Certifications in data governance, privacy, or risk management (DAMA CDMP, CIPP, CRISC).
Advanced degree (MS/PhD) in Computer Science, AI/ML, engineering or related field.
Experience driving AI change management and building a culture of responsible AI adoption (training, awareness, champion networks).
Proven experience embedding governance in code (pipelines, registries, CI/CD).