Our Methodology

The Altiri AI GRC Framework —
From Assessment to Operationalized Governance

A structured methodology for organizations navigating AI adoption in regulated industries. Four phases, five maturity levels, and alignment to every framework regulators actually care about — including NIST CSF 2.0, which bridges cybersecurity controls with GRC program requirements.

87%
of organizations claim AI governance — only 25% have operationalized it
more likely to face regulatory action without a documented AI risk program
62%
of enterprise AI projects stall at governance — before they reach production

The gap is methodology,
not intent

Most organizations want to govern AI responsibly. They adopt NIST AI RMF. They train on ISO 42001. They download Gartner reports. Then nothing changes.

"The organizations that succeed don't have better frameworks — they have a structured process for operationalizing them. That's the gap we solve."
— Altiri Framework Methodology v2.1
📋
For CISOs & Risk Officers
A defensible, auditable AI risk program aligned to the frameworks regulators and auditors already recognize — without rebuilding your GRC infrastructure from scratch.
🎯
For CDOs & AI Leaders
Governance that accelerates AI deployment rather than blocking it. A structured program that earns organizational trust — and keeps projects moving through approval faster.
⚖️
For Compliance Officers
Regulatory evidence that maps directly to your audit requirements — HIPAA, FINRA, CMMC, and state AI laws — without custom documentation for every framework.

Four phases. One coherent program.

Most organizations treat AI governance as a compliance checkbox. Altiri's framework is designed to operationalize governance — embedding risk controls and oversight into how AI actually gets built and deployed.

01
Phase 1
AI Readiness Assessment
  • Current-state analysis across six governance domains
  • AI maturity scoring with domain-level radar chart
  • Complete AI system inventory and classification
  • Risk exposure mapping by system criticality
02
Phase 2
GRC Gap Analysis
  • Governance gaps mapped against NIST AI RMF & ISO 42001
  • Regulatory exposure assessment by jurisdiction
  • Compliance blind spots — HIPAA, CMMC, SOX, state AI laws
  • Prioritized remediation roadmap with ownership
03
Phase 3
Strategic AI Alignment
  • Custom 90-day governance roadmap
  • Framework selection and mapping (NIST / ISO / Gartner)
  • Policy infrastructure build — acceptable use, incident response
  • Board & executive alignment on AI risk posture
04
Phase 4
Operationalized Governance
  • Embedded vCAIO leadership & program ownership
  • Continuous monitoring & automated compliance checks
  • Quarterly governance reviews & control testing
  • Regulatory change management & board reporting cadence
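The Phase 1 deliverables above — an AI system inventory with risk classification — can be pictured as a simple record plus a triage rule. A minimal sketch, assuming hypothetical field names and tiering logic (not a prescribed Altiri or NIST schema):

```python
from dataclasses import dataclass

# Illustrative inventory record; field names are assumptions, not a
# standardized schema.
@dataclass
class AISystem:
    name: str
    owner: str
    handles_regulated_data: bool   # e.g. PHI, CUI, financial records
    customer_facing: bool
    autonomy: str                  # "assistive" | "advisory" | "autonomous"

def risk_tier(system: AISystem) -> str:
    """Toy criticality classification: regulated data or autonomous
    decision-making pushes a system into the high tier."""
    if system.handles_regulated_data or system.autonomy == "autonomous":
        return "high"
    if system.customer_facing or system.autonomy == "advisory":
        return "medium"
    return "low"

triage_bot = AISystem("clinical-triage-assistant", "CMO office",
                      handles_regulated_data=True, customer_facing=True,
                      autonomy="advisory")
print(risk_tier(triage_bot))  # high — regulated-data exposure dominates
```

Even a toy rule like this makes the point that classification should be mechanical and repeatable, so every new deployment lands in the inventory with a defensible tier.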

Where framework phases
become delivered services

Our four-phase methodology is executed through three practice areas. Each pillar has defined activities, deliverables, and regulatory framework references — so you know exactly what you're buying and what evidence it produces.

Pillar 01
🛡️
AI Governance & Risk Management
Build the governance infrastructure that regulators expect to see — policies, controls, risk registers, and oversight structures grounded in recognized standards.
Key Activities & Deliverables
  • AI system inventory and risk classification
  • AI acceptable use policy + incident response playbooks
  • Board-level risk register with documented owners
  • Governance committee charter and operating model
  • Quarterly control testing and board reporting
NIST AI RMF · ISO 42001 · Gartner AI TRiSM
Pillar 02
⚖️
Compliance Framework Alignment
Map your AI program to the specific regulatory frameworks your auditors care about — producing evidence-ready documentation without redundant work across frameworks.
Key Activities & Deliverables
  • Multi-framework gap analysis with remediation roadmap
  • NIST AI RMF core function documentation
  • ISO 42001 AIMS readiness assessment
  • Sector-specific compliance crosswalk (HIPAA / FINRA / CMMC)
  • Audit-ready evidence package
NIST CSF 2.0 · ISO 27001 · HIPAA · CMMC
Pillar 03
🧠
vCAIO Strategic Leadership
Embedded fractional AI governance leadership that owns the program — not a consultant who delivers a report and disappears. Ongoing accountability for keeping AI governance operational.
Key Activities & Deliverables
  • Ongoing AI governance program ownership
  • Regulatory change management & horizon scanning
  • Executive and board communication on AI risk
  • Vendor AI risk assessment and procurement guidance
  • Monthly governance reviews + incident support
NIST AI RMF · EU AI Act · State AI Laws

Where does your organization stand?

Altiri's five-level maturity model gives organizations a common language for AI governance progress — and a clear picture of what "good" looks like at each stage.

1
Ad Hoc
No Formal AI Governance
AI adoption is reactive and uncoordinated. No policies, no risk framework, no inventory of AI systems. Compliance exposure is high and growing with each new deployment.
2
Foundational
Basic Policies & Awareness Training
AI acceptable use policies exist on paper. Leadership has AI literacy training. Risk assessments are informal or project-specific. No centralized oversight or model registry.
3
Structured
Framework-Aligned, Risk Registers Established
Organization has adopted a recognized framework (NIST AI RMF or ISO 42001). AI risk register exists with documented owners. Governance committee established. Most organizations that engage Altiri start here.
4
Managed
Continuous Monitoring, Automated Compliance Checks
Governance is operational, not ceremonial. Continuous monitoring of AI systems, automated drift detection, regular control testing, and integration with the organization's broader GRC tooling.
5
Optimized
AI Governance as Competitive Advantage
Governance enables faster, more confident AI adoption — not a bottleneck. Proactive regulatory engagement, measurable risk reduction tied to business outcomes, and AI governance as a differentiator in enterprise sales and procurement.
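Level 4's "automated compliance checks" can start as small as a scripted control test that flags AI systems missing required governance artifacts. A minimal sketch — the artifact names are hypothetical, not a prescribed control set:

```python
# Required evidence per AI system; names are illustrative assumptions.
REQUIRED_ARTIFACTS = {"risk_assessment", "model_documentation", "monitoring_plan"}

def control_gaps(inventory: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per system, the required artifacts that are missing."""
    return {
        system: missing
        for system, artifacts in inventory.items()
        if (missing := REQUIRED_ARTIFACTS - artifacts)  # empty set -> compliant
    }

inventory = {
    "fraud-scoring-model": {"risk_assessment", "model_documentation",
                            "monitoring_plan"},
    "support-chatbot": {"risk_assessment"},
}
print(control_gaps(inventory))
# only support-chatbot is flagged, missing documentation and a monitoring plan
```

Running a check like this on a schedule, and feeding the output into quarterly control testing, is the difference between governance that is operational and governance that is ceremonial.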
Not sure where you land? The free AI Readiness Assessment scores you across six governance domains and gives you a maturity level — in under 15 minutes.
Find out where you stand →

The free assessment is your
framework entry point

The AI Readiness Self-Assessment isn't just a score — it's your diagnostic tool for locating your organization on the maturity model and routing you to the right framework phase. Take it first, engage later with a clear roadmap.

Free AI Readiness Assessment
10 Questions. Six Governance Domains.
The assessment covers every dimension regulators examine — scoring your current state and identifying the highest-priority gaps before you spend a dollar on consulting.
AI Strategy & Leadership Accountability
Risk Management & Oversight Structures
Data Governance & Privacy Controls
Compliance Readiness & Regulatory Mapping
AI Security & Technical Controls
Transparency & Explainability Standards
Take the Free Assessment →
Your Score → Your Starting Point
Maturity Level Routes Framework Entry
Your score maps directly to a maturity level and a recommended framework entry phase — so you know exactly where to start and what to prioritize.
Level 1
Ad Hoc
Full assessment → emergency policy infrastructure + AI system inventory
Phase 1–2
Level 2
Foundational
Gap analysis against NIST AI RMF → structured governance roadmap
Phase 2–3
Level 3
Structured
Most Altiri clients start here → operationalize existing framework adoption
Phase 3–4
Level 4
Managed
vCAIO leadership → continuous improvement + board program optimization
Phase 4
Level 5
Optimized
Advisory only → competitive differentiation + regulatory engagement
Advisory
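The score-to-phase routing above can be sketched as a simple bucketing function. The scoring rubric here is purely illustrative (answers 0–3 per question, cut points invented for the example) — it is not Altiri's actual assessment scoring:

```python
# Maturity level -> recommended framework entry, per the routing table.
ROUTES = {1: "Phase 1-2", 2: "Phase 2-3", 3: "Phase 3-4",
          4: "Phase 4", 5: "Advisory"}

def maturity_level(answers: list[int]) -> int:
    """Bucket a 10-question assessment (each answer scored 0-3) into
    one of five maturity levels. Thresholds are illustrative."""
    total = sum(answers)
    thresholds = [6, 12, 18, 24]
    return 1 + sum(total > t for t in thresholds)

answers = [2, 2, 1, 2, 1, 2, 2, 1, 2, 1]   # total = 16
level = maturity_level(answers)
print(level, ROUTES[level])  # 3 Phase 3-4
```

The point is not the arithmetic but the determinism: the same answers always route to the same entry phase, so the diagnostic is reproducible.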

Built on the standards
regulators recognize

Altiri's methodology maps directly to the frameworks governing AI in regulated industries. Each phase of our engagement addresses specific requirements — so there's no translation work when you face an auditor.

NIST AI RMF
AI Risk Management Framework
The foundational U.S. standard for voluntary AI risk management. Our methodology maps directly to all four core functions — ensuring every governance deliverable has a clear NIST AI RMF home.
How our phases map
Govern
Phase 1–4: Governance structures, accountability, risk culture, and board-level oversight established across the engagement lifecycle
Map
Phase 1–2: AI system inventory, context documentation, risk categorization, and stakeholder impact mapping
Measure
Phase 2–3: Quantified risk analysis, bias evaluation, explainability gap assessment, and performance metrics
Manage
Phase 3–4: Control implementation, monitoring cadence, incident response integration, and continuous improvement
ISO 42001
AI Management System Standard
The first international standard for AI management systems. Designed for organizations that want certification-ready governance — or need to demonstrate AI management rigor to enterprise customers.
How our phases map
Context
Phase 1: Organizational context analysis, interested parties, AI system scope definition, and AI policy intent
Planning
Phase 2–3: Risk and opportunity assessment, AI objectives, operational planning, and control selection
Support
Phase 3: Competence frameworks, awareness programs, communication plans, and documentation controls
Improvement
Phase 4: Nonconformity management, corrective action, and management review for continuous improvement
Gartner AI TRiSM
AI Trust, Risk & Security Management
Gartner's AI TRiSM framework addresses the operational trust and security dimensions of deployed AI — particularly relevant for organizations with production AI systems facing adversarial or reliability risks.
How our phases map
Explainability
Phase 2: Model documentation, explainability requirements by risk tier, and stakeholder communication standards
ModelOps
Phase 3–4: Model lifecycle governance, performance monitoring, drift detection, and retraining controls
Data Anomaly
Phase 2: Data lineage documentation, training data governance, and anomaly detection requirements
Adversarial
Phase 3–4: Adversarial risk controls, prompt injection policies, and model security assessment procedures
NIST CSF 2.0
Cybersecurity Framework 2.0
NIST CSF 2.0 added Govern as a new core function — acknowledging that AI systems require the same organizational accountability as cybersecurity programs. Our methodology provides the crosswalk between CSF and AI-specific controls. GRC is the management layer that makes cybersecurity investments defensible. Read: GRC & Cybersecurity — NIST CSF for Regulated Industries →
How our phases map
Govern
Phase 1–4: AI governance policy, roles, risk management strategy, and supply chain risk management for AI
Identify
Phase 1: AI asset management, business environment analysis, and risk assessment for AI-enabled systems
Protect / Detect
Phase 3–4: AI access controls, anomaly detection, continuous monitoring, and security event identification
Respond / Recover
Phase 3–4: AI incident response playbooks, communications, and recovery planning for AI system failures

Framework component → regulation mapping

Every deliverable in our engagement traces to specific requirements across multiple regulatory frameworks. No orphaned artifacts. No re-documentation for auditors.

Framework Component → NIST AI RMF · ISO 42001 · Healthcare · Financial · Defense

AI System Inventory (Phase 1 deliverable)
  NIST AI RMF: MAP 1.1, MAP 1.5 · ISO 42001: Clause 4.3, Clause 8.4 · Healthcare: HIPAA §164.308 · Financial: SR 11-7 §3.1 · Defense: CMMC L2 CM.L2
AI Risk Register (Phase 1–2 deliverable)
  NIST AI RMF: GOVERN 1.2, MAP 2.2 · ISO 42001: Clause 6.1, Annex A.6.1 · Healthcare: FDA SaMD Risk · Financial: FINRA 17a-4 · Defense: DoD RAI §4.2
AI Governance Policy (Phase 2–3 deliverable)
  NIST AI RMF: GOVERN 1.1, GOVERN 2.2 · ISO 42001: Clause 5.2, Clause 7.5 · Healthcare: HIPAA §164.316 · Financial: SOX §302/906 · Defense: CMMC L2 PL.L2
Bias & Fairness Assessment (Phase 2 deliverable)
  NIST AI RMF: MEASURE 2.5, MEASURE 2.6 · ISO 42001: Annex A.6.2, Annex A.10.3 · Healthcare: OCR AI Guidance · Financial: ECOA Fairness · Defense: DoD RAI §3.c
Model Documentation (Phase 2–3 deliverable)
  NIST AI RMF: MAP 5.1, MEASURE 1.1 · ISO 42001: Clause 7.5, Annex A.8.4 · Healthcare: FDA 21 CFR §820 · Financial: SR 11-7 §4 · Defense: FedRAMP SSP
Continuous Monitoring Program (Phase 4 deliverable)
  NIST AI RMF: MANAGE 4.1, MANAGE 4.2 · ISO 42001: Clause 9.1, Clause 10.2 · Healthcare: HIPAA §164.308(a)(8) · Financial: SR 11-7 Ongoing · Defense: FedRAMP ConMon
Incident Response Plan (Phase 3 deliverable)
  NIST AI RMF: MANAGE 3.2, MANAGE 4.3 · ISO 42001: Annex A.9.5 · Healthcare: HIPAA Breach 45 CFR §164.400 · Financial: FINRA Rule 4370 · Defense: CMMC IR.L2-3.6

Sector requirements don't fit
generic frameworks

Regulated industries carry AI compliance obligations that go beyond horizontal frameworks. Our methodology incorporates sector-specific requirements from day one.

🏥
Healthcare
HIPAA · Clinical AI · FDA SaMD · PHI Pipelines
HIPAA AI
PHI exposure in LLM-based clinical tools — BAA requirements, data minimization, and de-identification controls for AI training pipelines
Clinical AI
Diagnostic and treatment recommendation model validation — FDA Software as a Medical Device alignment and clinical performance monitoring
Patient Safety
Bias auditing for clinical decision support, population health algorithms, and AI-assisted triage — with documented safety thresholds
🏦
Financial Services
SR 11-7 · FINRA · SOX · Algorithmic Fairness
SR 11-7
Extending Model Risk Management to generative AI and LLM-based applications — validation standards, model inventories, and ongoing performance tracking
FINRA
Algorithmic trading governance, explainability requirements, and AI supervision obligations under current and emerging FINRA guidance
SOX / AI
AI controls for financial reporting automation — audit trail requirements, change management documentation, and compensating controls for AI-assisted decisions
🛡️
Defense & Government
CMMC · FedRAMP · DoD AI Ethics · RAI
CMMC
CMMC Level 2/3 AI controls for defense contractors — CUI protection in AI systems, access controls, and configuration management for ML infrastructure
FedRAMP+AI
AI-enabled cloud service authorization — continuous monitoring controls, AI component security documentation, and FedRAMP boundary definition for ML systems
DoD RAI
DoD Responsible AI principles and AI Ethics Framework implementation — traceability, reliability, governability, and bias controls for government contractor AI

Start with a free
AI Readiness Assessment

Understand your current governance maturity, identify your highest-risk gaps, and get a prioritized action plan — no sales call required.