Our Methodology

The Altiri AI GRC Framework —
From Assessment to Operationalized Governance

A structured methodology for organizations navigating AI adoption in regulated industries. Four phases, five maturity levels, and alignment with the frameworks regulators actually care about.

Four phases. One coherent program.

Most organizations treat AI governance as a compliance checkbox. Altiri's framework is designed to operationalize governance — embedding risk controls and oversight into how AI actually gets built and deployed.

01
Phase 1
AI Readiness Assessment
  • Current-state analysis across six governance domains
  • AI maturity scoring with domain-level radar chart
  • Complete AI system inventory and classification
  • Risk exposure mapping by system criticality
02
Phase 2
GRC Gap Analysis
  • Governance gaps mapped against NIST AI RMF & ISO 42001
  • Regulatory exposure assessment by jurisdiction
  • Compliance blind spots — HIPAA, CMMC, SOX, state AI laws
  • Prioritized remediation roadmap with ownership
03
Phase 3
Strategic AI Alignment
  • Custom 90-day governance roadmap
  • Framework selection and mapping (NIST / ISO / Gartner)
  • Policy infrastructure build — acceptable use, incident response
  • Board & executive alignment on AI risk posture
04
Phase 4
Operationalized Governance
  • Embedded vCAIO leadership & program ownership
  • Continuous monitoring & automated compliance checks
  • Quarterly governance reviews & control testing
  • Regulatory change management & board reporting cadence

Where does your organization stand?

Altiri's five-level maturity model gives organizations a common language for AI governance progress — and a clear picture of what "good" looks like at each stage.

1
Ad Hoc
No Formal AI Governance
AI adoption is reactive and uncoordinated. No policies, no risk framework, no inventory of AI systems. Compliance exposure is high and growing with each new deployment.
Governance Maturity
2
Foundational
Basic Policies & Awareness Training
AI acceptable use policies exist on paper. Leadership has AI literacy training. Risk assessments are informal or project-specific. No centralized oversight or model registry.
3
Structured
Framework-Aligned, Risk Registers Established
Organization has adopted a recognized framework (NIST AI RMF or ISO 42001). AI risk register exists with documented owners. Governance committee established. Most organizations that engage Altiri start here.
4
Managed
Continuous Monitoring, Automated Compliance Checks
Governance is operational, not ceremonial. Continuous monitoring of AI systems, automated drift detection, regular control testing, and integration with the organization's broader GRC tooling.
5
Optimized
AI Governance as Competitive Advantage
Governance enables faster, more confident AI adoption — not a bottleneck. Proactive regulatory engagement, measurable risk reduction tied to business outcomes, and AI governance as a differentiator in enterprise sales and procurement.
Not sure where you land? The free AI Readiness Assessment scores you across six governance domains and gives you a maturity level — in under 15 minutes.
Find out where you stand →

Built on the standards
regulators recognize

Altiri's methodology maps directly to the frameworks governing AI in regulated industries. Each phase of our engagement addresses specific requirements — so there's no translation work when you face an auditor.

NIST AI RMF
AI Risk Management Framework
The foundational U.S. standard for voluntary AI risk management. Our methodology maps directly to all four core functions — ensuring every governance deliverable has a clear NIST AI RMF home.
How our phases map
Govern
Phase 1–4: Governance structures, accountability, risk culture, and board-level oversight established across the engagement lifecycle
Map
Phase 1–2: AI system inventory, context documentation, risk categorization, and stakeholder impact mapping
Measure
Phase 2–3: Quantified risk analysis, bias evaluation, explainability gap assessment, and performance metrics
Manage
Phase 3–4: Control implementation, monitoring cadence, incident response integration, and continuous improvement
ISO 42001
AI Management System Standard
The first international standard for AI management systems. Designed for organizations that want certification-ready governance — or need to demonstrate AI management rigor to enterprise customers.
How our phases map
Context
Phase 1: Organizational context analysis, interested parties, AI system scope definition, and AI policy intent
Planning
Phase 2–3: Risk and opportunity assessment, AI objectives, operational planning, and control selection
Support
Phase 3: Competence frameworks, awareness programs, communication plans, and documentation controls
Improvement
Phase 4: Nonconformity management, corrective action, and management review for continuous improvement
Gartner AI TRiSM
AI Trust, Risk & Security Management
Gartner's AI TRiSM framework addresses the operational trust and security dimensions of deployed AI — particularly relevant for organizations with production AI systems facing adversarial or reliability risks.
How our phases map
Explainability
Phase 2: Model documentation, explainability requirements by risk tier, and stakeholder communication standards
ModelOps
Phase 3–4: Model lifecycle governance, performance monitoring, drift detection, and retraining controls
Data Anomaly
Phase 2: Data lineage documentation, training data governance, and anomaly detection requirements
Adversarial
Phase 3–4: Adversarial risk controls, prompt injection policies, and model security assessment procedures
NIST CSF 2.0
Cybersecurity Framework 2.0
NIST CSF 2.0 added Govern as a core function, formalizing organizational accountability for cybersecurity programs — accountability that AI systems demand in equal measure. Our methodology provides the crosswalk between CSF and AI-specific controls.
How our phases map
Govern
Phase 1–4: AI governance policy, roles, risk management strategy, and supply chain risk management for AI
Identify
Phase 1: AI asset management, business environment analysis, and risk assessment for AI-enabled systems
Protect / Detect
Phase 3–4: AI access controls, anomaly detection, continuous monitoring, and security event identification
Respond / Recover
Phase 3–4: AI incident response playbooks, communications, and recovery planning for AI system failures

Sector requirements don't fit
generic frameworks

Regulated industries carry AI compliance obligations that go beyond horizontal frameworks. Our methodology incorporates sector-specific requirements from day one.

🏥
Healthcare
HIPAA · Clinical AI · FDA SaMD · PHI Pipelines
HIPAA AI
PHI exposure in LLM-based clinical tools — BAA requirements, data minimization, and de-identification controls for AI training pipelines
Clinical AI
Diagnostic and treatment recommendation model validation — FDA Software as a Medical Device alignment and clinical performance monitoring
Patient Safety
Bias auditing for clinical decision support, population health algorithms, and AI-assisted triage — with documented safety thresholds
🏦
Financial Services
SR 11-7 · FINRA · SOX · Algorithmic Fairness
SR 11-7
Extending Model Risk Management to generative AI and LLM-based applications — validation standards, model inventories, and ongoing performance tracking
FINRA
Algorithmic trading governance, explainability requirements, and AI supervision obligations under current and emerging FINRA guidance
SOX / AI
AI controls for financial reporting automation — audit trail requirements, change management documentation, and compensating controls for AI-assisted decisions
🛡️
Defense & Government
CMMC · FedRAMP · DoD AI Ethics · RAI
CMMC
CMMC Level 2/3 AI controls for defense contractors — CUI protection in AI systems, access controls, and configuration management for ML infrastructure
FedRAMP+AI
AI-enabled cloud service authorization — continuous monitoring controls, AI component security documentation, and FedRAMP boundary definition for ML systems
DoD RAI
DoD Responsible AI principles and AI Ethics Framework implementation — traceability, reliability, governability, and bias controls for government contractor AI

Start with a free
AI Readiness Assessment

Understand your current governance maturity, identify your highest-risk gaps, and get a prioritized action plan — no sales call required.