On April 7, 2026, NIST released a concept note for an AI RMF Profile on Trustworthy AI in Critical Infrastructure. I’ve read the note. I’ve read the coverage. I’ve read the predecessor guidance from CISA and DHS. And I keep coming back to the same conclusion.

This isn’t a new framework. It’s the opening page of an AI controls catalogue for people who run the pipes, the grid, and the treatment plants.
If you’re a critical infrastructure operator, you should care. If you’re an SRE or infrastructure engineer, you already have most of the muscle memory to act on it. The hard part is translation.
What NIST actually said about AI in critical infrastructure

The concept note is short. It’s also deliberately humble.
NIST is not dropping a prescriptive rulebook. They’re launching a Profile — a contextualised application of the existing AI Risk Management Framework for a specific sector. The AI RMF has been around since January 2023. It defines seven characteristics of trustworthy AI:
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
The Profile will guide critical infrastructure operators in adopting specific risk management practices when deploying AI-enabled capabilities across IT, Operational Technology, and Industrial Control Systems. NIST is opening a Community of Interest. Feedback is welcome. The project started in April 2026 and is ongoing.
That’s it. No mandates. No deadlines. No enforcement mechanism.
If you’ve been waiting for the regulator to tell you what to do, this isn’t that document. It’s something more useful.
Why this matters more than it reads
Here’s the thing. Critical infrastructure has been integrating AI quietly for years. Predictive maintenance on turbines. Anomaly detection on SCADA traffic. Demand forecasting in grid operations. Route optimisation in rail. None of this was new in 2026.

What’s new is that the context has shifted.
Ransomware attacks on industrial control systems rose 355% between 2020 and 2025, going from roughly 1,400 incidents to nearly 6,500.
CISA published over 450 ICS advisories in 2025 alone. Critical manufacturing took 45.8% of the vulnerability share. Energy systems took 21.3%. And that’s just the cyber angle.
The policy gap is wider. The Pacific AI 2025 Governance Survey found 75% of organisations have an AI usage policy, but only 36% have adopted a formal governance framework. McKinsey’s 2026 AI Trust Maturity Survey showed that only about 30% of organisations reach maturity level 3 or higher in strategy, governance, and agentic AI controls.
Translate those numbers into critical infrastructure terms. You have operators running AI-enabled capabilities with policies that say “be careful” and zero actionable controls underneath. In a power grid or water treatment plant, that’s not a compliance problem.
That’s a safety case problem.
The reframe: treat this like a controls catalogue
Every infrastructure engineer I’ve worked with in thirty years has lived inside a controls catalogue. ITIL. ISO 27001. NIST 800-53. PCI DSS. SOX change control.
The specifics vary. The shape is always the same.

You have a capability. You have a risk. You have a control. You have evidence that the control works. You have an audit trail.
The NIST Profile, when it ships, will be that shape for AI in CI. I’d bet on it. The concept note already signals this. It says the Profile will “align with, contextualise, reference, interpret, adapt, and facilitate the operationalisation of existing guidance documents at the intersection of AI, IT, OT, ICS, software development, cybersecurity, and critical infrastructure.”
That’s controls-catalogue language. That’s not a new framework. That’s the sentence you write when you’re building a crosswalk.
Mapping what you already have
If you’re running infrastructure today, you have five disciplines that translate directly.
1. Change control → AI model deployment control
Every model version, every retrain, every prompt template change, every RAG corpus update needs to go through a change window with a back-out plan.
2. Incident response → AI incident response
Hallucinations, drift, adversarial prompts, data poisoning, and agent misbehaviour all need runbooks, severity levels, and post-incident reviews.
3. Safety case → AI safety case
Before an AI system gets anywhere near a controllable asset, you need a documented argument for why it is safe, what its failure modes are, and what the compensating controls are.
4. Supply chain controls → AI bill of materials
You need to know what model you’re using, what it was trained on, who hosts it, what its update cadence is, and what happens when the vendor changes terms.
5. Observability → AI observability
Prompts, responses, token counts, latency, drift signals, and output distributions all need to be logged, traced, and alerted on.
You already have the platforms. You already have the SRE rituals. What you don’t have yet is the AI-specific controls layered on top.
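To make the observability mapping concrete, here is a minimal sketch of a logging wrapper around a model call. Everything in it is an assumption for illustration: the `call_model` stand-in, the record fields, and the character counts standing in for token counts are hypothetical, not from NIST guidance or any vendor API.

```python
import json
import time
import uuid

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for your real model or vendor call.
    return f"echo: {prompt}"

def observed_call(prompt: str, log_sink: list) -> str:
    """Wrap a model call so every invocation leaves an audit record."""
    started = time.monotonic()
    response = call_model(prompt)
    record = {
        "trace_id": str(uuid.uuid4()),
        "prompt": prompt,
        "response": response,
        "prompt_chars": len(prompt),        # crude proxy for token count
        "response_chars": len(response),
        "latency_ms": (time.monotonic() - started) * 1000,
    }
    log_sink.append(json.dumps(record))     # ship to your real log pipeline
    return response

logs = []
answer = observed_call("status of pump 7?", logs)
print(len(logs))  # one audit record per call
```

The point is not the code; it is that every call leaves a structured, traceable record you can alert on and replay in a post-incident review, exactly as you would for any other production dependency.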
What to do first
You don’t need the Profile to be published to start. Here’s what I’d do this quarter if I were running infrastructure for a CI operator.
- Build an AI inventory. Every AI-enabled capability, every model, every vendor integration, every agent. If you can’t list it, you can’t govern it.
- Map each entry to the seven AI RMF trustworthiness characteristics. Mark the ones that are weakest.
- Pick one high-stakes use case and run a tabletop exercise. Pretend the AI has hallucinated or been poisoned. Walk through detection, escalation, containment, and recovery. Find the gaps. Fix the biggest two.
- Join the NIST Community of Interest. The Profile is being built in public. If you have scar tissue from running real systems, your feedback will shape it. And the relationships you build in that community will matter when the Profile ships.
- Stop treating AI governance as a separate track. It isn’t. It’s a new chapter in the catalogue you already run.
The honest take
I’m not going to pretend this is exciting news. A concept note is not a regulation. A Community of Interest is not an enforcement mechanism. Most operators will ignore this until a peer has an incident or a regulator references it. That’s human nature.
But the ones who treat this note as what it actually is — the opening chapter of an AI controls catalogue — will spend the next eighteen months building muscle memory while their peers argue about scope.
When the Profile ships, likely in late 2026 or 2027, the mature operators will already have most of it up and running. The others will be scrambling.
I’ve seen this pattern every time a new control set lands. PCI DSS 3.0. GDPR. NIST CSF 2.0. The operators who win are the ones who treat draft guidance as final guidance, except for the font change.
The tech has changed. The discipline hasn’t.
Sources and further reading
- NIST — Concept Note: AI RMF Profile on Trustworthy AI in Critical Infrastructure
- NIST AI Risk Management Framework
- NIST AIRC — Characteristics of Trustworthy AI
- Industrial Cyber — NIST develops Trustworthy AI in Critical Infrastructure Profile
- CISA/ASD — Principles for the Secure Integration of AI in Operational Technology
- SOCRadar — CISA ICS Advisories Recap 2025
- McKinsey — State of AI Trust in 2026
Work with me
If you’re working through AI governance for critical infrastructure and want to compare notes, subscribe to the BetaTesterLife newsletter. I write honestly about what works and what doesn’t when AI meets real infrastructure.
Want to teach this? I built a companion deck.
Reading a 1,200-word article is one thing. Walking a team through it in a Monday stand-up, a lunch-and-learn, or a steering committee is another.
So I built a Gamma deck that mirrors this piece — the same three angles, the same five discipline mappings, the same honest take — but designed to present, explain, and discuss rather than read.
Use it to brief your infrastructure team on what the NIST concept note actually means. Walk your CISO through the controls-catalogue reframe. Hand it to a peer who hasn’t got time for the long version. Or just steal the slides you find useful.
→ View and present the companion deck on Gamma
The deck is free to view and present. If you adapt it for your own organisation, all I ask is that you don’t strip the source citations.
FAQ
What is the NIST AI RMF Critical Infrastructure Profile?
It’s a sector-specific Profile of the NIST AI Risk Management Framework (AI RMF 1.0), announced via concept note on April 7, 2026. It will guide critical infrastructure operators on specific risk management practices when deploying AI across IT, OT, and ICS environments.
Is the Profile mandatory?
No. Like the AI RMF itself, the Profile is voluntary. But as with prior NIST frameworks (CSF, 800-53), sector regulators are likely to reference it in their expectations for safe deployment.
When will the final Profile be published?
NIST has not published a timeline. Based on prior AI RMF Profile timelines, a publication in late 2026 or 2027 is realistic.
How do I join the Community of Interest?
Sign up via the NIST mailing list form.
