CISA sets AI red lines for OT

New global guidance reins in AI across operational technology deployments. e2e-assure warns that attackers are already exploiting AI while OT security controls lag behind official advice.


US and allied cyber agencies have issued joint guidance on how artificial intelligence should be introduced into operational technology, warning critical infrastructure operators that poorly governed deployments could create new failure modes and attack paths in safety-critical environments.

The 25-page document, Principles for the Secure Integration of Artificial Intelligence in Operational Technology, was published this week by the US Cybersecurity and Infrastructure Security Agency (CISA) alongside partners including the NSA, Australia’s ACSC, the UK’s NCSC, Germany’s BSI, and cyber agencies in Canada, the Netherlands, and New Zealand. It is aimed squarely at operators of critical infrastructure such as energy, water, transport, and manufacturing.

Rather than encouraging rapid experimentation, the guidance sets a cautious baseline. It asks owners and operators first to “understand AI” and its specific risks in OT; to “consider AI use in the OT domain” only where there is a clear business case; to “establish AI governance and assurance frameworks”; and to “embed safety and security practices” in any AI-enabled OT system.

Examples outlined in the document range from predictive maintenance and anomaly detection on industrial equipment to optimisation of plant operations. While these use cases promise efficiency gains, the guidance highlights risks such as data poisoning, model evasion, model drift, and expanded connectivity between IT, cloud, and OT — all of which can undermine safety if not tightly controlled.

For Rob Demain, CEO of UK-based security provider e2e-assure, the timing simply underlines how far security is trailing real-world AI adoption. “AI is being adopted widely across IT, for efficiency and automation, but we are far from being able to secure it in IT, let alone OT. AI will introduce new systemic risks in OT environments, including model drift and mis-generalisation that can lead to unsafe control actions, safety-process bypasses when AI recommendations override manual checks, and an expanded attack surface,” he said.

Demain points out that most industrial environments are only just beginning to embed AI into day-to-day operations, which makes the control gap more acute. “Right now, adoption of AI across industrial and critical infrastructure sectors is limited. However, predictive maintenance, anomaly detection, and optimisation tools are being integrated into OT workflows. We see some organisations piloting LLM-based assistants for engineering and operational support. While adoption will certainly accelerate, security controls are not keeping pace.”


