
The path to grid resilience: Coupling asset fragility and weather


When it comes to grid resilience, coupling data on extreme weather conditions with the concept of asset fragility is the clearest way to quantify long-term utility asset risks from extreme weather events, writes Rahul Dubey of Rhizome.

It’s clear: the impacts of climate change are here, and plans must be made to weather them.

In just the last three years, there have been 90 separate billion-dollar disasters in the US – 53 of which were caused by severe storms.

Extreme weather events are noticeably increasing in frequency and severity as climate change worsens, resulting in a 500% increase in average natural disaster costs since 2000.

The most devastating natural disaster in 2022 was Hurricane Ian, the third most costly hurricane on record in the US, causing $112.9 billion in damage.


Ian impacted roughly 2.7 million residential and commercial customers in Florida. The East Coast wasn’t alone in weather and climate disasters; the Western and Central Drought/Heat Wave caused $22.1 billion in damage last year, making it one of the costliest droughts on record.

Given the intensity and frequency of ongoing and future climate disasters, how can utility companies and grid operators work together to best safeguard the grid?

Map provided by the National Centers for Environmental Information.

Pairing asset fragility with extreme weather events

Grid assets are subject to various disruptions and damages as a result of physical hazards presented by a changing climate. Transmission conductor ice buildup, substation flooding and downing of distribution equipment, just to name a few, are all pain points for the grid that are growing in frequency and severity. But which assets, given their condition and direct exposure to natural forces, are most likely to face failures?

The clearest way to understand and quantify long-term utility asset risks from extreme weather events is by coupling data on extreme weather conditions with the concept of asset fragility – the vulnerability of utility assets like poles, cables, and transformers to external forces. In fact, it’s the only way to holistically and reliably build models that ensure we are protecting the grid to the fullest extent.

This process begins with data assimilation. A fragility curve, or damage function, describes how a specific piece of infrastructure is damaged by a specific hazard. Fragility curves help to quantify the likelihood and extent of service interruption on any section of the distribution network that is susceptible to hazards like high wind gusts and floods.
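
As a hedged illustration of what a damage function can look like, the sketch below models pole failure probability as a lognormal function of peak wind gust speed, a common functional form for fragility curves. The median capacity and dispersion values are placeholders, not engineering constants or figures from any standard.

```python
# Minimal sketch of a fragility curve (damage function): the probability
# that an asset fails as a function of hazard intensity. A lognormal CDF
# is a common functional form; the parameters below are placeholders.
from scipy.stats import lognorm


def pole_failure_probability(gust_speed_ms: float,
                             median_capacity_ms: float = 40.0,
                             dispersion: float = 0.3) -> float:
    """Probability that a pole fails at a given peak gust speed (m/s).

    median_capacity_ms: gust speed at which failure probability reaches 50%.
    dispersion: lognormal shape parameter (uncertainty in capacity).
    Both defaults are illustrative, not engineering constants.
    """
    return lognorm.cdf(gust_speed_ms, s=dispersion, scale=median_capacity_ms)


for gust in (20, 30, 40, 50):
    print(f"{gust} m/s -> P(failure) = {pole_failure_probability(gust):.2f}")
```

In practice, each asset class and hazard type would carry its own curve, calibrated against the utility’s inspection and outage history.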

In order to understand the impact of severe weather, it is critical to understand the following about an electric utility’s distribution system:

  1. Location and layout of the system

Where are the critical parts of the infrastructure located and what is the distribution system’s network topology? This is critical to understanding the different physical hazards, like floods or wind gusts, that affect distribution assets. Further, which customers experience outages when certain assets experience a failure?

  2. Standards and specifications of each asset

Every asset on the distribution system – poles, transformers, conductors, substation relays, etc. – has specifications which govern its operating constraints. Learning the different environmental stressors each asset has historically faced informs the likelihood of a future failure under a new set of conditions. Specific codes, standards, and specifications provided by ANSI or IEEE, for instance, offer useful metrics that help in understanding the resilience of electric infrastructure.

  3. Asset fragility and vulnerability

Understanding an asset’s health and its associated vulnerabilities is the key to creating a baseline for risks to the distribution system. Combined with an asset’s specifications and maintenance records, an electric distribution utility’s outage management and asset data can be used to train machine learning and statistical algorithms to understand the drivers for asset failure and their associated vulnerabilities. Two examples include:

● Pole Strength

Pole strength degrades over time. Coupling data on past weather events with a pole’s current condition, specifications, and location supports inferences about which poles will be affected by future extreme weather events (see the sketch after this list).

● Transformers

Extreme heat days can overload transformers by overheating their internal windings, leading to a potential short. Understanding the condition of transformers on the distribution system, together with the likelihood of more extreme heat days, helps predict which parts of the system are most vulnerable to heat.
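
Pulling these threads together, the sketch below (referenced in the pole strength example above) estimates a per-pole failure probability from its design rating, age and inspection condition, using the same lognormal fragility form as the earlier sketch. The field names, degradation factors and forecast gust value are illustrative assumptions; a real model would be calibrated against the utility’s own inspection and outage records.

```python
# Sketch: combine asset specifications, age and inspection condition with a
# forecast gust to rank poles by estimated failure probability. All field
# names, degradation factors and the forecast value are assumptions.
from dataclasses import dataclass

from scipy.stats import lognorm


@dataclass
class Pole:
    pole_id: str
    design_capacity_ms: float   # rated gust capacity when new (m/s)
    age_years: int
    condition: str              # from inspection: "good", "fair" or "poor"


# Assumed degradation multipliers; a real model would calibrate these
# against inspection records and observed failures.
CONDITION_FACTOR = {"good": 1.0, "fair": 0.85, "poor": 0.65}
AGE_DECAY_PER_YEAR = 0.005      # 0.5% capacity loss per year (placeholder)


def degraded_capacity(pole: Pole) -> float:
    """Estimated present-day gust capacity after age and condition losses."""
    age_factor = max(0.5, 1.0 - AGE_DECAY_PER_YEAR * pole.age_years)
    return pole.design_capacity_ms * age_factor * CONDITION_FACTOR[pole.condition]


def failure_probability(pole: Pole, gust_ms: float, dispersion: float = 0.3) -> float:
    """Lognormal fragility curve centred on the pole's degraded capacity."""
    return lognorm.cdf(gust_ms, s=dispersion, scale=degraded_capacity(pole))


poles = [Pole("P-101", 45.0, 35, "fair"), Pole("P-102", 45.0, 8, "good")]
forecast_gust_ms = 38.0
for pole in sorted(poles, key=lambda p: failure_probability(p, forecast_gust_ms),
                   reverse=True):
    print(pole.pole_id, round(failure_probability(pole, forecast_gust_ms), 3))
```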

How do we do this?

It’s critical to build a machine learning or statistical model that learns the correlations between different vulnerabilities and the resulting disruptions to services and assets.

Machine learning is uniquely qualified to uncover climate-asset risk and automate planning workflows, helping to quantify the probability of failure under various extreme weather conditions.
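
As a minimal sketch of that kind of workflow, the snippet below trains a gradient-boosted classifier on a hypothetical table that joins asset records, historical weather exposure and outage history. The file name, feature columns and model choice are assumptions for illustration, not a description of any particular production system.

```python
# Sketch: learn drivers of asset failure from historical asset, weather and
# outage data. The file name, feature columns and model choice are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical table joining asset records, past weather exposure and outage
# history (failed = 1 if the asset failed during the event, else 0).
df = pd.read_csv("asset_weather_outage_history.csv")
features = ["asset_age_years", "condition_score", "peak_gust_ms",
            "flood_depth_m", "max_temp_c", "load_factor"]
X, y = df[features], df["failed"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Predicted probability of failure under the held-out conditions, plus a
# quick check of how well the model separates failures from non-failures.
p_fail = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, p_fail), 3))
```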

This framework also accounts for the types of customers on the distribution system and their requirements, better informing the level of investment needed to support critical loads such as customers on life-support systems.
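
One simple way to fold customer needs into the model output, sketched below with assumed weights and customer classes, is to scale each asset’s failure probability by the criticality of the customers it serves.

```python
# Sketch: turn failure probabilities into an investment priority score by
# weighting the customers each asset serves. Weights are illustrative only.
CRITICALITY_WEIGHT = {"life_support": 10.0, "hospital": 8.0,
                      "commercial": 2.0, "residential": 1.0}


def priority_score(p_failure: float, customers: dict) -> float:
    """Expected weighted customer impact if the asset fails.

    customers: mapping of customer class -> number of customers served.
    """
    impact = sum(CRITICALITY_WEIGHT[c] * n for c, n in customers.items())
    return p_failure * impact


# A moderately at-risk feeder serving life-support customers can outrank a
# higher-risk feeder serving only residential load.
print(priority_score(0.15, {"residential": 600, "life_support": 5}))  # 97.5
print(priority_score(0.30, {"residential": 300}))                     # 90.0
```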

This places a true, prioritised value on grid resiliency – a value measured not only economically but, more importantly, in terms of lives and safety.

True resilience is understood, valued, and built from the ground up. To do this precisely and holistically, we must understand the layout of grid infrastructure and the current and future vulnerabilities that drive fragility, so that utility operators can plan grid hardening investments where they are needed most.

About the author


Rahul Dubey is the co-founder and CTO of Rhizome, and has two decades of experience building global products using HPC, big data analytics, applied machine learning and AI.