
Open-source AI to protect power grids from DER fluctuations


Researchers at the KTH Royal Institute of Technology have developed open-source AI algorithms to protect grids from random fluctuations introduced by variable renewables.

The increasing complexity of power grids, with high levels of inverter-based variable renewables and unpredictable electric vehicle charging patterns, has brought challenges for grid operation and made real-time control key to maintaining voltage stability.

Based on deep reinforcement learning – a subset of machine learning – the new algorithms are designed to address this challenge by delivering intelligence to power converters in the grid. They use what the researchers describe as a novel data synchronisation strategy to safely optimise the large-scale coordination of energy sources under fast fluctuations, without relying on real-time communication.

“Centralised control is not cost-efficient or fast under continuous fluctuations of renewable energy and electric vehicles,” says Qianwen Xu, one of the researchers involved in the project.


“Our purpose is to improve control strategies for power converters, by making them more adaptive and intelligent in order to stabilise complex and changing power grids.”

The research, which was demonstrated on KTH’s smart microgrid hardware platform and published in IEEE Transactions on Sustainable Energy, proposes a projection-embedded deep reinforcement learning algorithm to achieve decentralised optimal control with a 100% safety guarantee.

This is intended to overcome a limitation of existing deep reinforcement learning methods in power system applications: they cannot achieve optimal performance and guarantee safe operation at the same time.

In essence, the researchers’ approach is to formulate the grid control problem as a deep reinforcement learning problem with hard physical constraints, and then to project the multi-agent algorithm’s actions onto a constraint set characterised by the physics of the distribution system.

With this, the proposed method can achieve optimal control of the distribution system in a decentralised manner without real-time communication, while guaranteeing the system’s physical constraints at all times. This makes it flexible for scaling and practical deployment.
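The safety mechanism can be pictured as a projection step sitting between the learned policy and the grid: before any setpoint is applied, it is mapped onto the nearest value that satisfies the physical limits. The minimal Python sketch below illustrates the idea with simple box constraints on inverter reactive power; the limits, numbers and function names are illustrative assumptions, not code from the KTH paper.

```python
# Minimal sketch (not the published KTH code) of a "safety projection" layer:
# a learned policy proposes reactive-power setpoints for inverters, and the
# proposal is projected onto box limits derived from each inverter's rating
# before being applied to the grid. All values here are hypothetical.
import numpy as np

def project_to_limits(q_proposed, q_min, q_max):
    """Project proposed setpoints onto per-inverter box constraints."""
    return np.clip(q_proposed, q_min, q_max)

# Hypothetical example: three inverters, reactive power limits in kvar
q_min = np.array([-50.0, -30.0, -40.0])
q_max = np.array([ 50.0,  30.0,  40.0])

# Stand-in for the reinforcement-learning policy output (e.g. a neural network)
q_proposed = np.array([62.0, -10.0, -45.0])

q_safe = project_to_limits(q_proposed, q_min, q_max)
print(q_safe)  # [ 50. -10. -40.] -- every setpoint now respects its limit
```

Because the projection is applied locally at each converter, no real-time communication is needed to keep the applied setpoints inside their physical limits, which is the intuition behind the decentralised safety guarantee described above.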

The research formed part of KTH’s Digital Futures Centre, which collaborates with researchers from the University of California, Berkeley, and Stockholm University.

Deep reinforcement learning combines deep learning and reinforcement learning and has been developed for application in complex, unpredictable systems.
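For readers unfamiliar with the technique, the sketch below shows the basic reinforcement-learning interaction loop in Python: an agent observes a state, a policy (in deep reinforcement learning, a neural network) chooses an action, and the environment returns a reward used to improve the policy. The toy voltage environment and random placeholder policy are assumptions for illustration only, not part of the published work.

```python
# Illustrative skeleton of the reinforcement-learning loop: observe state,
# act, receive reward. The one-bus "voltage" environment and the random
# placeholder policy are hypothetical and stand in for a trained network.
import numpy as np

rng = np.random.default_rng(0)

def toy_environment_step(state, action):
    """Hypothetical environment: the action nudges a bus voltage."""
    next_state = state + 0.1 * action + 0.01 * rng.standard_normal()
    reward = -abs(next_state - 1.0)  # penalise deviation from 1.0 p.u.
    return next_state, reward

state = 0.95  # initial voltage in per unit
for step in range(5):
    action = rng.uniform(-1.0, 1.0)   # placeholder for the learned policy
    state, reward = toy_environment_step(state, action)
    print(f"step {step}: voltage={state:.3f} p.u., reward={reward:.3f}")
```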