Muthanna Journal of Engineering and Technology
Volume (14), Issue (2), Year (2026), Pages (44-62)
DOI:10.52113/3/eng/mjet/2026-14-02-/44-62
Research Article By:
Zeyad Abdullah Abdul Rahman
Corresponding author E-mail: zeyadabdullahzaa@gmail.com
ABSTRACT
Modern power systems are increasingly vulnerable to cascading failures due to growing complexity and renewable integration. Cooperative, adaptive protection schemes are fundamental to improving grid resilience, but their implementation is inherently limited by data-privacy and ownership rules in multi-entity grids. Existing centralized and multi-agent deep reinforcement learning (DRL) solutions require consolidation of sensitive operational data, creating critical single points of failure and privacy violations. To address this gap, this paper presents a novel Federated Deep Reinforcement Learning (FDRL) framework. The methodology formulates relay coordination as a Partially Observable Markov Decision Process (POMDP) and employs a Federated Deep Deterministic Policy Gradient (F-DDPG) algorithm in which distributed relay agents learn local models from privately held data and share only encrypted model parameters with a central aggregator for secure federated training. Simulation results on the IEEE 39-bus system show that the proposed scheme reduces cascade size by 52.5% and load shedding by 54.7% relative to traditional protection, achieves a fault-discrimination accuracy of 95.8%, and operates at 13% of the speed of a privacy-violating centralized DRL benchmark. The framework thus accomplishes intelligent, collaborative protection without compromising data confidentiality.
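The abstract describes agents that train locally and transmit only model parameters to a central aggregator. The sketch below illustrates the core of that idea with a sample-weighted federated average (FedAvg-style) of local parameter vectors; the function name, the use of plain NumPy arrays in place of actual DDPG actor networks, and the equal-weighting scenario are all illustrative assumptions, not the paper's exact F-DDPG aggregation or encryption scheme.

```python
import numpy as np

def federated_average(client_weights, client_samples):
    """Sample-weighted average of clients' parameter vectors.

    Illustrative stand-in for the server-side aggregation step:
    each relay agent contributes only its parameters (here, a
    NumPy array), never its raw operational data.
    """
    total = sum(client_samples)
    # Weight each client's parameters by its share of the total samples.
    return sum((n / total) * w for w, n in zip(client_weights, client_samples))

# Three hypothetical relay agents' local actor parameters.
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
w3 = np.array([5.0, 6.0])

# With equal sample counts this reduces to a plain mean: [3.0, 4.0].
global_w = federated_average([w1, w2, w3], [10, 10, 10])
```

In a full F-DDPG loop, the aggregated `global_w` would be broadcast back to all agents to initialize the next round of local training.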
Keywords:
Adaptive Protection, Cascading Failures, Federated Learning, Privacy Preservation, Smart Grid.