In recent decades, climate change has been altering our environment at a pace that is unprecedented in recent history. Civil engineering structures depend on the deteriorating environment in which they are situated. These changes can increase loading, for example through extreme weather events, or reduce a structure's resistance, for instance through accelerated corrosion or a growing number of frost days. Planning for such events depends strongly on the state of climate change, yet this state is currently not considered in the sequential decision-making optimization used to inspect and maintain our infrastructure.
Sequential decision-making optimization refers to the process of finding the optimal sequence of actions that keeps a structure safe, where optimality typically means the plan with the lowest expected costs. Traditional inspection and maintenance plans apply heuristic decision rules based on, for instance, time or condition constraints, and therefore generally fail to find an optimal solution. Over the last decade, Deep Reinforcement Learning (DRL) has been applied to this problem and has been shown to outperform traditional approaches. In particular, the ability of DRL to find near-optimal policies in partially observable environments makes it a strong candidate for the problem at hand.
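As a minimal sketch of what this optimization formalizes (the notation below is illustrative and not taken from the thesis): writing $\pi$ for a policy, $C(s_t, a_t)$ for the cost of taking action $a_t$ in state $s_t$, and $\gamma \in [0,1)$ for a discount factor, the lowest-cost plan over a horizon $T$ corresponds to
\[
\pi^{*} = \arg\min_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, C(s_t, a_t) \right],
\]
where the expectation is taken over the stochastic evolution of the structure's state under policy $\pi$.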
This thesis develops a framework that incorporates partial observability over possible climate scenarios into the inspection and maintenance planning of engineering structures. The framework describes the steps required to translate a physical system into a Partially Observable Markov Decision Process (POMDP), the mathematical framework in which an optimal sequence of inspection and maintenance actions can be sought. It then explains how a DRL algorithm can be used to find an optimal policy for the POMDP, including benchmarking and testing that policy. A belief state over the climate scenarios is maintained to capture the partial observability of the climate, and this belief is updated through Bayesian inference using an observed climate parameter, such as temperature.
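As a sketch of this update (with illustrative notation, not taken from the thesis: $b_t(k)$ for the belief assigned to climate scenario $k$ at time $t$, and $o_{t+1}$ for the newly observed climate parameter), Bayes' rule yields
\[
b_{t+1}(k) = \frac{p(o_{t+1} \mid k)\, b_t(k)}{\sum_{k'} p(o_{t+1} \mid k')\, b_t(k')},
\]
so scenarios whose projections are more consistent with the observed temperatures accumulate probability mass over time.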
In the second part of the thesis, the framework is applied to a case study that configures a stochastic deterioration process for different climate scenarios. An optimal policy is found by applying Proximal Policy Optimization (PPO) with a decentralized policy across the various components. Two well-established heuristic maintenance policies, time-based maintenance (TBM) and condition-based maintenance (CBM), are configured as benchmarks, and the framework is compared against them using lifecycle costs and safety as metrics. The policy produced by the framework outperforms both benchmarks in terms of costs while maintaining the same level of safety, reducing lifecycle costs by 5% relative to TBM and by 1% relative to CBM. An important distinction is that the benchmarks are optimized per climate scenario, whereas the framework distinguishes between scenarios without prior knowledge of which one applies. It is therefore concluded that the framework can find an optimal policy under the uncertainties related to climate change. The case study, however, does not fully capture the complexity of engineering structures that the framework can handle, and it is therefore recommended to apply the framework to a more complex structure in future work.