Infrastructure is at risk from climate uncertainty due to a combination of long life spans, the complexity of the systems it is embedded in, and the high investment costs often involved. Current infrastructure planning approaches lose efficacy under deep uncertainty, necessitating new approaches that function better when the future cannot be predicted. Approaches that attempt to deal with this are collectively known as Decision Making under Deep Uncertainty (DMDU). Bangladesh, the Netherlands, and New Zealand have already adopted DMDU approaches in their delta protection guidance. One popular DMDU technique is Robust Decision Making (RDM). RDM can be seen as a computational extension of scenario planning, in which proposed plans are tested against every potential combination of uncertainties. The Deep South Challenge (DSC), a New Zealand-based research institute, is investigating the use of RDM on a regional scale to discover vulnerabilities and identify robust strategies to address them. One of the test cases is in Helensville, where a Wastewater Treatment Plant (WTP) serving a small community is located in the middle of a floodplain. The WTP discharges its effluent into a strongly tidally influenced river, which drains the entire watershed and flows past large tidal flats into a dynamic estuarine environment.

To identify potential vulnerabilities in the system, robust decision making uses a vulnerability analysis. This consists of a scenario discovery and a global sensitivity analysis, which sample every combination of uncertainties to characterize the vulnerabilities of the system. Because a very large number of model runs is needed, simple conceptual models are usually used to facilitate this. However, such models can oversimplify complex physical processes and topography, and these complicating factors are all present at the case site selected by the DSC. This research investigates whether the added computational demand of a complex model is justified compared to a simple conceptual model. To do this, two models are selected and forced with the same event, and then compared on predicted system behavior, identified vulnerabilities, and the resulting policy advice. From a larger selection, the FLORES and SFINCS models were chosen. FLORES uses a simple hydrological balance to calculate the water level in each subbasin at every timestep. SFINCS is a reduced-physics solver that uses only the Local Inertial Equations (LIE). Both models are forced with a compound rainfall and storm tide event with a 24-hour storm duration, and were calibrated and validated against the results of previous modeling efforts in the region.

After calibration and validation, a sensitivity analysis and scenario discovery were run for both models. The results of the sensitivity analysis show similar behavior for FLORES and SFINCS. The upstream part of the model domain is only sensitive to rainfall, while the downstream part is mostly sensitive to storm surge and mean sea level, and to a lesser degree to tidal amplitude. This downstream part includes the wastewater treatment plant. Compared to SFINCS, FLORES on average overestimated water levels at the WTP, most likely because FLORES does not account for flood attenuation. The scenario discovery showed similar results for the two models, with each model's box describing 73% of the outcomes in which failure occurs.
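To make the vulnerability analysis more concrete, the sketch below shows one way such a global sensitivity analysis over the four drivers named above could be set up in Python, here using Sobol indices computed with the SALib package. The choice of package, the parameter ranges, the sample size, and the toy response function are illustrative assumptions, not the setup used in the study; the stand-in function replaces a full FLORES or SFINCS run.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Uncertain drivers named in the study; ranges and units are placeholders.
problem = {
    "num_vars": 4,
    "names": ["rainfall_mm", "storm_surge_m", "mean_sea_level_m", "tidal_amplitude_m"],
    "bounds": [[0.0, 300.0], [0.0, 1.0], [0.0, 0.6], [1.0, 2.0]],
}

def peak_level_at_wtp(x):
    """Stand-in for one FLORES or SFINCS run: peak water level at the WTP
    for a single sampled combination of drivers (hypothetical response)."""
    rainfall, surge, msl, tide = x
    return msl + tide + surge + 0.0005 * rainfall  # toy response, not the real model

param_values = saltelli.sample(problem, 512)   # Saltelli sampling scheme
y = np.array([peak_level_at_wtp(x) for x in param_values])
si = sobol.analyze(problem, y)                 # Sobol sensitivity indices

for name, s1, st in zip(problem["names"], si["S1"], si["ST"]):
    print(f"{name:20s}  first-order: {s1:5.2f}  total: {st:5.2f}")
```

In the actual analysis, the stand-in function would dispatch each sampled combination to a flood model run and extract the water level at the WTP from its output.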
Both models had the same three thresholds: storm surge, mean sea level, and tidal amplitude. The main difference between the boxes was that the storm surge threshold was 21 centimeters lower for FLORES than for SFINCS, indicating that FLORES overestimates the water level at the WTP. The scenario discovery also revealed a linear relationship between these three factors. From this relationship it follows that, all else being equal and with the same high tidal amplitude and storm surge, the plant in SFINCS only starts flooding once a mean sea level of at least 0.4 meters is reached, while for FLORES this is 0.25 meters. Under an RCP4.5 emissions scenario, a mean sea level of 0.25 meters will be reached in 20-30 years, and a mean sea level of 0.4 meters in 50 years. The proposed policy for both SFINCS and FLORES would be to mitigate storm surge for as long as possible, since the water level at the WTP is most sensitive to this factor; once this is no longer possible, the WTP should be relocated. The SFINCS results indicate that this relocation is needed later than the FLORES results suggest. These results show that while the two models exhibit relatively similar behavior, the small differences in accuracy, most likely due to the lack of flood attenuation in FLORES, lead to a different timing of the proposed adaptations. This suggests that while a conceptual model such as FLORES works well to identify the important factors within the system, a more accurate model such as SFINCS becomes more helpful once the timing of adaptation matters. Further recommendations are to repeat this research with more models, to further calibrate and validate the models, and to include scenario discovery methods that better handle the linear relationship found here.
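As a rough illustration of how these thresholds translate into adaptation timing, the sketch below crosses an assumed RCP4.5-style mean sea level trajectory, fitted only to the two figures quoted above (roughly 0.25 m after 25 years and 0.40 m after 50 years), with the flooding thresholds identified for each model. The trajectory is a hypothetical piecewise-linear curve for illustration, not an official projection.

```python
import numpy as np

# Assumed mean sea level rise curve (metres above present), interpolated through
# the two values quoted in the text and held constant beyond 50 years.
years = np.arange(0, 101)
msl = np.interp(years, [0, 25, 50], [0.0, 0.25, 0.40])

def first_flood_year(msl_threshold_m):
    """First year the assumed mean sea level reaches the flooding threshold
    identified by scenario discovery for a given model."""
    exceed = np.where(msl >= msl_threshold_m)[0]
    return int(years[exceed[0]]) if exceed.size else None

print("FLORES threshold (0.25 m) reached after", first_flood_year(0.25), "years")
print("SFINCS threshold (0.40 m) reached after", first_flood_year(0.40), "years")
```

Under these assumptions the FLORES threshold is crossed decades earlier than the SFINCS threshold, which is the mechanism by which the difference in model accuracy shifts the proposed timing of relocating the WTP.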