Developing a Surrogate Model for Predicting Metallic Corrosion Events
| Name | Affiliation | Phone Number | Email Address |
| --- | --- | --- | --- |
| Phil Monks | AWE | | firstname.lastname@example.org |
British Crown Owned Copyright 2017/AWE

The corrosion of uranium during long-term storage presents a challenge to the safe operation of nuclear facilities and equipment. The formation of uranium hydride is of particular interest, since it can lead to embrittlement of the metal (potentially leading to structural failure), and the resulting hydride powder is pyrophoric (potentially igniting upon exposure to air). Predicting the occurrence of hydride corrosion is therefore an important challenge for materials research.
AWE have developed a conceptual model of the key physical processes leading to hydride formation, namely:

- i) oxidation of the uranium metal by humid air, which forms hydrogen as a by-product;
- ii) once the oxidation source is depleted, diffusion of hydrogen through the protective oxide over-layer, enhanced by local defects and thinned regions whilst hindered by the presence of trap sites in the oxide;
- iii) accumulation of hydrogen in the metal until it approaches its terminal solubility limit, leading to the nucleation of a hydride site;
- iv) breach of the material surface by the hydride site, which, no longer protected by the oxide over-layer, grows rapidly to form discrete corrosion “pits” on the sample surface (see Figure 1).
There are many factors which influence the rate at which the hydriding reaction proceeds, e.g. the temperature, extent of oxidation, availability of hydrogen, sample production route (which determines the levels of impurities), and sample microstructure and mechanical properties (such as grain size, work-hardening, and residual stresses). The hydriding model must be capable of predicting, among other things, the “initiation time”: the time at which the first hydride site ruptures the surface. A numerical diffusion model has been developed which estimates the initiation time by simulating the transport of hydrogen through the oxide over-layer and metal substrate. The operating range of the model is wide, since the relevant model parameters (diffusion coefficients, microstructure, etc.) are notoriously difficult to measure experimentally and are therefore subject to a high degree of uncertainty. The model is also computationally expensive, so global sensitivity and risk analyses remain a challenge.
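To make the diffusion-based initiation-time estimate concrete, the sketch below solves a 1-D diffusion equation through the oxide layer with an explicit finite-difference scheme, reporting the time at which the concentration at the oxide / metal interface first reaches a terminal solubility threshold. All parameter values (`D`, `L`, `c_surf`, `c_ts`) are illustrative placeholders rather than AWE-characterised data, and the zero-flux condition at the metal side is a deliberate simplification of the real model (which also includes trapping and a metal substrate).

```python
def initiation_time(D=1e-19, L=5e-7, c_surf=1.0, c_ts=0.5,
                    nx=50, t_max=1e9):
    """Return the time (s) at which the oxide/metal interface concentration
    first reaches c_ts, or None if it never does within t_max.

    Illustrative parameters: D = diffusion coefficient (m^2/s),
    L = oxide thickness (m), c_surf = fixed surface concentration,
    c_ts = terminal solubility threshold (normalised units)."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D               # within the explicit-scheme stability limit
    c = [0.0] * nx
    c[0] = c_surf                        # fixed concentration at the oxide surface
    t = 0.0
    while t < t_max:
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + D * dt / (dx * dx) * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[-1] = new[-2]                # zero-flux at the metal side (simplification)
        c = new
        t += dt
        if c[-1] >= c_ts:
            return t
    return None
```

As expected for a diffusion-limited process, the predicted initiation time scales roughly with L² / D, so doubling the diffusion coefficient roughly halves it.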
This use case is aimed at developing a surrogate model of a simplified case which can be sampled much more frequently and efficiently, allowing for extensive sensitivity analysis and quantification of risk, which is required in system-level assurance assessments. It is anticipated that the existing numerical code could be run numerous times to sample an adequate range of model inputs, building a suitable response surface which can be used as an emulator. It may prove beneficial, however, to approach the problem at a more fundamental level and examine the underlying governing equations and variables. The key output of interest is the initiation time, though it would also be desirable to apply the chosen technique more generally to other model predictions such as the hydrogen concentration at the oxide / metal surface.
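One simple realisation of such an emulator is sketched below: the “expensive” model (here a hypothetical toy scaling t ≈ L²/D, standing in for the numerical code) is evaluated at a log-spaced design of points, and surrogate queries are answered by linear interpolation in log-log space. The function names and toy model are assumptions for illustration only.

```python
import bisect
import math

def expensive_model(D, L=5e-7):
    """Stand-in for the numerical code: a toy initiation-time scaling."""
    return L * L / D

def build_surrogate(d_lo=1e-20, d_hi=1e-17, n=16):
    """Evaluate the model at a log-spaced design over [d_lo, d_hi] and
    return a fast surrogate that interpolates linearly in log-log space."""
    xs = [d_lo * (d_hi / d_lo) ** (i / (n - 1)) for i in range(n)]
    logx = [math.log10(x) for x in xs]
    logy = [math.log10(expensive_model(x)) for x in xs]

    def surrogate(D):
        lx = math.log10(D)
        j = min(max(bisect.bisect(logx, lx), 1), n - 1)   # bracketing interval
        w = (lx - logx[j - 1]) / (logx[j] - logx[j - 1])
        return 10 ** (logy[j - 1] + w * (logy[j] - logy[j - 1]))

    return surrogate
```

Once built, the lookup table is entirely independent of the original code, matching the requirement that the surrogate be interrogable on its own; a higher-dimensional version would interpolate over all inputs rather than one.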
An additional aspect of interest relates to the use of the surrogate model to design an efficient set of validation experiments. We measure validation performance via a Validation Metric, which provides a useful level of objectivity to the assessment of a given model across a range of experimentally-controlled variables (e.g. temperature and pressure), as well as tracking its performance as a function of time. Validation experiments can be expensive, both in terms of material resources and research time, and so it would be beneficial to optimise the experimental programme by determining i) the most important variables to test (which may be highlighted by a suitable model sensitivity analysis), ii) the number of experiments and variable values to use, and iii) the length of time for which an experiment should be conducted. The latter point is important since ageing experiments could continue for months in some cases, and model accuracy tends to degrade over this period as the prediction extends far beyond the original timescales involved in the training datasets. The key question arising is, therefore: given a set of uncertainties in the model input parameters and a desired operational timescale, when can the experiment be stopped without prematurely estimating the Validation Metric and a subsequent loss of confidence (or over-confidence) in the model?
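The stopping question can be prototyped with a much simpler metric than the one cited: the sketch below tracks a running mean relative discrepancy between model predictions and observations as experimental data accrue, and flags the first point at which that metric has stabilised within a tolerance. This is an illustrative stand-in, not the multi-response Validation Metric of Li et al.; the function names and tolerances are assumptions.

```python
def running_metric(pred, obs):
    """Cumulative mean relative discrepancy after each new observation."""
    out, total = [], 0.0
    for k, (p, o) in enumerate(zip(pred, obs), 1):
        total += abs(p - o) / max(abs(o), 1e-12)
        out.append(total / k)
    return out

def stop_index(metric, window=3, tol=0.01):
    """First index at which the metric has moved by less than tol over the
    previous `window` observations, i.e. a crude stabilisation criterion."""
    for i in range(window, len(metric)):
        if max(abs(metric[i] - metric[j]) for j in range(i - window, i)) < tol:
            return i
    return None
```

A practical study would also propagate the input uncertainties through the surrogate at each time point, so that “stabilised” is judged against the predictive uncertainty band rather than a fixed tolerance.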
The major model inputs which will be used as part of the analysis are as follows:
- Diffusion coefficient in the oxide
- Various trapping rate constants in the oxide
- Diffusion coefficient in the metal
- Surface concentration of hydrogen
- Terminal solubility of hydrogen in the metal
- Oxide thickness distribution
The uncertainties in these parameters have been partially characterised, either through experimental measurement (which in some cases involves significant extrapolation from high-temperature data), through assumption, or else according to the operational experimental range. The uncertainty ranges cover multiple orders of magnitude in most cases, and correspond to a variety of distributions (e.g. normal distributions arising from regression, uniform distributions where detailed knowledge is lacking). The manner in which the model parameter values may be interdependent is understood only to a first level of approximation.
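A sampling scheme consistent with this description might draw log-uniform values where a range spans orders of magnitude and lognormal values where regression supplies a mean and spread. The parameter names, ranges, and distribution choices below are illustrative assumptions, not the characterised AWE values.

```python
import math
import random

def sample_inputs(rng):
    """Draw one illustrative input set: log-uniform over wide ranges,
    lognormal where a regression-style mean/spread is assumed."""
    return {
        "D_oxide": 10 ** rng.uniform(-21, -18),   # m^2/s, log-uniform
        "D_metal": 10 ** rng.uniform(-16, -13),   # m^2/s, log-uniform
        "k_trap":  10 ** rng.uniform(-6, -3),     # trapping rate constant
        "c_surf":  rng.lognormvariate(math.log(1.0), 0.3),   # surface conc.
        "c_ts":    rng.uniform(0.2, 0.8),         # terminal solubility
        "t_oxide": rng.lognormvariate(math.log(5e-7), 0.5),  # thickness, m
    }

rng = random.Random(0)                    # seeded for reproducibility
batch = [sample_inputs(rng) for _ in range(1000)]
```

Parameter interdependence, which the text notes is only approximately understood, could later be introduced by replacing the independent draws with a correlated (e.g. Gaussian copula) sampler.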
The numerical code has been developed in-house in Mathematica. Functionality exists for reading and writing inputs / outputs to file, which provides a basic ability to interrogate the model. Computational times per simulation vary according to the input parameters, though timescales of seconds are typical (depending on the coarseness of the numerical mesh). The model being used here is also simple enough that participants could code up their own versions in a reasonably short period of time using whatever numerical methods they wish.
The main output would be a set of polynomials, a reference / lookup table, or an algorithm that acts as the surrogate. This should be independent of the main numerical code, allowing it to be interrogated separately during other dependent assessments.
Otherwise, there are no restrictions on the visualisation of the UQ analysis. Top-level summaries of key uncertainties and risk factors would be desirable to aid communication across various levels of our business (e.g. modelling, engineering analysis, programme management, and leadership teams).
To date, the UQ techniques adopted have involved “brute force” Monte Carlo sampling of the input distributions in order to apply a global sensitivity analysis technique (which involves the crude assumption of linear relationships between the inputs and the output). Lengthy computation times have limited the usefulness of this technique. Limited investigations into developing a response surface using the PSUADE UQ package have also been carried out; however, irregularities in the output surface have resulted in unreliable results, and so more in-depth experience of applying these types of tools to problems of this type is sought.
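The “brute force” linear approach described above can be sketched as follows: Monte Carlo sample the inputs, evaluate a model (here a toy log-space scaling standing in for the real code), and rank inputs by their squared Pearson correlation with the output, a crude sensitivity index that is only meaningful under the stated linearity assumption. The ranges and toy model are illustrative assumptions.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1)
logD = [rng.uniform(-20, -18) for _ in range(2000)]   # log10 diffusion coeff.
logL = [rng.uniform(-7, -6) for _ in range(2000)]     # log10 oxide thickness
# Toy scaling t ~ L^2 / D, worked entirely in log space (where it is linear)
logt = [2 * l - d for d, l in zip(logD, logL)]

scores = {"D_oxide": pearson(logD, logt) ** 2,
          "t_oxide": pearson(logL, logt) ** 2}
```

For independent inputs and a truly linear response these squared correlations sum to one; large shortfalls from one in practice signal the nonlinearity (and interactions) that motivate moving to a proper surrogate plus variance-based sensitivity indices.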
- J. Glascott; “A model for the initiation of reaction sites during the uranium-hydrogen reaction assuming enhanced hydrogen transport through thin areas of surface oxide”; Philos. Mag., 94(3), 221-241 (2014).
- W. Li, W. Chen, Z. Jiang, Z. Lu, and Y. Liu; “New validation metrics for models with multiple correlated responses”; Reliability Engineering and System Safety, 127, 1-11 (2014).
- PSUADE - Problem Solving environment for Uncertainty Analysis and Design Exploration; https://computation.llnl.gov/projects/psuade-uncertainty-quantification