
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

Authors:
Matt Butchers, The Knowledge Transfer Network, matt.butchers@ktn-uk.org
Industrial Sectors:

Aeronautics

1. DESCRIPTION OF USE CASE

This use case is extracted and adapted from the NASA LaRC UQ Challenge; full details are given in Ref [1].

I. Introduction

NASA missions often involve the development of new vehicles and systems that must be designed to operate in harsh domains with a wide array of operating conditions. These missions involve high-consequence and safety-critical systems for which quantitative data is either very sparse or prohibitively expensive to collect. Limited heritage data may exist, but such data are also usually sparse and may not be directly applicable to the system of interest, making uncertainty quantification extremely challenging. NASA modelling and simulation standards require estimates of uncertainty and descriptions of any processes used to obtain these estimates. To better meet this standard, NASA recently sought responses to this challenge problem to address the following:

  • Modelling and refinement of uncertainty given sparse data
  • Propagation of mixed aleatory and epistemic uncertainties through system models
  • Parameter ranking and sensitivity analysis in the presence of uncertainty
  • Identification of the parameters whose uncertainty is the most / least consequential
  • Worst-case system performance assessment
  • Design in the presence of uncertainty
 
II. Problem Formulation
Let S denote the mathematical model of the multidisciplinary system under investigation. This model is used to evaluate the performance of a physical system and thereby its suitability. Denote by p a vector of parameters in the system model whose value is uncertain and by d a vector of design variables whose value can be set by the analyst. Furthermore, denote by g a set of requirement metrics used to evaluate the system's performance. The value of g depends on both p and d. The system will be regarded as requirement compliant if it satisfies the set of inequality constraints g < 0 (a). For a fixed value of the design variable d, the set of p-points where g < 0 is called the safe domain, while its complement set is called the failure domain. Therefore, the failure domain corresponding to a fixed design point comprises all the parameter points p where at least one of the requirements is violated.
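To make the compliance convention concrete, the following minimal Python sketch encodes the safe/failure-domain test; `requirement_metrics` is a hypothetical stand-in for the black-box model S, which the challenge supplies as software.

```python
import numpy as np

def requirement_metrics(p: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for the black-box system model S:
    maps parameters p and design d to the 8 requirement metrics g."""
    return np.tanh(p[:8] - d[:8])  # illustrative stand-in only

def is_compliant(p: np.ndarray, d: np.ndarray) -> bool:
    """Requirement compliance: every component of g must be negative.
    For fixed d, the failure domain is the set of p where this fails."""
    return bool(np.all(requirement_metrics(p, d) < 0.0))
```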

The relationship between the inputs p and d and the output g is given by several functions, each representing a different subsystem or discipline. The function prescribing the output of the multidisciplinary system is given by $$\textbf{g} = \textbf{f}(\textbf{x},\textbf{d}),~~~(1)$$ where x is a set of intermediate variables whose dependence on p is given by $$\textbf{x} = \textbf{h}(\textbf{p}).~~~(2)$$ The components of x, which can be interpreted as outputs of the discipline analyses in (2), are the inputs to the cross-discipline analyses in (1). The components of g and x are continuous functions of the inputs that prescribe them.

III. The Physical System

Figure 1. NASA GTM Problem Structure

This challenge problem is based upon a model of the NASA Langley Generic Transport Model (GTM); see [2] and [3]. The GTM is a 5.5% dynamically scaled, remotely piloted, twin-turbine research aircraft used to conduct experiments for the NASA Aviation Safety Program. Although a discipline-specific application is the focus of the challenge problem, the problem was specifically structured so that specialised aircraft knowledge is not required. Eight stability and performance requirements are imposed upon the system. These requirements are representative of conventional measures of goodness typically used in aircraft stability and control, e.g., lateral and longitudinal stability, good command tracking, actuator saturation, etc. The mathematical model S describes the dynamics of the GTM, a remotely operated twin-jet aircraft developed by NASA Langley Research Center. The aircraft is piloted from a ground station via radio-frequency links using on-board cameras and synthetic vision technology. The parameters in p are used to describe losses in control effectiveness and time delays resulting from telemetry and communications, as well as to model a spectrum of flying conditions that extend beyond the normal flying envelope. The requirements in g are used to describe the vehicle's stability and performance characteristics with regard to pilot command tracking and handling/riding qualities. The "black box" format of the challenge problem aims at making it amenable to the largest possible audience without favouring or hindering respondents depending upon their particular field of expertise.

IV. The Objectives

Uncertainty Characterisation:
  • Advance methods for the refinement of uncertainty models using limited experimental data
  • Refine uncertainty models given the following:
    • A mapping from parameters to output
    • Limited "truth data"
    • An initial representation of the uncertainty model
The classes of uncertainty models considered are (a minimal sketch of each follows the list):
  • Aleatory uncertainties modelled as random variables with fixed functional forms and known coefficients. (Class I)
  • Epistemic uncertainties modelled as fixed but unknown constants within prescribed intervals. (Class II)
  • Aleatory uncertainties modelled as distributional probability boxes (p-boxes), with their parameters modelled as intervals. (Class III)
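As a rough illustration of how the three classes differ in code, the sketch below represents one parameter of each class with NumPy; the distributions and intervals echo Tables 1 and 2, but the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class I  -- aleatory, fully prescribed distribution: sample directly.
def sample_class_I(n):
    return rng.uniform(0.0, 1.0, size=n)      # e.g. p3 ~ Uniform, Delta = [0, 1]

# Class II -- epistemic, a fixed but unknown constant in an interval:
# carry the interval itself; any value inside it is admissible.
class_II_interval = (0.0, 1.0)                # e.g. p2, Delta = [0, 1]

# Class III -- distributional p-box: a known family (here Beta) whose
# coefficients are only known to lie in intervals. Choosing coefficients
# is an "epistemic realisation"; sampling then gives the aleatory draws.
def sample_class_III(a, b, n):                # e.g. p7: 0.982 <= a <= 3.537,
    return rng.beta(a, b, size=n)             #          0.619 <= b <= 1.080
```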
Sensitivity Analysis:
  • Develop methods for the identification of critical parameters from within a multidisciplinary parameter space. This objective is similar to classical sensitivity analysis, but here the metrics of interest are probabilistic and the uncertain parameters belong to diverse classes of uncertainty models.
  • We consider this an open problem of great practical significance.
Uncertainty Propagation:
  • Deploy approaches for propagating uncertainties in multidisciplinary systems subject to both aleatory and epistemic uncertainty. This involves computing the ranges of both failure probabilities and expected values.
  • This objective exploits the fact that very few tools exist to propagate mixed aleatory and epistemic uncertainties through general nonlinear systems. Their applicability and practicality remain to be demonstrated.
Extreme Case Analysis:
  • Identify the combinations of uncertainties that lead to best- and worst-case outcomes according to two probabilistic measures of interest.
Robust Design:
  • Determine design points that provide optimal worst-case probabilistic performance.
 

2. KEY UQ&M CONSIDERATIONS
2.1 Process Inputs

Subproblem A: Uncertainty Characterisation

Here we consider a subsystem whose scalar output \(\textbf{x}_1\) depends on five uncertain parameters (b) as given by $$\textbf{x}_1 = \textbf{h}_1(\textbf{p}_1,\textbf{p}_2,\textbf{p}_3,\textbf{p}_4,\textbf{p}_5).~~~(3)$$ Specific information on these parameters is provided in Table 1. The first column gives the parameter's symbol, the second its category (see above for a description of the categories), and the third its uncertainty model (c). The symbol \(\Delta\) denotes the support set or parameter range, while \(\rho\), E[\(\cdot\)], V[\(\cdot\)], and P[\(\cdot\)] denote the correlation, expected value, variance, and probability operators respectively. In this subproblem, the tasks of interest are as follows.

A1) We provide software to evaluate \(\textbf{h}_1\) and n = 25 observations of \(\textbf{x}_1\) corresponding to the "true uncertainty model", i.e., a model where \(\textbf{p}_1\) is a fully prescribed Beta random variable, \(\textbf{p}_2\) is a fixed constant, and \(\textbf{p}_4\) and \(\textbf{p}_5\) are described by a single, possibly correlated, bivariate Normal distribution. Use this information to improve the uncertainty model of the category II and III parameters (refer to Section II for the definition of a reduced/improved uncertainty model). The resulting models should exclude only those elements of the original models that fail to explain the observations.

A2)  Use an additional n = 25 observations to validate the models found in A1.

A3)  Improve the uncertainty models further by using all 50 available samples.

A4)  Assess the effect of the number of observations n on the fidelity of the resulting uncertainty models. How much better is the model found in A3 compared to the model found in A1?
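One possible (not prescribed) strategy for A1-A3 is an ABC-style filter: an epistemic realisation of the category II-III parameters is retained only if data simulated under it resemble the observations. In the sketch below, `h1` is a hypothetical stand-in for the supplied subsystem code, and the parameterisation of `theta` is deliberately simplified.

```python
import numpy as np

rng = np.random.default_rng(1)

def h1(p1, p2, p3, p4, p5):
    """Hypothetical stand-in for the supplied subsystem code h1."""
    return p1 * p2 + p3 + 0.1 * p4 * p5

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(Fa - Fb))

def is_consistent(theta, x_obs, n_mc=2000, tol=0.25):
    """Accept an epistemic realisation theta if data simulated under it
    resemble the observed x1 samples. For brevity theta = (E[p1], V[p1],
    p2, E[p4], E[p5]); the Normal variances and correlation are held
    fixed here, unlike in the full task."""
    E1, V1, p2, mu4, mu5 = theta
    c = E1 * (1.0 - E1) / V1 - 1.0          # Beta shapes from mean/variance
    a, b = E1 * c, (1.0 - E1) * c
    x_sim = h1(rng.beta(a, b, n_mc), p2,
               rng.uniform(0.0, 1.0, n_mc),
               rng.normal(mu4, 1.0, n_mc),
               rng.normal(mu5, 1.0, n_mc))
    return ks_distance(x_sim, x_obs) < tol

# The refined model of task A1 is the subset of the original epistemic
# box whose realisations pass is_consistent; A2 repeats the test on the
# 25 validation observations, and A3 pools all 50.
```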

Table 1. Uncertain parameters.

Symbol           | Category | Uncertainty Model
\(p_1\)          | III      | Unimodal Beta, \(3/5 \leq E[p_1] \leq 4/5\), \(1/50 \leq V[p_1] \leq 1/25\)
\(p_2\)          | II       | \(\Delta = [0,1]\)
\(p_3\)          | I        | Uniform, \(\Delta = [0,1]\)
\(p_4\), \(p_5\) | III      | Normal, \(-5 \leq E[p_i] \leq 5\), \(1/400 \leq V[p_i] \leq 4\), \(|\rho| \leq 1\), for i = 4, 5

Subproblem B: Sensitivity Analysis

We now consider the multidisciplinary system S having the input \(\textbf{p}\in\mathbb{R}^{21}\) and the output \(\textbf{g}\in\mathbb{R}^{8}\). The first 5 input parameters should be modelled using the results from task A3, while the remaining 16 parameters are given in Table 2.

Table 2. Uncertain parameters.

Symbol              | Category | Uncertainty Model
\(\textbf{p}_6\)    | II       | \(\Delta = [0,1]\)
\(\textbf{p}_7\)    | III      | Beta, \(0.982 \leq a \leq 3.537\), \(0.619 \leq b \leq 1.080\)
\(\textbf{p}_8\)    | III      | Beta, \(7.450 \leq a \leq 14.093\), \(4.285 \leq b \leq 7.864\)
\(\textbf{p}_9\)    | I        | Uniform, \(\Delta = [0,1]\)
\(\textbf{p}_{10}\) | III      | Beta, \(1.520 \leq a \leq 4.513\), \(1.536 \leq b \leq 4.750\)
\(\textbf{p}_{11}\) | I        | Uniform, \(\Delta = [0,1]\)
\(\textbf{p}_{12}\) | II       | \(\Delta = [0,1]\)
\(\textbf{p}_{13}\) | III      | Beta, \(0.412 \leq a \leq 0.737\), \(1.000 \leq b \leq 2.068\)
\(\textbf{p}_{14}\) | III      | Beta, \(0.931 \leq a \leq 2.169\), \(1.000 \leq b \leq 2.407\)
\(\textbf{p}_{15}\) | III      | Beta, \(5.435 \leq a \leq 7.095\), \(5.287 \leq b \leq 6.945\)
\(\textbf{p}_{16}\) | II       | \(\Delta = [0,1]\)
\(\textbf{p}_{17}\) | III      | Beta, \(1.060 \leq a \leq 1.662\), \(1.000 \leq b \leq 1.488\)
\(\textbf{p}_{18}\) | III      | Beta, \(1.000 \leq a \leq 4.266\), \(0.553 \leq b \leq 1.000\)
\(\textbf{p}_{19}\) | I        | Uniform, \(\Delta = [0,1]\)
\(\textbf{p}_{20}\) | III      | Beta, \(7.530 \leq a \leq 13.492\), \(4.711 \leq b \leq 8.148\)
\(\textbf{p}_{21}\) | III      | Beta, \(0.421 \leq a \leq 1.000\), \(7.772 \leq b \leq 29.621\)

The relationship between the input \(\textbf{p}\) and the output \(\textbf{g}\) is given by Equations (1) and (2), where the intermediate variable \(\textbf{x}\in\mathbb{R}^5\) is given by (3) and $$\textbf{x}_2 = \textbf{h}_2(\textbf{p}_6,\textbf{p}_7,\textbf{p}_8,\textbf{p}_9,\textbf{p}_{10}),~~~(4)$$ $$\textbf{x}_3 = \textbf{h}_3(\textbf{p}_{11},\textbf{p}_{12},\textbf{p}_{13},\textbf{p}_{14},\textbf{p}_{15}),~~~(5)$$ $$\textbf{x}_4 = \textbf{h}_4(\textbf{p}_{16},\textbf{p}_{17},\textbf{p}_{18},\textbf{p}_{19},\textbf{p}_{20}),~~~(6)$$ $$\textbf{x}_5 = \textbf{p}_{21}.~~~(7)$$ Note that the propagation of the uncertainty model of \(\textbf{p}\) through \(\textbf{h}\) yields distributional p-boxes for \(\textbf{x}_1, \textbf{x}_2, \textbf{x}_3\) and \(\textbf{x}_4\). If the uncertainty models of the category II and III parameters are improved, so will be the resulting p-boxes of \(\textbf{x}_1, \textbf{x}_2, \textbf{x}_3\) and \(\textbf{x}_4\). In this subproblem we want to perform the following tasks:

B1)  Rank the 4 category II-III parameters entering Equation (3) according to the degree of refinement in the p-box of \(\textbf{x}_1\) which one could hope to obtain by refining their uncertainty models. Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters, and set their corresponding constant values. Do the same for the 4 category II-III parameters prescribing each of \(\textbf{x}_2,\textbf{x}_3\) and \(\textbf{x}_4\).

B2)  Rank the 17 category II-III parameters of Tables 1 and 2 according to the reduction in the range of the expected value $$ J_1 = E[w(\textbf{p},\textbf{d}_{\rm{baseline}})],~~~(8)$$

which one could hope to obtain by refining their uncertainty models. In this expression, $$ w(\textbf{p},\textbf{d}) = \max_{1\leq i \leq 8}\textbf{g}_i = \max_{1\leq i \leq 8} \textbf{f}_i(\textbf{h}(\textbf{p}),\textbf{d}),~~~(9)$$

is the worst-case requirement metric. Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters, and set their corresponding constant values.

B3) Rank the 17 category II-III parameters of Tables 1 and 2 according to the reduction in the range of the failure probability: $$ J_2 = 1 - P[w(\textbf{p},\textbf{d}_{\rm{baseline}})< 0],~~~(10)$$ which one could hope to obtain by refining their uncertainty models (d). Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters and set their corresponding constant values.

Compare the above rankings and any resulting parameter simplifications. Note that while the tasks in B1 are of interest to experts in the disciplines modelled by h, the tasks in B2 and B3 are of interest to analysts of the integrated system. Further, notice that each ranking can be used to determine the key parameters whose uncertainty models we want to improve. A minimal sketch of the computation underlying these rankings follows.
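The sketch below wires together the composition of Equations (3)-(7) with the metrics of Equations (8)-(10), estimating \(J_1\) and \(J_2\) by plain Monte Carlo for a single epistemic realisation. All of `h1`...`h4`, `f`, and `sample_p` are hypothetical placeholders for the supplied black boxes and a user-written sampler; rankings such as those in B1-B3 can then be produced by "pinching" (fixing) one epistemic parameter at a time and measuring how much the range of the metric shrinks.

```python
import numpy as np

# Hypothetical placeholders for the supplied discipline black boxes.
def h1(*p): return sum(p)
def h2(*p): return np.prod(p)
def h3(*p): return max(p)
def h4(*p): return sum(p) / len(p)
def f(x, d): return np.tanh(x.sum() - d.sum()) * np.ones(8)

def h(p):
    """Discipline analyses, Eqs. (3)-(7): p in R^21 -> x in R^5."""
    return np.array([h1(*p[0:5]),    # Eq. (3): p1 ... p5
                     h2(*p[5:10]),   # Eq. (4): p6 ... p10
                     h3(*p[10:15]),  # Eq. (5): p11 ... p15
                     h4(*p[15:20]),  # Eq. (6): p16 ... p20
                     p[20]])         # Eq. (7): x5 = p21

def w(p, d):
    """Worst-case requirement metric, Eq. (9); d is the design vector."""
    return np.max(f(h(p), d))

def estimate_J1_J2(sample_p, d, n=10_000, seed=2):
    """Monte Carlo estimates of J1 = E[w] (Eq. 8) and J2 = 1 - P[w < 0]
    (Eq. 10) for ONE epistemic realisation; sample_p(rng, n) must draw
    n aleatory samples of p as an (n, 21) array."""
    rng = np.random.default_rng(seed)
    ws = np.array([w(p, d) for p in sample_p(rng, n)])
    return ws.mean(), float(np.mean(ws >= 0.0))
```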

Footnotes 

(b) The components of vector quantities and vector functions will be specified as subscripts, e.g., the scalar \(\textbf{p}_1\) is the first component of \(\textbf{p}\) while \(\textbf{h}_1\) is the first function of \(\textbf{h}\).

(c) A random variable whose probability density function (PDF) has a single peak at the interior of the support set will be called unimodal.

(d) Note that \(J_2\) is equal to \(P[\cup^8_{i=1}\{\textbf{p} : \textbf{f}_i(\textbf{h}(\textbf{p}),\textbf{d}_{\rm{baseline}}) > 0\}]\).

2.2 Propagation

Subproblem C: Uncertainty Propagation

This subproblem aims at finding the range of the metrics \(J_1\) and \(J_2\) in Equations (8) and (10) that results from propagating both the original uncertainty model and a reduced one. The challengers will provide each respondent with a reduced uncertainty model in which 4 out of the 17 category II and III parameters of the respondent’s choice have been improved. This is a practical limitation that may stem from having a limited amount of time or money to generate better models. In particular, the tasks of interest are as follows:

C1)  Find the range of \(J_1\) corresponding to an uncertainty model based on your response to A3 and the information in Table 2.

C2)  Find the range of \(J_2\) corresponding to an uncertainty model based on your response to A3 and the information in Table 2.

C3)  Select 4 category II-III parameters out of the 17 available according to the rankings in B2 and B3, and request from us an improved uncertainty model for them. While improved models for any four parameters will likely lead to smaller ranges of variation, the set of 4 leading to the smallest ranges is ideal. Each working group may request a reduced uncertainty model of 4 parameters of their choice. Only one set of reduced parameters will be provided to each working group.

C4)  Use the reduced uncertainty model to recalculate the ranges of \(J_1\) and \(J_2\).
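A common (though not prescribed) way to attack C1-C4 is a double loop: an inner Monte Carlo loop estimates the metric for a fixed epistemic realisation theta, while an outer global optimiser searches the epistemic box for its extremes. In this sketch, `w` and `sample_p_given` are assumed helpers in the spirit of the earlier sketches.

```python
import numpy as np
from scipy.optimize import differential_evolution

def J2_of_theta(theta, w, sample_p_given, d, n=5000, seed=3):
    """Inner loop: failure probability J2 for one epistemic realisation.
    A fixed seed (common random numbers) keeps the estimate a smooth
    function of theta, which helps the outer search."""
    rng = np.random.default_rng(seed)
    ws = np.array([w(p, d) for p in sample_p_given(theta, rng, n)])
    return float(np.mean(ws >= 0.0))

def J2_range(w, sample_p_given, d, bounds):
    """Outer loop: search the epistemic box (bounds = [(lo, hi), ...])
    for the smallest and largest J2. A global optimiser reduces, but
    cannot eliminate, the underprediction risk noted below."""
    lo = differential_evolution(
        lambda t: J2_of_theta(t, w, sample_p_given, d), bounds).fun
    hi = -differential_evolution(
        lambda t: -J2_of_theta(t, w, sample_p_given, d), bounds).fun
    return lo, hi
```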

A cautionary note on the approaches used to calculate the ranges of \(J_1\) and \(J_2\) is in order. Methods to calculate these ranges may underpredict or overpredict the actual range, and each outcome has its own drawbacks. An underprediction (e.g., a situation where the search for the end points of the range fails to converge to a global optimum) can lead the decision maker to the wrong decision (e.g., the estimate of the largest failure probability is half of the actual value). An overprediction (e.g., a situation resulting from replacing a distributional p-box with a free p-box) can not only lead the decision maker to the wrong decision (e.g., the estimate of the largest failure probability is twice the actual value) but also prevent them from making any decision (e.g., the predicted range of failure probabilities covers the entire [0, 1] interval).

2.3 Interpretation and Communication of Results

Subproblem D: Extreme Case Analysis

This subproblem aims at identifying the epistemic realisations that prescribe the extreme values of \(J_1\) and \(J_2\). In particular, we want to:

D1) Find the epistemic realisations of the category II and III parameters that yield the smallest and the largest value of \(J_1\) for the original uncertainty model used in C1. Do the same for the reduced uncertainty model used in C4.

D2) Find the epistemic realisation of the category II and III parameters that yield the smallest and the largest value of \(J_2\) for the original uncertainty model used in C2. Do the same for the reduced uncertainty model used in C4.

D3) Identify a few representative realisations of x leading to \(J_2\) > 0. These realisations should typify different failure scenarios. Describe the corresponding relationships between x and g quantitatively, e.g., the combination of small values of \(\textbf{x}_1\) and large values of \(\textbf{x}_2\) leads to violations of the \(\textbf{g}_1 < 0\) and \(\textbf{g}_4 < 0\) requirements.

The responses to subproblems C and D should be in agreement. Note, however, that some approaches for addressing subproblem C are unable to find the extreme-case epistemic realisations sought here. One way to record such realisations is sketched below.
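This is a minimal sketch of recording the extreme-case epistemic realisations (rather than only the extreme values) under a crude random search; `J_of_theta` is an assumed callable such as the inner loop of the previous sketch, and a converged optimiser should replace the raw search in practice.

```python
import numpy as np

def extreme_realisations(J_of_theta, theta_lo, theta_hi, n=500, seed=4):
    """Tasks D1/D2: search the epistemic box and return WHICH
    realisations attain the smallest and largest value of J."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(theta_lo, theta_hi, size=(n, len(theta_lo)))
    vals = np.array([J_of_theta(t) for t in thetas])
    return thetas[vals.argmin()], thetas[vals.argmax()]
```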

Subproblem E: Robust Design

In this section we consider the multidisciplinary system having the uncertain parameter \(\textbf{p}\in\mathbb{R}^{21}\) and the design variable \(\textbf{d}\in\mathbb{R}^{14}\) as inputs, and the same \(\textbf{g}\in\mathbb{R}^{8}\) used previously as output. The objective of this subproblem is to identify design points d with improved robustness and reliability characteristics. In particular, we want to find a design point d that:

E1) Minimises the largest value of \(J_1\) for the uncertainty model used in task C4. Provide the resulting value of d and the corresponding range of expected values. In regard to the range of \(J_1\), is the resulting design better than \(\textbf{d}_{\rm{baseline}}\)?

E2) Minimises the largest value of \(J_2\) for the uncertainty model used in task C4. Provide the resulting value of d and the corresponding range of failure probabilities. In regard to the range of \(J_2\), is the resulting design better than \(\textbf{d}_{\rm{baseline}}\)?

E3) Apply the sensitivity analysis of task B2 to the design point found in E1, and the sensitivity analysis of task B3 to the design point found in E2. Compare the rankings with those obtained previously for the baseline design.
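Tasks E1 and E2 are min-max problems: minimise over d the upper end of the range of the metric. The sketch below assumes a `J_upper(d)` helper is available (e.g., built from the double-loop sketch above) and uses a derivative-free optimiser, since the noisy double-loop estimate provides no gradients.

```python
import numpy as np
from scipy.optimize import minimize

def robust_design(J_upper, d0):
    """Tasks E1/E2 as min-max: minimise over the design d the LARGEST
    value of J over the epistemic box. J_upper(d) is assumed to return
    that upper end. Nelder-Mead needs no gradients, which the noisy
    double-loop estimate cannot supply."""
    res = minimize(J_upper, np.asarray(d0, dtype=float),
                   method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})
    return res.x, res.fun   # the robust design point and its worst-case J
```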

3. CURRENT STATE OF MATURITY

This challenge was set to the international community by NASA. Respondents presented approaches to the above problems at a dedicated session of the 16th AIAA Non-Deterministic Approaches Conference, held January 13-17, 2014, at National Harbor, Maryland, USA. Additional details on the conference are available at www.aiaa.org/scitech2014.aspx

References:

[1] http://uqtools.larc.nasa.gov/nda-uq-challenge-problem-2014/