The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
Contact: Matt Butchers, The Knowledge Transfer Network, matt.butchers@ktnuk.org

Sector: Aeronautics
This use case is extracted and adapted from the NASA LaRC UQ Challenge; full details are given in Ref [1].

I. Introduction

NASA missions often involve the development of new vehicles and systems that must be designed to operate in harsh domains with a wide array of operating conditions. These missions involve high-consequence and safety-critical systems for which quantitative data is either very sparse or prohibitively expensive to collect. Limited heritage data may exist, but it is usually sparse and may not be directly applicable to the system of interest, making uncertainty quantification extremely challenging. NASA modelling and simulation standards require estimates of uncertainty and descriptions of any processes used to obtain these estimates. To better meet this standard, NASA recently sought responses to this challenge problem to address the following:
- Modelling and refinement of uncertainty given sparse data
- Propagation of mixed aleatory and epistemic uncertainties through system models
- Parameter ranking and sensitivity analysis in the presence of uncertainty
- Identification of the parameters whose uncertainty is the most / least consequential
- Worst-case system performance assessment
- Design in the presence of uncertainty
II. Formulation

The relationship between the inputs p and d, and the output g, is given by several functions, each representing a different subsystem or discipline. The function prescribing the output of the multidisciplinary system is given by $$\textbf{g} = \textbf{f}(\textbf{x},\textbf{d}), (1)$$ where x is a set of intermediate variables whose dependence on p is given by $$\textbf{x} = \textbf{h}(\textbf{p}). (2)$$ The components of x, which can be interpreted as outputs of the fixed discipline analyses in (2), are the inputs to the cross-discipline analyses in (1). The components of g and x are continuous functions of the inputs that prescribe them.

III. The Physical System

Figure 1. NASA GTM Problem Structure

This challenge problem is based upon a model of the NASA Langley Generic Transport Model (GTM); see [2] and [3]. The GTM is a 5.5 % dynamically scaled, remotely piloted, twin-turbine research aircraft used to conduct experiments for the NASA Aviation Safety Program. Although a discipline-specific application is the focus of the challenge problem, the problem was specifically structured so that specialised aircraft knowledge is not required. Eight stability and performance requirements are imposed upon the system. These requirements are representative of conventional measures of goodness typically used in aircraft stability and control, e.g., lateral and longitudinal stability, good command tracking, actuator saturation, etc. The mathematical model S describes the dynamics of the GTM, a remotely operated twin-jet aircraft developed by NASA Langley Research Center. The aircraft is piloted from a ground station via radio-frequency links using onboard cameras and synthetic vision technology.
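The two-stage structure of Equations (1) and (2) can be sketched with placeholder functions. The real h and f were distributed by NASA as black-box software, so every function body below is a hypothetical stand-in; only the dimensions of p, d and g come from the problem statement, and the dimension of x is assumed purely for illustration.

```python
import numpy as np

P_DIM, D_DIM, G_DIM = 21, 14, 8  # dimensions from the problem statement
X_DIM = 5                        # assumed here purely for illustration

def h(p):
    """Stand-in for the fixed discipline analyses x = h(p) of Eq. (2)."""
    p = np.asarray(p, dtype=float)
    # each intermediate variable mixes a few parameters nonlinearly
    return np.array([np.sin(p[4 * i:4 * i + 4]).sum() + p[(4 * i + 5) % P_DIM] ** 2
                     for i in range(X_DIM)])

def f(x, d):
    """Stand-in for the cross-discipline analyses g = f(x, d) of Eq. (1)."""
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    return np.tanh(x.sum() - d[:G_DIM])

def g(p, d):
    """The composed system output of Eqs. (1)-(2)."""
    return f(h(p), d)

g_baseline = g(np.full(P_DIM, 0.5), np.zeros(D_DIM))
print(g_baseline.shape)  # (8,)
```

The point of the sketch is only the composition: respondents never see inside h or f, they can only evaluate the chain p, d → x → g.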
The parameters in p are used to describe losses in control effectiveness and time delays resulting from telemetry and communications, as well as to model a spectrum of flying conditions that extend beyond the normal flying envelope. The requirements in g are used to describe the vehicle's stability and performance characteristics in regard to pilot command tracking and handling / riding qualities. The "black box" format of the formulation of this challenge problem aims at making the problem amenable to the largest possible audience without favouring or hindering respondents depending upon their particular field of expertise.

IV. The Objectives

Uncertainty Characterisation:
- Advance methods for the refinement of uncertainty models using limited experimental data
- Refine uncertainty models given the following:
  - A mapping from parameters to output
  - Limited "truth data"
  - An initial representation of the uncertainty model:
    - Aleatory uncertainties modelled as random variables with fixed functional forms and known coefficients (Class I)
    - Epistemic uncertainties modelled as fixed but unknown constants within prescribed intervals (Class II)
    - Aleatory uncertainties modelled as distributional probability boxes, with their parameters modelled as intervals (Class III)
- Develop methods for the identification of critical parameters from within a multidisciplinary parameter space. This objective is similar to classical sensitivity analysis, but here the metrics of interest are probabilistic and the uncertain parameters belong to diverse classes of uncertainty models.
  - We consider this an open problem with great practical significance.
- Deploy approaches for propagating uncertainties in multidisciplinary systems subject to both aleatory and epistemic uncertainty. This will involve the computation of ranges of both failure probabilities and expected values.
  - This objective exploits the fact that very few tools exist to propagate mixed aleatory and epistemic uncertainties through general nonlinear systems. Their applicability and practicality remain to be demonstrated.
- Identify the combinations of uncertainties that lead to best- and worst-case outcomes according to two probabilistic measures of interest.
- Determine design points that provide optimal worst-case probabilistic performance.
Subproblem A: Uncertainty Characterisation
Here we consider a subsystem S whose scalar output \(\textbf{x}_1\) depends on five uncertain parameters (b), as given by $$\textbf{x}_1 = \textbf{h}_1(\textbf{p}_1,\textbf{p}_2,\textbf{p}_3,\textbf{p}_4,\textbf{p}_5). (3)$$
Specific information on those parameters is provided in Table 1. The first column provides the parameter's symbol, the second one its category (see above for a description of the categories), and the third one describes its uncertainty model (c). While the symbol \(\Delta\) denotes the support set or parameter range, \(\rho\), E[\(\cdot\)], V[\(\cdot\)], and P[\(\cdot\)] denote the correlation, expected value, variance, and probability operators respectively. In this subproblem, the tasks of interest are as follows.
A1) We provide software to evaluate \(\textbf{h}_1\) and n = 25 observations of \(\textbf{x}_1\) corresponding to the "true uncertainty model", i.e., a model where \(\textbf{p}_1\) is a fully prescribed Beta random variable, \(\textbf{p}_2\) is a fixed constant, and \(\textbf{p}_4\) and \(\textbf{p}_5\) are described by a single, possibly correlated, bivariate Normal distribution. Use this information to improve the uncertainty model of the category II and III parameters (refer to Section II for the definition of a reduced/improved uncertainty model). The resulting models should only exclude the elements of the original models that fail to explain the observations.
A2) Use an additional n = 25 observations to validate the models found in A1.
A3) Improve the uncertainty models further by using all 50 available samples.
A4) Assess the effect of the number of observations n on the fidelity of the resulting uncertainty models. How much better is the model found in A3 compared to the model found in A1?
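One simple way to attack tasks A1-A4 is rejection of epistemic candidates: sample realisations of the category II-III descriptors from their prior ranges, push aleatory samples through \(\textbf{h}_1\) for each candidate, and keep only the candidates whose simulated output is consistent with the observations, so that the refined model excludes exactly what the data rules out. The sketch below illustrates the idea with a toy h1 and fabricated "observations" (the real h1 and truth data were NASA-supplied); the moment-matching acceptance test and all numeric thresholds are ad hoc choices, and correlation between p4 and p5 is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def h1(p1, p2, p3, p4, p5):
    """Toy stand-in for the black-box h1 of Eq. (3)."""
    return p1 * np.exp(0.1 * p4) + p2 * p3 + 0.05 * p5

def beta_ab(mean, var):
    """Convert a (mean, variance) pair to Beta shape parameters (a, b)."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

# Fabricated "truth" used only to create n = 25 observations for this sketch.
a_t, b_t = beta_ab(0.7, 0.03)
obs = h1(rng.beta(a_t, b_t, 25), 0.4, rng.uniform(0, 1, 25),
         rng.normal(1.0, 0.5, 25), rng.normal(-1.0, 0.5, 25))

kept = []
for _ in range(2000):                      # candidate epistemic realisations
    E1 = rng.uniform(3 / 5, 4 / 5)         # Table 1 bounds on E[p1]
    V1 = rng.uniform(1 / 50, 1 / 25)       # ... and on V[p1]
    p2 = rng.uniform(0, 1)                 # category II interval
    m4, m5 = rng.uniform(-5, 5, 2)         # Normal means for p4, p5
    s4, s5 = np.sqrt(rng.uniform(1 / 400, 4, 2))
    a1, b1 = beta_ab(E1, V1)
    sim = h1(rng.beta(a1, b1, 200), p2, rng.uniform(0, 1, 200),
             rng.normal(m4, s4, 200), rng.normal(m5, s5, 200))
    # crude consistency check: match the first two moments of the data
    if abs(sim.mean() - obs.mean()) < 0.15 and abs(sim.std() - obs.std()) < 0.15:
        kept.append((E1, V1, p2, m4, m5))

kept = np.array(kept)
print("refined E[p1] interval:", kept[:, 0].min(), kept[:, 0].max())
```

The surviving candidates define the reduced uncertainty model; comparing the spread of `kept` for n = 25 versus n = 50 observations is the essence of task A4.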
Table 1. Uncertain parameters.

| Symbol | Category | Uncertainty Model |
|---|---|---|
| \(p_1\) | III | Unimodal Beta, \(3/5 \leq E[p_1] \leq 4/5\), \(1/50 \leq V[p_1] \leq 1/25\) |
| \(p_2\) | II | \(\Delta = [0, 1]\) |
| \(p_3\) | I | Uniform, \(\Delta = [0, 1]\) |
| \(p_4\), \(p_5\) | III | Normal, \(-5 \leq E[p_i] \leq 5\), \(1/400 \leq V[p_i] \leq 4\), \(|\rho| \leq 1\), for \(i = 4, 5\) |
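A category III model like the one for \(p_1\) is a distributional p-box: every (E, V) pair in the given intervals selects one Beta CDF, and the pointwise envelope of that family bounds the distribution. The sketch below builds that envelope numerically; the empirical-CDF shortcut and the grid resolutions are arbitrary implementation choices, not part of the challenge.

```python
import numpy as np

def beta_ecdf(a, b, xs, n=20000):
    """Empirical CDF of a Beta(a, b) sample, evaluated on the grid xs."""
    s = np.sort(np.random.default_rng(1).beta(a, b, n))
    return np.searchsorted(s, xs, side="right") / n

xs = np.linspace(0.01, 0.99, 99)
lo, hi = np.ones_like(xs), np.zeros_like(xs)
for E in np.linspace(3 / 5, 4 / 5, 9):        # Table 1 interval on E[p1]
    for V in np.linspace(1 / 50, 1 / 25, 9):  # Table 1 interval on V[p1]
        k = E * (1.0 - E) / V - 1.0           # (mean, var) -> Beta shapes
        cdf = beta_ecdf(E * k, (1.0 - E) * k, xs)
        lo, hi = np.minimum(lo, cdf), np.maximum(hi, cdf)

# [lo, hi] now bound every CDF in the family: the p-box of p1
print("envelope width at x = 0.7:", float(hi[69] - lo[69]))
```

Refining the uncertainty model (tasks A1-A4) shrinks the (E, V) intervals and therefore narrows this envelope.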
We now consider the multidisciplinary system S having the input p \(\in\mathbb{R}^{21}\) and the output g \(\in\mathbb{R}^{8}\). The first 5 input parameters should be modelled using the results from task A3, while the remaining 16 parameters are given in Table 2.
Table 2. Uncertain parameters.

| Symbol | Category | Uncertainty Model |
|---|---|---|
| \(\textbf{p}_6\) | II | \(\Delta = [0, 1]\) |
| \(\textbf{p}_7\) | III | Beta, \(0.982 \leq a \leq 3.537\), \(0.619 \leq b \leq 1.080\) |
| \(\textbf{p}_8\) | III | Beta, \(7.450 \leq a \leq 14.093\), \(4.285 \leq b \leq 7.864\) |
| \(\textbf{p}_9\) | I | Uniform, \(\Delta = [0, 1]\) |
| \(\textbf{p}_{10}\) | III | Beta, \(1.520 \leq a \leq 4.513\), \(1.536 \leq b \leq 4.750\) |
| \(\textbf{p}_{11}\) | I | Uniform, \(\Delta = [0, 1]\) |
| \(\textbf{p}_{12}\) | II | \(\Delta = [0, 1]\) |
| \(\textbf{p}_{13}\) | III | Beta, \(0.412 \leq a \leq 0.737\), \(1.000 \leq b \leq 2.068\) |
| \(\textbf{p}_{14}\) | III | Beta, \(0.931 \leq a \leq 2.169\), \(1.000 \leq b \leq 2.407\) |
| \(\textbf{p}_{15}\) | III | Beta, \(5.435 \leq a \leq 7.095\), \(5.287 \leq b \leq 6.945\) |
| \(\textbf{p}_{16}\) | II | \(\Delta = [0, 1]\) |
| \(\textbf{p}_{17}\) | III | Beta, \(1.060 \leq a \leq 1.662\), \(1.000 \leq b \leq 1.488\) |
| \(\textbf{p}_{18}\) | III | Beta, \(1.000 \leq a \leq 4.266\), \(0.553 \leq b \leq 1.000\) |
| \(\textbf{p}_{19}\) | I | Uniform, \(\Delta = [0, 1]\) |
| \(\textbf{p}_{20}\) | III | Beta, \(7.530 \leq a \leq 13.492\), \(4.711 \leq b \leq 8.148\) |
| \(\textbf{p}_{21}\) | III | Beta, \(0.421 \leq a \leq 1.000\), \(7.772 \leq b \leq 29.621\) |
Subproblem B: Sensitivity Analysis

B1) Rank the 4 category II-III parameters entering Equation (3) according to the degree of refinement in the p-box of \(\textbf{x}_1\) which one could hope to obtain by refining their uncertainty models. Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters, and set their corresponding constant values. Do the same for the 4 category II-III parameters prescribing \(\textbf{x}_2\), \(\textbf{x}_3\) and \(\textbf{x}_4\).
B2) Rank the 17 category II-III parameters of Tables 1 and 2 according to the reduction in the range of the expected value $$ J_1 = E[w(\textbf{p},\textbf{d}_{\rm{baseline}})],~~~(8)$$
which one could hope to obtain by refining their uncertainty models. In this expression, $$ w(\textbf{p},\textbf{d}) = \max_{1\leq i \leq 8}\textbf{ g}_i = \max_{1\leq i \leq 8} \textbf{f}_i(\textbf{h}(\textbf{p}),\textbf{d}),~~~ (9)$$
is the worst-case requirement metric. Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters, and set their corresponding constant values.
B3) Rank the 17 category II-III parameters of Tables 1 and 2 according to the reduction in the range of the failure probability $$ J_2 = 1 - P[w(\textbf{p},\textbf{d}_{\rm{baseline}})< 0],~~~(10)$$ which one could hope to obtain by refining their uncertainty models (d). Are there any parameters that can be assumed to take on a fixed constant value without incurring significant error? If so, evaluate/bound this error, list which parameters, and set their corresponding constant values.
Compare the above rankings and the eventual parameter simplifications. Note that while the tasks in B1 are of interest to experts in the disciplines modelled by h, the tasks in B2 and B3 are of interest to analysts of the integrated system. Notice further that each ranking can be used to determine the key parameters whose uncertainty model we want to improve.
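A common way to produce such rankings is "pinching": collapse one epistemic parameter to a point value, recompute the range of the metric, and rank parameters by how much the range shrinks. The sketch below applies the idea to a toy scalar metric over a 4-dimensional epistemic box; the function J, the box bounds, and the random-search range estimator are all invented for illustration (the real tasks require a full aleatory propagation inside each range evaluation).

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented epistemic box: 4 parameters with bounds of the kind seen in Table 1.
lo_b = np.array([0.0, -5.0, 3 / 5, 1 / 400])
hi_b = np.array([1.0, 5.0, 4 / 5, 4.0])

def J(e):
    """Toy epistemic-to-metric map standing in for e -> J1(e)."""
    return np.sin(e[0]) + 0.3 * e[1] + 2.0 * e[2] ** 2 + 0.1 * e[3]

def metric_range(lo, hi, n=4000):
    """Approximate the range of J over the box [lo, hi] by random search."""
    E = lo + (hi - lo) * rng.random((n, lo.size))
    vals = np.array([J(e) for e in E])
    return vals.max() - vals.min()

base = metric_range(lo_b, hi_b)
reduction = {}
for i in range(4):                 # pinch parameter i to its interval midpoint
    lo, hi = lo_b.copy(), hi_b.copy()
    lo[i] = hi[i] = 0.5 * (lo_b[i] + hi_b[i])
    reduction[i] = base - metric_range(lo, hi)

ranking = sorted(reduction, key=reduction.get, reverse=True)
print("most influential first:", ranking)
```

Parameters whose pinching barely changes the range are the candidates for fixing at a constant value; the induced error can be bounded by the residual range difference.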
Footnotes
(b) The components of vector quantities and vector functions will be specified as subscripts, e.g., the scalar \(\textbf{p}_1\) is the first component of p, while \(\textbf{h}_1\) is the first function of h.
(c) A random variable whose probability density function (PDF) has a single peak at the interior of the support set will be called unimodal.
(d) Note that \(J_2\) is equal to \(P[\cup^8_{i=1}\{\textbf{p} : \textbf{f}_i(\textbf{h}(\textbf{p}),\textbf{d}_{\rm{baseline}}) > 0\}]\).
Subproblem C: Uncertainty Propagation
This subproblem aims at finding the range of the metrics \(J_1\) and \(J_2\) in Equations (8) and (10) that results from propagating both the original uncertainty model and a reduced one. The challengers will provide each respondent with a reduced uncertainty model in which 4 out of the 17 category II and III parameters of the respondent’s choice have been improved. This is a practical limitation that may stem from having a limited amount of time or money to generate better models. In particular, the tasks of interest are as follows:
C1) Find the range of \(J_1\) corresponding to an uncertainty model based on your response to A3 and the information in Table 2.
C2) Find the range of \(J_2\) corresponding to an uncertainty model based on your response to A3 and the information in Table 2.
C3) Select 4 category II-III parameters out of the 17 available according to the rankings in B2 and B3, and request from us an improved uncertainty model for them. While improved models for any four parameters will likely lead to smaller ranges of variation, the set of 4 leading to the smallest ranges is ideal. Each working group may request a reduced uncertainty model of 4 parameters of their choice. Only one set of reduced parameters will be provided to each working group.
C4) Use the reduced uncertainty model to recalculate the ranges of \(J_1\) and \(J_2\).
A cautionary note on the approaches used to calculate the ranges of \(J_1\) and \(J_2\) is in order. Methods to calculate these ranges may lead to under-predictions or over-predictions of the actual range. Each of these two outcomes has its own drawbacks. An under-prediction (e.g., a situation where the search for the end points of the range fails to converge to a global optimum) can lead the decision maker to the wrong decision (e.g., the estimate of the largest failure probability is half of the actual value). An over-prediction (e.g., a situation resulting from replacing a distributional p-box with a free p-box) can not only lead the decision maker to the wrong decision (e.g., the estimate of the largest failure probability is twice the actual value) but also prevent them from making any decision (e.g., the predicted range of failure probabilities covers the entire [0, 1] interval).
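The standard (if costly) approach to computing these ranges is a double loop: an outer search over epistemic realisations and, for each, an inner Monte Carlo over the aleatory variables. A minimal sketch, with an invented scalar w and a one-dimensional epistemic interval standing in for the real 17-dimensional model:

```python
import numpy as np

rng = np.random.default_rng(3)

def w(p, theta):
    """Toy worst-case metric w(p, d_baseline); theta is the epistemic
    realisation (here, an unknown shift of the aleatory variable)."""
    return p + theta - 1.0

def failure_prob(theta, n=20000):
    """Inner (aleatory) loop: Monte Carlo estimate of J2 = P[w >= 0]."""
    p_samples = rng.normal(0.0, 0.3, n)
    return float(np.mean(w(p_samples, theta) >= 0.0))

# Outer (epistemic) loop: sweep theta over its interval and track the envelope.
thetas = np.linspace(0.2, 1.2, 41)
probs = [failure_prob(t) for t in thetas]
print("range of J2: [%.3f, %.3f]" % (min(probs), max(probs)))
```

A grid sweep only works for very low-dimensional epistemic spaces; for the full problem the outer loop must be an optimiser, which is exactly where the under-prediction risk discussed above enters.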
Subproblem D: Extreme Case Analysis
This subproblem aims at identifying the epistemic realisations that prescribe the extreme values of \(J_1\) and \(J_2\). In particular, we want to:
D1) Find the epistemic realisations of the category II and III parameters that yield the smallest and the largest value of \(J_1\) for the original uncertainty model used in C1. Do the same for the reduced uncertainty model used in C4.
D2) Find the epistemic realisations of the category II and III parameters that yield the smallest and the largest value of \(J_2\) for the original uncertainty model used in C2. Do the same for the reduced uncertainty model used in C4.
D3) Identify a few representative realisations of x leading to \(J_2\) > 0. These realisations should typify different failure scenarios. Describe the corresponding relationships between x and g quantitatively, e.g., the combination of small values of \(\textbf{x}_1\) and large values of \(\textbf{x}_2\) leads to violations of the \(\textbf{g}_1 < 0\) and \(\textbf{g}_4 < 0\) requirements.
The responses to subproblems C and D should be in agreement. Note, however, that some approaches for addressing subproblem C are unable to find the extreme-case epistemic realisations sought here.
Subproblem E: Robust Design
In this section we consider the multidisciplinary system having the uncertain parameter \(\textbf{p}\in\mathbb{R}^{21}\) and the design variable \(\textbf{d}\in\mathbb{R}^{14}\) as inputs, and the same \(\textbf{g}\in\mathbb{R}^{8}\) used previously as output. The objective of this subproblem is to identify design points d with improved robustness and reliability characteristics. In particular, we want to find a design point d that:
E1) Minimises the largest value of \(J_1\) for the uncertainty model used in task C4. Provide the resulting value of d and the corresponding range of expected values. In regard to the range of \(J_1\), is the resulting design better than \(\textbf{d}_{\rm{baseline}}\)?
E2) Minimises the largest value of \(J_2\) for the uncertainty model used in task C4. Provide the resulting value of d and the corresponding range of failure probabilities. In regard to the range of \(J_2\), is the resulting design better than \(\textbf{d}_{\rm{baseline}}\)?
E3) Apply the sensitivity analysis of task B2 to the design point found in E1, and the sensitivity analysis of task B3 to the design point found in E2. Compare the rankings with those for the baseline design performed previously.
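Tasks E1-E2 are minimax problems: choose d to minimise the worst value of the metric over the epistemic set. A grid-search sketch on an invented scalar metric (the real problem has d in \(\mathbb{R}^{14}\) and needs a proper optimiser wrapped around the double loop of subproblem C):

```python
import numpy as np

def J1(d, theta):
    """Invented scalar metric standing in for the worst-case expected value."""
    return (d - 0.3) ** 2 + theta * d

thetas = np.linspace(0.0, 1.0, 101)       # epistemic realisations
designs = np.linspace(-1.0, 1.0, 201)     # candidate scalar designs
worst = np.array([max(J1(d, t) for t in thetas) for d in designs])
d_star = designs[worst.argmin()]          # minimax (robust) design
print("robust design:", round(float(d_star), 3),
      "worst-case J1:", round(float(worst.min()), 4))
```

Note how the robust optimum trades nominal performance for insensitivity: the design that minimises J1 at a single nominal theta is generally not the one that minimises the worst case over the whole interval.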
This challenge was set to the international community by NASA. Respondents presented approaches to the above problems at a dedicated session of the 16th AIAA Non-Deterministic Approaches Conference, held January 13-17, 2014 at National Harbor, Maryland, USA. Additional details on the conference are available at www.aiaa.org/scitech2014.aspx
[1] http://uqtools.larc.nasa.gov/ndauqchallengeproblem2014/