In a recent paper, University of Sheffield researchers Mark Strong and Jeremy Oakley describe a technique for incorporating judgments about structural uncertainty into a model, the uncertainty that stems from building an "incorrect" model.
"Perhaps the hardest problem in assessing uncertainty in a computer model prediction is to quantify uncertainty about the model structure, particularly when models are used to predict in the absence of data," Oakley notes. "The methodology in this paper can help model users prioritize where improvements are needed in a model to provide more robust support to decision making."
Two sources of uncertainty are encountered when making predictions with computer models: uncertainty in the model inputs and uncertainty in the model structure. A standard approach to managing structural uncertainty is model averaging, in which the predictions of a number of plausible models are averaged with weights based on each model's likelihood or predictive ability.
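As a minimal sketch of the model-averaging idea described above (illustrative only, not the authors' code), predictions from several plausible models can be combined with weights proportional to each model's likelihood:

```python
import numpy as np

def model_average(predictions, log_likelihoods):
    """Combine model predictions using likelihood-based weights.

    predictions: per-model predictions, shape (n_models,)
    log_likelihoods: per-model log-likelihoods, shape (n_models,)
    """
    log_l = np.asarray(log_likelihoods, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    weights = np.exp(log_l - log_l.max())
    weights /= weights.sum()
    return float(np.dot(weights, np.asarray(predictions, dtype=float)))

# Two models with equal likelihood contribute equally to the average.
print(model_average([1.0, 3.0], [0.0, 0.0]))
```

A model with a much higher likelihood dominates the weighted average, so the combined prediction tracks the better-supported model.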
Meanwhile, model calibration assesses a model based on its external discrepancies. Strong and Oakley's method instead rests on internal discrepancies, analyzed by first decomposing the model into a series of sub-functions and then judging, for each sub-function, how certain it is that the sub-function's output would equal the true value of the corresponding real-world quantity.
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA