Researchers are working on the systematic evaluation of climate models



When evaluating climate models, experts typically weigh a set of criteria to arrive at an overall assessment of model fidelity. They use their knowledge of the physical system and their scientific objectives to judge the relative importance of different aspects of the models in the presence of tradeoffs. Burrows et al. (2018) show that climate scientists adjust the importance they assign to different aspects of a simulation depending on the scientific question the model will be used to address. Their research also shows that the degree of expert consensus on importance differs across model variables. Credit: Advances in Atmospheric Sciences

A research team based at the Pacific Northwest National Laboratory in Richland, Washington, has published the results of an international survey assessing the relative importance climate scientists assign to different variables when judging how well a climate model simulates the real climate. The results, which have significant implications for studies that use these models, were published in Advances in Atmospheric Sciences on June 22, 2018.

"Climate modellers are devoting a lot of effort to calibrating some model parameters to find a model version that simulates the Earth's observed climate," said Susannah Burrows, first author of the paper and a scientist with the Pacific Northwest National Laboratory. specializes in the analysis and modeling of terrestrial systems.

However, Burrows notes, there are few systematic studies of how experts prioritize variables such as cloud cover or sea ice when judging the performance of climate models.

"Different people might come up with slightly different ratings of the" good "quality of a particular model, depending on the aspects on which they place the most importance," Mr. Burrows said.

One model, for example, may simulate sea ice better, while another excels at simulating clouds. Every scientist must strike a balance among competing priorities and goals, something that is difficult to capture consistently in data analysis tools.

"In other words, there is not a single totally objective definition of what makes a" good "climate model, and this fact is an obstacle to developing approaches and strategies. more systematic tools to facilitate assessments and comparisons, "said Burrows.

From a survey of 96 participants representing the climate modeling community, the researchers found that experts take specific scientific goals into account when evaluating a variable's importance. They found a high degree of consensus that certain variables matter for certain studies, such as precipitation and evaporation when evaluating the Amazonian water cycle. This agreement did not extend to other variables, such as the importance of accurately simulating surface winds when studying the Asian water cycle.

According to Burrows, it is important to understand these discrepancies and to develop more systematic approaches to model evaluation, because each new version of a climate model must undergo extensive evaluation and calibration by multiple developers and users. This labor-intensive process can take more than a year.

Tuning, while designed to maintain a rigorous standard, requires experts to make compromises between competing priorities. A model may be calibrated toward one scientific goal at the expense of another.
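To illustrate the kind of tradeoff described above, here is a minimal sketch of how expert-assigned weights can change which model version scores best overall. The variables, skill values, and weights are purely hypothetical and are not taken from the study; this is only a toy weighted-average score, not the researchers' actual evaluation method.

```python
def overall_score(skill, weights):
    """Weighted average of per-variable skill scores (0 = poor, 1 = perfect)."""
    total = sum(weights.values())
    return sum(skill[var] * w for var, w in weights.items()) / total

# Illustrative per-variable skill for two hypothetical model versions:
# model A simulates sea ice well, model B simulates clouds well.
model_a = {"sea_ice": 0.9, "clouds": 0.6, "precipitation": 0.7}
model_b = {"sea_ice": 0.6, "clouds": 0.9, "precipitation": 0.7}

# Two hypothetical experts who prioritize different aspects of the climate.
polar_expert = {"sea_ice": 3, "clouds": 1, "precipitation": 1}
cloud_expert = {"sea_ice": 1, "clouds": 3, "precipitation": 1}

# The same two models, ranked differently depending on who is judging.
print(overall_score(model_a, polar_expert))  # 0.80 -> polar expert prefers A
print(overall_score(model_b, polar_expert))  # 0.68
print(overall_score(model_a, cloud_expert))  # 0.68
print(overall_score(model_b, cloud_expert))  # 0.80 -> cloud expert prefers B
```

The point of the sketch is that neither ranking is wrong: each is a defensible aggregation of the same underlying skill scores, which is exactly why a single "objective" definition of model quality is elusive.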

Burrows is a member of an interdisciplinary PNNL research team that is working on developing a more systematic solution to this problem of evaluation. The team includes Aritra Dasgupta, Lisa Bramer and Sarah Reehl, experts in data science and visualization, and Yun Qian, Po-Lun Ma, and Phil Rasch, experts in climate science.

To help climate modelers understand these tradeoffs more clearly and efficiently, the visualization researchers are building intuitive, interactive visual interfaces that let modelers summarize and explore complex information about different aspects of model performance.

The data scientists are working to further characterize expert assessment of climate models, building on the results of the initial survey. Ultimately, the researchers aim to combine quantitative metrics with human expertise to assess how well climate models suit specific scientific goals, and to predict where experts will agree or disagree with that assessment.

"[We plan] to combine the best of both worlds, using computer science to reduce manual effort and allowing scientists to more effectively apply their human acumen and judgment where it is needed most ", said Mr. Burrows.



More information:
Susannah M. Burrows et al., Characterizing the relative importance assigned to physical variables by climate scientists when assessing atmospheric climate model fidelity, Advances in Atmospheric Sciences (2018). DOI: 10.1007/s00376-018-7300-x

Provided by:
Chinese Academy of Sciences
