We have added 5 new datasets to the ILAMB collection. Please run
ilamb-fetch to update your local collection, and check ilamb.cfg for details on how to include them in your local runs. Alternatively, you may browse some results against a subset of CMIP6 models. The new additions include:
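As a rough sketch of what including a dataset in your configuration looks like, the fragment below follows the general layout of an ILAMB configure file. The heading names, variable, and source path here are placeholders for illustration, not the actual new datasets:

```ini
# Hypothetical ilamb.cfg entry; section names and the source path
# are illustrative only -- substitute the entries for the new datasets.
[h1: Hydrology Cycle]
bgcolor = "#E6F9FF"

[h2: Latent Heat]
variable = "hfls"

[FLUXCOM]
source = "DATA/hfls/FLUXCOM/hfls.nc"
```

Here `[h1: ...]` and `[h2: ...]` define the section and variable groupings that appear in the output pages, and each unbracketed-level entry points at a reference data product via its `source` path relative to your local data root.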
Help us learn what scientists think about model performance by evaluating pairs of model biases on this feedback form. Simply click on whichever bias plot you consider to be ‘better’ in each of the 20 randomly selected pairs. While our intention is for you to select the model with the lower error relative to observations, please use whatever definition of ‘better’ makes sense to you as you examine the differences between the plots. We will use your collective responses to evaluate how well our methodology captures community opinion. For context, each plot represents either the Gross Primary Productivity (gpp), Sensible Heat (hfss), or Surface Air Temperature (tas) bias of a model in the CMIP5 or CMIP6 era, relative to a reference data product. When you are finished, click the ‘complete’ button to see how well your choices align with our current methodologies.
It has been a while since our last release, but ILAMB continues to evolve. Many of the changes are ‘under the hood’ improvements or bug fixes that are not readily seen. In the following, we present a few key changes and draw particular attention to those that will change scores. We have also worked to make ILAMB ready to integrate with tools being developed as part of the Coordinated Model Evaluation Capabilities (CMEC).
Check out our news archive for older posts.