6th International Conference Energy & Meteorology: Abstract Submission

IEA Wind Recommended Practices for Selecting Renewable Power Forecasting Solutions Part 3: Evaluation of Forecasts and Forecast Solutions (743)

Jethro Browell 1 , Corinna Möhrlen 2 , John Zack 3 , Jakob W Messner 4
  1. Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, United Kingdom
  2. WEPROG, Assens, Denmark
  3. UL AWS Truepower, Albany, NY, USA
  4. Anemo Analytics, Hørsholm, Denmark

Objective & Background

We present a report on Recommended Practices (RP), specifically Part 3: Evaluation of Forecasts and Forecast Solutions, from the International Energy Agency’s Wind Task 36 (Wind Power Forecasting). The purpose of Task 36 is to improve the value of wind energy forecasts to the wind industry, and the present report contains recommendations designed to help forecast users make informed decisions relating to the procurement and evaluation of forecast products and services. The RP have been compiled by an expert group representing 53 organisations from 13 countries, including forecast vendors, forecast users and academia, though development of the RP has been led primarily by a small task force collecting the knowledge gained through experience, discussions and workshops organised within the Task.

The evaluation of forecasts and forecast solutions is a requirement for any forecast provider as well as for end-users of forecasts, as significant business decisions are often based on evaluation results. It is therefore crucial to design forecast evaluation exercises with this importance in mind and to ensure that results are both meaningful and representative. Additionally, forecast skill and quality have to be understood in the context of forecast value: that is, the quality of a forecast should be evaluated based on the value it contributes to decision-making processes.


The RP have been developed through the combined experience and expertise of Task 36 members and wide consultation with the wind and energy forecasting industries. This has included engagement at international conferences[1] and a public consultation on a draft RP.

The RP in Part 3 are based on the principle that evaluation results should be:

  1. Representative of true forecast performance that can be expected operationally
  2. Significant in the sense that apparent differences in forecast performance are properties of the forecasting system and not a result of random variation
  3. Relevant to the specific business function for which the forecast service is employed
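The second principle, significance, can be illustrated with a simple resampling check. The sketch below is not part of the RP themselves; it is a minimal, hypothetical example of a paired bootstrap on the absolute errors of two competing forecasts, where a confidence interval for the difference in mean absolute error (MAE) that excludes zero suggests the difference is a property of the forecasting systems rather than random variation in the evaluation sample.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mae_difference(errors_a, errors_b, n_boot=5000):
    """Paired bootstrap of the difference in MAE between two forecasts.

    errors_a and errors_b are forecast errors for the same evaluation
    times. Returns the observed MAE difference (A minus B) and a 95%
    percentile interval; an interval excluding zero indicates the
    difference is unlikely to be random variation in the sample.
    """
    abs_a = np.abs(np.asarray(errors_a, dtype=float))
    abs_b = np.abs(np.asarray(errors_b, dtype=float))
    n = abs_a.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample time indices jointly
        diffs[i] = abs_a[idx].mean() - abs_b[idx].mean()
    observed = abs_a.mean() - abs_b.mean()
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return observed, (lo, hi)
```

Resampling the two error series jointly (by time index) preserves any correlation between the competing forecasts, which a naive unpaired comparison would ignore.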



A set of recommendations is made relating to the design of evaluation exercises (metric selection, evaluation period, fairness and transparency, quantification of uncertainty), data quality (treatment of non-weather effects, IT failures, data quality assurance) and interpretation of results (verification vs metric-based approaches, cost-loss models and the economic impact of “typical” and “extreme” forecast errors, types of error, e.g. phase vs level error).
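As a concrete illustration of metric selection, a small set of complementary deterministic metrics might look like the following. The choice of metrics here (bias, MAE, RMSE) is illustrative only; the RP recommend selecting metrics to match the performance attributes relevant to the application.

```python
import numpy as np

def summary_metrics(forecast, actual):
    """Compute a small, illustrative set of complementary metrics.

    Each metric measures a different performance attribute, which is
    why evaluation should report a set rather than a single score.
    """
    f = np.asarray(forecast, dtype=float)
    a = np.asarray(actual, dtype=float)
    e = f - a
    return {
        "bias": e.mean(),                 # systematic level error
        "mae": np.abs(e).mean(),          # typical error magnitude
        "rmse": np.sqrt((e ** 2).mean()), # penalises large errors
    }
```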

The recommendations may be summarised as follows:

  • Verification is subjective: it is important to understand the limitations of the chosen verification methods and metrics
  • Verification has an inherent uncertainty due to its dependence on the evaluation dataset: every effort should be made to maximise the representativeness of evaluation datasets
  • Evaluation should contain a set of metrics to measure a range of application-relevant performance attributes
  • Evaluation should include an application-specific “cost function” in order to assess the value of the forecasting system
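The final recommendation, an application-specific “cost function”, can be sketched as follows. The example mimics a simple two-price imbalance settlement in which under- and over-forecasting incur different unit costs; the prices and the function itself are hypothetical, not taken from the RP.

```python
import numpy as np

def asymmetric_imbalance_cost(forecast, actual,
                              cost_short=50.0, cost_long=20.0):
    """Illustrative asymmetric cost function (prices are hypothetical).

    Under-forecasting (actual above forecast, system short) is charged
    cost_short per unit of error; over-forecasting (system long) is
    charged cost_long per unit. Returns the mean cost per period.
    """
    error = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    cost = np.where(error > 0, cost_short * error, cost_long * -error)
    return cost.mean()
```

Ranking forecasts by such a cost function can differ from ranking by symmetric metrics like MAE, which is precisely why forecast quality should be assessed in terms of its value to the decision process.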


We present a summary of the RP and some practical information on how best to incorporate the recommendations into practice.


[1] A full list of publications can be found at http://www.ieawindforecasting.dk/publications