The effectiveness of short-term renewable energy forecasts in reducing the cost of managing the variability of wind and solar power generation depends on both the accuracy of the forecasts and the ability to use the forecast information effectively in an organization’s decision-making processes. Today, stakeholders have considerable motivation to obtain high-quality forecasts and to use the forecast information efficiently and effectively in their operational processes. However, many forecast users lack the skills or experience to conduct a forecast solution evaluation that provides meaningful, application-relevant input to their solution selection process. As a result, many users employ solutions that are far from optimal for their applications.
This issue was addressed in the first phase of IEA Wind Task 36 (2016-2018) through the development of a series of three “recommended practices” documents that provide guidance on the selection and operation of forecasting solutions for power market applications. The first document, entitled “Forecast Solution Selection Process”, provides an overview of the background information that should be collected and evaluated when initiating or renewing a forecasting solution. The second document, entitled “Design and Execution of Benchmarks and Trials”, provides guidance on how to plan and execute forecasting benchmarks and trials. The third document, entitled “Evaluation of Forecasts and Forecast Solutions”, provides information about typical metrics used for the evaluation of forecasts and guidance on how to establish a meaningful and application-relevant verification framework for the evaluation of forecasting solutions. These documents are intended to guide stakeholders who are seeking a forecasting solution that fits a specific purpose and to enable them to implement a solution efficiently and economically. This presentation provides an overview of the key information in the first two documents.
The first part of the presentation, based on the first IEA document, will focus on the key elements to consider when selecting or designing a forecasting solution. It also provides guidance on the process of identifying the most suitable forecasting methodologies for a user’s application. A decision support tool has been developed to guide stakeholders through the design of a user-customized selection process.
The second part of the presentation, based on the material in the second IEA document, will focus on the design and execution of forecasting trials and benchmarks (“t/b”) that are intended to compare the performance of alternative forecast solutions in order to identify the best solution for a user’s application. The first issue addressed is whether a t/b is likely to be an effective approach for the user’s situation. Guidance is then provided on how to optimize each of the three key phases of a t/b: (1) pre-t/b preparation, (2) execution, and (3) post-t/b analysis.
The presentation concludes with a summary of the key points from the first two parts of the three-part series. This summary will include an overview of best practices in each area, guidance on how to apply the information in practice, and a list of “pitfalls to avoid”.