What is it about?

Two sensitivity studies of binary forecast verification scores are examined. The first looks at how scores respond to random changes of forecasts from truly binary systems. The second looks at the sensitivity to the chosen threshold when the binary forecast is obtained by thresholding a continuous forecast parameter.

Why is it important?

Did you know that some binary forecast verification scores can be improved even by random changes? When you select a threshold on a continuous forecast parameter to optimize a given verification score, you should know what you are really doing. When you optimize BIAS, you want the forecast and observed counts to be the same. When you maximize the Peirce Skill Score, you want the forecast posterior probability to be equal to or larger than the event prior probability (base rate) when you issue a YES. When you optimize Percent Correct, you want to forecast an event when the forecast probability exceeds 0.5. On the other hand, it is not clear what it means to maximize CSI or HSS, nor how that choice depends on the event prior probability.
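The scores named above can all be computed from a 2x2 contingency table of hits, false alarms, misses, and correct negatives. A minimal sketch (not taken from the paper; the function name and example counts are hypothetical) using their standard definitions:

```python
# Common binary verification scores from a 2x2 contingency table:
# a = hits, b = false alarms, c = misses, d = correct negatives.
def scores(a, b, c, d):
    n = a + b + c + d
    bias = (a + b) / (a + c)         # frequency bias: forecast count / observed count
    pc = (a + d) / n                 # Percent Correct (proportion correct)
    csi = a / (a + b + c)            # Critical Success Index (threat score)
    pss = a / (a + c) - b / (b + d)  # Peirce Skill Score: hit rate - false alarm rate
    # Heidke Skill Score: accuracy relative to the number correct by chance.
    e = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    hss = (a + d - e) / (n - e)
    return {"BIAS": bias, "PC": pc, "CSI": csi, "PSS": pss, "HSS": hss}

# Hypothetical example counts:
print(scores(30, 10, 20, 40))
# BIAS = 0.8, PC = 0.7, CSI = 0.5, PSS = 0.4, HSS = 0.4
```

Sweeping the threshold on a continuous forecast parameter changes these four counts, so each score reaches its optimum at a different threshold, which is the sensitivity the paper studies.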

Perspectives

Would you like to learn a little more about this issue? Have a look at the paper... Enjoy!

Agostino Manzato
ARPA FVG - OSMER

Read the Original

This page is a summary of: Behaviour of verification measures for deterministic binary forecasts with respect to random changes and thresholding, Quarterly Journal of the Royal Meteorological Society, April 2017, Wiley,
DOI: 10.1002/qj.3050.
