Cross-calibration of probabilistic forecasts
From MaRDI portal
Abstract: When providing probabilistic forecasts for uncertain future events, it is common to strive for calibrated forecasts, that is, the predictive distribution should be compatible with the observed outcomes. Several notions of calibration are available in the case of a single forecaster, along with diagnostic tools and statistical tests to assess calibration in practice. Often, there is more than one forecaster providing predictions, and these forecasters may use information from the others and thereby influence one another. We extend common notions of calibration, where each forecaster is analysed individually, to notions of cross-calibration, where each forecaster is analysed with respect to the other forecasters in a natural way. It is shown theoretically and in simulation studies that cross-calibration is a stronger requirement on a forecaster than calibration. Analogously to calibration for individual forecasters, we provide diagnostic tools and statistical tests to assess forecasters in terms of cross-calibration. The methods are illustrated in simulation examples and applied to probabilistic forecasts for inflation rates issued by the Bank of England.
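For a single forecaster, a standard diagnostic for the kind of calibration discussed in the abstract is the probability integral transform (PIT): if the predictive distributions are probabilistically calibrated, the PIT values of the observed outcomes are uniform on [0, 1]. The following is a minimal illustrative sketch of that classical check (not the paper's cross-calibration method), using a simulated Gaussian data-generating process and a hypothetical overconfident forecaster for contrast:

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(0)

# Simulated setting (assumption, for illustration only): outcomes are drawn
# from N(mu_t, 1), where the location mu_t varies over time.
mu = rng.normal(size=2000)
y = rng.normal(loc=mu)  # observed outcomes, one per period

# The "ideal" forecaster issues the true predictive distribution N(mu_t, 1);
# its PIT values F_t(y_t) should be uniform on [0, 1].
pit_ideal = norm.cdf(y, loc=mu, scale=1.0)

# A hypothetical overconfident forecaster uses a too-narrow spread,
# which concentrates PIT values near 0 and 1 (a U-shaped PIT histogram).
pit_over = norm.cdf(y, loc=mu, scale=0.5)

# A Kolmogorov-Smirnov test against the uniform distribution serves as a
# simple formal calibration check alongside the PIT histogram.
p_ideal = kstest(pit_ideal, "uniform").pvalue
p_over = kstest(pit_over, "uniform").pvalue
```

In this sketch, `p_over` is essentially zero while `p_ideal` is not, flagging the overconfident forecaster as miscalibrated. The paper's contribution is a stronger requirement: a forecaster should remain calibrated even conditional on the other forecasters' predictions, which individual PIT checks like the one above cannot detect.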
Cited in (18)
- A score regression approach to assess calibration of continuous probabilistic predictions
- Adjusting for information content when comparing forecast performance
- On the usefulness of cross-validation for directional forecast evaluation
- Determining the MSE-optimal cross section to forecast
- Regression diagnostics meets forecast evaluation: conditional calibration, reliability diagrams, and coefficient of determination
- Generic Conditions for Forecast Dominance
- Forecast dominance testing via sign randomization
- Early warning with calibrated and sharper probabilistic forecasts
- Calibration tests for multivariate Gaussian forecasts
- On the ordering of probability forecasts
- Sequentially valid tests for forecast calibration
- Testing Multiple Forecasters
- Forecaster's dilemma: extreme events and forecast evaluation
- Copula calibration
- Elicitability and identifiability of set-valued measures of systemic risk
- Probabilistic Forecasts, Calibration and Sharpness
- Comparing forecasting performance in cross-sections
- Calibrated forecasting and merging
MaRDI item: Q521318