
testing quality of forecast #1

Open
turgeonmaxime opened this issue Nov 2, 2018 · 1 comment

Comments

@turgeonmaxime
Member

If we are to put a forecasting tool in the hands of non-technical users, we need some way of automatically assessing the quality of the forecasts and telling the user when they need to reach out for support. This can be tricky: we want to be accurate, but we don't want too many false positives...

Here are two tests that we could implement:

  • Anomaly detection via Generalized ESD Test. See for example Twitter's package.

  • Ljung-Box test for autocorrelation in the residuals (a rough sketch of both checks follows below).
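
For reference, here is a minimal sketch of what both checks could look like in Python, assuming the forecast residuals are available as a 1-D array. The `generalized_esd` function is a hand-rolled implementation of Rosner's test rather than Twitter's package, and the residuals used here are placeholders:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def generalized_esd(x, max_anomalies, alpha=0.05):
    """Generalized ESD test (Rosner, 1983): return indices of detected anomalies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work, work_idx = x.copy(), np.arange(n)
    removed, n_anomalies = [], 0
    for i in range(1, max_anomalies + 1):
        # Most extreme remaining observation and its test statistic R_i
        dev = np.abs(work - work.mean()) / work.std(ddof=1)
        j = int(np.argmax(dev))
        r_i = dev[j]
        # Critical value lambda_i based on the Student t distribution
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam = (n - i) * t / np.sqrt((n - i - 1 + t ** 2) * (n - i + 1))
        removed.append(int(work_idx[j]))
        work, work_idx = np.delete(work, j), np.delete(work_idx, j)
        if r_i > lam:
            n_anomalies = i  # number of anomalies = largest i with R_i > lambda_i
    return removed[:n_anomalies]

# Placeholder residuals; in practice these would come from the fitted forecast.
residuals = np.random.default_rng(0).normal(size=200)

print(generalized_esd(residuals, max_anomalies=5))

# Ljung-Box test: a small p-value suggests leftover autocorrelation,
# i.e. structure in the data that the model failed to capture.
lb = acorr_ljungbox(residuals, lags=[10])  # recent statsmodels returns a DataFrame
print(lb)
```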

Another approach would be to fit a couple of simple models and compare their accuracy with that of the overall model; if the simple models give better forecasts than prophet, that would be a good signal to contact a statistician.

@turgeonmaxime
Member Author

After some experimentation with the data, it seems that the first approach would create a fair amount of false positives. For the second approach, we could restrict ourselves to a naive model with a seasonal component. If the accuracy of the prophet forecast over a certain horizon is worse than that of this naive model, that's a major red flag that something is wrong (e.g. overfitting recent changes).
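
A minimal sketch of that comparison, assuming daily data with weekly seasonality, a single holdout window, and MAE as the accuracy metric (the function name, `horizon`, and `season_length` below are placeholders, not anything already in the codebase):

```python
import numpy as np
from prophet import Prophet  # pip install prophet (older releases: fbprophet)

def mae(actual, predicted):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

def naive_beats_prophet(df, horizon=28, season_length=7):
    """Compare a prophet forecast against a seasonal naive baseline on a holdout.

    df: DataFrame with columns 'ds' (datestamp) and 'y' (value), daily frequency.
    Returns (red_flag, prophet_mae, naive_mae); red_flag is True when the naive
    model is more accurate than prophet over the holdout horizon.
    """
    train, test = df.iloc[:-horizon], df.iloc[-horizon:]

    # prophet forecast over the holdout horizon
    m = Prophet()
    m.fit(train)
    future = m.make_future_dataframe(periods=horizon)
    forecast = m.predict(future)
    prophet_pred = forecast["yhat"].iloc[-horizon:].to_numpy()

    # Seasonal naive baseline: repeat the last observed season across the horizon
    last_season = train["y"].iloc[-season_length:].to_numpy()
    naive_pred = np.tile(last_season, int(np.ceil(horizon / season_length)))[:horizon]

    prophet_mae = mae(test["y"], prophet_pred)
    naive_mae = mae(test["y"], naive_pred)
    return naive_mae < prophet_mae, prophet_mae, naive_mae
```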
