The best model's RMSE score doesn't match the RMSE obtained when the validation data is scored by the best model.

I pass the following to the GameTrainingDriver:

And receive:

Best model has RMSE score of 10.212347636694373 and following config: ....

I'm assuming that the best model's RMSE score is computed on the validation data provided. Yet, when I score the validation data with the best model and compute the RMSE myself, the result is quite different from the reported best score.

Is the best score indeed based on the validation data? Is a non-standard definition of RMSE being used here? What could I be doing wrong? (The difference is large enough to matter.)
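For concreteness, the RMSE I compute by hand is the textbook definition: the square root of the mean squared difference between labels and predictions. A minimal Scala sketch of that calculation (illustrative only, not Photon ML's evaluator code):

```scala
// Textbook RMSE: sqrt of the mean squared error between
// labels and predictions. Illustrative sketch only.
def rmse(labels: Seq[Double], predictions: Seq[Double]): Double = {
  require(labels.size == predictions.size, "mismatched lengths")
  val mse = labels.zip(predictions)
    .map { case (y, yHat) => (y - yHat) * (y - yHat) }
    .sum / labels.size
  math.sqrt(mse)
}
```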
Regards.
I have since started using the GameScoringDriver CLI to do my scoring, and the RMSE reported by the scoring driver matches the RMSE output by the training driver for the best model, so everything seemed fine. Except that when I calculate the RMSE from the predictionScore and label values myself, I get exactly the value I was getting previously through the API, and it doesn't match what the GameScoringDriver and GameTrainingDriver report.
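For reference, this is how I compute the RMSE from the scored output. A minimal Spark sketch; it assumes the scores have already been loaded into a DataFrame with label and predictionScore columns (those are the names as I see them in my output, not a documented schema):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{avg, col, pow, sqrt}

// RMSE over a DataFrame of scored records. The column names
// ("label", "predictionScore") are assumptions based on the
// output being inspected, not a documented schema.
def rmseOf(scored: DataFrame): Double =
  scored
    .select(sqrt(avg(pow(col("label") - col("predictionScore"), 2))))
    .first()
    .getDouble(0)
```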