Lesson #108 - Course A.I. & Machine Learning #116
-
@mrdbourke ADDITION to my previous question: as well as using cross-validation to improve models, I think it is important to check for overfitting by tracking both the training and validation scores. It is not necessarily correct to say that we have to choose the model that simply maximises the validation score. We have to control for both model complexity and generalization power, so we can choose a set of hyperparameters that improves the model's performance (even if they are not the maximising ones) while keeping the distance between the training and validation scores as low as possible.
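A minimal sketch of this idea with scikit-learn (the model and parameter grid here are illustrative assumptions, not the course's code): setting `return_train_score=True` on `GridSearchCV` keeps the training scores around, so for each candidate you can inspect both the validation score and the train/validation gap rather than picking the maximiser blindly.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy data just for demonstration
X, y = make_classification(n_samples=1000, random_state=42)

grid = GridSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_grid={"max_depth": [2, 4, 8, None]},  # hypothetical grid
    cv=5,
    return_train_score=True,  # keep training scores so we can measure the gap
)
grid.fit(X, y)

# For each candidate, look at the validation score AND the train/val gap;
# a big gap suggests overfitting even if the validation score is high.
for params, train_s, val_s in zip(
    grid.cv_results_["params"],
    grid.cv_results_["mean_train_score"],
    grid.cv_results_["mean_test_score"],
):
    print(f"{params} | train={train_s:.3f} val={val_s:.3f} gap={train_s - val_s:.3f}")
```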
-
Hi all, I am AyoTunde from 🇳🇬. I have over a decade of experience as a research assistant in agronomy and as an I.T. infrastructure technician at the International Institute of Tropical Agriculture (#IITA). My diverse educational background and extensive work experience in both research and technology led me to start the AI | ML course.
-
Back in lesson #24 ("Splitting data") you discussed how one would split the data for a model into three sets (the "3 sets"):
Training: 70-80%
Validation (to tune): 10-15%
Test: 10-15%
but the code demonstrated here only split the data into two sets: TRAIN and TEST.
And from what I can gather from the lesson, tweaking of the model was performed on the TEST set rather than a VALIDATION set?
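For reference, a minimal sketch (not the course's code) of producing the 3 sets with two sequential `train_test_split` calls, so that tuning can happen on the validation set and the test set stays untouched until the end:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data just for demonstration
X, y = make_classification(n_samples=1000, random_state=42)

# First, split off the test set (15% of the full data).
X_temp, X_test, y_temp, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42
)

# Then split the remainder into train and validation sets.
# 0.15 / 0.85 of the remainder is roughly 15% of the full data,
# leaving roughly 70% for training.
X_train, X_val, y_train, y_val = train_test_split(
    X_temp, y_temp, test_size=0.15 / 0.85, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # roughly 700, 150, 150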