What’s cross validation?
A joke to you.
Cross validation is a way of estimating the likely out-of-sample uncertainty of just about any predictive model (it doesn’t have to be a machine learning model).
A common cross validation approach for small datasets is LOOCV (leave-one-out cross validation). Another is k-fold cross validation. In either case, the basic idea is to hold out some portion of your training data, keeping it totally removed from the training process; you train your model on the remainder, then evaluate the trained model on the held-out data. You then repeat this process over each of the k folds (or over each individual observation, for LOOCV) to build up a valid uncertainty estimate.
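To make the procedure concrete, here’s a minimal sketch of k-fold cross validation in Python. This isn’t from the original post; the function name k_fold_cv and the scikit-learn-style model_factory interface are illustrative assumptions:

```python
import numpy as np

def k_fold_cv(model_factory, X, y, k=5, seed=0):
    # model_factory: callable returning a fresh, unfitted model with
    # scikit-learn-style .fit(X, y) and .predict(X) methods.
    # X, y: numpy arrays.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))    # shuffle once, then split
    folds = np.array_split(indices, k)   # k roughly equal folds
    fold_errors = []
    for i in range(k):
        test_idx = folds[i]              # this fold is held out entirely
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_factory()          # fresh model for every fold
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict(X[test_idx])
        fold_errors.append(np.mean((preds - y[test_idx]) ** 2))  # per-fold MSE
    # The mean across folds is the error estimate; the spread is the uncertainty.
    return np.mean(fold_errors), np.std(fold_errors)
```

Setting k equal to the number of observations turns this into LOOCV: every fold is a single data point.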
So, a few things. First, this is a standard approach in machine learning, because once you stop making the assumptions of frequentist statistics (and you probably should), you no longer get things like uncertainty estimates for free; those freebies only hold when the assumptions are met.
In some machine learning approaches this is necessary because there really isn’t a tractable way to get uncertainty out of the model itself (although in others, like random forests, you effectively get cross validation for free via out-of-bag error estimates).
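For the random forest case, that free lunch is the out-of-bag (OOB) error. A minimal sketch with scikit-learn, using a synthetic dataset purely for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=1.0, random_state=0)

# Each tree trains on a bootstrap sample, so roughly a third of the rows
# are "out of bag" for any given tree. Scoring each row with only the
# trees that never saw it gives a built-in held-out estimate.
rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)  # R^2 computed on out-of-bag predictions
```

No separate validation loop needed; the bootstrap sampling does the holding out for you.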
Cross validation is great because you really don’t need to understand anything about the model itself; you just implement the validation strategy and you get a valid estimate of the model’s uncertainty.