The usual approach is to apply a nested cross-validation procedure: hyperparameter selection is performed in the inner cross-validation, while the outer cross-validation computes an unbiased estimate of the expected performance of the resulting model on unseen data.
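For concreteness, the snippet below is a minimal sketch of this procedure with scikit-learn; the dataset, estimator, and parameter grid are illustrative placeholders, not part of the original description.

```python
# Minimal sketch of nested cross-validation (assumed example, not the original setup).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Inner cross-validation: hyperparameter selection via grid search.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
tuned_model = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=inner_cv,
)

# Outer cross-validation: each fold tunes on its training part only,
# so the held-out fold yields an uncontaminated performance estimate.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(tuned_model, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```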
The first approach is actually hold-out evaluation (although CV is used for tuning), and the second approach is cross-validation IF you just consider the hyperparameters (e.g., the feature importance, the number of features, K, etc.) to be parameters of some modeling process that you intend to evaluate.
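Under that reading, the first approach can be sketched roughly as follows: cross-validation is used only inside the tuning step, and a single held-out set evaluates the tuned modeling process as a whole. The dataset, model, and grid here are assumptions made for illustration.

```python
# Hold-out evaluation of a CV-tuned modeling process (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# The held-out test set is never touched during tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# CV here serves only to pick hyperparameters of the modeling process.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
tuning = GridSearchCV(
    pipe,
    param_grid={"logisticregression__C": [0.01, 0.1, 1, 10]},
    cv=5,
)
tuning.fit(X_train, y_train)

# Hold-out evaluation: one estimate from data unseen by the whole process.
print(f"Held-out accuracy: {tuning.score(X_test, y_test):.3f}")
```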
To illustrate why this happens, let’s use an example. Suppose that we are working on a machine learning task in which we select a model based on n rounds of hyperparameter optimization, using a grid search with cross-validation. Now, if we are using the same cross-validation splits both to pick the winning configuration and to report its performance, the reported score will be optimistically biased: the winner was chosen precisely because it scored well on those folds.
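A rough sketch of the issue, assuming a scikit-learn workflow: the score reported by the same cross-validation that picked the winner is compared against a nested estimate of the whole tuning procedure.

```python
# Sketch: the CV score used to pick hyperparameters vs. a nested estimate.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.01, 0.1, 1, 10, 100], "gamma": ["scale", 0.1, 0.01]},
    cv=5,
)

# "Wrong" report: the same folds both selected the winner and scored it,
# so best_score_ tends to be optimistically biased.
grid.fit(X, y)
print(f"Best inner-CV score (biased):     {grid.best_score_:.3f}")

# Nested report: an outer loop scores the entire tuning procedure on folds
# that took no part in the selection.
nested = cross_val_score(grid, X, y, cv=5)
print(f"Nested CV score (uncontaminated): {nested.mean():.3f}")
```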
First, we apply nested cross-validation to allow flexible and effective hyperparameter tuning while producing non-contaminated estimates of held-out model performance, even on small trial datasets. Second, we use multiple performance metrics to assess both a model's ability to predict the outcome and ...
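A minimal sketch of how both points can be combined with scikit-learn's cross_validate: a grid-searched estimator is evaluated in an outer loop under several scoring metrics. The metrics and model shown here are assumptions for illustration, not the ones used in the original text.

```python
# Nested CV with multiple performance metrics (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_validate

X, y = load_breast_cancer(return_X_y=True)

# Inner CV tunes the model; the scoring metric here drives hyperparameter choice.
tuned = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, None]},
    cv=3,
    scoring="roc_auc",
)

# Outer CV reports several uncontaminated metrics for the tuned process.
results = cross_validate(
    tuned, X, y, cv=5,
    scoring={"auc": "roc_auc", "accuracy": "accuracy", "brier": "neg_brier_score"},
)
for name in ("auc", "accuracy", "brier"):
    vals = results[f"test_{name}"]
    print(f"{name}: {vals.mean():.3f} +/- {vals.std():.3f}")
```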