Machine Learning: The Concept of Overfitting

Armel Djangone · Published in Geek Culture · Mar 22, 2021

Building a machine learning model is more than just feeding data into an algorithm: several flaws can impair a model's performance. Overfitting is one such flaw, and it reduces the model's accuracy and reliability.

A model is overfitted when it learns the training data too closely, fitting the noise and erroneous values along with the underlying pattern. Think of a garment tailored so tightly to one person that it fits nobody else: the model has memorized the quirks of its training set rather than the general shape of the problem. As a consequence, its performance and consistency on new data suffer.
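
To make the symptom concrete, here is a minimal sketch using scikit-learn on synthetic data (both are my illustrative choices, not from the article): a high-degree polynomial has enough capacity to chase the noise, so its training error collapses while its test error stays high.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic data: a sine wave plus random noise.
rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(40, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.3, size=40)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A degree-15 polynomial has far more capacity than the data needs.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, model.predict(X_train)))
print("test MSE: ", mean_squared_error(y_test, model.predict(X_test)))
# A train MSE far below the test MSE is the signature of overfitting.
```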

Several techniques help prevent overfitting. The first is training with more data, which helps the model identify the true patterns and make more precise predictions. While this can be a valuable strategy, the added data must be clean and relevant (i.e., not "noisy"), or the technique may be counterproductive.
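
One way to judge whether more data would actually help is a learning curve. The sketch below, using scikit-learn's `learning_curve` on the built-in digits dataset (an illustrative choice), tracks validation accuracy as the training set grows; if the validation score is still rising at the largest size, collecting more data is likely to help.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)

# Cross-validated scores at increasing training-set sizes.
sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n={n:4d}  train={tr:.2f}  validation={va:.2f}")
```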

Cross-validation is a technique for estimating how well a statistical model will perform in practice. It partitions the data into subsets, trains the model on some of them (the training folds), and validates the result on the remaining subset (the testing fold), rotating which fold is held out. The goal is to see how well the model handles data that was not used during fitting, which reveals whether overfitting is a concern.
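
A minimal sketch of k-fold cross-validation with scikit-learn (the dataset and model are illustrative assumptions): each of the five folds is held out once for validation while the model trains on the other four.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: five models, each scored on unseen data.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:", scores.mean())
# A large gap between training accuracy and these held-out scores
# would suggest overfitting.
```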

Early stopping applies when you train a model iteratively: you can measure how well each new iteration performs on held-out data. Up to a certain point, additional iterations improve the model; beyond that point, they begin to fit the noise in the training data, so training is stopped as soon as the validation performance stops improving.
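
Many libraries implement this directly. As one hedged example, scikit-learn's `SGDClassifier` can hold out a slice of the training data internally and stop once the validation score plateaus (the dataset here is an illustrative choice):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)  # SGD is sensitive to feature scale

# Hold out 10% of the training data; stop once the validation score
# has not improved for 5 consecutive epochs.
clf = SGDClassifier(max_iter=1000, early_stopping=True,
                    validation_fraction=0.1, n_iter_no_change=5,
                    random_state=0)
clf.fit(X, y)
print("epochs actually run:", clf.n_iter_)  # typically far below max_iter
```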

Regularization is a technique in which the model's coefficient estimates are constrained, shrinking them toward zero. By penalizing complexity, this strategy discourages overly complicated models and thereby prevents overfitting. Ridge regression and Lasso regression are two common forms of regularization: Ridge shrinks the coefficients of minor predictors toward (but not exactly to) zero, whereas Lasso can shrink the coefficients of insignificant predictors all the way to zero, effectively performing variable selection.
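
The contrast is easy to see in code. In this sketch (synthetic data of my choosing, with only 5 of 20 features carrying signal), Ridge leaves every coefficient nonzero while Lasso zeroes out many of the irrelevant ones:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Synthetic data in which only 5 of 20 features actually matter.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

# Ridge shrinks irrelevant coefficients toward zero; Lasso can drive
# them exactly to zero, performing variable selection.
print("Ridge coefficients set to zero:", np.sum(ridge.coef_ == 0.0))
print("Lasso coefficients set to zero:", np.sum(lasso.coef_ == 0.0))
```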

Ensemble methods create several models and then combine them to yield better performance; in many cases the combination delivers more accurate results than any single model. The voting classifier is a standard ensemble technique. With hard voting, the class that receives the most votes across the individual models is selected. With soft voting, the predicted class probabilities from each model are averaged, and the class with the highest average probability is chosen.
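
A minimal sketch of both voting modes with scikit-learn's `VotingClassifier` (the base models and dataset are illustrative assumptions, not from the article):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

estimators = [("lr", LogisticRegression(max_iter=1000)),
              ("dt", DecisionTreeClassifier(random_state=0)),
              ("nb", GaussianNB())]

# Hard voting picks the majority class; soft voting averages the
# predicted class probabilities and picks the highest average.
for mode in ("hard", "soft"):
    clf = VotingClassifier(estimators, voting=mode)
    print(mode, "voting accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```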

Additional reading

Estimating soil properties within a field using hyperspectral remote sensing

The PDF is available at http://researchmathsci.org/JMIart/JMI-v19-5.pdf
