Another post for you beautiful people! I hope you enjoyed my previous post about improving your model performance with the confusion matrix. Today we will continue our performance-improvement journey and learn about Cross Validation (k-fold cross validation) and ROC in Machine Learning.

A common practice in data science competitions is to iterate over various models in search of a better performing one. However, it becomes difficult to tell whether an improvement in score comes from capturing the underlying relationship better or from simply over-fitting the data. To answer this question, we use the cross validation technique. This method helps us build models that generalize better.

What is Cross Validation?

Cross Validation is a technique which involves reserving a particular sample of a data set on which we do not train the model. Later, we test the model on this sample before finalizing it. Here are the steps involved in...
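To make this concrete, here is a minimal sketch of k-fold cross validation with scikit-learn. The dataset, the logistic regression model, the choice of k=5, and the ROC AUC scoring are my own illustrative assumptions, not part of the original post.

```python
# Minimal sketch of 5-fold cross validation (assumed dataset and model,
# shown only to illustrate the technique described above).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A toy binary-classification dataset used here purely as an example.
X, y = load_breast_cancer(return_X_y=True)

# The model we want to evaluate, with feature scaling for stable fitting.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 5-fold cross validation: the data is split into 5 parts; in each round,
# 4 parts are used for training and the held-out part for validation.
# Scoring with ROC AUC ties in with the ROC topic of this post.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")

print("ROC AUC per fold:", scores)
print("Mean ROC AUC:", scores.mean())
```

Averaging the score across the held-out folds gives a less optimistic estimate of performance than a single train/test split, which is exactly why cross validation helps us detect over-fitting.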