When we evaluate a model, we analyze a few parameters to verify its performance. These parameters describe the performance of our model in terms of the confusion matrix.
The most frequently used performance parameters are Accuracy, Precision, Recall and F1 score. In this article I will give you an idea of what they are, so that when we talk about our model in the next articles you will not be confused by the terms.

So let’s say our model is ready and we want to know how good it is.
These terms help the audience of our hypothesis understand how good its predictions are. Below you can see the ROC curve and the evaluation results of my model.

Let’s start by looking at our ROC (Receiver Operating Characteristic) curve, since the AUC (area under the curve) is important: the more the curve hugs the top-left corner, i.e. the larger the area under it, the better the model performs. This becomes clearer once we understand the confusion matrix. A confusion matrix is a table that describes the performance of a model on data for which the true values are known.

True Positives (TP) – These are the correctly predicted positive values: the actual class is yes and the predicted class is also yes.
True Negatives (TN) – These are the correctly predicted negative values: the actual class is no and the predicted class is also no.
False Positives (FP) – The actual class is no but the predicted class is yes.
False Negatives (FN) – The actual class is yes but the predicted class is no.

False positives and false negatives occur when the actual class contradicts the predicted class; true positives and true negatives are the observations that are predicted correctly. We want to minimize false positives and false negatives.
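
To make this concrete, here is a minimal sketch of how one might build a confusion matrix and plot a ROC curve with its AUC. It assumes scikit-learn and matplotlib are available, and the arrays y_true, y_pred and y_scores are made-up values purely for illustration, not the data behind the numbers reported below.

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

# Made-up actual labels, hard predictions and predicted probabilities
y_true   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred   = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
y_scores = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.6, 0.1, 0.95, 0.35]

# Confusion matrix for binary labels comes back as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)

# The ROC curve and AUC are computed from the predicted probabilities
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
auc = roc_auc_score(y_true, y_scores)

plt.plot(fpr, tpr, label="ROC curve (AUC = %.3f)" % auc)
plt.plot([0, 1], [0, 1], linestyle="--", label="random guess")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()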

Accuracy – Accuracy is the most intuitive performance measure; it is simply the ratio of correctly predicted observations to the total observations. One may think that a high accuracy means our model is the best. For our model, we have got 0.839, which means it is approximately 84% accurate. But we have to look at the other parameters as well to evaluate the performance of the model.

Accuracy = (TP+TN) / (TP+FP+FN+TN)
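
A minimal sketch of this formula in code, using made-up counts purely for illustration (not the counts behind the 0.839 above):

# Hypothetical confusion-matrix counts, chosen only to illustrate the formula
tp, tn, fp, fn = 50, 40, 5, 5

# Accuracy = (TP + TN) / (TP + FP + FN + TN)
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(accuracy)  # 0.9 for these made-up counts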

Precision – Precision is the ratio of correctly predicted positive observations to the total predicted positive observations. High precision corresponds to a low false positive rate. We have got a precision of 0.878, which is pretty good.

Precision = TP / (TP+FP)
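
The same kind of sketch for precision, again with made-up counts:

# Hypothetical counts: 50 true positives, 5 false positives
tp, fp = 50, 5

# Precision = TP / (TP + FP)
precision = tp / (tp + fp)
print(precision)  # ~0.909 for these made-up counts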

Recall (Sensitivity) – Recall is the ratio of correctly predicted positive observations to all of the observations in the actual class – yes. The question recall answers is: of all the observations that are actually positive, how many did we correctly label as positive? We have got a recall of 0.919, which is good for this model as it is well above 0.5.

Recall = TP / (TP+FN)
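
And the corresponding sketch for recall, with made-up counts:

# Hypothetical counts: 50 true positives, 5 false negatives
tp, fn = 50, 5

# Recall = TP / (TP + FN)
recall = tp / (tp + fn)
print(recall)  # ~0.909 for these made-up counts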

F1 score – F1 Score is the harmonic mean of Precision and Recall. Therefore, this score takes both false positives and false negatives into account. Intuitively it is not as easy to understand as accuracy, but F1 is usually more useful than accuracy, especially if you have an uneven class distribution. Accuracy works best if false positives and false negatives have similar costs; if the costs of false positives and false negatives are very different, it is better to look at both Precision and Recall. In our case, the F1 score is 0.898.

F1 Score = 2*(Recall * Precision) / (Recall + Precision)
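
Finally, a sketch of the F1 score as the harmonic mean of precision and recall. Plugging in the precision (0.878) and recall (0.919) reported above reproduces the stated F1 of about 0.898:

# Precision and recall values reported for our model
precision, recall = 0.878, 0.919

# F1 Score = 2 * (Recall * Precision) / (Recall + Precision)
f1 = 2 * (recall * precision) / (recall + precision)
print(round(f1, 3))  # ~0.898, matching the value reported above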

So, whenever we discuss our model, this article should help you figure out what these parameters mean and how well the model has performed.

 
