Precision and recall are metrics that measure classification performance from two different perspectives, given by the formulas below:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Where:
TP = True Positive
FP = False Positive
FN = False Negative
In other words, precision is the ratio of correctly classified positive cases to all cases predicted as positive, while recall is the ratio of correctly classified positive cases to all actual positive cases.
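The two ratios above can be sketched directly from confusion-matrix counts; the function name and example counts below are illustrative, not from the original text:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts.

    precision = TP / (TP + FP)  -- of everything predicted positive, how much was right
    recall    = TP / (TP + FN)  -- of everything actually positive, how much was found
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical example: 8 true positives, 2 false positives, 4 false negatives
p, r = precision_recall(tp=8, fp=2, fn=4)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.800, recall=0.667
```

Note that a classifier can trade one metric for the other: predicting positive more aggressively tends to raise recall while lowering precision.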
Precision is an appropriate measure when the cost of a false positive is high (e.g. email spam classification), while recall is appropriate when the cost of a false negative is high (e.g. fraud detection).
Both are also frequently used together in the form of the F1-score, which is defined as:
F1 = 2 * (Precision * Recall) / (Precision + Recall)
The F1-score is the harmonic mean of precision and recall, so it balances both and is a more informative measure of classification performance than accuracy on highly imbalanced datasets.
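The F1 formula above can be sketched as a small helper; the function name and sample values are illustrative:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall.

    F1 = 2 * (precision * recall) / (precision + recall)
    The harmonic mean is dominated by the smaller of the two values,
    so a classifier cannot score well by excelling at only one metric.
    """
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: high precision but mediocre recall drags F1 down
print(f"{f1_score(0.8, 0.5):.3f}")  # 0.615
```

Because the harmonic mean punishes imbalance between the two inputs, F1 for precision 0.8 and recall 0.5 (about 0.615) sits well below their arithmetic mean of 0.65.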