
Accuracy, Precision and Recall Explained Simply

Accuracy, precision and recall are important metrics used to evaluate classification models. They help us understand not only how often a model is correct, but also what types of mistakes it makes.

Why Do We Need Evaluation Metrics?

In classification problems, a model predicts categories such as yes/no, spam/not spam, fraud/not fraud, or purchased/not purchased. We need evaluation metrics to check how well the model is performing.

Example

A model predicts whether a customer will buy a product.

We need to know how many predictions were correct and what type of mistakes the model made.

Evaluation metrics help us measure model performance and compare the kinds of mistakes a model makes.

Classification Example

Suppose we are building a model to predict whether a customer will purchase a product.

Actual Result | Model Prediction | Meaning
Purchased | Purchased | Correct prediction
Not Purchased | Not Purchased | Correct prediction
Not Purchased | Purchased | False alarm
Purchased | Not Purchased | Missed opportunity

Confusion Matrix Terms

Accuracy, precision and recall are based on four important outcomes.

Term | Meaning | Example
True Positive (TP) | Model predicts Yes and actual is Yes | Predicted customer will buy, and they did buy
True Negative (TN) | Model predicts No and actual is No | Predicted customer will not buy, and they did not buy
False Positive (FP) | Model predicts Yes but actual is No | Predicted customer will buy, but they did not
False Negative (FN) | Model predicts No but actual is Yes | Predicted customer will not buy, but they did buy
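As a quick illustration, the four counts can be tallied by hand from a small set of made-up actual and predicted labels (1 = purchased, 0 = not purchased):

```python
# Hypothetical actual outcomes vs model predictions (1 = purchased, 0 = not)
actual    = [1, 0, 0, 1, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # predicted Yes, actually Yes
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # predicted No, actually No
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false alarm
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # missed opportunity

print(tp, tn, fp, fn)  # 3 3 1 1
```

Every prediction falls into exactly one of these four buckets, so the counts always add up to the total number of predictions.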

What is Accuracy?

Accuracy measures how many predictions were correct overall.

Accuracy = Correct Predictions ÷ Total Predictions

Example

If a model makes 100 predictions and 85 are correct, the accuracy is 85%.

Simple meaning: Accuracy tells us how often the model is correct overall.
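In confusion matrix terms, the correct predictions are the true positives plus the true negatives. A minimal sketch with made-up counts for 100 predictions:

```python
# Hypothetical confusion matrix counts for 100 predictions
tp, tn, fp, fn = 40, 45, 8, 7

# Accuracy = correct predictions ÷ total predictions = (TP + TN) ÷ (TP + TN + FP + FN)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.85
```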

Accuracy Can Be Misleading

Accuracy is useful, but it does not always tell the full story. This is especially true when the data is imbalanced.

Example

Suppose only 5 out of 100 transactions are fraudulent.

A model could predict “not fraud” for every transaction and still be 95% accurate.

The model looks accurate, but it fails to detect fraud.

Important: High accuracy does not always mean a useful model.
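This failure mode is easy to reproduce with made-up data: 5 fraudulent transactions out of 100, and a lazy model that always predicts "not fraud".

```python
# 100 transactions, only 5 of them fraudulent (label 1)
actual = [1] * 5 + [0] * 95

# A lazy model that predicts "not fraud" (0) for every transaction
predicted = [0] * 100

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
frauds_caught = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)

print(accuracy)       # 0.95 — looks impressive
print(frauds_caught)  # 0 — catches no fraud at all
```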

What is Precision?

Precision measures how many of the positive predictions were actually correct.

Precision = True Positives ÷ (True Positives + False Positives)

Simple Meaning

When the model predicts “Yes”, how often is it correct?

Precision is important when false positives are costly.

Precision Example

A marketing model predicts 20 customers are likely to buy. Out of those 20, only 15 actually buy.

Precision = 15 ÷ 20 = 0.75 = 75%

This means 75% of the customers the model targeted actually bought.

Business meaning: Higher precision means fewer wasted marketing efforts.
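Plugging the example's numbers into the formula:

```python
true_positives = 15    # targeted customers who actually bought
false_positives = 5    # targeted customers who did not buy (20 targeted in total)

# Precision = TP ÷ (TP + FP)
precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.75
```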

What is Recall?

Recall measures how many actual positive cases the model successfully found.

Recall = True Positives ÷ (True Positives + False Negatives)

Simple Meaning

Out of all real “Yes” cases, how many did the model find?

Recall is important when missing positive cases is costly.

Recall Example

There are 30 customers who actually buy. The model correctly identifies 24 of them.

Recall = 24 ÷ 30 = 0.80 = 80%

This means the model found 80% of all real buyers.

Business meaning: Higher recall means fewer missed opportunities.
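The same calculation for the recall example:

```python
true_positives = 24     # real buyers the model correctly identified
false_negatives = 6     # real buyers the model missed (30 actual buyers in total)

# Recall = TP ÷ (TP + FN)
recall = true_positives / (true_positives + false_negatives)
print(recall)  # 0.8
```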

Accuracy vs Precision vs Recall

Metric | Question It Answers | Useful When
Accuracy | How often is the model correct overall? | Classes are balanced
Precision | When the model says Yes, how often is it correct? | False positives are costly
Recall | Out of all real Yes cases, how many did the model find? | False negatives are costly

Business Examples

When Precision Matters

A company sends expensive sales calls only to customers predicted as likely buyers. False positives waste time and money.

When Recall Matters

A fraud detection system should catch as many fraud cases as possible. Missing fraud can be very costly.

Precision-Recall Trade-Off

In many classification problems, improving recall may reduce precision, and improving precision may reduce recall. This is called a trade-off.

Marketing Example

If we lower the prediction threshold, we may target more potential buyers. This can increase recall, but may also include more false positives.

Key idea: The best metric depends on the business goal.
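The trade-off can be sketched with made-up predicted probabilities: lowering the decision threshold turns more borderline customers into predicted buyers, which raises recall but lets in more false positives.

```python
# Made-up predicted purchase probabilities and actual outcomes (1 = bought)
probs  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
actual = [1,   1,   0,   1,   1,   0,   0,   0]

def precision_recall(threshold):
    """Apply a decision threshold, then compute precision and recall."""
    predicted = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for a, q in zip(actual, predicted) if a == 1 and q == 1)
    fp = sum(1 for a, q in zip(actual, predicted) if a == 0 and q == 1)
    fn = sum(1 for a, q in zip(actual, predicted) if a == 1 and q == 0)
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(0.5))   # higher threshold: precision 0.75, recall 0.75
print(precision_recall(0.25))  # lower threshold: recall rises to 1.0, precision falls
```

With this toy data, dropping the threshold from 0.5 to 0.25 finds every real buyer but wastes more effort on customers who never buy.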

Metrics in Python

Scikit-learn makes it easy to calculate accuracy, precision and recall.

from sklearn.metrics import accuracy_score, precision_score, recall_score

# y_test holds the true labels; predictions holds the model's outputs on the test set
accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions)
recall = recall_score(y_test, predictions)

print("Accuracy:", accuracy)
print("Precision:", precision)
print("Recall:", recall)

Classification Report in Python

A classification report gives multiple evaluation metrics in one place.

from sklearn.metrics import classification_report

# Prints precision, recall and F1-score for each class in one table
print(classification_report(y_test, predictions))

This is very useful when comparing model performance across different classes.

Choosing the Right Metric

Scenario | Metric to Focus On
Balanced dataset with similar mistake costs | Accuracy
Marketing campaign with limited budget | Precision
Fraud detection or medical screening | Recall
Need balance between precision and recall | F1-score

What is F1-Score?

F1-score combines precision and recall into one metric. It is useful when you want a balance between both.

F1-score = 2 × (Precision × Recall) ÷ (Precision + Recall)

F1-score is especially useful when classes are imbalanced.
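F1-score is the harmonic mean of precision and recall. Using the precision and recall values from the earlier examples:

```python
precision = 0.75   # from the marketing example
recall = 0.80      # from the buyer-identification example

# F1 = 2 × (precision × recall) ÷ (precision + recall) — the harmonic mean
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 2))  # 0.77
```

Because it is a harmonic mean, F1 is dragged down sharply when either precision or recall is low, so a model cannot score well by being strong on only one of the two.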

Common Beginner Mistakes

  • Using accuracy only for imbalanced datasets
  • Ignoring false positives and false negatives
  • Choosing a metric without considering business cost
  • Assuming high accuracy always means a good model
  • Not checking precision and recall separately
Remember: The best metric depends on the problem you are solving.

Quick Practice

A fraud detection model has high accuracy but low recall.

Question: What is the problem?

Suggested answer: The model may be missing many real fraud cases. In fraud detection, low recall can be a serious problem.

Key Takeaway

Accuracy tells us how often the model is correct overall. Precision tells us how reliable positive predictions are. Recall tells us how many actual positive cases the model finds.

Simple rule: Accuracy is not enough. Always choose metrics based on the business problem.

Want to Learn More?

Explore our practical courses in Data Analysis, Machine Learning and AI to apply classification metrics in real-world projects.



2012 - 2026 © London Academy of IT Limited. All Rights Reserved.
UKPRN: 10045491. Registered in England & Wales with company no. 07923992.