Matthews Correlation Coefficient Formula:
Definition: MCC is a statistical measure that evaluates the quality of binary classifications, ranging from -1 (total disagreement) to +1 (perfect prediction).
Purpose: It provides a balanced evaluation metric even when classes are of very different sizes, unlike accuracy, which can be misleading for imbalanced datasets.
The calculator uses the formula:
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Where:
TP = true positives, TN = true negatives, FP = false positives, FN = false negatives
Explanation: MCC considers all four confusion matrix categories to produce a value between -1 and 1, where 1 represents perfect prediction, 0 random prediction, and -1 inverse prediction.
Details: MCC is particularly valuable in machine learning and statistics because it takes all four confusion matrix categories into account and stays informative even when the classes are heavily imbalanced.
Tips: Enter the four counts from your confusion matrix: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN).
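For readers who want to reproduce the calculation in code, here is a minimal Python sketch of the same formula; the function name mcc_from_counts and the example counts are illustrative only.

```python
import math

def mcc_from_counts(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from the four confusion matrix counts."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, MCC is taken to be 0 when any marginal sum is zero.
    return numerator / denominator if denominator else 0.0

# Example counts: 90 TP, 85 TN, 10 FP, 15 FN
print(round(mcc_from_counts(tp=90, tn=85, fp=10, fn=15), 3))  # ~0.751
```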
Q1: What does an MCC value of 0 mean?
A: An MCC of 0 indicates that the classifier is no better than random prediction.
Q2: How does MCC compare to accuracy?
A: MCC is more informative than accuracy when classes are imbalanced, as it considers all four confusion matrix categories.
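A small illustration of this point, assuming scikit-learn is available: on a heavily imbalanced test set, a classifier that always predicts the majority class scores high accuracy but an MCC of 0.

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Hypothetical imbalanced test set: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# A classifier that always predicts the majority class.
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))     # 0.95 -- looks impressive
print(matthews_corrcoef(y_true, y_pred))  # 0.0  -- no better than chance
```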
Q3: What's considered a "good" MCC value?
A: Generally, MCC > 0.3 is acceptable, > 0.7 is strong, and > 0.9 is excellent. However, this depends on your specific application.
Q4: Can MCC be negative?
A: Yes, MCC ranges from -1 to +1. Negative values indicate inverse prediction (worse than random).
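A quick sketch of this behaviour (the toy labels are made up for illustration): flipping every prediction of a better-than-random classifier flips the sign of its MCC while keeping the magnitude.

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 1, 0, 0, 0]
y_good = [1, 1, 0, 0, 0, 1]           # mostly correct predictions
y_inverted = [1 - p for p in y_good]  # every prediction flipped

print(matthews_corrcoef(y_true, y_good))      # ~0.333
print(matthews_corrcoef(y_true, y_inverted))  # ~-0.333, same magnitude, negative
```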
Q5: When should I use MCC instead of F1 score?
A: MCC is generally preferred when you care about both classes equally, while F1 focuses more on the positive class.
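One way to see the difference, again with made-up labels: F1 changes when you swap which class counts as "positive", whereas MCC gives the same value regardless.

```python
from sklearn.metrics import f1_score, matthews_corrcoef

y_true = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

# F1 depends on which class is treated as the positive one ...
print(f1_score(y_true, y_pred, pos_label=1))  # ~0.833
print(f1_score(y_true, y_pred, pos_label=0))  # ~0.750
# ... while MCC is symmetric and gives a single class-independent score.
print(matthews_corrcoef(y_true, y_pred))      # ~0.583
```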