Net Sensitivity Formula:
Definition: Net Sensitivity is a weighted measure of a model's ability to correctly identify positive cases, accounting for an importance factor.
Purpose: It helps evaluate classification models in machine learning and statistics, especially when different cases have varying importance.
The calculator uses the formula:
Net Sensitivity = Weight × (TP / (TP + FN))
Where:
TP = number of true positives
FN = number of false negatives
Weight = importance factor applied to the standard sensitivity
Explanation: First calculates standard sensitivity (recall), then applies a weight to account for the relative importance of detecting positives.
Details: Provides a more nuanced performance metric than standard sensitivity when different correct identifications have varying importance.
Tips: Enter the count of true positives, false negatives, and the weight factor (default 1.0). TP + FN must be > 0.
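As a quick illustration, here is a minimal Python sketch of the calculation described above, assuming the formula Net Sensitivity = Weight × (TP / (TP + FN)); the function name and signature are illustrative, not from any standard library.

```python
def net_sensitivity(tp: int, fn: int, weight: float = 1.0) -> float:
    """Weighted sensitivity: standard recall scaled by an importance weight."""
    if tp + fn <= 0:
        raise ValueError("TP + FN must be > 0")
    sensitivity = tp / (tp + fn)  # standard sensitivity (recall)
    return weight * sensitivity   # apply the importance weight

# Example: 80 true positives, 20 false negatives
print(net_sensitivity(80, 20))        # 0.8 (weight defaults to 1.0)
print(net_sensitivity(80, 20, 1.25))  # 1.0 after weighting
```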
Q1: When should I use net sensitivity instead of regular sensitivity?
A: Use when some positive cases are more important to identify correctly than others (e.g., medical diagnosis of severe vs mild cases).
Q2: What does a weight of 1.0 mean?
A: A weight of 1.0 makes net sensitivity equal to standard sensitivity (no weighting applied).
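To make this concrete, the following sketch checks the weight = 1.0 case against scikit-learn's recall_score; the labels below are made up for illustration.

```python
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0]  # 3 true positives, 1 false negative

tp, fn, weight = 3, 1, 1.0
net = weight * tp / (tp + fn)               # 0.75
print(net == recall_score(y_true, y_pred))  # True: weight 1.0 is plain recall
```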
Q3: What's a good net sensitivity value?
A: Higher is better; the maximum possible value equals the weight factor, so with weight = 1, a value of 1.0 means perfect sensitivity.
Q4: Can net sensitivity be greater than 1?
A: Yes, if your weight factor is greater than 1 (though this is uncommon in practice).
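For instance, with hypothetical counts TP = 90, FN = 10 and a weight of 1.5:

```python
tp, fn, weight = 90, 10, 1.5   # hypothetical counts and weight
sensitivity = tp / (tp + fn)   # 0.9
print(weight * sensitivity)    # 1.35 — exceeds 1 because weight > 1
```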
Q5: How is this different from F1 score?
A: F1 combines precision and recall, while net sensitivity focuses only on recall (sensitivity) with case importance weighting.
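A small scikit-learn sketch makes the contrast visible (the labels and the 1.2 weight are made up for illustration): F1 drops when false positives appear, while net sensitivity reacts only to TP, FN, and the weight.

```python
from sklearn.metrics import f1_score, recall_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0, 0]  # TP=3, FN=1, FP=2

weight = 1.2                           # illustrative importance weight
recall = recall_score(y_true, y_pred)  # 0.75 (uses TP and FN only)
print(f"net sensitivity = {weight * recall:.2f}")           # 0.90
print(f"F1 score        = {f1_score(y_true, y_pred):.2f}")  # 0.67 (FPs lower it)
```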