Sensitivity

Sensitivity, also widely known in machine learning as Recall or the True Positive Rate (TPR), is a critical metric used to evaluate a classifier's ability to identify all relevant instances within a specific class. It specifically measures the proportion of actual positive instances that were correctly identified as positive by the model.

1. Calculating Sensitivity

Sensitivity is derived from the confusion matrix: it is the number of true positives (TP) divided by the total number of actual positive cases, that is, true positives plus false negatives (FN). Formally, Sensitivity = TP / (TP + FN).
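The calculation above can be sketched directly in code. This is a minimal illustration using made-up labels (the `y_true`/`y_pred` lists are assumptions, not from the source); in practice a library routine such as scikit-learn's `recall_score` computes the same quantity.

```python
def sensitivity(y_true, y_pred, positive=1):
    """Sensitivity (recall, TPR) = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn)

# Toy example: 4 actual positives, the model finds 3 of them.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(sensitivity(y_true, y_pred))  # 3 / 4 = 0.75
```

Note that the false positive in the last position has no effect on sensitivity; only missed positives (false negatives) lower the score.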

2. How to Interpret Sensitivity

Sensitivity answers the fundamental question: "Of all the actual positive cases, how many did the model find?" A value close to 1 means the model misses few positives; a low value means many positives slip through as false negatives.

3. Key Trade-offs and Relationships

Sensitivity cannot be viewed in isolation, as it is inextricably linked to other performance metrics. Raising sensitivity, for example by lowering the decision threshold, typically lowers Specificity (the true negative rate) and often Precision as well, because the model flags more instances as positive and so admits more false positives.

4. Critical Importance in Imbalanced Datasets

Sensitivity is often far more informative than Accuracy when dealing with skewed or imbalanced datasets: when positives are rare, a model can achieve high accuracy while detecting almost none of the positive cases that actually matter.
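A quick sketch makes the failure mode concrete. On an assumed toy dataset that is 99% negative, a degenerate classifier that always predicts "negative" still scores 99% accuracy, yet its sensitivity is zero:

```python
y_true = [1] * 10 + [0] * 990   # 1% positives (assumed toy data)
y_pred = [0] * 1000             # degenerate "always negative" model

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
sensitivity = tp / (tp + fn)

print(f"accuracy={accuracy:.2%}, sensitivity={sensitivity:.2%}")
# accuracy=99.00%, sensitivity=0.00%
```

This is why sensitivity (often alongside precision or the F1 score) is the metric of choice in domains like medical screening or fraud detection, where missing a rare positive is the costly error.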
