The First Step in Equitable Predictions
When we use data to predict something, there’s more than one way to improve the equity of that process. The one we usually start with is setting a tolerance level for the gap between the group our predictive model works best for and the group it performs worst for. What do we mean by that?
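Before unpacking that, here is a minimal sketch of the idea in code. It assumes we score the model separately for each group (accuracy is used purely for illustration) and then compare the best- and worst-served groups against a tolerance. The group labels, the sample data, and the 0.05 threshold are all illustrative assumptions, not values from this article.

```python
# Minimal sketch: compare model performance across groups against a tolerance.
# The groups, the accuracy metric, and the 0.05 tolerance are illustrative assumptions.

def group_performance_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the gap between the best- and worst-served groups."""
    scores = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        scores[g] = correct / len(idx)
    best = max(scores, key=scores.get)
    worst = min(scores, key=scores.get)
    return scores, scores[best] - scores[worst]

# Illustrative example: two groups and an assumed tolerance of 0.05.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

scores, gap = group_performance_gap(y_true, y_pred, groups)
TOLERANCE = 0.05
print(scores, gap, "within tolerance" if gap <= TOLERANCE else "gap too large")
```

Any performance measure could stand in for accuracy here; the point is only that the gap between groups becomes a single number we can set a tolerance on.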
The typical way predictive models get improved is by measuring the predictions of each iteration of the model (each time we tweak and then re-run it) against what actually happened. In the simplest binary (yes or no) cases, there are four general possibilities, tallied in the short sketch after this list.
A: We predicted something would happen, and it did happen. This is called a true positive.
B: We predicted something would happen but it didn’t happen. This is called a false positive.
C: We predicted something would not happen, and it didn’t happen. This is called a true negative.
D: We predicted something would not happen, but it did happen. This is called a false negative.
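In code, those four buckets are just counts over predicted versus actual outcomes. The labels below are made-up illustrative data, not anything from this article:

```python
# Sketch: tally the four outcomes (A-D above) for a binary yes/no prediction task.
# 1 means "yes" (it happened / we predicted it would), 0 means "no".
y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # what actually happened (illustrative data)
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]   # what the model predicted

tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # A: true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # B: false positives
tn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)  # C: true negatives
fn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)  # D: false negatives

print(tp, fp, tn, fn)  # 3 2 2 1 for the data above
```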
We know how effective our predictive model is by comparing its predictions to what happened in reality. When we do this, we get several measures of how well the model performed (the sketch after this list shows how each is computed from the four counts above):
Accuracy
Specificity
Precision
Sensitivity (aka Recall)
F1-score (aka F-Score /…
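As a sketch, here is how each of these measures can be computed from the four counts above. The formulas are the standard textbook definitions, and the sample counts are the illustrative ones from the earlier snippet.

```python
# Sketch: standard formulas for the measures above, computed from the four counts.
def summarize(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)       # share of all predictions that were right
    specificity = tn / (tn + fp)                        # of the real "no" cases, how many we got right
    precision   = tp / (tp + fp)                        # of our "yes" predictions, how many were right
    recall      = tp / (tp + fn)                        # of the real "yes" cases, how many we caught (sensitivity)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    return accuracy, specificity, precision, recall, f1

print(summarize(tp=3, fp=2, tn=2, fn=1))
```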