Accuracy Difference (AD)
The accuracy difference (AD) metric is the difference between the prediction accuracy for different facets. It determines whether classification by the model is more accurate for one facet than for the other. AD indicates whether one facet incurs a greater proportion of Type I and Type II errors, but it cannot differentiate between the two error types. For example, the model may have equal accuracy for different age demographics, yet the errors may be mostly false positives (Type I errors) for one age-based group and mostly false negatives (Type II errors) for the other.
Also, if loan approvals are made with much higher accuracy for a middle-aged demographic (facet a) than for another age-based demographic (facet d), then either a greater proportion of qualified applicants in the second group are denied a loan (FN), a greater proportion of unqualified applicants from that group get a loan (FP), or both. This can lead to within-group unfairness for the second group, even if the proportion of loans granted is nearly the same for both age-based groups, as indicated by a DPPL value close to zero.
The formula for the AD metric is the prediction accuracy for facet a, ACC_{a}, minus that for facet d, ACC_{d}:
AD = ACC_{a} - ACC_{d}
Where:

ACC_{a} = (TP_{a} + TN_{a})/(TP_{a} + TN_{a} + FP_{a} + FN_{a})

TP_{a} are the true positives predicted for facet a

TN_{a} are the true negatives predicted for facet a

FP_{a} are the false positives predicted for facet a

FN_{a} are the false negatives predicted for facet a


ACC_{d} = (TP_{d} + TN_{d})/(TP_{d} + TN_{d} + FP_{d} + FN_{d})

TP_{d} are the true positives predicted for facet d

TN_{d} are the true negatives predicted for facet d

FP_{d} are the false positives predicted for facet d

FN_{d} are the false negatives predicted for facet d
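The definitions above can be sketched directly in Python. This is a minimal illustration of the formulas; the function names are chosen for clarity here and are not part of any library:

```python
def accuracy(tp, tn, fp, fn):
    """Prediction accuracy for one facet: correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

def accuracy_difference(tp_a, tn_a, fp_a, fn_a, tp_d, tn_d, fp_d, fn_d):
    """AD = ACC_a - ACC_d; a positive value suggests a potential bias against facet d."""
    return accuracy(tp_a, tn_a, fp_a, fn_a) - accuracy(tp_d, tn_d, fp_d, fn_d)
```

Each facet's accuracy is computed independently from its own confusion-matrix counts, and AD is simply the difference between the two.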

For example, suppose a model approves loans for 70 of 100 applicants from facet a and rejects the other 30. Of the approvals, 10 should not have been offered the loan (FP_{a}) and 60 were correctly approved (TP_{a}). Of the rejections, 20 should have been approved (FN_{a}) and 10 were correctly rejected (TN_{a}). The accuracy for facet a is as follows:
ACC_{a} = (60 + 10)/(60 + 10 + 20 + 10) = 0.7
Next, suppose the model approves loans for 50 of 100 applicants from facet d and rejects the other 50. Of the approvals, 10 should not have been offered the loan (FP_{d}) and 40 were correctly approved (TP_{d}). Of the rejections, 40 should have been approved (FN_{d}) and 10 were correctly rejected (TN_{d}). The accuracy for facet d is determined as follows:
ACC_{d}= (40 + 10)/(40 + 10 + 40 + 10) = 0.5
The accuracy difference is thus AD = ACC_{a} - ACC_{d} = 0.7 - 0.5 = 0.2. Because the metric is positive, this indicates a potential bias against facet d.
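The arithmetic of this worked example can be checked with a short, standalone snippet (the variable names are illustrative):

```python
# Facet a: 60 TP, 10 TN, 10 FP, 20 FN out of 100 applicants
acc_a = (60 + 10) / (60 + 10 + 10 + 20)   # 0.7

# Facet d: 40 TP, 10 TN, 10 FP, 40 FN out of 100 applicants
acc_d = (40 + 10) / (40 + 10 + 10 + 40)   # 0.5

# AD = ACC_a - ACC_d; positive, so potential bias against facet d
ad = acc_a - acc_d
```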
The range of values for AD for binary and multicategory facet labels is [-1, +1].

Positive values occur when the prediction accuracy for facet a is greater than that for facet d. This means that facet d suffers more from some combination of false positives (Type I errors) and false negatives (Type II errors), indicating a potential bias against the disfavored facet d.

Values near zero occur when the prediction accuracy for facet a is similar to that for facet d.

Negative values occur when the prediction accuracy for facet d is greater than that for facet a. This means that facet a suffers more from some combination of false positives (Type I errors) and false negatives (Type II errors), indicating a potential bias against the favored facet a.
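The three cases above can be summarized in a small helper. Note that the tolerance used to decide "near zero" is an illustrative assumption, not a prescribed threshold:

```python
def interpret_ad(ad, tol=0.05):
    """Map an AD value to its interpretation; tol is an illustrative cutoff, not a standard."""
    if ad > tol:
        return "potential bias against the disfavored facet d"
    if ad < -tol:
        return "potential bias against the favored facet a"
    return "accuracies are similar for both facets"
```

In practice, what counts as "near zero" depends on the application and should be chosen with the domain context in mind.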