This method can also be used to evaluate industry-standard AI fairness assessment tools, which compare approval rates across protected groups.
Imagine an AI system that decides who gets approved for a mortgage or who gets a job interview. Traditional fairness methods might only ensure that the same percentage of applicants from each group is approved.
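The "traditional" check described above, matching approval rates across groups, is commonly known as demographic parity. A minimal sketch of that check follows; the group names and outcome data are illustrative, not from the study:

```python
# Illustrative sketch of a demographic-parity check: compare approval
# rates across protected groups. Data and group labels are hypothetical.

def approval_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 approval outcomes."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0],  # 3 of 5 approved -> 0.6
    "group_b": [1, 0, 0, 1, 0],  # 2 of 5 approved -> 0.4
}
print(approval_rates(decisions))
print(demographic_parity_gap(decisions))
```

A tool enforcing this notion of fairness would flag the gap between the two groups' approval rates; the researchers' point is that equalizing such rates alone can miss other dimensions of welfare.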
"Our findings suggest that social welfare optimization can shed light on the intensely discussed question of how to achieve group fairness in AI," Leben said.