With this data, we were able to reconstruct Rotterdam’s welfare algorithm and see how it scores people. Doing so revealed that certain characteristics—being a parent, a woman, young, not fluent in Dutch, or struggling to find work—increase someone’s risk score. The algorithm classes single mothers like Imane as especially high risk. Experts who reviewed our findings expressed serious concerns that the system may have discriminated against people.
Are those patterns indicators of higher risk? If so, the fact that the resulting scores are concentrated in certain demographics isn’t discrimination; it’s reality. Or, if you prefer economic language, it’s the difference between taste discrimination and rational discrimination.
Do note: if AIs don’t reflect reality – even where reality produces different outcomes for different demographics – then AIs are useless.
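To make the rational-discrimination point concrete, here is a minimal simulation – entirely hypothetical, with made-up weights and distributions, not Rotterdam’s actual model. A scoring rule that never looks at group membership can still produce higher average scores for one group, simply because a legitimate risk factor (here, months unemployed) is distributed differently across groups:

```python
import random

random.seed(0)

def risk_score(months_unemployed, prior_violations):
    # Group-blind scoring rule: illustrative weights, chosen for this
    # sketch only. The score never sees which group a person belongs to.
    return 0.05 * months_unemployed + 0.2 * prior_violations

def simulate_person(group):
    # Assumption for illustration: group B averages longer
    # unemployment spells than group A.
    mean_unemp = 4 if group == "A" else 8
    months = max(0.0, random.gauss(mean_unemp, 2))
    violations = random.choice([0, 0, 0, 1])
    return risk_score(months, violations)

scores = {g: [simulate_person(g) for _ in range(10_000)] for g in ("A", "B")}
for g, s in scores.items():
    print(f"group {g}: mean risk score = {sum(s) / len(s):.3f}")
```

Group B ends up with a higher mean score even though the rule is group-blind: the disparity comes entirely from the input feature’s distribution, which is the statistical-discrimination scenario. Taste discrimination, by contrast, would mean scoring the group label itself.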