Some of the marginalized groups most likely to be harmed by AI are also most wary of it. A new study’s findings raise questions about equity and consent in technology design.
These findings are consistent with a growing body of research showing how AI systems often misclassify, perpetuate discrimination toward, or otherwise harm trans and disabled people. In particular, identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories. In doing so, AI systems simplify identities and can replicate and reinforce bias and discrimination – and people notice.
Makes sense.
These systems exist to sand off the rough edges of real-life artifacts and interactions, and these are people who've spent their whole lives being treated like an imperfection that just needs to be smoothed out.
Why would you not be wary?