Boston Globe | Even AI giants like Google can’t escape the impact of bias. In 2015, the image-recognition software in Google Photos tagged photos of dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users’ skin in their pictures. The company had dubbed it the “hotness” filter.
In these cases, the error grew from data sets that didn’t include enough dark-skinned people, which limited the machine’s ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he limits what the machine can learn and imbues it with whichever biases shape his own life.
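To make the mechanism concrete, here is a minimal, hypothetical sketch using synthetic data and invented group labels (no real images or people): a classifier trained on a data set that badly under-represents one group ends up far less accurate on that group, even though the training procedure itself treats every example the same way.

```python
# Hypothetical sketch: a model trained on data that under-represents one
# group makes more errors on that group. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # Both groups draw features from the same distribution, but the rule
    # linking features to the label differs, standing in for the kind of
    # within-group variation the model has to learn.
    X = rng.normal(size=(n, 2))
    y = (X[:, 1] > 0).astype(int) if flip else (X[:, 0] > 0).astype(int)
    return X, y

# Training data: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, flip=False)
Xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model scores well on the well-represented group and stays close to
# chance on the group it barely saw during training.
Xa_test, ya_test = make_group(2000, flip=False)
Xb_test, yb_test = make_group(2000, flip=True)
print("accuracy, well-represented group:", round(model.score(Xa_test, ya_test), 2))
print("accuracy, under-represented group:", round(model.score(Xb_test, yb_test), 2))
```

The gap comes entirely from what the model was shown during training, which is the point: the tool inherits the blind spots of its data.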
Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.
The faces of one in two adult Americans have been processed through facial recognition software, and law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they have with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided whether these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.
Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn’t consider a convict’s gender. Since women are less likely to reoffend, an algorithm that ignored gender likely overestimated recidivism risk for female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as journalist Julia Angwin reported in ProPublica, these algorithms rely on biased questionnaires to come to their determinations and yield flawed results.
A ProPublica study of the recidivism algorithm used in Broward County, Fla., which includes Fort Lauderdale, found that 23.5 percent of white defendants were labeled as being at an elevated risk of getting into trouble again but didn’t re-offend. Meanwhile, 44.9 percent of black defendants were labeled higher risk for future offenses but didn’t re-offend, showing how the scores err in a way that disadvantages black defendants and favors white ones.
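The figures ProPublica reported are group-specific false positive rates: among defendants who did not go on to re-offend, the share the algorithm nonetheless flagged as higher risk. The short sketch below shows the arithmetic; the tallies are invented so the percentages match those in the article and are not ProPublica’s underlying counts.

```python
# False positive rate by group: among people who did NOT re-offend,
# what share were nevertheless flagged as higher risk?
# Counts below are hypothetical, chosen to reproduce the article's figures.

def false_positive_rate(flagged_higher_risk, did_not_reoffend_total):
    # Share of non-re-offenders who were labeled higher risk anyway.
    return flagged_higher_risk / did_not_reoffend_total

white_flagged, white_total = 235, 1000   # hypothetical counts
black_flagged, black_total = 449, 1000   # hypothetical counts

print(f"white defendants: {false_positive_rate(white_flagged, white_total):.1%}")  # 23.5%
print(f"black defendants: {false_positive_rate(black_flagged, black_total):.1%}")  # 44.9%
```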
While the questionnaires don’t ask specifically about skin color, data scientists say they “back into race” by asking questions like: When was your first encounter with police?
The assumption is that someone who comes in contact with police as a young teenager is more prone to criminal activity than someone who doesn’t. But this hypothesis doesn’t take into consideration that policing practices vary, and therefore so does the police’s interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely to live than whites. This measure doesn’t gauge guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just scoring the individual’s behavior; it is also capturing the police’s behavior.
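One way to picture that proxy effect is a small, hypothetical simulation: two neighborhoods with identical behavior but different policing intensity, fed into an invented scoring rule that penalizes an early police encounter. The feature, the contact rates, and the weights below are all made up to illustrate the mechanism; they are not drawn from any real risk instrument.

```python
# Hypothetical proxy effect: two populations behave identically, but one is
# policed more heavily, so "early police contact" is more common there. A
# score that penalizes that feature then rates the heavily policed group as
# riskier on average, despite identical behavior.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# By construction, behavior is the same in both neighborhoods; only the
# chance of being stopped by police differs.
p_contact_heavy = 0.40   # stop-and-frisk area (invented rate)
p_contact_light = 0.10   # lightly policed area (invented rate)

early_contact_heavy = rng.random(n) < p_contact_heavy
early_contact_light = rng.random(n) < p_contact_light

def risk_score(early_contact):
    # Invented scoring rule: a flat base risk plus a penalty for early contact.
    return 0.2 + 0.5 * early_contact.astype(float)

print("mean score, heavily policed area:", round(risk_score(early_contact_heavy).mean(), 2))  # ~0.40
print("mean score, lightly policed area:", round(risk_score(early_contact_light).mean(), 2))  # ~0.25
```

The gap in average scores is driven entirely by policing intensity, which is exactly the concern: the score reflects the police’s behavior as much as the individual’s.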
“I’ve talked to prosecutors who say, ‘Well, it’s actually really handy to have these risk scores because you don’t have to take responsibility if someone gets out on bail and they shoot someone. It’s the machine, right?’” says Joi Ito, director of the Media Lab at MIT.