Kate Crawford studies the social and political implications of artificial intelligence. She is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. Her new book, Atlas of AI, looks at what it takes to make AI and what’s at stake as it reshapes our world.
You’ve written a book critical of AI but you work for a company that is among the leaders in its deployment. How do you square that circle?
I work in the research wing of Microsoft, which is a distinct organisation, separate from product development. Unusually, over its 30-year history, it has hired social scientists to look critically at how technologies are being built. Being on the inside, we are often able to see downsides early before systems are widely deployed. My book did not go through any pre-publication review – Microsoft Research does not require that – and my lab leaders support asking hard questions, even if the answers involve a critical assessment of current technological practices.
What’s the aim of the book?
We are commonly presented with this vision of AI that is abstract and immaterial. I wanted to show how AI is made in a wider sense – its natural resource costs, its labour processes, and its classificatory logics. To observe that in action I went to locations including mines, to see the extraction necessary from the Earth’s crust, and an Amazon fulfilment centre, to see the physical and psychological toll on workers of being under an algorithmic management system. My hope is that, by showing how AI systems work – by laying bare the structures of production and the material realities – we will have a more accurate account of the impacts, and it will invite more people into the conversation. These systems are being rolled out across a multitude of sectors without strong regulation, consent or democratic debate.
What should people know about how AI products are made?
We aren’t used to thinking about these systems in terms of the environmental costs. But saying, “Hey, Alexa, order me some toilet rolls,” invokes into being this chain of extraction, which goes all around the planet… We’ve got a long way to go before this is green technology. Also, systems might seem automated, but when we pull away the curtain we see large amounts of low-paid labour, everything from crowd work categorising data to the never-ending toil of shuffling Amazon boxes. AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.
Problems of bias have been well documented in AI technology. Can more data solve that?
Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors – women offered less credit by creditworthiness algorithms, black faces mislabelled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification, and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine-learning software casually categorise people into just one of two genders; label people according to their skin colour as one of five racial categories; and attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has become baked into the substrates of AI.