
Madhumita Murgia: “AI can do harm when people don’t have a voice”

Data labourers helping feed the artificial intelligence systems of large technology companies, women becoming victims of deepfake porn, poor communities being helped by AI medical diagnostics. These are some of the stories reported by Madhumita Murgia, Artificial Intelligence Editor of the Financial Times and an affiliate at the LSE Data Science Institute, in her book Code Dependent. In this Q&A, she spoke with Helena Vieira (LSE Business Review) about her findings.


Madhumita Murgia spoke about what it means to be human in a world changed by AI at a public event hosted by LSE’s Data Science Institute on 27 March.
You can listen to a podcast of the event on the LSE Player.

You travelled to different parts of the world and spoke to workers in the underbelly of AI. Can you describe what you found?

I travelled to speak to workers whom I call data labourers, a title that kind of encapsulates what they do. They do digital work, but it’s minute work that requires a lot of focus and concentration, which can take a toll on them. I went to Nairobi, Kenya, and Sofia, Bulgaria. I also spoke to workers in Argentina, quite a big spread.

On the plus side, these jobs are genuinely lifting a lot of workers out of poverty. In Nairobi, the firms mostly employ people from the Kibera settlement. Pretty much every worker I met had previously come from a manual job, whether that was selling chicken patties on the street or doing domestic or construction work. Many had been unable to pay for university and dropped out after school.

This job as a data labourer paid a minimum living wage, roughly the equivalent of $3 an hour, which in Kenyan currency is a decent salary. Some families were able to put their children through school or to get a better home, even if still in Kibera. They definitely improved their quality of life.

I also found that these workers didn't have agency in the way we would expect in the West, particularly for a digital job: a white-collar job, sitting at a computer, working in the supply chain of a technology that is ultimately worth billions, if not trillions, of dollars. They aren't really benefiting from the upside of the worth and value of AI technologies. They're being paid as if it were a call centre job, or treated like factory workers producing things like Gucci shoes: not necessarily in terms of disturbing labour practices, but in terms of what they're paid and how much agency and voice they have. They often don't even know who the client is and are not allowed to raise issues. There are limits and targets on how much labelling they need to do in an hour and how many breaks they can take in a day.

In Sofia, it was a bit different, because they were all freelancers working from home. There wasn't much pressure in terms of targets or anything like that. Most of the people I spoke to were women, refugees from wars mostly in the Middle East, supporting their families with this job but with the flexibility of working from home. I thought it was a very uplifting story. They were paid a global minimum wage of $4 per hour.

But there was a question of how much power they have to protest if they don't like something. Heba, from Iraq, has three children and a husband and supports them through this work. She was told she couldn't work at night because a shift system had been introduced, yet she often worked in the evenings because it suited her schedule. She felt this wasn't fair, raised it directly with the client, and was suspended from the platform for a month for contacting the client. They are muzzled in many ways, and there's work to do in bringing more equality and allowing workers to participate in the technology's upside. But people see benefits to it. They don't have many other options.

You talk about AI systems replacing HR departments, with algorithms deciding how much workers get paid or whether they get fired. Can you describe this situation?

I looked specifically at gig work, which here refers to delivery apps. The basic concept is that gig workers work for themselves. They're not employees of a company and can decide when they log on and off, so there's a sense of autonomy and freedom. But most of these apps use algorithms to determine many aspects of the job, including who gets assigned each job. The algorithm also decides what workers get paid, through dynamic pricing models. As customers, we might experience this as surge pricing when it's raining, for instance. Dynamic pricing is very opaque for both the customer and the worker. How exactly are their wages being calculated? What variables go into it? How can they maximise their own wages? This data is collected and kept mostly within the companies and used to power the algorithms.
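To make that opacity concrete, here is a minimal, purely hypothetical sketch of how a dynamic-pricing calculation might combine variables the worker never sees. The variable names, weights and rates are invented for illustration and do not describe any real platform's model.

```python
# Hypothetical dynamic-pricing sketch. None of these inputs or weights
# come from a real delivery platform; the point is that the courier only
# ever sees the final payout, never the formula.

def dynamic_fare(base_rate: float,
                 distance_km: float,
                 demand_ratio: float,          # open orders / available couriers
                 is_raining: bool,
                 acceptance_rate: float) -> float:
    """Return a payout built from inputs the courier cannot inspect."""
    surge = max(1.0, demand_ratio)             # high demand pushes the fare up
    weather_bonus = 1.2 if is_raining else 1.0
    # A small adjustment tied to the courier's own behaviour, standing in
    # for the kind of personalisation workers suspect but cannot verify.
    loyalty_factor = 0.9 + 0.1 * acceptance_rate
    return round(base_rate * distance_km * surge * weather_bonus * loyalty_factor, 2)

# The courier sees only the number, e.g. 9.63, not the variables behind it.
print(dynamic_fare(base_rate=1.5, distance_km=4.2,
                   demand_ratio=1.3, is_raining=True, acceptance_rate=0.8))
```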

The plus side is that a lot more people have access to this type of work. People are to some extent autonomous, because they can decide not to work during the day and only work at night, or whatever fits their schedule. I spoke to workers everywhere, and they valued some of that independence, which you don't get in a contracted, nine-to-five job.

But they increasingly realised that algorithms are controlling what they do. They don't have access to the data that would let them figure out how to maximise their earnings, or to information about how much work they do and what they're getting paid for it. They are just at the mercy of algorithmic systems, and it's an opaque equation. It's not like a contracted job, where you sign an agreement with your employer, you're told what you're getting paid and you know what's expected of you. The system doesn't tell them what they're doing right or wrong, or how they can improve or increase their income. It's outside their control.

Even though they were reliant on this income, many found this very frustrating. In my book I tell the story of Sami, who discovered completely by accident that he was being underpaid. He then built a tool to analyse whether this was the case for other drivers as well. It was a way to peek behind the curtain of these algorithmic systems and show that mistakes are being made. In this case, it was unclear whether it was a mistake or purposefully done; they never got to the bottom of it. If we don't understand how these wages are determined, then how do we know whether they're correct?
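The interview doesn't describe how Sami's tool actually worked, but the general approach it gestures at, checking what a platform paid against what its stated rate implies, can be sketched roughly as follows. The per-kilometre rate, tolerance and trip records below are all invented for illustration.

```python
# Illustrative sketch only (not Sami's actual tool): flag trips where the
# recorded payout falls short of what the platform's advertised rate implies.
# The rate and the trip records are made up.

STATED_RATE_PER_KM = 0.60   # assumed advertised payment per kilometre
TOLERANCE = 0.05            # allow small rounding differences

trips = [
    {"id": "a1", "distance_km": 5.2, "paid": 3.12},
    {"id": "a2", "distance_km": 7.8, "paid": 3.40},   # looks short-paid
    {"id": "a3", "distance_km": 2.1, "paid": 1.26},
]

for trip in trips:
    expected = trip["distance_km"] * STATED_RATE_PER_KM
    shortfall = expected - trip["paid"]
    if shortfall > TOLERANCE:
        print(f"Trip {trip['id']}: paid {trip['paid']:.2f}, "
              f"expected about {expected:.2f} (short by {shortfall:.2f})")
```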

Opacity between employer and employee can be really harmful, not only to autonomy but also to the agency to fight back if you're trying to improve your working conditions or your place in the workplace. It also widens the power gap between employer and employee and essentially creates a faceless workforce.

You cover the horror of deepfakes of women whose faces are used in pornography. What effect did this have on them?

This was one of the scary journeys that I went on for my book, and I think you've put your finger on it. The hardest bit is that there aren't many paths to recourse for fixing the problem. With generative AI and the ability to produce high-quality images that look basically real, there is widespread fear about political disinformation flooding our digital public square, especially in the context of elections. But research has shown that the overwhelming majority, something like 98 per cent, of the deepfakes produced online are pornography. And of those, I'd say 99 per cent are of women.

Yes, there are concerns about fake news around elections and the impact on democracy. But deepfake pornography is overwhelmingly targeted at women. Most of the victims are not celebrities, journalists or activists, but ordinary women who aren't in the public eye and have no idea this is happening. It's often done by men they know, acquaintances or colleagues, who have access to these easy-to-use technologies: as easy as texting a bot and getting back a naked photo. They're flooding the internet with these images.

Fake images are often indistinguishable from real ones. There's a visceral reaction when you see images of yourself in these violent, compromising positions, with men you've never met in your life. It fractures your identity. Even though you know rationally that it's not real, often that doesn't matter. The women I spoke to would dream about this. They went on anti-anxiety medication and came off social media. They got tattoos because they felt that was a way to have a private identity that nobody would know about. They changed their hair and their appearance because they were worried about being recognised.

When you have no control over your image, you don't know who has it or how widely it has spread. It's that same question of agency. How can you assert your voice as an individual? How can you change the narrative about yourself? In this particular case it's much harder than most. In most parts of the world it remains legal to do this.

The UK’s Online Safety Act has a provision on deepfakes, under which people can be told to remove deepfake images or face a fine. Until this year, when the Act came into effect, if you went to the police they wouldn’t be able to do anything, because the image wasn’t real. This is still the case in most parts of the world. Now awareness is rising, because this is happening to so many women.

The discussion about deepfakes tends to be around democracy and elections, and women’s issues get drowned out. But now there is more recognition of why it’s important to stop the distribution of these images. Enforcement is very hard: once something is on the internet, people can download it onto their own devices and then put it back online. It’s like playing whack-a-mole.

It’s really important to have cohesive policy solutions globally. We can’t find all the individuals; it has to be done at a platform level. And we need a way to identify that something has been digitally manipulated or created by AI, a watermark or an identifier that says to the world, “this is fake”.

You mentioned healthcare as one area in which AI seems to have genuinely life-changing potential. How’s that?

There’s a huge shortage of trained doctors in the UK and other parts of the world. For instance, the UK and the US have a shortage of trained radiologists who can read scans. It’s going to take decades to fill the pipeline with human specialists, and in the meantime we’re seeing a rising incidence of disease.

We have to look for other ways to innovate, not just to plug gaps in medical care but to enhance how we diagnose and treat illnesses around the world. AI has shown huge promise here. There are now lots of peer-reviewed studies showing how you can train algorithmic systems to read scans, whether that’s X-rays or CT and MRI scans, looking for various cancers. A UK company called Kheiron Medical has a tool called Mia, which reads scans looking for breast cancer, and it has been shown to be as good as the best radiologists in the country. Because the work is peer reviewed, you can see what the false positive and false negative rates are. You can then roll these tools out into practice, which is already happening.
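For readers unfamiliar with those terms, false positive and false negative rates fall straight out of a screening tool's confusion matrix. The counts below are invented for illustration; they are not Mia's published figures.

```python
# False positive / false negative rates from a confusion matrix.
# The counts are made up for illustration, not Kheiron's published results.

true_positives = 90    # cancers the model correctly flagged
false_negatives = 10   # cancers the model missed
true_negatives = 940   # healthy scans correctly cleared
false_positives = 60   # healthy scans wrongly flagged

false_negative_rate = false_negatives / (false_negatives + true_positives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"False negative rate: {false_negative_rate:.1%}")  # missed cancers
print(f"False positive rate: {false_positive_rate:.1%}")  # unnecessary recalls
```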

In my book I mention a doctor who is working with tribal communities in rural India, where there’s a high incidence of tuberculosis, a treatable disease. Yet people are dying from it because they don’t have access to diagnostics: they don’t know they have the disease, then spread it and die from it. This doctor is helping to train an AI tool that can screen people who don’t have access to doctors, and hopes to roll it out so they can get treated. You can see how this could really revolutionise how care is given in places that don’t have access.

No one really sees this as a replacement for today’s doctors, because we just don’t have enough of them anyway. It’s a tool to help doctors do more with the limited resources they have, or to plug gaps where there are no doctors. It can also augment care where we’re struggling, as with the NHS here. We all have family and friends who’ve waited months to be seen for conditions that end up being really serious.

When the police use predictive scores, people can get stigmatised. Can you explain how that happens?

AI has been talked about, built and implemented as a predictive policing tool for a long time. There have been tools that identify specific pockets, geographic areas and times at which more policing resources should be sent, based on data about where and when crimes occur.
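At its simplest, the "hotspot" logic described here amounts to counting past incidents by place and time and ranking the results. The sketch below uses invented incident records purely to illustrate that mechanism.

```python
# Illustrative sketch of hotspot-style prediction: count past incidents by
# area and hour, then rank the combinations. The records are invented.

from collections import Counter

incidents = [            # (area, hour of day)
    ("district_3", 22), ("district_3", 23), ("district_3", 22),
    ("district_1", 14), ("district_2", 22), ("district_3", 21),
]

counts = Counter(incidents)
# The top-ranked combinations get extra patrols, which is also how any bias
# in the historical data feeds forward into where police are sent next.
for (area, hour), n in counts.most_common(3):
    print(f"{area} around {hour}:00 -> {n} past incidents")
```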

But there are also more individualised policing tools, which look at data on defendants or on people who’ve been arrested for various crimes to help juries, prosecutors or judges decide, for instance, whether somebody should get bail or be given additional help. In the UK, for example, there’s a programme that allows some people who’ve been arrested to get extra assistance and do social work instead of staying in jail, and an algorithm is used to figure out who should get that opportunity.

A few years ago in Amsterdam, the mayor’s office decided to work with behavioural psychologists to build a tool that could predict which young people, in many cases children, would go on to commit high-impact crimes. They came up with two lists: the first of young people with a criminal record, though not necessarily for serious crimes; the second composed largely of siblings of those on the first list. The mayor’s office worried the siblings could follow in their brothers’ footsteps, even though they had never committed a crime.

The goal was obviously not to arrest them in advance. It was supposed to be a social service involving not just the police but also social workers and education workers, who would come in and figure out what each family needed in order to prevent these children from following a life of crime. It had good intentions but was implemented in a punitive way, because the police were actively involved. Even though these young people were supposed to be supported and cared for, it felt very coercive and punitive. In many cases, the parents weren’t informed and didn’t understand why their children were on the list. They often felt targeted.

Many of these families were headed by single mothers, and the overwhelming majority of the children were boys, mostly from immigrant communities, particularly the Moroccan community in Amsterdam, so it also had a racial undertone. Researchers spent time in the field with officers who were policing the children on the list and found a lot of racism towards these families, who were scared even though they wanted help and could have benefited from it. They were afraid to say anything and felt like they were being punished, now with AI systems involved.

The question that follows is: how do we implement these systems in a way that has a positive outcome and doesn’t harm the communities they’re supposed to help? This is a stark example of a system that was rolled out to support families in the city but ended up cutting them out. These children felt they had a target on their back, which instilled a sense of fear towards the police, really the opposite of the intended effect.

Good intentions and positive objectives get twisted when systems are implemented without empathy and without empowering the people they are being used on, who feel they have been cut out and have no voice or agency in what’s happening. That’s where harms start to multiply.

So, this is a story of human error and regulatory failures…

Exactly. Some people might read this and find it terrifying, which it can be, but I think the takeaway is that we don’t need to fix a very complex technology that many of us don’t understand. We can fix the things we do understand, which, as you say, are human error, regulatory failure and institutional bias.

Madhumita Murgia is the author of Code Dependent: Living in the Shadow of AI (Picador, 2024).


About the authors

Madhumita Murgia is Artificial Intelligence Editor of the Financial Times and an affiliate at the Data Science Institute at LSE.
Helena Vieira is the managing editor of LSE Business Review.

Article source: Madhumita Murgia: “AI can do harm when people don’t have a voice”, CC BY-NC-ND 2.0. This interview represents the views of the interviewee, not the position of LSE Business Review or the London School of Economics and Political Science.

Header image source: Provided by ©Madhumita Murgia. All rights reserved.


