
The Algorithms That Make Big Decisions About Your Life

Posted on 17 August 2020 by Thiruvenkatam Chinnagounder @tipsclear

Thousands of students in England are angry about the controversial use of an algorithm to determine this year's GCSE and A-level results.

They weren't able to take exams due to the lockdown, so the algorithm used school performance data from previous years to determine grades.

It meant that around 40% of this year's A-level results turned out to be lower than expected, which has a huge impact on what students are able to do next. The GCSE results will be released on Thursday.

There are many examples of algorithms that make important decisions about our life, without us necessarily knowing how or when they do so.

Here is a look at some of them.

Social media

In many ways, social media platforms are simply giant algorithms.

Basically, they process what interests you and then give you more, using as many data points as they can get their hands on.

Every "like", watch and click is stored. Most apps also collect more data, from web-browsing habits to geographic location. The idea is to predict the content you want and keep you scrolling - and it works.
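The idea can be sketched in a few lines. This is a toy illustration only: the interaction counts, signal weights and topics below are all invented, and real platforms use far richer signals and machine-learned models rather than a hand-tuned score.

```python
# Toy sketch of engagement-based ranking: score each candidate post by a
# user's stored interactions with similar content, then show the
# highest-scoring content first. All data and weights are invented.

# Hypothetical record of one user's past interactions, grouped by topic
interactions = {
    "cats":     {"likes": 12, "watches": 30, "clicks": 8},
    "politics": {"likes": 1,  "watches": 2,  "clicks": 1},
}

# Assumed weights for each signal type
weights = {"likes": 3.0, "watches": 1.0, "clicks": 2.0}

def score(topic):
    """Predicted interest in a topic, from stored interaction counts."""
    counts = interactions.get(topic, {})
    return sum(weights[k] * counts.get(k, 0) for k in weights)

candidate_posts = ["politics", "cats"]
feed = sorted(candidate_posts, key=score, reverse=True)
print(feed)  # cat content ranks first for this user
```

The same scores that decide which posts you see can equally decide which adverts you see, which is why the article notes that the data "can also personalize ads incredibly accurately".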

And those same algorithms that know you like a cute cat video are also used to sell you stuff.

Any data that social media companies collect about you can also personalize ads incredibly accurately.

But these algorithms can go seriously wrong. It has been shown that they push people towards hateful and extremist content. Extreme content simply does better than nuance on social media. And the algorithms know this.

Facebook's civil rights audit asked the company to do everything in its power to prevent its algorithm from "pushing people into self-reinforcing echo chambers of extremism."

And last month we reported how algorithms on online retail sites, designed to process what you want to buy, were promoting racist and hateful products.

Insurance

Whether it's home, car, health, or any other form of insurance, your insurer must somehow weigh the chances of something actually going wrong.

In many ways, the insurance industry pioneered using past data to determine future results - that's the foundation of the entire industry, according to Timandra Harkness, author of Big Data: Does Size Matter?

Getting a computer to do this was always going to be the next logical step.

"Algorithms can affect your life a lot and yet you as an individual don't necessarily get a lot of input," she says.

"We all know that if you move to a different postcode, your insurance goes up or down.

"It's not your fault, it's because other people were more or less likely to have been victims of crime, or accidents or whatever."

Innovations like the "black box" that can be installed in a car to monitor how an individual drives have helped lower the cost of car insurance for careful drivers who are in a high-risk group.

Could we see more personalized insurance quotes as algorithms learn more about our circumstances?

"Ultimately, the point of insurance is to share the risk, so everyone puts [money] in and people who need it take it out," says Timandra.

"We live in an unfair world, so any model you create will be unfair in one way or another."
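The tension Harkness describes can be shown with some simple arithmetic. The figures below are entirely invented, and the "risk multiplier" is a hypothetical stand-in for whatever an insurer's model infers about an individual.

```python
# Toy illustration of risk pooling: everyone pays a premium in, and the
# few who suffer a loss draw money out. All figures are invented.

policyholders = 1000
claim_probability = 0.02      # assume 2% of holders claim in a year
average_claim = 5000.0        # assumed average payout per claim

expected_claims = policyholders * claim_probability
total_payout = expected_claims * average_claim

# Break-even premium if the risk is shared equally across the pool
flat_premium = total_payout / policyholders
print(flat_premium)  # 100.0

# A "personalised" model instead scales premiums by estimated individual
# risk - e.g. from a black-box driving monitor - so low-risk members pay
# less and high-risk members pay more.
def personalised_premium(risk_multiplier):
    return flat_premium * risk_multiplier

print(personalised_premium(0.5))  # careful driver pays 50.0
print(personalised_premium(2.0))  # high-risk driver pays 200.0
```

The more precisely the algorithm prices each individual, the less the risk is actually shared - which is exactly the trade-off Harkness is pointing at.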

Health care

Artificial intelligence is making great strides in the ability to diagnose various conditions and even suggest treatment pathways.

A study published in January 2020 suggested an algorithm worked better than human doctors when it came to identifying breast cancer from mammograms.

And other successes include:

  • a tool that can predict ovarian cancer survival rates and help determine treatment choices
  • artificial intelligence from University College London, which identified patients who are most likely to skip appointments and therefore need reminders

However, all of this requires a great deal of patient data to train the programs, and that's, frankly, a pretty big can of worms.

In 2017, the UK Information Commissioner's Office ruled that the Royal Free NHS Foundation Trust had not done enough to safeguard patient data when it shared 1.6 million medical records with Google's AI division, DeepMind.

"There is a fine line between finding exciting new ways to improve care and moving ahead of patients' expectations," said DeepMind co-founder Mustafa Suleyman at the time.

Policing

Big data and machine learning have the potential to revolutionize policing.

In theory, algorithms have the power to deliver on the sci-fi promise of "predictive policing" - using data, such as where and when crimes occurred in the past, and by whom, to predict where to allocate police resources.

But that method can create algorithmic bias and even algorithmic racism.

"It's the same situation you have with exam grades," says Areeq Chowdhury of the WebRoots Democracy tech think tank.

"Why do you judge an individual based on what other people have historically done? The same communities are always overrepresented."

Earlier this year, the RUSI defence and security think tank released a report on algorithmic policing.

It raised concerns about the lack of national guidelines or impact assessments. It also called for more research into how these algorithms might exacerbate racism.

Facial recognition, used by UK police forces, including the Met, has also been criticized.

For example, there have been concerns that data going into facial recognition technology may make the algorithm racist.

The charge is that facial recognition cameras are more accurate in identifying white faces, because they have more data on white faces.
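That charge is easy to miss if a system is only judged by its overall accuracy. The sketch below uses invented labels and predictions to show how a per-group breakdown can reveal a disparity that an aggregate number hides.

```python
# Toy per-group accuracy check for a face-matching model, showing why
# testing on a diverse demographic matters. All results are invented:
# each tuple is (true_match, predicted_match, demographic_group).

results = [
    (True,  True,  "group_a"), (True,  True,  "group_a"),
    (False, False, "group_a"), (True,  True,  "group_a"),
    (True,  False, "group_b"), (False, True,  "group_b"),
    (True,  True,  "group_b"), (False, False, "group_b"),
]

def accuracy(group):
    """Fraction of correct predictions for one demographic group."""
    subset = [(t, p) for t, p, g in results if g == group]
    correct = sum(1 for t, p in subset if t == p)
    return correct / len(subset)

print(accuracy("group_a"))  # 1.0 - the model looks fine on this group
print(accuracy("group_b"))  # 0.5 - the same model fails half the time here
```

A false positive in `group_b` here is exactly the failure mode the article warns about: someone wrongly flagged as a match because the model was trained and tested on too narrow a set of faces.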

"The question is, are you testing it on a fairly diverse demographic of people?" says Areeq.

"What you don't want is a situation where some groups are misidentified as criminals because of the algorithm."
