“Biased? No, I am not!”
An implicit bias is a bias that you are not aware of. A person who has an implicit bias may believe they treat everyone equally, but their implicit biases may cause negative or positive associations that influence their behaviour. Research has shown that biases can result in differential treatment of patients based on, for instance, race, age, income, gender and language.
Certain implicit biases exist in healthcare, and these can have detrimental effects on the care a person receives. Very few professionals set out to harm their patients, but biases can nonetheless lead to some people receiving poorer treatment. Biases may also result in inaccurate or delayed diagnoses, which can cause stress, and stress in itself can worsen a condition. The Covid-19 pandemic, for example, resulted in delays and disruptions in care for cancer patients.
Willingness to examine possible biases
The world is complex, and we automatically categorise individuals by gender, race, body shape, and education. A problem is that once we have learned certain prejudices, they are hard to change. Even when there is evidence that supports the change or points to the contrary, we are often reluctant to change. An important step in minimising implicit, or automatic, biases is to provide conditions in which an individual can examine their own possible biases. Our brains are malleable, and information and conversations about implicit biases can help us explore our own biases, which in turn helps us change. Change requires a desire and intent to explore our experiences and acknowledge our own privileges. Searching for biased thinking patterns, and a willingness to change even when the process is hard, matters in many different situations in life.
Different types of machine learning algorithms
If you are working with machine learning, your biases can have serious consequences. Machine learning algorithms are programs that help us explore, analyse and find meaning in complex data sets. It is vital that programmers understand how potential biases can be introduced so that practices and safeguards can be put into place. ML algorithms are created by humans, and humans make the key decisions about how the system works and what it produces. Computers refine and improve on the data, yet if the information that is used is biased, then the end result will have these biases.
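The point that biased input produces biased output can be made concrete with a minimal sketch. The data below is entirely hypothetical: it imagines historical referral decisions in which patients in group B were referred less often, and shows that a model "trained" on those decisions simply reproduces the disparity.

```python
# Hypothetical, illustrative data: a model trained on biased
# historical decisions reproduces those decisions.
from collections import defaultdict

# Invented records: (patient_group, was_referred)
historical = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 40 + [("B", False)] * 60)

# "Training": learn the referral rate per group from the biased data.
by_group = defaultdict(list)
for group, referred in historical:
    by_group[group].append(referred)
learned = {g: sum(v) / len(v) for g, v in by_group.items()}

print(learned)  # group B inherits the lower historical referral rate
```

The model has done nothing wrong computationally; it has faithfully learned a pattern that was already unfair.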
A programmer has greater control over the process when supervised machine learning algorithms are used. Here the designer and engineer decide which data to feed into the system, and a specific outcome is predicted and expected. This means that the machine must process the data in a certain way and produce solutions in the desired manner. Supervised machine learning classifies the data into different classes or categories and then, during a regression stage, identifies patterns and information. Based on this, the machine predicts certain outcomes.
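A minimal sketch of the supervised setting, using a 1-nearest-neighbour classifier on invented labelled data: the engineer supplies both the examples and the expected labels, which is exactly the control described above.

```python
# Sketch of supervised learning: the engineer chooses the labelled
# training data (feature, label), and the machine predicts labels
# for new inputs. Data and labels here are hypothetical.
def nearest_neighbour(train, x):
    """Predict the label of x from the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Labelled training data chosen by the engineer (the "supervision").
train = [(1.0, "low risk"), (2.0, "low risk"),
         (8.0, "high risk"), (9.0, "high risk")]

print(nearest_neighbour(train, 1.5))   # "low risk"
print(nearest_neighbour(train, 8.5))   # "high risk"
```

Because the labels come from humans, any bias in how they were assigned is passed straight into the predictions.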
In contrast, in unsupervised ML the result itself is not defined, which means that the machine has to define and deliver the result. Often, in healthcare, a mixture of these two is used: semi-supervised machine learning algorithms. This approach is used when only a limited set of data is available to train the system, meaning the system is only partially trained. The machine combines so-called pseudo data and labelled data to make predictions. Pseudo data is data labelled by the machine itself during partial training.
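One common form of semi-supervised learning, pseudo-labelling (self-training), can be sketched as follows. The threshold model and all the numbers are hypothetical: a model is partially trained on a small labelled set, assigns pseudo-labels to the unlabelled data, and then both sets are combined for further training.

```python
# Sketch of pseudo-labelling: train on a small labelled set, label
# the unlabelled data with the partial model, then retrain on both.
def fit_threshold(labelled):
    """Learn a decision threshold as the midpoint of the class means."""
    lo = [x for x, y in labelled if y == 0]
    hi = [x for x, y in labelled if y == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2

labelled = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]  # small labelled set
unlabelled = [1.5, 2.5, 7.5, 8.5]

t = fit_threshold(labelled)                      # partial training
pseudo = [(x, int(x > t)) for x in unlabelled]   # machine-made pseudo-labels
combined = labelled + pseudo                     # retrain on both
t2 = fit_threshold(combined)
print(t, t2)
```

Note the risk this illustrates: any bias in the small labelled set is copied into the pseudo-labels and then amplified by retraining.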
It is vital that ML algorithms are continuously monitored. Machine learning may introduce biases later on, and tests need to be run to ensure that new biases are not appearing. AI teams need training and support to ensure that they are aware of biases in the specific area where they are using ML. ML will only become as strong and efficient as our willingness to take on the perspective of others when designing algorithms.
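One simple form such continuous monitoring could take is a disparity check: compare the model's positive-prediction rate across groups on each new batch and flag the batch if the gap exceeds a chosen threshold. The function name, threshold and data below are all illustrative assumptions, not a standard metric.

```python
# Hypothetical monitoring check: flag a batch of predictions if the
# positive-prediction rate differs too much between groups.
def disparity_check(predictions, max_gap=0.1):
    """predictions: list of (group, predicted_positive) pairs."""
    totals, positives = {}, {}
    for group, pos in predictions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(pos)
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

ok, rates = disparity_check([("A", True)] * 7 + [("A", False)] * 3
                            + [("B", True)] * 4 + [("B", False)] * 6)
print(ok, rates)   # a gap of 0.3 exceeds the 0.1 threshold, so the batch is flagged
```

Run periodically on fresh data, a check like this can catch biases that creep in after deployment, which is exactly the monitoring the paragraph calls for.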
Our biases shape the way we see the world.