Instagram algorithm bias


To begin, an algorithm is a sequence of specific rules or computer-implementable instructions for calculating and solving problems. Social media websites and web search engines use algorithms to manage, highlight and control large swaths of data. We tend to assume algorithms are free of bias because a computer runs them, but this is a misconception. Algorithms require a person to program them, and that person can hold explicit or implicit biases. Additionally, while a person can notice a problem and change the way they respond, algorithms must be programmed to learn, and this in itself can cause problems because algorithms are often not designed to safeguard individuals. More generally, people are drawn to, and spend more time engaging with, shocking content. In that case, the algorithm will keep feeding the individual shocking content, because the screen time the individual spends on the topic is taken as evidence of interest in it.
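To make that feedback loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the posts, the “shock” factor standing in for how gripping content is, and the watch-time signal. It shows how a ranker that feeds observed attention straight back into its scores ends up serving more and more of the most shocking topic.

    import random

    # Hypothetical catalogue of posts: "shock" stands in for how gripping
    # the content is, which raises how long a user lingers on it.
    posts = [
        {"topic": "cooking", "shock": 0.1},
        {"topic": "travel", "shock": 0.2},
        {"topic": "outrage", "shock": 0.9},
    ]

    # Per-topic interest scores the ranker learns from watch time.
    interest = {p["topic"]: 1.0 for p in posts}

    def watch_time(post):
        # Stand-in for user behaviour: shocking content holds attention longer.
        return random.random() + post["shock"]

    for _ in range(1000):
        # Show posts roughly in proportion to learned interest...
        weights = [interest[p["topic"]] for p in posts]
        shown = random.choices(posts, weights=weights)[0]
        # ...and feed the observed watch time straight back into the score.
        interest[shown["topic"]] += watch_time(shown)

    print(interest)  # "outrage" ends up with by far the highest score

Because shocking posts earn more watch time per impression, their scores compound faster, and the loop never questions whether that attention reflects genuine interest.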

Instagram and TikTok have particularly strong algorithms which can very easily produce bias. Instagram recently chose to explain its algorithm: “By 2016, people were missing 70% of all their posts in Feed, including almost half of posts from their close connections. So we developed and introduced a Feed that ranked posts based on what you care about most” (Liberty, 2021). Your main feed and stories are therefore ranked according to which posts you engage with and your relationship with the creator. The Explore page is built from posts similar to the ones you engage with, and Reels are ordered by what Instagram predicts you will enjoy, based on what users with engagement patterns similar to yours enjoy. The problem is that if you follow one or two accounts with a particular viewpoint or aim and engage with their posts even occasionally, the algorithm will slowly suggest more accounts and show you more content of the same kind. This can result in what is called an echo chamber, where you only see a certain type of content.
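Instagram has not published its actual formula, so the sketch below is only a toy version of the ranking described above: a score blending how much you engage with a post’s topic with how close you are to its creator. All of the weights and signals are assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        topic: str

    # Invented user signals: engagement per topic, closeness per creator.
    user = {
        "topic_affinity": {"fitness": 0.9, "news": 0.2},
        "relationship": {"close_friend": 0.8, "stranger": 0.05},
    }

    def score(post, user):
        # Blend engagement history with relationship strength; the 0.6/0.4
        # split is arbitrary, not Instagram's real weighting.
        topic_affinity = user["topic_affinity"].get(post.topic, 0.0)
        relationship = user["relationship"].get(post.author, 0.0)
        return 0.6 * topic_affinity + 0.4 * relationship

    feed = [Post("close_friend", "news"), Post("stranger", "fitness")]
    feed.sort(key=lambda p: score(p, user), reverse=True)
    print([(p.author, p.topic) for p in feed])

The echo chamber follows from the same mechanism: every interaction raises a topic’s affinity, which ranks similar posts higher, which invites more interactions with that topic.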

However, this is not the only way that Instagram’s algorithm has problems. Simple shortcuts in coding an algorithm, or biases coded into it, can affect other elements related to what is deemed acceptable and unacceptable on Instagram. A recent debate around Instagram influencers has revealed another discriminatory algorithm. Plus-size influencers argue that the algorithm which moderates content and blocks users who violate community standards specifically targets plus-size influencers. Social media users have argued that plus-size account holders, and particularly plus-size account holders from ethnic minorities, receive far less reach than thin white women do. An algorithm often reflects, and aligns the norm to, the preferences of its creator. An experiment was done with plus-size models on Instagram to compare the censorship of their bodies with that of thin bodies. It found that a bot in Instagram’s algorithm measures the ratio of exposed skin to clothing, and if it detects anything above 60 per cent, the image is considered sexually explicit. The influencers argue that the same photo of a woman in a bathing suit will be flagged as sexually explicit on a plus-size model and deemed completely acceptable on a thinner model. They therefore argue that this algorithm is inherently fatphobic.
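The 60 per cent figure comes from the influencers’ experiment, and Instagram has not confirmed how, or whether, such a check is implemented, so the snippet below is only a schematic of the rule as they describe it. The pixel counts are fabricated; they illustrate why a single fixed cutoff can treat body types unequally when the same garment covers a larger body.

    SKIN_RATIO_LIMIT = 0.60  # threshold reported by the influencers, unconfirmed

    def is_flagged(skin_pixels, body_pixels):
        # Flag the image if exposed skin exceeds a fixed share of the body.
        return skin_pixels / body_pixels > SKIN_RATIO_LIMIT

    # The same bathing suit covers roughly the same absolute area of fabric,
    # so a larger body shows proportionally more skin. Numbers are invented.
    thin_model = {"skin": 550, "body": 1000}    # 55% skin
    plus_model = {"skin": 1300, "body": 2000}   # 65% skin

    for name, model in [("thin", thin_model), ("plus-size", plus_model)]:
        print(name, "flagged:", is_flagged(model["skin"], model["body"]))
    # thin flagged: False / plus-size flagged: True, for the same swimsuit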

Beyond the engagement-based algorithm and rules-based judgements, Instagram came under fire recently when it was discovered that a preference for original content over re-shared posts was biasing news feeds. The result was that newsfeed items, stories and reels favoured pro-Israeli content over pro-Palestinian content in the most recent eruption of violence. Instagram argues that it favours original content in stories and newsfeeds to prevent repetition for the user, and that the impact of this design choice on some types of posts was an unintended side effect rather than an attempt to censor one side. The recent war on Gaza saw a record number of people posting and spreading messages of support on both sides. On the pro-Palestine side, however, many people were re-sharing content from other accounts, and these shows of support were minimised or muted by the algorithm in favour of original content. Instagram has said directly that it recognises the issue and will work on repairing it. The problem is being treated as a failure of large-scale automated moderation, rather than an attempt by individuals to restrict content.
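As a sketch of the design choice rather than Instagram’s actual code, the snippet below shows how a blanket down-weight on re-shares can mute one side of a conflict even when engagement is identical. The penalty multiplier is an assumption.

    RESHARE_PENALTY = 0.5  # hypothetical down-weight for re-shared content

    def ranked_score(base_score, is_reshare):
        return base_score * (RESHARE_PENALTY if is_reshare else 1.0)

    stories = [
        {"id": "original_post", "base": 0.8, "reshare": False},
        {"id": "reshared_solidarity_post", "base": 0.8, "reshare": True},
    ]

    for story in stories:
        story["score"] = ranked_score(story["base"], story["reshare"])
    stories.sort(key=lambda s: s["score"], reverse=True)
    print([(s["id"], round(s["score"], 2)) for s in stories])
    # Equal engagement, but the re-share sinks: when one side's supporters
    # mostly re-share, their messages are muted as a side effect.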

Recent research reveals that algorithmic decision-making has no awareness of context and merely looks for similarities. This is highly problematic and runs the risk of replicating and amplifying human bias, and it becomes more so as algorithms are used more and more in daily life. For example, judges in the U.S. have used algorithmic software to determine bail and sentencing limits using automated risk assessments drawn from similar past cases. Such technology leaves no space for mistakes, because the computer treats all data as credible. Over time, as mistakes and biases accumulate, particular groups may be discriminated against, such as people of colour receiving longer sentences or higher bail requirements. The underlying data can easily be unrepresentative, incomplete, or a reflection of historical inequalities, and the result is an algorithm that repeats these mistakes systematically. Furthermore, the data fed into machine-learning facial recognition software comes more often from white faces. This means the software is more accurate at detecting white faces and prone to mistakes when recognising other ethnicities. If unchecked, the dominance of algorithms over human decision-making and the chance of exacerbated bias could have serious implications for the rights and freedoms of millions of people worldwide. Before long, we could all be living in a real-life dystopian sci-fi movie.
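A tiny, fully fabricated example makes the “similarity without context” point concrete. The nearest-neighbour “risk assessment” below treats group membership as just another feature, so two otherwise identical defendants receive different risk labels purely because the historical labels encode bias.

    # Fabricated historical cases: (prior_offences, age, group, risk_label).
    # The skewed labels stand in for historically unequal outcomes.
    historical_cases = [
        (0, 25, "A", "high"),
        (0, 26, "B", "low"),
        (1, 30, "A", "high"),
        (1, 31, "B", "low"),
    ]

    def similarity(case, prior, age, group):
        score = -abs(case[0] - prior) - abs(case[1] - age) / 10
        if case[2] == group:  # group is treated as just another feature
            score += 1.0
        return score

    def predict_risk(prior, age, group):
        # No context, no causation: copy the label of the most similar case.
        best = max(historical_cases, key=lambda c: similarity(c, prior, age, group))
        return best[3]

    print(predict_risk(0, 25, "A"))  # "high"
    print(predict_risk(0, 25, "B"))  # "low" -- identical except for group

Retrain a system like this on its own outputs and the disparity only hardens, which is exactly the accumulation problem described above.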

Melody Waterworth


References:

Amin, F., 2021. The growing criticism over Instagram’s algorithm bias – CityNews Toronto. [online] CityNews Toronto. Available at: Link [Accessed 30 August 2021].

BBC News, 2021. Gaza conflict: Instagram changes algorithm after alleged bias. [online] BBC News. Available at: Link [Accessed 30 August 2021].

Hutchinson, A., 2020. Instagram Provides an Update on Its Efforts to Address Systemic Bias Within Its Systems. [online] Social Media Today. Available at: Link [Accessed 30 August 2021].

Liberty, S., 2021. Please Explain: Instagram’s algorithms and unconscious bias – AdNews. [online] Adnews.com.au. Available at: Link [Accessed 30 August 2021].
