
Digital public: looking at what algorithms actually do

Message from the Unseen World, an installation of a Turing-inspired algorithm reciting a poem by Nick Drake. Roger Marks/Flickr, CC BY

The development and expansion of today’s communications platforms have led to a radical change in how public discourse is conducted and public opinion formed. In particular, the traditional boundary between personal and public communication has disappeared.

A prime example is a 2017 case involving the American actor William Shatner – best known for playing Captain Kirk in the 1960s TV series Star Trek – who tweeted about the organisation Autism Speaks, known for its claims that autism is caused by vaccines. Among others, David Gorski, an oncologist at Wayne State University in Detroit who advocates for evidence-based interventions, replied to Shatner’s tweet and explained why Autism Speaks is a controversial organisation. In response, Shatner searched for Gorski’s name on Google and shared articles about him from a conspiracy-oriented website called TruthWiki. Asked why he had not read and linked Gorski’s Wikipedia entry, Shatner responded that TruthWiki was higher up in his Google search results. You can find it “all on Google,” he maintained, as if that in itself were a sign of high quality.

Google and other platforms are incredibly powerful tools that allow all of us – Shatner included – to locate information in the blink of an eye. To do so they use computer algorithms that measure “relevance”, but their standards often do not correspond to the criteria that reputable journalists or researchers would apply.

Custom-fitted ‘relevance’

Algorithms work mostly descriptively and individually. For example, they adjust relevance for a user based on the links he or she has clicked in the past. Yet many users assume the results are normative (“higher up in the Google results”). In the Shatner/Gorski case, the assertion of a correlation between autism and vaccines encouraged a small but highly motivated user group in its online activities and ensured that a significant divergence arose between content quality and “relevance” as determined by Google’s algorithms.
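To make the descriptive character of such ranking concrete, here is a minimal sketch in Python. Everything in it is a hypothetical toy, not Google’s actual system: it simply boosts results from domains the user has clicked before, so a frequently visited conspiracy site can outrank an encyclopedia regardless of content quality.

```python
from collections import Counter

def personalised_rank(results, click_history):
    """Re-rank search results by the user's own click history.

    `results` is a list of (url, base_score) pairs from a generic ranker;
    `click_history` is a list of domains the user clicked in the past.
    Purely descriptive: popularity with *this* user, not accuracy,
    decides the order. Weights are invented for illustration.
    """
    clicks = Counter(click_history)

    def score(item):
        url, base_score = item
        domain = url.split("/")[2]                # crude domain extraction
        return base_score + 0.5 * clicks[domain]  # hypothetical boost per past click

    return sorted(results, key=score, reverse=True)

# A user who habitually clicks truthwiki.example sees it promoted
# above wikipedia.org even when the base scores say otherwise.
results = [("https://en.wikipedia.org/wiki/David_Gorski", 1.0),
           ("https://truthwiki.example/david-gorski", 0.8)]
history = ["truthwiki.example"] * 3
print(personalised_rank(results, history))
```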

This is not simply a matter of a handful of telling cases. Because of their ubiquity, so-called intermediaries such as Google and Facebook now influence how public opinion is formed. 57% of German Internet users get their information about politics and social affairs from search engines or social networks. And even though the share of those who say social networks are their most important source of news is relatively small – 6% of all Internet users – it is considerably higher among younger users.

The most important intermediaries for information, according to German users. Kantar TNS, Berlin; in Ecke, 2016

As researchers at the Hamburg-based Hans Bredow Institute put it in 2016, the formation of public opinion is “no longer conceivable without intermediaries”.

Maximising engagement

The design principles used by intermediaries are leading to a structural change in public discourse. Anyone can now publish whatever they like, but not everyone will find an audience. Attention is generated only when people interact with algorithmic decision-making (ADM) processes. ADM processes determine the individual relevance of content items on social networks such as Facebook and select the items to be displayed to each user. In assembling an individual user’s feed, Facebook examines which content that person and his or her friends prefer or hide. Both signals are based on actions the user takes deliberately.

Facebook also undoubtedly deploys signals that users are not consciously aware of sending, such as the amount of time they view a certain entry in the feed. Users who spend more time on any one item signal approval without explicitly doing so. ADM systems also play a significant role in other areas, such as assisting in legal matters or determining where and when police officers are on duty.
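As a purely hypothetical illustration of how explicit and implicit signals might be combined, consider the Python sketch below; the field names and weights are invented for the example, since Facebook’s actual model is not public.

```python
def engagement_score(item):
    """Combine explicit and implicit signals into one relevance score.

    Explicit signals (likes, hides) require a deliberate act; dwell time
    is sent without the user consciously doing anything. All weights
    are illustrative, not any platform's real parameters.
    """
    score = 0.0
    score += 2.0 * item["likes_from_friends"]   # explicit approval
    score -= 3.0 * item["hides_from_friends"]   # explicit rejection
    score += 0.1 * item["dwell_seconds"]        # implicit: lingering reads as interest
    return score

feed = [
    {"id": "a", "likes_from_friends": 1, "hides_from_friends": 0, "dwell_seconds": 4},
    {"id": "b", "likes_from_friends": 0, "hides_from_friends": 0, "dwell_seconds": 45},
]
# Item "b" ranks first on dwell time alone, although nobody explicitly liked it.
print(sorted(feed, key=engagement_score, reverse=True))
```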

There is much less diversity among intermediaries than among editorially curated media. Even though each user of today’s major intermediaries receives an individually tailored selection, the same selection principles are applied to all users, and these are controlled by centralised curators. The new, crucial role played by users’ reactions and ADM processes is that together they determine how much attention content receives once it is disseminated.

Negative emotions and cognitive distortions

Studies of networking platforms show that content that rouses emotion is commented on and shared most often – and above all when negative emotions are involved.

Such polarising effects seem to depend on a number of additional factors, such as a country’s electoral system. Societies with “first past the post” systems such as the United States are potentially more vulnerable to extreme political polarisation. In countries with proportional systems, institutionalised multiparty structures and ruling coalitions tend to balance out competing interests.

Video: How YouTube’s algorithm can distort reality.

Existing societal polarisation presumably both influences and is influenced by the algorithmic ranking of media content. A 2016 study published in the Proceedings of the National Academy of Sciences indicates that Facebook users who believe in conspiracy theories tend over time to gravitate toward communities of like-minded conspiracy theorists. This process is possibly intensified by algorithms that increasingly present them with content “relevant” to their views. Such systems could in fact foster so-called echo chambers among people with extremist views.
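The feedback loop described here can be made concrete with a toy simulation, under admittedly crude assumptions (a single opinion axis, a ranker that favours belief-confirming items); none of this models any real platform.

```python
import random

random.seed(42)

def simulate_feedback_loop(steps=200, filter_strength=0.9):
    """Toy model of a relevance feedback loop on one opinion axis.

    A user holds a belief in [-1, 1]. The ranker offers candidate items,
    but with probability `filter_strength` it picks the candidate
    closest to the user's current belief; the belief then drifts toward
    whatever is shown. All parameters are invented.
    """
    belief = 0.2                       # mild initial leaning
    for _ in range(steps):
        candidates = [random.uniform(-1, 1) for _ in range(5)]
        if random.random() < filter_strength:
            shown = min(candidates, key=lambda c: abs(c - belief))  # "relevant" pick
        else:
            shown = random.choice(candidates)                       # diverse pick
        belief += 0.05 * (shown - belief)  # belief drifts toward the feed
    return belief

# With no personalisation the mild leaning washes out toward 0;
# with strong personalisation the feed keeps confirming it.
for strength in (0.0, 0.5, 0.95):
    print(strength, round(simulate_feedback_loop(filter_strength=strength), 2))
```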

Below are three aspects of intermediary platforms that can influence the formation of individual and public opinions:

  • Intermediaries measure engagement through users’ automatic, impulsive reactions. They use numerous variables to calculate relevance, ranging from basic behavioural metrics such as scrolling speed or the duration of page views to the level of interaction among multiple users in a social network. When someone with whom a user has repeatedly communicated on Facebook posts content, the probability is higher that the user will be shown this content than if it came from someone with whom the user has never truly interacted (a toy version of this tie-strength weighting appears after this list).

  • Intermediaries constantly change the variables they measure. The metrics signalling relevance are potentially problematic in themselves. Platform operators are hesitant to provide details of their metrics, both for competitive reasons and because they change them constantly. Google and Facebook alter their systems continuously; the operators experiment with and tweak almost every aspect of the user interface and other platform features to achieve specific goals such as increased interactivity.

  • Intermediaries with the greatest reach promote unconsidered behaviour. Clicking on a “like” button or a link demands no cognitive effort, and many users are evidently happy to indulge this lack of effort. Empirical studies by the French National Institute for Computer Science (INRIA) and Columbia University suggest that many articles forwarded with a click to a user’s circle of friends could not possibly have been read. Users thus disseminate media content after having seen only the headline and introduction. To some extent they deceive the algorithm and, with it, their “friends and followers” into believing that they have engaged with the text.
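As promised above, here is a toy version of the tie-strength weighting from the first point; the formula and the constant `k` are invented purely for illustration and do not describe any platform’s real mechanism.

```python
def show_probability(base_prob, interactions_with_author, k=0.2):
    """Hypothetical tie-strength boost: the more often a user has
    interacted with an item's author, the likelier the item is shown.

    `base_prob` is the probability an average post would be shown;
    the boost saturates at 1.0. Formula and constant are illustrative.
    """
    return min(1.0, base_prob * (1 + k * interactions_with_author))

# A close contact's post (30 past interactions) versus a stranger's:
print(show_probability(0.10, interactions_with_author=30))  # 0.7
print(show_probability(0.10, interactions_with_author=0))   # 0.1
```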

The ease of interaction also promotes cognitive distortions that social psychologists have known about for years. A prime example is the “availability” heuristic: if an event or memory can easily be recalled, it is assumed to be particularly probable or common. Users repeatedly encounter media content that was forwarded on the strength of its headline alone, and they later remember it as “true” or “likely”, even when the text itself points out that the headline is a grotesque exaggeration or simply misleading.

The need for diversity and transparency

Ensuring a diversity of media in the public sphere means ensuring that the ADM processes that assess relevance are diverse as well. Algorithms that rank content and personalise its presentation are at the heart of the complex, interdependent process underlying digital discourse. To bring transparency to ADM processes, we need to:

  • Make platforms and their impacts more open to external researchers.

  • Promote diversity among algorithmic processes.

  • Establish a code of ethics for developers.

  • Make users more aware of the mechanisms now being used to influence public discourse.

Organisations working for this kind of transparency include AlgorithmWatch, based in Germany, and the US media watchdog ProPublica, both of which have published a number of donation-funded studies and articles on the issue.

An unbiased understanding of the real social and political consequences of algorithmic ranking, combined with industry self-regulation and legislative measures, could make it possible to identify and counter dangers early on.
