In a recent article for The Atlantic, Adrienne LaFrance described Facebook as a doomsday machine: “a device built with the sole purpose of destroying all human life.” In the Netflix documentary The Social Dilemma, the filmmakers imagine a digital control room where engineers press buttons and turn dials to manipulate a teenage boy through his smartphone. In her book The Age of Surveillance Capitalism, Harvard social psychologist Shoshana Zuboff paints a picture of a world in which tech companies have constructed a massive system of surveillance that allows them to manipulate people’s attitudes, opinions and desires.
In each of these dystopian depictions, people are portrayed as powerless victims, robbed of their free will. Humans have become the playthings of manipulative algorithmic systems. But is this really true? Have the machines really taken over?
It is alleged that social media fuels polarisation, exploits human weaknesses and insecurities, and creates echo chambers where everyone gets their own slice of reality, eroding the public sphere and the understanding of common facts. And, worse still, this is all done intentionally in a relentless pursuit of profit.
At the heart of many of the concerns is an assumption that in the relationship between human beings and complex automated systems, we are not the ones in control. Human agency has been eroded. Or, as Joanna Stern declared in the Wall Street Journal in January, we’ve “lost control of what we see, read — and even think — to the biggest social-media companies”.
Defenders of social media have often ignored or belittled these criticisms — hoping that the march of technology would sweep them aside. This is a mistake: technology must serve society, not the other way around. Faced with opaque systems operated by wealthy global companies, it is hardly surprising that many assume the lack of transparency exists to serve the interests of technology elites and not users. In the long run, people are only going to feel comfortable with these algorithmic systems if they have more visibility into how they work and then have the ability to exercise more informed control over them. Companies like Facebook need to be frank about how the relationship between you and their major algorithms really works. And we need to give you more control over how, and even whether, they work for you.
Some critics seem to think social media is a temporary mistake in the evolution of technology — and that once we’ve come to our collective senses, Facebook and other platforms will collapse and we will revert to previous modes of communication. This is a profound misreading of the situation — as inaccurate as the December 2000 Daily Mail headline declaring the internet “may just be a passing fad”. Even if Facebook ceased to exist, social media wouldn’t be — couldn’t be — uninvented. Data-driven technologies, including those that use Artificial Intelligence (AI), are here to stay. Personalised digital advertising not only allows billions of people to use social media for free, it is also more useful to consumers than untargeted, low-relevance advertising.
Data-driven personalised services like social media have empowered people with the means to express themselves and to communicate with others on an unprecedented scale. And they have put tools into the hands of millions of small businesses around the world which were previously available only to the largest corporations. Turning the clock back to some false sepia-tinted yesteryear — before personalised advertising, before algorithmic content ranking, before the grassroots freedoms of the internet challenged the powers that be — would forfeit so many benefits to society.
But that does not mean the concerns about how humans and algorithmic systems interact should be dismissed. There are clearly problems to be fixed, and questions to be answered. The internet needs new rules — designed and agreed by democratically elected institutions — and technology companies need to make sure their products and practices are designed in a responsible way that takes into account their potential impact on society. That starts — but by no means ends — with putting people, not machines, more firmly in charge.
It takes two to tango
Imagine you’re on your way home when you get a call from your partner. They tell you the fridge is empty and ask you to pick up a few things on the way. If you choose the ingredients, they’ll cook dinner. So you swing by the supermarket and fill a basket with a dozen items. Of course, you only choose things you’d be happy to eat — maybe you choose pasta but not rice, tomatoes but not mushrooms. When you get home, you unpack the bag in the kitchen and your partner gets on with the cooking — deciding what meal to make, which of the ingredients to use, and in what amounts. When you sit down at the table, the dinner in front of you is the product of a joint effort: your decisions at the supermarket and your partner’s in the kitchen.
The relationship between internet users and the algorithms that present them with personalised content is surprisingly similar. Of course, no analogy is perfect and it shouldn’t be taken literally. There are other people who do everything from producing the food to designing the packaging and arranging the supermarket shelves, all of whose actions impact the final meal. But ultimately, content ranking is a dynamic partnership between people and algorithms. On Facebook, it takes two to tango.
In a recent speech, the executive vice president of the European Commission, Margrethe Vestager, compared social media to the movie The Truman Show. In it, Jim Carrey’s Truman has no agency. He is the unwitting star of a reality TV show, where his entire world is fabricated and manipulated by a television production company. But this comparison doesn’t do justice to users of social media. You are an active participant in the experience. The personalised “world” of your Facebook News Feed is shaped heavily by your choices and actions. It is made up primarily of content from the friends and family you choose to connect to on the platform, the Facebook Pages you choose to follow, and the Facebook Groups you choose to join. Ranking is then the process of using algorithms to order that content.
This is the magic of social media, the thing that differentiates it from older forms of media. There is no editor dictating the front-page headline millions will read on Facebook. Instead, there are billions of front pages, each personalised to our individual tastes and preferences, and each reflecting our unique network of friends, pages and groups.
Personalisation is at the heart of the internet’s evolution over the last two decades. From searching on Google, to shopping on Amazon, to watching films on Netflix, a key feature of the internet is that it allows for a rich feedback loop in which our preferences and behaviours shape the service that is provided to us. It means you get the most relevant information and therefore the most meaningful experience. Imagine if, instead of presenting recommendations based on things you’ve watched, Netflix simply listed the thousands upon thousands of movies and shows in its catalog alphabetically. Where would you even start?
When people think of how they experience Facebook, what they probably think of first is what they see on their news feed. This is essentially Facebook’s front page, personalised to you: the vertical display of text, images and videos that you scroll down once you open the Facebook app on your phone or log into facebook.com on your computer. The average person has thousands of posts they potentially could see at any given time, so to help people find the content they will find most meaningful or relevant, we use a process called ranking, which orders the posts in your feed and puts the things we think you will find most meaningful closest to the top. The idea is that this results in content from your best friend being placed high up in your feed, while content from an acquaintance you met several years ago will often be found much lower down.
Every piece of content that could potentially feature — including the posts you haven’t seen from your friends, the pages you follow, and the groups you have joined — goes through the ranking process. Thousands of signals are assessed for these posts, ranging from who posted it; when; whether it is a photo, video, or link; how popular it is on the platform; all the way down to things like the type of device you are using. From there, the algorithm uses these signals to predict how likely each post is to be relevant and meaningful to you: for example, how likely you might be to “like” it or find that viewing it was worth your time. The goal is to make sure you see what you find most meaningful — not to keep you glued to your smartphone for hours on end. You can think of this sort of like a spam filter in your inbox: it helps filter out content you won’t find meaningful or relevant, and prioritises content you will.
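To make the mechanics a little more concrete, here is a deliberately simplified sketch, in Python, of the kind of “predict a score, then sort” loop described above. The signal names, weights and prediction logic are invented purely for illustration; they are not Facebook’s actual signals or code.

```python
# Illustrative only: a toy "score and sort" feed ranker.
# The signals and weights below are hypothetical, not Facebook's real system.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    kind: str                                     # e.g. "photo", "video", "link"
    signals: dict = field(default_factory=dict)   # e.g. closeness, recency, popularity


def predict_meaningfulness(post: Post) -> float:
    """Combine a few hypothetical signals into a single relevance prediction."""
    closeness = post.signals.get("closeness_to_author", 0.0)   # 0..1
    recency = post.signals.get("recency", 0.0)                 # 0..1
    popularity = post.signals.get("popularity", 0.0)           # 0..1
    # Arbitrary weights for illustration; a real system learns them from feedback.
    return 0.6 * closeness + 0.2 * recency + 0.2 * popularity


def rank_feed(candidates):
    """Order candidate posts so the highest predicted score sits at the top."""
    return sorted(candidates, key=predict_meaningfulness, reverse=True)


feed = rank_feed([
    Post("best friend", "photo", {"closeness_to_author": 0.9, "recency": 0.7}),
    Post("old acquaintance", "link", {"closeness_to_author": 0.1, "popularity": 0.8}),
])
print([post.author for post in feed])  # the best friend's post ranks first
```

The real process draws on thousands of signals rather than three, but the basic shape of predicting a score for each candidate post and sorting on it is as described above.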
Before we credit ‘the algorithm’ with too much independent judgment, it is worth remembering that these systems operate according to rules put in place by people. It is Facebook’s decision makers who set the parameters of the algorithms themselves, and seek to do so in a way that is mindful of potential bias or unfairness. And it is Facebook’s decision makers who ultimately decide what content is acceptable on the platform. Facebook has detailed Community Standards, developed over many years, that prohibit harmful content — and invests heavily in developing ways of identifying it and acting on it quickly.
Of course, whether Facebook draws the line in the right place, or according to the right considerations, is a matter of legitimate public debate. And it is entirely reasonable to argue that private companies shouldn’t be making so many big decisions about what content is acceptable on their own. We agree. It would be better if these decisions were made according to frameworks agreed by democratically accountable lawmakers. But in the absence of such laws, there are decisions that need to be made in real time.
Last year, Facebook established an Oversight Board to make the final call on some of these difficult decisions. It is an independent body and its decisions are binding — they can’t be overruled by Mark Zuckerberg or anyone else at Facebook. Indeed, at the time of writing, the board has already overturned a majority of Facebook’s decisions referred to it. The board itself is made up of experts and civic leaders from around the world with a wide range of backgrounds and perspectives, and they began issuing judgments and recommendations earlier this year. The board is currently considering Facebook’s decision to indefinitely suspend former U.S. President Donald Trump in the wake of his inciting comments that contributed to the horrendous scenes at the Capitol — a decision we hope it will uphold.
Other types of problematic content are addressed more directly through the ranking process. For example, there are types of content that might not violate Facebook’s Community Standards but are still problematic because users say they don’t like them. For these, Facebook reduces their distribution, as it does for posts deemed false by one of the more than 80 independent fact checking organisations that evaluate Facebook content. In other words, how likely a post is to be relevant and meaningful to you acts as a positive in the ranking process, and indicators that the post may be problematic (but non-violating) act as a negative. The posts with the highest scores after that are placed closest to the top of your feed.
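Continuing the toy sketch above (again, hypothetical names and multipliers, not Facebook’s real code), combining a positive relevance prediction with negative, non-violating indicators might look something like this:

```python
# Illustrative only: folding positive and negative signals into one final score.
def final_score(relevance: float, is_clickbait: bool, rated_false: bool) -> float:
    score = relevance        # positive: predicted relevance/meaningfulness
    if is_clickbait:
        score *= 0.5         # negative: demote, but do not remove
    if rated_false:
        score *= 0.2         # stronger demotion for fact-checked falsehoods
    return score


posts = {
    "friend's holiday photo": final_score(relevance=0.8, is_clickbait=False, rated_false=False),
    "'miracle cure' article": final_score(relevance=0.9, is_clickbait=True, rated_false=True),
}
# The post with the highest remaining score lands closest to the top of the feed.
print(sorted(posts, key=posts.get, reverse=True))
```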
This sifting and ranking process results in a Facebook News Feed that is unique to you, like a fingerprint. But of course, you don’t see the algorithm at work, and you have limited insight into why and how the content that appears was selected and what, if anything, you could do to alter it. And it is in this gap in understanding that assumptions, half-truths and misrepresentations about how Facebook works can take root.
Where does Facebook’s incentive lie?
Central to many of the charges by Facebook’s critics is the idea that its algorithmic systems actively encourage the sharing of sensational content and are designed to keep people scrolling endlessly. Of course, on a platform built around people sharing things they are interested in or moved by, content that provokes strong emotions is invariably going to be shared. At one level, the fact that people respond to sensational content isn’t new. As generations of newspaper subeditors can attest, emotive language and arresting imagery grab people’s attention and engage them. It’s human nature. But Facebook’s systems are not designed to reward provocative content. In fact, key parts of those systems are designed to do just the opposite.
Facebook reduces the distribution of many types of content — meaning that content appears lower in users’ news feeds — because they are sensational, misleading, gratuitously solicit engagement, or are found to be false by our independent fact checking partners. For example, Facebook demotes clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting ‘miracle cures’), and engagement bait (posts that explicitly seek to get users to engage with them).
Facebook’s approach goes beyond addressing sensational and misleading content post-by-post. When Facebook pages and groups repeatedly post some of these types of content, like clickbait or misinformation, Facebook reduces the distribution of all the posts from those pages and groups. And where websites generate a disproportionate amount of their traffic from Facebook relative to the rest of the internet, which is often indicative of a pattern of posting more sensational or spammy content, Facebook likewise demotes all the posts from the pages run by those websites.
Facebook has also adjusted other aspects of its approach to ranking, including fundamental ones, in ways likely to devalue sensational content. Since the early days of the platform, the company has relied on explicit engagement metrics — whether people ‘liked’, commented on, or shared a post — to determine which posts they would find most relevant. But its reliance on those raw engagement metrics has decreased as the way those signals are used has evolved and the range of other signals Facebook considers has expanded.
In 2018, Mark Zuckerberg announced that his product teams would focus not only on serving people the most relevant content but also on helping them have more meaningful social interactions — primarily by promoting content from friends, family and groups they are part of over content from pages they follow. The effect was to change ranking such that explicit engagement metrics would still play a prominent role in filtering the posts likely to be most relevant to you, but now with an extra layer of assessing which of those potentially relevant posts was also likely to be meaningful to you. In doing this, he recognised explicitly that this shift would lead to people spending less time on Facebook, because Facebook Pages, where media entities, sports teams, politicians, and celebrities among others tend to have a presence, generally post more engaging though less meaningful content than, say, your mum or dad. The prediction proved correct: the change led to a decrease of 50 million hours’ worth of time spent on Facebook per day, and prompted a loss of billions of dollars in the company’s market cap.
This shift was part of an evolution of Facebook’s approach to ranking content. The company has since diversified its approach, finding new ways to determine what content people find most meaningful, including directly asking them. For example, Facebook uses surveys to learn which posts people feel are worth their time and then prioritises posts predicted to fit that bill. Surveys are also used to better understand how meaningful different friends, pages and groups are to people, and ranking algorithms are updated based on the responses. This approach gives a more complete picture of the types of posts users find most meaningful, assessing their experience beyond the in-the-moment reaction, including the immediate pull of any sensational content.
Facebook is also in the relatively early stages of exploring whether and how to rank some important categories of content differently — like news, politics, or health — in order to make it easier to find posts that are both relevant and informative. And last month it was announced that Facebook is considering new steps to reduce the amount of political content — where sensationalism is no stranger — in users’ news feeds in response to strong feedback from users that they want to see less of it overall. This follows from Facebook’s recent decision to stop recommending civic and political groups to users in the US, which is now being expanded globally.
This evolution also applies to the groups people join on Facebook around shared interests or experiences. Facebook has taken significant steps to make these spaces safer, including restricting or removing members or groups that violate its Community Standards. Facebook also recognises there are times when it is in the wider interest of society for authoritative information about topical issues to be prioritised in your News Feed. But just as messages from doctors telling us to eat our vegetables or dentists reminding us to floss will never be as engaging as celebrity gossip or political punditry, Facebook understands that it needs to supplement the ranking process to help more people find authoritative information. Last year, it did just that, helping people find accurate, up-to-date information around both coronavirus and the U.S. elections. In both cases, Facebook created information hubs with links and resources from official sources and promoted these at the top of people’s news feeds. Both had huge reach — the Covid-19 Information Center directed more than 2 billion people to credible sources of information, and the Voting Information Center helped more than 4.5 million Americans register to vote.
The reality is, it’s not in Facebook’s interest — financially or reputationally — to continually turn up the temperature and push users towards ever more extreme content. Bear in mind, the vast majority of Facebook’s revenue is from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content — a point that many made explicitly last summer during a high-profile boycott by a number of household-name brands. Even though troubling content is a small proportion of the total (hate speech is viewed seven or eight times for every 10,000 views of content on Facebook), the protest showed that Facebook’s financial self-interest is to reduce it, and certainly not to encourage it or optimise for it.
Polarisation
But even if you agree that Facebook’s incentives do not support the deliberate promotion of extreme content, there is nonetheless a widespread perception that political and social polarisation, especially in the United States, has grown because of the influence of social media. This has been the subject of swathes of serious academic research in recent years — the results of which are in truth mixed, with many studies suggesting that social media is not the primary driver of polarisation after all, and that evidence of the filter bubble effect is thin at best.
Research from Stanford last year looked in depth at trends in nine countries over 40 years, and found that in some countries polarisation was on the rise before Facebook even existed, and in others it has been decreasing while internet and Facebook use increased. Other credible recent studies have found that polarisation in the United States has increased the most among the demographic groups least likely to use the internet and social media, and that levels of ideological polarisation are similar whether you get your news from social media or elsewhere. A Harvard study ahead of the 2020 US election found that election-related disinformation was primarily driven by political elites and mass media, not least cable news, and suggested that social media played only a secondary role. And research from both Pew in 2019 and the Reuters Institute in 2017 showed that you’re likely to encounter a more diverse set of opinions and ideas using social media than if you only engage with other types of media.
An earlier Stanford study showed that deactivating Facebook for four weeks before the 2018 US elections reduced polarisation on political issues but also led to a reduction in users’ news knowledge and attention to politics. However, it did not significantly lessen so-called ‘affective polarisation’, which is a measure of a user’s negative feelings about the opposite party. What evidence there is simply does not support the idea that social media, or the filter bubbles it supposedly creates, are the unambiguous driver of polarisation that many assert. One thing we do know is that political content is only a small fraction of the content people consume on Facebook — our own analysis suggests that in the U.S. it is as little as 6%. Last year, Halloween had twice the increase in posting we saw on Election Day — and that’s despite the fact that Facebook prompted people at the top of their news feed to post about voting.
How to train your Facebook algorithm
Unlike the relationship between the couple cooking dinner — one shopping for ingredients, the other cooking, where both sides have a meaningful understanding of what they are putting in and getting out — the relationship between a user and an algorithm isn’t as transparent. That needs to change. People should be able to better understand how the ranking algorithms work and why they make particular decisions, and they should have more control over the content that is shown to them. You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm in the cold light of day, through breathing spaces built into the design of the platform.
To better put this into practice, Facebook has launched a suite of product changes to help our users more easily identify and engage with the friends and pages they care most about. And we’re placing a new emphasis not just on creating such tools, but on ensuring that they’re easy to find and to use.
A new product we’re calling ‘Favorites’, which improves on the previous ‘See First’ control, allows people to see the top friends and pages that Facebook predicts are the most meaningful to them — and, importantly, people can adopt those suggestions or simply add other friends and pages if they want. Posts from people or pages that users manually select will then be boosted in their news feed and marked with a star. Such posts will also populate the new Favorites feed, an alternative to the standard news feed. For some time, it has been possible for users to view their Facebook News Feed chronologically, so that the most recent posts appear highest up. This turns off algorithmic ranking, something that should be of comfort to those who mistrust Facebook’s algorithms playing a role in what they see. But this feature hasn’t always been easy to find. So Facebook is introducing a new ‘Feed Filter Bar’ to make toggling between this most recent feed, the standard news feed and the new Favorites feed easier.
Similarly, for some time Facebook has given you the ability to see why a particular ad has appeared in your news feed through the ‘Why Am I Seeing This?’ tool, which you can see by clicking on the three dots in the top right corner of an ad. This was extended to most posts in your news feed in 2019, and will soon be available for some suggested posts too, so you can understand why those cookery videos or movie news articles keep appearing as you scroll.
These are just some of the measures Facebook is rolling out this year. Others include providing more transparency about how the distribution of problematic content is reduced; making it easier to understand what content is popular in news feed; launching more surveys to better understand how people feel about the interactions they have on Facebook and transparently adjusting our ranking algorithms based on what we learn; publishing more of the signals and predictions that guide the news feed ranking process; and connecting users with authoritative information in more areas where there is a clear societal benefit, like our climate science and racial justice hubs.
These measures mark a significant shift in the company’s thinking about how it gives users greater understanding of, and control over, how its algorithms rank content, and how Facebook can at the same time utilise content ranking and distribution to ensure the platform has a positive impact on society as a whole. We expect to announce more changes over the course of the year.
Making peace with the machines
Putting more choices into the hands of users is not a panacea for all the problems that can occur on an open social platform like Facebook. Many people can and do choose sensational and polarising content over alternatives.
Social media lets people discuss, share, and criticise freely and at scale, without the boundaries or mediation previously imposed by the gatekeepers of the traditional media industry. For hundreds of millions of people, it is the first time that they have been able to speak freely and be heard in this way, with no barrier to entry apart from an internet connection. People don’t just have a video camera in their pocket — with social media, they also have the means to distribute what they see.
This is a dramatic and historic democratisation of speech. And like any democratising force, it challenges existing power structures. Political and cultural elites are confronting a raucous online conversation that they can’t control, and many are understandably anxious about it.
Wherever possible, I believe that people should be able to choose for themselves, and that people can generally be trusted to know what is best for them. But I am also acutely conscious that we need collectively-agreed ground rules, both on social media platforms and in society at large, to reduce the likelihood that the choices exercised freely by individuals will lead to collective harms. Politics is in large part a conversation about how we define those ground rules in a way that enjoys the widest possible legitimacy, and the challenge that social media now faces is, for better or worse, inherently political.
Should a private company be intervening to shape the ideas that flow across its systems, above and beyond the prevention of serious harms like incitement to violence and harassment? If so, who should make that decision? Should it be determined by an independent group of experts? Should governments set out what kinds of conversation citizens are allowed to participate in? Is there a way in which a deeply polarised society like the U.S. could ever agree on what a healthy national conversation looks like? How do we account for the fact that the internet is borderless and speech rules will need to accommodate a multiplicity of cultural perspectives?
These are profound questions — and ones that shouldn’t be left to technology companies to answer on their own. Promoting individual agency is the easy bit. Identifying content which is harmful and keeping it off the internet is challenging, but doable. But agreeing on what constitutes the collective good is very hard indeed. A case in point is the decision Facebook took to suspend former President Trump from the platform. Many welcomed the decision — indeed, many argued strongly that it was about time that Facebook and others took such decisive action. It is a decision that I absolutely believe was right. But it was also perhaps the most dramatic example of the power of technology in public discourse, and it has provoked legitimate questions about the balance of responsibility between private companies and public and political authorities.
Whether governments now choose to tighten the terms of online debate or private companies choose to do so themselves, we should remain wary of the conclusion that the answer to these dilemmas is always less speech. While we shouldn’t assume that perfect freedom leads to perfect outcomes, nor should we assume that extending freedom of speech will lead to a degradation of society. Implicit in the arguments made by many of social media’s critics is an assumption that people can’t be trusted with an extensive right to free speech; or that this freedom is an illusion and that their minds are really being controlled by the algorithm and the sinister intentions of its Big Tech masters.
Perhaps it is time to acknowledge that it is not simply the fault of faceless machines. Consider, for example, the presence of bad and polarising content on private messaging apps — iMessage, Signal, Telegram, WhatsApp — used by billions of people around the world. None of those apps deploys content-ranking algorithms. It’s just humans talking to humans without any machine getting in the way. In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror, and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.
The truth is machines have not taken over, but they are here to stay. We need to make our peace with them. A better understanding of the relationship between the user and the Facebook algorithm is in everyone’s interest. People need to have confidence in the systems that are so integral to modern life. The internet needs new rules of the road that can command broad public consent. And tech companies need to know the parameters within which society is comfortable for them to operate, so that they have permission to continue to innovate. That starts with openness and transparency, and with giving you more control.
This essay was written by Facebook’s vice president of global affairs, Nick Clegg.
Private Media, the publisher of SmartCompany, is negotiating to join Facebook’s new licensing agreement for news media publishers.