Facebook and Google have a moral duty to stop online abuse

 It’s the stuff of nightmares: your intimate images are leaked and posted online by somebody you thought you could trust. But in Australia, victims often have no real legal remedy for this kind of abuse.

 

This is the key problem of regulating the internet. Often, speech we might consider abusive or offensive isn’t actually illegal. And even when the law technically prohibits something, enforcing it directly against offenders can be difficult. It is a slow and expensive process, and where the offender or the content is overseas, there is virtually nothing victims can do.

 

So should the onus be placed on the service providers, such as Facebook and Twitter?

 

Lately, it seems online gender-based abuse has been happening faster than we can keep track of it. Critics of misogyny in video game culture have been subject to serious abuse and physical threats.

 

There have been major leaks of intimate photos of celebrities and ordinary individuals alike.

 

Increasingly, there are calls to criminalise so-called "revenge porn", which is often a deliberate strategy to harass, intimidate, humiliate, coerce and blackmail women.

 

Enlisting the help of online intermediaries

 

Internet intermediaries, such as Facebook and Google, are uniquely placed to cheaply respond to content shared on their networks. Because of this, they are increasingly being asked to do more to enforce our laws and social norms.

 

In Europe, under the “right to be forgotten”, Google is now required to remove search results that link to harmful or inaccurate content. European ISPs are required to block access to sites such as The Pirate Bay and sites selling counterfeit goods.

The Australian government also wants private companies to do more. The government intends to create an "e-safety commissioner" with powers to ask (but not order) organisations such as Facebook to remove bullying content targeted at Australian children.

 

A recent review proposes a new civil action for serious invasions of privacy, which would also require online intermediaries to remove content such as “revenge porn” when they learn about it. The federal government also recently announced that it expects internet service providers to do more to police copyright infringement online.

 

Legal liability is not the answer

 

Our law does not make companies legally responsible just because they can cheaply prevent harm. There are problems with making intermediaries such as Facebook legally liable for content posted by their users. It can be unfair, because the intermediary is often not at fault. It also creates real uncertainty and risk, which can discourage investment in new services and technologies, or push providers offshore.

 

And if we make these organisations liable for harmful material on their networks, they are likely to over-respond: to protect themselves, they will remove content that should be protected as free speech.

 

In creating rules about acceptable content, we need to take account of the legitimately different views and expectations of the diverse people who use these services to connect and share content.

 

Responsibility and responsiveness

 

It is right that society expects these services, as the providers of the online spaces we inhabit, to be responsive to our laws and our social standards. Hate speech, revenge porn and other abusive content can have real and devastating consequences for the victim.

 

The United Nations has endorsed the principle that private businesses have a responsibility to respect human rights, meaning they should seek to minimise adverse impacts on human rights that are directly linked to their services.

 

The companies that run the online spaces we share already make many decisions about what we can and cannot say. But the reasons for those decisions are often shrouded in secrecy, and the decisions themselves generate a lot of controversy.

 

Groups such as the Association for Progressive Communications have complained that private actors are not being responsive enough to concerns about gender-based abuse and hate speech. Facebook moderates its content based on “community standards”, but those standards seem to lead to some surprising conclusions.

 

Jokes about rape, for example, are often allowed on Facebook, but photographs of breastfeeding women have in the past been banned for violating the standards.

 

Working with online service providers

 

There are promising signs, however, that things might be getting a little better. Some activist groups, such as Women, Action and the Media (WAM!), have had success lobbying Twitter and Facebook to take more action on reports of gendered hate speech.

 

These efforts are promising, but they are still in their very early stages. Before we can get any further, we need a lot more information about how these companies are currently making decisions about censoring content.

 

Ultimately, punishing intermediaries for content posted by third parties isn’t helpful. But we do need to have a meaningful conversation about how we want our shared online spaces to feel.

 

The providers of these spaces have a moral, if not legal, obligation to facilitate this conversation.

 

Image credit: Flickr/marcopako

 

This article was originally published on The Conversation. Read the original article.

 
