By Matt Hatfield

A First Look at Canada’s Harmful Content proposal

What’s proposed in the government’s harmful content consultation, and where it falls down.

Our government just unveiled a cynical plan to make Canada’s Internet one of the most censored and surveilled in the democratic world.

It’s a classic bait and switch: using our real frustration with toxic online behaviour as the lever, their harmful content consultation would force online platforms like Facebook, YouTube and Twitter to engage in unprecedented censorship and surveillance of lawful speech on their platforms. It would transform them into reporting agents for CSIS and the RCMP, and it puts the actual needs of victims of hate speech and harmful content online dead last.

In this blog we’ll break down what’s being proposed, and where it goes (very) wrong. None of what we say here is to imply that illegal content and toxic online behaviour are not real problems, or that the government could not play some constructive role in addressing them. That’s why, in a follow-up companion blog, we’ll explore more positive directions the government could take on these issues.

What’s in the harmful content proposal?

On July 29th, the Department of Canadian Heritage unveiled their consultation and technical paper detailing their plans to address ‘harmful content’ online. It seems likely the public consultation we’re being offered is a concession to the public outcry that followed Heritage’s previous effort to rush Bill C-10 through without one. Despite being billed as a consultation, the paper presents few competing options or open questions. Instead, we get a detailed description of exactly what the government intends to do.

The described legislation would:

  • Require online platforms to identify, remove, and report to law enforcement content that may belong to five already-illegal categories of speech:
    • Sexualized content involving children;
    • Non-consensual sharing of sexual images;
    • Hate speech;
    • Incitement to violence, against persons or property;
    • Incitement to terrorism;
  • In removing these types of content, platforms must:
    • Use artificial intelligence to proactively search for and remove content that may meet the above criteria;
    • Review and respond to requests to remove content from their service for these reasons within 24 hours of it being reported;
    • Report flagged content they remove directly to the RCMP, CSIS, or both.
  • Create a new ‘Digital Safety Commissioner’ to oversee this system, with powers including:
    • Order-making power to require that online platforms do or not do anything the Commissioner judges necessary to ensure compliance with the Act;
    • Extraordinarily broad rights to enter and search platform offices and compel data from them at any time at its discretion;
    • Broad right to conduct its hearings on content takedowns in secret, justified by the need to protect privacy, commercial secrets, national security, international relations, or national defence;
    • Power to impose stiff penalties on platforms for non-compliance, including:
      • Financial penalties of up to 3% of global revenue or $10 million, whichever is greater;
      • Ordering Canadian ISPs to block a service or website from being accessible in Canada.
  • Create a ‘Digital Recourse Council’ in charge of hearing user appeals of platform content moderation decisions. It would:
    • Issue binding decisions requiring platforms to take down content;
    • Issue non-binding decisions recommending that platforms reinstate content;
    • Be composed of 3-5 people ‘reflective’ of the diversity of Canada’s population, but with no requirement for legal or constitutional expertise.

Where does it all go wrong?

In the lead-up to this proposal, the government has insisted that all they want to do is apply our existing norms and limits on freedom of expression offline in Canada to the Internet. So where’s the problem?

Either our government does not understand the consequences of their own proposal, or they’re lying to us. Far from simply duplicating our norms around offline speech, this proposal would create an unprecedented system of online surveillance of ordinary people in Canada, and normalize the removal of a great deal of entirely lawful online speech. It won’t make online spaces safer or more pleasant, and it is likely to hurt people with marginalized identities the most.

1. Harsh penalties with short takedown windows will incentivize mass removal of lawful content: A core problem is that the system imposes stiff penalties on online platforms for mistakenly leaving illegal content up beyond 24 hours, and no penalty whatsoever for pulling down legal speech. If you’re running a platform, that makes your decision easy: remove any speech from users that could conceivably be interpreted as violating Canadian law. You’ll tell your moderation team to move quickly and pull anything that gives them a second’s pause, not to make fair and balanced assessments.

The proactive takedown algorithms that platforms are required to adopt under this legislation will be a key driver of the flagging and removal of lawful speech. While many people imagine algorithms to be smart, when it comes to understanding human context and meaning they’re really quite dumb; they’re simply able to sort through large quantities of information quickly.

The more our government leans on platforms to remove content quickly through this legislation, the more platforms will have to rely on algorithms that will flag for removal satire and humour, documentation of human rights abuses and attacks, sex education and voluntary sexual expression, conversation within marginalized communities about their experiences, and more, not just the hateful or violent content the law is meant to target. Even if a human reviewer must approve the algorithm’s suggestion, the legal incentives and the limited time they have to make a decision will encourage removing all but the most obviously innocuous flagged content.

2. The system is prone to being abused to attack marginalized communities and expression: What about content that’s flagged through a user report? If a human initiates a report using their understanding of context and nuance, does that make it safe to rely on user reports for rapid content removal?

Emphatically, no. Any user report system can and will be abused by organized communities of malicious users who troll the Internet looking for opportunities to “brigade” other users who don’t share their identity or politics. These groups abuse reporting systems by mass-reporting content from those they don’t like as offensive, or by trawling through a target’s past social media history to find anything that might cross some part of a platform’s community rules. Anyone can be suddenly targeted by these attacks, but history shows that Internet users with marginalized identities, particularly those from disadvantaged social, ethnic, or gender identity backgrounds, tend to disproportionately attract the attention of such communities, sometimes enduring months or years of sustained attack.

None of this is to say that user report systems aren’t worth having; they’re a crucial feature of the modern Internet. The takeaway is that responsible online platforms need time, judgment, and flexibility in deciding how to respond to barrages of user reports. Yet the exact opposite is what our government’s proposal would force on them.

A legal requirement that forces platforms to respond quickly, and that strongly favours removing posts and locking accounts for any content that could be argued to be illegal, hands a powerful tool to the worst corners of the Internet to attack those they don’t like. It does not make our Internet safer.

3. The system will become an unprecedented law enforcement surveillance system covering much lawful speech in Canada. According to the consultation paper, platforms will be required to report flagged content they remove directly to law enforcement, including the RCMP and CSIS. Depending on the version of the legislation adopted, this may occur either only when the platform believes there is a risk of imminent harm, or for any content removed as potentially constituting any of the five harmful content types described above.

In either case, users will not be informed that they’ve been reported, and the consultation offers no guidance whatsoever as to what law enforcement can do with that information about lawful speech once it is reported, just a blithe promise that the government will review the system for compliance with privacy legislation.

The result? In the worst case, a perpetually growing secret file, held by the RCMP and even CSIS, on everyone in Canada whose online speech has ever touched on sensitive social issues.

4. The system grants the Digital Safety Commissioner unprecedented powers with minimal restrictions on their use. The government’s approach of giving broad powers to the Commissioner and letting them set their own limits is simply not how a regulator in a democracy should work.

Online platforms are now the primary spaces where most people in Canada find information and express themselves, and any regulation or regulator whose scope touches our expression in those spaces must be carefully and narrowly designed so as not to impinge on our rights. Giving an untested regulator like the proposed Digital Safety Commissioner breathtaking powers to instruct online platforms to do or stop doing anything the Commissioner sees fit under the Act is simply too broad a mandate.

Similarly, giving the regulator broad power to order the blocking of websites that host both legal and illegal speech is a disproportionate and dangerous remedy, one previously common only in authoritarian states like Russia and China, not in democracies.

Any regulator with power over our online speech must have tightly defined powers of limited scope, with the opportunity for careful and cautious amendment later, following academic study and democratic debate.

OpenMedia is calling on our government to take this overreaching proposal back to the drawing board. To add your voice, take action here.

Image credit: "Facebook Application Icon" by Pixabay, licensed under CC0

