Facebook admitted that using emerging artificial intelligence technologies to spot dangerous material was not yet foolproof. Composite: Alamy

Facebook admits industry could do more to combat online extremism


Admission comes as British PM and French president propose fining firms that move too slowly to remove extremist content

Facebook has conceded that technology companies could do more to counter online extremism after Theresa May and the French president, Emmanuel Macron, proposed fining firms that move too slowly to remove extremist content being shared by terrorist groups.

At a meeting of political leaders and executives from Facebook, Google and Microsoft at the United Nations general assembly in New York, the social media giant said it now employs thousands of content reviewers around the globe and a staff of 150 people dedicated to countering terrorism on its platform, in an attempt to remove more extremist content.

But, along with Google, it warned that using emerging artificial intelligence technologies to spot dangerous material was not yet foolproof.

Facebook sources said the company accepted the industry could do more and said it was committed to building more technology to help address these issues. But it said it was already accelerating its efforts, in particular by using artificial intelligence to flag up extremist content and sharing this data with rival firms.

“AI has begun to help us identify terrorist imagery at the time of upload so we can stop the upload, understand text-based signals for terrorist support, remove terrorist clusters and related content, and detect new accounts created by repeat offenders,” Monika Bickert, Facebook’s director of global policy management, told the summit.

However, the company said human oversight was still necessary because the technology still had limitations. “AI can spot a terrorist’s insignia or flag, but has a hard time interpreting a poster’s intent,” Bickert added.

Google also owns video-sharing site YouTube while Facebook owns the WhatsApp encrypted messaging service and Instagram.

The British prime minister has urged the tech leaders to “go further and faster in automating the detection and removal of terrorist content online” and set a two-hour goal for its deletion after the first posting.

Her pressure on the tech giants comes in concert with Macron and Italy’s prime minister, Paolo Gentiloni; all three are concerned that Islamic State is too easily able to post videos and online magazines that inspire lethal terror attacks.

Google also struck a note of caution. Kent Walker, Google’s general counsel, said the “larger problem is you can’t necessarily catch everything on the entirety of the internet... The challenge is, while machine learning is a powerful tool, it’s still relatively early in its evolution.”

Facebook declined to comment on whether it could meet the two-hour goal, but Robert Blackie, a London-based digital strategy consultant, said two hours should be possible.

“If you report pornography for example, it gets removed very quickly,” he said. “The biggest challenge is that the government hasn’t defined what extremist content is.”

Jim Killock, executive director of the Open Rights Group, which campaigns to protect privacy and free speech online, also warned of the limitations of relying on automation to take down terrorist material.

“Mistakes will inevitably be made – by removing the wrong content and by missing extremist material,” Killock said. “Automated takedowns will have a wide-reaching effect on the content we see, although not necessarily on the spread of extremist ideas as terrorists will switch to using other platforms.”

One of these is Telegram, an encrypted messaging service, which is known to be used by Islamic State supporters. The company has previously taken action to block dozens of suspected Isis accounts.

“It is crazy to have this conversation without placing Telegram front and centre of the conversation because, if you strip away the rhetoric, the reality is this: Islamic State supporters don’t use Twitter or YouTube like they used to – Telegram is their new centre of gravity,” said Charlie Winter, senior research fellow at the International Centre for the Study of Radicalisation at King’s College London.

He said Islamic State’s presence on big internet service providers (ISPs) is now “vanishingly small” by comparison.

There was backing for May’s call for fines from Lord Carlile of Berriew, the former independent reviewer of UK terrorism legislation, who said: “The Prime Minister is right. The companies, working with the government, should be doing far more. Fines should be a last resort, but could be an option.”

