Google develops tool to help flag child sex abuse content online

Google headquarters

Google has developed an AI tool to help flag child sex abuse content online.

The free tool uses image recognition to help human moderators spot and remove child sexual abuse material (CSAM) more quickly.

It is intended to reduce moderators' exposure to content that can be traumatic, while catching greater quantities of child sex abuse content.

The move comes as UK officials have called on Google and other Silicon Valley giants to take greater action against online child sexual abuse.

Some existing systems are able to identify child sex abuse images and videos by running them through a database of content that has already been flagged.

That approach is helpful for spotting content that has been re-posted on the internet, but it does not ease the process of finding new images or video.

Instead, human moderators have to review the content themselves.
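
In practice, that database lookup amounts to comparing a fingerprint of each upload against fingerprints of previously flagged files. The Python sketch below is a deliberately simplified illustration, not a description of any vendor's system: production matchers use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 digest here only matches byte-identical copies. The set contents and function name are hypothetical.

    import hashlib

    # Hypothetical database of fingerprints for previously flagged files.
    # Real matching systems use perceptual hashes that tolerate resizing
    # and re-encoding; a SHA-256 digest only matches exact byte copies.
    KNOWN_HASHES = {
        "placeholder_digest_1",  # stand-ins, not real data
        "placeholder_digest_2",
    }

    def is_known_content(image_bytes: bytes) -> bool:
        # Fingerprint the upload and check it against the flagged set.
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_HASHES

As the article notes, a lookup of this kind only catches material that has been seen before; a brand-new image produces a fingerprint that matches nothing, which is the gap human moderators currently fill.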

Google's technology aims to address that issue by allowing service providers, non-governmental organizations and other tech firms to "review this content at scale," the firm said.

It uses deep neural networks to sort through masses of content and prioritize certain posts for review.
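
Google has not published the classifier's internals, but the prioritization step it describes can be sketched as scoring each item with a model and queuing the highest-scoring items for human review first. In the hypothetical Python sketch below, score_fn stands in for the neural network; all names are assumptions made for illustration.

    from typing import Callable, List, Tuple

    def prioritise_for_review(
        items: List[bytes],
        score_fn: Callable[[bytes], float],
    ) -> List[Tuple[float, bytes]]:
        # Score every item with the (hypothetical) model, then sort so
        # the content judged most likely to be CSAM reaches human
        # moderators first.
        scored = [(score_fn(item), item) for item in items]
        return sorted(scored, key=lambda pair: pair[0], reverse=True)

The point of ordering rather than auto-removing is the one Google makes in its announcement: a human still reviews the material, but fewer people need to be exposed to it to take the same action.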

"Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse," Google explained

in a blog post.

"We're making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it."

Google said it has already observed the system's success: It helped a reviewer "take action on 700 percent more CSAM content over the same time period."

The firm is partnering with UK-based charity Internet Watch Foundation (IWF), which is dedicated to stamping out CSAM online.

IWF employs human moderators to identify child sex abuse content online and also investigates sites where CSAM is shared, while working with law enforcement in some cases.

"We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material," said Susie Hargreaves, chief executive at IWF, said in a statement.

"By sharing this new technology, the identification of images could be sped up, which in turn could make the internet a safer place for both survivors and users."

IWF will now test out Google's AI tool to see how it performs.

The announcement from Google came just a few hours after the UK's Home Secretary Sajid Javid doubled down on calls for tech firms to crack down on CSAM.

Javid said he had only recently understood the scale of the issue, claiming that people can livestream child sex abuse online for just £12 (Sh1,403).

He said tech companies needed to do more than just flag offensive material and threatened legislation if they didn't take more action.
