After removing 1.5 million videos of the deadly Christchurch mosque terror attack in New Zealand, which left 50 people dead, Facebook has now shared digital fingerprints of more than 800 "visually-distinct" videos related to the attack via its collective industry database, along with URLs and context on its enforcement approaches.
The social media giant has been working closely with other tech companies, including Google, Twitter and Microsoft, through the Global Internet Forum to Counter Terrorism, which coordinates the industry's response to terrorists and violent extremists operating online.
We’ve shared digital fingerprints of more than 800 visually-distinct videos related to the attack via our collective database, along with URLs and context on our enforcement approaches. https://t.co/o0v6fvaFno
— Facebook Newsroom (@fbnewsroom) March 19, 2019
In a blog post, Facebook laid out its efforts to respond to the attack and to help the New Zealand police with the investigation. It said the attacker's video was removed within minutes of the police reaching out.
Facebook Shares Details of Efforts
Following are the details Facebook provided to help with the investigations:
- It said the video was viewed fewer than 200 times during the live broadcast, and that no one reported it while it was streaming live. The video was viewed about 4,000 times on Facebook before being removed.
- The first user report came 29 minutes after the video started and 12 minutes after it ended.
- Before Facebook was alerted, someone on 8chan posted a link to a copy of the video on a file-sharing site.
- It categorised the shooting as a terror attack, meaning that any praise, support or representation of the events violates Facebook's Community Standards.
- The terrorist's personal accounts were removed from both Facebook and Instagram. Facebook is still removing impostor accounts as they show up.
- Facebook hashed the video so that visually similar copies are detected and deleted automatically from both Facebook and Instagram.
- Where visual matching was difficult, it used audio-based matching as an additional detection system.
- It removed 1.5 million videos of the attack globally in the first 24 hours.
- It shared digital fingerprints of more than 800 "visually-distinct" videos related to the attack via its collective database.
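Facebook has not published the details of its media-matching system, so the following is only an illustrative sketch of the general idea behind the hash-based detection described above: reduce a video frame to a tiny grayscale grid, derive a compact bit-fingerprint (a simple "average hash"), and compare fingerprints by Hamming distance so that re-encoded near-duplicates still match. All names and values here are hypothetical.

```python
# Illustrative sketch only: Facebook's actual matching technology is not
# public. This demonstrates perceptual "average hashing" on toy data.

def average_hash(pixels):
    """pixels: 2-D list of grayscale values (e.g. a downscaled frame).
    Returns an integer fingerprint with one bit per pixel, set when the
    pixel is brighter than the frame's average brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints; small values
    mean the frames are visually similar."""
    return bin(a ^ b).count("1")

# Two near-identical 4x4 "frames": a re-uploaded copy with one pixel
# changed slightly, as re-encoding might do.
frame = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
reupload = [row[:] for row in frame]
reupload[0][0] = 14  # small compression artifact

h1, h2 = average_hash(frame), average_hash(reupload)
print(hamming(h1, h2))  # prints 0: the copy matches the original fingerprint
```

Because the fingerprint depends only on coarse brightness structure, small re-encoding changes leave the hash intact, which is what lets a platform automatically catch re-uploads without storing or re-scanning the full video.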
Global Pressure on Social Media Platforms
Facebook said it will continue to work on this and provide further updates. The move comes after social media companies were called out for letting the live stream of the terror attack circulate on their platforms.
Earlier, telecom companies in New Zealand had written to Twitter, Facebook and Google demanding an urgent solution to the problem of the video’s circulation.
"We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand government level on an enduring solution to this issue," read the letter, quoted by news portal Stuff NZ.
Facebook, however, has been the most transparent about what it has done to address the problem.
(With inputs from Stuff New Zealand)