In the spring of 2018, Facebook CEO Mark Zuckerberg was called to testify before Congress, largely in response to news that British consulting firm Cambridge Analytica had harvested the data of as many as 87 million Facebook users without their knowledge or consent and used it in attempts to influence elections, as well as Facebook's admission that its platform had been used by Russian operatives to spread fake news and propaganda. During his testimony, Zuckerberg laid out his views on both the need for, and the inevitability of, regulation of technology companies and their products and services.
Zuckerberg has since published a call for governments to regulate the Internet by limiting harmful content, addressing long-standing privacy concerns, securing the integrity of elections, and ensuring data portability. However, as of this writing, there has been little to no substantive action by the U.S. federal government to address these and other IT-related concerns.
"On 15 March 2019, people looked on in horror as, for 17 minutes, a terrorist attack against two mosques in Christchurch, New Zealand, was live streamed. 51 people were killed and 50 injured and the live stream was viewed some 4,000 times before being removed.
"Two months later to the day, on 15 May 2019, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron brought together Heads of State and Government and leaders from the tech sector to adopt the Christchurch Call. The Christchurch Call is a commitment by Governments and tech companies to eliminate terrorist and violent extremist content online. It rests on the conviction that a free, open and secure internet offers extraordinary benefits to society. Respect for freedom of expression is fundamental. However, no one has the right to create and share terrorist and violent extremist content online."
For more information: https://www.christchurchcall.com/
Thank you for reading and for your comments. One of the challenges technology companies face is quickly scanning for and identifying objectionable content at scale. Computer vision and machine learning/deep learning algorithms are likely to help in this process, but humans will still be required to write those algorithms, as well as to determine the parameters of what may or may not be considered objectionable content.
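To make the point about human-set parameters concrete, here is a deliberately simple sketch of automated text screening. Real platforms use computer-vision and deep-learning classifiers far beyond this; the blocklist, scores, and threshold below are entirely illustrative, standing in for the policy decisions that human reviewers would have to make.

```python
# Toy content-screening sketch. The terms, scores, and threshold are
# hypothetical policy parameters that humans would need to define;
# production systems would use trained classifiers instead.
BLOCKLIST = {"violence": 0.9, "extremist": 0.8, "spam": 0.3}
FLAG_THRESHOLD = 0.75  # minimum score at which content is flagged for review

def screen_text(text: str) -> tuple[bool, float]:
    """Return (flagged, score) for a piece of text.

    The score is the highest risk weight of any listed term found;
    content is flagged when that score meets the threshold.
    """
    words = text.lower().split()
    score = max((BLOCKLIST.get(w, 0.0) for w in words), default=0.0)
    return score >= FLAG_THRESHOLD, score

flagged, score = screen_text("report extremist content here")
```

Even in this toy form, the design choice is visible: the algorithm only applies the rules; deciding which terms belong in the blocklist and where the threshold sits remains a human judgment.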