Earlier this month, the United Kingdom's Department for Digital, Culture, Media and Sport (more commonly known by its acronym, DCMS) released a new 'online harms' white paper proposing regulation of social media, search, messaging, and file-sharing platforms. Among other things, the proposal calls for an independent regulator to police harmful content online, with powers to impose fines and hold individual executives to account if problematic material is not removed within a specified time period.
Vague and inconsistent international definitions of terrorism and extremism have contributed to the difficulty technology companies face in moderating content deemed extremist. As a result, multiple stakeholders must balance the power to police online content in order to protect citizens against the tolerance inherent in liberal societies, which protects citizens' freedom of expression. After all, decisions about policing extremism carry important implications for privacy, free speech, and the ways in which governments and technology companies define a term such as 'extremism' and navigate the boundaries of their power.
Read the full article in Forbes.