The European Union (EU) has implemented a sweeping censorship regime that has put major tech companies under unprecedented scrutiny. The new rules, known as the Digital Services Act (DSA), impose obligations on internet giants such as Meta’s Facebook and Instagram, Apple’s App Store, and certain Google services. These obligations include preventing the spread of harmful content, banning or limiting user-targeting practices, and sharing internal data with regulators and researchers.
The EU is considered a global leader in tech regulation, and the success of these laws will likely influence the introduction of similar rules worldwide. However, critics argue that these regulations amount to censorship under the guise of content moderation and tech regulation.
Under the Digital Services Act, platforms will be required to be more transparent and accountable for their role in disseminating illegal and harmful content. They must implement measures to address risks linked to the spread of illegal content and negative effects on freedom of expression and information. Platforms also need to have clear terms and conditions, enforce them diligently, and allow users to flag illegal content for prompt action. Additionally, they must analyze specific risks and implement mitigation measures to combat disinformation and inauthentic use of their platforms.
In response to these new regulations, European Commissioner Thierry Breton commended Twitter for its stated commitment to comply with the Digital Services Act. Breton is working to ensure that major tech companies adhere to the new rules, which aim to crack down on hate speech, disinformation, and other harmful and illegal content. The obligations take effect on August 25, 2023 for the largest platforms, those the EU designates as very large online platforms.
Meanwhile, YouTube has expanded its medical misinformation policy beyond Covid to cover a broader range of medical topics. The platform now plans to remove videos promoting cancer treatments that are proven to be harmful or ineffective, effectively prohibiting content creators from advocating unproven natural cures in place of established treatment.
YouTube pledges to implement its medical misinformation policies in cases where public health risks are high, misinformation is prevalent, and official guidance from health authorities is accessible. By streamlining its guidelines, the platform aims to remove content that contradicts treatment recommendations for specific health conditions as advised by local health authorities or the World Health Organization (WHO).
However, concerns arise regarding the censorship of content discussing medications such as ivermectin, which was widely criticized in mainstream media during the pandemic. The US FDA has not approved ivermectin as a COVID-19 treatment, though it has since acknowledged that physicians may legally prescribe it off-label, a nuance critics say was lost in the platforms' blanket removals.
These developments highlight what critics see as the growing influence of a multinational technocracy over national governments within the EU. On this view, nation-states can no longer be considered free, having been absorbed into what these critics describe as an integrated slave colony controlled by powerful organizations such as the World Economic Forum, an arrangement they regard as an infringement on the sovereignty of formerly independent nations.
Overall, the EU’s implementation of the Digital Services Act and YouTube’s expanded medical misinformation policy reflect the increasing regulation and control exerted by tech giants and multinational organizations. While the aim may be to combat harmful content and misinformation, concerns persist regarding the suppression of free speech and the influence of external entities on national governments.