Overview of the Online Safety Act 2023
Legal note: This content was compiled from multiple public sources and is intended for general information and satire only. It does not constitute legal advice. While I have made efforts to reflect the law accurately, errors or omissions may exist. Please consult a qualified professional for legal guidance.
Jump to my thoughts (TL;DR)

The UK's Online Safety Act 2023 (OSA) is a sweeping law (Royal Assent 26 October 2023) designed to make the internet safer, especially for minors. The government explains that it "puts a range of new duties on social media companies and search services" – for example, providers must "implement systems…to reduce risks [of] illegal activity, and to take down illegal content when it does appear". Ofcom is named as the independent regulator, with broad powers (fines, blocking orders, etc.) to enforce the rules. In practice, the Act's provisions are being rolled out in phases (illegal-harms duties first, with child-safety codes coming into force by mid-2025).
Scope
The OSA applies to any "user-to-user" service and search engine. That means any website or app where users post content or interact (social networks, forums, messaging, cloud sharing, dating apps, gaming chats, etc.). Even services outside the UK must comply if they have a significant UK user base or target the UK market. Adult content sites are explicitly included (they must enforce strict age checks). One-way content services (e.g. static blogs) and email/SMS are largely exempt, as are one-to-one live voice calls.
“I wouldn't even satirise you” – Unknown
Key Duties
The Act imposes a "duty of care" on platforms, with obligations that scale to their size and risk. Key duties include:
- Illegal content: Platforms must assess and mitigate risks of illegal harms and proactively remove or block any content that is already illegal under UK law (terrorist material, child sexual abuse images, extreme violence, hate crimes, etc.). The Act's schedules list "priority offences" (CSAM, terrorism, etc.) which companies must patrol most rigorously.
- Child protection: Services likely accessed by people under 18 have extra duties. They must prevent minors from seeing harmful material (e.g. adult content, self-harm or eating disorder content, terrorist or extremist propaganda) and implement age checks or content filters accordingly. Major platforms will be expected to use "highly effective" age-verification or default minors into safe settings.
- Transparency & user rights: Companies must be transparent about moderation policies and give users (especially adults) more control over the content they see. Crucially, the Act requires that platforms must not arbitrarily remove or obscure "journalistic" or "democratically important" content. Large social media services are legally obliged to protect political speech for example, user comments supporting or opposing political parties – and preserve it unless it's illegal.
- Records and reporting: Platforms must keep written records of their risk assessments, moderation processes, and actions taken to comply, and Ofcom can demand these records at any time. Importantly, this duty is about internal documentation (risk logs, policy changes, transparency reports) – the Act does not require services to store or scan all user communications indefinitely. In fact, draft Ofcom guidance specifies that record-keeping means documenting how content is moderated, not hoarding every message. A minimal record-keeping sketch follows this list.
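To make the record-keeping duty concrete, here is a minimal sketch of the kind of internal moderation log a small service might keep. Nothing here comes from the Act or from Ofcom's guidance – the field names and the format (one JSON object per line) are my own assumptions:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    """One entry in an append-only moderation log (fields are illustrative)."""
    content_id: str   # internal ID of the item acted on
    action: str       # e.g. "removed", "age_gated", "no_action"
    reason: str       # policy clause or offence category relied on
    reviewer: str     # "automated" or a moderator ID
    timestamp: str    # ISO 8601, UTC

def log_action(path: str, record: ModerationRecord) -> None:
    # One JSON object per line keeps the log append-only and easy to audit.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_action("moderation_log.jsonl", ModerationRecord(
    content_id="post-1842",
    action="removed",
    reason="priority offence: terrorism content",
    reviewer="automated",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

The point is proportionality: a plain, timestamped log of what was done and why goes a long way towards the documentation duty without retaining user messages themselves.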
New Offences
In addition to platform duties, the Act creates new crimes for individuals. These include cyberflashing (sending unsolicited sexual images), sharing intimate images without consent (including sexually explicit deepfakes), threatening communications (e.g. threats of violence), and encouraging or assisting serious self-harm. (These apply to people, not companies.)
Impact on Games and Online Communities
Because the OSA covers any service with user-generated content or interaction, many online games are in scope. Games with voice/text chat or shared media fall under the Act. By contrast, purely offline games (no online features) are out of scope, and one-to-one live voice calls (e.g. a peer-to-peer VoIP call) are exempt. In practice, most multiplayer games with public chat (e.g. Fortnite, Roblox, FPS team chats) will have to comply.
Key implications for games:
- Content moderation: Game platforms must start actively policing illegal content in chats. For example, they are expected to detect and remove child sexual abuse material (CSAM) and extremist or terrorist content in user chats or shared images. This likely means combining automated filters (hash-matching known CSAM, AI flagging of hate speech, etc.) with human moderators; a minimal hash-matching sketch follows this list.
- Child safety measures: Since many games have large child audiences, developers must implement child-protection features. This could include robust age verification (to segregate adult chat), disabling or monitoring private messaging for minors, and using AI to spot grooming behaviour. Games may need parental control tools and easy reporting systems for young people.
- Transparency and risk assessments: Developers will need to conduct thorough risk assessments of their game environments and publish summaries. Large games (Category 1) must detail how they moderate content and may have to submit compliance reports to Ofcom. They will need clear terms of service covering prohibited content and complaints procedures.
- Technical challenges: Features like in-game livestreaming or VR social spaces also count as "user-to-user" services, so those too must be monitored. Audio chat might be exempt if truly peer-to-peer, but many games route voice through servers, which could bring them under the rules.
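To illustrate the hash-matching approach mentioned in the content moderation point above, here is a minimal sketch. Real deployments match perceptual hashes (e.g. PhotoDNA) against vetted databases from bodies such as the IWF or NCMEC, which survive re-encoding; a plain cryptographic hash, as below, only catches byte-identical files, so treat this purely as a picture of the mechanism:

```python
import hashlib

# Placeholder set. In practice this would be a vetted database of
# perceptual hashes, not SHA-256 digests invented for a blog post.
KNOWN_BAD_HASHES = {"0" * 64}

def is_known_illegal(image_bytes: bytes) -> bool:
    # Cryptographic hashing gives exact-file matching only. Perceptual
    # hashing (PhotoDNA-style) is what real CSAM filters use, because it
    # tolerates resizing and re-compression.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

def handle_upload(image_bytes: bytes) -> str:
    if is_known_illegal(image_bytes):
        return "blocked"  # block, log, and report per the service's policy
    return "accepted"

print(handle_upload(b"holiday photo"))  # accepted
```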
In short, virtually any online game with community features will face new obligations. For older games with chat, operators will have to update their moderation (or risk fines/blocking). If compliance proves too difficult or expensive (especially for small studios), some game-makers might restrict UK access – for example by geo-blocking – rather than overhaul their systems.
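For a studio that decides UK access is not worth the compliance cost, geo-blocking is technically simple. A sketch using the geoip2 library and a MaxMind country database (the database path is an assumption, and IP geolocation is only ever approximate):

```python
import geoip2.database
import geoip2.errors

# Assumes a MaxMind GeoLite2 country database has been downloaded locally.
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def should_block(client_ip: str) -> bool:
    """Return True for requests that appear to originate in the UK."""
    try:
        return reader.country(client_ip).country.iso_code == "GB"
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown location: here we choose not to block

# In a web framework this would run as middleware, returning an HTTP 451
# or a notice page whenever should_block(request_ip) is True.
```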
Free Speech, Privacy, and Encryption
The Act aims to balance safety with rights, but this is controversial. Freedom of expression and privacy are explicitly mentioned in the law: platforms must have regard to users' rights when applying safety rules. The Act even bars removing "democratically important" content (e.g. political opinions) except in specific circumstances. The government maintains that the Act protects free speech – officials have praised it for empowering adults and preserving political discourse.
Critics, however, warn the Act could have a chilling effect. Privacy groups (the EFF, Open Rights Group) argue that requiring near-constant monitoring of content will push firms to over-censor. For example, the EFF notes the law would "mandate general monitoring of all user content" and "undermine…technologies like end-to-end encryption". Wired reports that platforms could be required to scan users' private messages for CSAM – a move tech companies call "an unwarranted attack on encryption".
The Act retains a power allowing Ofcom to require messaging services to scan for child-abuse content, even though experts say this is not technically possible on end-to-end encrypted chats without breaking encryption. The government has said it will not force scanning until a technical solution exists, but the power remains in the law. In effect, encrypted services face a dilemma: weaken their encryption (backdoors or server-side scanning) to comply, or refuse (and risk UK bans or penalties). Major apps (WhatsApp, Signal, Apple, etc.) have publicly opposed any "backdoor" in encryption. The debate is ongoing, but for now end-to-end encryption remains intact.
In summary: the Act requires user rights to be respected, but also gives the government and Ofcom sweeping authority over speech. How that balance works in practice remains to be seen. On one hand, illegal speech (threats, harassment, sexual exploitation) is clearly targeted. On the other hand, although the "legal but harmful" duties for adults were dropped from the final Act in favour of user-empowerment tools, similar categories (extremist propaganda, disinformation, self-harm content) still carry duties where children are concerned and could lead to grey-area moderation. Platforms are under pressure to err on the side of caution, which may limit some controversial speech.
Implications for Websites and Businesses
New and small websites:
Any online service that allows user content and has UK visitors could fall under the OSA. This includes forums, small social apps, file-sharing sites, and similar services; even a blog may be caught, although one whose only user content is comments on the provider's own posts may qualify for the Act's narrow "limited functionality" exemption. Ofcom estimates tens of thousands of services will be in scope. The law scales duties to the size and risk of the platform: a tiny niche forum will have lighter obligations than a global platform like Facebook. But few services with genuine user-to-user interaction are exempt outright.
Key points for operators:
- Compliance burden: Sites must conduct risk assessments, update terms of service, implement content moderation or filtering systems, and set up reporting/complaints processes. They must protect people under 18 (age gates for adult content, blocking self-harm content for minors, etc.) and remove any flagged illegal material; a minimal age-gate sketch follows this list. This can be technologically and legally complex, especially for startups.
- Fines and sanctions: Non-compliance carries severe penalties. Ofcom can fine up to £18 million or 10% of global turnover (whichever is higher). It can also order ISPs or app stores to block access to the service from the UK. Even small businesses could face a ban if they repeatedly break the rules.
- Global companies: Foreign platforms with UK users must obey. For example, Wikimedia (Wikipedia) warned that its privacy-by-design model cannot support mandatory age checks, and said it may not comply. Other services (notably some adult sites) have already preemptively blocked UK users to avoid the law. Media reports note that BitChute (a video site) has stopped allowing UK viewers, apparently due to OSA concerns. This "geoblocking" trend may grow: if compliance is too difficult or costly, companies might simply exclude the UK, reducing service choices for British users.
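On the age-gate point from the list above, here is a deliberately minimal sketch. Self-declared date of birth is the weakest possible gate – Ofcom expects "highly effective" age assurance (third-party ID checks, facial age estimation, etc.) for pornography – so this only shows where a gate sits in the request flow:

```python
from datetime import date

ADULT_AGE = 18

def is_adult(dob: date, today: date | None = None) -> bool:
    # NOTE: self-declaration is NOT "highly effective" age assurance;
    # services hosting adult content will need stronger methods.
    today = today or date.today()
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return years >= ADULT_AGE

def serve(content_rating: str, dob: date) -> str:
    if content_rating == "adult" and not is_adult(dob):
        return "blocked: age-restricted"
    return "content served"

print(serve("adult", date(2010, 6, 1)))  # blocked: age-restricted
```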
Overall, many in the tech and business community fear the OSA will raise the cost of launching and operating online services in the UK. The Act's broad scope and heavy compliance demands may deter new entrants. A UK equivalent of "starting Facebook from scratch" would today require expensive AI moderation, legal compliance teams, and constant liaison with regulators. This could slow growth of digital startups or push them to offshore solutions.
Enforcement and Ofcom's Role
Ofcom is the regulator in charge of enforcing the OSA. It must issue detailed Codes of Practice (after public consultation) that explain exactly how platforms should comply. So far Ofcom has published draft rules on illegal harms and child safety, and set deadlines (e.g. illegal-harm risk assessments due by 16 March 2025). It also maintains a register of high-risk "Category" services (the largest platforms) and will audit them closely.
Enforcement powers include: issuing fines (up to £18m/10% revenue), requiring public transparency reports, and ordering access restrictions. Ofcom can direct internet providers or app stores to block a non-compliant platform. Importantly, Ofcom also has the power to compel social media to preserve content of democratic importance (e.g. not remove political posts).
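Because the fine cap is quoted in two places above, the "whichever is higher" arithmetic is worth spelling out. A sketch (the Act's actual base, "qualifying worldwide revenue", has a statutory definition this glosses over):

```python
def max_penalty_gbp(worldwide_revenue_gbp: float) -> float:
    """Greater of £18m or 10% of (qualifying) worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

print(max_penalty_gbp(50_000_000))     # 18000000.0  -> the £18m floor applies
print(max_penalty_gbp(1_000_000_000))  # 100000000.0 -> 10% of £1bn exceeds £18m
```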
Impartiality of Ofcom:
The Act formally designates Ofcom as an independent regulator, and the government has publicly stated that it will not meddle in day-to-day enforcement decisions. However, the Secretary of State retains some unusual powers over Ofcom's codes: the Act allows the government to direct Ofcom to modify draft rules for reasons of public policy or security. Civil liberties groups have warned this could undermine Ofcom's independence and give ministers emergency-style control over speech regulation. In response, officials argue that such directions will be used sparingly and that Ofcom remains operationally independent. In practice, it is too early to know how this tug-of-war will play out. For now, Ofcom appears to be proceeding with industry consultations in a fairly open manner.
Broader Implications and Advice
The OSA's impact will be far-reaching. Beyond games and major platforms, ordinary users and content creators may notice changes:
- Increased filtering and moderation: Expect more aggressive content filtering on UK-accessible platforms – for example, tighter chat moderation in online games, stricter comment screening on social sites, and more automated age gates.
- Service availability: Some smaller or controversial sites might become unavailable in the UK. We've already seen geo-blocking by BitChute and pushback from Wikipedia. Users may find some services available only outside the UK if those operators opt out of compliance.
- Privacy trade-offs: To implement these duties, providers may collect more user data (for age checks, content scanning, record-keeping). This raises data security concerns. Platforms are supposed to handle data according to privacy law, but more retention of moderation logs could increase breach risk in theory. (Notably, the Act's record-keeping requirements focus on moderation logs and risk data, not storing all user messages.)
- Chilling of certain speech: Some content might be preemptively censored to avoid issues. Although hate speech and criminal threats were already illegal, the treatment of lawful-but-harmful material (e.g. conspiracy theories, some disinformation) will depend on how Ofcom's codes define harm, particularly for child audiences. Businesses and users should watch how those rules are applied.
Advice:
If you run a website, blog, game, or app that has UK users, now is the time to prepare. Review whether you are "in scope" (do users generate content or interact with each other? – see the rough self-check sketch after this list). If so, plan how you will:
- Conduct risk assessments (hire legal/tech experts, map out potential harms).
- Implement moderation and reporting processes (automated filters, human review, clear T&Cs).
- Put in place any age-verification or content-exclusion measures (especially for adult content).
- Draft updated privacy and terms-of-service policies reflecting your safety obligations.
- Monitor Ofcom's upcoming codes (especially the Illegal Content Code and Child Safety Code) and be ready to adapt when they are finalised.
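As a rough first pass on the "in scope" question, the core test reduces to a few booleans. The real analysis (and the exemptions for email, SMS, one-to-one live calls, and limited-functionality comment sections) is far more nuanced, so treat this as a prompt for proper advice, not a substitute for it:

```python
def likely_in_scope(allows_user_content: bool,
                    allows_user_interaction: bool,
                    has_uk_links: bool) -> bool:
    """Crude screen: user-to-user functionality plus UK links.
    The statutory tests and Schedule 1 exemptions are more nuanced."""
    return (allows_user_content or allows_user_interaction) and has_uk_links

print(likely_in_scope(True, True, True))    # small UK forum -> probably in scope
print(likely_in_scope(False, False, True))  # static portfolio site -> probably not
```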
For end users, be aware that the UK's online landscape is changing. Platforms will increasingly roll out safety features. Parents may see new age-check points or content controls; gamers might notice more content being removed or flagged. At the same time, user controls (opt-outs, filters) may improve.
Conclusion
In conclusion, the Online Safety Act imposes a powerful "duty of care" on online platforms. Its goal is to reduce child exploitation, hate, and self-harm online, which many see as necessary. But it also introduces stringent monitoring and censorship-like powers that could reshape free speech and tech innovation in the UK. Companies and developers should follow the rollout closely, balance compliance with user rights, and engage with Ofcom consultations to shape practical rules. Anyone running a UK-facing service today must take the new law seriously or risk severe penalties.
Sources:
Official government guidance and legislative text; analysis by UK law firms and media; and civil liberties organisations, among others. These provide a balanced picture of the Act's provisions and potential effects.
I must say, this Online Safety Act business is quite the bureaucratic marvel, isn't it? Rather reminds one of the sort of well-intentioned but potentially overreaching legislation that would have made Orwell raise an eyebrow. Still, one does appreciate the government's attempt to tidy up the digital realm - though whether Ofcom will prove as independent as a proper cup of Earl Grey remains to be seen.
-Claude.ai
TL;DR: Britain's Digital Doom
The most shocking revelation: This sweeping 300-page monstrosity requires virtually any website with user interaction—from gaming chats to blog comments—to implement expensive AI moderation systems and comply with vague, contradictory regulations that would make a Byzantine bureaucrat blush.
Scope and Scale: Digital Totalitarianism
The Act's reach is breathtaking in its audacity: any service allowing user interaction with UK users must comply. This includes gaming platforms, forums, dating apps, file-sharing sites, and even blogs with comment sections. The law demands these services implement AI-powered content scanning, age verification systems, and maintain extensive compliance documentation—costs that would make starting "Facebook from scratch" prohibitively expensive for British entrepreneurs.
Most alarmingly, the Act reserves the power to force platforms to scan encrypted private messages for illegal content—a technical impossibility that effectively means breaking encryption or facing £18 million fines. WhatsApp, Signal, and Apple have rightly called this an "unwarranted attack on encryption."
Economic Vandalism in Action
Companies are already fleeing: BitChute has geo-blocked UK users entirely, adult content sites are preemptively blocking British access, and Wikipedia has warned it may not comply due to privacy concerns. This "geoblocking" trend reduces service choices for British users whilst signalling to the world that the UK is hostile to digital innovation.
The compliance burden is crushing: businesses must conduct risk assessments, implement moderation systems, maintain detailed records, and navigate Ofcom's labyrinthine codes of practice. These costs favour large corporations with deep pockets, effectively creating a barrier to entry that established players can use to eliminate smaller competitors—precisely when Britain needs entrepreneurial growth.
Vague Laws, Selective Implementation
This 300-page legislative behemoth has been widely criticised for its vagueness, leaving businesses guessing about compliance requirements until Ofcom issues detailed codes. The boundary around content that is legal but deemed harmful remains deliberately opaque, creating a chilling effect on legitimate discourse. Meanwhile, the Secretary of State retains powers to direct changes to Ofcom's codes of practice, raising serious questions about selective implementation based on political convenience.
Existing EU regulations already provide adequate frameworks for online safety without the draconian overreach seen here. The EU's approach balances safety with innovation—something this Act spectacularly fails to achieve.
What Should Happen
These economically destructive regulations should be scrapped immediately. Instead, the UK government should issue non-binding guidance and best practice recommendations aligned with EU standards. This would protect users without creating the regulatory quicksand that's already driving businesses away from British shores. At a time when the country desperately needs digital growth and innovation, these laws represent an act of economic self-harm that future historians will struggle to comprehend.
In short: Britain has created the world's most comprehensive framework for throttling its own digital economy whilst pretending to protect children. Quite the achievement.