
All Online Content Could Soon Be Regulated Using 25-Year-Old Classifications

The Online Safety Act, explained

The dance of online safety moves between protecting society and handing too much power over to governments. The internet, being what it is, is probably in need of some good regulation, but the emphasis here is on good.

Currently, the government is seeking public feedback on a new set of codes designed to find, remove, and restrict harmful content online, even before it’s posted.

Now, this predominantly seeks to deal with “bad stuff”, but determining what that is and who gets access to it is a major job. It probably doesn’t help that the industry standards being set up to establish this are based on the classification system we use for films (PG, MA15+, etc.), which hasn’t been meaningfully updated in more than two decades.

The impact of these changes could be vast. All social media services, all messaging services, search engines, app stores, internet providers and even those who install your internet box are implicated in the changes.

What Are the Changes?

In 2021, the government passed the Online Safety Act which came into effect in January of this year. According to eSafety, Australia’s independent regulator for online safety, the Act, “expands Australia’s protections against online harm, to keep pace with abusive behaviour and toxic content”.

Good. Great. So what does that mean?

Well, from the looks of it, it makes those who provide online services, from app developers to telcos, responsible for the safety of those using them.

These responsibilities are set out in the Basic Online Safety Expectations, which require providers to do things like “detect, moderate, report and remove material or activity on the service that is unlawful or harmful”.

Much of the practical application of these new rules will come down to codes that are currently being drafted. eSafety has determined that material falling into a category it calls “class 1” should be blocked entirely, while material falling into “class 2” should be blocked from the view of children.

The problem is, in its guidelines, eSafety determines class 1 content to be “material that is, or would likely be, refused classification under the National Classification Scheme.” That scheme was set up in 1995, back when videotapes were all the rage, and it hasn’t really been updated since. In fact, the whole scheme is currently under review to determine whether it’s still relevant in this day and age.

The current standards would ban material that:

“depicts, expresses or otherwise deals with matters of sex, drug misuse or addiction, crime, cruelty, violence or revolting or abhorrent phenomena in such a way that they offend against the standards of morality, decency and propriety generally accepted by reasonable adults.”

Of course, everyone’s tolerance for this kind of stuff is going to be different and there are legitimate reasons why someone would want to access it. Academia, research, and journalism spring to mind as just a few examples.

Nicholas Suzor, internet governance researcher at the Queensland University of Technology, told the ABC that the classification scheme “has long been criticised because it captures a whole bunch of material that is perfectly legal to create, access and distribute.”

It sounds as though eSafety wants all content online to be rated, or at least for internet providers to be aware of what might be deemed unacceptable online. Assuming they aren’t already, and that those who are aware but don’t care can actually be held accountable, this seems like a monumental task.

Class 2 content is basically a watered-down version of the above: stuff that might get an X18+ or R18+ rating, but would still be allowed to be distributed on DVD in Australia. This probably covers most pornography but, again, reasonable adults often disagree about this.

How Will This Affect Me?

Well, first up, anonymity online is probably going to be massively degraded.

Anonymity, and the freedom to say whatever you like online, is one of those hallowed ‘rights’ the internet was founded upon, and one that has frequently been abused to spread misinformation, hatred, and abuse.

Of course, it’s also been used to allow freedom of expression on topics that people might not want to be publicly identified with, to disseminate political messages contrary to the views of certain governments, and to allow whistleblowers to come forward. Governments around the world, including our own, are swiftly putting a stop to that sort of thing.

The current eSafety regulations state that providers must take steps to detect “unlawful or harmful” content, even if it’s being sent through an end-to-end encrypted service, as WhatsApp now is.

A service offering anonymous accounts, like Reddit or Twitter, will have to “require verification of identity or ownership,” meaning content posted online can be traced back to its creator.

Likely mindful of the need for anonymity, eSafety does state that it doesn’t want providers to render their encryption services useless, but that content posted using them will still have to be identifiable.

Further to this, Digital Rights Watch flagged the Act as likely to “significantly impact the livelihoods of sex workers, sex educators, and activists online.” The Act gives eSafety the power to remove content it deems inappropriate, and the group argues that it is simply too broad and wide-reaching not to inadvertently impact pornography creators and those who enjoy their work.

Why Do We Need Regulation?

Bad things happen online, and the statistics during the pandemic make for grim reading. Online child sexual abuse and exploitation “skyrocketed” through 2020 and 2021, due mainly to the number of young people spending more time online and the increase in platforms where people interact anonymously.

This being said, governments often act on a worst-possible-outcomes basis — taking the most extreme examples of behaviour or action and using them to create moral panic to pass laws that affect the general population.

Stopping the online abuse of children is something that everyone in their right mind is in favour of, but using it as a motive to restrict anonymous messaging and regulate the kind of legitimate content that normal adults want to access and discuss could be overreach.

The internet is vast and murky, but that doesn’t mean we shouldn’t be trying to clean it up. However, how this will work in practice is not yet clear. Will companies be required to have live content moderators scanning for “illegal” material? We don’t yet know.

Facebook has long tried to keep bad stuff off its platform with an army of underpaid workers reviewing content flagged by its auto-moderation systems. These people basically sit in bunkers all day, clicking through some of the most horrific content imaginable to determine whether or not it should be allowed on the platform. Many have developed PTSD from the job, and most describe it as incredibly stressful.

Artificial intelligence can only go so far, at present, in detecting what is and is not harmful content. Human oversight is still necessary in the determination of a lot of this stuff.

It’s not wrong to try to remove awful things from the internet, or to stop it from being used as a tool to harass, bully, or abuse. It’s just that doing so without clear and transparent practical application risks being heavy-handed and potentially alienating or criminalising people who shouldn’t be caught up in the system. It also puts vast powers to limit what we can and cannot do online into the hands of an unelected organisation.

Australia is already on this path, the legislation has been passed, and it’s now mainly going to be up to industry, the very people the laws are attempting to regulate, to determine the impact of this decision.

