Australia is changing how it regulates the internet and no one cares


When we surf the web, most of us give little thought to what’s going on behind the scenes—who makes the decisions about the content we may or may not see.

Often, this decision rests in corporate hands: Facebook, TikTok, and most major social media platforms have rules about what content they accept, but enforcement can be inconsistent and less than transparent.

In recent years, the federal government has passed a set of often-controversial laws that allow for greater control over what goes online.

One example is the Online Safety Act, which was passed quickly in the middle of last year.

Among other powers, it requires the technology industry – which includes not just social media, but messaging services like SMS, internet service providers and even the company behind your modem – to develop new codes to regulate “harmful online content”.

These codes, drawn up by industry groups, will have a big say in how our technology is managed, but some worry they could have unintended consequences, not least because they borrow from an older classification scheme.

What are the codes?

After the Online Safety Act came into force, the eSafety Commissioner tasked the industry with developing draft codes to regulate “harmful online content”.

This “harmful” material is referred to as “Class 1” or “Class 2”, as defined by the eSafety Commissioner.

These categories are taken from the National Classification Scheme, better known for the ratings you see on movies and computer games. More on that in a moment.

In general, you can think of Class 1 as material that would be Refused Classification (RC), while Class 2 would be classified X18+ or R18+.

In response, the industry has developed draft codes that describe how companies will guard against the access and distribution of this material.

eSafety Commissioner Julie Inman Grant is overseeing the new Online Safety Act. (ABC News: Adam Kennedy)

The codes vary by sector and business size. For example, a code may require a company to report offensive social media content to law enforcement, maintain systems for taking action against users who breach its policies, and use technology to automatically detect known child sexual exploitation material.

What content will it affect?

For now, the draft codes deal only with so-called Class 1A and 1B material.

According to eSafety, Class 1A may include child sexual exploitation material, as well as content that promotes terrorism or depicts extreme crime or violence.

Meanwhile, Class 1B can include material that depicts “matters of gratuitous crime, cruelty or violence” as well as drug-related content, including detailed instruction in the use of prohibited drugs. (Classes 1C and 2 mainly deal with online pornography.)

Obviously, these categories contain content that society deems unacceptable.

The problem, critics say, is that Australia’s approach to classification is confusing and often out of touch with the public. The National Classification Scheme came into force in 1995.

“The classification scheme has long been criticised because it captures a lot of material that is perfectly legal to create, access and distribute,” said Nicolas Suzor, who researches internet governance at the Queensland University of Technology.

Rating a movie for theaters is one thing. Categorizing content online is quite another.
