In a groundbreaking move, Australia’s eSafety Commissioner, Julie Inman Grant, has unveiled world-first industry standards designed to compel major tech companies, including Apple, Google, and Meta, to take stronger action against online child sexual abuse material and pro-terror content on their platforms. The standards also target the disturbing emergence of “deepfake” child pornography created using generative AI.
After more than two years of work and the rejection of draft codes proposed by the tech industry itself, Commissioner Inman Grant has released draft standards covering cloud-based storage services such as Apple iCloud, Google Drive, and Microsoft OneDrive, as well as messaging platforms such as WhatsApp. The objective is to force these companies to do more to remove unlawful content from their services.
Global impact anticipated
Commissioner Inman Grant, a former Twitter executive, hopes that the Australian industry standards will serve as a precedent for similar regulations worldwide targeting harmful content. Notably, the requirements will not compel tech companies to compromise their own end-to-end encryption, which is enabled by default on some services, such as WhatsApp.
While the major tech platforms have policies prohibiting child sexual abuse material on their public services, Commissioner Inman Grant contends that these companies have not done enough to police their own platforms. She clarified, “We understand issues around technical feasibility, and we’re not asking them to do anything that is technically infeasible. But we’re also saying that you’re not absolved of the moral and legal responsibility to just turn off the lights or shut the door and pretend this horrific abuse isn’t happening on your platforms.”
For example, WhatsApp, an end-to-end encrypted service, analyzes behavioral signals and the non-encrypted parts of its service, including profile and group chat names, as well as certain symbols known to stand in for child pornography, such as “cheese pizza” emojis. This approach enables WhatsApp to report 1.3 million instances of suspected child sexual exploitation and abuse each year.
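WhatsApp has not published the details of its detection pipeline, so the following is a minimal, purely illustrative sketch of what screening unencrypted metadata against a list of known indicator tokens might look like. The function name, token list, and review step are all hypothetical, not WhatsApp’s actual implementation.

```python
# Purely illustrative sketch: screening unencrypted metadata (group names,
# profile text) for known indicator tokens. This is NOT WhatsApp's actual
# implementation; the token list and function name are hypothetical.

# Hypothetical indicator set. "cheese pizza" and the cheese/pizza emoji
# pair are cited in reporting as coded references to abuse material.
INDICATOR_TOKENS = {"cheese pizza", "🧀🍕"}

def flag_for_review(unencrypted_fields: list[str]) -> bool:
    """Return True if any unencrypted field contains a known indicator token.

    A match only queues the account for human trust-and-safety review;
    it is never treated as proof of wrongdoing on its own.
    """
    for field in unencrypted_fields:
        normalized = field.lower()
        if any(token in normalized for token in INDICATOR_TOKENS):
            return True
    return False

# Example usage: screen a group's public-facing metadata.
group_metadata = ["weekend plans", "🧀🍕 club"]
if flag_for_review(group_metadata):
    print("Queued for trust-and-safety review")
```

In practice, such a keyword match would be only one of many signals, combined with reporting volume, network patterns, and human review before any account action or referral to authorities.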
Challenging deepfake child pornography
The draft standards also cover child sexual abuse material and terrorist propaganda generated using open-source software and generative AI. An alarming trend in Australia involves students creating “deepfake porn” of their peers and sharing it within educational settings. Commissioner Inman Grant expressed her concern, stating, “We’re seeing synthetic child sexual abuse material being reported through our hotlines, and that’s particularly concerning to our colleagues in law enforcement, because they spend a lot of time doing victim identification so that they can actually save children who are being abused.”
She emphasized that safeguards must be built and tested at the design phase of these technologies to prevent the proliferation of harmful content. According to Commissioner Inman Grant, “If we’re not building in and testing the efficacy and robustness of these guardrails at the design phase, once they’re out in the wild, and they’re replicating, then we’re just playing probably an endless and somewhat hopeless game of whack-a-mole.”
Public consultation and implementation
The eSafety Commissioner’s office has opened a public consultation period on the draft standards, which will run for 31 days. After this period, the final versions of the standards will be tabled in federal parliament and are set to take effect six months after their registration.
The standards not only mandate stricter content moderation but also require tech companies to allocate sufficient resources and personnel to trust and safety efforts.
Commissioner Inman Grant stated, “You can’t do content moderation if you’re not investing in those personnel, policies, processes, and technologies. And you can’t have your cake and eat it too. If you’re not scanning for child sexual abuse but then provide no way for the public to report to you when they come across it on your services, then you are effectively turning a blind eye to live crime scenes happening on your platform.”
The introduction of these standards follows a recent incident involving social media giant X, formerly known as Twitter, which refused to pay a $610,500 fine imposed by the eSafety Commissioner for allegedly failing to adequately address child exploitation material on its platform. X has filed an application for judicial review in the Federal Court. Commissioner Inman Grant’s office continues to evaluate its options regarding X Corp’s non-compliance with the reporting notice but has declined to comment on ongoing legal proceedings.
In an era when technology has amplified the spread of harmful content, Australia’s pioneering industry standards mark a significant step toward safer online spaces, particularly for children and other vulnerable users. As the standards move through public consultation and toward implementation, the world will be watching to see whether other nations follow Australia’s lead in the fight against online child exploitation and pro-terror content.
Source: https://www.cryptopolitan.com/australias-standards-to-fight-online-harm/