Artificial intelligence developers have been told they should “watermark” content so that people can clearly tell it is AI-generated, as the government seeks to address concerns the technology is being used to mislead and harm Australians.
There is no legal requirement to identify content as AI-generated, which has allowed generated content to be mistaken for the real thing, most notoriously in the form of deepfakes.
In guidance to developers and content creators, the federal government has advised that AI content should be “clearly identifiable” by including labels that note content is AI-generated or embedding information to trace the origins of content — a process known as watermarking, which is more difficult to manipulate or remove than labels.
It notes that transparency tools are particularly important where AI-generated content could be used to “adversely affect” people, and become more important the more heavily AI has been involved in creating the content.
“AI is here to stay. By being transparent about when and how it is used, we can ensure the community benefits from innovation without sacrificing trust,” Industry Minister Tim Ayres said in a statement.
“That’s why the Albanese government is urging businesses to adopt this guidance. It’s about building trust, protecting integrity, and giving Australians confidence in the content they consume.”
Some companies, including Google, already watermark AI content.
The rapid spread of generative AI has fuelled fears that the technology could be used for fraud, misinformation, blackmail, or to exploit people by creating convincing fake content that misrepresents what a person has said or done.
The eSafety Commissioner has warned that deepfake image-based abuse is happening at least once a week in Australian schools.
On Monday, independent senator David Pocock introduced a private senator's bill to prohibit the sharing of digitally altered or artificially generated content depicting an individual's face or voice without their consent.
Senator Pocock said the federal government had been too slow and had failed to comprehensively respond since beginning its review into responsible AI more than two years ago.
National AI Plan due to be released
The new AI guidance has been given to industry ahead of the government releasing a National AI Plan, which is the culmination of several years of consulting and is expected to introduce “mandatory guardrails” to protect against the worst impacts of AI.
The plan will also respond to ideas raised at the government’s productivity roundtable in August, where AI was a central focus of discussion on how to boost the economy and lift wages.
The Productivity Commission warned against mandatory guardrails at that roundtable, saying they could strangle a $116 billion opportunity for the economy, and urged that any legislative response be paused until gaps in existing law were properly identified.
But while the government seeks to strike a balance between the risks of AI and a potential economic boom, its recent measures have focused on addressing deep concerns in the community about safety.
Senator Ayres last week announced the government would create an AI Safety Institute, which could monitor and respond to “AI-related risks” and help to build trust in the technology.
Former industry minister Ed Husic, who began the consultations on a federal response to the growth of AI, has called for a dedicated AI Act that could provide a framework to flexibly respond as the technology develops.
Source: https://www.abc.net.au/
