Australasia’s AI unveiling: pedagogy, practice and policy

Anne Kruger and Richard Murray

Responses from policymakers, professionals and educators to the urgency of AI draw many parallels with earlier responses to the growth of online misinformation and disinformation. A key difference, however, was the sudden global release of generative AI products, which forced stakeholders to the frontier of reckoning with AI-generated harms and opportunities. Two years on, Australia’s response could be described as purposeful in the area of social media safety by design, but piecemeal in terms of digital literacy and media empowerment.

In navigating the frontier of AI-generated harms, we also need to consider the potential chaos that AI creates for everyday citizens’ digital literacy. The role of journalists in supporting their audiences’ critical thinking has therefore never been more necessary. While safety standards (discussed below) have understandably been an urgent focus of governments’ dealings with technology and social media companies, others in the media ecosystem have been left – literally and figuratively – to their own devices.

That was the case for Australia’s news organisations. On the one hand, they faced the task of serving their publics by delivering quality journalism while sifting through a potential AI-powered explosion of mis- and disinformation; on the other hand, the same organisations had to consider the opportunities AI may deliver for news production and effective workflows. As this article demonstrates, while news organisations and institutions are still figuring it out via trial and error, there is a clear case for regulations that include AI labelling and mandated media or digital literacy training. These are also areas where journalism- and communications-focused academics are well placed to provide support.

An ethical template

Advances in digital technology during the mid-2010s led to a rise in Open Source Intelligence (OSINT) techniques available to the mainstream. Proponents encouraged transparency by showing in their reports the investigative methods they used. This was a new hook, given journalism had previously shied away from including details of how the so-called sausage was made. Through OSINT, journalists began to show their audiences how publicly available digital tools could provide powerful evidence in their investigations – tools that ranged from determining the provenance of images to providing critical evidence to hold “governments and other powerful actors to account” in international courts of law.1

In a watershed case, open source investigations by the independent collective of researchers known as Bellingcat2 uncovered crucial evidence about the origin of the Buk missile launcher that downed Malaysia Airlines Flight MH17 over Ukraine on July 17, 2014. The investigative methods included in Bellingcat’s reports provided the basis of a transparent and ethical framework in which audiences are clearly aware of what is used – and sometimes not used – in the process of information gathering. The at times step-by-step explanation of the tools and resources used in investigations not only built evidence and credibility but also established a new approach to transparency.

The OSINT approach can inform media organisations as they experiment with AI to improve productivity and workflows or to create content. Frameworks for the mainstream media’s ethical and transparent use of tools and processes have undergone an AI transformation. But this work is still in its infancy, and Australia has experienced very public missteps along the way. For example, in April 2025 unsuspecting audiences found out that “Thy”, a popular radio DJ who had been on air in Australia for six months, was not a real person but AI generated. This “triggered an industry-wide discussion on diversity, consent, and AI’s place in creative audio environments.”3

The Thy example revealed that a big part of the perceived problem is Australia’s lack of AI labelling regulations. There are no specific restrictions on the use of AI in broadcast content, and no obligation to disclose its use. This deficit found its way into the political domain: Australia’s May 2025 federal election was rife with unlabelled AI-enhanced political campaigns, and questions were again raised about Australia’s lack of truth in political advertising regulation (discussed below).

Surveys in Australia and globally have revealed that newsrooms began experimenting cautiously with generative AI4 while the development of transparency and ethical use frameworks was slow.5 Australia’s public broadcaster and news organisation, the Australian Broadcasting Corporation (ABC), released its AI principles in June 2024, which “reflect its values and editorial standards and will govern the ways in which it will use AI/ML technologies.”6 Arguably, the ABC’s in-house experimentation and trials – such as transcripts to enhance podcast accessibility7 – have reduced risk compared with more public-facing trial and error approaches, as the Thy example showed.

Globally, the lead has come from larger organisations such as Thomson Reuters, whose CEO Steve Hasker noted they “used artificial intelligence to radically transform the company into a technology business.” That included the Reuters news agency – although, somewhat ironically, Hasker added that the news agency side is “…the smallest part of our business … that accounts for a bit less than 10 per cent of our revenue and a bit less than five per cent of our profit.” Hasker conceded, “In many places, it’s the only part of the company that anyone recognises.”

Having a large organisation behind a news agency provides a huge advantage in terms of resources to build responsible AI systems and processes. The news agency is transparent about its journalists’ use of AI8 and the company as a whole has developed AI principles.9 Other notable advances in newsrooms globally come from Agence France-Presse (AFP), where Hong Kong-based standards and ethics director Eric Wishart10 has incorporated the uptake of AI into a common-sense approach that aligns with his years of journalistic practice and news wire expertise.

Regulations, elections, shortcomings

The Australian Government has been an early global leader in digital policies addressing issues such as cyberbullying. It established the office of the eSafety Commissioner under the Enhancing Online Safety Act 201511 (amended in 2017).12 This laid a foundation for social media regulation standards and protections in the areas of online safety and safety by design. However, the establishment of misinformation and disinformation regulation (which intersects strongly with technological developments such as AI) was slower. Mis- and disinformation regulation eventually arose as something of a by-product of the Australian Treasury’s Digital Platforms Inquiry, which began in 2017.

The central focus of the Digital Platforms Inquiry was whether large digital platforms and social media organisations that operate in Australia should pay local news publishers for news content. The Digital Platforms Inquiry report was handed down in July 2019.13 Initially, policymakers and stakeholders expected a voluntary code to address market competition between news and platforms, and a mandatory or co-regulatory code to address disinformation. However, what eventuated flipped these expectations: a mandatory News Media Bargaining Code (NMBC) came into effect in March 2021.14

And while the Digital Platforms Inquiry recommended a co-regulatory response to combat disinformation, the Australian Government at the time instead asked digital platforms to develop a voluntary code of practice to help address disinformation. This task was assigned to the Digital Industry Group Inc. (DIGI), a not-for-profit industry association that advocates for the digital industry in Australia. Eventually, a code covering both disinformation and misinformation was developed under strict guidance notes from Australia’s media and communications regulator, the Australian Communications and Media Authority (ACMA). The code was groundbreaking given the dearth of similar international regulation and the heavy-handed, blunt-force legislation adopted by near neighbours in Asia.15

The explosion of generative AI into the wider information ecosystem in 2023 created a new urgency to ensure safety and information integrity. Australia’s government was quick to acknowledge that generative AI ushered in “new and emerging harms.”16 Not surprisingly, it responded by announcing a statutory review of the Online Safety Act and an intention to amend the Basic Online Safety Expectations (BOSE) system, both of which apply to tech and social media companies. This resulted in a series of new draft codes17 focused on measures to restrict Australian children from accessing adult content online and other harms. In January 2024 the government also released its findings from consultations with stakeholders on potential AI “guardrails”18 required throughout industry. However, it was Australia’s new legislation setting a minimum age for social media use, passed by Parliament in December 2024, that attracted international headlines.

Similarly, Australia’s near neighbour New Zealand adopted an online safety framework, the Aotearoa New Zealand Code of Practice for Online Safety and Harms.19 This voluntary code commits signatories to a set of Guiding Principles and Commitments “that acknowledges the need for flexible responses to ever-changing risks from harm.”20 Online discourse had offline, real-life implications in New Zealand during the Covid-19 pandemic, ranging from the Parliamentary Protests21 co-opted by harmful online campaigns to the lessons learned about the need to craft culturally appropriate messages to encourage vaccine uptake among diverse communities.22 This matters now because AI-generated content can take advantage of unsuspecting citizens and exacerbate audiences’ susceptibility to dangerous mis- and disinformation, highlighting the need for digital literacy across broader society. In terms of AI regulation, the New Zealand Government has introduced a non-legally binding, principles-based best practice framework to guide the responsible use of artificial intelligence technologies across the public sector.23

Generative AI played a role in elections around the world in 2024 – from “voice clones” of imprisoned opposition leader Imran Khan in Pakistan,24 to Britain’s independent candidate “AI Steve”,25 and the use of AI in India to translate campaign speeches into multiple regional languages.26 Researchers have described the year as “the good, the bad and the in-between.”27 The downright “absurd” can be added to the list. When outspoken, like-minded supporters create and repost AI-generated misinformation, it can galvanise their communities’ political views, frame the narratives of online discourse, and often break through into the mainstream media. This was the case in the US when Donald Trump not only picked up on false online slurs28 about pet-eating immigrants but gave them oxygen during a live, internationally televised presidential debate.29

While the AI apocalypse or armageddon feared ahead of the global elections may not quite have eventuated, researchers from the Alan Turing Institute’s Centre for Emerging Technology and Security noted that “deceptive AI-generated content still influenced election discourse, amplified harmful narratives and entrenched political polarisation.”30 This supports research by DIGI in Australia showing that an individual’s perception of what constitutes misinformation can be skewed by their political bias.31

Journalists and broadcasters noted the influx of satirical AI-generated political campaigns on social media during the 2025 Australian federal election. Broadcaster Sofie Formica noted how the rise in “fake political videos” that can “manipulate the message” makes the media’s job of sifting through what is legitimate more challenging,32 and that there is too much of a lag in addressing it. Indeed, there is a new urgency to ensure journalists are equipped with the skills to uphold information integrity through the discernment of a range of online content in order to deliver quality news. Equally, there is a need for sustained government support of mandatory media and digital literacy curricula for successive generations, rather than the piecemeal approach of projects funded for single-digit years.

Many of the AI-generated political videos in the 2025 campaign were obvious satire. But this raises the question: what happens as AI becomes harder to detect, or the satire more nuanced? A combination of AI-specific standards and changes to political advertising laws could go far in mitigating the effects. This includes standards that require labelling of AI-generated content, and laws that address Australia’s patchwork of accountability mechanisms to prevent lies in political advertising. The lack of truth in political advertising laws has arguably compromised the Australian public’s ability to identify mis- and disinformation.33 It also did little to assist Australians during the landmark 2023 Voice Referendum, which was aimed at giving Indigenous Australians a voice to Parliament. Further lessons from the Voice showed how easily misinformation spreads in more closed spaces such as emails, screenshots in chat apps and newsletters.34, 35

While the lack of truth in political advertising laws is a matter for Australian legislators and policymakers to address, there is more immediate hope on the horizon in terms of AI labelling. Researchers from the Partnership on AI have collated years of case studies that showcase the benefits of labelling AI36 – and what happens when you don’t. Further to this, the Coalition for Content Provenance and Authenticity (C2PA) and the Content Authenticity Initiative (CAI)37 have developed an open industry standard for content authenticity and provenance. The C2PA has trademarked a “cr” icon that signals a consistent standard of provenance credentials travelling with content, for use by creators, editors, publishers, media platforms and consumers. Such tools assist OSINT investigations and, if put to wider use in Australia, could support digital literacy programs by giving audiences a visual cue in the “cr” icon. Additionally, current regulation such as the ACPDM could push for the adoption of such labelling in Australia.
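
To illustrate the underlying idea of provenance credentials, a simplified sketch follows: a manifest recording who made an asset and whether AI was used is bound to a hash of the file and verified later. It is purely illustrative and not the C2PA specification – the real standard embeds cryptographically signed manifests (X.509 certificates and asymmetric signatures) via dedicated SDKs, whereas this toy version uses an HMAC and hypothetical claim fields.

```python
# A simplified, hypothetical illustration of a provenance "content credential".
# NOT the C2PA implementation: the real standard uses X.509 certificates and
# asymmetric signatures embedded in the file; this sketch uses an HMAC only to
# show the bind-and-verify idea.
import hashlib
import hmac
import json

SIGNING_KEY = b"newsroom-demo-key"  # stand-in for a real signing certificate


def create_manifest(asset_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims (creator, tooling, AI usage) to a hash of the asset."""
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,
    }
    signature = hmac.new(
        SIGNING_KEY, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the asset is unchanged and the manifest was signed by the key holder."""
    expected = hmac.new(
        SIGNING_KEY,
        json.dumps(manifest["payload"], sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    same_asset = (
        manifest["payload"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    )
    return untampered and same_asset


if __name__ == "__main__":
    image = b"...raw image bytes..."
    manifest = create_manifest(
        image, {"creator": "Example Newsroom", "generative_ai": "image upscaled with AI"}
    )
    print("verified:", verify_manifest(image, manifest))         # True
    print("tampered:", verify_manifest(image + b"!", manifest))  # False
```

The point of the design is that any change to the asset or to its claims breaks verification, which is what gives a visual cue such as the “cr” icon its value for audiences.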

A community of practice

The University of Queensland’s School of Communication and Arts launched its AI and the Next Generation of Journalists Community of Practice on September 6, 2024. The aim is to encourage and facilitate ongoing discussion, skills building and curricular responses for staff and students alongside industry. The launch heard from learning innovation designers; Adobe solutions experts; and the ABC’s Product Strategy Manager. This is an opportune time for us as journalism educators with strong industry links to deliver experiential learning via an ongoing industry-academia-student Community of Practice38 and to bring an ethical lens to practical research outputs.

Through experiential learning and research, students gain authentic work experience, adapting to substantial developments concurrently with industry leaders.39 We have selected a cross-section of media organisations whose leaders are working and experimenting with generative AI workflows. Our collaboration with industry experts during this period of testing and trialling generative AI continuously and iteratively informs future curriculum design in journalism (and communications) for the decade to come.

Training the trainers

One of the greatest challenges for the Community of Practice is that newsrooms are notoriously competitive ecosystems – both internally and externally. How do you get a group of editors and journalists in a room sharing potential secrets and even proprietary information? One answer is to approach newsrooms individually and bring them together at key points for seminars with glossy outside experts from tech firms as the guest speakers. Another approach focuses on training – and on the trainers. Shaun Davies, formerly a principal product manager in trust and safety at Microsoft, recently developed and helped to lead the Google News Initiative AI Workshops – a 16-week program for small to medium newsrooms across Australia.40 Davies also consults throughout Asia and spoke from Japan for this article about how he builds maturity in newsroom AI uptake, moving from “ad hoc vibes usage, to a more structured, formally tested and proven use case.” Davies said his aim is to develop robust processes:

And the way you do that is not by throwing stuff at the wall and going it feels good. You actually have to make structured tests – say you want [AI] to save time. Well, let’s benchmark the amount of time it takes to do this [task].41

Davies advises newsrooms to first set a bar: output should be no worse than it currently is, and any use of AI should meet the same quality. “And to do that, you need to define what quality means to your organization, and then you need to develop a test data set that demonstrates what that quality is,” he said.

From there, newsrooms are developing more robust solutions that, even in test phases, can compare the outputs of the model. For example, when testing whether an AI can provide a quick summary at the top of an article, Davies noted:

Get a number of experts which could be three of your most senior writers to rate the prompts, and blind test against human written content. In that case you’ve got the rest of the article so you can test for hallucinations.

Davies’ training also covers prompt design and checks of inputs aimed at removing hallucinations.
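
As an illustration of the kind of blind test Davies describes – and only an illustration: the articles, summaries and scores below are invented, and this is not his actual tooling – the following sketch interleaves AI-drafted and human-written summaries under blinded IDs, collects 1–5 ratings from senior writers, and compares the averages per source so a newsroom can check that AI output meets its existing quality bar.

```python
# A minimal blind-test harness: raters score summaries without knowing whether
# they were written by AI or by a human, then averages are compared per source.
# Illustrative sketch only; article IDs, summaries and ratings are made up.
import random
import statistics
from dataclasses import dataclass


@dataclass
class Sample:
    article_id: str
    text: str    # the candidate summary shown to the rater
    source: str  # "ai" or "human" -- hidden from raters during scoring


def build_blind_set(ai_summaries: dict, human_summaries: dict) -> dict:
    """Interleave and shuffle summaries, returning them under blinded sample IDs."""
    samples = [Sample(aid, txt, "ai") for aid, txt in ai_summaries.items()]
    samples += [Sample(aid, txt, "human") for aid, txt in human_summaries.items()]
    random.shuffle(samples)
    return {f"sample-{i}": s for i, s in enumerate(samples)}


def score_report(blind: dict, ratings: dict) -> dict:
    """Average the 1-5 ratings per source; AI should be no worse than the human baseline."""
    by_source = {"ai": [], "human": []}
    for sample_id, sample in blind.items():
        by_source[sample.source].append(ratings[sample_id])
    return {src: round(statistics.mean(vals), 2) for src, vals in by_source.items()}


if __name__ == "__main__":
    ai = {"a1": "Council approves new flood levy.",
          "a2": "Rates to rise 3% next year."}
    human = {"a1": "Council votes to fund flood defences with a new levy.",
             "a2": "Ratepayers face a 3% rise from July."}
    blind = build_blind_set(ai, human)
    # In practice, each senior writer supplies these scores after reading blind.
    ratings = {sample_id: random.randint(3, 5) for sample_id in blind}
    print(score_report(blind, ratings))  # e.g. {'ai': 4.0, 'human': 4.5}
```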

Meanwhile, other platform experts have joined University of Queensland Data Journalism classes to introduce AI “productivity tools”. Assessment for the course requires students to produce a long-form journalistic report with data, visualisations and interviews. During the course one student chased a local municipal council for environmental data. He had a breakthrough in the form of a “data dump” of technically public, but practically impenetrable, links to files from council meetings. The student worked through the stash during a two-hour tutorial workshop, uploading the files into NotebookLM and, for comparison, Pinpoint. While this took some initial conversion time, both tools allowed the student to search and organise hundreds of documents, potentially saving weeks of work. And while NotebookLM may have the capacity, we didn’t encourage the student to turn the council files into a podcast.
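
For a sense of the basic document triage such tools perform at scale, a rough standard-library sketch follows; the folder name and query are hypothetical, and the real tools layer AI-powered summarisation and source-grounded question answering on top of this kind of retrieval step.

```python
# A rough sketch of keyword triage across a folder of plain-text council
# documents -- illustrative only; the folder and query are hypothetical.
from pathlib import Path


def search_documents(folder: str, query: str) -> dict:
    """Return, per file, the lines that mention the query (case-insensitive)."""
    hits = {}
    for path in sorted(Path(folder).glob("*.txt")):
        matching = [
            line.strip()
            for line in path.read_text(errors="ignore").splitlines()
            if query.lower() in line.lower()
        ]
        if matching:
            hits[path.name] = matching
    return hits


if __name__ == "__main__":
    # e.g. hundreds of meeting minutes converted to text in ./council_minutes/
    results = search_documents("council_minutes", "waterway contamination")
    for filename, lines in results.items():
        print(filename)
        for line in lines[:3]:  # show a few matching lines per document
            print("  ", line)
```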

Lessons are taken from the experts to our classrooms. Davies’ earlier advice for newsrooms also extends to strategic communications professionals – and thus an even wider range of students. In terms of quick background research for journalists and communications professionals, Davies sees great potential in deep research models that can search the internet as they work:

You can type a research query in, and Gemini will go out and search the web. It will probably search through about 300 websites, and it will compile the information it’s found into a really detailed report for you. You can be quite explicit about what you need to know.42

In terms of employment risk and opportunity, Davies noted he tells newsroom staff that it is not so much that “an AI [will take] their job”, but rather that “somebody could use AI more effectively than they could.” For tertiary educators, this feeds into pedagogical aims, particularly in a journalism program where students need to graduate with an edge – ahead of the AI curve in terms of both skills and ethics.

Conclusion

While Australia has made inroads in coming to terms with AI, the current regulatory framework does not sufficiently cover existing risks from AI. The eSafety Commissioner has led on cyberbullying and safety by design; the sense of urgency and the resources provided by the government in that area need to be extended across the digital ecosystem for proper coordination of risk mitigation. Australia’s ACPDM regulatory code is technology neutral, open to further signatories and allows for agility in addressing AI developments. But while this code of practice requires platforms and social media organisations to address mis- and disinformation, and can incorporate technological developments in AI, it does not cover the whole information ecosystem. News media and political parties, for example, could each produce codes of practice to address mis- and disinformation. Where a suite of legislation is a blunt instrument reserved for the highest risks such as safety and national security, codes of practice can set standards that adjust with the pace of technological change.

This article recommends that regulation and codes of practice adopted by a wide range of stakeholders incorporate transparency and truth, and promote evidence-based information in an era of AI. Voluntary codes provide safe testing grounds that mitigate unintended consequences that may stem from legislation as technology develops: they enable action with real guardrails while allowing time to consider future legislative requirements. News media and political parties should develop industry-wide transparency codes for the uptake and use of AI in content and practices, and these should be made explicit for digital platforms via the current ACPDM. Templates exist: OSINT techniques model transparency, and software developments in AI labelling from the CAI show how this can be implemented at scale. Missteps in Australian media show that audiences appreciate transparency and ethical applications of AI in media.

In terms of legislation, urgent work is required to address truth in political advertising (which may include moving AI labelling from regulatory codes into legislation) and mandatory digital literacy curricula. Governments must move fast to introduce a suite of intergenerational, culturally relevant, long-term-funded media literacy initiatives as standardised education to enhance critical thinking skills throughout society.

In summary, while governments in Australia and globally are still figuring out the unveiling effects of AI, there is in the meantime a clear case for regulations or legislation that include AI labelling and mandated media or digital literacy training. AI can speed up workflows and, equally, speed the spread of mis- and disinformation. Journalists therefore need training in AI production tools and ethical transparency frameworks, as well as in how to protect themselves and their audiences from AI-enhanced disinformation risks. This enables journalism to support society and our democratic systems. These are media development issues where academia is well suited to support industry and policy, and to provide graduates who are ready to lead.

Notes

1. https://www.icfj.org/news/fundamentals-open-source-intelligence-journalists

2. https://www.bellingcat.com/

3. Lee, N. (2025) ARN’s AI voice creator speaks out on ‘presenter’ Thy as bigger questions remain unanswered. Published April 28, 2025. Mediaweek. https://www.mediaweek.com.au/arn-breaks-silence-on-ai-host-thy-but-keeps-silent-on-key-issues/

4. Attard, M., Davis, M., Main, L. (2024) Gen AI and Journalism. Centre for Media Transition, University of Technology Sydney. Accessed via https://www.uts.edu.au/research/centre-media-transition/projects-and-research/gen-ai-and-journalism

5. Henriksson, T. (2023) New survey finds half of newsrooms use Generative AI tools; only 20% have guidelines in place. Survey by the World Association of News Publishers. Published May 25, 2023.

6. ABC AI Principles Published June 28, 2024. ABC. https://www.abc.net.au/about/abc-ai-principles/104036790

7. https://www.abc.net.au/innovation-lab/abc-transcribe/103125708

8. Reuters and AI. Retrieved from https://www.reuters.com/info-pages/reuters-and-ai/

9. Thomson Reuters, https://www.thomsonreuters.com/en/artificial-intelligence/ai-principles

10. Wishart, E. (2024) Journalism Ethics: 21 Essentials from Wars to Artificial Intelligence, Hong Kong University Press.

11. Enhancing Online Safety Act 2015 (Cth), https://www.legislation.gov.au/C2015A00024/2017-06-23/text.

12. Enhancing Online Safety for Children Amendment Bill 2017 (Cth), https://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id%3A%22legislation%2Fbillhome%2Fr5794%22

13. ACCC, “Digital Platforms Inquiry 2017–19: Preliminary Report,” accessed May 15, 2025, https://www.accc.gov.au/by-industry/digital-platforms-and-services/digital-platforms-inquiry-201719/preliminary-report.

14. ACCC, “News Media Bargaining Code,” accessed May 15, 2025, https://www.accc.gov.au/by-industry/digitalplatforms-and-services/news-media-bargaining-code/news-media-bargaining-code.

15. https://sites.brown.edu/informationfutures/2022/11/04/information-futures-labs-apac-partner-crosscheck-launches-training-booklet-for-southeast-asia/ Accessed May 22, p 7-8.

16. Butler, J. (2023). Australia to force social media companies to crack down on ‘emerging harms’ of AI deep fakes and hate speech. Published November 22, 2023, The Guardian.

17. https://www.esafety.gov.au/industry/codes/background-to-the-phase-1-standards

18. https://consult.industry.gov.au/supporting-responsible-ai

19. Transparency International New Zealand, “Aotearoa New Zealand Code of Practice for Online Safety and Harms,” October 13, 2022, https://www.transparency.org.nz/blog/aotearoa-new-zealand-code-of-practice-foronline-safety-and-harms

20. Kruger, Anne (2024). A review of the landmark Australian Code of Practice On Disinformation and Misinformation (ACPDM). St Lucia, QLD, Australia: The University of Queensland. https://doi.org/10.14264/58981b2 Pages 28-33, p. 29.

21. https://www.theguardian.com/world/video/2022/feb/10/anti-vaccine-protesters-clash-with-police-outside-new-zealand-parliament-video and effects on communities noted here https://www.rnz.co.nz/news/national/483800/impact-of-parliament-protests-still-being-felt-in-the-thorndon-and-pipitea-community

22. Kruger, Anne (2024). A review of the landmark Australian Code of Practice On Disinformation and Misinformation (ACPDM). St Lucia, QLD, Australia: The University of Queensland. https://doi.org/10.14264/58981b2 Pages 28-33.

23. https://www.dlapiper.com/en/insights/publications/2025/02/new-zealands-public-service-ai-framework-guiding-responsible-innovation

24. https://www.theguardian.com/world/2023/dec/18/imran-khan-deploys-ai-clone-to-campaign-from-behind-bars-in-pakistan

25. https://www.ai-steve.co.uk/

26. https://www.techpolicy.press/indias-experiments-with-ai-in-the-2024-elections-the-good-the-bad-the-inbetween/

27. https://www.techpolicy.press/indias-experiments-with-ai-in-the-2024-elections-the-good-the-bad-the-inbetween/

28. https://www.theguardian.com/us-news/article/2024/sep/09/republicans-haitian-migrants-pets-wildlife-ohio

29. https://www.theguardian.com/us-news/article/2024/sep/10/trump-springfield-pets-false-claims

30. https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections

31. https://www.uts.edu.au/news/2022/07/australians-and-misinformation

32. https://www.4bc.com.au/podcast/deepfake-surge-sparks-election-fears-most-aussies-cant-spot-ai-generated-content/

33. Anne Kruger and Esther Chan, “Australian Election Misinformation Playbook,” First Draft, March 26, 2022, https://firstdraftnews.org/articles/australian-election-misinformation-playbook/. As cited in Kruger, Anne (2024). A review of the landmark Australian Code of Practice On Disinformation and Misinformation (ACPDM). St Lucia, QLD, Australia: The University of Queensland. https://doi.org/10.14264/58981b2

34. https://www.rmit.edu.au/news/crosscheck/voice-misinformation-intervention

35. https://www.rmit.edu.au/news/all-news/2023/jun/croscheck-voice-referendum

36. https://partnershiponai.org/from-deepfakes-to-disclosure-pai-framework-insights-from-three-global-case-studies/

37. https://c2pa.org/

38. Cox, A (2005). What are communities of practice? A comparative review of four seminal works. Journal of Information Science, 31(6), 527–540.

39. Kolb, D. A. 2015. Experiential Learning: Experience as the Source of Learning and Development, (2nd ed.). Upper Saddle River, NJ: Pearson.

40. Google and Bastion roll out AI pilot program for Australian news organisations – Bastion

41. Shaun Davies interview with Dr Anne Kruger, May 16, 2025.

42. Shaun Davies interview with Dr Anne Kruger, May 16, 2025.

 

Dr Anne Kruger is a member of the governance board for the Australian Code of Practice on Disinformation and Misinformation (ACPDM).  She was co-chief investigator in the development of the code, formed under policy directives from the Australian Treasury’s Digital Platforms Inquiry. Dr Kruger is currently Convenor of the Bachelor of Journalism and Bachelor of Arts Journalism and Mass Communication Programs at the University of Queensland. Previously she was Asia Pacific Director for online verification NGO First Draft News, and has worked across senior roles in industry and academia in Singapore, Hong Kong and Australia.

Dr Richard Murray is Senior Lecturer & Director of Indigenous Engagement, School of Communication and Arts, Affiliate of Centre for Communication and Social Change, and Affiliate of Centre for Digital Cultures & Societies, and Affiliate of Research Centre in Creative Arts and Human Flourishing, at the University of Queensland, Australia. Dr Murray researches journalism in a time of rapid change. His research specialties include the role law and lawyers play in contemporary journalism, rural, regional and remote journalism, and international journalism with a focus on how South Korea and North Korea are covered and reported on.