14 March 2025
West Island academics Professor Daniel Angus and Professor Mark Andrejevic have recently written that the rise of “dark advertising” - personalised advertisements increasingly powered by artificial intelligence that evade public scrutiny - means that voters face a murky information landscape going into the federal election.
They worry that such dark influencing is already rife, and that, combined with Australia’s failure to enact truth-in-political-advertising legislation and big tech’s backtracking on fact-checking, it leaves voters vulnerable to ad-powered misinformation campaigns. And that’s not good for democracy. Tackling misinformation requires legislative action, international collaboration and continued pressure on platforms to open their systems to scrutiny.
The failure of American tech platforms to police patently false and misleading dark ads should serve as a red flag to the West Island that industry self-regulation will not solve the disinformation challenge. The authors say that political advertising plays a pivotal role in shaping elections, even as it becomes more opaque and increasingly rife with misinformation.
In the lead-up to the 2025 federal election, a significant volume of deceptive advertising and digital content has already surfaced. That’s not surprising, given the Australian Electoral Commission limits its oversight to the official campaign period, meaning false claims can proliferate freely before the campaign formally begins. At the heart of this challenge lies the evolution of digital political advertising.
Modern campaigns rely heavily on social media platforms, leveraging associative ad models that tap into users’ beliefs and interests to deliver targeted digital advertising. Unlike traditional media, where ads are visible and subject to stronger regulatory and market scrutiny, digital ads are often fleeting and hidden from public view. Recent AI developments make it easier and cheaper to create false and misleading political ads at scale, in multiple variations that are increasingly difficult to detect.
This ‘dark advertising’ creates an information asymmetry: the groups paying for the ads know who is being shown what and can control and shape how their messages are delivered, while voters, journalists and regulators cannot see what others are being shown. That leaves voters exposed to tailored messages that may distort reality.
For some years, West Island parties and candidates have attempted to get specific messages out to identifiable community groups, such as nurses, small business owners or tradespeople. But the use of AI and dark advertising means that essentially hidden messaging can be targeted to particular groups or individuals to selectively provide voters with very different views of the same candidate or party. Angus and Andrejevic give an example from the 2024 US presidential election, where a political action committee linked to X owner Elon Musk targeted Arab-American voters with the message that Kamala Harris was a diehard Israel ally, while simultaneously messaging Jewish voters that she was an avid supporter of Palestine.
This tactic has also been used on the West Island to smear activists calling for an end to the slaughter in Gaza, and it is increasingly being applied to topics including climate change, natural disasters and the cost of living, where online ad targeting lets political advertisers single out groups likely to be influenced by selective, misleading or false information.
Deliberate lies and fake “facts” in advertising, political or otherwise, are bad enough when they are out in the open, where at least they can be challenged. But when they are embedded in dark ads, often accompanied by realistic AI-generated images, they become a very real threat to our way of life and democratic freedoms. Attempts at any form of regulation have been actively resisted by international tech giants, who make billions of dollars through hosting unchecked dark ads. Their threats of legal action contributed to the federal government recently shelving a proposed truth-in-political-advertising bill, leaving voters vulnerable to misleading content that undermines democratic integrity.
Angus and Andrejevic say that the bill was never introduced to parliament and its future remains uncertain. The transparency tools provided by Meta (which owns Facebook and Instagram) and Google’s parent company Alphabet, such as ad libraries and “Why Am I Seeing This Ad?” explanations, also fall woefully short of enabling meaningful oversight. These tools reveal little about the algorithms that determine ad delivery or the audiences being targeted. They do include some demographic breakdowns, but say little about the combination of ads an individual user might have seen, or in what context. Ads have frequently employed AI-generated content, including fabricated audio of political figures, to mislead users and harvest personal information. Ad-driven technology firms such as Meta and Alphabet have backed away from previous initiatives to curb misinformation and deceptive advertising and enforce minimum standards.
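To illustrate how coarse those transparency tools are, the sketch below shows the kind of query a researcher can run against Meta’s public Ad Library API (the ads_archive endpoint). The endpoint, parameters and field names reflect Meta’s published documentation as best we understand it, but the API version, access token and search terms are illustrative assumptions, and the exact interface may differ.

```python
# Sketch: querying Meta's Ad Library API for political ads.
# Endpoint and field names follow Meta's public documentation; the API
# version, token and search terms below are illustrative placeholders.
import requests

ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"  # assumed: obtained via Meta's identity-verification process
API_VERSION = "v19.0"                   # assumed version

params = {
    "search_terms": "election",                       # hypothetical query
    "ad_reached_countries": '["AU"]',
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    # Coarse, aggregate fields are all the library exposes:
    "fields": "id,page_name,ad_creative_bodies,ad_delivery_start_time,"
              "demographic_distribution,delivery_by_region,impressions,spend",
    "access_token": ACCESS_TOKEN,
}

resp = requests.get(
    f"https://graph.facebook.com/{API_VERSION}/ads_archive", params=params
)
for ad in resp.json().get("data", []):
    # 'impressions' and 'spend' come back as broad ranges, and
    # 'demographic_distribution' is an age/gender percentage breakdown.
    # Nothing in the response explains how the delivery algorithm chose
    # audiences, or which combination of ads any individual user saw.
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Even a successful query of this kind returns only aggregate ranges and percentages; it cannot tell a researcher why a particular person was shown a particular ad, which is precisely the gap the authors highlight.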
On the West Island, we have seen what happened in the US, but fundamental differences in media consumption, political structure and culture, and regulatory frameworks mean we may not necessarily follow the same trajectory. The AEC does enforce specific rules on political advertising, particularly during official campaign periods, but oversight is weak outside these periods, meaning misleading content can circulate unchecked.
The media blackout period bans political ads on radio and TV for the three days before the federal election, but it does not apply to online advertising, meaning misleading ads can run right up to polling day with little time to identify or challenge them. And with ad-driven technology firms such as Meta and Alphabet having already backed away from their earlier initiatives to curb misinformation and deceptive advertising, it is unrealistic to expect platforms to proactively police content effectively on the West Island.
There may be some slight hope of redressing the situation. The authors report that independent computational tools have emerged to address these issues, including browser plugins and mobile apps that allow users to donate their ad data. During the 2022 election, the ADM+S Australian Ad Observatory project collected hundreds of thousands of advertisements, uncovering instances of undisclosed political ads. In the lead-up to the 2025 election, that project will rely on a new mobile advertising toolkit capable of detecting digital political advertising served on Facebook, Instagram and TikTok.
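The ADM+S toolkit itself is not described in detail here, so the following is only a toy sketch of what the detection step in an ad-donation pipeline might look like: a naive keyword screen over donated ad text that flags apparently political ads carrying no disclosure label. Every name, field and keyword in it is an illustrative assumption, not the project’s actual method.

```python
# Toy illustration only: a naive keyword screen over donated ad text.
# The real ADM+S toolkit will use different (and far richer) methods.
import re
from dataclasses import dataclass

POLITICAL_CUES = re.compile(
    r"\b(vote|election|ballot|authorised by|candidate|preference)\b",
    re.IGNORECASE,
)

@dataclass
class DonatedAd:
    platform: str                  # e.g. "facebook", "instagram", "tiktok"
    advertiser: str
    text: str
    disclosed_as_political: bool   # whether the platform labelled it political

def flag_possible_political_ads(ads):
    """Return donated ads that look political but carry no disclosure label."""
    return [
        ad for ad in ads
        if POLITICAL_CUES.search(ad.text) and not ad.disclosed_as_political
    ]

sample = [
    DonatedAd("facebook", "Example Page", "Put us first on your ballot this election", False),
    DonatedAd("tiktok", "Example Retailer", "Huge sale this weekend only", False),
]
print(flag_possible_political_ads(sample))
```

In practice, keyword matching alone would miss most of what the authors describe; detecting dark ads reliably needs advertiser metadata, authorisation statements, and image and audio analysis, which is exactly why independent data donation at scale matters.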
Without some of these solutions, platforms remain free to follow their economic incentive to pump the most sensational, controversial and attention-getting content into people’s news feeds, regardless of accuracy. This creates a fertile environment for misleading ads, not least because platforms have been given protection from liability. That is not an information system compatible with democracy.
It seems that dark ads are here to stay. Will the West Island have the tools and the courage to fight back?