• With spending on advertising topping $1 trillion a year worldwide, the United Nations highlighted the untapped power of major brands to shape the future of artificial intelligence (AI), warning that a failure to act could deepen a global information integrity crisis.
• In a new brief titled ‘Strengthening Information Integrity: Advertising, Artificial Intelligence and the Global Information Crisis’, the Department of Global Communications and the Conscious Advertising Network caution that unchecked AI adoption in advertising is accelerating risks across the whole digital information ecosystem.
• As AI tools become embedded in media buying and content generation, these risks are intensifying.
• AI is accelerating the spread of disinformation, hate speech and polarising content, while advertising revenue continues to fund online material – regardless of its quality or accuracy.
• At the same time, a lack of transparency in how AI-driven advertising systems work is raising concerns about fraud and inefficiency.
• The rise of AI-generated content also threatens the viability of independent journalism.
• The report warns that declining trust in digital environments is already undermining the effectiveness of ad campaigns.
• The brief stresses that these are not only societal concerns but direct business risks.
• As audiences lose trust in the platforms where ads appear, engagement drops and returns on investment decline.
What is the ‘attention economy’?
• The concept of the “attention economy” was first articulated in the late 1960s by Herbert A. Simon, who characterised the problem of information overload as an economic one.
• However, the concept has gained prominence with the rise of the Internet, which has made content (supply) abundant and immediately available, leaving attention as the limiting factor in the consumption of information.
• Advertising is the dominant business model of the digital information ecosystem, funding all kinds of content, from pluralistic media and high-quality entertainment to hate speech and disinformation.
• It is the core revenue model of platforms where billions of people access news, connect with others and form their understanding of the world.
• These platforms are run by algorithms optimised to maximise user attention and advertising exposure.
• Content that keeps people engaged generates revenue, whether or not it is accurate, reliable or safe.
• Because attention is scarce, technologies have increasingly been aimed at its strategic capture, aided by the systematic collection and analysis of personal data, which has become a profitable business model.
• The longer people look at their smartphone, television or other digital device, the more advertising can be shown to them, and the more revenue a digital platform generates.
• This dynamic has been described as the “attention economy”, in which human attention is the commodity being bought and sold.
• To maximise attention, digital platforms collect vast amounts of data from users, from their interests and relationships to behaviours and spending habits, to keep users engaged for as long as possible.
• This model is now being replicated and scaled up through AI assistants and chatbots.
• The scale at which attention is monetised is substantial.
• Global advertising spend reached an estimated $1.14 trillion in 2025.
• Advertising accounted for 75 per cent of Google’s $350 billion revenue in 2024, and 98.6 per cent of Meta’s $162.6 billion revenue in 2025.
• Advertising is, in effect, the core business model of the largest technology companies. Without it, their business models would not be viable.
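As a rough illustration, the revenue shares cited above translate into absolute figures with a few lines of arithmetic. The numbers come from the brief as quoted in the text; the variable names are illustrative only.

```python
# Back-of-the-envelope check of the advertising revenue figures cited above.
# All figures in USD billions, taken from the text; names are illustrative.

google_revenue_2024 = 350.0   # Google total revenue, 2024
google_ad_share = 0.75        # advertising share of revenue

meta_revenue = 162.6          # Meta total revenue, as cited
meta_ad_share = 0.986         # advertising share of revenue

google_ads = google_revenue_2024 * google_ad_share
meta_ads = meta_revenue * meta_ad_share

print(f"Google advertising revenue: ~${google_ads:.1f}B")  # ~$262.5B
print(f"Meta advertising revenue:   ~${meta_ads:.1f}B")    # ~$160.3B
```

In other words, advertising accounts for roughly $262.5 billion of Google's revenue and roughly $160.3 billion of Meta's, underscoring how central ad spending is to both companies.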
• While large platforms describe their approach to monetising attention as “topic agnostic”, this neutral-sounding framing obscures how AI recommendation systems function in practice.
• Content that triggers strong emotional reactions or is polarising tends to generate higher engagement.
• The “topic agnostic” label is a misnomer. In practice, this attention-monetising approach falls short of mitigating risks and, in a growing number of cases, has been shown to heighten the risk of online harms, especially for vulnerable and marginalised communities.