The 7 Growing Threats From AI-powered Advertising & How Adtech Must Change
Twenty years ago, digital ads were little more than online billboards — pop-ups that didn’t know who was seeing the ad, or why.

But today’s AI-powered digital advertisements are exponentially more sophisticated. The technology behind these ads can profile consumers and segment them into precise audiences, or make assumptions that cause discrimination. There are even plans to serve ads based on the emotions detected on people’s faces, as they sit in their own homes.

Advertising is the dominant business model financing our digital spaces, giving consumers around the world “free” access to products and services – social media platforms being the best example. This is an effective deal for us as advertisers, and highly lucrative for platforms.

But there are grave harms, and consumers bear the brunt of them.

In my new report, the outcome of a 10-month Mozilla fellowship programme, I identify seven major threats that AI-powered advertising presents to consumers, from discrimination to misinformation. Many of these harms have been fundamentally changed or exacerbated by the addition of machine learning and emotion recognition to ad creation and targeting, particularly in countries without data protection legislation.

These seven key harms are:

  1. Excessive data collection. Consumers are totally passive actors in adtech systems. They are something to be profiled and targeted, and are not given meaningful choices about how much data they would like to hand over, to whom, and for what.
  2. Discrimination. Personalisation is restricting choice and leading to discrimination, while advertising incentivises producing content for the most profitable communities. Algorithmic personalisation inherently restricts the products, services and content we see.
  3. Harm to the vulnerable. The advertising ecosystem is contributing to the manipulation of and harm to vulnerable people, including encouraging harmful consumption. Almost half of the world’s population are yet to come online, and when they do, most will be immediately exposed to sophisticated advertising.
  4. Online scams and misinformation. Fake news and misinformation have a lucrative business model via advertising, which favours content which garners a reaction. Social media sites, where disinformation can spread, have ad-based business models, and “addictive” interventions are designed to keep consumers on the sites for longer, enabling platforms to serve more ads.
  5. Limited agency. Consent mechanisms for advertising under GDPR and CCPA are poorly designed, and often nudge consumers into making choices which favour advertisers. Privacy policies and other terms and conditions are overly long, sometimes non-compliant, and frequently fall short of educating consumers.
  6. Environmental harm. Training one AI model can produce 300,000 kilograms of carbon dioxide emissions, roughly the equivalent of 125 round-trip flights from New York to Beijing. Plus, failure to tackle ad fraud and fake traffic comes with a huge environmental cost of its own.
  7. Hate speech. At least $235 million in revenue is generated annually from ads running on extremist and disinformation websites, often inadvertently including well-known brands. Far-right commentators and other hate preachers are continuing to make money through digital advertising on the open web or through platforms such as YouTube — which in turn radicalises young people.

As an industry, we need to tackle these harms and ensure that our digital advertising practices and activities actually match up with our brand values and promises around ethics, equality and sustainability.

We need to be proactive about how we think about these harms. We can’t just keep playing whack-a-mole with problems as they arise. We all want to use these amazing new technologies, but as we adopt them, we need to reach out and engage to ensure we’re not creating further problems.

I believe that, alongside the absence of data protection legislation in many markets, a lack of cross-sector collaboration is also damaging progress. We need to create cross-disciplinary, mediated forums, comprising digital rights groups, consumer protection experts, funders, publishers and advertisers.

As co-chair of The Conscious Advertising Network for the past two and a half years, I have watched these kinds of forums lead to brilliant results on issues ranging from hate speech and misinformation to advertising fraud. The best solutions are created when NGOs or campaigners work together with advertisers and platforms to identify and suggest solutions to societal issues.

These forums need to ensure there is ethics by design in AI-powered advertising, identify harms as they evolve, and create new initiatives to solve them.

Four main areas where we need greater collaboration are:

  1. Supply chain accountability, ensuring advertisers are able to take responsibility for their digital supply chains in the same way as their physical ones.
  2. Funding a healthy internet, directing advertising budgets to support diverse voices, quality content, and accountable platforms.
  3. Maintaining consumer protection and human rights, using these as core design principles for new AI technologies.
  4. Proactive AI stewardship, using AI sparingly, tracking and acting on the emergence of harms in real time.

Only through collaboration can these issues be resolved for consumers, society and the environment. We stand on the brink of an AI revolution, where smart cities, augmented reality, facial recognition, voice-controlled devices and machine learning are shaping both our online and physical worlds. We must act now to ensure we don’t take harmful practices from our online world into our offline ones.

Advertisers must work together with consumer protection and digital rights groups to define issues and build solutions that benefit society, not simply the advertising industry. Together, we can design an online world that benefits us all.

----

by Harriet Kingaby, Co-Chair of The Conscious Advertising Network


This article has been provided by New Digital Age, published by Bluestripe Media, which covers the latest news, insight, opinion and research on all aspects of digital media and marketing. Its aim is to be an outlet for knowledge and inspiration about the companies, technologies and people powering the next wave of disruption in our industry. To view the original article, please visit here.
