Twenty years ago, digital ads were little more than online billboards — pop-ups that didn’t know who was seeing the ad, or why.
But today’s AI-powered digital advertisements are exponentially more sophisticated. The technology behind these ads can profile consumers and segment them into precise audiences, or make assumptions that lead to discrimination. There are even plans to serve ads based on the emotions detected on people’s faces as they sit in their own homes.
Advertising is the dominant business model financing our digital spaces, giving consumers around the world “free” access to products and services – social media platforms being the prime example. It is a good deal for us as advertisers, and a highly lucrative one for platforms.
But there are grave harms, and consumers bear the brunt of them.
In my new report, the outcome of a 10-month Mozilla fellowship programme, I identify seven major threats that AI-powered advertising presents to consumers, from discrimination to misinformation. Many of these harms have been fundamentally changed or exacerbated by the addition of machine learning and emotion recognition to ad creation and targeting, particularly in countries without data protection legislation.
These seven key harms are:
As an industry, we need to tackle these harms and ensure that our digital advertising practices genuinely align with our brand values and promises around ethics, equality and sustainability.
We need to be proactive about how we think about these harms. We can’t just keep playing whack-a-mole with problems as they arise. We all want to use these amazing new technologies, but as we adopt them, we need to engage with experts beyond our own industry to ensure we’re not creating further problems.
I believe that, alongside the absence of data protection legislation in many markets, a lack of cross-sector collaboration is also damaging progress. We need to create cross-disciplinary, mediated forums, comprising digital rights groups, consumer protection experts, funders, publishers and advertisers.
As co-chair of The Conscious Advertising Network for the past two and a half years, I have watched these kinds of forums lead to brilliant results on issues ranging from hate speech and misinformation to advertising fraud. The best solutions are created when NGOs or campaigners work together with advertisers and platforms to identify and suggest solutions to societal issues.
These forums need to ensure ethics by design in AI-powered advertising, identify harms as they evolve, and create new initiatives to solve them.
The four main areas where we need greater collaboration and dedicated forums are:
Only through collaboration can these issues be resolved for consumers, society and the environment. We stand on the brink of an AI revolution, where smart cities, augmented reality, facial recognition, voice-controlled devices and machine learning are shaping both our online and physical worlds. We must act now to ensure we don’t carry harmful practices from our online world into our offline one.
Advertisers must work together with consumer protection and digital rights groups to define issues and build solutions that benefit society, not simply the advertising industry. Together, we can design an online world that benefits us all.
----
by Harriet Kingaby, Co-Chair of The Conscious Advertising Network
This article was provided by New Digital Age, which is published by Bluestripe Media and covers the latest news, insight, opinion and research on all aspects of digital media and marketing. Its aim is to be an outlet for knowledge and inspiration about the companies, technologies and people powering the next wave of disruption in our industry. To view the original article, please visit here.
Digital3PC.com is an independent platform that brings together the best minds from tech, government, research, and academia to shape the future of cybersecurity policy and offer best practice solutions when responding to cyber threats. The most common access point for malware spread, data breaches, IP theft, election meddling, disinformation campaigns, and cyberwarfare is malicious third-party code (3PC) that makes its way into our websites, apps, and IoT devices. The compromise of the digital ecosystem erodes user trust and the credibility of media organizations, and undermines the integrity of our democracy, economy, and public safety.