The impact of Big Tech on democracy has never been more apparent, and regulators are starting to take heed. In the last few months, the European Union passed two of the most significant pieces of legislation aimed at regulating Big Tech companies and their impact on polarization. On the heels of these policy advances, Ashoka social entrepreneur Anna-Lena von Hodenberg — founder of HateAid — had a precedent-setting win of her own: a German court ruled that Facebook (now known as Meta) is accountable for removing “identical and core similar” illegal hate speech from their platform once detected. To find out more, Julia Kloiber of SuperrrLab spoke with Anna-Lena about the future of tech policy in Europe and what it could mean for democracy.
Julia Kloiber: Anna-Lena, you started HateAid in 2018 to support victims of digital violence. Can you tell us about the case of German politician Renate Künast and her quest for justice in the face of social media defamation?
Anna-Lena von Hodenberg: For seven years, a popular meme circulated on Facebook: a photo of Renate Künast above a quote attributed to her. The problem was, the quote was fake. The meme was reported several times by different people, including the politician herself. In some cases, Meta even added the message, “This has been fact-checked and is fake.” So the platform acknowledged that this was defamation, a criminal offense, but the meme was still there. At the time, Meta’s notice and takedown procedure required the victim to comb through the platform and report each meme one by one. But the meme was being shared so frequently that it would literally have taken a lifetime to do that.
Kloiber: How did you get Meta to address the problem? And what does the outcome mean for us all?
von Hodenberg: There are different instruments that you can use to regulate Big Tech, and one of them is litigation. So we brought a defamation suit against Meta. Big picture, this case showed how successful we can be in defending users’ rights in court, but what we specifically achieved with this verdict was that Meta is now obliged to proactively find and delete all “identical and core-similar” instances of this content. We proved that Meta has the technology to filter for all identical memes and delete them when they’re not, for example, used for journalistic content. They’re supposed to search for these proactively as well, deleting any content that is obviously illegal. Remember, they themselves had assessed that this content was illegal. If they do not comply, they will be fined 250,000 euros, and they have to pay Renate Künast 10,000 euros in damages.
More broadly, this verdict means that when you’re a victim of digital violence, the burden on you to fix the problem has been lifted. It’s now the platform’s responsibility to take down illegal content. Meta is appealing the verdict, of course, but we are ready to defend it through all instances.
Kloiber: On a personal note, how did you come to this work? What inspiration or life experience led to it?
von Hodenberg: When I was growing up in Germany, the legacy of the Holocaust was very present. A sense of responsibility to never let it happen again was ingrained in me at a young age. After the Trump election and Brexit, we saw people, especially right-wing extremists, learning how to manipulate discourse on the Internet — how to use digital violence, algorithms, and misinformation to silence people and drive them out of public discourse. Individuals with fake accounts could now actually change the course of elections. I was shocked when research first came out about this, and I thought of Nazi Germany, where propaganda was the key to gaining power. Here we go again: propaganda being used to silence people who speak up against it. I’m passionate because this is really one of the biggest threats to our democracies, and it’s time to defend them.
Kloiber: Do you have any advice on pushing back against Big Tech, for everyone reading?
von Hodenberg: It’s so important to discuss the problem, to make people think, “Yes, it is unjust that as an individual, the burden of addressing harassment is on me. Meanwhile, this profitable platform with so many resources is doing nothing.” The fact that we made this case publicly, that it was picked up in a lot of newspapers, and that people started talking about it, was probably as damaging for Meta as the verdict itself.
Kloiber: So what was Meta’s argument for why they should not be obliged to remove fraudulent or defamatory content?
von Hodenberg: They didn’t say that they were not obliged to take it down. They argued that it was technically difficult and also too expensive for them to distinguish between damaging content and, say, journalistic content. So we had an assessment done by a Berkeley professor who is very well known in the tech world, and he proved that Meta is, in fact, well able to find this content. And we argued in court that it is reasonable to also moderate and delete it.
Kloiber: Do these non-human solutions, like upload filters, have any use?
von Hodenberg: Platforms rely on them heavily. The result is a lot of arbitrary content decisions, because AI can only go so far. What our case showed was that enormously profitable platforms like Meta need to invest more in human content moderators.
Kloiber: The European Union recently passed the Digital Services Act (DSA), which the Financial Times called “groundbreaking rules to police big tech platforms.” What is groundbreaking about it? And where does the new Act fall short?
von Hodenberg: What’s groundbreaking is that platforms are now obliged to provide transparency about their algorithms, which we’ve only ever glimpsed through leaked documents. The DSA also obliges platforms to grant researchers and NGOs access to the platform to conduct research. Another positive is that users now have the right to appeal content decisions. Previously, if you flagged, say, a rape threat to the platform, and they refused to delete it, then your only recourse was the courts. With this new DSA provision, it’s a lot easier to appeal wrongful content decisions.
Disappointingly, the regulation of porn platforms was excluded from the DSA, even though we’re seeing more and more women becoming victims of image-based abuse. Uploaders on these platforms don’t even have to make an account, so if a woman wants to press charges, police can’t identify the perpetrator. We demanded that uploaders on adult platforms be required to verify their identity, but this didn’t make it in.
Kloiber: How do you balance the US concept of free speech and these regulations?
von Hodenberg: The problem with a totally permissive attitude to free speech is that when the loudest voices are free to intimidate, a majority of other users are silenced. Here in Germany, for example, over 50 percent of those aged 18-35 have said that they hesitate to voice their political opinion online because of the threat of digital violence. So what about their free speech? We need to look at it from both sides.
Kloiber: When you look to the next five years, what makes you feel optimistic?
von Hodenberg: In just three years, we’ve won two landmark cases. People are talking about the subject more, and the DSA is being implemented. People and governments are becoming aware that there are serious problems to be addressed. They’re starting to see that social media is not just a thing we use, but a digital public space that is ours to create.
This is part of a series on the future of Tech & Humanity.