12: Penemue - AI against Hate Speech in Music
This is an automated AI transcript. Please forgive the mistakes!
Hello humans, I can promise you this episode will be a little bit different than
the others you have heard here before. Yes, artificial intelligence will be the hot
topic. Yes, music too, of course. But we will not dig deeper into how music is
made with an AI, but how it is protected by it. Not exactly the music, but the
musicians. Before we do that, and I will tell you more about that, I want to mention
some news that I read a few days ago. Sony Music took a step forward to protect
its artists against AI use. Sony has sent a letter to 700 firms around
the world, Google, Microsoft and OpenAI included, demanding to know whether they have used
its songs to develop artificial intelligence systems. What sounds a little bit funny
to my ears: they have given a deadline to respond, and they say they will enforce
their copyright to the full extent permitted by applicable law, such as the EU's AI
Act, for example, which has just passed. I would actually not call this a letter,
but rather a threat. Although I often do not agree with major labels, I think
it is a good step towards a solution concerning artists who are currently not paid
for their contribution to data sets. But if, because of this letter,
and that is important to me, in the end not only the big stars or the big labels
profit from being in data sets, but also the artists who are not well known
and who contribute with their music, that is a really good thing. Story to be continued.
Let's start this little show.
This is The Iliac Suite, a podcast on AI-driven music. Join me as we dive into
the ever-evolving world of AI and music, where algorithms become the composers and
machines become the virtuosos. Yes, this music and text were written by a computer,
and I am not real. But I am, and my name is Dennis Kastrup.
I remember the days when I said that one day we would be able to create a whole song
with an AI by just prompting some words. I was sure it would happen someday,
but I never thought it would be possible so soon. We talked about this here in
the Iliac Suite before, for example about Suno. And now there is Udio as well. What
will come next? One thing I realized: with every new step, I kept on saying how
insane it is that an AI is able to do it with just some prompts. The same thing is
happening for videos. I think we can now all agree: everything is possible, nothing
is insane anymore, and we have to accept that. Although one thing that is sadly
insane is the fact that online there exists a world full of hate speech these days.
And here comes the connection I make with AI and music. Some weeks ago, on a
rooftop on a sunny day at South by Southwest, I met the German Sarah Egetemayer.
She told me about her startup Penemue, an artificial intelligence that scans comments
on the internet. It works like this: once the AI finds inappropriate words, it
marks and filters them. Unfortunately, this hate speech happens a lot under
music-related posts. I found that really interesting and important to talk about, as we
experience more and more hate online. So this time in the Iliac Suite,
Sarah Egetemayer is my guest, and I'll leave it to her to introduce herself. - My
background is actually a little bit of business, but then I switched to philosophy,
because I thought I should do something properly. Many people think it's the other
way around, but I found: okay, I want to understand more things. I studied cultural
studies, philosophy, linguistics, different things. And I worked in the field of
cultural management, so to say, for quite a while. But that has not led me to where
I am right now. After that period I decided to found my own organization.
It was a social business dedicated to gender equality and female empowerment.
And I worked in that field for some years, and there I realized that especially
women, but not only women, are kind of afraid to become a public figure or to be
more present online. And that was the first touch point for me to really think
about how vulnerable you are when you're exposed on social media.
And especially if you want to be kind of a role model to others, you then
realize: okay, but that also comes with some risks. You are exposed,
people might put hate speech on you. And that was the first time I really
got to the topic of hate speech, which is the connection now to Penemue, where we
protect people from online hate and violence.
(upbeat music)
Before we get into how exactly they do that at Penemue, Sarah tells us how she met
her co-founder Jonas Navid Merabanyan al-Nemri.
- I met Jonas, my co-founder, because I asked him to be a speaker at one of the
events of that organization, for a more equal startup ecosystem and female founders.
That was what we were pushing. And I asked him to be a speaker and talk about how
you can combine being a startup founder and having a family, for example. And after
that, we talked about: yeah, what are you focusing on right now? And he was at
that time focusing on a project on hate speech with the biggest public broadcaster
in Germany. And as I said, I was thinking about hate speech from a slightly
different angle, more from that women's perspective. And I found it so
interesting that public broadcasters also have that problem of hate speech on
their social channels. In the beginning it was very open
how we would collaborate on that. But then after a while we realized: okay, we
really want to work on that topic together. And in the end I stepped back from my
former organization, and there is a great team running it right now. And then we
co-founded Penemue together, to really focus on that field of hate speech
protection. He was working in the field of AI before, so it was very clear for
us that we would use AI to do what we do right now.
I guess we unfortunately all know what Sarah is talking about. You find hate online
everywhere. And since Elon Musk owns Twitter, that hate sails pretty often under the
flag of X these days. Not sure how you deal with that, but I protect myself more and
more these days by not going down these rabbit holes anymore. I put my phone away and
look at the trees and birds outside to calm down. Sounds really silly, but seriously, it
works. I'm happy about the fact that I personally have not experienced hate directly
under my accounts; so far, no shitstorm has reached me. But that is different
when you are more exposed to the public, especially as a musician. What really
shocked me when talking to Sarah was how much hate exists even in the field of
music. Nearly every publication is now accompanied by hate, I've been told. For me it
was until recently clear that mostly politicians face these comments. But
from music? Well, yes. - We have musicians, obviously: artists who are exposed.
They are public figures. And through that, they also need to communicate on social
media, because that's just part of the job at the moment. I don't know if there
are many musicians out there who don't use this kind of communication. So they are
vulnerable in a way, because they are exposed on these channels, where they can be
approached quite easily by just anyone. And you also want to interact with your
community. You want to be there and let people communicate to you too.
But at the same time, you have no protection from the fact that there can be people
who don't want to say nice things to you, but really, really bad ones. And that's what
we hear a lot: musicians say that people are writing them not only
comments, but also very bad messages, like in the direct messages of social media
networks. And it affects them. So there are some artists who say they don't read
it, or they have found a way to live with it. And some people say:
okay, you just have to be, yeah, built in a way that it doesn't bother you, or
something like that. But I think this is not the right way to look at it,
because then not everyone could be a musician or an artist, if you would need to
have that "yeah, I don't care" attitude.
I think it's a better way to think about it: to give these people protection and
make them feel safe when they communicate online. And it's not only on their social
media channels where they post something; it's also when they livestream, for example.
It's a different feeling if you are giving a live concert and you know there is
hate speech going on in the comments. It's not a good feeling. And we know
from artists that even if you get ten good comments that really appreciate your
music and your artistic work and everything, and then there's one really bad comment,
that's the one you think about at night. That's the one that, yeah, bothers you
again and again and again, and you criticize yourself, or you think you're not good
enough, and things like that. And especially when it comes to racism or sexism, or
even threats. Like, we have one artist that we're working with, she's really getting
threats, like someone tells her, he or she, I don't know, that they want to kill her
and her whole family and everything. It's not so easy to just,
yeah, not think about that. So, yeah, it can really affect your mental health,
but also your artistic work, because you are exposed to that kind of hate and
violence, basically.
Of course it does affect you. I mean, on the internet most of the
time we do not see the faces behind the comments. But just imagine a room full of
people facing you, and there you see two or three people who shout at you, and maybe
make signs about how ugly you are, and even that they want to kill you. How would you
react? It is impossible to ignore that. And usually what you should do is kick them
out of your room, tell them that they will never be allowed in your presence
anymore. That would be a normal reaction offline. Unfortunately, it is not that easy
online, because people can come back under different names, different pseudonyms. But
what can you do on the internet to protect yourself? You can filter comments beforehand.
And that is what the AI from Penemue does. - The first step is to find the comments,
or to read through the comments, that are on the profiles that we protect.
So this could be a public figure, like a person, or it could also be a music
label, an organization, a company, et cetera. And then we kind of read through all the
comments and the posts. And we look for explicit hate, which is quite
simple. Yeah, you don't need AI for that, you can just use a blacklist, but to
make it quick, we also do that. So we look for explicit harassment, discrimination,
threats, et cetera. And then we also have natural language understanding implemented,
so we understand sentences, which is very important, because hate doesn't always come
very explicitly. It can also be using stereotypes, for example antisemitic sayings, or
being racist when you just say to someone: go back to where you came from, or: go back
to Africa. That's, yeah, something you read a lot, or also monkey pictures,
for example, as a racist expression. So we need the understanding of pictures,
we need the understanding of semantics, like what does the sentence mean, and not
only where the bad words are, so to say. And then we also have the phenomenon of coded
language, like really racist things that you only know when you are in a scene
where you know that this language code is being used to express something that's
actually illegal. That's something that we always need to evolve and adapt, because
language changes very quickly, and these codes change a lot too.
But that's something where we put all our expertise in. Also, we look for illegal
content. And that's also something that can change. I mean, we have the law that is
the base of it, but then we also look at: okay, what has been decided
recently, have there been changes?
And then, yeah, we basically categorize. So we tell: is it hate, yes or no? Is it
toxic? Because those are kind of different categories. Hate speech is when
you're being threatened because of belonging to a certain group, like because of your
gender, because of your race, because of your religion or something like that. And it
could also just be offensive language, which is also something you might not want to
have. And then we also classify if it might be illegal or not.
And then we suggest what you might want to do with that kind of comment: would
you want to delete it, or would you even report it to the police? So that's,
in very simple terms, what we do. [MUSIC]
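For the technically curious: the first, rule-based layer Sarah describes, a blacklist for explicit terms plus a category and a suggested action per comment, could be sketched roughly like this. This is my own illustrative Python, not Penemue's actual code, and all the terms, labels and actions are invented examples; the real system layers natural language understanding, image analysis and coded-language detection on top of such simple matching.

```python
# Illustrative sketch of a rule-based first pass over comments.
# Terms, labels and actions are invented examples, not Penemue's real rules.

EXPLICIT_TERMS = {"kill you", "go back to where you came from"}  # toy blacklist
ILLEGAL_TERMS = {"kill you"}  # subset that might be reportable as a threat

def classify_comment(text: str) -> dict:
    """Return hate/illegal flags and a suggested moderation action."""
    lowered = text.lower()
    hate = any(term in lowered for term in EXPLICIT_TERMS)
    illegal = any(term in lowered for term in ILLEGAL_TERMS)
    if illegal:
        action = "report"  # keep a screenshot and the link as proof
    elif hate:
        action = "delete"  # or put on a pre-moderation stack for review
    else:
        action = "keep"
    return {"hate": hate, "illegal": illegal, "action": action}

print(classify_comment("I will kill you"))
print(classify_comment("Great concert, loved it!"))
```

A blacklist like this only catches explicit wording. As Sarah points out, stereotypes, images and coded language need semantic models, which is where the AI part actually comes in.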
In the case of reporting to the police, Penemue has just started a new service.
They work together with the German Office of Justice, so all these illegal wordings
will now be reported directly to the office. Concerning all the other comments,
for me the question arises: how do they do that deleting? I mean, there are
sometimes also live events happening, like live streaming of concerts. When is the
moment they delete comments? - So we can do it basically in real time. We have a
little delay of maybe one or two seconds. We integrate via the API,
so we integrate the service that you use, like Instagram or YouTube or whatever
it is, into our tool. And we call our tool the digital guardian angel. So you
have everything that you use, like your social channels, integrated into our tool. And
when you first set it up, you tell us if you want us, or our AI, to delete
things, or if you rather want to use it as a pre-moderation, so that we just give
suggestions and make, like, yeah, a stack and say: okay, we suggest that you
delete all these comments. And then all the comments where we think they should be
deleted are put into one stack.
Or we can also stack the ones that we think should be reported to the police. And
then you can go through and decide, or most people might also have someone who helps
them in moderating, and then these people can do that. But you can also set the
filter directly, so that we would delete it. And then we would make screenshots
of the comments that we would report to the police, because you need them as proof,
and you also need the link. I'm pretty sure some of you are
already asking this question: wait, they are deleting comments? They are kind of
controlling what is to be said? Does this not sound like censorship?
Sarah has your response. - Yeah, it's very simple. We are not allowed to say
everything that we want. That's why we have laws, and we can't do anything that we
want, because there's a saying that my freedom ends where the freedom of someone else
begins. And I think that's also in the freedom of speech: that there are rules, and
it's not us who define these rules. We just try to help people,
yeah, to execute these rules, basically. So we don't define anything new, not at all.
We just use the law that is already there and analyze what's online according to
that law. So we are not in any way a moral instance or anything like that.
You can also imagine we're like a camera, filming what's happening.
And if there is a crime happening, we just say: okay, here's a crime. So we don't
define what is a crime. We just apply the rules that are already there to the
online space. And I think many people don't know that things are illegal online.
Some people, for whatever reason, think that they can do really anything, because
it's online, because they're anonymous, I don't know. But that's not the case.
So I can't tell you: okay, I'm going to kill you, and then it's fine. I can't
say that in real life, but I also can't say that online. So I think it's
rather, yeah, reminding ourselves that there are already rules in place,
but they're just not being followed so well, and that's what we try to help with
in that case. And these rules concern, of course, also the forums and comment sections under
videos, for example. Let's take YouTube. Why don't they already control the comment
section and filter hate speech? Why don't they really take the responsibility? - I can
just guess, I don't know in the end. And we had some conversations when
we were in Texas at South by Southwest, where we also met, that they don't want to be
the big censor. So they want to have independent bodies who do the moderation for them,
who are explicitly trained and have a lot of knowledge about hate speech
and about the actual law in that field, and they want to, yeah,
basically externalize that part. There are other people who say: well, they could
do it, but maybe they profit from the interactions that we all know are likely to
be very high when there are a lot of, yeah, maybe not so kind comments.
So, yeah, in the end, we don't know why.
And I would like, yeah, the world to be like that, that they would just take
care of it. But as they don't do it, we thought: okay, we can't wait for that to
happen, because other people are already out there right now and don't get the
protection that they deserve. So that's why we thought: okay, we will put all our
knowledge from AI and everything into that, and build the best language models we can
to protect these people.
I have talked about it many times here in the Iliac Suite: the problem of data
sets in artificial intelligences. As also mentioned in the beginning of this episode,
musicians should know if an AI was trained on their music, and if so,
they should get paid for that. Not sure if we will sort that out in the future,
but I'm still crossing my fingers that one day there will be a solution that fits
everybody. In the case of the AI from Penemue, another question comes up for me.
I mean, to train the AI to find all this hate speech, they must gather kind of all
the hate on the internet to feed their AI, right? So where do they get that
from? - We use free data sets that are out there, that we just find. We sometimes
also buy data sets. Then we have some customers who allow us to use their data,
because they want to make our algorithms better. The annotation we do on our own.
That's also where we think our magic lies: we have a lot of knowledge
about anti-discrimination and online harassment and everything. So we annotate the
data ourselves; we use the raw data that we can get for free, or we buy
it. It's very important for us that we don't just scrape the whole internet, because
it's simply not legal.
So we have different data sets. Sometimes we also collaborate with universities who
have worked on a specific field, for example antisemitism; there's a lot of
very good research that has been done in that field. And then sometimes we can
collaborate on that. Yeah, and then we do the annotation ourselves, like in our
team. And of course, our customers also help us to retrain.
They can tell us if they like the decision that our algorithm has made, or if
they want the moderation to work differently. And we implement this feedback
again and again.
To sum it up, Penemue kind of gathers all the dark stuff of the internet
on their servers. They have all the hate on their servers. And if you work with
that every day, you also must see or read a lot of hate. How do you deal with that?
Does it frustrate you? Does it change your belief in humanity, maybe? - I'm okay
with it. I have kind of a distance. I'm just happy that I know we can help people
with that. And it also, I think, gives you a sense of control, or that's at least
what I feel, because I know we work together with the police. So just this morning,
something came in: someone sent me several stories from last night, where a woman
that's very exposed at the moment gets really awful comments. And I don't really go
into that emotion then, like: oh my God, how bad is that? I rather feel like: okay,
what are we doing about it right now? And then you come into action mode, and you
want to help this person, you want to, yeah, have a solution for her, because it's
not just a case of moderating it away. This is a bit special. So we're doing it
one to one; we're helping her more than we usually would. She's not just an app user,
so to say. So we sometimes also have a personal conversation with these people and
see if they maybe need more support. And that's where we also work together
with nonprofit organizations who have psychologists in their teams, to support
people if they're really suffering from it. And I actually feel very empowered,
because I know there are many things we are doing, and that actually gives a kind
of good feeling and a sense of control. I think that's very important,
because if you feel helpless, like you can't do anything and you're just so
vulnerable, I think that's what makes it really bad.
In countries such as the United States, France and Poland, there are already similar
companies that use artificial intelligence to combat hate comments. However, Penemue
is currently unique in Germany. It offers the service in 89 languages. The startup
is thus making an important contribution to artistic diversity in pop culture,
the well-being of musicians, and... - I think it's very important to understand that
if people are hating a specific person, they mostly don't just mean that person.
They are projecting onto people, and what they actually do is silence
people. And when we look at who is being attacked the most, it's mostly vulnerable
people, people who are already discriminated against a lot in real life, and then
this happens again online. It's Black people, or it's different genders that are not
like the mainstream. And that's why we know that it's actually a threat to
democracy, because people are becoming more and more afraid to say their opinion online,
or to be a public person at all. And those who are hating are getting more and
more space online. And that's what we want to, yeah, actually end, or at least reduce:
to make online spaces, which are the most important spaces that we have for discussion,
for democracy basically, safe again for all people,
especially for those who are attacked the most.
Thanks, Sarah Egetemayer, for talking to me about Penemue and their AI that filters
hate speech online. I think you're doing a very important job. To all the listeners:
have you experienced hate speech? Can you tell me something about this? Have you
experienced it as a musician? Let me know. You can write me; you will find my email
in the notes to this episode. Check out the service of Penemue. It is worth it.
That was the new episode of the Iliac Suite.
Thanks for listening, humans. Take care and behave.
(upbeat music)