Hi, everybody, this is Ginevra with Mosaico, the podcast Putting Pieces Together. Today we're going to talk about artificial intelligence, and more specifically about some of the human rights issues it raises and the regulations that could mitigate those risks. Before I start listing the aspects of artificial intelligence that I find problematic, I should of course mention that it also brings a lot of advantages, for governments and for the public alike. Our lives will certainly be made easier by artificial intelligence. The problem is that AI can reinforce forms of discrimination that already exist in society.

As everybody knows at this point, and if you're new to the channel I'll tell you now, I'm very bad at technology, mathematics, and everything that requires real ability with numbers. But criminal law and human rights are my fields of expertise, so I think my sermon is worth listening to. From that point of view, AI is particularly problematic in the criminal justice system, in the UK of course, but also in Europe and the rest of the world, because in criminal justice there are already minorities and vulnerable groups subject to disproportionate accusations and stereotypes. If it isn't regulated well, artificial intelligence can make those situations even worse.

So what can the legal community do about AI discrimination? Honestly, it's really hard to answer this question. On one side, we have technology that is moving very fast: it is developing, becoming more sophisticated and more capable every day, and frankly it is also helping people a lot. On the other side, there is the law, which always takes a long time to catch up with what is actually happening in the world. So how can we even try to regulate the potential harms that AI can cause?

In June 2023, the European Parliament voted favorably on its position on the proposed Artificial Intelligence Act. The act is really important because it pioneers the attempt to take the discrimination risks of artificial intelligence into account. I say it pioneers this not because the European Union is great or free of its own issues, but because there is very little legislation anywhere in the world that addresses artificial intelligence and discrimination risks specifically. At least we have the European Artificial Intelligence Act. Of course, approving it isn't sufficient: a lot of further regulation and dialogue will be needed to implement it transparently and equally across the European Union. To be honest, in my opinion some of these technologies should be banned outright, but I'm not sure how feasible that is at this point, so we should really focus on regulating specific conducts.

Having said that, I want to dive into the discriminatory technologies that worry me the most, and I hope they will worry you too after listening to this podcast.

Number one: biometric surveillance. Very important. Biometric facial recognition is different from regular video surveillance because it is much more complex and does a very different thing, and if it were up to me, it would really be taken away. A cautionary example of discriminatory police surveillance comes from the Metropolitan Police in the United Kingdom.
It was called the Gangs Matrix, and it was launched by the police in 2012 as a database of people suspected of being gang members in London. What happened over the years, though, is that individuals were put into the Matrix arbitrarily, and the majority of the people who ended up in it belonged to Black communities or other vulnerable communities. The point is that once you are in a system like that, the police share your information with other agencies, such as job centers, housing associations, and educational institutions, so ending up in a system like that can ruin your whole life. What happened in the UK is that there was a legal case, and after a lot of litigation the Metropolitan Police was eventually forced to admit that the Gangs Matrix was indeed unlawful. Not only that: they admitted that Black people were disproportionately represented in the Matrix, making up around 80% of the people it captured, and that the Matrix breached the right to private and family life under Article 8 of the European Convention on Human Rights, incorporated into UK law through the Human Rights Act.

But obviously biometric facial recognition is not only an issue in the United Kingdom. It is a problem especially in places such as the United States, which reportedly uses very biased algorithms, and in authoritarian regimes such as Russia or Iran, which use this kind of technology to identify activists and then punish them in ways that are not necessarily legal. The State of Israel has also used this technology, in the Blue Wolf operation: the IDF, the Israel Defense Forces, used facial recognition in the occupied Palestinian territories to monitor Palestinians around the clock, not only people considered dangerous but also civilians, and collected a database that included pictures and very personal data of children and elderly people. If it were up to me, I would ban all of this everywhere in the world. But of course there is a, quote unquote, security rationale, so it will very likely never be banned. What I am really against is facial recognition in real time: cameras in the streets scanning your face 24/7, on the basis of which someone may or may not decide that you are dangerous.

Number two of the problematic aspects of AI: social scoring. The most famous system is the one implemented by China, though honestly I don't know much about it, and neither do most people, precisely because it is implemented by China, which is a borderline authoritarian state. But there are many such systems around. Basically, the way it works is that it assigns points to people based on things they have done in the past, such as paying taxes on time or paying bills on time, which I'm not really good at, and based on the total score a person can or cannot access welfare services. It goes without saying that this is very discriminatory, and not just because I'm bad at paying bills on time: probably any of us, for any reason, has been in some financial difficulty at some point in our lives, and it wouldn't be fair to be pushed into a very low social score because of that, would it?
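To make that concrete for anyone reading this transcript, here is a minimal, purely hypothetical sketch of how such a points-based system could work. Every factor, weight, and threshold below is my own invention for illustration; this is not how China's system, or any real system, is actually documented to work.

```python
# Toy illustration of a social scoring scheme. All factors, weights,
# and thresholds here are invented for this example, not taken from
# any real system.

def social_score(paid_taxes_on_time: bool,
                 paid_bills_on_time: bool,
                 missed_payments: int) -> int:
    """Compute a toy 'citizen score' from a few financial signals."""
    score = 100
    if paid_taxes_on_time:
        score += 20
    if paid_bills_on_time:
        score += 20
    score -= 10 * missed_payments  # every missed payment costs 10 points
    return score

def can_access_welfare(score: int, threshold: int = 110) -> bool:
    """Gate a public service on the score. This hard cutoff is where
    the discrimination bites: a temporary rough patch locks you out."""
    return score >= threshold

# Someone who hit financial difficulty twice drops below the bar...
print(can_access_welfare(social_score(True, False, 2)))  # False
# ...while someone who never slipped keeps full access.
print(can_access_welfare(social_score(True, True, 0)))   # True
```

Even in this toy version you can see the problem: the hard threshold converts an ordinary, temporary financial difficulty into a lasting exclusion from services, which is exactly the unfairness I'm describing.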
And then, number three, we have my favorite: predictive policing. Have you seen Minority Report? I'm very bad with movies and even I have seen it, so let me summarize in case you haven't. It is a movie in which Tom Cruise plays a police officer, the head of a unit called PreCrime, in a future where the police know whether a person will commit a crime and prevent it by arresting the person before it even happens. Obviously, at some point the system points at Tom Cruise himself, and he is so surprised, he doesn't understand why, so he runs away and the police chase him by very illegal means throughout the whole film. Do check it out, because it may very well be what's already happening to us: we already have algorithms that calculate our risk of committing crimes, and unfortunately they are very widely used at the immigration level.

Now that I've told you about this, I want you to play a game with me. An NGO called Fair Trials created a quiz based on predictive policing algorithms. It's really well done, darkly funny even, and we can all take the test to see how we would be profiled. I personally took the test and ended up in the medium-to-high risk category. But why am I at risk? Well, first of all, I have an immigration background: I traveled around to study and then based myself in the United Kingdom. Now imagine the people who have at least one foreign parent, or two. According to this algorithm, they may be profiled as high risk. I was probably also rated medium-high because I have contact with law enforcement and the police. And why do I have contact with law enforcement and the police? Well, I'm trying to qualify as a criminal lawyer, so I deal with the police a lot. But it doesn't even take that: an ID check is sufficient. The only requirement is that you had some interaction with the police; it doesn't matter what kind. And if you belong to a religious minority, or you're an activist, well, you're going to have a hard time.

So I'd love for you to play this little game with me. I'm going to leave the link to the test below; please do it, I invite you all to do it, and let me know your results so we can compare them together. We're laughing here, but this is actually how predictive policing works. And if these mechanisms can't be stopped, I think they should at least be really, really well regulated to avoid discrimination. As I mentioned, there are talks now to bring the European law into force, and the objective of that law should be transparency; we all hope that happens.

I'm personally really worried that artificial intelligence reflects our society exactly the way it is right now, and so we need very precise regulations so that artificial intelligence doesn't make our society even worse. There is a lot more to be said, and I honestly could go on until tomorrow, but I think nobody would listen to this podcast if I did that, so I'm going to stop here. Please keep an eye on artificial intelligence, and not only to have ChatGPT or whatever it is write your essays or your other things. Be mindful of the gray areas it may bring up, because our democratic systems, which are already undergoing a crisis, may be completely subverted. And be careful: with these technologies in place, without regulations, you may be predicted as a criminal too. Thank you for listening. I'll talk to you soon.