"The Data Diva" Talks Privacy Podcast

The Data Diva E84 - Enrico Panai Ph.D. and Debbie Reynolds

June 14, 2022 Season 2 Episode 84
"The Data Diva" Talks Privacy Podcast
The Data Diva E84 - Enrico Panai Ph.D. and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds “The Data Diva” talks to Enrico Panai, Ph.D., Data and AI Ethicist, Éthicien du numérique, from France. We discuss his work on AI and technology, his work on AI ethics with ForHumanity, AI uses of nudge technologies, the need for ethics in AI, audits of AI systems, how ethics and regulation are related to AI, unexpected results of AI, Real-Time Bidding and AI transparency protecting the privacy of individuals, how people can be influenced by AI systems, the UK Age Appropriate Design Code, the danger and challenge of inferences made about you that are not subject to regulation, and his hope for Data Privacy in the future.


59:49

SUMMARY KEYWORDS

data, privacy, people, work, nudge, inference, technology, ethical, important, understand, AI, happening, treat, Sardinia, world, regulation, philosophy, related, ethics, create

SPEAKERS

Debbie Reynolds, Enrico Panai


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.


Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is “The Data Diva” Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with the information that businesses need to know now. I have a special guest from France on the show, Enrico Panai. He is an Éthicien du Numérique, a digital ethicist, and founder of his own consultancy. He's also an expert in AI-enhanced nudging technologies. Also, he's a Ph.D. researcher, AI standards expert, and editor. So he's a very multifaceted person related to technology and privacy. And I'm happy to have him on the show. Hello, Enrico.


Enrico Panai  00:59

Hi.


Debbie Reynolds  01:02

Yeah, this is great. This is great to have you on the show. So we've been connected on LinkedIn for a while. And I've always loved your content and your commentary. And we always comment on different things. And I love to talk about emerging technologies. And you're leading that charge. So I would love for you to introduce yourself and tell people a bit about what you do. And I definitely want to get into your idea about nudge technologies and things like that.


Enrico Panai  01:43

Thank you very much. So basically, I'm a philosopher of information. I always studied philosophy and took a Ph.D. in philosophy, but I always worked as a developer in the ICT sector, so I mixed the two worlds: I do development with a very strong philosophical background and ethical framework. I'm Italian, basically, and I came to France 15 years ago, and I taught at university for several years. When I moved to France, I started again to work in the private sector. And so, a few years ago, I created my consulting company in the ethics of data and artificial intelligence. Now I work for B2B companies, like banks and insurance companies in France, just to help them find solutions and good reasoning for ethical problems and ethical issues.


Debbie Reynolds  03:18

Wow, that's really cool. That's awesome. You call yourself a philosopher; I studied philosophy as well, so I think we have that in common. My mother was horrified; she was from a generation where you went to school for something practical, like banking, so she just didn't understand what I was going to do with it. So it's kind of funny that I ended up in technology, and I think it's really cool that we have that in common. I also want to congratulate you. You work with someone we both know, Ryan Carrier of ForHumanity, and you are, I think, a licensee of their audit process for ethical AI. Tell me a little bit about that.


Enrico Panai  04:12

Yes, so I met Ryan a few years ago, and at the moment, I am a Senior Fellow. ForHumanity is a big association; we count more than 100 people around the world, and I think we are more than 40 Senior Fellows. The basic idea was to create a kind of ethics-based audit. In order to build an ethical environment for companies and to construct an infrastructure of trust, we use several pillars to validate and verify that companies are using data, personal data, and artificial intelligence systems in a good, appropriate way. We don't decide what is good or bad for an ethical system; we just verify that the whole environment around it is built in the correct way. I think that with ethics, you cannot simply comply; you can only work to build the environment that brings people to reason in an ethical way. So that's what we are doing with ForHumanity. The first audit that we are going to publish will probably be about the UK GDPR, which we are preparing together with the ICO in England and which we are going to test in a pilot project next year. And then we are working on several kinds of schemes related to data: for example, biometric data, and the future EU AI Act. Another one that is very, very important for me is the Children's Code in the UK, because we are working on children's data and how to protect children's data. And we have a lot of projects and people working on something that I think is very important: a code of data ethics. It's a very specific subcategory of a code of ethics, related to data, algorithms, and the practices of developers and management, focused on the whole organizational system of a company.


Debbie Reynolds  07:27

Yeah, I like Ryan. Ryan has been on the podcast. He's wonderful. Since you're a philosopher, I’d love to ask you this question; I know you have a great answer to it. I think some people stop, in privacy, at the regulation stuff, right? And we know that there is a gap there, and that's where ethics is. So I tell people that not all laws are ethical. You can't pretend that law or regulation gets you to the place where you've done your job or done your duty as it relates to the privacy rights of individuals. What are your thoughts?


Enrico Panai  08:21

I think that what you're saying is true. You cannot only comply with privacy laws, because it's a bit more complex than that. First of all, I like to say that privacy is a foundational feature. We cannot do ethics without security and without the privacy of personal data. So these are not even ethical principles anymore; they have become normal, something without which you cannot do anything else. And so we should work on privacy for that reason. However, at the moment, we are not there. I don't know if you have seen a lot of websites after the GDPR; they just added a few more popups that you have to accept. We are not taking an ethical approach to personal data that way. We are just attesting that they are using cookies for something or something else, but no user is going to read the long privacy policy when they are visiting a website. So this is a very light way of understanding data protection, from my point of view.


Debbie Reynolds  10:08

Yeah, I call it cookie theater in some ways. So I think this intersection is so important, and it’s going to be more important going forward. And that is AI; the harm in AI can be so detrimental to an individual that you really can't wait for regulation, and I don't think regulation will ever fill that gap. So I think it's important to have people like you who are working on it at that fundamental level, and also making sure that organizations understand that just ticking a box on a website, or getting a template off the Internet and writing a policy, isn't going to be sufficient. I also think that fines shouldn't be the driving factor. What I'm seeing now, and I don't know if you're seeing this in your business, is that part of the good thing with the regulation is that more organizations understand that they need to be cautious, and they need to ask more questions, especially of third parties, about how they handle data. What I'm seeing is that if a company can't really answer questions about how it handles the data of individuals, it’s a risk, right? To organizations, it can be a barrier to business. So that's the bottom line right there. What are your thoughts?


Enrico Panai  11:52

So I think that we misunderstood the idea of privacy when we were trying to comply with the GDPR. Because basically, there is a philosophy behind the GDPR that is saying, hey guys, we are in a new world, a new informational world, what we call in philosophy the infosphere. And so data, and in particular personal data, are very related to the data subject. If we want to progress in this world, we have to improve our way of treating these data. The wrong thing is that we are trying just to comply with privacy regulation, but we should use the GDPR as an engine to increase and improve the technological level of our companies and organizations, because it's not only about the technology itself; it's also about the way you manage data, since so much of the technology we are using is data-related. We have to manage data in a good way to make the technology more powerful for the future. There is another thing that is important to me, when we are talking about the information society. We think that we are in a society that produces information and treats information, but what is our level of maturity about it? If you go to a country with a mature water society, you open the tap and you drink the water, because they are mature about water; they have proper, potable water. For me, being in an information-mature society would be like that: you can use information, and information is used in a good way. The fact that you have to accept cookies on a website, or face a very long privacy policy any time you use an application or software, means that we are not yet at a very high level of maturity. It's as if you opened the tap and then had to run a chemical test any time you wanted to drink something. No, a mature society shouldn't have to say, okay, I'm informing you of how I'm going to use your personal data now. If you want to be a mature information society, you shouldn't make me think about it; I should be sure that you are using my data in a good way. I shouldn't have to mind about it; I shouldn't feel like you are cheating on me because you are stealing my data and treating or manipulating them in a way that I don't like. So we are still in an immature informational society, from my point of view.


Debbie Reynolds  15:33

Yeah, I agree with that, and I like the analogy you gave about water. I guess there are two streams of things happening. One is that there's an ocean of data out there, and it's being used in all types of different ways that people don't know or understand, right? And then we have this cookie thing going on, where people are trying to comply, where for every drop of water, to use your analogy, they want to ask you a question. But in the background, there's this ocean of stuff that's flowing in, and they're not connected right now. Right now, the IAB Europe case is going on about real-time bidding and how organizations handle consent and transparency on websites. There's a ruling, I think, coming out of Belgium, if I'm not mistaken, and it hits the point that there are these two different worlds. There's a facade, this theatrical thing happening in the front, and then there's all this other stuff happening in the background, really damaging, harmful stuff, and those things aren't connected right now. So I think we need to really talk about that. What are your thoughts?


Enrico Panai  17:16

You are right, but that is because we are living through one of the most important revolutions of the last 20,000 years. We shouldn't think about data as a technical revolution of the last 20 or 50 years. We are changing the way information is connected to its support, as happened with writing in the past. We had the great revolution in agriculture, and then the invention of writing; those were great revolutions. And the last one is the Turing revolution, the digital one, and we are living it; we don't see it yet. To understand what we are living through, just think about the quantity of data. If you think of data like water, to keep the same analogy: all the data produced from the first painting in a cave until 2005 or 2006 were like a swimming pool, big but not enormous. By 2016 and 2017, we had arrived at the Black Sea. So we completely changed the dimension of the data we have to deal with. We don't even know where we are going to store all the data we are producing today. The problem is not only where and how we are going to use them; we don't even know how to store them, there is too much. And yet, we are living with exactly the same organizational approach as the paper era. Still today, a lot of companies use a PDF to print something, sign it, scan it, and send it back. And all those steps are steps where information and personal information can leak out of the company. When you ask an employee to print something and bring it home to work with this data, you increase the risk of losing somebody's personal data. So the reality is that we are living in a new computing era with a very old paradigm for treating this kind of data. In this sense, the philosophy of the GDPR is interesting, because it is saying that you have the opportunity to change the way you treat data. If we use this regulation as a driver for change, instead of thinking that we just have to comply with it, then we could take a very big step. I'm living in Europe, and I think that Europe is using the GDPR as a geostrategic tool to improve the way we treat customer data.


Debbie Reynolds  21:14

Excellent. I love that. You're right at that cutting edge of technology and privacy, and also just looking at the emerging things that are happening. So what is happening right now in the world that concerns you the most as it relates to privacy? What’s ahead that we're not thinking about that concerns you?


Enrico Panai  21:48

That's a big question; there is a lot happening. I work with personal data at large, and I'm co-chair of a group in Europe that is working on AI-enhanced nudging, and this is a point that is very interesting. But before talking about it, because you studied philosophy, I want to share another thought about what I think privacy is and how it can be affected by several kinds of decisions or tools, like the nudges that we're going to talk about in a while. What is privacy? That's a good question. What is privacy, and why are we so concerned about it? I try to take a diegetic approach to privacy, and I change the perspective. I don't know if you remember what diegetic means. It comes from poetics, and it refers to the idea of something inside the story. So, for example, in a movie, the diegetic sound is the sound that all the actors can hear inside the movie, okay? Extradiegetic sound is the sound that only the people watching the movie can hear: the soundtrack. The actors cannot hear it. Now, privacy is like that. It is something that should stay diegetic, inside its sphere. What we don't like about privacy is when it becomes extradiegetic. So, for example, I can accept that if I'm a student in my class, my classmates know all my grades, but I don't like it if they go outside my class. So it is a problem of dimension. Privacy is not something absolute but something related to the environment. What I don't like is when it goes outside: when privacy that is fine inside the classroom is taken out by somebody. These are the two elements of privacy I don't like as a citizen. So, with this approach to privacy, we have a definition, and we understand that it is something related to the environment. Now we can understand how to make a trade-off with transparency, okay? It is transparency if a student in a classroom knows the grades of the other students, but it violates privacy to take those grades and publish them on a public website, okay? So any time we talk about transparency and privacy, and that's important, we should be very careful to focus on the level of abstraction we are using and the environment we are in. Now, the point is that a nudge can work transdiegetically: it takes data out of one environment. And in this movement, moving data from one environment to another, I can use those data to make people do something. That's what I'm working on with nudges; I'm trying to write a standard about it, just to mitigate the possible risks related to AI. For people who don't know, nudges are small incentives that are barely visible, and they bring you toward something, okay? And they can be designed; we call the people who design them choice architects, often behavioral psychologists, because they can design the way you behave. That is the simple case. The real problem is when an algorithm takes your data to invent a new nudging mechanism that is turned back on you again: it is taking data from an environment, the decision-making process, that is something very personal.
And it's taking particular biometric data or behavioral data from you, using and elaborating them, and creating a mechanism to make you change your behavior. I cannot say that it's good or bad, but I want to know how these mechanisms are used. And I want to protect certain categories of people who are more fragile when facing this kind of mechanism, because they cannot grasp how deeply it is going to shape their choices.


Debbie Reynolds  28:21

Yeah. I did a video about nudge technology a while ago, just to explain it to people, because these are really deep concepts. So I'll lay it out at a high level, and we can talk a little bit about the UK Age Appropriate Design Code, because the ICO is the one that really called this out, and I thought it was great that they did. Basically, nudge technology, or I guess you can call it a philosophy or a framework, is about psychology and how people learn or react to stimuli, right? It has been used well in things like education, where, for example, you're trying to get a child or someone to learn something. Nudging in that way can be very positive, where you're trying to increase someone's knowledge or understanding or comprehension. What's happening is that this philosophy, for lack of a better term, of nudging is being implemented in technology and AI. And when it's in AI, sometimes people use it for different things: they want to nudge you to buy something or to take some action that maybe you would not have done if not for that influence. The Children's Code really calls this out, especially around children. Children don't understand it, and it's a kind of psychological manipulation in a way, right? Because it's like, okay, I want you to take this action, I want you to buy this product, I want you to do this thing, and maybe I want you to think it’s your idea, in the absence of knowing what's happening in the background. That's the concern about this technology. And I like the fact that you're concentrating there, because I think the thing that people aren't talking about with technology is the psychological part of it: taking the way that people perceive input, and then creating algorithms that can do it billions of times over and over, so you're only seeing what they want you to see. And then there are things like the Metaverse; I work on Metaverse-type stuff, with a lot of sensory things, virtual reality, augmented reality, and that's going to be a whole different level of nudging, because not only will these technologies be able to show you what they want you to see, which they can do right now, right? But you're going to be in a more immersive space where you literally have blinders on, because you can only see what they want you to see. And then they can record your interaction or your response to that, and maybe get the response that they want from you. If they know, okay, this person responds the way that I want them to when I show them this, let me show them more of this. What are your thoughts?


Enrico Panai  32:09

I think that in more immersive systems, the cognitive biases used to change people's behavior will be more immersive too, more than just one single bias. And the point is that we will not have only human choice architects doing this, because there is too much data to treat, and algorithms will produce adaptive nudges. Nudges have always existed, because our parents used them on us, and our teachers when we were young, and we still use them every day. The problem is not trying to nudge somebody, because basically you're trying to do something good for somebody; this is the basic philosophy behind nudges. The point is that, as a mechanism, a nudge can be bad or good, okay? It depends on when you are using it, against whom you are using it, and how long you are repeating the action. There are several experiments being done at the moment in several universities showing that when you repeat positive nudges, in the long term the general effect can be negative. So that's what we want to evaluate, assess, and check over time. We are not saying that nudges shouldn't be used; they are a mechanism, and so they will be used. They are already used, as you said, in social networks and video games; they are quite usual, even if they are not always called nudges. But we know that they are used. The point is that we want, in a certain way, to protect the liberty of the person, the constitution of the self. It's very important for that construction that you shouldn't be hacked during a decision. You should be free to take a decision, even to be wrong; it's your decision, and you should have this freedom. What they are doing in the UK is wonderful, even if they stop at the idea of digital nudging. Digital nudges are a different category: they are the same kind of incentive, but built into a digital interface, whereas AI-enhanced nudges are created in an automatic way by statistical models. So their focus on digital nudges is very good, but the danger tomorrow is the nudge improved and enhanced by AI. And so, with ForHumanity, we are working on the Children's Code developed by the ICO in the UK. I think that the UK is the most advanced country in this area, and they are doing a wonderful thing in protecting children. What we are asking, as ForHumanity, is that any company that holds children's data and has a history of treating children’s data should have a special committee, not a classical ethics committee or any kind of algorithmic committee, but a special committee dedicated to children's data. We are calling this the children's data oversight committee. And we believe that any such company should have a group of experts to prevent the possible dangers of using this kind of data. So that's our approach. As you said before, it's not only compliance; we don't have any regulation about this yet, so let's do it before it is too late.


Debbie Reynolds  37:38

I agree, I agree. Actually, I did a video on the UK Children's Age Appropriate Design Code as well, and I also recommend it to my clients that aren't in the UK, because I think it's just a great framework, especially for people doing development of applications. So it's something that they really need to think about. And it's funny, because I had this conversation with a developer about nudging, because it is used a lot in education, right? And they're like, well, does that mean I can't use nudge technology? I'm like, no, that's not what it means. It means that we need to be careful about how we use it, and that we do it in a transparent way, where the goal is to increase the education of the individual, not to create a kind of manipulative environment, right? You don't want to nudge them to buy your bottled water product or something like that, and we know that that can be really dangerous. So I think it's a great step. And I feel like the reason your work is so important is that you're calling out a category of psychological work; you are touching on something that people haven't really been looking at and that we need to look at a lot more, and that is the psychology behind AI systems and what the impact can be on the individual, especially as it relates to what their actions will be. Whether it's called a nudge or not, we understand that there are psychological tricks that are used, not just in technology; this happens in person-to-person interactions as well. But technology can supercharge that interaction and make it happen billions of times faster than a person can in an in-person interaction. So it's very important.


Enrico Panai  40:09

Yes. And what you just said about age-appropriate design: it was already used in the past by developers who were aware of the problem, but we should use it by design. That means that all the interactions you design for a certain age have to be reasonable for that age; you cannot use the same language for an adult as for a junior, for a child. So that's important, we should be careful of that, and we are working a lot on these aspects; they're quite important. But at the same time, I think that we should also take care of people who do not understand the complexity of the digital world, because we are living in a kind of bubble, you know. About privacy, you contacted me, I know what we are talking about; we use the acronyms of AI and we understand, we talk about nudges and we understand. Yes, we are people in the sector who know all about it. But my experience throughout my entire life has been a bit different. I started very young, teaching basic IT to people. I was 19, and I had the opportunity to work for a big company and teach classes at a very basic level, and I have carried on doing this all my life. I still do it, because I believe that teaching people the basics of ICT is the only way I can understand how people understand technologies. And when you teach, you realize several things. The first thing is that what we understand about data and information is not the same for everybody. Everybody's got a different semantic structure, or even better, a different epistemological approach, for understanding what they are reading, what they are seeing, et cetera. So that's important, and that's maybe why the social networks can develop such simple categories of people; I think people should think about that a lot. The second thing is that we believe the new generation is more comfortable with technologies than older generations, and we believe that because they were born with a mobile phone in their hands. But I must say that in the last five or six years, I have started to realize that the youngest people have an enormous conceptual gap in using data and information. Yes, they use a mobile phone, but they don't have something that is very important: the structure of the world they are visiting. They don't have the geographical metaphor of the world they see. Just as an example, those who started to use ICT in the 90s generally had the metaphor of the office and projected that metaphor onto the computer. You got a desktop, you got Word, the software; so you were projecting your physical metaphors onto the computer. You used archives: you put files into folders, you gave names to the files, et cetera. So you were organized. Younger people started to use technology through the mobile phone, so they don't have the idea of an archive; they may even have computer science degrees, but they don't have it. And so what we are creating is a generation where you have a few people who do development, and then the great majority of people who do not understand what the computer is doing, what the mobile is doing.
And that's even worse than the old generation, because they don't mind; they see it as some magic that works like that. If you talk with a younger child, they don't know the difference between using SMS or using WhatsApp or Messenger. For them, there is no difference; it is all just communication, and they have lost the physical approach to information technology. So their brains cannot picture what is happening to their data or what they are producing in the infosphere, something we were aware of much more than the new generation. So why is the work on the Children's Code so important? Because we have to take into account the different way the new generation is thinking about data, okay? Their own personal data, because personal data are not something external that you can buy and sell. They are me; they are like my arm. You cannot imagine just taking your personal data off of yourself and treating them as an external object. They are part of your integrity, of you being a data subject. The GDPR talks about the data subject; it means that the data are tied to a human entity that they represent. So they are not separate. I don't know if I was clear in this.


Debbie Reynolds  47:33

Oh, definitely. I totally understand. And tying back to what we were talking about, how data is handled right now, I think there has to be a bridge at some point between this ocean of data that is being handled in the background and what we see on the front end. When we're talking about Data Privacy, let's use the GDPR for example, we're thinking about data controllers and data processors; data is something that you give to someone, and they do something with it. But there's this whole other stream of data that you never gave to anyone: the inferences that a company or organization can make about you. And they can take action for or against you based on those inferences. I don't think the regulations quite grasp that, and that's why a lot of these companies make so much money, because it doesn't really fit into that framework. What are your thoughts?


Enrico Panai  49:01

Yes, the gap there is that, technically, an inference is not a datum. One thing I always did during ICT projects is use the etymology of words, because it helps you to understand better what you are talking about. The word data comes from the Latin datum; it means something that has been given, something that already happened in the past, okay? And that's an important distinction, because an inference is something that you are building, not something that has been given in the past, so it's not real in the same way. In fact, that's why they treat inferences not like data; they say, we are not using data, we use inferences or proxy variables to make a decision, but they are not data. And that's a problem for us. The point is that inferences should be used in a very controlled way. Let me give you an example that is maybe not very well known. I come from an island in the Mediterranean Sea, Sardinia. It is not Sicily; it's another Italian island, more or less the same size. And Sardinia wasn't very developed in the last century; generally, people did very poor jobs in the interior of the island, while the coast was a bit more developed. And there was a criminologist in Italy called Lombroso who started to take a kind of scientific measurement, every measure of Sardinians, to demonstrate that, because of some mathematical, anthropometric proportion, we could only be dangerous people, okay? So Sardinians were treated in a particular way, because the Italian state thought that we were dangerous. We were treated as criminals, or we were used in the army just to go to the front line, because we were thought to be dangerous. And this is inference for us: someone taking a kind of measure of your face and concluding that you are stupid, or you are dangerous, or I cannot trust you. And I know it is not the same story as that of Black Americans, but I think it meets the same problem: making an inference about what you are because of what your physical measures are, the color of your skin, the school you have done, the neighborhood where you live. Now, I don't say that such data should never be used, but they should be controlled. Yes, sometimes they should be used, because I may need to know in a database if you are a woman or a man, so I can do a kind of positive discrimination and say, oh, we have too many men in this group, we have to be open to women. I need this information because I need good, ethical, positive discrimination. So it's not true that we don't need these data at all, but I shouldn't draw inferences about race or protected categories in making any kind of choice, even for a single advertisement, not just for a job. It's not good, for example, that a young girl gets only advertisements in pink. I don't agree with that, because it's not the way we have to treat a human being; we are modeling her mentality, her thinking, her beliefs. That's not correct, because algorithms shouldn't reinforce this kind of approach, okay? It existed already in the past, and we tried to fight this approach. Now we have artificial intelligence that is reinforcing it? No, thank you. We have to be more intelligent and smarter.


Debbie Reynolds  54:42

I agree. I know there's a case going on in Europe right now about a credit agency that denied a person, I can't remember which EU country it was in, an account for electricity, a utility of some sort. And the only information they had about the person was their name, their sex, and their address. They didn't have any credit history or anything, so they literally denied this person based on what their name is and where they live. I want to see how this case goes forward, because it's going to have a huge impact. That's an example of inference, right? They can't say that it's a black box, secret sauce type of thing with their algorithm, because they caused a negative impact on this individual based on the way their AI worked. They need to be able to explain why he was denied credit, based on whatever their inference is about this person and where he lives. So you have hit the nail on the head, and I think that what you're talking about related to inference and discrimination is something that we really need to take a closer look at with AI, for sure. So if it were the world according to Enrico, and we did everything that you said, what would be your wish for privacy or data protection anywhere in the world, whether it be law, technology, AI, or nudges? What are your thoughts?


Enrico Panai  56:31

I’m a dreamer. I love to walk on the street without thinking about my security, okay? I love to talk with people without being afraid that something will happen to me or my family. And I would like the same for privacy. I think we will be in a mature world when nobody is really stressed about their privacy. The work we are doing now is just to create a more livable environment for the data subject. I want everybody's data, from the very beginning when they are children until after their death, because there are still data about people after they die, to be treated in a good way. And we should care about these data without people having to mind about their security. I don't want to take power away from people; I want to give them time and freedom, and take away all the energy they are using to think about this kind of stuff. They should do other kinds of stuff, not think about their protection. So I hope that in Europe and in other countries, like the US or Brazil, Japan, and even China, we could get to a future system where our data are protected without our thinking about it. That's it.


Debbie Reynolds  58:29

Yeah, I love it. That was great. Thank you so much, Enrico, for joining me from France. This was great. I’d love to continue the conversation with you, and I'm in contact with Ryan; I think we're going to try to collaborate on some stuff this year as well. So congratulations to you for your work with ForHumanity, and I love what you're doing. Keep up your content and the things that you share; these are such important cutting-edge topics that people may not be thinking about now, but they're very important.


Enrico Panai  59:06

Thank you very much for the opportunity, and I hope to talk to you again.


Debbie Reynolds  59:10

Yes, yes. Definitely, thank you.