"The Data Diva" Talks Privacy Podcast

The Data Diva E26 - Pedro Pavón and Debbie Reynolds

May 04, 2021 Debbie Reynolds Season 1 Episode 26
"The Data Diva" Talks Privacy Podcast
The Data Diva E26 - Pedro Pavón and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds, "The Data Diva,” talks to Pedro Pavón, who works on Privacy, Fairness, and Data Policy, at Facebook.  We discuss being authentic and experiences of being a person of color in the data privacy field, the vital human elements of privacy as it is thought of differently all over the world, the Netflix documentary Coded Bias featuring Dr. Joy Buolamwini, the interplay of privacy and technology, bias in AI and technology, and how this relates to data privacy, facial recognition, and surveillance, the problem of using statistics to measure AI success and human harm, AI and decisions about life and liberty,  the impact of using facial recognition in policing and education, the problem of applying technology made for one purpose to a different purpose, the need to constantly monitor and correct AI systems, the challenge of information silos within an organization with data privacy, and his wish for data privacy in the future.




Pedro Pavón

41:57

SUMMARY KEYWORDS

people, privacy, technology, bias, data, algorithm, police, wu-tang clan, create, problem, world, person, groups, communities, decide, happening, facial recognition, ways, organizations, documentary

SPEAKERS

Debbie Reynolds, Pedro Pavón

Disclaimer  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

 

Debbie Reynolds  00:13

All right, hello, my name is Debbie Reynolds. They call me "The Data Diva." And this is "The Data Diva" Talks Privacy podcast, where we discuss privacy issues with industry leaders around the world, with information that businesses need to know now. So today, I'm very excited to have a guest that I've followed for quite a while on social media, Pedro Pavón, who is the head of Privacy, Data Policy, and Ads at Facebook. Hello.

 

Pedro Pavón  00:45

Hey, I wish I was the big boss, but I'm just one of several, but yeah, that's the job. And this is me. Thanks for inviting me. And I'm super excited to be here. 

 

Debbie Reynolds  00:55

Yeah, well, we met. So actually, I attended a webinar. It was a summit that you were speaking at last year, in 2020, for BigID, I believe.

 

Pedro Pavón  01:07

Yeah, that's right. That's right. I do remember. 

 

Debbie Reynolds  01:09

And you really stood out a lot because you're very authentic. I don't think I know anyone who's quite like you, and I really see it that way. And then you have so much swagger.

 

Pedro Pavón  01:26

SwaggieP? That might be my nickname on the streets, or maybe I just gave it to myself. But no, I appreciate that. Look, I'll tell you, like, I'm a plain-spoken person, I have strong points of view on some things. And you know, I don't think they're that controversial, so I share them widely and loudly. And the other thing is, there are not a lot of people of color, people from my background, in this space. And so I'm so privileged and lucky to be able to have people be interested in what I have to say that I'm not going to waste opportunities to, you know, demonstrate Latino excellence wherever I can. So that's kind of what motivates me to be the way I am. Hopefully, it doesn't land like I'm a jerk, but, you know, just trying to keep it real on the streets. That's all.

 

Debbie Reynolds  02:10

Yeah, you're just being you. And actually, when I saw you, first of all, I was really impressed by just you and your presence, and just your fluidity with how you talk about privacy and just things in human life, because privacy is about humans, right? So you really bring a human perspective to privacy. And also, as I commented when I first contacted you, and again today, I was very impressed by the Biggie Smalls portrait painting that you have on your wall. So it definitely makes you stand out.

 

Pedro Pavón  02:47

I appreciate that. You know, I can tell you a brief story about how that came to be. I love Notorious B.I.G.; you know, he and a couple of the other 90s-era rappers are my favorites, including Tupac, whose portrait is upstairs in my living room. But here's the thing: when I worked at a law firm, I decided I wanted to put some art up in my office because I was spending so much time in it. And somebody had gifted me a Wu-Tang Clan 36 Chambers record, like the actual vinyl in the case. And for people who aren't familiar, Wu-Tang Clan is obviously one of the greatest rap groups of all time, I'd argue one of the greatest music groups of all time. And that was their first album, and the cover of the album was, you know, nine African American men in ski masks, essentially, right. And then, you know, it talked about the music or whatever. So I put that up in my office, and it didn't land well with the firm. You know, they said it didn't project, like, a business-appropriate image or whatever. And I was an associate at the time, but I remember the conversation because a senior partner approached me and said, you know, is this the image you want to project? I said, well, look, first of all, no clients come in here. I'm just an associate. But even if they did, I don't see how it's problematic. And second of all, I'll take down the Wu-Tang Clan album when other folks take down, you know, their Pink Floyd posters and their Grateful Dead posters and, you know, the country music ones, the Hank Williams Jr. posters, okay. When those come down, because they were all over the firm, I'll take mine down. I'll trade you Elvis for the Wu-Tang Clan any day. But I'm not taking mine down just because it's not your musical preference. And so, now that I'm a little bit more senior, and, you know, whatever, I decided, like, I'm going to project an image of myself that's authentic. And this is the kind of music I listen to, and these are my favorite artists growing up and my idols as a kid. And so I don't mind sharing that with the world. And I don't think it should be controversial at all. And I'm glad you like it.

 

Debbie Reynolds  04:56

Oh, I love it. I love it, and see, that swagger is coming out. So listen, great. Well, I would love to just get a sense of you. I mean, you obviously have had a lot of jobs in technology, so a lot of your career is kind of technology and law, and I can see how privacy weaves into that. As I said, you were at Salesforce when you and I first got to know each other. And you've also been at Oracle.

 

Pedro Pavón  05:32

Oracle, yeah. 

 

Debbie Reynolds  05:34

So what is going on right now in privacy that really captures your attention? Whether you're concerned about it, or you think it is good or bad? What are your thoughts?

 

Pedro Pavón  05:45

Look, to me, what's interesting is, like, some emerging tech around privacy, like PETs, privacy-enhancing tech, and the use of, like, automated systems, automated recommender systems. There's a lot of potential there for creating, like, tremendous economic opportunity and business opportunity, right. But there are also tremendous opportunities there to, like, make the world a more privacy-friendly place. You know, I think privacy-enhancing technologies have tremendous potential to change just the amount of data that is necessary to run the types of technologies that everyone loves and enjoys, like social media, etc., right. And so what really excites me is the amount of investment I see in research and study around those areas. It doesn't mean I love everything that people are trying, but I definitely applaud different companies, including mine, but also Google and Apple and others, for at least taking it upon themselves to experiment with new ways to power their technology that embrace principles like data minimization and anonymization, if it's possible for me to say. So, like, that's exciting to me. And I feel like right now is a cool turning point. Because hopefully, we can pivot from "we need as much data as possible, you know, to run ads or to personalize content" to "we're building technologies that require the absolute least amount of data necessary to do the same things." And I see that transition happening. It's slow, and it's hard. But I'm really excited about it.

 

Debbie Reynolds  07:27

Yeah, yeah, I feel like with a lot of the regulations now, they're really trying to tie data collection to purpose. Where I feel like before, it was like, let's just collect as much as possible, and then we'll figure out what we want it for, you know?

 

Pedro Pavón  07:44

I think that's right. And the rules aren't allowing for that approach anymore. I don't think anybody necessarily set out to collect as much data as possible with, like, malicious intent, right? It's just that the technology wasn't there otherwise, and experimentation by engineering people, you know, like, the nature of that process is very much, give me as many tools as possible. And a lot of data was one of those tools. But, you know, the tides have changed, rightfully so in my opinion; data hoarding is no longer viable. And so, you know, there's just a lot of experimentation and research and study into ways to, you know, make the Internet run that maybe are less data-intensive. I think it's great. It's a great shift.

 

Debbie Reynolds  08:25

Yeah, yeah. And then a topic that you talked about in your work, and some of the posts that I've seen, and, you know, you see my posts as well: the problem of bias comes up a lot. That's something that you talk about a lot, and I'm concerned about. So what are your thoughts about bias? Where are we going? Are we going to get better or worse? What do you think?

 

Pedro Pavón  08:52

I think things are precarious right now. You know, we see all of the controversies around facial recognition technology, right? And there's, you know, Dr. Joy Buolamwini at MIT, and she actually has, like, a documentary now, right? About facial recognition. Which is excellent, by the way; it's worth watching. And what I love, let me plug it, is that the documentary features all women experts, right? What a beautiful statement to make in a documentary about bias and discrimination, right. And so, like, I thought it was well executed. But anyway, you know, what we found is that the datasets, all the data hoarding we talked about, that people have used to, you know, train all of these cool automated recommender systems and AI systems that we take for granted, have biases built into them, because society is biased, and the data reflects that bias, right. It also doesn't help that historically, like, the engineering decision-makers at all these tech companies are men, and particularly white men. And so when you layer all those things on top of each other, and then you try to create a fair environment, what we're learning is that while no one might have malicious intent, you end up with an outcome that is implicitly biased anyway. And that's problematic. And that touches basic things like, you know, facial recognition on my phone, but also some very serious things that involve people's livelihood, like decisions about loans, decisions about freedom, right, in the criminal justice process, decisions about economic opportunity, like recommendations around buying real estate, etc. So we have to guard against these problems. I don't know that anyone's provided a silver bullet solution to deal with all of this. I don't think that exists. But I'll tell you what a good first step is: including people of color and women in all of the decision-making and engineering processes involved, because we're best suited to flag these things and say, have we considered this, or have we considered that? I don't think a roomful of white men is going to solve this problem. And so my hope is that we're heading in a different direction, and that we're inclusive in who we decide gets to do the heavy thinking on this, so that when we do the work, that translates into diverse people with different backgrounds, talents, and experiences working on the solutions. I think we're headed in that direction, and that makes me hopeful. But we have to stay on top of it.

 

Debbie Reynolds  11:23

Yeah, I think one of the challenges, and I've come across this, is people debate whether there's bias at all. So to me, that's a big problem. And part of that, in my opinion, is the way the Internet is, where, you know, some people assume that the things that they see on the Internet are the same things that other people see, and we know that that's not true. So if you are looking at something, you know, like my boyfriend, who's Jewish, he sees totally different things than I see, and it's amazing to see how different they are. So I think, you know, if you're someone who doesn't have that level of friction in life, I would say, or, you know, you haven't had those experiences, you may think, well, I don't have those problems, so why is bias a problem in this arena?

 

Pedro Pavón  12:21

I think that's right. And you know, there's the other issue, and I hear this sometimes when I'm on the speaker circuit or at conferences, which is, you know, people talking about the AI, the algorithms themselves, and saying, look, they're agnostic, they are only responding to the societal biases that exist; the algorithms themselves aren't biased. They're just being responsive to what's happening. And so the problem isn't a technological one, it's societal. Overwhelmingly, I've heard that case made by white men. And it's easy to make that case if you feel like the bias doesn't affect you negatively, right. And so, like, you know, my thought process in reaction to that is that that's an incomplete analysis, right? Because technology doesn't exist in a vacuum; it is here to serve people. And if the ultimate outcome of employing that technology is that certain groups are impacted negatively and in disproportionate ways, then that technology doesn't work, whether you think it's a societal reflection or not. That technology doesn't work, and we shouldn't use it, if it is amplifying or perpetuating outcomes that either reinforce negative, false ideas about people, or perpetuate, you know, unfair outcomes for groups that are historically marginalized and underrepresented, or maybe potentially generate new groups to be, you know, marginalized and not well represented and not benefiting from the technology equally. And so I hate that analysis. I think it's incomplete. And whenever I hear it, I push back pretty hard.

 

Debbie Reynolds  13:58

Yeah, me too. I don't like to hear that. Partially because I feel like two things are happening. One is, you know, there is some testing that happens on kind of a smaller group of people, and then you sort of jump to the conclusion that you can put this on everybody, and it hasn't been tested on everyone, or whatever. So that's kind of a false thing in and of itself. And then when you're thinking about databases, you know, if someone went to Google and searched and nothing came up, they would say, Google sucks, it doesn't work, right? So with, you know, a database, you want to have a result, whether or not it's the best or correct result. You want to throw something out there. So in a facial recognition thing, if they can't get, like, an exact match, and they've come up with some things, people say, oh, this doesn't work, so, well, let's throw up these people, you know, there's a 70% match. And sometimes people feel like, you know, they abdicate their responsibility to technology. It's like the evil robots are taking over, and they can't do anything about it and stuff like that.

 

Pedro Pavón  15:10

I think that's right. And, look, it's a collective effort; like, no one group has all the right answers here, and we're going to have to work together. But most importantly, we have to allow for these debates and these discussions and the work that needs to be done to be tense, for there to be friction, for people to feel vulnerable and, you know, have to deal with defensive reactions. Because if we don't, you know, like, homogenous decorum isn't going to solve complicated problems that affect different people, different types of people, in disproportionate ways. And so I think we have to be ready for some tough discussions and some tough actions if we want to get this right and if we care about fundamental fairness and equity.

 

Debbie Reynolds  16:00

Yeah. And the problem is, I know, when people talk about regulation, although I'm not against regulation, I think smart regulation makes sense. But a lot of these harms, especially as they relate to anything biometric, the harm can be so detrimental that the law is not going to catch up in time. So, you know, I'm all for prevention and being able to do things in proactive ways and talk about things at a design level so that we can prevent some of this stuff. What are your thoughts?

 

Pedro Pavón  16:33

Yeah, and I think that's a good approach. And also, like, you can implement all the preventive controls and mitigation techniques you want; when a particular algorithm goes out in the wild, it's never clear how it's going to play out, which means you can't just cut stuff loose and then see what happens. You've got to cut stuff loose and stay on top of it and monitor it closely, and make sure that there are, you know, mitigation techniques employed to sort of guard against the algorithm going wrong, right, and you've got to do that on your own. I feel like that's the responsibility of the companies and organizations that put them out. And where we saw this not happen well was in, like, you know, that algorithm that was designed to, I don't remember the exact specifics of it, but basically, like, evaluate a teacher's performance, right? I think it was just a black box that got cut loose, and then all these tremendously great teachers were being, you know, misevaluated, I guess that's a word, or, like, just graded poorly, right. And another example of that is, like, the parole and probation and limited release algorithms that were helping judges and other decision-makers decide people's freedom, right. And, you know, there was so much bias in the data sets used to make those estimates that, you know, people of color were being treated one way versus the majority group, and that one way was, you know, significantly more harmful. So, yeah, we've got to prevent things before they happen. And then we also have to actively monitor and police our technology. And it's a never-ending process. There's no, like, okay, cool, this is done.

 

Debbie Reynolds  18:21

Right. Whereas it's an iterative process that always has to be improved. And then, too, you saw the documentary Coded Bias with Dr. Joy Buolamwini. That's amazing. And she brought up something that was really interesting in this documentary, which is, where you're using data and algorithms and you're trying to predict things, what they're doing is looking at the past and assuming that what happened in the past will happen in the future, right? Now, this was one of the problems with kind of this parole thing, where they're like, okay, this person did this thing in the past, so we think they're going to do it in the future. And that may not be true.

 

Pedro Pavón  19:04

Yeah. And also, a lot of the data set about recidivism is based on neighborhoods and zip codes and locations that tend to be over-policed, you know, underrepresented in political power, underfunded. And so, you know, the obvious biases involved in that, well, the biases might be obvious to us now. But when it's some mathematical equation in a black box making these decisions, to me, it creates tremendous concern. Not that, by the way, a white male judge making these decisions is doing a particularly better job, but at least we can hold that person accountable and look at their patterns and decide whether or not they should wear a robe and be in the privileged position to make these decisions about people's liberty. But, like, I'm not sure how we hold an algorithm accountable.

 

Debbie Reynolds  19:56

Yeah.

 

Pedro Pavón  19:57

And I also, yeah, I'm not sure how that happens. So, you know.

 

Debbie Reynolds  20:01

Yeah, yeah. Right. It's hard. And I think right now it's such a hot potato issue, and I feel like, you know, everyone who's involved has to take some level of responsibility. So even in that example in the documentary, about the teachers being evaluated by this AI, it was really, you know, regardless of what the system said, the humans that were involved should have been able to say, wait a minute, this isn't right, instead of sort of abdicating, or saying, oh well, the computer is telling me this, so this is the way things should go.

 

Pedro Pavón  20:41

That's right. And look, here's another example: I work in the ads business, right? Like, the threshold for value, you know, for personalizing ads, I'm going to make up a number, but let's say it's 70%. Let's say that 70% of the time you see an ad, the algorithm got it right, right? And I'm making that number up just for this example. That might be okay, because the societal harms from getting it wrong 30% of the time, and we have to decide this as a society, are less intense than an algorithm deciding my freedom. It is not acceptable for an algorithm to get it wrong 30% of the time for anybody, right? Like, I can't imagine anybody trying to make the case that an algorithm that gets a liberty decision wrong 30% of the time should be used by courts. And so I guess the point I'm making is, these thresholds are going to vary by use case and application and context, right. So, like, an algorithm that decides whether I can unlock my phone with my face or not, if Apple gets it right 80% of the time, and 20% of the time I have to put in my password, that's not a big deal to me; I'm fine with that, right. But if it's a police scanner with facial recognition, and 20% of the time it's pulling over the wrong person, or accusing the wrong person of a crime, or labeling the wrong person as a suspect, that's unacceptable. And it's the same technology, but it's just being used in different applications, and this matters greatly. And so for companies doing research in these areas, they need to think about, well, in what use cases and what applications do we want to use this technology, and it may or may not be ready for use, depending on what the case is. And I think where situations affect people's health, economics, or freedom, you know, the threshold should be extremely high.

 

Debbie Reynolds  22:33

Right. And I think from an evidence perspective, the threshold is high, right? The problem is, this sort of happens, this is happening, kind of as evidence. Evidence is being created sort of in the moment, I guess, and actions are being taken before things get to court, where you have to authenticate it, you know. So there was an example, I know that you've seen this, about a guy who was arrested because his driver's license picture matched, like, video surveillance of a store that the person went to, so he was arrested. And the police who picked this guy up even said that he didn't look like the person, but the computer told them that this was the guy, so they picked him up. And then he went in front of the judge, and the judge said, this is clearly not the same person, you know, so they eventually let him go. But I mean, now he had to go through this process, right? So now he has to fight against the computer. And what if they had thought, you know, he did look like that, you know?

 

Pedro Pavón  23:41

And not just that; like, now he has to deal with the humiliation and trauma of going through this for no reason. He's just an innocent person, right? He was just going about his business, and some math formula decided that he was going to have a bad day. But that turns into a bad month, or a bad few months, and just an overall bad experience. You know, and back to my point about accuracy: overall accuracy can be a misleading indicator, too. Because if you've got an algorithm, for example, that's getting it right 98% of the time, but then you look at the 2% that are incorrect, right, and 90% of the incorrect ones affect black people, that's a problem.

 

Debbie Reynolds  24:19

That's a gigantic problem, right?

 

Pedro Pavón  24:21

Because, like, it really matters. It's not enough to just say that, you know, we have overall accuracy and effectiveness; you've got to look at that effectiveness through the lens of different groups, and make sure that, even in those small margins of error, there isn't a disproportionate impact there. So figuring out what the overall margin of error threshold is, is something, and I think that threshold gets higher depending on how it impacts people's lives. And then, what? You're never going to get to 100% accuracy, because that's not how AI works, right? So say you get to 99.9% accuracy. Well, then you've got to really focus on researching that 0.1% and make sure that that 0.1% isn't disproportionately and negatively affecting groups.
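To make the arithmetic concrete, here is a minimal sketch in Python, using purely hypothetical counts rather than figures from the conversation, of how a strong overall accuracy can hide a disproportionate error burden on one group:

# A minimal sketch with hypothetical counts (not figures from this conversation):
# strong overall accuracy can coexist with most of the errors landing on one group.
decisions = {
    # group name: (correct decisions, incorrect decisions)
    "group_a": (9_000, 20),   # hypothetical majority group
    "group_b": (800, 180),    # hypothetical minority group
}

total_correct = sum(correct for correct, _ in decisions.values())
total_wrong = sum(wrong for _, wrong in decisions.values())
print(f"overall accuracy: {total_correct / (total_correct + total_wrong):.1%}")  # 98.0%

for group, (correct, wrong) in decisions.items():
    error_rate = wrong / (correct + wrong)
    share_of_all_errors = wrong / total_wrong
    print(f"{group}: error rate {error_rate:.1%}, "
          f"share of all errors {share_of_all_errors:.1%}")
# group_a: error rate 0.2%, share of all errors 10.0%
# group_b: error rate 18.4%, share of all errors 90.0%

The overall number looks excellent, yet nearly all of the errors land on one group, which is the point about examining effectiveness through the lens of different groups rather than in aggregate.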

 

Debbie Reynolds  25:06

Yeah. And I think, too, part of it is kind of, you know, statistics can be beguiling, right? We're talking about people. So if you're a person who was pulled over, you know, because someone thinks you look like someone who committed a crime, statistics don't mean anything at that moment, right? So given that it's impacting people in negative ways, we can't look at it like, oh well, you know, like you said, 90% is okay, 95% is okay. It really isn't, if we're not looking at, you know, where we get it wrong and how we could correct it.

 

Pedro Pavón  25:45

I think that's right. And in some ways, like, look, police have been getting things wrong forever, okay. I mean, we can turn on the news and see police doing a lot of wrong things live. But we have this entire process to hold them accountable. It doesn't always work, and it needs to be improved. But at least we have a way to hold human beings accountable, through the criminal justice system, through the civil court system, and in other ways. I worry about accountability when it comes to algorithmic decision-making, because, like, corporations are going to argue they're not accountable. Over time, we're going to see these arguments; you know, organizations that buy these technologies are going to argue that they're relying on them, and so they're not responsible. So ultimately, these algorithms lead to life and death decisions. And look, I'll be straight with you: like, I hope there's never another wrongful police shooting, but we know that we live in a world where, you know, when you're a black or brown person, particularly a man, and you capture the attention of police, you're immediately in danger. That is a fact of life in America especially. And so my hope is that that doesn't also translate to, when you capture the attention of an algorithm, you are also officially in danger. And I think, unfortunately, some of these technologies got out ahead of common sense and were deployed too quickly. And I'm happy to see people like Dr. Joy and other leaders kind of, like, helping reel some of that stuff back in. And I applaud cities like San Francisco and Oakland and others that have said, you know what, facial recognition doesn't have a place in policing. Period. And my hope is, I live in Atlanta, my hope is we roll something like that out here, too, because, you know, our police departments have a great track record with data.

 

Debbie Reynolds  27:34

Yeah, yeah. I guess one of my concerns, especially with people trying to use these databases for policing, is that I feel like, if you're trying to find a needle in a haystack, why are you creating bigger haystacks, correct? So, you know, I don't think the police department has, like, a good bucket where they throw people. There's the bad people bucket, but is there a good people bucket? So once you get into that system, you're almost automatically sort of seen as a suspect until you're not.

 

Pedro Pavón  28:07

I think that's right. And I worry a lot about, like, police investigation transforming into all-the-time, real-time police surveillance, right. I don't love that. Communities of color tend to be where all the police cameras are, tend to be where all the police officers are, and it creates a chilling effect on the community, and, you know, it perpetuates false statistics about criminality. Meaning, like, if a white neighborhood was policed as much as the black neighborhood, there are going to be as many arrests, because white people commit crimes at great rates too; it's just there's no cop there to stop them or to engage, so nothing happens, right? And so, like, I worry about the surveillance state that is sort of emerging, which is, like, police use of doorbell cameras, police access to doorbell camera footage, police access to, you know, installed cameras in neighborhoods, you know, police vehicles that, while they patrol, are scanning license plates and doing all this stuff. I'm not a super fan of any of the above. You know what, you want to be a cop, and you want to protect the community? Well, you know, do some good old-fashioned police work and investigation. I'm here for that, as long as you do it within the law. But the idea that the way to prevent crime is to observe me at all times? Not here for that.

 

Debbie Reynolds  29:39

Yeah, right. And with your example about certain communities being more heavily surveilled, if you think about it in terms of the data collection effort, sometimes people think the absence of data is telling you that that isn't happening in other areas. So that's a problem in and of itself.

 

Pedro Pavón  30:00

You're absolutely right, and then there's the other point of, like, the testing dilemma, which is, well, where do we test? Look, they're not rolling out, like, community cameras in, like, the world's wealthiest areas. Just think about this: the world's wealthiest people live in the most private spaces. This is the reality of the world. They live behind gates; they live, you know, across moats and on private islands. Why do they do that, right? Most of the richest people in the world are not celebrities; they're not Hollywood stars, right? So why do they do that? And then turn around and look at the least economically advantaged communities, and those people tend to be the most watched, the most harassed by police. Why is that? Because it's easier, that's why. And not only is it easier, you know, it creates this false sense of protection for the elites, right? Like, hey, the situation is handled across the train tracks, right. But what it really is, is the institutionalization of discrimination by the government, through the police wing of government, because in the end, that's what it is.

 

Debbie Reynolds  31:13

Yeah. You said a moat. I was driving in Potomac, Maryland. And I literally saw someone who had a mansion that had a moat around it.

 

Pedro Pavón  31:25

I believe you! And like, look, I'm a very private person. Yeah. And if I could afford an island with a moat, I'd probably go live there. But not because I'm running from the cops, right? I'm sure other rich people are doing the same thing. They don't want to be scrutinized; they don't want to be this; they don't want to be that. And as we can see in some examples of the ultra-rich, a lot of them, well, not a lot, but the potential exists for, you know, privacy to be used to hide bad behavior, of course. But don't say that's the thing about poor communities, because look at Harvey Weinstein, okay, right? This sucker bought an island so he could go do his dirt in the dark, right? You know, and so, like, it's not a function of, like, oh well, if you don't have something to hide, then let me watch you. Like, you know, bad people are going to do bad things, and if you have more resources, you're going to use those resources to your advantage. This has nothing to do with economics; it just has to do with access and leveraging your advantages.

 

Debbie Reynolds  32:25

Yeah, well, what do you think? I feel like there's a new caste system being created, in a lot of different ways, around data. So the people who have data have more advantages versus people who don't, or people who have insights have more advantages. But the other thing is, the US privacy laws, as they're being created, a lot of them are around consumer activity. So if you can't consume, or you're not consuming, you know, you aren't protected in the same way. What are your thoughts about that?

 

Pedro Pavón  32:56

I agree. Like, the presumption of being a buyer or someone engaging in economic activity as a prerequisite for protecting their privacy is elitist and paternalistic, and problematic for me. And I've raised this issue many times; not a lot of privacy elite thinkers want to have that conversation. I think part of the reason is, historically, the people who get to sit down in ivory towers and think about privacy are super educated, wealthy people with access to lots of resources. And so that's an uncomfortable conversation. But people like me, you, and others raising this concern is tremendously important. For me, it doesn't just stop at, you know, excuse me, economic activity as a prerequisite for privacy protections. It also, to me, is regionalistic paternalism, in the sense that, like, you know, I applaud Europe for taking the initiative and, you know, redesigning the EU privacy directive into the GDPR, and creating what is essentially, like, a benchmark moment in privacy protection. And I think that the Europeans meant well, and I think they mean well with a lot of the efforts that are underway now. However, I do worry about the fact that there's this propensity by Western policymakers to think they know best and to say that their rules should apply globally to everybody, right? Because I don't think it's anybody's business in Brussels to be telling anybody in, I don't know, pick a place, Mumbai, how to think about privacy. Like, you could argue that lots of people in London, for example, you know, caused a lot of problems in Mumbai over the last 150 years, right, telling them what to do. The same thing goes for Latin America; the same thing goes for Africa. Like, I think regions of the world, and countries within those regions, and groups within those countries should decide for themselves how they think about privacy and how they balance that against other interests. Because it's very easy to say, okay, well, the Western world, particularly the United States, has built these amazing tech companies and tremendous amounts of wealth for lots of people and generated huge economic growth by exploiting data. Well, now we don't want that anymore, so let's throttle that. Well, what about the tech companies developing in Latin America right now? What about the tech companies developing in Africa right now? Are they not going to be able to take advantage of the data that the big incumbents now have? Is that fair? I don't know.

 

Debbie Reynolds  35:34

Right? Yeah. And what do you think about silos? I feel like, as a privacy professional in any type of organization, you have to be able to break through those silos, because privacy is attached to so much of what happens in organizations, and because organizations, especially corporate ones, have been very siloed, right? You know, everyone is sort of like Santa's workshop, like, you do your part, and then I do my part, and then we get it together some type of way. Whereas with privacy, I feel like you have to be able to reach across those silos and communicate with people. What's been your experience?

 

Pedro Pavón  36:08

You know, historically, privacy has lived in this, like, compliance corner of companies, like operations, right. And data protection in general, which includes cyber as well, right, like, it is lawyers and some compliance people, and that's their job, and everybody else is going about their business. I think we're seeing a tremendous change, where, especially where I work, by the way, you know, thinking about privacy and advancing privacy is becoming a part of everyone's job, by design. I think that's great. I think that is a tremendous step forward. There's much more work to be done to un-silo thinking and to transform privacy from something we have to do or else we get in trouble, to, if we get it right, it's good for the business; like, not just that we don't get in trouble, but, like, we can grow, we can create products designed in ways that people are more interested in, etc. I think that's happening; there's more work to be done. You know, the other piece of the puzzle here, as privacy comes out of its kind of corner and, you know, becomes part of the broader consciousness of organizations and companies, is to make sure that that broader consciousness includes voices that are diverse. To the points we said earlier, like, privacy is not something that everyone thinks about the same way. And so making sure that there's room for debate and strategy in the application of privacy controls and privacy rules within organizations is important. And so you just can't have one point of view prevail and decide that your privacy point of view is right and everyone else has to deal with it, everybody has to get in line with what you say. And there are companies out there saying that; they're saying, this is what we think, everybody get in line. I don't think that's the way forward. I think that's paternalistic, and I think that undermines equity and inclusion. Yeah.

 

Debbie Reynolds  38:14

Yeah. Well, you definitely haven't been shy about your thoughts. So this is going to be an interesting question you've probably answered already. But if it were the world according to Pedro, and we had to do whatever you said, what are your wishes for privacy, whether it be regulation or anything else, either in the US or around the world?

 

Pedro Pavón  38:33

That's a really hard question. But if I were king, the first, the most important principle of any privacy action must be that the application of it be fair. That's hard. You know, I could spend two hours trying to unpack that and probably wouldn't make a lot of progress. But my biggest concern right now as a privacy practitioner is that we are not listening enough to groups who we historically don't listen to as we develop a worldview in our profession about privacy. And that concerns me greatly.

 

Debbie Reynolds  39:17

That's a great answer. That's a great answer. I agree; I share your thoughts. I think, you know, we must be related in some way; we agree on this very much. So I'm happy to see your voice out there. You know, people really listen to you, as you really stand out. You have a lot of swagger, as I stated before, and you're really smart about these things. So I really like that you're thinking about these things in these ways.

 

Pedro Pavón  39:47

Well, I appreciate it, and look, it's mutual. I'm a big fan of yours, and I can't wait to have you on my podcast so me and Andy can pick your brain on some of the work you're doing. And I'm very grateful for not just the way you spread important information to large groups on social media and in other ways, but the way you frame a lot of the issues. It's important, and it's just good to see a strong, smart black woman leading the charge. I think this is great for the profession, and you're setting a great example for others coming behind you.

 

Debbie Reynolds  40:29

That's the biggest compliment. Thank you so much.

 

Pedro Pavón  40:31

It's just a fact; it's not even a compliment. That's just what it is.

 

Debbie Reynolds  40:36

Well yeah, we definitely have to. I definitely would be excited to do your show and happy to have you back.

 

Pedro Pavón  40:43

Oh, whenever you want. We're friends, so whenever you want to hang out, I'm here.

 

Debbie Reynolds  40:47

Oh, thank you so much. This is great. This is great.

 

Pedro Pavón  40:51

Thank you, my friend. And I'll say one more thing: I don't think you do a video podcast out there, but, like, I'm digging the scarf. The scarf is looking fire, so nice work on that. The rest of the world is missing out, but I got the scarf.

 

Debbie Reynolds  41:06

Oh my goodness. Well, thank you so much. I may have to do a pink dress screen grab of this one.

 

Pedro Pavón  41:14

I think you should. You dress up all nice; I'm still here like a bum, you know.

 

Debbie Reynolds  41:16

but you have your Biggie portrait!

 

Pedro Pavón  41:23

Oh yeah.

 

Debbie Reynolds  41:26

So well. Thank you so much. This has been great. 

 

Pedro Pavón  41:28

Thank you, my friend. It was awesome. 

 

Debbie Reynolds  41:30

Okay.  Bye-bye.