"The Data Diva" Talks Privacy Podcast

The Data Diva E49 - Masheika Allgood and Debbie Reynolds

October 12, 2021 Debbie Reynolds Season 1 Episode 49
"The Data Diva" Talks Privacy Podcast
The Data Diva E49 - Masheika Allgood and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds “The Data Diva” talks to Masheika Allgood, Founder and CEO of AllAI Consulting, LLC, and AI Ethicist. We discuss her talent for highlighting the impact of AI and algorithms on humans, the mythical idea of AI versus the reality of AI, assumptions and inferences that can be problematic with AI, bias in AI, the tension between technology and law, Data Privacy in the US, and her wish for Data Privacy in the future.




 


Sun, 9/19 3:57PM • 56:11

SUMMARY KEYWORDS

ai, people, data, harm, creating, tech, law, privacy, issues, bias, human, algorithm, feel, person, decisions, world, difficulty, application, money, regulations

SPEAKERS

Debbie Reynolds, Masheika Allgood

 

Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues for industry leaders around the world with information that businesses need to know right now. I'm very happy to have a special guest on the show. She has inspired me on LinkedIn; I love her comments about almost anything. Her name is Masheika Allgood, and she's with AllAI Consulting, LLC. She's from the California area. And I have been super impressed with you, your AI Ethicist title; you know, just that title alone interests me quite a lot. I feel like ethicists have their finger on the pulse of the issues that we need to really be thinking about, issues that probably are not yet codified in laws, or maybe shouldn't be, or the ways that we're developing or looking at technologies that need to be talked about further. So without further ado, I would love for Masheika to tell me a bit about herself and how she ended up in this particular industry.

 

Masheika Allgood  01:25

Oh, thank you, the compliments are lovely. I feel special. I've been watching you as well; I follow your five-minute talks and whatnot online, and you provide really good information. So thank you. To give you a bit about myself: my path to AI was random, let's just put it that way. I am a lawyer by training. It's what I wanted to do from very early on. But I also have always loved tech. When I was a young person, you know, tech wasn't really a job; it was more like, you know, video games and, what was that, WarGames? It wasn't seen as a career path. And so I went into law, which I loved. But the practice and the profession of law are two different things. I was more interested in the profession, but everyone I worked for was really about the practice, and I didn't see the law the way that a lot of the businesses and people I worked for did. So I had to define, you know, what is my other passion? How can I move forward? And I really like tech. So I went back and got an international business degree focused on marketing and data analytics, and ended up doing an internship at a hologram startup at an incubator out of Edinburgh University. So I was in Scotland for a little while doing that, which was pretty cool. I came back to Florida, which is becoming a bit of a tech hub, or at least trying to, but at the time was not. And then I finally realized I just needed to get to California. So I threw all my eggs in that basket, came out to California, worked my way up from vendor contractor to long-term contractor, and then finally had enough of that life and said, I need a contract-to-hire job. I ended up getting one of those at NVIDIA, where I worked as a product manager on enterprise licensing software. But I'm the type of person, when I get bored or aggravated, I need to feed my brain positive things, or it spirals. So I got bored and aggravated and was like, alright, what's new, and picked up this book on information theory. The first thing I thought was, well, what about stuff we can't communicate? Like, you know how your stomach feels when your mom calls your name that right way? You know you've got to get it right. You understand what I'm saying because you felt it. How do I explain that to someone who hasn't had that experience? Right? I can't communicate all the things. So that's where I initially started thinking. And then I started thinking about AI in legal applications and came across the whole COMPAS debacle. And then, because I have an English degree and because I'm language-focused, I really got into the issues of natural language, natural language processing, and, you know, the difficulty of the robustness of data, and using it in applications where you don't have robust data. So the first paper I wrote, which actually got accepted for a GTC talk, was about, you know, if I create natural language processing that's not particularly robust when it comes to accents, what happens if I try to take that to, you know, Alabama, Mississippi, and replace 911 operators? Like, how is that going to work? So looking at it from an application point of view: this tech that we're putting out, if it doesn't have the robustness, how does that play out in reality? So I spent some time at NVIDIA trying to work through AI ethics and started a working group to try to, you know, figure out how NVIDIA could do ethics in an NVIDIA-specific way. We took our ideas to the CEO, and he was not interested.
And so I kind of figured that there wasn't really a place for me there. Because it is important to me to feel like, you know, whatever I'm working on is a benefit to society, and if it can't be a direct benefit, at least we're doing no harm. Right? I am a lawyer, and ethics are key to who I am. So I left NVIDIA, started AllAI Consulting, and spent the initial couple of months of quarantine taking courses: statistics, a variety of AI, just foundational courses, you know, so that I could understand the tech itself and what's under the hood, as well as continuing to read and try to be on the cutting edge of what was going on in the research and application space. And yeah, so we've come to this point now, where I apparently have garnered a bit of a following on LinkedIn, which is cool, and I'm really starting to, you know, be able to make an impact on these issues that are of great importance to me. Yeah.

 

Debbie Reynolds  06:13

I was so excited to try to get you on this show. I love to see your comments; I'm like, yes, yes. Because you really know how to slice through the issues in a way that, I feel like, a lot of people don't, especially with AI and ethics. A lot of times people try to do what I call, you know, the Beach Boys version of what they think AI is, when maybe the Jimi Hendrix version of it is a little bit more gritty. And I feel like you really do that, so I get excited when I see your comments. So one thing you said, I would love to talk about it. You had posted something, and I just love your stuff: you said something about how algorithms are not the problem, and I would love for you to explain that a bit. But I'm just throwing something in here that I think is interesting, and I agree with what you say, and sometimes people don't want to go down this rabbit hole: you were saying that the algorithms are discriminatory, right? They discriminate. To me, discrimination is a knife that cuts both ways. So obviously, if you're creating AI, you desire a certain result, right? It is looking for a certain thing. But then the way it may slice the other way is that the thing they're looking for may create a blind spot for something that may cause harm to someone else, right?

 

Masheika Allgood  07:59

Mm-hmm. No, I think that's completely legitimate. I think the difficulty is, all of this stuff is actually R&D. Like, the tech is not mature enough to be affecting people's daily lives, because we honestly don't understand it. When I made the post, I didn't think it would be as controversial, because I just felt like we keep tiptoeing around this issue of bias, as if it's, you know, something separate from the algorithm. But that's fundamentally what it does. It's a discrimination engine. That's its whole point: to classify and to, you know, cluster groups of information. Well, that's discriminatory. You can call it bias, you can call it discrimination, but it's the same thing. I'm looking for certain qualities or certain aspects of certain features, and I'm going to classify groups or entities based on that. Right. And so that is precisely what it is intended to do. That's how the tool works. So I think we've been tiptoeing around it as if it's something separate from the tool without just acknowledging, no, this is what the tool does. The issue then is, what do we do with that understanding? And so you've got one really popular school of thought, which is, well, let's mitigate the bias; let's see if we can, you know, eliminate it in the data, and let's see if we can minimize it in the algorithm itself. And I understand that pathway, because it's tech-driven. And the issue with tech is we want all of our solutions to be tech, right? So we find an issue with bias, and it's like, well, how do we solve this with the tech itself? But if that's what the tool is meant to do, all you're really doing is trying to position it: I want to get rid of this kind of bias, so I don't want you to discriminate based on race; I want you to discriminate based on, you know, geography and age and these other things, but I don't want you to discriminate on these four or five factors. So can I choose what you discriminate against, technically? But what we really need to be looking at is how the tool works. How do we use this tool as it is to benefit us? Right? So that really was what I was trying to get at: we are hiding the fact that bias is the core of AI. It is what it is. So instead of trying to hide that fact and paper around it and solution around it, let's recognize what it is and then use it to our benefit, right? If AI can illuminate areas of bias, and I can use that in my human decision-making in a positive way, it's only a problem when I try to cede all control to the AI. So it's the application, the automation of human decision-making, where that bias becomes harmful. But if I'm using it as an input, then I'm in a position as a human to say, well, okay, I'm not going to make that decision, because that's the wrong kind of bias. Like, I thank you, AI, for bringing it up and letting me know that I've created this model that acts in a way that I don't really believe, but I'm not going to move forward with that. But this push to make the AI the decision-maker, to have the AI take over human mental function, human thought and action, is misguided. And that is what's driving us down the wrong road. So instead of trying to solve around bias, we need to just change the problems that we're asking it to solve.
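A minimal, hypothetical sketch of the point above, in Python with entirely synthetic data (the feature names and numbers are invented for illustration, not taken from the conversation): telling a model not to look at a protected attribute, the "can I choose what you discriminate against" idea, does not stop it from discriminating, because the classifier recovers the same split from whatever correlated features remain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic world: a protected attribute we "remove" before training, and a proxy
# feature (think zip code) that is strongly correlated with it.
protected = rng.integers(0, 2, n)                       # 0 or 1
proxy = (protected + (rng.random(n) < 0.1)) % 2         # flips ~10% of the time
income = rng.normal(50 + 10 * protected, 5, n)          # historical inequality baked in

# Historical labels already reflect biased decisions.
label = ((income + 20 * protected + rng.normal(0, 5, n)) > 65).astype(int)

# "Fairness through unawareness": train WITHOUT the protected column.
X = np.column_stack([proxy, income])
model = LogisticRegression(max_iter=1000).fit(X, label)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: positive-decision rate = {pred[protected == g].mean():.2f}")
# The rates still differ sharply: the model rebuilt the bias from the proxy features.
```

Under these made-up assumptions, the two groups still receive very different positive-decision rates even though the protected column was never shown to the model.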

 

Debbie Reynolds  11:28

So eloquently put. Oh, I just love it. And I say this all the time: I feel like we're abdicating our human responsibility and judgment to AI. And, I don't know, I don't want to get in trouble, I have people I know who work at Disney, but I feel like this is the Disneyfication of AI. I tell people, people treat AI like a teddy bear, and it's really something totally different.

 

Masheika Allgood  11:59

Well, actually, I see it as way more insidious than that. I don't see it as Disney. I think you're putting sheepskin on a wolf. Like, when you look at what you do as a developer, you know, when I'm designing AI that is going to determine if people get benefits, if people get accepted into college, if people get offered jobs or even shown jobs, if people get mortgages, if people go to jail or stay in jail, or, you know, have constant policing around their home; when you look at the applications that you're using AI for, you're providing a tool for a very small group of people to impose their will on large groups of society, and not allowing any recourse. If the AI has an improper input, who do you go to for that? Do you even know the AI made that decision? So how are you going to fix it? Right? And so for me, it is insidious, it is cruelty, and it is a problem. I came to it in a very different sense because I was a lawyer, and I've seen, you know, how a bad lawsuit or a bad ruling can destroy not just a life but a community; it can suck the life out of an entire community. You know, some people are pillars, and you move them, and everything around them falls. And we can see that, and we can argue against that judge, or we can appeal that. But when you have an AI system where, oh, the knowledge is proprietary, so you can't even probe, you know, the data we used or how we tuned the algorithm, then you're basically saying those four or five guys who made the decisions are determining the lives of people they're never going to meet, people whose devastation they have no understanding of, and they will never face the consequences for it. So I'm in a much different place. This is why I post the way that I post, because we can't keep tiptoeing around the danger that is ever-present and real right now.

 

Debbie Reynolds  14:12

Yeah, totally right. Oh, my goodness. You know, to me, if I have to describe it to someone, it's like, you know, that mat that you step on when you go to the grocery store, and it opens the door, right? So for someone who's creating this AI, maybe every time they go to the grocery store and they step on the mat, they know the door is going to open, so they walk through it. And then someone else walks through, and it doesn't open the door. And their argument is, well, it opens the door for me; I don't know what the problem is, because it does open the door for you. Right, exactly. You're not the one stepping on it and having it not open, or, you know, you're just ignoring the fact that, regardless of whether it happens to you or not, it is harming someone else. What are your thoughts?

 

Masheika Allgood  15:01

Well, I think that there is a level where there's absolute knowledge that what we're putting out is bad. Like when you read about, oh God, Facebook; they just came out with a variety of stories on Facebook this week that pinpoint this issue, right? Clearly, the issues within Facebook go directly to the top; they've been shown to the CEO, and he does what he wants anyway. Right. But whether the individual developer who was creating that system understood the harm at that time, I don't necessarily know. And one of the questions I asked on my LinkedIn was a question that was brought up by my mentor, Randy Williams. She was like, you know, when I have these conversations about robotics, no one's talking to me about carbon costs; you know, in my degree, no one's telling me how much carbon I'm generating doing all this testing and programming and whatnot with AI. No one's having those conversations. She was like, what other things are people not learning when they're going through getting degrees in data science and software engineering? And I think this focus so hard on STEM, to the exclusion of all else, has put engineers in a position where they don't have, you know, the overarching philosophical, political, social knowledge to put anything they're doing in context. So when I talk to engineers, they're not anti what I say; they're always shocked. They're always surprised, like, I never considered this in this context. Because those aren't the trade skills for engineering, right? I'm a lawyer; my job is to look around the problem. A sociologist's job is to look around the problem; a political scientist's job is to look around the problem. The arts and the humanities, that's our job, to look around these problems. Engineering, they funnel-focus them from, like, high school all the way through. And so they don't get this overarching understanding of how the technology fits into regular life. So I don't approach this as if developers are, you know, blindly creating whatever. I don't think they've been given an opportunity to have this kind of discussion and to see their work within these kinds of contexts.

 

Debbie Reynolds  17:20

Yeah. And then I also think one thing that sort of gets people up in arms, and I work very closely with developers on Privacy by Design and stuff like that, is that a lot of people get upset about this because they assume that the person who's developing has evil intent. You know, maybe some do, or maybe some don't. To me, regardless of what the intent is, there is residual harm that can result that they need to be aware of, you know, down the line, even if it doesn't impact them. So I feel like if these harms impacted the people who were creating them, they would develop their tools in different ways. I don't know.

 

Masheika Allgood  18:10

No, no, I agree with that. That's the whole reason for the argument for having a variety of people in the room, right? Because I've sat in a room, and someone says something, and I was like, oh, for real? You know how that's going to play, right? And so there are issues of cultural disconnect. And, you know, I've worked with people globally, and I don't know every culture. In my international business degree, we did a lot of cross-cultural communication, you know, to learn and be able to unearth some of these differences, but I am not a culturalist in any sense of the word. So I have blind spots. So I've been in situations where people had to check me, like, oh, that's not how we say that; that's not how we do that. Right. So, like, I get that, but I think we focus very heavily on the developers themselves. But AI is an ecosystem, and AI is a team; engineers are not solely responsible for AI, right? You have product managers. Like 80% of the job as a product manager, and I'm speaking as a product manager, is to provide context to engineering. I had someone, I would call him a mentor, but he was my colleague, a mentor, I guess: Andy, the head engineer for my product; I can't remember his last name. I saw him in a meeting, like, early in my career at NVIDIA. And someone had gotten a little extra on the product management side and was trying to tell engineering, like, oh, y'all need to do this in this way. He very calmly said, it is your job to tell us what you want to build; it is our job to determine how to build it. And I was like, oh, thank you. Right, and so he helped me understand the roles within tech; he firmly cemented that for me. Because, yeah, I'm saying it very calmly, but that cut like a knife. So the vibe was like, oh, this dude is okay. But he was very clear: this is my role as an engineer, and that is your role as a product manager. And I came to understand, working with him for some time, that my main role, my main function, was to give them context: to explain why this doesn't work in reality, how the deployment will be affected by you making this decision or that decision, how users intend to use this product, and why this particular structure is not going to work for them. We had a lot of conversations about things they wanted to do with the code that would be cleaner and, in computer science terms, more theoretically sound, and I was like, that shit ain't gonna work; the client over here is using it in this way, and that's not how this works. So we clashed, but they respected it, because I was never just coming to them like, oh, I want this for this reason; I had a client that I was thinking of and how they were going to use it. So I think part of the difficulty we have in tech, particularly, you know, Silicon Valley tech, is that product management has no teeth. Product management is basically product marketing and sales. Like, we don't actually, you know, develop our software, and so we're kind of an afterthought, so we're not providing those key inputs. So I'll take the insurance algorithm that we've heard about, the one New York State is suing over, that was heavily biased towards providing more services to less sick white people and fewer services to sicker Black people.
So if you were less sick and you were white, you got more services. If you were sicker and you were Black, you got fewer services. And obviously, that's a bad outcome. So somebody should have figured out in testing that that wasn't going to work, right? But somehow it made it through testing, and it was being used, and people found it out. And it was like, well, what is wrong with the algorithm? Is it not working as intended? Well, the developers could not find good data around the level of sickness; they didn't really know how to figure that out. So the proxy they used was money spent: how much money are you spending on insurance? Well, the white people in that area had better insurance, and they spent more money, and the Black people had less. So you've got an insurance algorithm that's making life and death decisions based on your finances. Now, if the product management had been on their job and had been speaking to experts in the community, it would have been very clear that that is not a proxy for sickness, and that it is actually kind of illegal, or at least, if it's not illegal, it's definitely not ethical to use it as a proxy. So everyone wants to look at the developers, as in, what were they thinking? But there's a whole ecosystem before that gets out into reality. So there were several people along that causal chain who dropped the ball. And the developers may have been first, but clearly there was a lack of communication between the end customer and the developer to where that got lost in the sauce. So I feel like, yes, there is a role for diversity in the development teams, but there's also a role for diversity amongst your product management, amongst, you know, whoever is interfacing with the clients and bringing that information back to engineering. So I think it's a holistic issue of lack of diversity within tech in a broader sense, instead of just the narrow focus on engineering.
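A minimal, hypothetical sketch of the proxy problem she describes, with synthetic numbers invented purely for illustration (this is not the actual algorithm or its data): when the target you optimize is money spent rather than how sick someone is, a group that spends less for the same illness gets systematically fewer services.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, n)        # group 1 has worse insurance / less access to care
sickness = rng.normal(5, 2, n)       # true sickness: identical distribution in both groups

# The proxy: spend tracks sickness, but group 1 spends ~40% less at the same sickness level.
spend = sickness * np.where(group == 0, 1.0, 0.6) + rng.normal(0, 0.5, n)

# The "algorithm": allocate extra care services to the top 20% by spend (the proxy).
selected = spend >= np.quantile(spend, 0.8)

for g in (0, 1):
    share = selected[group == g].mean()
    avg_sick = sickness[selected & (group == g)].mean()
    print(f"group {g}: selected {share:.1%}, average sickness of those selected = {avg_sick:.2f}")
# Group 1 is selected far less often, and only when far sicker, even though the two
# groups are equally sick overall: the disparity described above.
```

The point is not these particular numbers; it is that a ranking built on a proxy target faithfully reproduces whatever inequity is baked into that proxy.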

 

Debbie Reynolds  23:40

Yeah. You know, you said something: there's no way to de-bias data. And I'd love for you to go into that. But I want to say, to me, in a way, that's kind of a parallel to Data Privacy. What we have now is people saying, we're going to bake a cake, and then, at the end, we're going to take the eggs out, right? So the point is, you know, in order to build these products in a way that doesn't have bias, it has to be thought about at the fundamental, foundational levels, not just something that happens at the end.

 

Masheika Allgood  24:19

No, I fully agree with that. And I think you're right: the way that we've been approaching all of these issues, with equity or ethics or privacy or anything of that nature, is that it's always at the end. Transparency, explainability, it's always at the end; can we give, you know, a little something to make people feel okay? And that's just not the way to do it. Like, the issue with data is, the whole value of data is its bias, right? If I give you data and every single point along that data is the same, what's the value? What insights can I gain? There's nothing, right? The entire point of data is the differentiation between different people, different classes, different entities, and from that I can glean some sort of information and knowledge, right? So instead of trying to de-bias data, what we need to be doing is considering what data we are using, you know, as we make these decisions about the algorithm, right? Because we have this current paradigm of all the data, all the time. When I was at NVIDIA, we were having a conversation because we had some hardware or software that wasn't collecting data; it wasn't sending telemetry data back. So then the question was, do we want telemetry data? And then it was like, well, if we have telemetry data, why don't we just do a data dump into a data lake? I was like, whoa, whoa, whoa, whoa, that's not where the industry is going. We're going towards, you know, selectively collecting data and using it for internal purposes. But that is the legacy system. When you look at a whole lot of industries and a whole lot of different companies, they've just got data lakes: you just collect all the things, and you figure out what you need. But then you end up with algorithms that are based on all of that data when they only need, you know, specific parts, and now you've got bad data, or, you know, unreasonably discriminatory data, in with the rest of the data. And now you're screwed, right? Now you're trying to figure it out on the fly. And so I think the better approach is more in line with what you're saying: let's figure out what we need, and why, and bake that into how we build the algorithm, bake that into the decisions we make along the chain of building it. Right?
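As a small, hypothetical sketch of "figure out what we need, and why, and bake that in" (the column names and purposes below are invented for illustration): instead of dumping every collected column into a data lake and sorting it out later, an ingestion step can enforce an explicit allowlist of features, each with a documented purpose, and drop everything else before it ever reaches a model.

```python
import pandas as pd

# Each feature we keep has to justify itself; everything else is dropped at ingestion.
FEATURE_PURPOSES = {
    "tenure_months": "estimate support workload for capacity planning",
    "product_tier": "estimate support workload for capacity planning",
    "open_tickets": "estimate support workload for capacity planning",
}

def minimize(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep only allowlisted features; fail loudly if an expected column is missing."""
    missing = [c for c in FEATURE_PURPOSES if c not in raw.columns]
    if missing:
        raise ValueError(f"allowlisted columns absent from the export: {missing}")
    dropped = sorted(set(raw.columns) - set(FEATURE_PURPOSES))
    print(f"dropping {len(dropped)} unjustified columns: {dropped}")
    return raw[list(FEATURE_PURPOSES)]

# The raw export contains columns no one ever justified collecting for this model.
raw = pd.DataFrame({
    "tenure_months": [3, 48], "product_tier": [1, 3], "open_tickets": [0, 5],
    "zip_code": ["60601", "94103"], "age": [29, 61],
})
print(minimize(raw))
```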

 

Debbie Reynolds  26:34

Yeah. I guess there's a tension between technology and law, in my view, so you're probably the perfect person to ask this question. I feel like, and this is why I think it's important that you do AI ethics, ethics is supposed to come before law, am I right? Unfortunately, not all laws are ethical. But I think a huge problem with these sorts of AI systems that make life or liberty decisions about individuals is that if you wait until the harm has happened, there may not be an adequate redress, right? People say, okay, well, we have this law, and if this bad thing happened to you, then we're going to go, you know, to a court or do whatever. And because the data is about people, and it can harm or hurt people in all types of different ways, I feel like trying to go the legal route after the fact isn't an adequate redress, so it's imperative that we do this work before the harm, and try to prevent these harms, perhaps. What are your thoughts?

 

Masheika Allgood  27:55

Well, I think that you've hit the right pathway. But I think it's direr than you envision, because there is no law, right? There's no tension between tech and law, because there is no law. So part of what I'm doing is creating educational materials for lawyers, because we don't understand the tech. We've been way left behind on this. And so, you know, the EU just came out with their draft legislation. China has come out with legislation. The US has got bupkis. We've got nothing. We've got a couple of things that we're hemming and hawing about, and there are a couple of people who are really into it, but on the broader scale, there's no law regulating AI in any real sense of the word. We don't even know under what theory you can sue. So, a couple of years ago, an Uber self-driving car crashed into a pedestrian who was walking a bike. Uber had made several decisions that made the AI a lot more dangerous, right? First, they disabled the automatic braking system because they didn't like the herky-jerky drive. Second, they had created a system where the AI, at some point, would give up, but there was no notification system for the driver. And then third, they didn't have any kind of buffer between when the AI would give up and when a crash would happen. Right? So all of that comes together: the AI can't classify this woman, and it's just going back and forth, confused, confused, confused. 1.3 seconds before the crash, it says the human needs to intervene. And then there's no notification, no light, no sensor, no anything to say the human should intervene. So the person who's driving, you know, doesn't do their job, and they crash into this woman, and she dies, right? What was the civil liability for Uber? There wasn't any, because there were no laws under which you could hold them accountable. There was just an argument over what theory applies: is this product liability? Is this negligence? Is this an intentional tort? What is this? So no one at Uber got sued. But the woman who was sitting in the driver's seat, who didn't react in 1.3 seconds because she was playing on her phone, yeah, she went to jail. So this is the difficulty: they prosecuted the person they have laws for, which is the human, but they couldn't do anything about the entire causal chain of events that was set up by people who made way more money than the human in the seat, because we don't have laws for that. We have a couple of laws that can be used in these contexts, but it's new, and no one knows about it. So the law is not in a position to do its job, which means that ethics becomes all we have to protect us, you know, in cases of rogue or runaway applications of AI. And so it's a bit of a terrifying situation.

 

Debbie Reynolds  30:59

Yeah. I guess I'm concerned about the leadership in the world right now, especially the US, where we see all these different countries at least trying to come up with a strategy around data or AI or whatever, and we're still kind of doing this whole 50-separate-states thing, and we don't know what to do on the Federal level. And, you know, there's the election year that's going to happen, or whatever. So I guess I'm concerned that, obviously, some other countries are taking it more seriously, trying to look at it and trying to at least create some type of illumination into how this data is being handled and what it's doing. What are your thoughts? I don't know, like, what should we be doing that we're not doing now? I feel like we're not really doing anything at this point.

 

Masheika Allgood  31:57

So, I mean, I get that feeling. But there are efforts at the state and local levels; cities have been banning certain things, you know, like facial recognition, and instituting laws about, you know, what can be on the streets when it comes to self-driving cars and things of that nature. And, you know, I think New York just passed some kind of rule about a cap on DoorDash fees, and, you know, gig work and fees and whatnot, so they're trying to approach it from a labor standpoint. So there are actions being taken in states and cities and localities. But the difficulty, which I don't think many of us really consider, is that tech companies are countries. They are literally the size and the GDP of countries. When you actually pull up a GDP list, and then you go and look at the market cap of the top 10 or 15 tech companies, they outrank a large portion of the world's countries when it comes to market power, right? So the difficulty at the Federal level is trying to rein in a country within your country; our system was not set up for that. And so there's just a difficulty with how we deal with these usurpers in our space. And so what you're starting to see is these antitrust movements, and I've got to give Elizabeth Warren a lot of credit for that; she saw this before a lot of people did, which is that they are too large to regulate. They have subsumed too much of our daily working lives for us to now put rules on them. Like, how do you not use Google? How do you not use Microsoft? How do you not use, you know, Apple, or Amazon? And then you've got companies underneath that, that provide services that aren't as forward-facing, but they still, you know, enable those companies to grow at that scale. Right. And so I think what it's going to take is, you know, breaking these countries down to the size of companies, and then legislating around behaviors that we don't want to see: we don't want technology making ultimate decisions about people's lives. So that human-augmenting approach, that was, you know, using the tool in the proper context. I think, once we start creating legislation, it can't be in the same mode of standards and regulations that was taken within the EU. I think that is great legislation, but I think you also have to have legislation that is focused more towards use cases and applications. And, you know, I don't care if it fits the definition of an algorithm, or if you found a way to strip down an algorithm so it doesn't fit the definition of AI; I still don't want you to engage in that behavior. The issue is not the definition of AI; the issue is what you're using computer-aided decision-making to do. So I think that's a key point where they went left when they should have gone right. But I think the standards that they've created will be useful, and the structure that they've created will be useful. But I think in the US, given where we are, given that we've created this wild, wild west, we need to come in first on applications: things you cannot do with AI, things that you will be held liable for if you use AI in these ways. I think we start with that, because that's our problem, right? That's what we've wrought: out-of-control application of AI in ways that it should not and never be used.

 

Debbie Reynolds  35:46

Yes, I'm concerned. I'm concerned about many things. But I guess I'm concerned about the consumer element of privacy in the US, where your rights don't kick in unless you're consuming; so if you can't consume, or you're not consuming, you know, that gap is quite wide for that person. And then also, with AI, the harm can be very different for each person, right? So, you know, the example I give is, let's say you bought lettuce in the store, and it had E. coli. And they said, oh, we're going to have this recall, because 50 people in this area got sick, so throw your lettuce away, or something. So the harm happened right away, and you knew everyone was impacted in the same way. Whereas with AI, the harm can happen later: maybe this person, six months from now, doesn't get a job, or maybe a year from now, you don't get into a school. So, you know, the harm can sort of escalate and be in the future, right? So I'm concerned about us not really looking at regulation at the human level, you know, just thinking about it in terms of consumerism, and then also assuming that the harm that is created will be the same for each person.

 

Masheika Allgood  37:06

No, I agree. One of the issues that I've been bringing up to attorneys is that we tend to base standing, who can sue, on specific types of harm, right? It's based on financial or physical harm; those are really the key. You can sue if you have one of these two issues. But what about reputational harm? In the age of AI that we're in now, with social media, reputational harm is a real, tangible thing. But we, in the law, have not considered that, right? So why don't we revisit our rules on standing and consider the new ways that you can be harmed that are novel, you know, in the law and in the world? And so I think the issue of individualized harm from generalized tech is something that we just don't have a handle on, right? Like, you know, we have this idea of class action, which is great, but some people are harmed more than others. Like, if you're younger, you know, this is the intentional infliction of emotional distress, right? They just came out with a study that Instagram is actively harming young women, girls, right? So can we sue for intentional infliction of emotional distress? Because at what point is it intentional, if you know, and you've known for some time, and you continue to feed this to these people, just because you didn't point her out individually? Is that not intentional? We haven't had these discussions. In regulation and law, you know, one comes before the other, but they go hand in hand; the law is an attempt to affirm those regulations and uphold them, along with the societal norms that have been built around the regulations. So this is kind of a joint conversation: whatever the law does, the regulations are going to follow; whatever the regulations do, the law is going to follow. It's a symbiotic relationship. So we've not had these kinds of conversations. But it's a real thing that the harm is individualized, right? Because if I'm 13 when I start feeling the harm, it's going to be compounded by the time I'm 18, but if someone just got on, like, six months ago, how much is their harm, right? And also, some people feel things in different ways. I'm saying, like, the harm of a credit AI being incorrect: well, if you only try to buy a car, it hurts you with your car, but what if you're trying to buy a house? What if it also affects you getting a job? What if it also affects your kid getting into college? You know, I'm an individual person; I make different decisions with my life. This one issue can affect me in seven ways when it affects another person in one way. So I don't know that we have a handle on the fact that, yes, this tech is general, but it creates very individualized types of harm. And if we don't have a private right of action, what's the redress?

 

Debbie Reynolds  40:08

Right, right. One thing that Europe has that we don't have in the same way, and a lot of times I feel like maybe we have too much pride in the past and the way we've done things, and maybe we need to do something different in the future; you know, for me, there were no good old days. So one thing in Europe that I think is really interesting is the EDPB, which is a board that recommends regulations about, you know, privacy and security and stuff like that. And we don't really have that here. You know, I think having some group whose only job really is to look at the current harm, look at what's emerging, and look at it on a human level, and not look at it in an industry- or sector-specific way, to be able to suggest changes in regulation; I wish we had that. Because I feel like what we need is strategies, and right now we just have tactics, or the tactics are happening. There's some strategy happening on the state level, especially in places like California, but as you know, those things sort of build on each other. It's like a pyramid of things that happen. I don't know anything about sports, but you can't just take a football and throw it all the way through the end zone and then, you know, wow, okay, we're going to have a Federal law. It just doesn't happen that way. I think you have to get agreement and consensus over and over, year over year, until you sort of build something that really has some teeth to it.

 

Masheika Allgood  41:59

I think what you're talking about, we can see an example of that in the Consumer Financial Protection Bureau, right? It was a huge fight to get that up and running. But their sole job is to look at what is best for consumers in the US markets, right? That was new for us. I think that was Elizabeth Warren; she's been on the job lately, right? Yeah. But yeah, I think this would be a similar type of bureau. So we've got pride issues in a lot of ways, but I think this one we just haven't gotten around to yet. Right, and I think the need is starting to bubble up; you're starting to see actually some bipartisan conversation, especially about social media. Now, obviously, the way it started was with some foolishness, but, you know, just having them in the space and thinking about these issues is important. And the fact that, even if they're doing it for some really odd reasons, they're still, you know, looking critically at these issues when it comes to social media. So expanding that from just focusing on what I can tweet and why are you banning me into, you know, what algorithms are you creating, and why? What user bases are you going after, and why? So the fact that people are actually looking at, why would we allow Facebook to have an Instagram for kids when they can't protect kids, when they've shown an unwillingness to protect kids on the main Instagram; those are the kinds of things that I think an independent bureau would be able to tackle, as well as, you know, what is our goal, and what is our strategy as a nation when it comes to Data Privacy? Because even within the government, you know, there's a lot of confusion and conflict in the responses and ways that we're addressing it: should we just encrypt it? How much information do we give to contractors? What technology are we using to share and to, you know, keep things private? I think there are a lot of open questions, and people are looking for best practices. And I think a bureau like that, not NIST in this role (I get NIST being suggested for this, but I don't think it's proper for this particular issue, because they're just a general standards body); you need someone who is in the weeds with what is going on and really looking to protect, you know, the individual person who owns this data. Because we own the data; we technically, theoretically, own our data. And I think, just like the Consumer Financial Protection Bureau, we just need a Data Protection Agency to, you know, address how we're giving data to the government and how they're handling it, and then, you know, what is going on in the private sphere, but having them individually tasked with data protection issues. I think you're right, and I think there may be an appetite for that because of all of the issues with hacking and our inability to keep data safe, really, to any degree, across any of our various forms of government and corporate structures.

 

Debbie Reynolds  45:08

Yeah, I would love to see a foundation for someone, somewhere, to put into the Constitution that privacy is a fundamental human right. So I'll give an example; so, you're from California.

 

Masheika Allgood  45:24

I'm from Miami, but I'm in California; I'm in California for reasons.

 

Debbie Reynolds  45:35

As a current Californian at the moment, you all have the CCPA, and, you know, it's the most comprehensive consumer privacy law in the US right now on a state level, but it doesn't cover every industry, right? So the example I give is, let's say you go to a grocery store and you share data with them; they have to comply with the CCPA. But you walk across the street into a church, and they don't. You're the same human, right? You want your data protected regardless of who you're dealing with, but, you know, that's not how it is, right? And I feel like that gap is widening, because as some of these laws are being passed, they're creating more exceptions for what agencies can do, and not just us; even in other countries. For example, the DMV, the Department of Motor Vehicles, is almost always exempted out of these data sale rules, because tons of data sales happen with them, and they do a lot of data fusion with stuff. So I would love to see people have more transparency into how their data is being used, and more agency. Right now, we don't have visibility, and we don't have control.

 

Masheika Allgood  46:58

Well, I think those are really key things that the European regulations bring into play, right? Because they create those structures for that. And just like with GDPR: America didn't do GDPR, but we follow it, because if you're a big tech company, you're doing work in the EU, and then, if you're doing work with big tech companies, you need to, you know, follow the same rules. So I think once a large governmental body puts out those rules, the spirit of those rules kind of proliferates throughout the world. So I think that was a big addition. But as to your point about a constitutional amendment: you know, the states have got to ratify it; it's a whole process. But I think you would achieve the same goals if there were a statement at the Federal level that we view data as a fundamental right, Data Privacy as a fundamental right, and then, you know, create an agency that enshrines that goal into its mission. So I think it comes back to creating some sort of agency whose entire purpose is to protect, you know, our fundamental right to Data Privacy. Right.

 

Debbie Reynolds  48:06

Yeah. That's really smart. Thank you for that; I hadn't thought about it that way. So if it were your world, and we did everything that you said, what would be your wish for Data Privacy, whether it be law, technology, human, consumer, robot? What is your goal, or your wish?

 

Masheika Allgood  48:30

Well, I'm old school, right? Like, I'm a little older than probably a lot of the audience. But, you know, I'm from back in the day when you never gave out your Social Security number. Like, you guarded that thing. Your parents told you from jump: once you memorize it, you don't write it down, you don't share it with anyone. It was the core of who you were, right? It was in governmental systems, and this is who you are; you hold on to it, that is yours. So that, to me, is Data Privacy. That's the paradigm I grew up in, and that's what I'd like to return to: if I want to give you my data for a purpose, it should be for that purpose. If I want it back, I should get it back. You know, it's like anything else. If I buy something and I don't like it, I get to return it. If I give you money for a service and I don't like it, I get to recall it. So I think we should treat data transactionally, the same way that we treat money. It ain't for everybody. You can't just have it because you want it, because you found a cool way to use it. You know, I have to authorize it. If you go to my bank account and take my money, we're going to fight, right? But you can go and grab my data from whoever, and I can't do anything about it. Like, when did that happen? Right? You know, they keep saying data is the new oil, data is the new money; so give me the same rights that I have over money, right? Like, if you see it like that, as that key thing that you need, then pay for it. Why are you creating this globally scaled, you know, behemoth, and I'm not getting any money? You need my data to create it, right? So how am I not getting paid? So I just ask: we speak out of both sides of our mouth when it comes to data. You know, we want to treat it like it's oil, and it's precious, and it should be treated in a certain way. But then, when you get it from me as an individual, you act like it's nothing. So I kind of feel like it's the same mentality that America and European countries have taken to South and Central America, to Africa, where we mine you, you know, for your natural goods, and then, you know, we change it into something different, and now we have that value, right? But if you weren't able to get those natural goods, then you wouldn't be able to do what you're doing. So I just feel like we're using those same tactics and structures around data that have proven to be, you know, discriminatory and, what do you call it, exploitative throughout the course of modern history. So I think it requires a fundamental shift in how we interact with the world, and not allowing, you know, American companies to colonize Americans, or anyone else. Your data is a resource; it should be treated as a resource. So if it were up to me, I would treat data as a resource. And that's how we move forward.

 

Debbie Reynolds  51:17

Wow, this is fascinating. I could talk to you for hours. Thank you so much. I really appreciate you being on the show, and I love your point of view on the things you're doing. I definitely follow you, and I can't wait to see what you'll be talking about. I hope everyone else will be able to tune in and connect with you as well, because I think that you're having very important conversations that other people aren't having. So you kind of stick out in great ways that I love.

 

Masheika Allgood  51:45

So, yeah, all I can say is thank you. I got on LinkedIn heavily last year, just because I had things I was thinking about. And I curated my feed and made sure that I was getting a lot of information from people who were, you know, sharing a lot of stuff. And yeah, so what you see now is a result of all that study and work, and I'm just glad it's beneficial. It was mainly for me to be able to argue with people who I felt weren't looking at things holistically, and then it became this platform. So I do want to give a shout-out to liberal arts, because I think it gets a bad rap since it's not immediately a money-generating type of thing. But the reason I think the way I think is because I'm an English major with a law degree. I would not think this way if I had grown up in straight STEM. So I just want to put out there that there is value in something besides, you know; we can't treat education like a trade. Right, there are trade schools, there are places where you go to specifically learn a trade, but undergraduate is supposed to round you out as a person, help you better understand the world around you, to be able to verbalize and connect and communicate, you know, outside of your bubble. And the more we lose that, the harder time we're going to have with Data Privacy, AI ethics, because you're creating people who just don't have any understanding or any ability to critically see their role in the wider structure of the world.

 

Debbie Reynolds  53:17

I love that. I was a philosophy major in college, and I must tell you, my mother was horrified. Oh, yeah, it was so bad, oh my God, because, I mean, she'd grown up with, you know, you want to be a nurse, you go to school to be a nurse; you've got to be a doctor or a lawyer. So she just could not comprehend why I wanted to do that. But it's helped me a lot in my career. I'm self-taught in terms of technology, and it just fascinated me. But, you know, that philosophy thing, I sort of weave it through my work, because everything starts as a thought, right? Like, before someone creates a spoon, they had a thought that maybe if I bent the metal a certain way, I could use it to eat cereal, right? So being able to translate that into practical things, I think, is really important. And I agree about rounding things out and being able to communicate, right? So I think that's the big problem that we have a lot, and that's why I think your work is really important: we have to break down those walls, we have to break down those silos, where we kind of thought, okay, in order to be, you know, successful or whatever, we're going to create these Santa's workshops where this person only has tunnel vision, they only look at one thing, while you're building something bigger, and it may not be what you want, or it may create some type of harm. So being able to break those silos down and have people like you who know how to communicate across those different silos is very important.

 

Masheika Allgood  54:53

I agree. I just want to say thank you for your time. This has been a lot of fun. I never considered that you would contact me for this. I've just been liking all your stuff because it's awesome. So this is a joy and an honor. And I appreciate it. Thank you.

 

Debbie Reynolds  55:10

Oh, you're welcome. Well, all right. I just couldn't wait to get you on the show; I think I sent your email at, like, two o'clock in the morning. Oh my God. That's really interesting. So I'm really happy to have you on the show, and I'm sure we'll be able to talk more, and we're definitely going to follow your work. Awesome. Thank you.