"The Data Diva" Talks Privacy Podcast

The Data Diva E87 - Danyetta Fleming and Debbie Reynolds

July 05, 2022 Season 2 Episode 87
"The Data Diva" Talks Privacy Podcast
The Data Diva E87 - Danyetta Fleming and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds “The Data Diva” talks to Danyetta Fleming, CEO, Covenant Security Solutions Intl. We discuss her background and journey into cybersecurity, her idea that cybersecurity is about protecting humanity, how profit motives may lead to detrimental privacy practices, facial recognition technology, how the collection of redundant information leads to privacy and security risks with no redress, how intrusive biometric technology can be, inadequate redress for technological error, how long information will be around, her current privacy concerns, the importance of context, AI as a double-edged sword, why we can’t let technology rule us, why inexperienced designers can be problematic, how the lack of technology can make you vulnerable, how adapted technology can lead to trouble, low tech and no tech, and her hope for Data Privacy in the future.


49:17

SUMMARY KEYWORDS

people, technology, cybersecurity, metaverse, talking, data, IRS, information, AI, privacy, thought, understand, problem, identity, person, money, lost, facial recognition, create, system

SPEAKERS

Debbie Reynolds, Danyetta Fleming Magana


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.  Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast where we discuss Data Privacy issues with industry leaders around the world with information that business needs to know now. I have a special guest on the show, Danyetta Fleming Magana. She is the CEO of Covenant Security Solutions Ltd. Hello.


Danyetta Fleming Magana  00:38

Hello. Thank you for having me.


Debbie Reynolds  00:40

Yeah. So we met, this is so funny. So you're in the US, you're on the East Coast, I'm in Chicago. And we actually met because we were on a panel for, I think it was Women Aspire, in the Middle East. Yes. Isn't that funny? And I thought, all the way around the world, how do I not know this Black woman in the US? After we did our session, I was like, oh yeah, we have to chat and meet and do stuff. So I'm happy that you're on the show. You have such great insights and such great energy, and I was really mad at myself. Why do I not know her? This is outrageous. It was time, right? Yeah, it was.


Danyetta Fleming Magana  01:26

And there are a lot of little rabbit holes in this community too, so it's sometimes difficult for all of us to connect. But I'm so glad we connected through the Middle East, and now we're able to chat here in the US.


Debbie Reynolds  01:38

Yeah, exactly. Absolutely. Well, tell me a bit about your background. You have really deep experience in cyber, and it also fascinates me that you're an entrepreneur like me, right? You've been doing this much longer. I'd just love to hear about your journey in your career.


Danyetta Fleming Magana  02:01

Yeah, I like to say I got into this through serendipity. I initially got sent a job announcement from the Department of Army, and I didn't understand a word in it. Most people would run the other way, and I said, this is interesting, so let me apply. And because I wasn't prior military or anything, I was literally at the back of the list. But somehow they made it to the back of the list, and I ended up getting a job with the Department of Army helping do cybersecurity. At that time, it was called information security; we hadn't come up with the term cyber yet. And, you know, at first I actually hated it. I did not like this; I was looking to get out of it. But as I started to get more into it and learn more and stuck with it, I really understood how this affects all of our lives. And at that moment, I knew that this was something that I wanted to stay connected to and continue to do. So when I finished working with the Department of Army, I went on to work with Booz Allen Hamilton, which is a consultancy, actually a global consultancy firm, and worked with them for several years before I decided to strike out on my own. And part of it was just looking at the longevity of the field. I knew as we continued to delve further into the technology age, as we continued to build and create our lives intertwined with tech, that cybersecurity was going to continue to be a big part of that. And I loved being a part of that conversation. And I love the fact that there's always something new. When I started 20 years ago, I mean, the big deal was a server; now we're talking AI, we're talking cryptocurrencies. So you're always learning, you're always having to expand yourself and think about how the basic concepts pretty much stay the same, but how do you apply them? And sometimes it gets trickier as you go along. So that's, in a nutshell, how I started. It was really one thing led to another, and I took a chance to step out into this field as an entrepreneur as well.


Debbie Reynolds  04:13

I love it. I love it. You have such clear-headed thoughts. And I like your posts; I've been watching you, stalking you on LinkedIn. You put out really cool things; you throw the thoughts out there and let people think about them. And you hit the nail on the head with this one, and I've not heard anyone say this. You posted something saying cybersecurity is about humanity. Yeah, talk to me a little bit about that.


Danyetta Fleming Magana  04:51

You know, the longer I've been in this field, the more I realize it has really nothing to do with the hacker and the guy in the hoodie. You know, that's the image that is put out there. But at the end of the day, cybersecurity is about how we conduct our lives. Actually, the first headline for my company was securing your way of life. And, you know, there's not an aspect of our lives that technology doesn't touch, whether it's getting your paycheck, whether it's your Social Security benefits, your health care; every single thing that we do has technology integrated into it. And so if we can't figure out how to secure and how to manage privacy and our data in a way that actually benefits all of humanity, we're going down a bad path. I don't know if you saw the most recent one I put out; it was a quote from Nelson Mandela, talking about how the way we treat our children is a reflection of who we are as a society. So when you look at things like gaming and the kids, I had this happen to me personally. I've got three kids, and I had one of my sons get out there, and he's online getting access to just ridiculous stuff, one of them being porn, and I was just like, my goodness. And when you go back and you look at it, I said, thank goodness I'm a technical person, so I could go online and hunt everything out and shut everything down. But I can't imagine what happens to parents who don't have that capacity and ability. And what's going to happen to our society when you have children that are getting exposed to things that their minds can't even fully grapple with: what am I looking at? What does that mean to me? And what does that mean to the future? How should I treat other people? How should I treat women? How should I treat men? How should I treat myself if I'm looking at content without a context? So, you know, it's really getting us to think about the technology and have forethought. You know, my mom used to always say to us, just because you can do it doesn't mean you should. And we don't have that thought when we come to technology at all. We don't think about the humanity aspect of it. It's just, oh, look, my fridge can talk to me. Well, that's great. But what does that mean on the flip side? Your fridge could talk to you, your phone could talk to you, you've got your home talking to you; well, now I could pick up patterns I didn't understand before that could tell me things about you. And what does that mean when we bring in artificial intelligence? What are they going to pick up? You know, will they know by what time you wake up in the morning whether or not you're more susceptible to heart attacks? And is that going to then lead you not to be able to have insurance because you're a person who wakes up at, you know, 6:45 instead of 6:30? We don't know, right? We don't know; we don't think about the human aspects of what we're doing.


Debbie Reynolds  07:41

Yeah, that's true. And then, to me, a lot of companies that are making these technologies, they obviously have a profit motive. And there's nothing wrong with making money, right? But the things that they're doing can be so harmful and detrimental. And right now, it just seems like people are saying, well, as a consumer, it sucks to be you, right? Like, too bad for them, here's a 90-page privacy policy, or too bad you didn't have a grasp on the downside of this technology. And I feel like that's such a huge gap. What are your thoughts?


Danyetta Fleming Magana  08:20

Yeah, I mean, I think it's a huge gap. And I think that what we haven't thought about is what does that mean to us as a society? You know, I remember being at a conference, actually, we held a conference here in Chicago, maybe 2014. And we had a gentleman come in from the UK. And he was talking about a retired pensioner who was a teacher, and they went online and got caught up in some sort of feed-the-children type scam that was going on, you feed these kids, and that person got hold of a bank account and wiped them out. And that was such a touching story because the police were like, look, we have no way to know where that money is. Right? We know what account it hit when it left yours, but after that, you know, it's spent for all we know; we have no way to trace it. And so there's that aspect where we don't think about, well, this is a person that spent 30 years of their life trying to build up their nest egg, looking to be able to take care of their grandkids and their children and have a nice life, and now they're penniless. And so, you know, as much as we sit back and say, best of luck, you should have read that, you know, what does that mean when we get, you know, 100 or 200 or so pensioners without their livelihood? Right. What does that mean when you've got these systems? You know, even now, one of the things I'm thinking of putting out actually, while we're talking, is something for law enforcement, because there are people who are swatting them because they don't like what they're doing. And for those who don't know what swatting is, they go out, and they basically figure out where you live, they take data that's readily available through either your camera, your Ring camera, or, you know, some sort of device that you may have. And they target you, and they have other people come to your home. Usually the police come to your home and create havoc for you, scare you half to death, your family half to death, all over the fact that they don't like you. So, you know, it's getting to the point where the line between what's happening online and what's happening in the real world is getting narrower. And we can't keep saying, well, you didn't read that privacy policy? Well, yeah, that's nice, but the person who's going to make a malicious attempt doesn't care what the privacy policy says.


Debbie Reynolds  10:56

Exactly, exactly. You posted something else I want to talk about; it's very timely. So this is about the IRS looking to use this ID.me technology for facial recognition and all this stuff. And I literally almost went on a rampage about this; I cannot believe it, I'm just like, what are you talking about? I could go 1,000 different directions on this one. So apparently, the IRS had this company, a third-party company called ID.me, try to develop a system for the IRS to somehow prevent fraud. So their idea was that people having an IRS account would have to submit to this kind of facial recognition thing, submitting utility bills, scanning, doing all types of crazy stuff. And, you know, I don't know, I'll just let you jump in first. I was like, what?


Danyetta Fleming Magana  12:07

Well, you know, I look at it from two angles, right? Because my way into privacy is through more like cybersecurity engineering, systems engineering. So let's start with, I get the problem. The problem is every year, they've got probably millions of dollars that get sent to fraudsters who go in and steal the identity of a valid taxpayer and take the money. And so the thought behind it is, let's go and create a system where we can get more assurance on identity. The reality is that there aren't really a lot of strong controls to protect that information. And if they lose it, which, you know, the IRS, I believe, has had stuff get lost, along with OPM and some others, and I've actually personally been involved with some loss of information from the Federal government. So, you know, when they lose it, then what happens? And there's a flip side to that conversation. And I think that's where this comes from, and I know that they're walking it back right now, trying to figure out how to manage identity. And that's been a problem that's as old as the Internet. I mean, I remember going to identity conferences, I don't know, maybe in the early 2000s, like 20 years ago. But there has to be a safe and a humane way to figure that out. And it might be that they go back to sitting down and having hard copies of returns sent in order for you to get the money. And people may not want that, but that might just be part of the answer. But getting that type of information when you don't have assurances on how you can protect it, that's a non-starter for me personally.


Debbie Reynolds  14:07

Right. And it's like, why boil the ocean? The majority of returns that happen have no fraud whatsoever. So why try to push everybody into the system as if there were some fraud happening with everybody? That's not the truth. And then much of the information that they're collecting already exists within the Federal government. So if you ever got an ID from our Department of Motor Vehicles, they have all this information, which is to say they have the capability to do this scanning and take your picture; they do facial recognition, you know what I'm saying? So why create another bucket of data that you literally already have, right?


Danyetta Fleming Magana  14:58

Another validated bucket. And that's the issue, right? They have all these buckets. And that gets into one of the aspects of data security we never talk about, which is integrity, right? We've got all these buckets all over the world, but who knows if that information is valid, still valid, invalid, or just dead wrong? No one's going back and doing integrity checks. And I'm gathering that's what they were trying to do with this ID.me, to then have you take the bucket that they have and say what's true, what's not true, so they can further validate identity. But we're just not in a place to protect that information and to provide assurances to people, and if it's lost on every US citizen, what happens then? You know, can that be weaponized, right, by another nation or hacking group?


Debbie Reynolds  15:53

Exactly. I don't think one year of free credit monitoring is going to help if someone steals your facial recognition data or thumbprint, whatever it is.


Danyetta Fleming Magana  16:02

We'll have to do Minority Report; if you saw some of those posts, I like to call that out. That's one of my favorite movies. But, you know, I mean, are we going to end up there, where the guy's going out to get his eyeballs changed because that's the way he gets around the biometric sensors? Right, you know, are we going to have a black market for body parts? And you go get plastic surgery, so you change your face, right? I steal your face?


Debbie Reynolds  16:33

Exactly. You know, I need to watch that movie again. They were pretty accurate about a lot of stuff, and it came out over 20 years ago. This is not sci-fi; people are literally trying this stuff, and if anyone's ever interested in this crazy biometric stuff, there's a website called Biometric Update, and they do all the news on who got investment and the stuff that people are trying out, and it's scary, actually, to think about all the stuff people are trying to do. And it's like your mom said, just because you can doesn't mean you probably should. So I just look at this with concern. I think, one thing, being a technologist, and I'm sure you probably feel the same way, I like to see people try to leverage technology to solve problems. But I don't love everything that people are trying. So to me, part of it is trying to figure out how the things you're developing really benefit the individual. And I don't really see how this thing the IRS is trying to do really benefits me.


Danyetta Fleming Magana  17:50

Probably it doesn't. It was, well, someone put out a requirement and said, this is what we're going to do to try and meet this requirement, which is to have a way to have very strong identity controls. And that's been one of the difficulties of the Internet from its inception, is that there really isn't. I mean, it's an anonymous place, right? I can tell you that I'm Danyetta, and then I could tell you that I'm Sally, and how would you know the difference? There really isn't a very strong mechanism for identity. And so this was someone's attempt to do it without really thinking about the grander implications; what happens if you lose it? You know, who's to say that someone couldn't take a picture of me? Because with some of this data, I want to actually look at the system because I'm curious. You know, they're doing it online, so who's to say I couldn't put up a picture or put my dog in there and say that's what you're scanning? I mean, they're assuming some honesty in the scanning, right? It's happening online. I could put a picture of my great-great-grandmother in there, right? You just don't know. And so I'm going to give the people the benefit of the doubt that they were trying; they just didn't think it through deeply about what the implications were, especially when we're dealing in this world of AI, where we don't know what kind of conclusions these algorithms are coming to. One of the posts that I put up from time to time is that, in particular for women of color, those AI scanners do not do a good job of recognizing our faces. So what happens then, when you're deploying this and I'm a woman of color? Do I now not have access to my Social Security number, to my tax documents, and things like that? So some of this we're rolling out, and it's half-baked; it's not taking into consideration anomalies that happen. There was another one where there was a camera system that's rolled out to many police departments, and you're finding people are having to come back and sue the police departments because the camera is saying, I saw you, you were at this incident. There was one actually that happened, I think it was Chicago, where an elderly African American gentleman had given a ride to a young man, and that young man, I guess, got out and might have been involved in a shooting. But you know, the cameras saw that man's car, they saw the shooting, and it made a correlation that wasn't accurate. He didn't have money to make bail, so he sat in jail until a court date came up. And then they came back and said, well, it's not as accurate as we're claiming. But imagine what's going to happen to people across the country as you see people starting to adopt these technologies and not understanding their limitations.


Debbie Reynolds  21:11

Right. And also, for some of these problems that happen, to me, there is no adequate legal redress. You know what I mean? It's not acceptable that you're going to be in jail for a year just until people figure it out. If this person had the money, the means, or they were a famous person, they probably could have gotten out on bail, or they would have had their lawyer jump on it and do their stuff. And so we already have access to justice issues, unbelievable access to justice issues; there are statistics in Illinois, even, where they said that the majority of people who end up in court cases represent themselves, right? So this is problematic. I mean, I'm not saying, oh, 30% or 40%; it's 70% to 80%, you know, just a ridiculous number.


Danyetta Fleming Magana  22:11

Couple that with the fact that most people have a hard time using their phones. So then they say, you're going to use the tech, and then there's a fallacy that keeps getting put out there: oh, it's the technology, so it can't lie, it can't be wrong. You know, you and I both know it's an algorithm that sits behind that technology. And then the datasets that they're using are datasets that already exist, many of which already come with biases in them. So when they sit down and they say to you, oh, don't worry about it, the tech will never lie. Well, the tech can only do what the tech is told to do. And the people programming it, the people who are executing it, are making judgments off of it, not really understanding how this all comes together, how it is coming to its conclusion. So that's where it gets back to what you said, that humanity and cyber are intertwined, inextricably.


Debbie Reynolds  23:12

Yeah, and then, too, I feel like I work a lot on identity, so I get it, right? So I'm not necessarily slamming this particular company or what they're doing; I just don't like the way they were trying to implement it for the IRS. Right. But the one thing, when I read about this in particular, that really got me heated, was that they said they were going to keep the data, the biometric data, for seven years. For what purpose? Once you identify a person, you don't really need that information anymore, right? You don't.


Danyetta Fleming Magana  23:51

You don't, and what do you do with it? And like I said, it's another thing of people keeping data. We had that big, big data push, you're probably aware of it, like four or five years ago; it was big data everywhere, you couldn't go to a conference without them talking big data. And I try to get people to understand that big data is your data. You know, they're making data ubiquitous, but that's your data. It's not just the IRS; I think companies, in general, have to start questioning what's your liability? What happens with that information? How far can you mine it? And again, it gets back to integrity. Is it any good? Because I know a lot of people, like what I was saying with my son, he went online to create all these fake identities, and they don't know who in the world he was, trying to get around mom's defenses.


Debbie Reynolds  24:50

Right.


Danyetta Fleming Magana  24:51

It's like, you've got this database full of, you don't even know what you have, right? And you go to use it and say, oh, well, we've got you, but you don't even know what you have, right? I can very quickly tell you that. Exactly.


Debbie Reynolds  25:05

So tell me, what is happening in the world right now, other than what the IRS is trying to do that concerns you about privacy?


Danyetta Fleming Magana  25:18

Beyond what we've been talking about, what concerns me is that we have so many nations and groups coming up with their own privacy regulations and their own privacy laws. And as I said, from my perspective, I come at this really from a systems engineering perspective; it is near impossible right now for people to keep up with the regs we have. And then we've got more countries and more groups adding their own flavor to it. So what you're finding is that people are basically taking a more legalistic approach to it, saying, well, we'll just wait, we'll save the money for the fight, instead of really taking a holistic perspective and saying, let's really design in these aspects and think about this strategically, so that we're protecting information and data. The other thing that I've brought up in the past is I really do think that there has to be some level of ethics, or an ethical body or something, that basically identifies what we consider to be good for the whole, and how that should be structured for the whole, versus allowing that to be decided at an individual and corporate level. That gets back into the types of data that are coming off some of these sensors. We're saying the IRS is keeping data for seven years, but I believe that you have to go and actually delete your information off of, like, the Echos and some of those devices. I don't even know if they have a limit for how long they keep that data on you. So we're getting to a place where there are copious amounts of information, but there really aren't any true ethical guidelines about how this should be used, and we're trying to regulate it with a hodgepodge, a patchwork, that really is different everywhere. I think a lot of countries have tried to follow what they're doing with GDPR, but for the most part, each individual jurisdiction has its own view of what privacy means to them and what security means to them. And that's a tough place to be in.


Debbie Reynolds  27:39

Yeah, definitely. You mentioned something earlier; you said a word that triggered me, and I would love to talk about this, and the word is context, right? So context matters a lot. Right? It matters because not every use case or every situation should be handled the same way. So I have a concern related to AI. AI is a knife that slices both ways, right? Maybe it does the good thing that you wanted it to do, but it can also swiftly and mercilessly do the bad thing that you don't want it to do. So it isn't a scalpel; it is literally a machete. What are your thoughts about that?


Danyetta Fleming Magana  28:30

No, I agree with you; I think we're entering into a place, look, let's back up a little bit. You know, part of my concern is that we're entering into this new technology where it is self-learning, and yet we haven't quite figured out what to do with cybersecurity and privacy today. We were overwhelmed and lost before we had the use of AI at the levels we're seeing it and the levels that we're probably looking at over the next two to five years. So it concerns me greatly, because we are letting loose technology for which we don't have a great understanding. We don't even understand what we're doing now, to be honest. We're just throwing up servers and talking about the cloud, and people think you're talking about a cloud in the sky, you know, just running, running, running, and connecting things, and not really understanding what this all means in the end. That's what concerns me. I was talking about the lawyers, and if I have any lawyers that come across me, I'm like, look, this is going to be the new field: go into the cross-section of cybersecurity and law, because people are finding, and they're going to continue to find, that they're going to have a run-in with a program. And there are no legal precedents that are set for how you handle a run-in with a program. Like, I was listening the other day, and I think there are companies out there that are using these AI programs for HR. So say, if you're late so many days or you slow down on the production line, then the AI will take note of that, and then it'll correlate that information and pull that up to a manager. But people are getting fired because the AI is correlating information that a regular hiring manager may not necessarily have had access to, or they may not see; you know, they may walk the floor once or twice a day, and if they see you working, they say you're working. But this is correlating information and saying to people, okay, well, you've gotten outside what I consider to be normal, and therefore we're bringing you up to let you go. Now, I guess the failsafe in it for the companies is that they have a human who goes back and looks at that information, but they only have one human for every maybe 200, 300 employees. So those are things that we haven't even begun to think about when we talk about AI. Again, it's that intersection between humans and technology, and where does it end?


Debbie Reynolds  31:21

Yeah, it can't just be technology; there have to be humans in there somewhere. And this is a thing that has literally happened, and we can talk about why this is wrong literally or figuratively. We can't be in the passenger seat, right? We can't let the technology drive us, right? And we've seen people jump into the passenger seat and try to let a car drive them, and it didn't have a good result. Right. So from another perspective, the figurative perspective, that's what people are doing; they want the AI to do human things, and AI doesn't have judgment, right? It doesn't have context; the technology, the data, doesn't have context until a human gives it context. It's just collecting information. So I feel like we're abdicating our responsibility to exercise judgment. And that's where ethics comes in; that's where judgment comes in. No pilot should have to wrestle a plane from an AI. That makes no sense.


Danyetta Fleming Magana  32:31

It makes no sense, but that's where we are. I mean, we're at the point where, and I think that's why I like your show and I like what you're doing, because people have to have these questions posed to them so they can start thinking about this as we go forward. Because I think we're at that place where we can turn around and start to make some better decisions when it comes to technology. But I think if we don't change course, in two to three years, I mean, we're going to be sitting in a place where we're going to go from Minority Report to The Matrix, you know, hey, you're getting in my way, we're just going to put you in a pod, and we'll use you for electricity. I mean, I think we're at that nexus point where we can start having these conversations to say, what does it mean to be human? We built this to really serve us, not the other way around. And when we start abdicating our responsibilities, as you said, and we stop allowing for context, you know, what's the morality? A program doesn't have a morality other than what you program it to have, right? And who's the arbiter of what that morality looks like? Is it the software engineer that's creating it, or the coder? You know, who's determining that? And that's something that we need to understand and take control of as a society. This isn't just something we leave alone. I mean, I'm surprised we don't have bodies in, you know, the UN or in places where we're still talking about policy; we're not talking about, well, what's the policy for this technology? I mean, the next place is the metaverse, right? That's where we're going to jump to next. So what's the policy for that? Right, if you're in a 3D format, do the laws of the land apply to you or not?


Debbie Reynolds  34:18

Yeah, exactly. And to your analogy about going from, like, Minority Report to The Matrix, yeah, the Metaverse would probably put us in the Matrix. Partially because what's going to happen is that if you don't have a smartphone, if you don't have these devices, then this data will be collected about you regardless, right? It could be used against you regardless, but you won't have any agency. So you'll basically be food for the Metaverse.


Danyetta Fleming Magana  34:52

Yeah, mentally and emotionally, and probably eventually physically, if we're to believe The Matrix. But I mean, that's where we're headed. And it's such a slippery slope, because there are people like, oh, we don't want to give up all our advances. You know, you don't have to give them up, but you have to think about how they interact with you and how they interact with us as a society. We get to the point where, in grocery stores, you don't even need to pull out your wallet. They don't need tellers anymore; you just walk in, it's hooked up to your Amazon account, you roll through the store, pick up your groceries, and roll out.


Debbie Reynolds  35:33

Part of this problem, and I would love your thoughts on this, to me, part of this problem is that some of the people who are designing this have not experienced problems like this, so they don't know what the downside of it could possibly be, because they've never had that. So an example I give is, let's say you go to a grocery store, and it has a mat that you step on, and the door opens, right? So every time you step on the mat, you don't even slow down; you just keep walking because you know that this door will open for you. But what if someone else behind you steps on the mat and the door doesn't open for them? You'll say, well, there's not a problem there, because when I step in front of the door, the door opens. But you're not thinking about someone else: why isn't the door opening for someone else? So it's kind of like this idea, I don't know what you call it, a fallacy or something like that, where you're really not thinking. Here's another example: when they were doing the COVID passport thing, where they're saying, okay, we're going to create this app, and everyone's going to have this app, and they're going to be able to tell who has COVID. And I'm like, only 45% of people in the world have smartphones. So if you don't have a smartphone, then you can't participate. A lot of people thought, okay, that was the solution, let's go to the next thing. And it's like, no, let's back up and think about this whole thing to begin with.


Danyetta Fleming Magana  37:05

All right, no, no, you're absolutely right. It comes down to perspective and creativity. I mean, this was even talked about during 9/11. They were like, wow, how come we couldn't see the people that came into play? Well, that's perspective and creativity, right? You didn't understand how other people view things and how they could creatively think about using something that you built for one purpose in another way. And that is probably the crux of the problem with most of the technology we have out here. It's, you know, I'm trying to solve problem X, but I don't think about what happens with Y, you know, A through Z; you just thought about X. You're not thinking about all the possible combinations of how this could be used, how people can get what they want, and understanding how ingenuity works, right? Because going back to that example of the person stepping on the mat, if you need a credit card to step on the mat for the door to open, then I'm going to figure out how I get a credit card so the door can open for me, right? And it may be that I take the credit card of the person in front of me, right, and the system doesn't have a check to say, hey, this person has come in six times in a row.


Debbie Reynolds  38:29

Absolutely.


Danyetta Fleming Magana  38:30

And a lot of it, again, gets back to diversity, right? Because we're in a field that's not very diverse, we're in a place where you've got people that are coming in with a perspective that's just their perspective: that's how they would do it, that's how they would access things. And it's not just that, it's gender, it's class. And based on that perspective, you have different ways of approaching life. So a lot of this gets into the reality that we're creating solutions that are for a very, very small group of people, and we're creating this disparity, not just with how we handle information, but with how we protect information and how we're going to figure out how to live.


Debbie Reynolds  39:19

Absolutely. You know, I keep beating up on ID.me about this, but it's not really about them. They just happened to be the ones who had the baton at the time. But this is something I talk about a lot, which is, you take a tool or technology that maybe was made for one type of use, and then you try to throw it into, especially, a government use, right, where you have fewer problems if your customer base is more curated, but if your customer is everybody, you have to have an everybody solution. So you have to have a solution that fits every person in every scenario. You can't assume everybody has a smartphone, or you can't assume that someone can scan a utility bill and upload it to their computer. You know, some people barely want to use a cell phone at all. So I see this a lot; we have this problem in marketing, too. For example, people create a tool that will say, okay, I want to sell this shirt, and I think 80% of these people, if I send them this coupon, are going to buy this shirt or whatever. And then they take the same technology and put it into systems that profile people for crime. So what do you say, like, okay, there's an 80% likelihood that this is the person that committed this crime? No, it's either 100% or zero, right? You're either 100% certain, or you're not. So it doesn't work that way.


Danyetta Fleming Magana  41:11

Exactly, exactly. This is why I like the design side, because you get to ask people those questions and get them to really think through, like, why? What are you trying to achieve? And often, people look at something and say, well, I'm just going to put it over here. Well, it doesn't work like that. And what are you going to do with the data? What kind of assurances do you need? And those questions aren't always very clear. And so that's how we end up in this space of, I would say, our cybersecurity nightmare that we keep living over and over and over, which has affected privacy, affected all the different aspects of the intersections of cybersecurity, because we're taking tools, and we're saying, okay, well, can I use this shoe to hammer a nail in? Well, you can, but is that the right tool? And what are you going to do the next time, right, if you've got 50 nails? What's going to happen to the shoe? So those kinds of questions aren't coming up, because everything is about speed to market. When you talk about marketing, everyone wants to be out there first. I was actually listening to a podcast, well, it was on Clubhouse, and they were talking about health technology and how you have to get things out to market so fast. And I said, well, you know, but there's an ethics to it, right? There's a question about, well, what are you getting out fast? And what harm could that potentially bring to the individuals that could be using your health technology? So I think we end up in this really weird place without what I call having a clear plan, having a roadmap of where we're trying to go and what we're trying to achieve. Often, we're trying to achieve it without a roadmap. We're just going out and saying, we're going to go do this and make it happen, and grab this technology and that technology; no one wants to build from scratch anymore because it's expensive and takes a long time. And so we hodgepodge things together and don't understand the unintended consequences that come with it.


Debbie Reynolds  43:30

Yeah, well, let's talk about low tech and no tech. Okay, so I did a panel recently about privacy in the Metaverse, with super smart people on the call, right? And they're talking about how people are concerned about ethics and are concerned about all these technologies, and people don't understand the potential harm. And someone, you know, obviously well-intentioned, right, said something like, well, why don't we build this tech? Because, like me, you know, obviously I like technology, but to me, not everything is a technology problem; some things can be done from a human side that doesn't require technology. So here's an example: don't collect certain things. How about that? You know, so they won't even be in the system to begin with, and then we don't have to worry about how to protect them. You know what I mean? So I think a lot of us get so enamored with this idea that, okay, we're going to buy this tool, and then we're good, and it's like, no, there's a human side to that. To me, some of these problems really are human problems that we're trying to have technology solve.


Danyetta Fleming Magana  44:46

Absolutely. Absolutely. And you know, it gets back to us asking the question, what are we trying to do? Why are we trying to do this? What are we trying to achieve? And often, when you ask those questions, people find out, you know what, maybe I didn't really need to have the phone number and the Social Security number; that was just something that I would like to have. Or maybe I really don't want the system to do this. But it's getting that discipline back. And that's one of the things that has been lost, because the longer it takes you to build a system, the more questions you ask, the longer it takes to build, and then the longer it takes to get to market and the less money you make, because if somebody else comes up with that same idea and throws it out onto the market, then that's money, that's revenue share that's lost, and first-mover advantage still plays out. So from a low-tech perspective, I think that we need to really look at that more than anything. I think people get enamored; it's just like the long lines to get the iPhones, I never understood it, I was never that person. But, you know, you can't deny that people like tech; they like that idea that I can push a button, and it will give me what I want, or it'll make things easier. You know, it's lovely to think about the fact that I can go on Uber and they can bring me my food. But there are some human aspects to it that still need to be thought about. What happens to the individuals that used to perform those services? How can we do this in a way that brings integrity to some of these systems? Some of them are just connecting people and doing very little background checks, or limited background checks. What is it that ultimately makes this service valuable? And those kinds of questions, that's why you're seeing some of these things fall out, because those questions aren't being asked. You know, I love those apps like anybody else, but I've had my fair share of bad experiences.


Debbie Reynolds  46:58

Wow. That's true. So if it were the world according to Danyetta, and we did everything that you said, what would be your wish for privacy anywhere in the world, whether it be technology, law, or human stuff?


Danyetta Fleming Magana  47:14

Good question. I think what I would want is if we could go back to putting people first, understanding the basic concept that we are one, and being able to then make decisions from that place of looking out for one another. Because that basic question of how does this affect me, the golden rule, do unto others as you would have them do unto you, that's missing in tech, very much missing. So if I had a magic wand, I'd like that to be sort of the underpinning for our regulations, for our system design, and for us to go back and look at the things that we have in place and ask that very critical question.


Debbie Reynolds  48:06

I love that. We're one. We are one. We're not food for the Metaverse.


Danyetta Fleming Magana  48:12

No, we're not, and if we have our way, maybe we can convince enough people to turn the boat so we don't become food.


Debbie Reynolds  48:20

Oh, my goodness. Well, thank you so much for this. This is fantastic. I'm so glad we're able to do this chat. I think the audience will really love it.


Danyetta Fleming Magana  48:28

It's wonderful. Same here. I had a ball. So if you would invite me back, I'm more than happy to come back.


Debbie Reynolds  48:36

That would be great. All right, talk to you soon.


Danyetta Fleming Magana  48:40

Okay, take care.