"The Data Diva" Talks Privacy Podcast

The Data Diva E65 - Ryan Carrier and Debbie Reynolds

February 01, 2022 Season 2 Episode 65
"The Data Diva" Talks Privacy Podcast
The Data Diva E65 - Ryan Carrier and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds “The Data Diva” talks to Ryan Carrier, Founder and Executive Director of For Humanity. We discuss his journey to For Humanity from finance, regulation in finance and privacy, the challenge of the digital divide in privacy, AI and the Metaverse, the potential harm in AI for which legal redress will not be sufficient, the proliferation of cyber data breaches and complacency about data uses with emerging technologies, third-party risk and AI, and his hopes for Data Privacy in the future.




49:59

SUMMARY KEYWORDS

people, data, Metaverse, risks, regulation, privacy, breaches, problem, world, biometric data, built, human, law, humanity, technology, systems, harm, entities, AI, thinking

SPEAKERS

Debbie Reynolds, Ryan Carrier


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world for information that you need to know now. I have a special guest on our show, Ryan Carrier, who is the Executive Director of For Humanity. He works on all matters and types of things to do with AI or, you know, modern technology and the future. And I love your website, how you have kind of the robot hand touching the, you know, the human hand; that's really cool. You reached out to me a couple of months ago; I've been a fan of your work, and I've really been amazed by all the things you've done. When we chatted, it was interesting because you were able to give me a background of, you know, the genesis of this project and what you had done prior, and I was super impressed with all the work in the different workstreams that you have going and the things you're doing, and I've actually recommended your project to many people. Many people call me up like, you know, I want to get involved in AI and, you know, stuff about technology and the future. And I say get involved, you know, jump in. So this is probably the first way that we're collaborating, but we're working on doing some other collaboration stuff in the future. So definitely introduce yourself; I'd love for the audience to hear your backstory. I thought it was really fascinating how you got For Humanity started.


Ryan Carrier  01:49

Sure. And thank you for having me on and letting me share the story, and I appreciate all the recommendations and sending people our way; we do think of For Humanity as a great way for people to get involved in a way that fits them, right, that fits their energy and their passion, their drive and their time, and everyone who's involved in For Humanity is a volunteer. And so it is really about matching up to their ability to plug in and finding ways to take the tools and the work that we do and have it meet people where they are, so that they can have their say and have their voice and really work in advance to make sure that our AIs and our autonomous systems match up to the needs of people and are driven by humanity. The backstory is fairly simple. I had a 25-year finance career, and I was running my own hedge fund for the last eight years, up until about 2016. And unfortunately, I just survived, but I didn't thrive. I was winding that hedge fund up in 2016, and I had some time on my hands. But the most important points were that we used AI to manage money, so I have good familiarity with it. But more importantly, my boys; I have two boys, one was four, and one was six years old. And I was looking at the industry, looking at Silicon Valley, you know, move fast and break things being kind of the Silicon Valley mantra. But what I saw, unfortunately, was that the things being broken were people, relationships, and communities. And that's not okay. I know what the mantra was meant to be; it was, you know, to break bureaucracy, break old styles and old views, and all that. I get that, and I see the value of that. But unfortunately, the risk management culture that came out of Silicon Valley, that was built around AI, built around autonomous systems, was ineffective and remains very weak, very ineffective. And unfortunately, people are getting hurt. So I saw all of that; I saw the downside risks associated with many of these technologies. And I don't mind sharing with you, I got scared, scared for my boys' future. Scared sufficiently that I started a nonprofit with no funding and no plan. And that should highlight for people what these risks are in terms of size and magnitude. And so For Humanity's mission is very simple. It's to examine and analyze downside risks associated with AI and autonomous systems, and where possible, engage in risk mitigations, because if we can mitigate that risk, then we can get the best possible results from these technologies for people. That's why it's For Humanity, and that's where this overly ambitious name comes from, kind of who we work for. But we love it. It keeps us focused on who we work for. And we want to make sure that every time we are thinking about these systems, we want to think of them inclusively. How can we be for more people? And it's always difficult; it's always a challenge. We're never going to be perfect, and we're never going to be fully inclusive. But it doesn't mean we don't try, and we don't keep trying to make sure that all of these technologies are working for human beings. And so, when I started this, I explored through writing, I explored through thinking. We touched on the future of work, technological unemployment, rights and freedoms in the fourth industrial revolution, data: should we own our own data, is that part of the equation? But the thing that I settled on, which borrowed from my experience in finance, is that many of these systems are not built with trustworthiness by design.
In finance, we have an independent audit of financial accounting and reporting. And what most people who aren't close to that field don't recognize is that this is a system that builds an enormous amount of trust. The independent audit of financial systems creates an accounting and audit process whose outputs, 10-Qs, 10-Ks, financial reports, are used in such a way across the industry that entire businesses, entire industries, rely upon these numbers for their whole business strategy, for their whole investment strategy, without even checking the numbers. And what that tells you is that the process that generates these numbers is extremely trusted. It's extremely robust. And it builds what we call an infrastructure of trust upon which other ideas, strategies, and businesses can be built. And so we want to bring those concepts into AI, into autonomous systems, to find a way to enable this infrastructure of trust so that all of our systems are reflecting our values, and our purposes, and our needs. We don't use debits and credits; we don't use balance sheets and cash flow statements. What we focus on is ethics, bias, privacy, trust, and cybersecurity, a holistic lens and a holistic way of looking at these systems to make sure that they are built with humans at their center. Does that make sense?


Debbie Reynolds  07:43

Yeah, it does make sense. I like the way you put that. That's why I thought it was very important that you were able to really explain that. And I like the fact that you have a background in finance. So I can go a lot of different ways with this. One thing that I've seen through history is that great wealth precedes regulation. Right? So that's what happened in the financial industry. When you talk about all the processes, procedures, and regulation that are in place in finance, that's because someone made, you know, billions of dollars, and they said, we need to do this differently. And that's kind of where regulation came in. So what I see in the technology sphere is that we're overdue for regulation in this space, and probably, you know, if you use finance as a lens, it would have happened by now. So what are your thoughts about regulation in general? And, you know, although I think we need smart regulation, regulation to me does not mitigate the harm that can happen, right, before you can get to a court or before you can get to a regulator. So what are your thoughts about that?


Ryan Carrier  09:00

Well, that's a great point. Let me start with that before I come to the regulation part. Most laws are built and put out there, even GDPR, right, which is an industry-leading law globally, European and UK focused, but it's still a law that just sits there. And what happens is entities will engage in the ownership and collection of personal data, and then there will be a problem; there'll be a breach or misuse. And so it's kind of a game of Whack-a-Mole, right? They stick their head up, and they get smacked over the head with GDPR. And it's reactive. But the problem is, once they smack somebody over the head who's violated GDPR, like Amazon's 746 million euro fine for a Data Privacy violation of GDPR, it's already a problem. People have already been hurt by their data being breached or leaked or misused or mishandled or whatever it was; that's reactive. What I see in finance, one of the reasons I love this concept of an independent audit of AI systems, is that when you take tax law and you codify it into a set of audit rules, and you require all entities to abide by those rules and go through those audits, now what you have is a law that's been translated into auditability. And it can be enacted in advance, proactively, before people are hurt, before data is breached, before people are violating the law and creating these harms. And so it's a proactive approach to the law. This is why, when we think about law, when we think about regulation, we want to think about enabling this sort of certification process, or this advance look, that can proactively build in compliance by design. So let's talk about regulations around the world. The Europeans, Singapore, the UAE, even a little bit the UK, are absolutely, substantially ahead of the United States. In fact, the United States is probably the leader in laissez-faire regulation. Now, part of that is our political system, and I don't want to make this a political discussion, but we have issues about being able to agree on some things. The interesting thing is regulation in this space seems to be a bipartisan issue, but it's approached from two different sides. And unfortunately, we're just not there yet. Part of the problem is we don't see significant enough harm, which is remarkable given that both sides of the aisle would argue that social media has had an impact on voting, you know, one of our key and crucial democratic institutions. But what I'd rather see, what I'd prefer to see, is the approach that the Europeans and even the UK have taken in enacting Data Privacy laws. In the UK, for those who don't know, there's a law called the Children's Code, formerly known as the Age Appropriate Design Code, and it governs the interaction between children, their data, and their online world and life. And the net result is that it's actually an extremely thoughtful approach to ensure that when children engage and start putting their data out there, learning what that means, they're aware of the way they're interacting. They're aware of the way that data is being taken and used and how it's being used, because the law demands it. Interestingly, three US politicians, Representatives Lori Trahan and Kathy Castor from the House, and Senator Ed Markey, put out a call to 12 of the US's leading game designers to abide by the Children's Code, suggesting that this Age Appropriate Design Code, the Children's Code, should be followed in how they design games.
And I do agree that it is a gold standard law that is likely to be adopted in various forms in jurisdictions all around the world. It really is a very nice and valuable law. So I see regulation coming in the United States; I see it piecemeal. I see it coming through individual sectors. I see it coming through regulators themselves, particularly through regulatory guidance from the departments. I see it in finance, building upon SR 11-7 and model risk management. I see it coming out of the EEOC; both Charlotte Burrows and Keith Sonderling have called for regulation in the HR and hiring space. I see it coming out of the FTC; Lina Khan is very much in favor, and Rebecca Slaughter as well, both pushing for better governance of privacy laws. One of the problems that we face is that right now, with the slowness of Washington, the ball is being picked up by the states. And that makes it very hard for companies to comply, because they're complying with individual state mandates, and even knowing what all the laws are in this marketplace can be difficult at the state level. And so there really is a need at the Federal level to settle some of these issues. And then finally, on top of all that, you have the new EU Artificial Intelligence Act, proposed in April and still being discussed. It is not yet enacted in Europe, though it is likely to be, and then the question is how these high-risk AIs are going to be managed and, more importantly, how they will meet the demands of what are known as conformity assessments, or the work that we're trying to do, which is these certification schemes, so that they can meet and prove compliance with the law. And that's going to be the story of the next 12 to 18 months, seeing how that plays out. Did that answer your question?
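To picture what "translating law into auditability" could look like in practice, here is a minimal, hypothetical Python sketch. The criteria, rule IDs, and field names below are invented for illustration; they are not For Humanity's certification scheme or the EU AI Act's actual conformity-assessment criteria.

    from dataclasses import dataclass
    from typing import Callable

    # Toy illustration of codifying rules into machine-checkable audit
    # criteria, so compliance can be verified proactively, before deployment,
    # rather than reactively after a breach. All criteria here are invented.

    @dataclass
    class Criterion:
        rule_id: str
        description: str
        check: Callable[[dict], bool]  # evaluated against a system's documentation

    CRITERIA = [
        Criterion("DP-01", "Lawful basis for processing is documented",
                  lambda doc: bool(doc.get("lawful_basis"))),
        Criterion("DP-02", "A data retention limit is set",
                  lambda doc: doc.get("retention_days", 0) > 0),
        Criterion("AI-01", "A bias assessment has been performed",
                  lambda doc: doc.get("bias_assessment_done", False)),
    ]

    def audit(system_doc: dict) -> list:
        """Return the IDs of criteria the system fails; empty means compliant."""
        return [c.rule_id for c in CRITERIA if not c.check(system_doc)]

    system_doc = {"lawful_basis": "consent", "retention_days": 365}
    print(audit(system_doc))  # ['AI-01'] -> remediate before launch, not after harm

The point of the sketch is the shape, not the specifics: once a rule is expressed as an explicit check, a third-party auditor can run it in advance, which is the "compliance by design" idea Ryan describes.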


Debbie Reynolds  15:19

Yes, that was great. Thank you. As you were talking, I was scribbling notes. What's happening in the US right now reminds me of the digital divide and why that's a problem here, a problem everywhere, but definitely here, because many of our laws, almost all of them around privacy, are, you know, either based on a sector, right, or what I like to call consumer law versus human law. So, you know, the example I like to give people is: if you're a resident of the state of California and you walk into a grocery store, they have to abide by, you know, CCPA, CPRA; you walk across the street to a church, they don't, because they're not for profit. So you're the same person, you're the same human, but because you're not consuming in that way, those laws don't apply to you. So for me, if you're not someone who's able to consume things, you can't really exercise your privacy rights. In the US, that is the gap that concerns me a lot.


Ryan Carrier  16:37

Well, and I would add to that that we have a wide array of appetites for privacy; you know, even amongst my peers and my friends here in town, I have some who are like, privacy? Who cares about privacy? You know, I'll tell Alexa everything. And then, through For Humanity, through the work we do, I've got, you know, the people who are super focused on their privacy, very careful about where they place data, using pseudonymisation of their own, even when it's not offered to them; they'll engage in it from the outset. And that spectrum of appetite for privacy is our biggest problem, because the ones who are super focused on it are not a sufficient share of the whole spectrum of demand. That is one of the key reasons that, you know, we don't have a focus on this now. Another is that privacy is actually not an explicit constitutional right. I want to be very clear: there are 250 years of jurisprudence around this, which have built a right to privacy, as they should have. But it is not an explicit right; it's built out of the Fourth Amendment, which is not about privacy and certainly not about data. And so this is why you hear talk about sort of a second bill of rights, possibly, and one of those areas of focus is this idea of privacy. But until your data has value, I don't necessarily see a sufficient number of people driving this. Now, it might come because the data that's out there turns into all of these breaches, and these breaches are pretty problematic and expensive, by the way, very expensive for the entities that get breached. And so we could have kind of an institutional drive that brings people with it, which is fine with me; it doesn't trouble me. But it doesn't change the fact that we still need to educate what I would call the retail population on the value of Data Privacy. So I would add that on top of your remarks.
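As a concrete illustration of the pseudonymisation Ryan mentions, here is a minimal Python sketch, a hypothetical example rather than For Humanity's method; the record fields and key handling are assumptions for illustration. The idea is to replace a direct identifier with a keyed hash so records stay linkable for analysis while the identity stays hidden, provided the key is stored separately from the data.

    import hashlib
    import hmac
    import secrets

    # Minimal pseudonymisation sketch: a direct identifier is replaced with a
    # keyed hash (HMAC-SHA256). Records stay linkable across datasets, but the
    # mapping is recoverable only by someone holding the secret key, so the
    # key must live apart from the pseudonymised data.

    SECRET_KEY = secrets.token_bytes(32)  # in practice, loaded from a key vault

    def pseudonymise(identifier: str) -> str:
        """Return a stable pseudonym for the given identifier."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "purchase": "coffee maker"}
    safe_record = {"user": pseudonymise(record["email"]), "purchase": record["purchase"]}
    print(safe_record)  # the same email always maps to the same opaque token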


Debbie Reynolds  19:08

That's a great way to look at it. Yeah. It definitely concerns me, especially because of the exponential growth of data capture, artificial intelligence, the Metaverse; we can jump into that. We're entering a situation, probably there already in some instances, where data is being generated, and people who don't know all the data that's being collected around them have no agency to do anything with it or have any control. But then, you know, going back to the digital divide thing, let's say someone doesn't have a cell phone or a smartphone. The last statistic I saw was that less than 50% of people in the world have smartphones. So you probably can't even participate in the Metaverse if you don't have a smartphone; that's one of the key things. So you're going to have a population that data is being collected about that they may not even be able to interact with at all.


Ryan Carrier  20:20

Yeah, and whether we talk about kind of a US-centric perspective, where that smartphone statistic doesn't apply nearly as much, right, it's a much higher number, versus a developing world where data isn't even an item, right? It's not even a thing yet until there's a form and a way to collect that data. But it doesn't change the fact that there are what I would call global North entities who are going into the global South, which is one way to describe the developed world versus the developing world, and I see extraction not dissimilar to colonialism. Now, I don't like the term colonialism for what's happening, but it's a pejorative that we should be aware of and work to avoid, whether we are the global North trying not to be colonial, or whether, from a For Humanity perspective, for example, we want to be in those markets, helping to educate people and helping them to avoid that colonization, having their data extracted and sucked out without any value in return, and in fact with a likely detrimental delivery of service and an information takeover. We saw this in a couple of places in the Middle East. Mom-and-pop stores going away because the big-box retailer comes in is kind of a very traditional thing in the US, right? Well, we saw this with information: local information, the local interpretation of information, was going away, because the big entities were coming in saying, we'll scrape all the information, we'll scrape all the data that we need to, and we'll give it right back to you. And now it's the global North telling the story of the global South. And that's a problem. That's a perspective problem. It's an ethical problem. And it's just one of those sorts of nightmares, to use the term digital divide, and I'll use that as well, of where a digital divide can create huge problems of just even information, and we have big information problems in our society today.


Debbie Reynolds  22:42

Yeah. Let's talk about the metaverse. That's like a hot topic now.


Ryan Carrier  22:48

Do we have to use the term Metaverse? Can we use something else? I hate giving Facebook that kind of presence.


Debbie Reynolds  22:55

They were the first ones to try to claim it and try to, you know, make it more widespread in terms of the talk about it. So they want to be in the conversation about the Metaverse, but the Metaverse really is about devices, right? They have software and sensors that are collecting data. The Metaverse is about fusion, so technologies that can make sense of that data once it's collected, in real time or near real time, and then what happens with that data after. So obviously, there's money to be made all along that chain, but, you know, Facebook isn't going to be the focus. The Metaverse is going to be all these other companies that you've never heard of; you know, your coffee pot will be talking to your Roomba vacuum cleaner very soon.


Ryan Carrier  23:47

Okay. Yes, for sure. So I have a whole mess of concerns, not the least of which starts with just the human experience in a virtual world. We are already having significant difficulty in being human to each other. Now, that doesn't mean that a Metaverse can't actually help overcome some of these issues, right? But I do see that technology has created a genuine problem where people believe that they are self-sufficient. And it's a mistake. Because I can order anything, I can go get anything by myself, I have this belief that I could survive on my own. And that's a different existence than the way things were 100 years ago. Let me give you an example. One hundred years ago, I did not have access to enough things to actually survive on my own. So I had to go into town; I had to get lumber from the lumberyard, right, or I had to go to the general store to get anything to survive. Here's the thing. When I interacted with those people, let's say I didn't get along with them. Let's say I had a problem. Let's say we had a fight. Okay. Do you know what I had to do once I had that fight with the guy who ran the lumberyard? I had to fix it. I had to interact with that person to live. And you know what that caused me to do? It caused me to be human, to look at their side. It brings empathy, it brings sympathy, it brings understanding, it brings dialogue, and we find ways to get past our anger, our hurt, our problem. Now, we say the worst things to people because we don't think we're ever going to meet them, and we certainly don't need them. And the net result is we create all of these divides at a very human and personal level, because there's this belief in self-sufficiency. And by the way, it's an illusion. You lose your electricity, you have no money. You know, let's say electricity went down for a substantial period of time. Now you have no money, and you have no access; you have no food. Look at electric cars: your ability to get anywhere might be substantially hindered. Your ability to be self-sufficient has not improved. It's been made easy. It's been made illusory. It's been made systematized. But now you're a part of the system. And then, when you translate that into a Metaverse, into an online space, what if these same things are restricted? What if you can only get access in certain places? You know, think of a Ready Player One world; people were building around Columbus because it was the center of the Internet at the time in that sort of imaginary world, right? Well, what if everybody's clustering around certain nodes where the Internet is being projected out or where energy can be extracted? I look at Bitcoin, and I look at cryptocurrencies racing into Texas because of the deregulated version of the electricity markets there, where they can get more access so that they can mine more crypto. I don't know; I have concerns, as you can tell, but they center around the human. And they center around the fact that most of these technologies are not being built to serve the human experience, human wellbeing. They're built to serve making money. They're built to serve taking advantage, I think, of a lot of our humanity.


Debbie Reynolds  27:50

Yeah. And then, you know, the thing that concerns me most about technology and technology advancements is that the harm can be so great and so detrimental that there is no adequate legal redress. So I think just looking at regulation isn't enough; the harm has to be, like I said, mitigated in a way that tries to prevent it. But we all know, right, from history, that there's a lot of money to be made after the fact. There's a lot of money to be made in reaction. As a matter of fact, you know, I was talking to someone the other day, and we realized, you know, there are people who use the term AI to talk about so many broad things that really aren't AI, right? But the fact that you could actually have technology trying to predict the future by looking at the past, to me, I don't think that's intelligent, in my opinion, because the future is not going to be like the past. So that is a huge problem if organizations are trying to rely on these things, especially around preventing harm to individuals. Trying to pretend that somehow the future is going to be like the past is a problem. It's incorrect.


Ryan Carrier  29:28

I think about that in two different ways, and let me make sure I can remember what I'm thinking here. Number one, I actually think what we advocate for, through our process of looking at ethics, bias, privacy, trust, and cybersecurity, leads to more sustainable profitability over time for companies. So I think there are those out there who could strip and just extract every bit that they can from people today. But if they do it in an abusive way, that profitability will not be sustainable over time. So I actually think that those kinds of entities that can engage in responsible and value-driven service, combined with treating humans with dignity and respect and embedding them in the process, will see better profitability that's more sustainable over time. So that's number one; I actually think that is centric to the human experience, and these humans are needed to generate this profitability, to generate consumer demand, and so on. The other place that I see being crucial and critical is identifying risks in advance and mitigating them. One of the key elements of how we engage in our work is what we call diverse inputs and multi-stakeholder feedback. Now, what underlies diverse inputs and multi-stakeholder feedback is the idea that any system is built by a small team. Usually, it's going to be a team that is not as diverse as it should be. They will come with a perspective. And one of the things I like to say about diversity is, you know, you can have a full team of, let's say, black and brown men and women, right? So already, that's a good start, right? But if they all went to MIT, do we really have diversity? You know, do we have a diversity of thought? Do we have a diversity of lived experience? Do we have a diversity of wealth? Do we have a diversity of backgrounds? These elements of diversity are just as important as what we would call protected-category diversity: race, or color, or age, and so on. So we want to make sure that when we're thinking about diverse inputs and multi-stakeholder feedback, we're thinking about the big word of diversity. And in addition to that, how can we enable those diverse inputs to creatively identify risks and harms? Because if they can creatively identify risks and harms, then we have a chance, in advance of those harms happening, of mitigating those risks and planning for and managing those risks. I'll give you an example. It's an unfortunate example. But the way that I think of diverse inputs and multi-stakeholder feedback is this. How many people had to be in the room at the TSA, you know, the TSA governing how we travel, right, governing how we enter and exit airports? How many people had to be in the room in 1998 before someone might have said, you know what could happen? Terrorists could learn how to fly planes; they could actually go through security, hijack a plane, and drive it into a building. The answer is more than were in the room at that time, right. But if we had more people in the room with more creative backgrounds, thinking creatively about how risks might manifest, then could we have thought about that risk, and taken those steps in advance, and decided if it was a meaningful risk that we wanted to manage against? Now, it's not a perfect process, because you could even hear those words and be like, ah, that's not gonna happen, and then it still happens, right? So we're not talking about perfect risk mitigation.
But what we want to do is let people think, in lots of different ways, about how negative things can happen, so that when a Facebook goes from 200 people at Harvard to 2 billion users, somewhere along the way, someone might say something like, you know what, we now have sufficient influence that people might be able to use our system to influence democratic institutions, like voting. And is there something we can do about it in advance? So those are the two key things that I think about, at the corporate level, at the risk management level, of how these systems are being implemented, and ways that we can work to imagine and identify these risks and potential harms, adverse impact, disparate impact, in advance, to try to get the best possible result, again, for humanity.


Debbie Reynolds  34:35

Yeah, I agree with that. When I think about that, I think about what's happening in cyber right now. We see all these cyber breaches, and it's just ridiculous; they keep getting bigger and, you know, more sophisticated, or sometimes not, right? We're seeing companies go down in flames over just basic hygiene, cyber hygiene. But part of the problem, the thing that sort of fuels cybercrime, is, you know, that confidence thing: it can't be us, we're great, it's not gonna happen to us. It can happen to the gym down the street, but it's not going to happen to me. And so I see parallels there in privacy and, you know, in AI, where people say, for example, I'm sure Warren Buffett is not having privacy problems, right? He probably lives in, you know, the most private of spaces or whatever. But you're not Warren Buffett, you know? You have to think about these types of things, right? So the fact that it's out of sight, or, you know, it's not a problem for some people, doesn't mean that it doesn't impact other people. And I think that's where we kind of trip over ourselves, where people want to take advantage of that. They're like, okay, well, this person is unsuspecting that I'm going to, like, take their information or use it in some harmful way, so I'm only going to show them the cool stuff. It's like, the way that products are made, or whatever, it's: let me satisfy your need for instant gratification, and then delay the harm, you know, down the road, so you don't have to think about that. It's like eating the cotton candy today; you're gonna fall out in six months, you know?


Ryan Carrier  36:21

Right. Well, when I think about cybersecurity, I get, is frustrated the right word, maybe frustrated is the right word, that we don't have baseline norms of operation, right, for all entities, and that we aren't actually meeting the need for cost-effective and efficient applications of cybersecurity that create foundational levels of protection. You know, the Pentagon is a place that gets attacked very regularly, and therefore they're in the business of considering what those risks are and how to protect their data, right? It's an active strategy; it's an active part of everything they do, at all levels, whether they're dealing with human resources software all the way up to, you know, weaponry, right. But Target didn't think of that, and they had a 40 million consumer data breach, right? Experian didn't think of that, and they got a huge breach, right? So what needs to happen, in terms of consequences, to cause entities to take it seriously and to at least establish a baseline? And the funny thing is, most of these breaches occur for the simplest of reasons, right? I didn't run my Windows updates, and therefore they got in. Or worse, it's human-in-the-loop kinds of breaches, where passwords are being shared and usernames are being shared. So some of these are actually relatively simple, relatively inexpensive fixes. And yet there's not a robust environment for ensuring that it happens. And you know what ensuring means, not insurance, ensuring, and thus assuring, that all of these rules are followed. Here's the key thing about what we do with an independent audit: you might be a great person, Debbie, and you might go through a process that makes sure that you've done the right thing. But the human nature of it is that even if you're a good person, at some point in time, you're probably going to be like, I'll take care of it tomorrow, or, I've got other priorities right now. The difference is, if you know that I'm coming to check that you've gone through your checklist, I'm coming to see whether you've done the things that you said you were going to do, you know what human nature is? You do it. That is the very nature of governance, oversight, and accountability through third-party audits: when there's a whole set of rules in place, and everybody knows that those rules are there, and they know that a third-party auditor is coming to see if those rules and those criteria have been followed, you know, the remarkable thing about humans is we do those things. And that's the great advantage of the independent audit system, to build the kind of governance, oversight, and accountability that we need to shore up a lot of these risks and make sure that we can mitigate them in the best possible way. Does that make sense?
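Since the breach causes Ryan lists, stale patches, shared credentials, are themselves checkable, here is a toy Python sketch of what a baseline-hygiene audit check might look like. The thresholds and field names are invented for illustration and do not come from any real audit standard.

    from datetime import date

    # Toy baseline-hygiene audit: many breaches trace back to simple,
    # checkable failures. Each check mirrors a rule a third-party auditor
    # could verify on a visit. Thresholds and field names are invented.

    def audit_hygiene(host: dict, today: date) -> list:
        findings = []
        if (today - host["last_patched"]).days > 30:
            findings.append("patching: OS updates are more than 30 days old")
        if host["shared_accounts"] > 0:
            findings.append("credentials: shared accounts are in use")
        if not host["mfa_enabled"]:
            findings.append("access: multi-factor authentication is disabled")
        return findings

    host = {"last_patched": date(2021, 11, 1), "shared_accounts": 2, "mfa_enabled": False}
    for finding in audit_hygiene(host, today=date(2022, 2, 1)):
        print(finding)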


Debbie Reynolds  39:45

Yes, I'm glad you mentioned that. I love talking about third-party risk. I see that in the very near future, the next year, third-party risk is going to increase exponentially, for a couple of reasons. One is all these regulations: in the US, we have state-level regulations coming into full effect in 2023 that have stipulations about third-party risk and third-party data sharing. We already see it in things like GDPR and other laws, so this seems to be a trend that's happening. And as we know, the states can pass these laws faster than the Federal government can, right? And then, sort of back to the Metaverse, we're heading into a situation where there are more types of data being collected about individuals, there are more third parties now, right, and more data sharing with those parties. I feel like this is gonna be a tsunami of third-party risk converging, probably within the next year or so. So I like the fact that you're thinking about third-party risk and audits, but what is your thought about this groundswell that is, or will be, happening around third-party risk?


Ryan Carrier  41:01

So I'm going to add a term that you didn't use into this third-party risk, which is biometric data. Especially when we talk about an AR/VR Metaverse kind of world, the amount of biometric data that each of us will be giving off, captured by these third parties, will be astronomical. I can change my name, and I can change my password; I cannot change my DNA, I cannot easily change my retina, I cannot easily change the shape of my face or my fingerprints. When this biometric data is out there in a substantial way, identity theft ramps up exponentially, and so do the cost and difficulty of overcoming it. And let's not even go into the science fiction of it, mind you, if my genetic data is out there, cloning and all that. I refer to something I call technology is an autocracy. Let me explain what that means. The entire world can agree on a plan, the entire world, which, you know, think about how many times the world agrees on something, right? It's very rare. Okay, the entire world agreed not to gene-edit humans. And yet scientists in China, and one at Rice University, got together and gene-edited a baby, and it turned out to be two babies, and brought them to full term, even though the whole world said you shouldn't do that. And that is what I mean when I say technology is an autocracy: an individual, any individual, can do what they want with technology, even if it's not the right thing to do, and even if the entire world agrees that they shouldn't do it. The net result of that is crazy stuff, like taking your DNA. You've just done 23andMe, right? You've just put it out there; 23andMe owns your genetic code when you give it to them, which is nuts, by the way. And now they get hacked or breached, and now that data is out there. What could the wrong person do with that biometric data? These are the kinds of things that, five years ago, caused me to start For Humanity and think about this. And unfortunately, we haven't made a lot of progress, and we have a long way to go. We are very focused on biometric data, taking it to a whole other level, lockbox style, but also getting back to that point about our data: all these people who are doing things like giving out their genetic code, just to get an interesting and cute little report back, and paying for, you know, the service. It's like, oh my gosh, what are we doing? These are some of the most valuable things that we own. And the risk is not well understood. And most importantly, the risk will change over time, as more and more of this data is out there. And unfortunately, there are bad people who engage in malfeasance and other problematic, well, let's call them criminal for lack of a better word, ways of causing harm and benefiting themselves. And unfortunately, this is the next wave, and I don't know that we can move fast enough. We're trying, but I don't know; there's not a lot of education around the risks of biometric data currently, something you certainly can help with and something I know you do work on and care about. So any way that we can help you magnify that voice, we are all for it, because, boy, I'm nervous, and a lot of people at For Humanity are nervous, about biometric data, third parties, where it goes, how it's protected, and so on.
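To make the "lockbox" idea for biometric data concrete, here is a minimal Python sketch, assuming the third-party cryptography package (pip install cryptography). It shows one plausible pattern, encrypt the template and keep the key elsewhere; it is not For Humanity's actual design.

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # Lockbox-style sketch: the biometric template is stored only as
    # ciphertext, and the key lives somewhere else entirely (a KMS or HSM
    # in practice). A breach of the database then yields ciphertext, not
    # the biometric itself.

    key = Fernet.generate_key()   # in practice, generated and held in a key vault
    lockbox = Fernet(key)

    fingerprint_template = b"\x01\x9a\x42 stand-in for a real biometric template"
    stored_blob = lockbox.encrypt(fingerprint_template)  # what the database holds

    # Only an authorised matching operation, with access to the key, can open it.
    recovered = lockbox.decrypt(stored_blob)
    assert recovered == fingerprint_template

The design point is separation: because the key and the ciphertext never sit together, a single breach does not expose the unchangeable identifier, which is exactly why biometric data warrants stronger handling than a resettable password.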


Debbie Reynolds  45:10

I'm deeply concerned about that. So that's definitely something we can collaborate on. But let me ask you what I ask everyone: if it were the world according to Ryan, and we did everything you said, what would be your wish for privacy anywhere in the world? Anything, whether it's technology, law, human behavior, anything.


Ryan Carrier  45:33

I would just be following up on this, and I would have a lot; my list would be long, but let's focus on where I just was. And that would be a high degree of safety and security mandated around biometric data, in a manner that is always consensually driven, around a great amount of education about the risks and consequences of the use, or distribution, or giving, consensual giving, whatever it is, of this data, even at a retail level, just raising the awareness of how risky this data may be. And then those who have it, those who get it through whatever means, whether it's consensual, you know, whether it's explicit and informed consent, or whether it's even people who are out there scraping this data, which I just despise, no permission, no governance or control over what they scrape, what they take, what they do with it, all of this would have to be in as big and as sound and as firm and as strong and as protected a lockbox as we possibly could put it in. That would be, I feel, a victory, at least, if we get to that place with biometric data.


Debbie Reynolds  47:08

Yeah, I share your concerns in that regard. For sure, for sure. Well, this has been really great to chat with you today. This is a fantastic episode. Why don't you let folks know how they can contact you and get involved with For Humanity?


Ryan Carrier  47:24

Sure. So we have a website; it's https://ForHumanity.center. And there, people can learn about what we do; they can actually register. Registering on our website means something very simple: you're providing your email address, and you agree to our code of conduct. But once you do that, we actually do a personal onboarding with me, where I walk people through what it is we do and how we do it. And we've grown over the last 18 months from just me to, now, nearly 650 people from 48 countries around the world, all pouring in on the work that we're doing. And so, if you want to be involved, you are welcome to join and register, and we can sort of bring you in and figure out what your passions and your energies and your dreams and your goals are; we have a lot of tools that we want to give to people to empower them to succeed in this space. So registering on the website is the easiest way, and also, connect with me on LinkedIn, Ryan Carrier and For Humanity, and you'll find me pretty easily, and we can take the conversation from there.


Debbie Reynolds  48:40

Excellent. I'm looking forward to us being able to collaborate and have more chats about things that we can do, for sure. I'm really excited about the things you're doing. And I have friends as well, Mitch and my buddy Rohan Light from New Zealand, who are part of the For Humanity effort. So, you know, I love what you're doing, and I'd love to see what we can do together.


Ryan Carrier  49:06

That sounds great. I look forward to it as well.


Debbie Reynolds  49:09

Well, thank you, and I'll talk to you soon.


Ryan Carrier  49:12

Thank you so much. Thank you for having me on. I appreciate it.