"The Data Diva" Talks Privacy Podcast

The Data Diva E218 - Thomas Morrow and Debbie Reynolds

Season 5 Episode 218


Debbie Reynolds, “The Data Diva,” talks with Thomas Morrow, former NASA attorney and technology expert. Debbie and Thomas discuss the critical intersection of data privacy, artificial intelligence (AI), and emerging technologies. Thomas shares his fascinating journey from working on aerospace innovations at Boeing to contributing to NASA’s International Space Station program, where he navigated secure communications and data privacy challenges. This experience laid the groundwork for his deep understanding of how technological advancements and privacy protections must coexist.

Thomas explains how NASA’s approach to astronaut data—disclosing what data is collected, how it is used, and potential risks—provides a model for how data privacy could work in a consumer context. He and Debbie explore how transparency, ethics, and consent can build trust in a data-driven world. The conversation touches on the double-edged nature of technology, particularly AI, which has the power to drive significant innovation, such as better weather prediction and disaster preparedness, but also poses risks like deepfakes, misuse of personal data, and privacy erosion.

The discussion highlights pressing issues such as the need for global standards in data protection, AI ethics, and authentication processes to verify the authenticity of communications. Thomas emphasizes balancing innovation with accountability, stressing that regulation is essential to ensure technology serves society positively rather than being exploited for unethical gains. He and Debbie also share their thoughts on building digital trust through novel personal authentication systems, drawing parallels to encryption and hashing methods to secure identity and communications in an increasingly digital world.

Thomas advocates for a collaborative, international approach to regulation and ethical AI development. He underscores the need to think long-term, projecting into the future to address challenges before they become unmanageable. Forward-looking conferences like Ecosystem 2030 champion this mindset, and Thomas shares his hope for Data Privacy in the future.


[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:24] Now I have a very special guest on the show, Thomas Morrow. He is an attorney that specializes in all manner of tech issues.

[00:34] Thomas Morrow: Welcome. Hey, good to see you again.

[00:37] Debbie Reynolds: Yeah, it's great to see you. Well, I was so excited to meet you. This is actually a funny story and you and I chatted about this. So we met or we got a chance to really meet and talk together at a conference in Spain and we chatted a lot on a mountaintop at a, like a Michelin star restaurant, you know, so it was like very, very hoity toity type of place.

[01:03] And we were talking about the future, technology in the future.

[01:07] And your background is outstanding and we just had so much fun talking. First of all, you're so lively and friendly to talk to. And then, you know, just your legal and then the technology background.

[01:21] I think especially when people think about, you know, you work previously for NASA. When they think about that, you know, that's all about, you know, innovation and the future and stuff that people talk about in movies.

[01:33] But tell me a bit about your background and how you came to your career right now.

[01:39] Thomas Morrow: So I started as an engineer. My first job was with the Boeing Company and I worked in defense and aerospace. And during those years I worked on a lot of really interesting things.

[01:50] I worked on the space based interceptor program and other programs that fell under the Department of Defense for protecting countries against attack. We see this now in Israel. And some of the technologies that we worked on back then are being utilized today to put up what they call the Iron Dome.

[02:10] I'm not saying I worked on that, because I didn't, but I worked on technologies that wrapped into that area and knocked on that door. And then I moved over to NASA and ended up on the International Space Station program.

[02:22] And as part of that work, I worked on the original specifications for the truss segments and the communications and tracking systems that are on the space station now.

[02:34] And during that I learned all kinds of stuff about how you communicate securely from one point to the other and protect your communication while you're doing it, and the important things that matter in the land of sending commands and control information from one point to the other.

[02:55] And that's very, very applicable to data privacy. Because if you don't have good security.

[03:03] How can you have data privacy?

[03:06] And so through that I met a number of NASA people, and one day I was approached and asked if I wanted to come over to NASA. And I was brought over.

[03:16] Initially I said, I don't know, very frankly, because I was having fun doing what I was doing. But they talked to me and they said, we want you to come over and be the representative for the engineering directorate for the extravehicular activity suits.

[03:30] And I said that sounds really interesting. So I came over and I did that. And while I was doing that, I got involved in a lot of agreements with outside entities between NASA and outside entities.

[03:56] And the legal office got to know me. And by then I had a law degree. In the interim of doing all those things I was talking about, I got a law degree, and I had been working as a patent attorney on the side.

[03:56] And the legal office at NASA looked at me and liked me and asked me to come and work in the group. And so I worked in the legal office for multiple years as a procurement attorney and ethics attorney.

[04:10] I did all the transactional work that you might imagine goes to a legal office. And one of the roles I had was as legal counsel to the science group that does testing on astronauts.

[04:25] And that's where we really got into the discussions about data privacy and personal information, your PII. You can imagine that your name and address is your PII. Now imagine you're an astronaut and you're agreeing to be experimented on a lot.

[04:45] So now your blood type, how your blood is on Earth versus how it is in space from a chemical perspective.

[04:54] This is all information being collected about you and all information that is being protected by NASA so that no one really knows what's going on inside your body because no one should know what your medical personal information is.

[05:09] And NASA protects that, but they still do experimentation. So what they have to do is anonymize the data so that you can't take the information that's in a test or a scientific survey and figure out who is associated with that data.
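To make the anonymization step concrete, here is a minimal Python sketch of keyed pseudonymization, one common building block for de-identifying research records. It is illustrative only, not NASA's actual process; the field names, salt handling, and token length are assumptions, and real de-identification also has to address quasi-identifiers such as mission dates or demographics.

```python
import hashlib
import secrets

# A per-study salt kept separate from the released data set (illustrative).
# Combined with a one-way hash, it turns a direct identifier into a research token.
STUDY_SALT = secrets.token_bytes(16)

def pseudonymize(subject_id: str) -> str:
    """Return a stable, non-reversible token that stands in for the subject's identity."""
    return hashlib.sha256(STUDY_SALT + subject_id.encode("utf-8")).hexdigest()[:16]

raw_record = {"subject": "astronaut-042", "blood_glucose_mg_dl": 96, "environment": "orbit"}
released_record = {**raw_record, "subject": pseudonymize(raw_record["subject"])}
print(released_record)  # the measurements survive; the direct identifier does not
```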

[05:28] So the Common Rule is a rule that says that you have to protect people's medical information.

[05:36] You have to fully disclose what you're going to do to them or with them.

[05:41] And you have to tell them what the potential downsides are: full and complete disclosure of exactly what's going on. And so NASA utilizes that and does that on a regular basis.

[05:51] And we got into so many discussions about where is the Data, who has access to the data, how is the data protected against improper access and so forth and so on.

[06:02] In the area of just the information about the astronauts. That was one of my jobs. I had another job at the same time: I was also the legal representative to the Information Technology Group at Johnson Space Center.

[06:16] So I was dealing with both sides of the equation. How do we make sure that the systems at Johnson Space Center are compliant and secure from a data protection perspective, as well as every other perspective, your cybersecurity perspectives?

[06:31] And then, as I said, on the science side, with the experiments being done with the astronauts, how do we make sure that their information is protected?

[06:42] But there is a lot of valuable information that we can get from those experiments and it can be shared with the world in a way that no astronauts are harmed from a release of improper information about them.

[06:56] So that's a kind of quick version of how I got to where I am. And then about six years ago I decided to leave NASA and start a business with a partner of mine.

[07:05] And we have that business still running. And I also currently do a lot of transactional work. Most of my work is temporary in house counsel for corporations that need my kind of background.

[07:16] And for some reason they like it because I'm very busy.

[07:20] Debbie Reynolds: I love the thing that you said, which to me makes a whole lot of sense, about when you're taking someone's personal information: that at NASA you disclose to the person what you were taking, what you're going to use it for, and what the upsides and the downsides of it are.

[07:39] And I feel like NASA obviously is all about precision and doing things in very exact ways. And I wonder, maybe it's harder than it sounds, but that model seems to me like, if we implemented something like that on a more consumer-wide basis, people probably wouldn't be so upset about what was happening with their data.

[08:02] What are your thoughts?

[08:03] Thomas Morrow: Oh, I agree completely. There are so many things that I do every day where I give up information about myself and I do, I take as many steps as I can take to protect my information.

[08:18] The settings on my browser are set up so it's very difficult for somebody to track what I'm doing. I use an Apple MacBook Pro, and I have Safari set up so that it's very difficult to track me.

[08:33] I have a firewall that makes it very difficult to track me. I just put as much protection as I can on my computer, including a VPN when I need it.

[08:43] That's because the ecosystem of the browser is set up to collect data and sell that data, and a lot of people don't understand how much data is collected. For example, if you go on YouTube and you use it on a regular basis and you're logged in, even with an email address that you made up, they still make you give a second one, and it makes it very difficult to hide who you are.

[09:14] And the more you use YouTube, the more they zero in from their algorithms on what kind of person you are. And they have a lot of scientists who spend time saying, this is how we design the algorithm to determine who's on the other side of this.

[09:31] Even if we can't track them, we know a lot about them, and that information is very valuable. And that information is sold to advertisers. As, you know, sometimes when you're sitting around and you mention, you know, I'd really like to buy a new car, and then all of a sudden on the browser, car ads pop up.

[09:49] It's because the computer is listening to you and the systems that are out there are reacting to it and selling that access to you, to the advertisers, you know, in a very, very focused way, so that they can bring advertising to you and entice you to buy from them.

[10:11] Well, all this information is out there, and it would be wonderful if people were told how their data is being used. A lot of that data is used when you buy a smartphone or when you buy a new TV.

[10:28] That new TV collects a ton of information on you, sends it back, and they sell that data. And you have to go into the settings on that TV, look it up online, find the right settings, and turn all that off. Your data, in other words: what you watch, whether you stream, what shows you watch and stream.

[10:49] All of that is collected and sold, and very few people know. In fact, it's one of the reasons why LEDs and LCDs and OLED TVs are much less expensive than they used to be.

[11:01] Because the value proposition is: you sell the TV, get it in the home, and then sell the data on the back side, where you make the real money, as was said in Spaceballs.

[11:14] That's where the real money is made: merchandising. And so those are the kinds of things that are going on, and not a lot of people know that. So, for purposes of,

[11:26] should they disclose? I believe completely, yes. You and I were going back and forth on an article.

[11:35] I think it was on LinkedIn. No, it was about the Olympic teams; it was a sports team, and they were taking the athletes' data and using it without telling them why. They weren't following the Common Rule, and they should have been.

[11:55] In fact, you even commented on my comment that I made that they should not be just collecting that, forcing people to allow them to collect this information about them without the person being able to say, no, I don't want you to have it.

[12:10] And what are you going to do with it? How are you going to anonymize it? In other words, giving me full disclosure.

[12:16] And it's not a simple thing. The reason it's not a simple thing is now we start getting into these ethics problems.

[12:23] Let's say I have a medical condition and my doctor and I know about it, and as part of one of these adventures, I involve myself in a sports team and they start taking this data from me.

[12:38] It's found out that I had this medical problem through the work things that I'm doing with the sports team.

[12:44] And now that data is sold, or it ends up somehow with insurance companies, who then reject me for future insurance because their algorithm rates my risk as too high.

[12:59] Well, that's a significant problem.

[13:01] They have that data anyway through the way that our insurance works in this country.

[13:08] But they should get it through an official source that they're signed up to have, not through a source where they buy the data on the side.

[13:17] Another truism about that is that information can be used against you. For example, imagine if there's a database of diseases that people are found to have had through their involvement at the gym.

[13:29] And now insurance companies are buying that data, or worse, let's say you run for a political office. Now all of a sudden your opponent buys the data and finds out you have something wrong with you physically or medically, and they spread the information about you.

[13:47] No one should know that unless you decide to release that information.

[13:52] If your medical condition, your situation is such that you can't do your job, then okay, you shouldn't run for office, but it shouldn't be used against you. And I know I'm coming up with a lot of scenarios that are worst case, but I think about these things.

[14:10] I think about them because I have to advise people on this stuff and I have to warn them of where they're going and what the outcome might be. And it worries me a lot that we don't do these full disclosures.

[14:21] As you could tell by my response on the LinkedIn article, I was like, this data is personal and should be protected. And the only way it should be collected is not, you must give it to us.

[14:33] It's this is what we would like to collect. Do you agree?

[14:37] Here are the ups and downs of this information.

[14:40] This is how we're going to anonymize it so that you are not connected to it, ultimately, if it's released. That should happen, and when it doesn't, it really starts causing problems.

[14:54] Have you ever seen the movie Gattaca?

[14:57] Debbie Reynolds: Oh, yes, yes. It's a great movie.

[14:59] Thomas Morrow: I go right to Gattaca in these discussions where we start talking about personal health information and how it could be collected. Now, of course, that's a dystopian example. I get it.

[15:11] I'm not that kind of person. But these stories ring true because they give us a warning and they let us know that we need to think about these things and we need to protect all of us from this kind of stuff.

[15:24] If you are in great physical shape and have no medical problems whatsoever, in the world of Gattaca, you do well. But if you have some problems, the best job you can have is a janitor job.

[15:37] We have to make sure, from an ethics perspective, that we continue to allow our society to be open and as inclusive as possible for as many people as possible and not allow the idea of how do we cut costs, how do we save money, how do we do all these things that some people think about.

[15:58] We don't allow that to get into our life in such a way that someone is prevented from having a career that they could have and a very, very good career that they could have simply because information gets out about them.

[15:58] Let's say you have type 2 diabetes, or, even better for this example, type 1 diabetes. Type 1 diabetes comes about when your body stops producing insulin, and it occurs, many people believe, because of a virus that you get.

[16:29] Now, I'm saying that from what I've read, I'm not a doctor, but let's say you have type 1 diabetes and it shows up one day, you feel sick, you go to the doctor, they take your blood sugar, and you have a blood sugar of 600.

[16:43] Oh, my. And they find out you're type one diabetic, you have no insulin in your body.

[16:47] Now, you can control your type 1 diabetes through the use of insulin, insulin pumps, all kinds of various things.

[16:54] And that should be private to you and not released to anybody unless you want to release it to somebody. But there are people out there that would say, I don't want a type 1 diabetic working for me.

[17:04] I'm worried about all these potential problems associated with that. Well, it shouldn't be an issue. And under our laws, it's illegal to do that, to say, well, you have type 1 diabetes, I'm not going to hire you.

[17:17] But if, through a series of processes, that data is captured and sold and released, then whether they knew you had type 1 diabetes or not becomes harder to prove.

[17:31] And then why they didn't give you the opportunity becomes harder to prove. Once again, I know this is way out there, but these are the things we have to think about when we're having these discussions that you and I have had about new technologies, what they mean to us, and what artificial intelligence means to us.

[17:51] How is that going to be leveraged? Is it going to be leveraged in a positive way? As we talked about in our conference in Spain, we know there's going to be some bad actors in it, but how do we set ourselves up so that the vast majority of the use of that new technology is a positive thing and provides a positive outcome for society, makes things better for the masses of, of the people in our society?

[18:17] Those are the kinds of things that are on my mind when we talk about all these wonderful subjects that we just went through.

[18:23] Debbie Reynolds: And I think I'm very tinfoil hat about these things as well. So, you know, I'm always asking, how far can you take it, one way or another?

[18:34] And so I am a technology enthusiast. I love technology, but I don't love everything that people try to do with technology and with data. So I try to balance that.

[18:45] But, you know, as you were talking, I'm thinking about maybe someone who's maybe on the other end of the spectrum and they say something like, I have nothing to hide.

[18:56] Take all my data. I have nothing to hide. What would you say to that person?

[19:00] Thomas Morrow: Perfectly fine, as long as there's full disclosure about what data is being taken and how it's being used. So I have nothing to hide. You can take all my data? Well, are you sure?

[19:11] Let's have a conversation about what data we're taking and how we're using it. And is that what you want?

[19:17] If we're doing a good job, people understand what they're giving up. I know what I'm giving up when I'm going online and making purchases. I know what I'm giving up when I'm taking certain steps on a computer or working in new technologies or working with vendors.

[19:33] I Know that because this is the world I work in.

[19:38] Most people aren't even told.

[19:41] I can't tell you how many people I've met that were surprised at how much data is taken from their activities and don't know how it's used.

[19:52] I'm all about full disclosure about what information is being taken and how it's being used. And if you're okay with people taking your data, fine, as long as you know why and how. It might surprise you, some of the things that are done with your data.

[20:06] And you might say, I don't mind if any of my data goes out there, except for that one thing, you know, until you know what they do with the data.

[20:15] So understand why they're using it and then, sure, live your life, do your thing. If you're okay with that data being used, no problem.

[20:25] My worry is that there's an edict. Once again, we go back to that LinkedIn article: it was an edict that the athletes had to be plugged in. And that's just private data.

[20:38] And you don't plug people in until you tell them what and why and how you're protecting it. And all the things we talked about and I feel the same way about all the other things that we do online.

[20:49] I love your comment about being tinfoil hat. I think that's fantastic.

[20:55] Don't get me wrong: amongst all of this conversation, I'm a very positive person, and I believe that we can find the path to continuing to improve our technology and use all these tools.

[21:12] But we need thinkers who think about how we make sure that that future is a positive one and not a negative one. As you know, Omar Hatamleh was the person that put together that conference, and he's written several books on this entire subject of artificial intelligence and what the future will be like.

[21:29] His books are all in the direction of: we can find a very positive future with the utilization of this new technology. And I share his belief, because I work in this area on a regular basis.

[21:42] I have to help my clients understand what the upsides and the downsides are and how they protect themselves.

[21:49] For example, if you're sending data from one point to another, how do you protect that data?

[21:56] So when you get online and you click a button, something goes out and something comes back. There's a handshake on both sides. Most of the time that is very open, unless you use a VPN, which encrypts your data and also routes it through a location that is different from the location that you sit at.

[22:14] They kind of spoof where you are. Like, on a VPN, I can come from anywhere in the world, and I have done that many times. Actually, I was in Las Vegas last week, even though I was sitting in Houston, Texas.

[22:25] And so that kind of confuses things a little bit about where you are. And it's encrypted, so it's a little bit harder to get to, but it's still vulnerable. And it's vulnerable because there are two elements to securing data.

[22:41] That's how do you encrypt it, and how do you authenticate who's sending it to you. For example, right now, Debbie, you and I know each other. We met each other face to face, so that's how we met each other.
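Those two elements, encryption and authentication, are often handled together in practice by authenticated encryption. The sketch below is a minimal illustration using the third-party Python cryptography package; the key exchange, the message, and the channel label are assumptions, not a description of any specific system Thomas worked on.

```python
# pip install cryptography  (third-party library; illustrative sketch only)
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared secret, agreed out of band
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must never repeat for the same key

message = b"example command payload"
ciphertext = aesgcm.encrypt(nonce, message, b"channel-label")

# Decryption succeeds only if the ciphertext is intact AND was produced with the
# shared key; any tampering in transit raises InvalidTag instead of returning data.
try:
    assert aesgcm.decrypt(nonce, ciphertext, b"channel-label") == message
except InvalidTag:
    print("message rejected: failed authenticity check")
```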

[22:53] And you knew my name because it was on my badge, and I knew your name because it was in your badge. And we chatted, and we had a good time.

[22:59] And now when I called into this Zoom meeting with you, even though this podcast is an audio podcast, you and I are looking at each other through a video camera.

[23:10] So you see my face, I see your face. That's an authentication.

[23:14] I know it's you, and you know it's me. Now, if you take away our faces and it's just voices, I can spoof my voice, I can say words out of my mouth, and it can come across as any number of people.

[23:29] Debbie Reynolds: Okay.

[23:29] Thomas Morrow: There's all kinds of tools online that could do that for you. They haven't gotten to the point where the faces are easy to change.

[23:36] So if you try to authenticate me via voice, it's not as secure as seeing my face and hearing my voice and seeing that my lips move properly in the video.

[23:46] Because we're doing a lot of authentication when we do that, when we're looking at each other.

[23:51] And the same thing is true with digital communication. That's why you see many banks and many of your password systems using two-factor authentication, because the first authentication is your password and username.

[24:06] But those can be taken in a whole lot of different ways. With all the releases of passwords and usernames that have occurred recently, you know, a billion people's passwords and usernames have been released.

[24:19] There was this huge release, over a billion people's information, at a company that collected information about your credit, and in there was your name and address and Social Security number, and your username and your passwords. Many people are impacted by this, and things weren't properly protected in that company.

[24:39] So it really got out there. It's a huge breach.

[24:43] Well, that's why two-factor authentication is important. Two-factor authentication adds a really great layer of protection for systems where the first authentication step is username and password.

[24:56] And so that's why I said, you know, how do you protect your data? The big thing in data protection is authentication, because then you know for sure that the data is coming from the right person.
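As an illustration of how that second factor usually works, here is a minimal sketch of a time-based one-time password (the rolling six-digit code in authenticator apps), following the general TOTP scheme from RFC 6238. The example secret is made up; real deployments also handle clock drift and rate limiting.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a secret both sides already hold."""
    key = base64.b32decode(shared_secret_b32)
    counter = int(time.time()) // interval                 # same time window on both ends
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret never travels with the login request, so a stolen password alone
# is not enough; the attacker would also need the current code.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential
```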

[25:13] I'll share a problem with you.

[25:16] Say you're sending a command from the ground to space, and this command is going to tell the vehicle in space to move.

[25:26] And you can do one of two things in space. You can speed up or go slower. If you speed up, your altitude goes up. If you slow down, your altitude goes down.

[25:35] Well, if a bad actor sends a slow-down message to the vehicle, it slows down, re-enters, burns up, and anything on board is destroyed.

[25:45] So you want to make sure for absolute certain that the command that you send is from the right source and in the right format.

[25:55] Same thing's true about any communication between point A and point B and making sure it's secure.

[26:01] So you not only have to build the command in such a way that when it gets to the radio that's receiving it and the computer that's receiving it, it knows that it's a command that it's supposed to follow.

[26:13] But before it ever thinks about following the command, it says that, oh, I know it came from a source I could trust, which is the same problem that we have in protecting our data on the ground over the computer.

[26:29] And once again, you know, authentication is the important part, and then after that, encryption. Encryption has its downsides, because as quantum computing becomes more powerful, encryption will become irrelevant.

[26:42] Quantum computers will be able to break encryption very, very quickly compared to today's standards, and even 256-bit encryption won't be enough. So then it's all going to have to fall on authentication and how you make sure that you don't open the door, because you never know who's going to come through that door once we get to that point.

[27:03] And that technology is coming, by the way.

[27:06] Debbie Reynolds: Yeah, it is, it is. Well, I want to touch on something I love for you to expand about. And this is something that we talked about in Spain at Ecosystem 2030.

[27:18] And by the way, I wanted to mention Omar Hatamleh. He is the chief artificial intelligence officer at NASA, and he invited me to attend that conference, and he's amazing. So the thing I wanted your thoughts about, and this is, I think, a parallel with privacy.

[27:36] I think artificial intelligence and privacy kind of parallel each other just because data is the food of AI. And so it's hard to pull those two things apart. But I guess I'm excited about the innovation that can happen when people utilize artificial intelligence the right way, where we can see a lot of advantages and advancements that happen at scale.

[28:05] But then, on the other hand, you know, I call technology a double-edged sword, right? So then we can have harm at scale as well. But I want your thoughts about that.

[28:15] Thomas Morrow: So first you have to go back to, you know, what's the whole concept of artificial intelligence and sort of how does it work? You have a model that you train, as they say, to solve a particular problem, and once you get it trained to look at a particular problem, then you send it production data and it looks at that production data and it figures out a solution set for you.

[28:46] So here's an example of the kind of AI that the future is looking at that will be very, very powerful and very, very beneficial: weather forecasting, if you can put together an AI model.

[28:58] And by the way, they are building these models. They can take the data from the past and look at all the information we have and see what happened in the weather and in the weather patterns.

[29:11] Now you have a trained model that can start looking at or projecting what might occur in the future, depending on information that it has and it receives on a regular basis, such as wave activities in the ocean, ocean temperatures, wind, and all the other data that's collected by our meteorologists can be pumped into one of these tools.

[29:35] And the artificial intelligence can give you better responses and better forecast for what's going to happen in the future.

[29:44] And one of those wonderful things that we're looking for is figuring out where a tornado is going to hit, where it's most likely to hit in a particular part of the country. That's extremely powerful.

[29:56] And those artificial intelligence tools that take in all the data from NOAA and all these other systems can start putting those forecasts together better than humans can, because they can take in all this data, process all this data, compare it to its training model and come up with predictions.
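The train-then-predict loop Thomas describes can be shown in a few lines. This is a deliberately tiny sketch using made-up numbers and the third-party scikit-learn library; real forecasting models ingest vastly more data and far richer features.

```python
# pip install scikit-learn  (third-party library; toy illustration only)
from sklearn.linear_model import LinearRegression

# Invented "historical" observations: [sea-surface temperature (C), wind speed (km/h)]
X_train = [[26.0, 20.0], [27.5, 35.0], [28.1, 50.0], [29.0, 65.0], [29.6, 80.0]]
y_train = [0.1, 0.3, 0.5, 0.7, 0.9]        # fictional storm-intensity index

model = LinearRegression()
model.fit(X_train, y_train)                # "training": fit the model to past data

# "Production" data: today's readings, which the trained model has never seen.
todays_readings = [[28.7, 58.0]]
print(model.predict(todays_readings))      # the model's forecast for the index
```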

[30:15] And boy, wouldn't it be great if you had, like, a 20-minute warning of a high potential for a large tornado to hit Oklahoma somewhere, versus there it is, it's coming at you, you've got about three minutes, and the alarms go off? You're in a totally different space. Or a better way of forecasting the hurricane season.

[30:41] The hurricane season was supposed to be about 15, 16 hurricanes.

[30:46] Well, typically all that happens early.

[30:49] Wouldn't it be neat if we had systems that were trained and did a better job of these forecasts for the next hurricane season based on all the prior hurricane season data that trained the model?

[30:59] So that's a very, in my view, a very positive and powerful use of artificial intelligence that benefits everyone.

[31:09] It gives us an opportunity to prevent problems, or to prepare for them in such a way that the impact is less, or to prepare for them in such a way that after the impact occurs, all the equipment and resources that are necessary to recover are closer and better prepared for what's coming.

[31:31] That's the good side of artificial intelligence and all of this data. The bad side is, as everybody knows, how it could be used, and in fact how it's being used right now: sending advertisements to you online.

[31:48] I don't want to see those advertisements online because I talked about them or I happened to click on a particular link or something like that, or I went to a particular YouTube video, or I don't do TikTok, but I'm told TikTok and any number of tools that are used out there, these artificial intelligence tools for advertisers are making targeted advertising far more advanced than it used to be.

[32:14] In fact, there are advertisers out there that are talking about, I can get you down to the neighborhood. I can pick male or female. There's so many things that they can pick now in advertising online.

[32:26] And all of that is being accelerated and improved through the artificial intelligence that is out there. And the other problem with artificial intelligence has been talked about by other people, and that's the deepfakes, and it's really kind of sad.

[32:42] But the tools right now to take my face and everything that you and I have talked about today and turn it into some terrible message that I allegedly said are there and are easy to use.

[32:55] And you really have to study the person in the video to see that it's not the person saying it. What worries me is that two, three years from now, you could study that video all you wanted and you wouldn't be able to tell that those weren't my actual words, that it was created by a computer.

[33:12] You can still do it if you watch the video carefully or if you listen to the video. Watch and listen.

[33:17] Still not perfect, but it'll get there. And when it does, then an hour-long conversation with you that ends up on a podcast could be used later on to have me saying things I never said.

[33:31] So that's a real concern that falls in the area, as you said, you can't separate data from AI because AI lives on data.

[33:40] And it really.

[33:41] How do we deal with that in our society? Do we train ourselves that we can't believe anything that's recorded and the only thing that matters is face to face meetings?

[33:53] Do we go back 100 years where there was no phone and there was no Internet?

[33:59] I don't think so. But we're going to have to find solutions to these problems.

[34:05] One of the responses that a number of large corporations are using are artificial intelligence tools that are trained to figure out when something has been generated by an artificial intelligence tool.

[34:20] So it's sort of your antivirus to artificial intelligence deepfakes.

[34:26] And there are companies out there building it because they know there's going to be a market for it, number one. And number two, it's also going to help the companies prevent problems.

[34:35] Imagine that I'm the CEO of a corporation and I call you on a video conference, and you work for the corporation, and I instruct you to send a million dollars, or let's say $10,000, let's make it easy, because $10,000 is below most thresholds.

[34:55] So $10,000 to somewhere, and we hang up, and you send the $10,000, and then your boss comes and says, what are you doing? Well, the CEO told me to do it. Well, there are going to be artificial intelligence tools out there that are able to generate that video.

[35:09] And so what you need then are systems that authenticate the call once again that it's actually coming from the source that it's supposed to come from. And also systems that can identify that it's a fake.

[35:20] And those are going to be built, by the way, with artificial intelligence tools that come from the other side.

[35:27] Debbie Reynolds: Yeah, I almost feel like we're addressing, we're attacking the problem in the opposite way that we need to. So I think it would be easier to try to authenticate what's real than try to figure out what's fake.

[35:44] Cause I feel like the fake is gonna like totally outnumber what is real.

[35:49] Thomas Morrow: I agree. Authentication, I always fall back to that. When people talk to me about data security, I always fall back to authentication. Because if I know you're talking to me, Debbie, I know I'm getting it from an excellent source whom I enjoy being around and talking to.

[36:06] That's what I know. So I always go back to authentication after that. Once you know it's coming from the source you can trust. You don't have to worry about whether it's phony or faked or anything like that, because you're like, okay, we're connected.

[36:18] We're in a good place. So I agree 100%.

[36:22] Debbie Reynolds: I think at some point, there are going to have to be kind of Personas that are created about us. Like, I tried to coin this term in Bloomberg many years ago in an article.

[36:32] It's like a data dossier about you that says, well, we don't think this is Thomas, because we know that Thomas hates peanut butter or something.

[36:44] Thomas Morrow: Yeah, yeah, yeah, yeah, yeah.

[36:47] Debbie Reynolds: I think that would be so interesting. Oh, my goodness.

[36:50] Thomas Morrow: Well, actually, that's a great comment, because right now, authentication can be done through this video that we're using. I know this is an audio podcast, but just to mention to everybody: hey, I'm looking at Debbie, and she's amazing, as always.

[37:04] And so that's our authentication process.

[37:06] And in the future, how do we do that? And I love the idea of having encryption keys.

[37:14] And we have a public key, and we have our key on how we encrypt things. We utilize that information to send data so that I can encrypt and you can decrypt and back and forth.

[37:25] And that way, as the data is sent over, it's encrypted and very difficult to get to. It's not impossible, but it's difficult to figure out what's in that data. Well, wouldn't it be great if there was a personal key about me?

[37:38] This is Thomas Morrow, and this is how this key knows that. And maybe it's how I log into my computer. Maybe it's that I use my fingerprint or my iris scan, because we've got cameras on these computers, and all of these things are very possible.

[37:53] And then you know for sure it's me. And then the key is activated, and then everyone knows I'm at this computer right now, and it's definitely me. It's not someone who stole my password and my username and isn't using the.

[38:07] The video camera.

[38:09] Boy, that would be neat if we could get there. There are some downsides to it, but there are always downsides to everything. But I like the idea of having that, because we do encryption keys all the time.

[38:23] Why can't we have a personal key that says, when you're hooked up between a computer or some kind of communication device between me and you, I know it's you because your key is there, and you know it's me because my key is there, and they've been authenticated, and we can move forward and have a conversation and know we're in a good place?
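One way to sketch the "personal key" idea is with an ordinary digital-signature key pair: the private half stays on your device (ideally unlocked by a fingerprint or iris scan, as Thomas suggests), and anyone with the public half can verify that you vouched for a message. This is a hypothetical illustration using the third-party cryptography package, not an existing personal-key standard.

```python
# pip install cryptography  (third-party library; hypothetical sketch of the idea)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generated once; the private key never leaves the owner's device.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()       # published so others can verify

statement = b"example: digest of a video segment I am vouching for"
signature = private_key.sign(statement)

# Anyone holding the published public key can check the claim.
try:
    public_key.verify(signature, statement)
    print("valid: the holder of the private key signed this")
except InvalidSignature:
    print("invalid: treat this content as unverified")
```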

[38:45] Debbie Reynolds: I almost feel like it's similar to hashing where when you're doing hashing, the two parties agree on what is being hashed. So you have to agree on what those elements of the hash will be.

[38:57] And so maybe that's kind of a way to figure out that people are real, and we agree on what that hash would be. I don't know, maybe we're creating a new business here I don't even know about.
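A minimal sketch of the agreement Debbie is describing: both sides decide in advance which fields feed the hash and in what order, serialize them identically, and compare digests. The field names (including the peanut butter detail from earlier in the conversation) are illustrative assumptions.

```python
import hashlib
import json

# Both parties agree up front on WHICH fields are hashed and HOW they are serialized.
AGREED_FIELDS = ["name", "key_fingerprint", "food_disliked"]   # illustrative choices

def persona_digest(profile: dict) -> str:
    subset = {field: profile[field] for field in AGREED_FIELDS}
    canonical = json.dumps(subset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

claimed = {"name": "Thomas Morrow", "key_fingerprint": "ab12cd34",
           "food_disliked": "peanut butter", "extra_field": "ignored by the hash"}
expected = {"name": "Thomas Morrow", "key_fingerprint": "ab12cd34",
            "food_disliked": "peanut butter"}

# Matching digests mean both sides are describing the same agreed-upon persona.
print(persona_digest(claimed) == persona_digest(expected))  # True
```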

[39:07] Thomas Morrow: Well, what I will tell you is that for many, many years, people have been figuring out how you handshake between two communication points, how the data is structured in such a way that both sides, when it's received, agree that that's the right data.

[39:23] There's lots of different ways to attack and send data, to try to spoof data. So what they start with is what frequency are you on?

[39:34] Okay, we're both on the same frequency when we're sending the information. I'm thinking radio frequencies, but the same idea works in all kinds of communication sources. So, okay, what is the structure of the data?

[39:47] Is there a word that comes in the beginning to confirm that this is the beginning of a statement? You know, there's a whole set of criteria about how you set up just sending bits from point A to point B.

[39:58] So there's an entire set of criteria about that, and you have to meet all those things.

[40:03] And then on top of that, then, do you have an encryption key? Do you have a personal key?

[40:09] All those things have been thought about. And also error correction, because digital data comes across and there are errors that occur in the bits. So there are a number of error correction algorithms that run all the time in the background, that nobody ever sees, to make sure that the data coming across is error-free.
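Here is a small sketch of the framing ideas Thomas lists: a sync word both sides agree on, a length field, and an integrity check so corrupted bits are caught. It uses a CRC for error detection rather than true error correction, and the sync word and command text are made up for illustration.

```python
import struct
import zlib

SYNC = b"\xAA\x55"  # hypothetical start-of-frame marker both sides agree on

def build_frame(payload: bytes) -> bytes:
    """Frame = sync word + 2-byte length + payload + CRC32 over length and payload."""
    body = struct.pack(">H", len(payload)) + payload
    return SYNC + body + struct.pack(">I", zlib.crc32(body))

def parse_frame(frame: bytes) -> bytes:
    """Return the payload only if the frame is well formed; otherwise raise ValueError."""
    if not frame.startswith(SYNC):
        raise ValueError("missing sync word")
    body, crc = frame[len(SYNC):-4], struct.unpack(">I", frame[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("integrity check failed: bits were corrupted in transit")
    (length,) = struct.unpack(">H", body[:2])
    payload = body[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return payload

frame = build_frame(b"example command")
assert parse_frame(frame) == b"example command"
```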

[40:29] So since we've been thinking about that for so many years, the engineers that worry about those kinds of issues could do the same thing with a personal key.

[40:42] They could set up an international standard on how we define who we are electronically, or as you said, our electronic Persona. How do we define that? Well, we have an international standard on how we define that.

[40:56] And then our computers are set up with a tool that helps us do that. And then it's used along with all the other security stuff that we use to communicate online.

[41:07] I think that's a great idea. And another help for authenticating who you are and getting around the deep fake issue. Because every video that goes out on the Internet has your key attached to it.

[41:19] And the spoofers can't necessarily, you know, we'll find out some way to make sure they can't get to your key.

[41:25] Whatever that source is, there's lots of different ways and approaches to take it. I actually like your idea. I think it's just an add on. We just need to do it.

[41:33] This is what we're going to do. So then we need the IEEE to put together a group of people to come up with an international standard for personal keys.

[41:46] Debbie Reynolds: Yes. And I collaborate with them, so I may talk with them about that.

[41:51] Thomas Morrow: Those are the people. So the Radio Frequency group and the IEEE are the people to talk to because they deal with encryption, they deal with authentication, and they deal with all the things that we're talking about in the land of communication, over fiber optics, over all the different ways that we communicate these days digitally and mostly all that's digital.

[42:10] So it'll be digital, and they're the people that worry about digital communication.

[42:14] Debbie Reynolds: This is why I love talking with you. Always have my wheels turning.

[42:18] Thomas Morrow: Thank you, you're very nice.

[42:21] Debbie Reynolds: So if it were the world according to you, Thomas, and we did everything you said, what would be your wish for privacy anywhere in the world? Whether that be technology, human behavior or regulation?

[42:33] Thomas Morrow: Yeah, it would be a mix of everything, because this is one thing that I know: if you don't regulate to some level, businesses will go to whatever is the most efficient way to make the most money.

[42:46] I don't see that as evil. I have no problem with how do we efficiently make money for our stakeholders. Because we all want to have a good life. We want to work, we want to build a new business like my partner and I started six years ago.

[43:01] We want to have these ways of doing that. But what we learned is that if you don't have some level of regulation, companies will find the most efficient way. And the regulations have to be international.

[43:14] Because if you put regulations on one part of the world and another part of the world doesn't have those regulations, the other part of the world is going to use the most efficient approach to make as much money as it can as part of capitalism.

[43:29] A really good example that I learned in law school, actually it may have been before law school, was companies that were making products that had a lot of chemicals involved.

[43:43] What did they do with those chemicals? Well, in the early days of tanning and the early days of the oil industry, they would do their production processes and pour the waste into the water.

[43:56] Well, as we know, chemical waste in the water is a really bad thing. And so regulations were put in place for everybody that you're not allowed to do that anymore.

[44:08] You have to deal with the chemical waste that you have in a way that makes environmental sense and doesn't kill a bunch of fish and all the other things that occur.

[44:16] Well, the reason the companies poured it off into the water was that it was the most efficient thing for them to do for the products they were creating at the time.

[44:26] And it costs money to do it differently.

[44:31] So if there's not a regulation requiring everybody to do it, then why would anybody do it? Because we're trying to find the best way to improve our efficiencies and provide the best return on the investments of everybody involved in whatever entity it is.

[44:48] Same thing is going to be true about artificial intelligence and how we allow data to be used throughout the world. We're going to have to have regulations, and they should be worldwide regulations that we can all agree to.

[45:00] Now, I know there are people out there that believe, you know, it's the globalists are bad. There are some things that it's good to have the whole world agree on.

[45:10] It's good to have the whole world agree on.

[45:13] We don't want to pollute our oceans.

[45:15] We have bad actors out there that still do it. But we've got to get everybody to agree on this stuff. The same thing comes true with our computer systems, our artificial intelligence and our data use.

[45:26] So if I had my druthers, it would be that we make sure that our artificial intelligence tools are being built in an ethical way and don't take advantage of people.

[45:39] We make sure that people are fully informed about how their data is being used, what data is being taken, how it's being used, and how it's being protected or anonymized if that's occurring so that people can make their own decision, and also that they get to say no.

[45:56] The one thing I like about the cookies on all the websites is they all now have to ask, do you want to accept our cookies? I always reject all cookies, except

[46:07] the only thing is, you can't reject all of them, because there are some production cookies you have to accept or the system doesn't work.

[46:14] And I get that. But the rest, oh yeah, I reject them all, every single time. Reject, reject. And I love that we have that button now, because I don't want you to have that data, Mr.

[46:25] Company A or Mrs. Company B, because that's my business and what I look at, and I don't want to be bothered with your targeted advertisements.

[46:34] So what it really takes is a thoughtful approach to, and a regulated approach to, artificial intelligence.

[46:43] Will we get there? I don't think we're going to get there because I believe there are certain countries in the world that don't see that as an advantage to them and they'll utilize the tools to advantage them.

[46:57] And we're going to be dealing with this through my career, through your career, and through careers into the future where people like you and I who are professionals at what we do and who care about our industries and who believe in a positive future, continue to talk together and continue to discuss these items today on this podcast or at the conference that we might attend in the future.

[47:25] I'm very much looking forward to Ecosystem 2030, and I'm just waiting for it to come out so I can prepare and put it on my calendar, because I had such a great time last year and the year before when I went, and I got that wonderful opportunity to meet you.

[47:42] So it really is.

[47:44] That future would be one that was well thought through. And we together came up with solutions that were for the betterment of everyone in the world, not just something to put handcuffs on one country over another.

[48:03] You know, we have to. We have to include people that are thinking about the ethics and that are implementing those procedures.

[48:13] Side note, some companies that started with ethics groups to make sure their artificial intelligence was being created ethically, those groups have gone away because it's not as profitable to have an ethical system as it is to run a system that might push against those boundaries.

[48:31] I won't say they're unethical. I'll say they push against the boundaries of ethics.

[48:35] And so another reason for regulations is because once again, entities are trying to find the most efficient path to the highest return to their investors and their stakeholders. And that's going to be true no matter what.

[48:51] That's how we humans work, I believe. And so it's going to continue to be an issue, and the only way you put your arms around that is through governmental regulations that put some guardrails in place.

[49:06] I'm not saying it should be like no one in the world can do it except the three experts that know artificial intelligence. No, all the programmers in the world can do all the artificial intelligence that they want, as long as they stick between these guardrails.

[49:18] And that's all I'm thinking about. And that's how I would see a very, very valuable future. And I think that that's what Omar talks about in his books.

[49:30] And I think we get there with people like you and me and Omar and other people that think about these things and hope for the best for the future and work toward the best for the future.

[49:43] Debbie Reynolds: Yeah, it's a big problem. It's a big problem. You know, I'm glad we're thinking in this way. And that's another reason why I really love the conference, because you really do have to project out into the future because these things have such a long tail and they are going to have such a huge impact on the future.

[50:01] So thinking about it in the future, I think, is vital to being able to kind of address these issues.

[50:08] Thomas Morrow: Yeah, agreed. Agreed.

[50:11] Debbie Reynolds: Well, thank you so much. This has been amazing. I'm so happy we were able to chat, and I hope that we get a chance to collaborate together in the future, for sure.

[50:19] Thomas Morrow: It has been my great pleasure to chat with you again. It's been nice to meet the new people that I met at this last conference and to see how they're flourishing.

[50:30] And you are one of those people I think about and I appreciate you and I thank you for letting me chat with you today.

[50:36] Debbie Reynolds: Aren't you sweet? Aren't you sweet? Well, the pleasure's all mine, and this has been amazing, and I'm sure we'll talk soon. Absolutely.

[50:56] Thomas Morrow: Same.