"The Data Diva" Talks Privacy Podcast

The Data Diva E197 - Matthew Lowe and Debbie Reynolds

August 13, 2024 Season 4 Episode 197


Debbie Reynolds “The Data Diva” talks to Matthew Lowe, Senior In-House Attorney, Data Privacy & AI, IBM, and Adjunct Professor of AI Ethics, Legal Studies Department, University of Massachusetts Amherst. We discuss shared connections through the New York State Bar Association and our roles in shaping the intersection of law and technology. Matthew discusses the recent advancements in AI technology and the proactive measures the industry is taking in response to evolving privacy regulations, emphasizing the importance of technical controls to protect intellectual property.

The conversation deepens to explore how the heightened public awareness of data privacy has influenced attitudes toward AI technologies. Matthew shares his concerns about the potential misuse of deepfake technology and its challenges for digital trust and authentication. The dialogue also covers the increasing sophistication of social engineering attacks and the crucial role of public education in combating these threats.

Looking ahead, Debbie and Matthew speculate on the future of federal privacy legislation in the U.S., considering the impact of recent executive actions and the potential for comprehensive AI regulations. Matthew expresses his wish for greater transparency and informed decision-making in the fields of privacy and AI, underscoring the need for improved public understanding and regulatory frameworks.

The episode concludes with Matthew reflecting on the educational value of discussing AI and privacy and his hope for Data Privacy in the future.

Many thanks to “The Data Diva” Talks Privacy Podcast “Privacy Champion” MineOS, for sponsoring this episode and supporting the podcast.

With constantly evolving regulatory frameworks and AI systems set to introduce monumental complications, data governance has become an even more difficult challenge. That’s why you need MineOS. The platform helps you control and manage your enterprise data by providing a continuous Single Source of Data Truth. Get yours today with a free personalized demo of MineOS, the industry’s top no-code privacy & data ops solution.

To find out more about MineOS visit their website at https://www.mineos.ai/




33:37

SUMMARY KEYWORDS

ai, privacy, people, data, technology, cybersecurity, regulations, cool, little bit, technologies, companies, potentially, thinking, typos, podcast, happening, biden, continue, world, today


SPEAKERS

Matthew Lowe, Debbie Reynolds


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello. My name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show, Matthew Lowe. He is a Senior In-House Attorney for Data Privacy and AI at IBM. He is also an Adjunct Professor of AI Ethics in the Legal Studies Department at the University of Massachusetts Amherst, and I just found out today that we have something in common: you and I are both members of the Committee on Technology and the Legal Profession for the New York State Bar Association, so welcome.


Matthew Lowe  00:58

Yeah. Thank you so much. Great to be here, and that's an awesome happy coincidence.


Debbie Reynolds  01:03

Yeah, I'm sure we'll have a chance to collaborate in the future on that. I've been doing it for many years, so it's a cool thing, an interesting bunch of people who collaborate and do a lot of webinars and helpful papers for people. I think people look to New York a lot for their opinions on what the Bar Association is doing, so it's very influential.


Matthew Lowe  01:24

Yeah, absolutely, New York and California.


Debbie Reynolds  01:26

Yeah, totally. Yep, that's true, those two. Well, first of all, thank you for being on the show. We've been connected on LinkedIn for a long time, and I love the way that you comment and the types of content that you put out, but I would love for you to tell me about your journey into privacy and AI and why this profession interests you.


Matthew Lowe  01:50

Yeah, absolutely. I'm a big fan of the podcast, and I think I join many others in that I have not taken a conventional path at all; it was not a straight path. What's really cool today, when I talk to a lot of the students who I mentor in law school and in undergrad, is that they are very privacy-minded. But when I was in law school, I had no idea privacy was a thing; there wasn't really anything in the curriculum that could advertise it as a potential field. Certainly, when I joined the University of Illinois, I went in thinking, I'm going to do standard contract and transactional work. Then it just so happened that the University of Illinois was in a very techie, cool, hip area, so I started to really get involved with talking to different startups and people who were working on really cool products and offerings. I was like, man, if there were some way I could weave my interest and passion for law with technology, that would be grand, but I didn't really know how that would take form. Then I joined the workforce and got introduced to GDPR, and I was like, this is the best. This is the perfect marriage of everything that I love. It's got that human rights element, which I think is really important, something that's really easy to get behind and be passionate about, and then, yeah, it's transactional, it's compliance, it's tech, and it's constantly changing. So I think it's very difficult for people to say, yeah, I'm in Data Privacy, and I get bored; I don't think there's anybody who can say that. So that's the short version. I mean, I went over to IBM, and I didn't start in Data Privacy at all, but once I started to see that it was the thing, I raised my hand for every opportunity that I could to work on projects and get closer to it, to learn more about it. I got my CIPP certification when I was not really in a privacy role, but I was just trying to learn more about it. Then, thankfully, because IBM is such a big company, if you really are interested in something and you're good at it, then yeah, they'll try you out. So I started slowly, just moving more and more towards privacy. Interestingly enough, before I landed in privacy and AI law, I was the Global Policy Manager for cybersecurity, and I think this is something that you come across a lot as well, from what I understand from the podcast: people conflating Data Privacy and cyber. So I was like, oh, I'm interested in Data Privacy, and a lot of people were like, oh, okay, yeah, cybersecurity is basically the same thing. It's not. But I'm really grateful to have had that cybersecurity experience as well, because I think it does inform the work that I do today. Along the way, I just kept on getting more education in the space, started to write more, do more research, publish in the space, and just tried to get really active, and I fell in love with it more and more every day.


Debbie Reynolds  04:34

I love the story, and I think it's very instructive for people. I get calls and emails a lot from people asking me how to get into privacy, and basically, what you said is exactly what I advise them to do. If you're at a job, you can get closer to it; you can take your own time to do some self-learning, whether that's certification or reading about it; and you can put out some thought leadership so people know that you have a voice. Anyone can do that. Those are all the great things that you've done, and it has been successful, so obviously, I'm really happy and proud that you can tell that story. I want you to tell me a little bit about privacy and AI and the convergence there. I feel like people have really taken to AI, especially because these AI models burst out onto the scene and raised public awareness of AI tools that people can play with. Before the AI craze started, a lot of us had been beating the drum around privacy, and then AI jumped over privacy in terms of exposure, in terms of eyeballs on it. But as we see these emerging technologies get deeper into scrambling the eggs of personal data, it really brings up privacy again. So tell me how those two things come together, because sometimes people don't understand why they're related.


Matthew Lowe  06:08

That's a really good question. I think a lot of people are curious: why are Data Privacy professionals all of a sudden the AI experts? How did they get to glide over to this thing that seems potentially not as related? I think that it really started with that beating of the drum with Data Privacy, as you mentioned. If we didn't have that much interest and curiosity on the part of end users and customers around nutrition labels, privacy notices, what are you doing with my data, and what rights do I have with regards to that data, I don't think people would have been as curious about AI. That movement made end users so much more aware and thoughtful about these things. If we had gone right into AI without this big emerging field and all of these regulations like GDPR, I think people would have been like, wow, this is really cool. This is magic. This is awesome. I love using this tool, and I'm not too worried about it. But now these AI developers have to deal with a very conscientious user base who's like, yeah, this is cool, but what exactly is going on here? There are issues around these large language models, or what have you, that are doing large-scale data collection, and questions around, well, when you collected the data, was the data produced with that use in mind, and what are the ethical implications if I post something on Reddit? We just saw that Google has a deal now with Reddit, a $60 million deal, to get the user-generated content to build its own AI models. What are the potential implications of that? Similarly, just a year ago or so, before things like OpenAI and ChatGPT really took off, people were paying more and more attention to these emails and notices that say, hey, we updated your privacy policy, because we knew, okay, that probably means there is some change or development in how our data is being processed. So I think similarly today, we are very curious about the disclaimers, the notices, the contractual language, and the documentation that's out there: the terms of use, not only for companies that are developing AI, but for companies that are potentially using that AI, distributing it to developers, or developing their own in-house technologies. That curiosity was already primed thanks to Data Privacy. So again, when we think about, well, okay, where are you getting that data from? What is it comprised of? How are you protecting it? Issues of data lineage and data provenance are all ethical data stewardship principles that started, I think, with Data Privacy more broadly. AI is just a larger application of these technologies that use data, with much bigger sets.


Debbie Reynolds  09:06

Yeah, I agree with that. We were definitely primed to have that conversation, and I was really pleased to see that the Biden Executive Order on Artificial Intelligence mentioned privacy 35 times. So I think it's important to understand that the underpinnings of what we do and AI have to be safe for humans; we have to think about the harmful elements for humans, and privacy is part of that conversation. I want your thoughts on the discussion around AI and privacy. You see these wild articles that come out, and some of it is infuriating: the world is going to end in two years, that's an actual article that I saw, or AI is going to cure cancer. Those are two opposite ends of the spectrum. For me, I tell people I don't preach AI abstinence; I don't tell people they shouldn't use AI. I tell them it may or may not be the right tool for what you're trying to do, and you have to think about not only the benefits but also the risks. So, what are your thoughts?


Matthew Lowe  10:14

That's perfectly said, and I couldn't agree more. I think the bottom line is, now that AI is out there, there's no putting it back into Pandora's box. We are going to be using it, and so I think that's where people like you come in. This is a really popular podcast, which is fantastic, for people who are lawyers, people who are curious about the privacy profession, or whatever the case is. Everyone out there who is putting out information, pulling information from various sources, or advising on how to use this technology in a way that is ethical and safe is doing a really important job. Cool, it's already out there; how do we use it properly, and how do we build safeguards and infrastructure around it? Because that's exactly right: it is a game of risk management. I'm in the same boat as well, where I'd rather not speculate on what the future is going to look like. Every time some really cool technology comes out, people start immediately fear-mongering, or they immediately start thinking, okay, yeah, this is going to be the thing that brings us the flying cars. In reality, we don't know until we know, but we do have a responsibility to make sure that we understand what the implications of the technology are. I will say that with AI, the speed, the adoption curve, and the distribution are quite significant and notable; that does set it apart from previously released technologies. ChatGPT, for instance, picked up, what was it, a million users within five days, absolutely destroying previous adoption records. So I do think there is a responsibility for us to think about how to slow down a little bit, or how to be more thoughtful, and to continue educating people and bringing awareness to improper uses of the technology: not feeding personal information into it, and not trusting it completely. That's the big thing. I keep talking about ChatGPT as if it's the only thing out there; obviously, there are a ton of other cool Gen AI tools, so let's just say that if I say ChatGPT, I'm talking about these tools in general. If you are using ChatGPT, it has a little disclaimer at the bottom that says, look, we make mistakes, and the big trend that we're seeing now is that we're moving away from this deification of the technology. We're not treating it as infallible, and that's a really crucial first step. Lawyers learned that the hard way initially, when they were citing case law that didn't exist. They thought, oh, yeah, cool, AI is out here, it can do all this stuff for us, and as it turns out, it can't do everything. I think we have to continue to have this lens and perspective that AI is here to augment human capability, not to replace facets of it. As long as we can continue to really drive home that point, that's a really critical first step.


Debbie Reynolds  13:06

I agree. Very wise words. What's happening in privacy right now that's concerning you, something on the horizon or something you see in the news where you think, I need to look closer into this?


Matthew Lowe  13:19

A lot of things, Debbie, a lot of things. Where to start? I guess I'll start with one, and I don't want to frame it as something that's keeping me up at night and waking me up in cold sweats, because I don't want people to be afraid, necessarily. I think these are exciting challenges that we want to be thinking about. You asked about privacy, but first let me squeeze in a concern that I have about AI, which is that the first deepfake I ever saw was extremely freaky, very bizarre. I think that for those who were a little bit more tech-savvy, their first instinct was, this can't be real, because we knew the potential for manipulation when it comes to digital images or video. If you look really closely with a discerning eye, you can see a glitch around a mouth, or something that looks uncanny-valley-ish, something inorganic, and that gives you the reassurance that this isn't real. But the same as with phishing emails, we forget that the vast majority of the world doesn't really have that level of sophistication where they are going to immediately be able to discern, okay, that's not real. Deepfake technology is getting better and better, so even for the discerning eye there is a little bit of concern, because if you see something like Putin or Biden or whomever saying something about war, or saying anything that can incite some kind of harmful reaction from the greater population, that's concerning. One of the things that I'm really trying to track is how we can be better at developing tools that allow users and viewers to authenticate content that is potentially generated by AI. In the world of cybersecurity, we're also seeing how people are taking just a couple of clips of someone's voice, and now you can create this call: hey, Mom, I really need money. I'm trapped on the side of the highway. My car broke down. Again, you still have all of those social engineering tactics of urgency and desperation and time pressure, but now you have a voice that sounds exactly like a person, or you're posting a video that looks exactly like a person. I am concerned about that, and I do think we need to have protections around it, definitely defensive technology that we need to be investing in and continuing to track. That's one.


Debbie Reynolds  15:55

Yeah, I agree with that, and actually, you made a really interesting point that I would love to chat more about. I think someone posted something on LinkedIn, a fake photo, for example, and some people picked it apart: oh well, their fingers look different, their mouth looks different, or whatever. But these technologies don't have to be perfect to fool people. You see a lot of pictures every day; you don't really look, you don't examine them. You may only see a picture for a second, but if it's enough to get you to change your behavior, or make you afraid, or make you do something you maybe wouldn't ordinarily do, that's a problem. That's social engineering on steroids, in my opinion, and as you say, these technologies are getting better and better. It's almost like counterfeiting, where people say, oh, wow, I could tell this item was counterfeit because of this and this and this, and the counterfeiter says, yeah, you just told me what I need to do to improve the thing that I'm doing. It's definitely a double-edged sword, but I want your thoughts on that.


Matthew Lowe  16:58

It's so funny. The more content we put out there to help people, the more we're potentially handing threat actors an instruction manual to reverse engineer and think through. I do think it's interesting because, yeah, AI still has not quite gotten fingers right; that is, for the moment at least, seemingly a good tell. But no, I couldn't agree more. I'm thinking about what to add to that, but I think it's just perfectly said. It's the game of cat and mouse that we continue to play in the cybersecurity space, right? Actors become increasingly sophisticated, with things like domain spoofing. To your point, even if 90% of people can look at something and identify that, yeah, this is probably not real, you're exactly right. If you ask any person who is on a blue team, or a pen tester, anyone in the cybersecurity space, social engineering is just a numbers game. It's not about crafting the perfect attack at all. It's about, if I drop 100 different emails to people, at least one or two will click the link, and that's enough; that's a good day's work. So absolutely, with just those small issues here and there, you're always going to get vulnerable members of the population who give in to that, and then, exactly, the attacks continue to advance and become increasingly sophisticated, and it's getting so much harder. Back in the day, one of the most helpful hints we would give to people was to look out for typos: if you see a message and it has that urgency, also look for typos, because typos could be an indicator that there's something wrong here. Well, if you have a message that's generated using AI, you don't have to worry about typos. I'll tell you kind of a funny story; this is one of my favorite things. In one of the classes that I teach, one of our simulations for the students is to come up with your own phishing attack. I've had some awesome ones. I've had people do targeted phishing attacks by looking at my LinkedIn, and they'll come up with a message about, say, a publication that I was working on, and they'll say, hey, this is the email address to submit it to, and also, can you include this information about yourself? Really well done. But I remember this one message in particular that was just beautifully crafted: no typos, perfectly phrased, really compelling, and I was like, wow, how did you do this? They were like, oh, I just asked ChatGPT to do it. They said it gave them trouble in the beginning, because obviously these tools know to flag it if you're asking them to do something potentially harmful. But all he did was say, yeah, I'm using it for an example in a classroom, and ChatGPT was like, okay, yeah, that's fine. So I think the other thing that we need to think about is how easy it is to potentially bypass those safeguards, those safety rails; those need to be a little bit more robust, and we need to be a little bit more thoughtful. I mean, I've also gotten it to spit out code for logic bombing people, just Python code that I could run if I wanted to and then manipulate further to do things that are increasingly harmful, or whatever the case is. These are all things I think we need to be wary of. AI is really changing the game; what we would tell people to look out for just a couple of years ago is changing so rapidly.


Debbie Reynolds  20:08

Yeah, and it's so funny that you mentioned the thing about typos. That's one thing that used to bother me when people mentioned it, because it hasn't been true for 10 or 15 years. Threat actors have spell check; they have grammar checkers, too. So that hasn't been true for a super long time, but people still fall for these things, and they're just becoming more and more believable, especially with the speed at which these developments are happening. The money that's going into these technologies is so massive that the cycle in which they improve will be astronomically fast. A regular technology life cycle for a change or a product is maybe three years, but I've done presentations about things like ChatGPT last year that aren't true today. There are so many rapid changes happening because so much attention and money is being paid there, so people really need to keep up and be on their toes about these technologies and these changes, because they are scary and frightening, but interesting too. Definitely exhilarating.


Matthew Lowe  21:15

Yeah, absolutely. And again, I don't have to disclaim it, but I will disclaim it: I'm not picking on ChatGPT at all. I think OpenAI has been very thoughtful about sharing issues as they arise and working with Congress to have a little bit more foresight into potential pitfalls and things like that. So I think ChatGPT is an incredible technology, one of many in the OpenAI suite that are incredible, and I'm a fan. Every now and again it sounds like I'm using it as a punching bag, and I just want to make it clear I'm not.


Debbie Reynolds  21:45

I have that experience online because I use it, and I like it; I think I've even sent posts out like, well, that's the way I use it, I put my list up there, and stuff like that. Sometimes, when you're saying you need to watch out for dangers, people can perceive that you're trying to slam a particular technology, and that's really not the case. Just like you wouldn't use a wrench to put a nail in the wall, the same tool is not the right tool for everything. That's my thing. What do you think about, I guess, the hot-button question that all US privacy people get asked or want to chat about: are we going to have a Federal privacy law? Is that going to happen?


Matthew Lowe  22:25

It's funny, because I remember listening to one of your podcasts; I think the question at a lunch that you had attended was, is it going to be the first female president, or is it going to be a Federal privacy law? Which one's going to come first? I thought that was so interesting, and from that moment when you raised it, I forget with whom, I remember thinking about it myself: yeah, when are we going to see that Federal privacy law? I think it's helpful that the Biden administration has been very active in the cybersecurity and Data Privacy space. You mentioned the executive order earlier, one of a couple in this space. They have been very diligent and very thoughtful, and one of the most recent pushes was that, look, we need Federal legislation. I think the question is, are we going to have a Federal AI regulation similar to the EU AI Act and totally bypass a privacy act? Or are we going to have one Federal law that encompasses everything? I'm not sure, but I do think that it's more likely today than it would have been a few years ago; that's what the signs seem to indicate, because Biden does seem to be actively pushing for that legislation. What the horizon looks like for it, I don't know, and I think a lot of privacy professionals are, I don't want to say disenchanted, but we've been talking about one for a very long time. I am hopeful. I do think that it's coming. What form does it take, and what role does AI play in it? Those are other questions that I'm not really sure about, but fingers crossed.


Debbie Reynolds  24:01

Yeah, I think this AI push has really elevated the privacy discussion, and to your point about what's happening with the AI Act in Europe, I don't think they could have done the AI Act if they didn't have the GDPR. Having privacy and data protection as a foundation was a building block that Europe used, and that's why they're so far ahead on AI regulation. I think we want to get there in the US. We have not taken the regulatory path around AI, but we're nibbling at the edges of it, trying to figure out what it means and what we're going to do. People who study this area know that even if they want to do something about AI, they have to address privacy at some point. So I feel like those conversations are happening, probably in tandem now, as people are looking more closely. What do you think?


Matthew Lowe  24:59

You're absolutely right, and I think that America has always been globally competitive, or has sought to be; we don't want to be left in the dust when it comes to having a voice in global regulations. Right now, the EU is really leading that narrative because, to your point, they have the GDPR, and now they have the EU AI Act. So there is some ground to cover for us, but there is definitely incentive and motivation to put some thoughts out there. I think you're definitely spot on.


Debbie Reynolds  25:29

I don't know about you, but I'm exhausted every time I see a new proposal come up for a Federal privacy law. There was so much interest, so much excitement about the ADPPA, and I never even talked about it, because I thought, it's not going to actually pass, so why would I put my effort there? So my thing is, I want to see something that's actually going to pass before speculating on what's going to happen with drafts. What do you think?


Matthew Lowe  25:56

Like I said, I'm hopeful, so if I do hear something, I get a little excited, and I'm like, maybe, maybe this is going to be the one. I think it will be really helpful if politicians continue to follow the path they have been on with AI, where they involve industry leaders and leading companies in the space in the conversation. One of the larger issues when the CCPA first came out, and with a lot of these regulations, is that there sometimes seems to be a little bit of a disconnect between what's written on the page and the actual implementation and execution, because it doesn't always cleanly line up. Originally, a lot of these regulations came out in response to specific companies and industries. For instance, Meta was probably top of mind for a lot of regulators when they were thinking about this, but a social media platform functions in one way, and then there are all these other technologies and companies that are also in the scope of these regulations that function in another way, and it's not always obvious to them how this is supposed to work. So the more we can have that collaboration between stakeholders and drafters, the closer we're going to get to something that is compelling and passable. That's my hope for that process as well.


Debbie Reynolds  27:13

I have many concerns, but one is that sometimes I feel like some of these regulations are trying to articulate things that are either very difficult or impossible to do technologically. You're trying to bridge that analog-digital gap, but it may not exactly translate. What do you think?


Matthew Lowe  27:39

Absolutely. I think it was the EDPB, and I hope I'm not messing up that acronym, but there was some guidance that came out, and one of the recommendations for compliance was homomorphic encryption, a technology that we simply weren't ready yet to use and adopt. There's a little bit of a conflict where, to your point, you may understand something conceptually, but there's dissonance between a conceptual understanding of something and a realistic, feasible, and practical implementation in a meaningful way. It's not quite as simple, and one of the most difficult things when it comes to policy writing is that sometimes it gets a little bit pie in the sky. Exactly to your point, that's further evidence of why it's very helpful to have people in the room who are responsible for implementation on the technical side, saying, I don't think we can really do that today; that's something we can aspire to and should track until it does become feasible, but here are options for today that make sense. One thing that I liked about the EU AI Act was that it does take into account the complexity of the technology. AI is not just AI, and one of the issues we're running into now is that everybody seems to think that AI equals Gen AI equals this one single developer, when in reality, AI has been around for quite some time. It has a lot of different use cases, and it uses a lot of different data depending on the use case, so that risk-based approach is great to see: if the technology is functioning in this way, here are the things that are expected of you. I would hope that we're going to continue to see regulations move in that direction as well.
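For listeners curious what homomorphic encryption actually promises, here is a minimal sketch assuming the open-source Python library phe (an implementation of the Paillier cryptosystem, which is additively homomorphic). The library choice is illustrative only; the guidance Matthew mentions does not name a tool. The point is the property regulators were gesturing at: a processor can compute on data it can never read.

    # A toy demonstration of additively homomorphic encryption.
    # Assumes: pip install phe  (the python-paillier library)
    from phe import paillier

    # Only the holder of the private key can ever decrypt.
    public_key, private_key = paillier.generate_paillier_keypair()

    # Two parties encrypt their values before handing them to a processor.
    enc_a = public_key.encrypt(50_000)
    enc_b = public_key.encrypt(62_000)

    # The processor adds the ciphertexts without seeing either plaintext.
    enc_sum = enc_a + enc_b

    # Back with the key holder, the decrypted result is the true sum.
    assert private_key.decrypt(enc_sum) == 112_000

Paillier supports addition of ciphertexts (and multiplication by plaintext constants) but not arbitrary computation; fully homomorphic schemes that do support it remain far more expensive, which is the practical gap between concept and implementation that Matthew describes.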


Debbie Reynolds  29:37

Very good. So, if it were the world according to Matthew and we did everything you said, what would be your wish for privacy anywhere in the world, whether that be regulation, technology, or human behavior?


Matthew Lowe  29:52

So you're asking me, if I could have one wish come true with regards to privacy? Yes, okay. I think it would be really great if we could solve the transparency problem. Transparency is a really difficult concept from end to end. It starts with general awareness, so getting more users and developers aware of the importance of privacy. Today it matters a lot; it's very difficult, regardless of what geography you're in, to develop a technology and sell it in the market without privacy being a factor. But the extent to which it's a factor matters, and if you ask the average person on the street their thoughts on privacy, the answers will be quite variable, because I just don't think there is as much awareness as there could be. Then, around AI and AI models, a challenge that is top of mind and that a lot of companies are taking seriously is how we create transparent models and how we really make all of this visible to those on the outside. If we can achieve transparency, we can have more informed decisions from all stakeholders; right now, when you're trying to solicit decisions in spaces that are a little bit gray or muddied, that's not an ideal condition. So yeah, if I could snap my fingers and wake up tomorrow with one thing different in this space, it would be that there's a little bit more transparency and awareness.


Debbie Reynolds  31:29

I agree with that. I think transparency is so vital, and maybe I shouldn't be surprised that companies don't want to be as transparent as they should be. But I think it can be more of a benefit than a problem, because if people feel like a company is being transparent, that engenders more trust.


Matthew Lowe  31:50

Absolutely. It's a range. Some companies are taking it very seriously and doing awesome things, setting a precedent where you think, cool, that's what we should aspire to; if you're benchmarking your own privacy processes and maturity, those are great targets. Then there are others that are clearly, I think, still trying to figure it out. So a little bit more alignment there would be great.


Debbie Reynolds  32:15

I agree completely. Thank you so much for being on the show. This is great; I'm so excited that we were able to do this, and I'm sure the audience will find this episode as valuable as I did. Congratulations on your work. Being a professor, I know, is not the most well-paying gig in the world; I did it when I was an adjunct professor at Georgetown. But it's important work, because people who are maybe earlier in their careers, or earlier in this phase of understanding and adopting AI, get to have someone like you to really advise them and show them the right path.


Matthew Lowe  32:55

Yeah, absolutely. I think you did it for the same reasons I did, probably, which is just the fulfillment; if you can get one or two students in a semester to light up and get interested, that's a job well done. That's my metric. Thank you so much for having me on. This was a lot of fun, and, yeah, it was great talking.


Debbie Reynolds  33:16

Yeah, we'll talk again soon, and hopefully, we'll have other opportunities to collaborate.


Matthew Lowe  33:22

Yeah, definitely sounds good.


Debbie Reynolds  33:24

All right, thank you so much.