"The Data Diva" Talks Privacy Podcast

The Data Diva E165 - Pamela Isom and Debbie Reynolds

January 02, 2024 Season 4 Episode 165

Debbie Reynolds, “The Data Diva,” talks to Pamela Isom, Chief Executive Officer and Founder of IsAdvice & Consulting LLC and former Executive Director of the Artificial Intelligence and Technology Office, U.S. Department of Energy (DOE). We discuss Pamela’s extensive experience in various fields, including AI and cybersecurity. We discuss the evolution of AI from expert systems to Generative AI, with Pamela highlighting the need for accountability, bias mitigation, and ethical governance. We also discuss the limitations of AI and the importance of using it as a helper, not a replacement for humans. Pamela shares her experience of being kicked out of a group call because someone thought she was an AI, highlighting the lack of human judgment in relying solely on AI tools.

The conversation then shifts to the risks and challenges associated with AI, including privacy concerns and the potential for AI to make judgments about people. Pamela draws attention to the alarming case of a 14-year-old girl whose photos were used in deepfakes, highlighting the serious risks posed by this technology. We also discuss the challenges of determining the authenticity of AI-generated content and the need for watermarking and content authenticity initiatives. Debbie Reynolds and Pamela Isom delve into the importance of data lineage and provenance in AI decision-making, emphasizing the need to know the origin and journey of data.

We conclude with a discussion of the importance of ethics in AI and the challenges of implementing it in government, and we highlight the importance of networking and building relationships in the field of AI and technology. Pamela also expresses concern about children receiving scam calls that sound like their loved ones, emphasizes the need to educate children about cybersecurity and privacy, and shares her hope for Data Privacy in the future.




SUMMARY KEYWORDS

ai, data, people, cybersecurity, ethics, governance, information, today, talk, executive order, systems, call, happening, lineage, understand, privacy, part, kids, bias, real

SPEAKERS

Debbie Reynolds, Pamela Isom


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva." This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show. Pamela Isom is the Chief Executive Officer and Founder of IsAdvice & Consulting LLC. Welcome.


Pamela Isom  00:40

Thank you. I appreciate being here.


Debbie Reynolds  00:44

Well, I don't know that I can do your introduction true justice. You have very deep, very deep experience in areas of ethics, governance, innovation, technology, and digital transformation. You and I have risk management, AI, data, and cybersecurity in common; you're also a keynote speaker, as am I. So I would love for you to introduce yourself and tell me how you got where you are in your career trajectory.


Pamela Isom  01:20

Okay. I'm gonna start kind of from the top, from where I am today, and work backward. So yes, I'm CEO of IsAdvice & Consulting. The organization specializes in using AI for equity, opportunity, and sustainability. I'll explain what that means in a little bit. But it's safe AI, so the practice of safe AI for equity, opportunity, and sustainability. I'm a former senior executive and director of the Artificial Intelligence and Technology Office at the Department of Energy. So I came fresh out of working at the Department of Energy and serving the Federal government into running a business of my own. Prior to that, I was the senior executive over all things application engineering and development at the US Patent and Trademark Office. So in both of those cases, I was leading application development and application engineering, and overseeing, at Energy in particular, all things AI, so the AI portfolio at the Department of Energy fell under my jurisdiction. One of the good things about my work there, and then I'll get to my history, is that I was able to establish and lead a team where we established AI ethics for the department. We also stood up, under my jurisdiction, the AI Advancement Council; the AI Advancement Council was mentioned in the Blueprint for an AI Bill of Rights. And so that was yours truly. So yours truly and my team worked hard to get that council going, and we feel good today about the Executive Order, because that information has carried forward as one of the examples; it's not in the order itself, but it influenced it, and it made it to the Bill of Rights. So let's talk about some of my history. I started using AI when it was called OPS5, and it was in the 1990s. But that's all I'm gonna say about my age. It was in the 1990s; I just learned it, and it was a great experience. So I started back then, just using it as a software engineer. My background is, I'm a computer programmer, and then systems engineer. Then I moved into more IT specialist roles at big companies in the private sector. Then I landed in the private sector as an executive IT architect and an IT architecture principal, so I was a principal. And then I hopped straight from there, as I worked in the private sector, responsible for engineering and development. And I had lots of patents; I have five, and I think four are still active today. So I was responsible for innovation and patent generation and leading these innovative activities. And I was able to carry that forward and move from being on the side of the table where I am applying for patents over to the other side of the fence, where I'm managing the systems and enabling others to have the tools that they need to be recipients of patents and trademarks. So that's kind of my career in a nutshell. I'm very thankful for my experiences, and cybersecurity is somewhere in between there. In everything that I've done, especially in the government, cybersecurity was a part of my responsibility, because I had the systems that I had to oversee and ensure that they were deployed. And then I also was Chief Innovation Officer and the leader, Executive Director of cybersecurity, when I was in the private sector, right when I first came out of the Federal government. So I definitely am very familiar with cybersecurity, and that whole NIST 800 series of controls and all that; I'm sure we'll get to that.


Debbie Reynolds  05:35

Well, that is, oh, my goodness, that's tremendous. You're the perfect person to talk to about all the stuff that's happening. So, since you're someone who's been an early adopter, you have seen AI go through all these different transitions over the years. What are your thoughts on where we've come from with AI and where we're going?


Pamela Isom  05:56

Where we came from, what I'm used to in the past, was expert systems. And so it was machine learning, but not like it's been these past few years, right? It was the early stages, more rules-based expert systems. So the data that you needed was right there; there was a rule, and you said, if this happens, then that's what you do. It was not procedural programming; it was rules-based expert systems. But then AI moved into, okay, let's really get into machine learning, where you're starting to learn from the data, and more advanced machine learning concepts and practices. But then lately, the past couple of years, that's when Generative AI really started to take hold. And that's where it's not just learning, but it's also generating text. So that's where the Generative AI wave really hit, really took off. Generative AI has made such an impact, number one, because people are now more interactive. So people like to be interactive; they like to have and feel like they have some say. And the prompt-based approach to Generative AI makes people want to engage because you have control. And you don't feel so much like it's the black box; you still know that something's going on behind the scenes, but you feel more like it's happening because of the prompt that you gave it. So, if I just learn how to get the prompt better, I can get a better outcome. So it's more empowering to the users. That's where AI has come from. And where I think it's going gets to the conversation that's happening now about the AI chatbots. So where it's going is, AI is becoming more distributed; it's going to rest more in the users' hands. So it'll be in their mobile devices; there are announcements out about how you'll be able to get the AI bots from the AI stores. So at this point, the users are going to feel even more empowered. So that's what's happening; that's where it's going. I think that's all goodness. I think that the distributed approach to AI is good. The worry that I have is the ethics and the privacy. So the ethics, the cybersecurity, the privacy, those are the worries that I have; those are the things that I think we need to build in as we move along. So don't wait until after the fact. We should be thinking about these things right now, integrating this alongside the innovation.
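To make the rules-based idea concrete, here is a minimal sketch in Python of the "if this happens, then that's what you do" pattern Pamela describes. It is an illustration only, not OPS5 syntax, and the facts and rules are hypothetical: the knowledge lives in explicit hand-written rules rather than in weights learned from data, which is exactly the contrast with the machine learning that came later.

```python
# A minimal rules-based "expert system": knowledge is explicit if/then
# rules, not learned weights. The facts and rules are hypothetical.

facts = {"temperature_f": 103, "coolant_low": True}

# Each rule pairs a condition over the facts with a recommended action.
rules = [
    (lambda f: f["temperature_f"] > 100 and f["coolant_low"],
     "Shut down the system and refill coolant."),
    (lambda f: f["temperature_f"] > 100,
     "Reduce load and monitor temperature."),
]

def infer(facts, rules):
    """Fire the first rule whose condition matches the current facts."""
    for condition, action in rules:
        if condition(facts):
            return action
    return "No rule matched; defer to a human."

print(infer(facts, rules))  # -> Shut down the system and refill coolant.
```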


Debbie Reynolds  08:32

I agree with that. Let's go deeper on that. Tell me a little bit about your concerns with the ethics and privacy; I can definitely tell you what I think.


Pamela Isom  08:40

So ethics, traditionally, is accountability: are we making sure that the solutions are not going to cause any harm? Are we doing the things that we need to do to ensure that bias and discriminatory practices are not amplified as a part of the outcome? So ethics, in my mind, is making sure that you have some principles in place and then operationalizing those principles through strong governance. So I always say ethical governance; sometimes you'll hear it in some of my readings, and I'll speak to ethical governance, because governance without ethics is, I think, useless. So we want ethical governance; you need ethics first. And the ethics needs to evolve. And the reason why I say that is because the technology and the empowerment of the users are evolving with these personality bots and these AI agents. Now I'm able to create my own smaller language models and create my own AI. I'm able to do it myself. So there needs to be some interactive governance there to manage this type of activity and to oversee this. The other thing is the data. So, what data are we going to be using to create these personality bots and the AI agents? So let's talk about personality bots for a minute. My big concern about that is people starting to open up to these avatars and these personality AI chatbots, and opening up too much. So now all of our ethics pertaining to safeguarding information get lost, because we lose sight of the fact that these avatars are not real. And so my biggest concern today is to remind people that Artificial Intelligence is not real intelligence. It's artificial. It's not real. But we can get so caught up, especially when we start with the personality bots, with the avatars that are going to look like we want them to look, or look like a person that we admire, that we forget that we're not talking to that person. And this is happening. Debbie, do you hear about people who are opening up to dating avatars and things, telling them too much information? Oh, my gosh, see? So that's my concern. And where's the applied ethics and governance? Oh, my gosh. And it's only getting exacerbated. So I'm big on ethics; it is so important. And it needs to be evolutionary, because what worked even last year has to be adapted to what's going on today. And tomorrow.


Debbie Reynolds  12:04

I agree with that. I'm also concerned there, like even in robotics, I don't like humanoid robots and stuff like that, because I feel like what they're trying to do is elicit almost like an emotional response from a person. Emotions are not logic, right? So you may have a feeling when you're interacting with this bot, like, oh, my goodness, this bot understands me. It's like, well, no, it doesn't. It's just listening to what you're saying and trying to give you an answer. It's not responsible, right? So I've seen people say, well, let's use it for suicide counseling. I'm like, no, absolutely not. We can't abdicate our human responsibility to technology, right? It's supposed to be a helper. And so for people who know how to use AI in the right ways, it should be a helper to humans; it shouldn't be a replacement for humans, and it certainly shouldn't be making judgments about people. But I agree with you also that people do give up too much information. I tell people, if you want to be really private, stop putting it on the Internet, right? Also, I think one thing that AI is doing is using data in ways that people probably never anticipated. And so it's creating more challenges, or more risks, for people around privacy. What do you think?


Pamela Isom  13:20

I think that that's true, because you don't intend it to. So AI is pattern recognition; it's all about looking for patterns. We didn't really think about that, so it's taking this information and now drawing conclusions based on patterns that it sees. It would be good to use AI for one-on-one coaching, to say, tell me some patterns that you're seeing about me, for me, myself, and I, not for the world, right? And so that's the risk that we take when we put things out on the Internet. And then these language models are looking at this data. So you have to be very wise; we have to be stewards of how we handle our own personal data, as well as others', because that data is being used in ways we didn't think about.


Debbie Reynolds  14:16

Yeah, you mentioned expert systems for AI. And when I talk to people about AI, they're like, well, how do I know, especially with Generative AI, how do I know that the result is right? Or how do I know that it's giving me the right information? And I always tell people, you're asking the AI to do a particular task. If it doesn't do that task, then there's a problem, right? So the result really shouldn't be unexpected. I think you really need to think about it that way. The analogy I give is that you can go to Costco and order a cake for your kid's birthday. You go to pick it up, and they give you a casserole. You're like, wait a minute, something went wrong here, right? So that's what I think about AI: you're supposed to tell it what you want it to do. You should understand what the capabilities of the tool are, and then the results should be expected. They shouldn't be unexpected. What are your thoughts?


Pamela Isom  15:07

So the AI should generate the results that you expect, but you should verify the results. So, I've been talking to some workplace investigators; I had the pleasure of meeting with a group of workplace investigators not too long ago. And they were looking at, as an example, how do we use AI to help us with our mission, which is investigating employee practices; maybe there's been a dispute claim, and they're investigating that. So they're interviewing people, they're having conversations with folks, they're looking into information. And what I conveyed to them is, I think it is perfectly fine for you to use AI to help you with your research. But I think you should be very prudent when that information comes back. It's just like anyone else who provides information to you: you do your due diligence to verify the information that you have received; I don't think that you just take it at face value. And I never think that you should use that information, unverified, to make a decision that's gonna impact proceedings in court, where you're gonna pass that information on to a lawyer, who then uses it, so that a human is impacted by it. I think that instead of doing that, you use AI to help you. It's an assistant, as we said here already; it is truly an assistant. It is not the end-all, be-all; as much as we would like it to be, it just isn't there yet. The other thing I advised them was, let's say you're in a setting and you're having an interview. And it's you, it's your client, and then you've got your AI right on your device. They need to know; tell them that you have a digital assistant, and that your digital assistant is here to help, to either do note-taking, because we have AI today for note-taking, or to help you as an interviewer, because the AI can kind of let you know, I'm sensing something based on what the person says; maybe get clarification, ask the person if they could repeat that, I didn't quite understand what they said. So I think that we should use AI for those purposes. And I think you should be transparent and let people know that you're using it. Because as these customized AI agents and digital assistants take off, wherever we're using them, other people are gonna be using them too. And so we need to know that you're using these AI agents, and you need to be forthright and let them know that it's a digital assistant. And if they want to use one, the people that we're interviewing, in that case, if they want to have one, they should be able to. You just need to know about it. But again, trust but verify. If you're not doing your due diligence, you're not doing a good job.


Debbie Reynolds  18:30

I agree. I agree. Right. So, just like there was a case recently in the news where a guy who's been a lawyer for 30 years, he should know better, had AI write a brief for him. He submitted it to a court, and a lot of the references were made up; they weren't correct. And so he got in big trouble about that; I think he got sanctioned. Definitely not a good look. But if you think about it from a human perspective, let's say he let his assistant or paralegal write his brief and submitted it to the court without looking at it. That doesn't make sense, right? So I think we need to be able to put AI in its place, which is: it is a helper; it's not going to supersede your knowledge and your wisdom, because it is not human like you are. And even though I'm hearing stories about things being sentient and stuff like that, it really is not human and is not going to be human. It may fool people, and maybe that's like a selling point. Because AI is so Wild West right now, I feel like there are just so many uncheckable claims, right? Really dramatic claims are being made about what these AI tools can do. These companies really want to sell those tools, but you, as the organization using them, take the risk, right? So you have to be able to understand: what is this tool doing? What am I asking it to do? Is it giving you the right thing? And you want to make sure it has the right level of, like you say, ethics and governance.


Pamela Isom  20:04

So data ethics, cybersecurity ethics, and AI ethics are really all three forms of ethics in this situation. So that's why I usually just say ethics, right? But I'm looking at all three flavors. I'm going to tell you about an incident that happened to me not too long ago. I had this situation where I'm on a call, like I am with you today. And so I'm listening; it's not an interactive call for me, I'm just kind of listening and will chime in when needed. And I get kicked out. I just get booted out. So I tried to go back into the call; it's a group call that's for dedicated individuals. And so I was truly invited and truly accepted. I was legit. I tried to go back in. I couldn't get back in. So immediately I'm like, okay; there's not many people, I'm a Black woman, right? So it was one of those calls where I'm probably the only one on the call, and it doesn't matter, right? It's neither here nor there. But the video was on, so you could see me on this call. And I got kicked out. I tried to get back in, but I couldn't, so I pinged someone, and I said, what happened? I was not able to get back into the call. What happened? Oh, yeah, we thought you were an AI. Okay. So I'm just like, what? They thought I was an AI. I'm like, okay, let me get this straight. First of all, who tells another human being, we thought you were an AI? Where's the ethics in that, right? Who tells another human being, I thought you were an AI, so we kicked you out? So they had to clarify that. No, no. Okay. So I'm just like, okay, how are we going to handle this? Because sometimes I try to be the adult. So I'm waiting, and I try again, and I can't get back in. Oh, yeah, they fixed it; you can get back in. And I'm like, okay, we're not gonna sit here and do this; I'm not gonna do this. I'm gonna go do something else; you fix this problem. So then I get this email saying, your name has AI in it together: Pamela ends with A, and then Isom begins with I. And I'm like, what's wrong with your script? You didn't look me up? Was my background too dark? You know what I mean? So I get that we want to be good cyber stewards, but in that particular example, we needed good governance. That was not good. That was not good. Because there's two sides to governance, right? There's the part of the governance where you're trying to do the right thing, trying to be a good steward. But then we forgot about the human aspect, and how that made me feel, right? And don't ever tell me; now I'm like, okay, don't tell me you thought I was an AI, because I'm just gonna have an attitude. No, I'm just kidding. So that's my point: we need to have really good governance, really good stewardship. And that is where we are not today, right, with these types of tools.


Debbie Reynolds  22:42

Right. Yeah, I'm very concerned, basically. The scenario that you mentioned, first of all, it's horrifying, what happened. And I agree, the human side was not there. That person basically abdicated their judgment to the AI tool; they didn't actually think. That's why God gave me a brain: to think, to reason, and to make a judgment. So I don't think AI should be used to make judgments, right? I think those judgments should happen with humans. Humans should look at the data, make their own judgment, and not abdicate their responsibility to technology.


Pamela Isom  23:58

Yeah, that was my point.


Debbie Reynolds  24:02

So what are you seeing in the news right now that concerns you?


Pamela Isom  24:10

So one example is just recent: the writers' strike is an example that I can't really speak to because I don't have all the details yet, but I understand it, and I get it. But the one that really, really gets my attention is the young 14-year-old who had an experience where her photos were used in pornographic content through deepfakes. So one of the things that we have to do with AI, privacy, cybersecurity, all of that, and it's difficult to do: in this particular case, according to what they reported, and I really appreciate the fact that she is out talking and making others aware that, hey, this is real, this is really happening; according to what I read, they grabbed photos of her and then used those photos in inhumane ways, right, pornographic ways. That's a form of deepfake. And deepfakes are nothing to take lightly. Creating these images and then using somebody else's face has been going on for a while, but now it's even more pervasive. And some of it is because of the advancements in being able to create images from AI, etc. So that is what happened to her. What I liked is that she brought it out. A 14-year-old who could have easily said, oh, my gosh, I'm so embarrassed, right? But it wasn't her. It was her face, but it wasn't her, right? So I appreciate the fact that she brought it to the table. But more so, I think we should learn to be very careful when it comes to tracking our data and how our data is used. And then there should be stipulations in place to protect us when these things happen, and it has to get better. It has to get stronger; there have to be better rights for us when these things happen. And I believe that cases like that, although we don't want them to happen, help make the point that we can be exploited, and deepfakes are very, very real. I have a customer who received one, because, you know, you can take audio; you can have AI create audio for you. And I have a customer, Debbie, who contacted me and said, hey, I don't know if this is truly a real situation or if this is an AI. Can you help us figure this out? So I have a great job, because I was like, yeah, let me see what I can do, right. And so that due diligence is what we have to pay attention to, because we don't know what's real nowadays and what's not. And so I teach this in my class, because one of the things that I do as a part of my role now is to provide coaching and training. And I talk to clients about ways that we can start to look for flaws in the AI voiceovers and the AI imagery, to help us tell whether it's genuine or not. And then I just got involved with something that has to do with content authenticity, so that we can better watermark, and you can tell the lineage and the provenance of the materials.
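The content authenticity work Pamela mentions is, at its core, about binding verifiable provenance to media. Here is a minimal sketch, assuming a shared secret key for simplicity; real initiatives use public-key signatures and far richer manifests, and every name and field below is illustrative only.

```python
import hashlib
import hmac
import json

# Hypothetical shared key; real systems use public-key signatures,
# so anyone can verify but only the creator can sign.
SECRET_KEY = b"demo-key-not-for-production"

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Build a provenance manifest: who made it, plus a content fingerprint."""
    manifest = {
        "creator": creator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Any edit to the media or the manifest breaks the check."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest(),
    )
    content_ok = claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return sig_ok and content_ok

photo = b"...image bytes..."  # stand-in for real media
m = make_manifest(photo, "Pamela Isom")
print(verify(photo, m))            # True: content matches its manifest
print(verify(photo + b"edit", m))  # False: tampering is detectable
```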


Debbie Reynolds  28:01

I would love to talk about lineage, and also what you said about watermarks. So in the Executive Order that came out on October 30th, Joe Biden's AI Executive Order, one of the things they call for is for industries to find ways to mark content that is fake, that is generated by AI. As I'm thinking about this, okay, I'm a data person. I've been doing data stuff forever, you know, helping people create expert databases, different things. So I know a lot about how data works and how it flows. How realistic is this, in terms of being able to watermark or mark the data that comes out of these systems so that people can tell whether it's real or fake? I almost feel like, if they do it, you'd almost need a system to read it. What are your thoughts?


Pamela Isom  28:55

Well, AI can spot itself. So it would be a good idea to use AI to detect whether it's interfacing with another AI, right? Because it's AI pattern matching against pattern matching; and gamification, sometimes I talk about game theory: put them against each other. That's a good strategy for cybersecurity, too, and this is cybersecurity. Authenticity: one way to do it is using AI to spot the patterns, because if it's AI-generated, it has certain pixels and characteristics about it that another AI is going to recognize. So that's gonna be one of the keys. And this is very real. Like I said, I mentioned, I think it's called the Content Authenticity Initiative; I applied, and I'm waiting to hear back, but it's important, and there are others. One effort that I'm involved with, where I actually looked into testing their solution, is a deepfake detection system that's out there. It detects, like I said, I had a customer, and I was trying to figure out whether her voiceover was real or fake or AI-generated, and they have a deepfake detection tool. So the industry is out there creating tools. Deepfake detection, in some places, is in its infancy, and for some, it's a little bit more mature. But this is very real. It's very real.
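As a toy version of "AI pattern matching against pattern matching": many detectors look for statistical artifacts that generators leave behind, such as unusual energy in the high-frequency bands of an image's spectrum. The sketch below is a naive heuristic with a made-up threshold, not a real deepfake detector; production systems are trained classifiers, and the idea that one ratio separates real from fake is an assumption for illustration.

```python
import numpy as np

def high_freq_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy far from the image's low-frequency center.
    Some generators leave unusual energy in these outer bands."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)
    outer = spectrum[dist > min(h, w) / 4].sum()  # energy in the outer band
    return float(outer / spectrum.sum())

def looks_generated(gray_image: np.ndarray, threshold: float = 0.35) -> bool:
    # The threshold is invented for illustration; a real detector learns
    # its decision boundary from labeled real and fake examples.
    return high_freq_ratio(gray_image) > threshold

img = np.random.rand(256, 256)  # stand-in for a grayscale photo
print(high_freq_ratio(img), looks_generated(img))
```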


Debbie Reynolds  30:31

Yeah. You mentioned provenance and lineage. So I talk a lot with companies all over the world about AI. And one of the things I always tell them is to be prepared and think about data lineage in AI systems, right? For companies that aren't accustomed to using AI, maybe this is a new way they're using data; they have to think about where the data came from. In other words, maybe they've never had to do that, right? In their company, it was, okay, we have this data, we're going to protect this data; it doesn't matter where it came from, we're just going to protect it. But in AI, you may have to know where the data came from. And it's important to know the data's journey: who did what, what's in these databases, what are you putting in these models, what are you prompting, and what comes out of the models. That lineage is really important. And I think that's going to really change people's work life, because as companies implement more AI, it creates those new obligations. What are your thoughts?


Pamela Isom  31:41

I think that's absolutely correct, and absolutely necessary. If I think back to my cybersecurity roots, one of the things that cybersecurity speaks about is supply chain risk management, and cybersecurity supply chain risk management. And what you're doing is trying to understand the suppliers, and the suppliers' suppliers. Where are the suppliers coming from? And that's the same for data. Where is the data originating from, and how has the data traveled? What's the lineage? So that's really important to know. And it's even more important to know from an AI perspective, because that's how you're going to get to authentication and authenticity. So, what I started out with: is that what I'm looking at today? If it started out as Pam Isom, and this is how you spell her name, is that how it's looking when I see the responses today through the AI? Or is it a different name, or a different characteristic? And if so, when did that change? Was there some history? Did she get married? You know what I mean; is there some history there that would cause the change, that makes it make sense? Because we're making decisions based on this type of data, it's really important to be able to verify that the information is accurate. Is the information what we are expecting? And if it's not, why not? People are making decisions; I'll use myself as an example: people are making decisions about me today, about my creditworthiness. And things have changed for me, because I left the Federal government, but that doesn't mean my integrity has changed. That doesn't mean that I'm a credit risk. Don't even try it, you know what I mean? But things have changed. And so you want to understand that metadata, as well as the data, which is where the lineage and the provenance come in.
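One way to picture the lineage both speakers describe is an append-only record that logs every change to a datum: who made it, when, and why, so a question like "when did Pam Isom become a different name, and was there a documented reason?" is answerable. This is a minimal sketch with hypothetical field names; real lineage tooling adds storage, signing, and access control.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """A datum, its origin, and an append-only log of its journey."""
    value: str
    source: str                                   # where the data originated
    history: list = field(default_factory=list)   # every change, in order

    def transform(self, new_value: str, actor: str, reason: str) -> None:
        """Record who changed the value, when, and why, then apply it."""
        self.history.append({
            "from": self.value, "to": new_value,
            "actor": actor, "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.value = new_value

record = LineageRecord(value="Pam Isom", source="HR onboarding form")
record.transform("Pamela Isom", actor="records-clerk", reason="legal name correction")

# Before a downstream system makes a decision on this value, it can ask:
# where did it come from, and does each change have a documented reason?
for step in record.history:
    print(step["from"], "->", step["to"], "|", step["reason"])
```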


Debbie Reynolds  33:49

I'll talk to you a bit about bias. This is one reason why I decided that I want to focus my attention on emerging tech: I'm very concerned about bias against people who look like me in digital systems. We know it is a fact that facial recognition systems, especially, are very poor at correctly identifying people of color, and things like that. I thought it was really interesting; I ended up at a dinner recently, and a couple of the women there were retirees and Black women. And they were very concerned about AI and the bias issues, and they knew a lot about it. It was really interesting; they had been reading the reports about electric cars with AI that can't identify Black people. And these reports are real, and they're really concerned about it. So what are your thoughts about biased AI?


Pamela Isom  34:42

If you want to fix it, become a part of it. I don't believe that you sit back on the sidelines and just gripe, because those electric cars are still going to happen. So I love the Executive Order that just came out. Not just because some of that is what I was doing when I was at the Department of Energy; that makes me feel good, because I feel like things have come full circle, because some of these things I was pushing for when I was over all things AI at the Department of Energy. But that's not why I like it. Why I like the Executive Order is because it points out the innovations, and it points out the responsibilities that we need to take, right? And one of the things it really articulates is the need for multidisciplinary, interdisciplinary feedback and inclusion. So again, my motto is AI and emerging tech for equity, opportunity, and sustainability. You want to be a part of the solution. You don't want to be one of those that's just sitting back, right? Because these types of vehicles, they're not going away. AI is not going away, right? So you want to be a part of the solution. How do you become a part of the solution? Get involved. Offer to be, in fact, insist that you're a part of the red teams, or that you're a part of this interdisciplinary group that is verifying, validating, and testing the outcomes. Because who can test for bias better than those with the lived experience? That's right, yes, common sense, right? So don't sit back on the sidelines, because here's what happens when we sit back on the sidelines: our data is not included. They'll just think we don't like driving. I'm using myself as an example: they'll assume Black people are not going to use driverless vehicles because we're not in the data. Or they're gonna make some assumptions about it: because we're not represented in the data, we must not like it. That's not true, right? That's not true. So I say become a part of the solution; figure out a way to become a part of the solution. I also think about being able to be in command of these autonomous systems; I do a lot of work around skills development. So going from the one that is scared, that I now have to share my workload with an AI, to thinking about it from the standpoint of, now I'm going to be the human in command. I mean, that's empowerment. And that's a job. They always talk about how jobs are being displaced, but there are two ways to address that, right? So, to your question about bias, so that I make sure I stay on track: I'm now in there, or those that are learning these skills are now there, to say, not so, because we're going to help test these outcomes and be a part of the bias detection team, to make sure that bias is not going to hit the market. So yes, there's displacement, but that's a job. That's a career, and it doesn't require a PhD. And we're a part of the solution. So that's really my point there. And then I just can't help but talk about equity. I keep thinking about how these are job opportunities and skills that we can develop and cultivate, and wow, we're addressing a mission, which is to make sure that AI is not amplifying disparities. That's why I like the Executive Order. Because it says that.


Debbie Reynolds  34:43

Yeah, I love the Executive Order. I'm using it a lot in some of my work as well, because I think they were able to hit the right balance and communicate it in a way that anybody can understand what the thought process is around AI. Brilliant. If it were the world according to you, Pamela, and we did everything that you said, what would be your wish for AI, privacy, and technology in the future? Whether it be human behavior, technological advancements, regulation, or anything?


Pamela Isom  39:08

Well, I want to see more regulations around privacy protections. I've been wanting to see more in that space, particularly because of AI. And then, I want to see our children learn more about how to use AI, but also how to protect themselves. So as a kid, and actually even today, I have had calls and emails. They weren't ransomware, but they could have been. So phishing, right? I received those, and we all have. Sometimes I've had individuals call me and say, okay, let's take a look at your browser. And I was like, why do you need to look at my browser? You told me that there's a payment due; I don't understand why you're sending me this message. Why are you wanting to look at my browser? And I know you are a scammer; I figured it out, right? So you learn not to do these things. But as a kid, I will never forget, I had a situation where someone called me who was supposed to be my mom. And they told me to go ahead and walk home from school today because of something: I'm not gonna be able to pick you up, so just go on ahead and walk home. I remember I called my Mom and said, Mom, do I have to walk home? I'm having fun where I'm at; can I just get a ride with bla bla bla? And my Mom wanted to know, what are you talking about? Right? What are you talking about? So I should have known that that wasn't my Mom. But I was a kid, right? A little kid. What are we doing for our children? They're getting these calls that sound like their loved ones, that sound like them. So start now with preparing our kids for these things. Preparing our kids: if an image pops up on their screen that looks like Mom, they should know how to tell that it's not really me, right? Because they don't know. Especially the young ones; they don't know. I mean, our kids are smart as all get out, so they probably do know; they probably can say, you're a deepfake, I know all about you. But be sure. So what I want to say is, I want to see more work in the cybersecurity and privacy route to help our kids. And I don't just want legislation; let's teach our kids. Let's have an Executive Order that the kids helped us come up with, you and me, so that way we know that they get it, or do something fun like that, but that's also very serious. That's what I want to see with AI, because I think distributed AI is here; it's gonna be on personal devices; it's not gonna go away; the autonomous vehicles are not going to go away. So let's protect each other, but let's help protect our kids.


Debbie Reynolds  42:15

I love this. You're such a joy and a thrill to be able to talk with, and wow, this is fantastic; thank you so much.


Pamela Isom  42:26

You're welcome. Oh, this was so good.


Debbie Reynolds  42:32

That's amazing. I'd love to keep in touch and maybe find some ways we can collaborate together. And whenever they say there aren't Black people in AI, I say, hey, call Pamela.


Pamela Isom  42:46

Yeah, have 'em call me, and we'll talk some more next time because there's still some more stewardship things that we need to focus on. But I think we've covered enough here today.


Debbie Reynolds  42:56

Absolutely. Well, thank you so much, and I really appreciate you being on the show.