"The Data Diva" Talks Privacy Podcast

The Data Diva E176 - Gopal Padinjaruveetil and Debbie Reynolds

March 19, 2024 Season 4 Episode 176
"The Data Diva" Talks Privacy Podcast
The Data Diva E176 - Gopal Padinjaruveetil and Debbie Reynolds
Show Notes Transcript

 

Debbie Reynolds, "The Data Diva" talks to Gopal Padinjaruveetil, Vice President, Chief Information Security Officer, AAA The Auto Club Group. Gopal discussed his thought leadership and career journey, his educational background in chemistry and law, his transition from operational technology to IT, and his eventual entry into cybersecurity. Debbie then transitioned the discussion to AI's influence on cybersecurity, seeking Gopal's insights. Gopal began to address AI's impact on cybersecurity, highlighting its significance in the evolving technological landscape. Gopal and Debbie engaged in a discussion about their perspectives on AI and its potential impact. Gopal expressed concerns about the potential for AI to enable harm and emphasized the need to consciously shape its impact, drawing parallels between the use of AI and the use of weapons. He highlighted the importance of managing AI to prevent negative consequences and stressed the need to address cyber safety, freedoms, and rights. Debbie contributed to the discussion by sharing an analogy that Gopal compares AI education to sex education, emphasizing the need to teach safe usage and shape behaviors for the benefit of society. Their conversation reflects a nuanced understanding of AI's complexities and the importance of addressing its potential impact. Gopal led a thought-provoking discussion on the evolving nature of privacy, particularly in the digital realm. He explored the paradox of willingly sharing data while expecting privacy, emphasizing the need to protect privacy in private spaces while acknowledging the lack of privacy in public spaces. Drawing on legal concepts, Gopal advocated for a balanced approach that protects privacy in private spaces while recognizing the role of governments in ensuring safety. The discussion prompted reflection on the trade-offs between privacy and security, ultimately highlighting the complexities of privacy in the modern age. Gopal and Debbie engaged in a heartfelt discussion about the significance of optimism and the challenges of upholding it in the face of adversity. They drew inspiration from historical figures like Martin Luther King and Gandhi, emphasizing the importance of pushing forward for positive changes and his hope for Data Privacy in the future.


 


SUMMARY KEYWORDS

people, AI, technology, talk, privacy, world, crime, deep fake, give, cybersecurity, law, society, Debbie, call, morally, company, precog, Gopal, human beings

SPEAKERS

Debbie Reynolds, Gopal Padinjaruveetil


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.


Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. Our very special guest on the show today, all the way from Detroit, Michigan, is Gopal Padinjaruveetil. He is the Vice President and Chief Information Security Officer at AAA, The Auto Club Group. Welcome.


Gopal Padinjaruveetil  00:51

Thank you, Debbie; it's an honor to talk to you. I'm a big fan of your show. I'm a big fan of your title, "The Data Diva". That's so fun.


Debbie Reynolds  01:00

Thank you, thank you. I don't think your title really encompasses all that you are and all the things that you do. I'm a big fan of you, your work, your thought leadership, and the things that you write. They're so deep; I feel like, in today's age, people want to do everything in bite-sized pieces, but you really get down into the details. So I'm always excited to see the things you write and how you bring in a human-life perspective, not just a data, privacy, or cyber perspective. I would love for you to share your background and the trajectory of your career, and how you got where you are now.


Gopal Padinjaruveetil  01:44

Yeah, I recently did another podcast called My First Job, where I said my education is in chemistry. Then I actually did law, which is helping me in privacy because I studied law in India. I'm a lawyer by education, but not a practicing lawyer; still, that gave me good insight into some of the concepts around privacy. Then I worked for the first 15 years in a petrochemical factory. Then the Internet bug bit me, so I moved from OT, operational technology, to IT. I came to the US in the late 90s and early 2000s through different jobs. There have been a lot of crossroads I've traveled, and I had to make certain decisions. I don't know whether it's destiny or something, but I think I've made some right choices. A lot of times, I tell people that I didn't choose cybersecurity; cybersecurity chose me. Like I said, there was no cybersecurity when I got into the workforce. It's just a series of events that led me to where I am today.


Debbie Reynolds  03:02

Very interesting, very interesting. I love the way you bring in real-life things when you talk about cybersecurity and security in general. I want your thoughts about something that happened recently, and we'll get to that, but first, tell me how AI is shaking things up in cybersecurity, in your view.


Gopal Padinjaruveetil  03:35

Just like any other thing, right? If you look at the world itself, there is good and bad. To expect that everything is good, that there's only goodness in the world, that this is a perfect world, is not a good way to think; it's delusionary thinking. If you're a spiritual person, if you believe in religion, you believe in God and the devil, right? And there are predators in the natural world. We have to realize that the world is not black and white; there are a million shades of gray, and there has always been a coexistence of evil and goodness. But some of these technologies are making it easier if you want to do harm. If you've decided to do harm, if you're a predator, technology as a tool is enabling you, the bad person, the threat actor. It's giving them risk-free avenues or easier ways to perpetrate crime or bad things. So, just like that, I see a huge potential to leverage AI, and it can make our lives better. But if we don't manage it, it is possible to create a dystopian world; it has the potential to cause more damage. It's exactly like the gun discussion, right? Weapons have been created; you can use them for goodness, but if you're not careful, you can use them for bad things. The tool itself is, I would say, benign; the tool itself does not do anything. It's the hands of the person using it that determine whether it's going to be used for a good cause or a bad cause. Like I said, I see huge potential for AI. But at the same time, looking at a lot of things, I'm deeply concerned and worried. I don't want to talk only about security; I want to talk about safety, cyber safety. I want to talk about our freedoms and our rights. Historically, we have been living in a world with problems of inequality, of empathy, of the environment; we have come through different eras. Whether it's slavery or things like that, we have seen the dark side of human beings. I'm worried that AI can give rise to more darkness than light if we don't talk about it, if we don't consciously shape where we want to take it. That's my introductory perspective on AI. I don't hate it, nor do I love it.


Debbie Reynolds  06:45

Right. Well, I want to dig into an analogy you made that made me chuckle, because I sort of say this as well. I'm not in the camp that thinks AI is wonderful and does everything perfectly, and I'm not in the camp that thinks it's terrible and awful, all that type of stuff. This is an example you gave that I thought was so funny: you were comparing education around AI to sex education, rather than just saying, don't touch AI apps.


Gopal Padinjaruveetil  07:18

I think we met in London last year, and they were asking, what's your perspective on AI? I said, whether we like it or not, AI is out there; people are going to use it. So I gave this analogy using teenage sex education. Coming from India, it was a culture shock for me when my son went into middle school and came back and told me about it, and I said, oh my gosh, that's not the age for you. Then I was having some conversations with people in education, and they said, you know what the reality is: we can say, or we can think, this is not happening, or this will not happen, but the reality is it will happen. You can't stop it. In such a world, it is better for us to give them the right education and the right guardrails so that they are aware of the dangers. Like I said, I talked to my two boys and said, you've got to be very careful, you have to be very respectful and treat people right. That is where sex education in schools starts: with the assumption that it's going to happen. You can't prevent it from happening; it will happen. Now, let's talk about how to do it safely. I said that's exactly how I see AI. A lot of companies are banning it: oh, no, you can't use it. But I'm telling you, it is being used; we have the data, people will use it, people are using it. You can't stop something like this from happening. So I said, what we really need is a digital condom: how do you use this with safety in your mind and privacy in your mind? There are some concepts that sex education can teach us about AI. That was the analogy I gave; some people thought it was a great idea, and some hated it. Because you can't stop it, but we can shape the behaviors, and we can teach people to use this in a safe way for the benefit of society. At some point in this conversation, we might get to Minority Report, but first we'll have to talk about how the role of privacy is changing. My belief is that digital technologies present a paradox: we want to connect with people, we want to do all those things, so we are willingly giving away lots of data. At the same time, we expect privacy. There's been a huge discussion around the definition of privacy. A very simple, good definition is that I have a right to be left alone if I want to be left alone. In a digital world, that is very difficult to achieve. If you look at the physical world, my personal opinion is that we have already lost the idea of privacy in public spaces. If you're in the middle of Times Square, you can't expect privacy there; but in your bedroom, in your private space, you can. There is this concept of public spaces and private spaces in law; I'm using my law education here. You should be allowed to have privacy in your private space; what you do in your home, in your private space, should be protected. There should not be an intrusion into your private spaces. But expecting privacy in a public space is wishful thinking, let me put it that way. What I'm worried about is that technology is intruding into our private space. So I'm of the opinion that public spaces are public spaces; don't expect privacy. The Internet is a public space.
The Internet is like a big Times Square, so don't expect privacy there. But when you're living in your home, even a connected home, when you are in your private space, it's your right to have privacy. So let's protect privacy in our private spaces, in our homes. That is the approach I have always talked about. Safety is another important concept. The role of government is to protect me, my life, and my property. I expect my government to keep me safe and, from that perspective, to keep crimes from happening. They have to start looking; you see patrolling police, and they will stop you if they have cause. This kind of surveillance has always been there, and it is with the intention of making us citizens safe and protecting our property. We should welcome that, because we want to be safe, and we want to have freedom and all those things. If we have an expectation of safety, we should allow them to protect us. That means giving up some of the freedoms, the autonomy, or the sovereignty that we have. So let's go all in to protect privacy in our private spaces. Does that make sense, Debbie?


Debbie Reynolds  13:32

Yeah, yeah, it does, it does. A good example would be Rite Aid, a company that got in trouble for using facial recognition in their stores. I think most people expect that if you go into a store, you're going to be recorded by their cameras; they're trying to make sure you're not stealing things, stuff like that. But the place where this company really went off the rails is that they started trying to go outside the four walls of the store and figure out who these people were, surveilling people outside of that space, misidentifying people, getting people arrested because they thought they looked like someone else. To me, that's a problem, because you're going outside of the initial purpose, which was to secure the people in the store and the property in the store.


Gopal Padinjaruveetil  14:25

That is where we have to talk about ethics. Every corporation works in the interest of its stakeholders, the people who have made investments, and that's the fundamental principle of capitalism: people who make investments should get a good return. I'm 100% for that. But at the same time, there is a line the corporation should not cross when it comes to ethics, morality, and bias, whether it's gender bias or racial bias. It's a leadership issue. If somebody says the leadership did not know about it, that's a very bad thing; I'm sure these things are done with the approval of the leadership. That is where, as leaders, we have to be very careful in drawing a line. While I want to protect the store, protect the revenue of the store, protect the property of the store, and stop stealing and shoplifting, you have to be just, you have to be morally right in doing the right things, and you have to treat everybody by the same scale. You can't say, just because someone looks like this, and make an assumption when you don't have any evidence. We have to treat everybody with the same respect, irrespective of their color, gender, race, creed, whatever you want to call it. That message, to me, has to come from the CEOs at the top: this will not be tolerated. There will be pressure to do certain things, but if the message from the top is that this kind of behavior will not be tolerated, I think these stores would not have done it. So to me, the whole Rite Aid issue is a moral bankruptcy issue. I think the leaders were morally bankrupt; I'm sorry for saying this, but they had to know, because I've read the reports. They had to know, and they did not stop it. That's the problematic issue in AI and all these things: are you giving the right direction? Are you setting expectations? Are you allowing people to do bad things? Do you know if they're doing bad things? As a leader, you should clearly set the expectation that these kinds of behaviors will not be acceptable in the organization. That's where the wheels came off the train, I believe.


Debbie Reynolds  17:15

I agree with that. Thank you for that. Morally bankrupt; I'm going to write that down. I think that's true. Now I want your thoughts about an incident that happened; I guess it set the Internet on fire around AI deep fakes. Apparently, someone at a company had a video call with someone they thought was their boss, and they transferred money out of the company to someone else who was actually a fraudster. I think the company lost the equivalent of $20 million. These types of stories annoy me most because when people talk about them, they talk mainly about AI and deep fakes; they're thinking of it in terms of a cybersecurity failure. But to me, it's more of a leadership failure within a company. What are your thoughts?


Gopal Padinjaruveetil  18:10

I think deep fakes have to be discussed in two separate buckets. I was really angry about what happened to Taylor Swift: they used deep fakes to create inappropriate pictures and videos of her that were circulating on social media. There were justifications; people were saying, oh, that's not her, that's a virtual image of her, so I feel okay with it. My position on this is no: that, again, is being morally bankrupt at that point in time. Yes, it is not her, you know it's not her, but you're justifying that because it's not her, it is okay to watch or spread that kind of deep fake video of celebrities. That's a question of individual and societal morality, and we should really come down on people: you have crossed the limits of your personal freedom just because you can do this. That's one lens of the evil happening because of deep fakes. The second one is this whole concept of fraud, like the case where there was a video meeting with a deep-faked CFO who was urgently asking for money to be transferred, and they lost around $20 million US dollars; I think this happened in Hong Kong. The problem here, Debbie, is that it is so easy to create these things. You have free, open-source software, and you have so many YouTube videos that teach you how to do it. All somebody needs is that you and I talk in public spaces. Your voice is out there in every podcast; you have been a public speaker, I'm a public speaker, and there are videos of us. All somebody needs to take is a few hours of video and a few hours of our voices, and they can create an almost perfect deep fake. It is possible. It's so easy, and that's what's worrying me: the entry barriers are so low. As children, we have all played pranks; kids do pranks for fun. If you want to try these things as a prank, just for fun, that is one thing. But doing this to commit financial fraud, or to fleece or blackmail people, or to do all the bad things, we are now entering a very, very dark area. Think about this: if we live in a world where we cannot figure out what is true, everything becomes believable. That is what the Turing test is about. Turing said: put a person in one room and a computer in another room, where the two don't know each other and can't see each other, and ask questions of both. If you're not able to determine whether it was the machine or the human that gave the answer, that's the Turing test. So the Turing test is about believability: can you believe that the machine can talk like a human being? Absolutely, yes; we have come to a situation where you can make a video or an audio as believable as the real me. But the risk is, where is the truth? As humans, we need to know: what is the lie, what is the truth? The concept of truth is slowly dissipating with technologies like this. So I see these technologies as truth killers. They're killing the truth. How can we live in a society where we don't know if something is true, or if it is a lie or fake? How can we operate? We are born to trust; we trust people, we trust others. But if we can't trust anything, we can't operate in the world.
It's a sad day for humanity when we don't know what is true and what is false, and this kind of technology is lending ways to dissipate truth. Then you have the question of intellectual property; we are entering into that problem too. There are a lot of fundamental human problems that these kinds of technologies are bringing to the surface. Like I said at the start, evil has always been there in the world; there have always been good people and bad people. These kinds of technologies are giving new ways, new tools, to perpetrate a crime, to take away your privacy, to take away your property, to take away your dignity. That's the way I look at it. Like I said, somebody said, oh, we can do this because it's physics. Look at sound, look at images: it is physics; wavelength, pitch, amplitude, and tone are all physics. You can actually put my voice into an analyzer, analyze it, and recreate it digitally. It's easy. But just because you can do it, my question is, should you do it? Where do I stop? I'm looking for one use case where a deep fake is going to save 100 lives, or 10 lives. I have still yet to find a use case where a deep fake can really be of help to humanity. Then why use it? Why proliferate this kind of thing when we know the harm? If you put it on a balance and do a trade-off, a little bit of bad alongside the goodness is okay; but when the evil nature of this is so predominant, that is where I get a little tired, I get frustrated, I get angry, I go through all kinds of emotions. People like you and me are technologically savvy. But think about my mother; she came from a different world. She was never technologically savvy, but she uses a phone, and it's so easy to cause harm to people like that, parents or grandparents or even children. There are so many people who don't know about these things or how to protect themselves. You and I know about the fakes, but my mother, my grandmother, my uncles really don't. They grew up in an era where they really trusted human beings: they trusted their neighbors, they trusted the community they lived in, they trusted their friends, they trusted their relatives. How do they change? We can't ask them to change overnight. That's my true worry, Debbie: how do we protect the vulnerable people in society who don't know about these kinds of technologies? Like I said, we all use phones; we have FaceTime, or we get a voice call. How do you know who it really is? I saw a video, a C-SPAN thing, of a lawyer speaking at the US Congress; he had gotten a call saying his son had been in an accident. He said he went to the law enforcement agencies; he went to the police department in Philadelphia and talked about this deep fake incident that happened to him. The police said, we can't do anything because no crime was committed; he didn't fall for it, so no crime was committed. He went to the FBI.
They said there is no way to attribute it, because of cryptocurrency and anonymity, and that's another security problem: we are not able to attribute crime. Deterrence is a big thing. People don't commit crime because they know that if they get caught, they will go to jail; if you're speeding and get caught by the police, you will have to pay a fine. Deterrence is how we have tried to prevent crime. But in cybersecurity, deterrence is a big problem, and law enforcement is struggling. So that is where I think we have to talk about the future of crime: how future crime is going to be perpetrated, and how the laws of society, law enforcement, and the judicial systems are going to acquit themselves to prevent the kind of vicious, unknown, unseen crimes that are going to come out of this kind of technology.
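To make concrete Gopal's point above that recorded sound is "just physics": any waveform can be decomposed into measurable physical properties like pitch and amplitude. Below is a minimal, hypothetical Python sketch, assuming only NumPy, that uses a synthetic 220 Hz tone as a stand-in for a voice recording; real voice-cloning pipelines are far more complex, but they begin with exactly this kind of analysis.

    import numpy as np

    # A synthetic one-second "recording": a 220 Hz tone at amplitude 0.5,
    # standing in for a voice sample (illustrative, not from the episode).
    sample_rate = 16_000                                  # samples per second
    t = np.arange(0, 1.0, 1 / sample_rate)                # one second of time points
    signal = 0.5 * np.sin(2 * np.pi * 220 * t)

    # Fourier transform: move from the time domain to the frequency domain.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

    # The spectral peak recovers the "physics" of the signal:
    # its dominant frequency (pitch) and its amplitude.
    peak = np.argmax(np.abs(spectrum))
    pitch = freqs[peak]                                   # ~220 Hz
    amplitude = 2 * np.abs(spectrum[peak]) / len(signal)  # ~0.5

    print(f"dominant pitch: {pitch:.1f} Hz, amplitude: {amplitude:.2f}")

Once a voice is reduced to measurable quantities like these, it can in principle be resynthesized, which is exactly the capability Gopal is questioning.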


Debbie Reynolds  28:16

I agree with that. Actually, there's a law that was updated in the State of Illinois in 2024 around deep fakes. The loophole you're talking about is that because it was not an actual thing that happened, because it was fake, the law as previously written couldn't handle it. But what this law in Illinois has done, which I think is very unique, is say that if you're depicting someone in a way where you're trying to convince people that the depiction is real, that is a crime, and there are civil and criminal penalties attached. I'm hoping that other States, and maybe the Federal government, will adopt something similar, because I think you're right: the threat of deep fakes is very frightening. One of the things I have noticed is that, especially in the commission of crimes or deception, these deep fakes are being used as another level of social engineering, trying to get people to do something they would not ordinarily have done.


Gopal Padinjaruveetil  29:27

Yes, and like I said, we have not figured it out, because these technologies are accelerating at a pace faster than we can handle. Just imagine the velocity at which these technologies are coming at us. We still, as a society, have not figured out the right way of handling this. The European AI law says that if technology is used to capture your biometrics, your emotions, or your expressions, that use is banned. Maybe that's a good place to start: these kinds of technologies should be banned from usage. A ban won't stop it by itself. But generally, I would say that 90 to 95% of human beings are well-intentioned, good people, and they will not break the law; if there is a law, they will not break it. Maybe 4 to 5% will try to play around with it, and there will be really bad people who will use it no matter what. But I think laws help for the majority of people, because humans want to be law-abiding citizens. That's my belief: the majority of human beings want to be law-abiding citizens. It's only a small portion of society, but please don't get me wrong, one smart bad person is enough to destroy the world. We did not need 1,000 Hitlers to destroy the planet; we needed only one Hitler. It's a small percentage of people committing these crimes, but they can cause huge damage. That is where, like I was saying, there was a wonderful interview I saw with a law professor talking about the future of law and whether there is a way AI can be used. Have you seen the movie Minority Report?


Debbie Reynolds  31:28

Oh, yes, yes.


Gopal Padinjaruveetil  31:30

So if there is a way AI can be that precog that predicts a crime before it is committed and stops that crime from happening, should it? I think the question was, are the laws ready to have AI do certain things to prevent crimes like this? The answer is yes, it is quite possible for us to create a digital precog. But that comes with a huge cost: we may have to give up some privacy rights to surveillance. We may have to give that up. I don't know; I'm also struggling with whether that is okay or not okay. I think we will come to a situation where we have to give up some more privacy. Is there potential for misuse of this kind of technology by bad governments? Absolutely, yes. That is where, for our corporations or anything else, I'm going back to that phrase, moral bankruptcy: we need to have leaders who are moral and ethical. That's all we need, a group of leaders who are not morally bankrupt. Because if, whether in a political ecosystem, an organizational ecosystem, or in our societies, we choose leaders who are morally bankrupt, we are doomed, Debbie. There's nothing you and I can do; we are doomed. It's a double-edged sword. When I think about whether AI can be used to prevent AI crimes, I think it is possible. But there's a huge cost that comes with it: we have to give up some of our privacy, and we have to make sure these things cannot be misused and abused. In security and privacy, we are very familiar with putting controls in place to prevent these things. So those are the kinds of things I think about as a CISO: where we should apply controls, and what kind of technical controls, administrative controls, or process controls, and what new types of controls should exist. How we design controls for a new world of AI and a new world of privacy is something I constantly think about.


Debbie Reynolds  34:23

Yeah, that's a deep question, and it's a deep path to navigate. There are so many different angles and ways to go at it, so it's an interesting problem for the future, and I know we'll be very busy working on it. If it were the world according to you, Gopal, and we did everything you said, what would be your wish for privacy, whether for regulation, technology, or human behavior?


Gopal Padinjaruveetil  34:54

My wish, honestly: I think the world has been unfair to a significant percentage of the population, okay? I don't want to talk about inequalities, but I do want to talk about inequities. We want to make life more equitable; we want to make sure everybody has a fair chance and is treated fairly. That's my wish, and I think technology can be an enabler. For example, digital technology: if you go to India, you will see that a good digital identity can bring social inclusion. Marginalized people, people who could not be part of the mainstream, are now being brought into the mainstream. Once social inclusion happens, what we need is financial inclusion: people need to be able to get loans, to get what they need. I want my son to go and study in college, or I want my daughter to do this, and we should be able to, from a financial perspective. My hope is that these technologies will lower the cost of many of these things so they can be affordable not only for rich people but for everybody. Let's make sure that technology brings equity to the world and brings new opportunities to people who never had opportunities. Let's see if we can use technology to bring hope to people; that's my wish. Let's see if we can bring hope to all the people who never had that opportunity, who were never given an opportunity to hope for a better future. I think it can be done; I'm sure this kind of technology can do it. As I said, if you go to countries like India, you can see this kind of technology at work. It's not a perfect world; these systems are not perfect, and that's the thing, right? Let's not look for perfection. But let's look at how we can use these technologies to, like I said, bring hope and inspiration to people who never had that.


Debbie Reynolds  37:25

Thank you. That's amazing. Thank you so much, Gopal, I have the same wish. I'm hoping that instead of using technology to bring forth more dystopia, we use it to bring forth more prosperity and more equity for people.


Gopal Padinjaruveetil  37:44

It's very difficult to be an optimist. I have been an optimist, but it's becoming more and more difficult. I keep saying, no, no, we are better than this; we can get over this, we will conquer this. People like Martin Luther King, Gandhi, and Nelson Mandela have been an inspiration to many of us, and that's why I want to keep that optimism. But every day, I have to constantly remind myself not to fall into pessimism, because there are lots of bad things we see. I've seen them, and I'm not going to fall into that pessimistic deep well; I'm going to raise myself up and be optimistic.


Debbie Reynolds  38:31

Push forward for positive change. Yes.


Gopal Padinjaruveetil  38:34

I still believe in humanity. Let me put it that way.


Debbie Reynolds  38:40

I agree with that. Well, thank you so much for being on the show. It's been a pleasure; it was so much fun to chat with you today and get your thoughts. I highly recommend that people follow you on LinkedIn; you write some really astonishing, very thought-provoking things. I love your writing. Thank you.


Gopal Padinjaruveetil  38:59

Thank you. That's a compliment coming from you. It's an honor, Debbie, because I know that you're an influencer on LinkedIn and in your sphere of data. You are an influencer and an inspiration. Thank you for those kind words.


Debbie Reynolds  39:18

Well, I look forward to chatting with you more on LinkedIn and seeing more of your writing, and hopefully, in the future, we'll be able to collaborate. That would be great.


Gopal Padinjaruveetil  39:27

Awesome, thank you.


Debbie Reynolds  39:29

Thank you so much. Talk to you soon.