"The Data Diva" Talks Privacy Podcast
The Debbie Reynolds “The Data Diva” Talks Privacy Podcast features thought-provoking discussions with global leaders on the most pressing data privacy challenges facing businesses today. Each episode explores emerging technologies, international laws and regulations, data ethics, individual rights, and the future of privacy in a rapidly evolving digital world.
With listeners in more than 130 countries and 2,900 cities, the podcast delivers valuable insights for executives, technologists, regulators, and anyone navigating the global data privacy landscape.
Global Reach and Rankings
- Ranked in the Top 2% of 4.6 million podcasts worldwide
- Top 5% of 3 million+ podcasts globally (2024) – ListenNotes
- More than 850,000 downloads worldwide
- Top 5% in weekly podcast downloads (2024) – The Podcast Host
- Top 50 peak in Business and Management (2024) – Apple Podcasts
Recognition and Awards
- #1 Data Privacy Podcast Worldwide 2024 – Privacy Plan
- The 10 Best Data Privacy Podcasts in the Digital Space 2024 – bCast
- Best Data Privacy Podcasts 2024 – Player FM
- Best Data Privacy Podcasts – Top Shows of 2024 – Goodpods
- Best Privacy and Data Protection Podcasts 2024 – Termageddon
- Top 40 Data Security Podcasts You Must Follow 2024 – Feedspot
- #1 Global Data Privacy Podcast (2021, 2022, 2023)
- Community Champion Award – Privacy First Awards, Transcend (2024)
- 20 Best Data Rights Podcasts – Threat Technology Magazine (2021)
Audience Demographics
- 34% Data Privacy decision-makers (CXO level)
- 24% Cybersecurity decision-makers (CXO level)
- 19% Privacy Tech and Emerging Tech companies
- 17% Investor Groups (Private Equity, Venture Capital, etc.)
- 6% Media, Press, Regulators, and Academics
Engagement and Reach
- 1,000–1,500 average weekly downloads
- 5,000–11,500 average monthly LinkedIn impressions
- More than 14,000 subscribers to the Data Privacy Advantage newsletter
Sponsor Impact
- 4 podcast sponsors secured funding within 12 months of featuring
- $25 million average funding raised per sponsor
- 3 average new enterprise customer sales per sponsor within 6 months
About Debbie Reynolds
Debbie Reynolds, “The Data Diva,” is a globally recognized authority on Data Privacy and Emerging Technology. With more than 20 years of experience, she advises organizations across industries including AdTech, FinTech, EdTech, Biometrics, IoT, AI, Smart Manufacturing, and Privacy Tech. As CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC, she combines technical expertise, business strategy, and global regulatory insight to help organizations retain value, reduce risk, and increase revenue.
Learn more: https://www.debbiereynoldsconsulting.com/
"The Data Diva" Talks Privacy Podcast
The Data Diva E264 - Brintha Shanmugalingam and Debbie Reynolds
In Episode 264 of The Data Diva Talks Privacy Podcast, Debbie Reynolds, The Data Diva, talks with Brintha Shanmugalingam, Data Governance Expert at Capgemini, about how organizations can reduce privacy risk and unlock innovation by managing data with more context, precision, and intelligence. They explore why traditional governance often restricts value by imposing blanket prohibitions, and how granular, attribute-level stewardship enables safe data usage without unnecessary barriers. Brintha explains how ontological modeling and knowledge graphs help maintain meaning, purpose, and control throughout the data life cycle, even as information moves across borders and functions.
Debbie and Brintha examine the growing importance of aligning privacy, compliance, security, business value, and technical feasibility to establish governance systems that empower rather than block decision-makers. They discuss how identifying the specific sensitivity of each data element can prevent misuse while accelerating lawful sharing and innovation in areas like AI and cross-regional analytics. The conversation also highlights the misconceptions organizations have about risk and why binary thinking about data exposure leads to lost opportunities.
Listeners will learn practical insights for improving data confidence and accountability, including understanding contextual use, designing protections that evolve with business needs, and ensuring safeguards are embedded where work actually happens. This episode encourages leaders to rethink governance as a strategic capability that creates agility, trust, and measurable outcomes when executed with smarter structure and deeper understanding.
Become an insider: join Data Diva Confidential for data strategy and data privacy insights delivered to your inbox.
💡 Receive expert briefings, practical guidance, and exclusive resources designed for leaders shaping the future of data and AI.
👉 Join here: http://bit.ly/3Jb8S5p
Debbie Reynolds Consulting, LLC
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:13] Hello, my name is Debbie Reynolds. They call me The Data Diva. This is The Data Diva Talks Privacy Podcast, where we discuss data privacy issues with industry leaders around the world, with information that businesses need to know.
[00:26] Now I have a very special guest on the show all the way from Sweden, Brintha Shanmugalingam,
[00:33] who is a data governance expert at Capgemini. Welcome.
[00:38] Brintha Shanmugalingam: Thank you, Debbie.
[00:41] Debbie, thank you for inviting me. I have to say I have been a fan of your work for a while.
[00:49] You bring such energy and inspiration to the privacy and the security space.
[00:56] And I honestly feel the Data Diva really suits you because you seem to be everywhere. From your past shows it feels like there isn't a corner of the world you haven't reached.
[01:10] So you have engaged with data security and compliance professionals across many regions and industries.
[01:18] That cognitive diversity at heart is what makes this podcast very unique and omnipresent. So it goes with Diva.
[01:27] I'm really excited to be part of the conversation.
[01:31] Debbie Reynolds: Oh, that's so sweet. Oh, I'm so excited. I'm so excited.
[01:35] We've been connected for many years on LinkedIn, and I'm always impressed with the things that you talk about and the type of engagement that you have with people, because I really love to hear from practitioners out there in the field. Having voices that talk about real things that organizations deal with, in real ways, is very important.
[02:05] But tell me how you became a data governance expert at Capgemini.
[02:11] Brintha Shanmugalingam: It's a long story.
[02:14] I actually started in data visualization area,
[02:20] basically turning complex numbers into visuals that tell stories that people can understand.
[02:29] But curiosity kept pushing me. I wanted to explore how the pipelines work, exactly how I am getting the data that I'm working with.
[02:41] I went into the data engineering side, but that wasn't enough for me. So I went into data science, where I wanted to model and predict. But somehow I think my meticulous way of handling things landed me in data governance.
[03:01] I also realized something very important.
[03:05] Data's real value only shows up when it is managed ethically and responsibly.
[03:14] But again, I felt it was also getting me into surface-level governance.
[03:22] The kind that just says, stop, you can't use this data. That doesn't really work. And I felt like it was blocking innovation instead of guiding it.
[03:34] So that's why I went deeper into information modeling, ontology development, and knowledge graphs.
[03:42] These tools, I would say, give governance context.
[03:48] They let me understand the relationships and the meaning,
[03:52] not just the labels. So I was able to make contextualized decisions when it comes to governance.
[04:01] So governance stops being a roadblock and becomes a framework for making exceptions.
[04:10] Instead of saying no, I can say, here is how this attribute can be used safely under GDPR or the EU AI Act, and here is where it must stay local.
[04:27] And that is what led me to develop, you know, a cross-border data shareability framework, with attribute-level classification as the foundation and attribute-level access control, so that you protect what is sensitive while still letting the rest of the data flow.
[04:52] So for example, in a data set you may find 20 columns. Out of that, 15 columns are shareable. Only five columns may be super sensitive. But that may not be needed for innovation.
[05:07] It could be just,
[05:10] you know, something like
[05:11] demographic data in those five fields. So in such cases, we can anonymize those columns and still share the data to continue the innovation.
[05:24] So that's why I went for attribute-level classification for cross-border data sharing.
[05:32] So that is the area I mainly focus on now.
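To make that concrete, here is a minimal sketch, in Python, of the attribute-level classification and selective anonymization Brintha describes: classify each column, pseudonymize the sensitive ones, and let the rest flow. The column names, the labels, and the hashing choice are illustrative assumptions, not her actual framework.

```python
# Hypothetical attribute-level classification for one dataset.
# "sensitive" columns are pseudonymized before cross-border sharing;
# everything else is shared as-is.
import hashlib

import pandas as pd

CLASSIFICATION = {
    "order_id":      "shareable",
    "product":       "shareable",
    "order_value":   "shareable",
    "country":       "shareable",
    "customer_name": "sensitive",  # must stay local or be anonymized
    "date_of_birth": "sensitive",
}

def pseudonymize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]

def prepare_for_sharing(df: pd.DataFrame) -> pd.DataFrame:
    """Pseudonymize sensitive columns; leave shareable columns intact."""
    shared = df.copy()
    for column, label in CLASSIFICATION.items():
        if label == "sensitive" and column in shared.columns:
            shared[column] = shared[column].astype(str).map(pseudonymize)
    return shared
```

The point is the granularity: instead of blocking the whole data set, only the sensitive columns are protected, and the other columns keep supporting the innovation.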
[05:36] Debbie Reynolds: Well, that's fascinating. So I love when you were talking about ontologies,
[05:41] that really fascinates me because I feel the same way you do where sometimes when people think about data and data sharing, they're thinking you just can't share it. Right?
[05:53] And so thinking about, like as you said, if there's a data set that has certain fields that can't be shared and certain ones that can be shared,
[06:04] I agree that a lot of that shareability is based on context, right? Because I guess whoever initially put the data set together, they put it together for a particular purpose.
[06:16] But when that purpose may change or it has to transmit or be sent somewhere else, you have to really think about that. But talk a little bit more about this.
[06:25] This is fascinating because a lot of people think about data as, I don't know,
[06:34] like blocks that are in a box, right? So you either take the blocks out of the box or you put them in the box and data just isn't that way.
[06:41] And so I want to talk a little bit more about,
[06:44] you know, the types of things you think about or people should be thinking about in terms of shareability of data.
[06:52] Brintha Shanmugalingam: Definitely. Thank you for leading me into that. But before I tackle that specific question,
[06:59] I think I wanted to tell you how I approach data.
[07:04] So I call it a balanced scorecard mindset. In a business,
[07:10] balanced scorecard doesn't just look at revenue, it looks at customer satisfaction, internal process learning and growth.
[07:20] And if you only measure revenue, you miss the full picture.
[07:25] Data is the same, you know. If you only look at compliance, you miss the innovation. That's how I feel, or that's what experience taught me. And if you only look at security, you miss agility.
[07:38] So a balanced scorecard in data means weighing all perspectives at once. So the privacy, compliance,
[07:46] ethics,
[07:47] business value,
[07:49] even technical feasibility,
[07:51] that is what prepared me for AI governance, because AI requires that balance at much higher stakes than how we handle data in a data mesh concept, for example.
[08:06] But again, I somehow always felt that traditional governance is a bit like a stop sign.
[08:16] It's always saying, no, you can't use this data set. And sure, that keeps you safe, but it blocks innovation.
[08:25] So with that attribute-level classification, it's more like very sharp,
[08:32] very detailed. But instead of blocking the whole data set, you break it down attribute by attribute. That means column by column. Then you apply attribute-based access controls.
[08:45] Then you map them to the associated GDPR articles and EU AI Act articles, so that you are providing cautions: like, you know, when you are tackling this attribute,
[09:03] be careful, because there is a regulatory requirement there. That doesn't mean you cannot use it at all, because they tell you how you can use it, and when to use it, and when not to use it.
[09:16] So the articles don't really restrict you from using them, but they give you a caution, so you need to handle them more meticulously.
[09:27] And that type of guidance we can provide with ontology and, you know, knowledge graphs, because I'm mapping the attributes to articles. So for example,
[09:41] let's say I am dealing with an attribute called age, and, I believe it's GDPR Article 8, if it's someone under 18,
[09:51] you are not allowed to do secondary processing of that data.
[09:57] So this is something where we can write, like, validation rules, technical or legal transformation rules, for that data. So if you find an age under 18,
[10:10] please do not use it for secondary usage.
[10:14] So that's why, for me, having governance at the granular level helps us a lot, because you can still create momentum.
[10:26] So I would say that in most of my work, like 80% of the data we have in data sets can be shared; maybe 20% really needs to be regulated.
[10:40] So it is governance that protects, but at the same time it becomes the enabler, when you have attribute-level classifications.
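As a concrete illustration, here is a hedged sketch of that attribute-to-article mapping and the under-18 validation rule. The mapping, the field names, and the rule itself are illustrative assumptions drawn from the example above, not legal guidance.

```python
# Map attributes to the regulatory references that caution their use
# (hypothetical mapping for illustration).
from dataclasses import dataclass

ATTRIBUTE_ARTICLES = {
    "age":           ["GDPR Art. 8"],
    "health_status": ["GDPR Art. 9"],
    "email":         ["GDPR Art. 6"],
}

@dataclass
class Record:
    age: int
    purpose: str  # e.g. "primary" or "secondary"

def validate(record: Record) -> list[str]:
    """Return cautions raised by the rules; an empty list means no caution."""
    cautions = []
    if record.age < 18 and record.purpose == "secondary":
        cautions.append(
            "age < 18: do not use for secondary processing (see GDPR Art. 8)"
        )
    return cautions

print(validate(Record(age=16, purpose="secondary")))
# -> ['age < 18: do not use for secondary processing (see GDPR Art. 8)']
```

Note that the rule cautions rather than blocks: the same record can still flow for its primary purpose.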
[10:51] Debbie Reynolds: I think one very important point that you just made, and a lot of people really need to understand, is that all data,
[10:59] regardless of regulation, needs governance. Right? But certain data needs special additional detail. And that's where we get into privacy and security and all these other like the business side and stuff like that.
[11:13] And it can't be all or nothing and it can't be like one trumps the other,
[11:18] so to speak.
[11:19] So you have to do that balance. But you had talked, obviously you're a data person. I'm a deep geeky data person as well.
[11:29] But one term that you mentioned, I want you to be able to explain to our listeners: what is a knowledge graph?
[11:39] Brintha Shanmugalingam: A knowledge graph is like understanding the relationships of an entity,
[11:50] and which business capability that entity falls under,
[11:56] which capability it's enabling, and how it is enabling it.
[12:02] So for example,
[12:03] let's say campaign.
[12:05] It is very important for market intelligence. It is important for marketing, and it is also important for sales. But how all three teams develop the concept around the campaign is different in each function.
[12:25] The sensitivity level of the campaign data can also be different.
[12:32] Let's say in the early stage of a customer journey,
[12:38] we may see an audience.
[12:41] We want to see how many clicks that campaign had, for example,
[12:46] for that we don't really need to see specific details of that person and we can still make sense of the efficiency of that campaign.
[13:00] However, when the campaign data is shared with sales, that means that it has had many digital touch points.
[13:11] There is the possibility we may know the person who interacted with a campaign.
[13:19] So we may get their demographic data. For example,
[13:23] in such a case, we will be connecting that campaign activity data to the demographic data or person affiliation data and so on.
[13:35] Now the sensitivity level is different, and it should be.
[13:40] The audience of the data should be limited.
[13:43] Do you see? This is how I use knowledge graphs.
[13:48] I make sure the knowledge graphs are not only telling me the process events within a capability,
[13:57] they are also telling me which process events we need to meticulously handle.
[14:04] So I hope that I answered your question. But not everyone uses knowledge graphs the way that I would use them.
[14:15] So I always say to maximize the utilization of a technique,
[14:22] you really need to have industry-specific knowledge. Just like to implement governance, or operationalize governance, you also need to understand the industry well. You need to understand exactly how the data flows, all the process events, and everything.
[14:46] Then, I say, everything from policies to standards to procedures needs to be understood in the context of the business operation, in order for me to write the data governance guidelines or sacred document.
[15:00] So that's how I work and these knowledge graphs come into place because with the click of a search,
[15:08] I could clearly see if I identify a data point,
[15:12] I see its connection to all the concepts and all the functional teams it is going to have an impact on, and, for a specific domain, exactly how it needs to be handled.
[15:26] So that is how I have been using it.
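One way to picture that lookup: a minimal knowledge-graph sketch using networkx, built around the campaign example from the conversation. The nodes, edge labels, and regulatory tag are illustrative assumptions, not a real governance graph.

```python
# A toy governance knowledge graph: attributes connect to business
# concepts, the teams they support, and the rules that caution them.
import networkx as nx

g = nx.DiGraph()
g.add_edge("campaign_clicks", "Campaign", relation="attribute_of")
g.add_edge("customer_demographics", "Campaign", relation="joined_in_sales")
g.add_edge("Campaign", "Market Intelligence", relation="supports")
g.add_edge("Campaign", "Marketing", relation="supports")
g.add_edge("Campaign", "Sales", relation="supports")
g.add_edge("customer_demographics", "GDPR Art. 6", relation="regulated_by")

def impact_of(data_point: str) -> list[str]:
    """With one query, everything a data point touches: concepts, teams, rules."""
    return sorted(nx.descendants(g, data_point))

print(impact_of("customer_demographics"))
# -> ['Campaign', 'GDPR Art. 6', 'Market Intelligence', 'Marketing', 'Sales']
```

This is the "click of a search" idea: identify a data point and immediately see the concepts, functional teams, and regulatory requirements connected to it.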
[15:30] Debbie Reynolds: That's a very sophisticated way to do it, because I think, when people think about a knowledge graph, that's a very sophisticated way to look at data.
[15:42] Sometimes people look at it like, okay, well, we have this data and it came in this way. But then they don't track it throughout the organization about how it's used and the different people who use it.
[15:53] And then the context gets lost,
[15:57] right in terms of, you know, why do we have it, why do we collect it, who should be using it.
[16:03] And I feel like that's the area where companies have the most challenges because the data, it gets duplicated, it gets moved around, it gets changed. People forget what they have it for.
[16:15] Someone else finds it and uses it.
[16:17] So thinking about data from the full life cycle and in your method with the knowledge graph, that really helps you do that.
[16:25] But I hope more people try to think about it that way, because that's where I feel like companies have some of their biggest risks. What are your thoughts?
[16:35] Brintha Shanmugalingam: It is such a choice. And I also wish the same, that when we have these outstanding technologies, we also utilize them as a tool for governance as well. I will tell you how I learned this lesson.
[16:53] It's pretty embarrassing, but it's a real story. And this is what gave me this idea of developing knowledge graphs with, I would say, regulatory controls, connecting entities and attributes and so on.
[17:10] When the digital twin was so much of a hype,
[17:15] nobody talks about it now, but a few years back, I was like, okay, I'm going to do a digital twin of my pantry.
[17:24] So I did it recipe by recipe. I arranged everything, so I know what stuff I have. And then she came, and she made a huge fuss.
[17:34] She was frustrated. She was like, who,
[17:37] you know, misarranged everything?
[17:39] I want to have taller jars at the back,
[17:42] shorter jars at the front.
[17:44] Now I cannot see anything. It's clumsy. And then I was like, no, but this is going to allow me to have the right grocery list at the end of the week.
[17:53] But she was like, do you even cook?
[17:56] For you to make that decision. Do you see?
[17:59] That's when it really clicked for me. You could have a technology,
[18:04] and you think that you are doing the right thing by simplifying, like, getting to the grocery list faster now, but you have complicated everything else prior to that grocery list.
[18:18] Do you see?
[18:20] I think that is where, you know, when a new technology is being introduced, like if a company decides to build a knowledge graph,
[18:28] they need to understand what are the pain points people are having connecting the dots.
[18:36] Or, like, when do they feel that they are not empowered enough to utilize the data, because they are not too sure about whether they can use this data or not.
[18:49] So when they have a tool like Knowledge Graph, it will be so much easier for them. They could arrange it in the way that they are used to,
[18:58] but at the same time it also shows a different level of hierarchies.
[19:05] Do you see that? That's why we have those taxonomies in there.
[19:09] So then they will know, okay,
[19:12] I can use it. Oh, I couldn't use it.
[19:15] And they should be the users, the actual data handlers. But what you will barely see in organizations that develop knowledge graphs is the end user really using it.
[19:31] That's how I see it.
[19:33] And let's say in a situation where there is a question about can I transfer this human sequence data from one region to the other region.
[19:46] So they may have to read certain SOPs, standard operating procedures.
[19:52] But let's say if I map the human sequence data points or the attributes to the specific SOP within the organization,
[20:03] how easy their life would be.
[20:05] They could get that information with the click of a button that will give them the confidence to send the data or not.
[20:14] So this is why I think knowledge graphs should really be used in governance, to operationalize governance and privacy.
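A small sketch of that one-click SOP lookup: map attributes to the standard operating procedures that govern them, then answer a transfer question directly. The SOP names, the attribute names, and the local-only rule are all invented for illustration.

```python
# Hypothetical attribute-to-SOP mapping for transfer decisions.
ATTRIBUTE_SOPS = {
    "human_genome_sequence": ["SOP-017: Genomic data transfers"],
    "sample_donor_id":       ["SOP-004: Donor identifiers stay local"],
}

LOCAL_ONLY = {"sample_donor_id"}  # attributes that must not leave their region

def can_transfer(attribute: str, from_region: str, to_region: str) -> str:
    """Answer a cross-region transfer question with the governing SOPs."""
    sops = ATTRIBUTE_SOPS.get(attribute, [])
    if attribute in LOCAL_ONLY and from_region != to_region:
        return f"No: {attribute} must stay in {from_region}. See {sops}."
    return f"Yes, subject to {sops}." if sops else "Yes, no SOP attached."

print(can_transfer("human_genome_sequence", "EU", "US"))
# -> "Yes, subject to ['SOP-017: Genomic data transfers']."
```

The confidence Brintha describes comes from exactly this: the handler gets the verdict and the governing procedure in one step, instead of hunting through documents.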
[20:23] Debbie Reynolds: How important is it to understand,
[20:29] especially jurisdictionally,
[20:31] the fact that people have different cultures and how they think about data protection or how they think about privacy. Because I feel like sometimes when people are working with people in different jurisdictions,
[20:46] they may say, well,
[20:48] we think everyone should be like us and you should do things our way. But you really have to figure out how to translate that, and be able to talk, and be able to find commonalities, and then figure out what the big issues are that you need to resolve.
[21:04] But I want your thoughts there.
[21:06] Brintha Shanmugalingam: It used to be like, I used to say, find the human within the data, and their dignity, their privacy.
[21:17] But now the regulations are coming up with controls that protect them.
[21:25] Do you see?
[21:26] Like, no longer do we have to think about how we are going to protect cultural sensitivity or not; the regulations themselves are guiding us that way. So I look at it as, in the past, we would say the tech moves fast and the regulations are slower.
[21:49] There, we had to put our minds to it and see, you know, how ethically we conduct this. But nowadays, like, you know, if you look at the EU AI Act, it is moving very fast. So, okay, let's say data handling and cultural sensitivity.
[22:12] How do we tackle that?
[22:14] I think when we didn't have GDPR or the EU AI Act or the DOJ final rules or PIPL from China,
[22:27] I think we kind of had to figure out a way to be culturally appropriate.
[22:36] But nowadays, I would say, all these regulations are somehow still protecting their citizens. As long as we observe that and then put ourselves in their shoes,
[22:50] we can reason with it.
[22:52] Do you see? And then we can also understand why certain information should not be processed.
[23:02] So I think that's where all I can say is some level of anthropology,
[23:10] knowledge and understanding of different cultures comes in handy. But the actual data is currently always being processed according to the regulatory requirements. I would say maybe in the marketing and sales area this could be more sensitive.
[23:35] It needs to have another layer of how we approach different cultural groups. But when it comes to transactional data or data that we use for innovation, it doesn't get into that subject.
[23:54] But I do agree when it comes to marketing and sales,
[23:58] I think mapping cultural appropriateness to data, just like we map GDPR articles to data, would really help organizations become more customer-centric, regardless of who they are, what country they are from, and so on.
[24:20] Yes, definitely. I think that would be really good, and I'm glad you asked that question, because now I am thinking, how do we highlight what is inappropriate for a culture in our data set, and how can we map that within a knowledge graph?
[24:38] I think, yeah, this is something I will do as a side project.
[24:42] Thank you for this idea.
[24:45] Debbie Reynolds: Oh my goodness, that'd be really cool. You're giving me a lot of thoughts and a lot of ideas.
[24:52] I want your thoughts. What's happening in the world today with either data or technology or data protection or privacy that's concerning you right now?
[25:04] Brintha Shanmugalingam: I think there is a gap between data governance and so-called AI governance. And for companies that haven't reached the point where they can say, okay, you know, we have operationalized data governance,
[25:22] when they say that they are enabling agentic AI,
[25:28] I'm worried.
[25:30] So for example, in practice, governance is
[25:35] literally the connector of people,
[25:40] processes,
[25:41] data, and technology, and we classify attributes as, let's say, sensitive, personal, financial, biometric.
[25:54] But in AI governance, those same labels tell us, you know, which attributes are high risk, which may bias models, and which are safe.
[26:07] So if you do not have the classification in place,
[26:12] how can you push those data into these training models?
[26:17] To me that is not responsible, it is not ethical.
[26:21] This is why, I think, you even called me a preacher once. So I have been preaching for this attribute-level classification and the attribute-level access controls, because we can no longer look at it as a data set anymore; it is very hard to tag a whole data set with one sensitivity level.
[26:44] Otherwise we will have the simple way of tagging them: okay, you find the highest sensitivity level and then you tag it as,
[26:53] let's say, strictly confidential, and you cannot use it. But then we are also stopping the innovation. So that is also not good.
[27:00] But, like, you know, having this attribute-level classification, maybe we don't need to feed in the personal data. And I think so many companies are having this problem, like, let's say the credit check: because a person is from a specific city,
[27:20] they are simply being rejected.
[27:24] Do you see?
[27:25] So maybe when they are looking for eligibility, they don't need to train with all levels of demographic data;
[27:35] all they need to know is maybe their job and their age and income. Do you see? That's why I think tackling it at the attribute level, to me,
[27:48] I also think, I think it is EU AI Act Article 10.
[27:53] It requires the training data to be relevant,
[27:58] representative, and bias-managed. So you can't even attempt that without attribute-level classification. Like, you know, governance gives us, I would say, the dictionary and the shareability framework,
[28:16] for applying it at scale across borders, across systems, and into the AI. That is the bridge that data governance gives us.
[28:26] Data governance is the moral compass, and AI scales it into the future.
[28:33] Do you see where I'm going with this?
[28:36] Debbie Reynolds: Absolutely.
[28:37] Brintha Shanmugalingam: And without that compass, you are directionless in AI governance.
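Reading the Article 10 point as code: a hedged sketch of attribute-level data minimization before training, using the credit-check example from the conversation. The allowlist, the column names, and the risk labels are assumptions for illustration, not a compliance recipe.

```python
# Keep only the attributes approved for this model's purpose
# (eligibility), and keep proxy-bias risks such as city out of training.
import pandas as pd

TRAINING_ALLOWLIST = {"job", "age", "income"}  # relevant, lower risk
# Columns like "city", "ethnicity", "name" stay out: risk of proxy bias.

def minimize_for_training(df: pd.DataFrame) -> pd.DataFrame:
    """Drop every column not on the purpose-specific allowlist."""
    dropped = [c for c in df.columns if c not in TRAINING_ALLOWLIST]
    print(f"Excluded from training: {dropped}")
    return df[[c for c in df.columns if c in TRAINING_ALLOWLIST]]
```

Without attribute-level classification there is no allowlist to consult, which is the gap between data governance and AI governance that Brintha is pointing at.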
[28:45] Debbie Reynolds: Yeah, you're preaching, you're preaching, you're preaching. So two things you said are incredibly important and I want to make sure people understand why this is important.
[28:54] One is that I think where people are going wrong with AI is that they're trying to take everything and throw it in and then hope for the best, when you're really supposed to think, really curate what goes in at the beginning.
[29:12] And so that's what you can control,
[29:16] right? Because even once it's in the model, you don't really know how the model is working and stuff. But your best shot really is to figure out what goes in, so that that level of governance is there.
[29:29] But then the other thing that you talked about, and this is why attributes are going to be important,
[29:34] and will be even more important in the future, is because I think that data will need a lot more attributes in order to be manageable in AI systems and a lot of these other systems, right?
[29:50] Because if you add more attributes to data, let's say metadata,
[29:56] that can help at a machine level to be able to do almost like your knowledge graph in some ways because it will help you identify what are the risks and especially in the use case.
[30:12] And then the other thing that happens in AI systems is that it's incredibly easy for them to lose the context, or it's incredibly easy for AI systems to conjure up their own context.
[30:28] Right. Or conjure up their own relationships or different things, and that makes it harder to govern that data. But I want your thoughts.
[30:37] Brintha Shanmugalingam: It is.
[30:38] So how do I start here?
[30:42] So, yeah, when I propose these attribute-level classifications, before that, I want to define them, and I want to define them in a way that has a 360 view.
[30:57] So, you know, if I come in and say, I am Brintha, I am a data governance expert,
[31:02] it really doesn't mean anything until and unless I go deeper into explaining how I came to be, that's when you get to know me better.
[31:13] So just like that, if we want to train on data, we need to get to know the data and its characteristics completely.
[31:24] And not only that. So for example,
[31:28] an attribute such as a name,
[31:31] when it's going through different value chains,
[31:35] the classification level can also change.
[31:40] Do you see? So when I try to map all that,
[31:44] usually I hear, oh God, this is a lot of,
[31:48] you know, like, people will say, oh, how are you going to do this? This is going to take so much time, you know, so much effort, and so on.
[31:58] But then my question to them is like, if you already have a data dictionary,
[32:05] metadata management and traceability lineage, not just technical lineage, we want business lineage as well. Why not have a legal lineage?
[32:16] It's all possible,
[32:17] like why don't we do it? You know, and then, you know, the attribute classification is just the next logical step. It doesn't add work, it actually operationalizes governance. Do you see?
[32:32] But, you know, if a company says, oh, we don't have any of that, and I have actually even received that answer,
[32:40] then my answer is bold and blunt.
[32:44] GDPR was introduced in 2016,
[32:47] I believe, and I think it came into effect in 2018.
[32:55] But haven't they invested in governance? And if they haven't, how can they possibly jump from no governance to AI governance?
[33:07] There is nothing connecting there.
[33:11] And then I tell them,
[33:13] if you want to have that bridge, then the only way is attribute-level classification. And that's why information models come into place too. You understand the data flow, and you have your domains to own that data.
[33:30] And you have entities or business objects that specifically tell you exactly which business capabilities they are supporting, and all the attributes belonging to them. And how many attributes are we sharing between domains? Who gets to make the last call on the sensitivity level?
[33:52] And all of that matters. Without that,
[33:56] talking about AI governance and data governance never made sense to me.
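A minimal sketch of the information model Brintha outlines here: domains own attributes, and each attribute carries its classification plus its lineage, including a legal lineage. Every field name and value below is a hypothetical illustration.

```python
# Each attribute knows its owning domain (who makes the last call on
# sensitivity), its classification, its legal lineage, and where it is shared.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    domain_owner: str                   # who makes the last call
    classification: str                 # e.g. "public" ... "strictly confidential"
    legal_lineage: list[str] = field(default_factory=list)
    shared_with_domains: list[str] = field(default_factory=list)

customer_name = Attribute(
    name="customer_name",
    domain_owner="Customer Domain",
    classification="confidential",
    legal_lineage=["GDPR Art. 6", "GDPR Art. 17"],
    shared_with_domains=["Sales", "Support"],
)
```

With this in place, legal lineage sits next to technical and business lineage, which is the bridge from data governance to AI governance that the conversation keeps returning to.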
[34:02] Debbie Reynolds: Yeah, exactly.
[34:03] Oh my goodness.
[34:04] So you saw me smiling when you said that, right? That's the quote of the year. She says you can't go from no governance to AI governance. And it's so true.
[34:15] And then AI needs to be governed differently,
[34:18] right? So you have to really think about it differently. So. Wow, that's amazing.
[34:23] Well, Brintha, if it were the world according to you and we did everything you said, what would be your wish for data protection, data governance, or data privacy anywhere in the world, whether that be human behavior, regulation, or technology?
[34:38] Brintha Shanmugalingam: I think for me tomorrow is about putting people back at the center of technology.
[34:51] Everything we do,
[34:52] like, let's say, in privacy, data, AI governance,
[34:57] it should be about protecting people and their dignity and their trust, because they are giving you the data, so you better protect it. You know, that's how I see it. It also means using AI for the right things right now.
[35:15] I think it is crazy how many companies are rushing to innovate on top of messy data. You know the slogan, garbage in, garbage out.
[35:27] But a better tomorrow would be different.
[35:31] I want them to use AI, but with clean data,
[35:35] build knowledge graphs of regulations, and link attributes to the obligations, so we know exactly what each piece of the data means and how it can be used responsibly and ethically.
[35:55] Respect every human being in this world.
[35:59] For example, in global healthcare, AI could classify attributes automatically. Use the AI to do that. So then the effort isn't too bad anymore. You see,
[36:13] apply GDPR or PIPL rules or DOJ final rules in real time.
[36:20] Companies are claiming that they are exploring agentic AI. Applying rules in real time shouldn't be so much of a problem,
[36:30] typically. And let researchers share those anonymized outcomes worldwide. Why?
[36:38] You know, identifiers should stay local,
[36:42] but the innovation should build on that integrity.
[36:47] Do you see? If somebody wants to take my cells and try to innovate medicine, I'm super happy. But you don't need to say it came from me.
[36:58] Debbie Reynolds: That's right. Exactly. Exactly.
[37:02] Brintha Shanmugalingam: My vision is very simple. Use AI to establish governance first and then innovate.
[37:13] So not just faster, but let's always keep the people at the center.
[37:20] How you're using their data matters to them and there should be a transparency to the people,
[37:27] how their data is being used. As an individual, I don't think I would ever stop an organization from using my blood samples or cells to innovate something that could save many lives.
[37:42] But I just don't want my name out there,
[37:45] I don't want my date of birth out there. I don't want my security ID out there.
[37:51] So,
[37:53] yeah, use the AI for the right reasons first.
[37:56] Debbie Reynolds: I agree.
[37:57] That is extremely wise.
[37:59] Extremely wise words.
[38:01] So thank you so much for being on the show. This was incredible. I'm so glad we were able to talk. And please follow Brintha on LinkedIn. She really throws some gems out there, and people really need to listen, because I think this is the way that governance should be thought of.
[38:20] And I think we're going to start to hear a lot more about attributes, I think, in the future. So I'm glad you're thinking in that way. So, yeah. Well, thank you so much and I look forward to us possibly being able to collaborate in the future.
[38:34] Brintha Shanmugalingam: I appreciate it, and thank you so much for having me.
[38:39] Debbie Reynolds: You're welcome. Thank you.