"The Data Diva" Talks Privacy Podcast

The Data Diva E161 - Vivek Kumar and Debbie Reynolds

December 05, 2023 Season 4 Episode 161
"The Data Diva" Talks Privacy Podcast
The Data Diva E161 - Vivek Kumar and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds, “The Data Diva,” talks to Vivek Kumar, Assistant Vice President, Data Protection, EXL Service Holdings, Inc. We discuss various topics related to data privacy and protection. Vivek emphasizes the importance of catering to a broad audience and explaining acronyms. The conversation highlights the challenges both large and small companies face in ensuring compliance and building effective risk assessment frameworks. We discuss the importance of accountability and transparency in AI development, the need for regulations to ensure fairness and transparency in AI, and the importance of privacy by design. We also discuss the growing importance of consent management in data protection, the challenges of managing consent lifecycles, and the need for communication and awareness across departments to avoid missed touchpoints and ensure compliance with data protection regulations. We cover the challenges of implementing privacy compliance within organizations and the need for privacy-enhancing technology to address them. Finally, we discuss the importance of privacy in data governance and the challenges of implementing effective privacy programs, emphasizing the need for a pragmatic, balanced approach to privacy in the face of evolving technologies and regulations, and Vivek's hope for Data Privacy in the future.




42:38

SUMMARY KEYWORDS

privacy, data, building, ai, solution, company, people, consent, organization, framework, thoughts, aspect, implement, policy, transparency, accountability, requirements, aligning, team, technology

SPEAKERS

Debbie Reynolds, Vivek Kumar


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy Podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show, Vivek Kumar; he is the Assistant Vice President of Data Protection at EXL Service Holdings, Inc. Welcome.


Vivek Kumar  00:43

Thank you, Debbie. Thank you for hosting me today.


Debbie Reynolds  00:46

Yeah, well, this is great. We've been trying to get this together for quite some time, and I'm glad we were able to get you on the show. You and I have been connected on LinkedIn for many years; you actually called me up and asked me to participate with you in a presentation at your corporation around Data Privacy, data protection, how to do robust programs, and things like that. And I thought you'd be a really great person to talk to; I think there are a lot of people in the audience who probably work at corporations where some of their duties fall into this realm. So being able to talk to someone like you, I think, will be very beneficial. So why don't you tell me about your journey? How did you get to where you are in your career now?


Vivek Kumar  01:37

Oh, so that's an interesting question. I started my career more towards GRC (Governance, Risk, and Compliance), managing various Information Security Management System standards and a bit of data protection. But then, in 2016, when the world was waking up to the GDPR regulation, one of the key projects given to me was implementing GDPR across the company. And that's where my privacy journey started. I was one of the few members in the company implementing GDPR fully, end to end, across the globe. So yes, that's how privacy got into my DNA. If I look back a few years, privacy as a subject, as a domain, has evolved a lot. Laws are getting translated into action across a growing landscape of Data Privacy, with countries trying to enact and enforce privacy regulations, and a lot of standards are being drafted and coming up in the industry. So yes, that journey started then and is still going on.


Debbie Reynolds  03:06

That's great. Especially in the US, I remember when the GDPR was passed in 2016, and I thought, oh, I'm going to wake up in the morning and everyone's going to care about privacy. And it was like, no, crickets, right? Nothing happened, and no one was really thinking about it. So it's great that you were thinking about it and working on it then, because it's definitely become a huge issue now. Tell me a little bit about AI. AI has taken over the privacy conversation in our field, and maybe we need to hitch our wagon to AI, of sorts, especially in the regulation space. Tell me how the topic of AI has changed what you do or touched it in some way.


Vivek Kumar  04:01

Oh, absolutely. I was expecting this question. AI, I think, over the last year or so, has gained a lot of noise, with the world trying to catch up with ChatGPT, and a lot of regulators and standards bodies now trying to work out how we comply with these sorts of technologies. One of the key aspects, Debbie, is that AI is not really a new thing. The principles still carry over from, and remain valid under, our existing Data Privacy regulations and standards. So a lot of work has to be done, but at the same time, I would say a company's existing program mainly requires enhancement towards meeting AI principles. I was at a conference in Washington, DC recently, an IAPP event, and this was the hottest topic, with industry leaders getting together and trying to build universal consensus towards implementing a responsible AI framework. And the way you look at it now, companies are coming up with their AI policies. That's the first step: policy in the form of acceptable use of AI, a framework for managing AI-led solutions, and a process in the form of a strong risk assessment, especially towards AI. The way I'm looking at it is, one, how do we enhance the existing Data Privacy program towards AI, and two, what additional tools, technologies, and skill sets do we need to bring into the team?


Debbie Reynolds  05:56

Yeah. Yeah, I guess I'm seeing three different ways companies are addressing this. One is the company assumes their organization is a castle: let's close the gate, close our eyes as if AI isn't there, and go on with business as usual. The other side of the spectrum is people going totally gangbusters for AI without really thinking about the risks. And then there's this middle group trying to say, okay, AI is there; are we as a company actually going to use it or not? First of all, I don't think abstinence is going to work, right? Just saying don't use it, that's not going to work. And then there are companies like Apple that basically say, okay, we're going to shut off these external AIs and create our own; we're seeing a lot of that happen. So what are your thoughts about these three approaches?


Vivek Kumar  06:56

I think that is absolutely right. Because, like Apple or Microsoft, the AI chatbots being built and the solutions being used still come with a lot of questions. I was recently looking at Microsoft Copilot, and they mention on the website that it uses a black box. But the fundamental question is, do we have a transparency mechanism built into that black box? Tomorrow, somebody questions, you know, how is this black box making decisions and doing profiling? And when they question it, you don't have answers right now. So we are still building solutions, trying to help our customers, but at the same time, the rights of individuals need to be protected, and the risks of data exposure also need to be looked at very rigorously. Moreover, the way I'm looking at it, and I think we'll come to that subject in a few minutes, accountability is most important: building more and more transparency into your policies and procedures, the way you're collecting data, the way you're building solutions. Building that transparency means you are able to explain how your AI chatbot is making decisions and what data is being used. So yeah, still a lot of thoughts.


Debbie Reynolds  08:22

I love it. And I agree. Let's talk about accountability. I think what these AI tools are doing, whether people realize it or not, is shifting the risk to the organization. And so there has to be accountability. You can't just say, oh, wow, the AI did this thing; I don't know what happened; it just magically happened. No, that's just not going to work. So what are your thoughts about building in accountability? You talked about transparency, but I feel like what we're lacking in this AI discussion is accountability, or responsibility, from the beginning to the end of that data lifecycle. What are your thoughts?


Vivek Kumar  09:05

I look at it two ways. One is, you know, do we have a set of guidelines, standards, and regulations to build this accountability? The other is whether the roles within the organization building the solutions care about and understand it. What I mean is, I was in a recent conversation where someone said, hey, I'm just taking this AI solution from a vendor; when it goes wrong, is the vendor accountable, or am I? I said, of course, you are. So this level of clarity, and the mechanisms to build accountability frameworks, is still evolving. It will vary by organization, but the risk assessment framework has to evaluate vendor-led AI solutions: how those solutions are consuming data, whether they can fulfill data subject rights, individual rights, and whether they maintain the level of data protection that is required. Another aspect is that a lot of effort is required from the developers who are building the solution. There is a missing skill set among developers; solutions need to be designed and assessed against an effective risk assessment framework across the entire lifecycle, whether the requirements-gathering stage, the development stage, the deployment stage, or the monitoring stage, where all the privacy requirements have to be assessed and assured. The data which is moving, derived, or collected as training data, including synthetic data, which nowadays is being introduced into AI solutions a lot, has to be compliant. So when you look at a solution, you look at how the privacy principles have been embedded from the design stage. Now, a common understanding is: since there are no AI laws, nothing applies to us, right? But the fundamental answer is no; the existing privacy regulations still apply to you as long as you're processing personal data. So I think we are gearing up towards that; we are building that awareness, that culture. And that comes back to your question of how accountability gets built: the end objective is to build a solution that gives trust to our customer that the data is protected. If we are complying with regulations, you are safe; your consumer data is safe.
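
A minimal sketch of the kind of lifecycle assessment Vivek describes here, not from the episode: privacy checks attached to each stage of an AI solution, with accountability staying with the deploying organization even for vendor tools. All names, stages, and questions are illustrative assumptions, not an actual EXL framework.

```python
# Hypothetical illustration only: a checklist-style privacy risk assessment
# spanning the lifecycle stages mentioned in the conversation.
from dataclasses import dataclass, field

@dataclass
class StageCheck:
    stage: str                   # requirements / development / deployment / monitoring
    question: str                # e.g. "Is training data (incl. synthetic) compliant?"
    passed: bool | None = None   # None until assessed

@dataclass
class AISolutionAssessment:
    solution: str
    vendor: str | None           # accountability remains yours even for vendor tools
    checks: list[StageCheck] = field(default_factory=list)

    def open_risks(self) -> list[StageCheck]:
        """Checks that failed or were never assessed."""
        return [c for c in self.checks if c.passed is not True]

assessment = AISolutionAssessment(
    solution="support-chatbot",
    vendor="acme-ai",  # hypothetical vendor name
    checks=[
        StageCheck("requirements", "Is the personal data in scope documented?"),
        StageCheck("development", "Is training data (incl. synthetic) compliant?"),
        StageCheck("deployment", "Can data subject rights requests be fulfilled?"),
        StageCheck("monitoring", "Are model decisions explainable on request?"),
    ],
)
print(len(assessment.open_risks()), "unresolved privacy checks")  # 4
```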


Debbie Reynolds  11:51

I want your thoughts on a statement that I hear people say all the time; now it's like nails on a chalkboard for me, but I want your thoughts. I've heard people say, in relation to AI, that technology is not good or bad, it just depends on how it's used. And I just don't think that's true with AI. Because humans build these systems, there could be a black box, there could be bias, and there could be discrimination in there. The systems are making inferences, right? Maybe about things you didn't say, it's making inferences there. So I think when you say the technology can't be good or bad, you're basically saying, well, it's not my fault if it does something bad. I don't know. What are your thoughts?


Vivek Kumar  12:47

Yeah, I think you're right. This is still an evolving area, I would say, and it's still a gray area. But I always believe in fundamentals; the fundamentals stay the same, and the world always catches up. So the solution is not to stop the technology from growing. The approach should be how quickly and effectively, as a balanced approach, you can build your framework, your policies, and your processes towards the best you can; I think your intent plays a key role. And I see a lot of organizations trying to build that framework. There are a lot of initiatives; you see Microsoft, Apple, the big players themselves taking steps to make sure they don't get caught in any regulator's eye, and some have had a good knock on the door from the Irish authority. So they are taking it seriously. But then there's another aspect: big players like Microsoft and Facebook will do whatever they can do, but what about the small-scale companies? How can they invest so heavily to make sure they are compliant, and that the way a solution is built gives the organization confidence that they are doing what they are supposed to do? There are many examples of AI solutions going wrong, and once something has gone wrong, it can create a lot of chaos. So we continue to build that effort until we get clear guidelines and standards for these new challenges regarding explainability, transparency, and building solutions. Again, with data protection, the quantity of data has increased, and I'm wondering whether the definition of personal data might evolve beyond just individual data as we know it now. Around the recent bills coming out towards AI in the US, like the New York bias bill, they're saying, okay, you have to have an audit of the solution you're building, towards bias. But how effectively can one do it? I don't know. Do you have an audit mechanism where you can actually do that audit? I don't know. So yeah, let's see how it goes.


Debbie Reynolds  15:26

Yeah, I just did a webinar for the New York State Bar Association today about the New York AI audit bill and things like that. I think what typically happens is, unfortunately, something bad happens, then a regulation comes out trying to answer that bad thing. That's kind of what we're seeing. And I think what they're aiming to do is create more transparency in these systems; they're tired of calling CEOs to Capitol Hill who say, I don't know, I don't know what happened. They're like, well, that's your job, and now we're going to put it in a law. So like you said, accountability should be there, transparency should be there. They're not necessarily dictating how people do it; they're basically saying these elements need to be more transparent, and you need to be able to show evidence that you've taken the steps, and things like that. In hiring, I think this is going to be a huge issue, because there probably is quite a bit of discrimination and bias in there. I know a lot of people who are maybe older workers, and they're saying they're having a hard time finding jobs, right? They know age discrimination exists. But what some companies don't realize with these systems is that discrimination is probably in the code, and when we have these audits, I suspect it will be there. So I think it's going to be very interesting. I'm hoping this will push more privacy by design, and I want your thoughts on this. Before, when people were talking about privacy by design, it was like, let's just do whatever, then call the privacy people later and put some band-aids on stuff. And it's like, no, it doesn't actually work that way. So are you seeing more of an effort to either have you be involved earlier in a process or to create less risk downstream? Is that something that you're seeing more of?


Vivek Kumar  17:42

Yes, more and more traction towards privacy from the design stage is growing right now. You see a lot of involvement of the privacy team and privacy folks from the start, whether it's a new customer, whether you're doing a merger and acquisition, or whether you're building new products and solutions, so that those products and solutions are aligned with the applicable privacy principles and regulations. Those requirements are all coming in at an early stage. But a lot of awareness still needs to be created. The fundamental challenge I see is, again, a people issue: is the organization willing to invest heavily and build a skill set, people who can work closely with developers and align those privacy requirements, and build awareness among those developers so they understand privacy as a fundamental process that has to be built in from the requirements stage? And the last bit, which is very important, is business: while businesses are justifying, creating, and building those requirements, are they taking privacy as core to their design framework? If I look at these three areas, a lot has changed and evolved over the last year or so. Now the customer is also asking where privacy is embedded in the solution and design, and organizations are onboarding privacy teams and privacy champions into their processes whenever there's a new client, a new customer, or a new product to be developed. So we are seeing it now, but there is still a long way to go. The structure has to be built very strongly: privacy has to be looked at from scratch, not as a cost factor after the solution is already there and then you go ahead and assess.


Debbie Reynolds  19:45

Yeah, I think it's expensive, and it's not helpful to try to do it at the end, because you've built in so many things that maybe shouldn't have been done in the first place.


Vivek Kumar  19:56

Yes, right. On privacy by design, I will touch upon a couple of things more towards consent management. When you look at building any solution with a lot of data, especially with AI, whatever solution is being built, the aspect of traceability of the data matters: who the data belongs to, what data you are capturing, and how the data will be used. A lot of awareness is growing towards consent management, and I've seen companies building effective consent management frameworks as well. I was pleasantly surprised; I don't know if you have looked at India's Digital Personal Data Protection Act, which was recently passed. Consent has been categorically defined as one of the grounds to collect data. They also talk about having a dedicated consent manager onboarded who will oversee consent management practices. Those are very appreciable steps, I would say, because managing the consent lifecycle itself is a very challenging job, right? Building those consent lifecycles across your entire organization ecosystem has to start at the base level: what data do you have, and to whom does the data belong?
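
To make the traceability point concrete, here is a minimal sketch, not from the episode, of a consent record that tracks who the data belongs to, what was captured, the purpose, and where the consent stands in its lifecycle. Field names and states are illustrative assumptions.

```python
# Hypothetical illustration only: a consent record with a simple lifecycle.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class ConsentState(Enum):
    GRANTED = "granted"
    WITHDRAWN = "withdrawn"
    EXPIRED = "expired"

@dataclass
class ConsentRecord:
    subject_id: str              # who the data belongs to
    purpose: str                 # what the data will be used for
    data_elements: list[str]     # what was captured
    state: ConsentState
    updated_at: datetime

    def withdraw(self) -> None:
        """Record a withdrawal; downstream systems must honor it."""
        self.state = ConsentState.WITHDRAWN
        self.updated_at = datetime.now(timezone.utc)

record = ConsentRecord(
    subject_id="user-123",
    purpose="marketing-email",
    data_elements=["email", "name"],
    state=ConsentState.GRANTED,
    updated_at=datetime.now(timezone.utc),
)
record.withdraw()
print(record.state)  # ConsentState.WITHDRAWN
```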


Debbie Reynolds  21:31

Yeah, I think you're right. There's definitely, especially in APEC, or even in Saudi Arabia and places like that, definitely a move towards consent, right? Because in some of those countries, consent is the only way that you can get certain data. I remember, around the time GDPR came out, a webinar, oh, it was actually an in-person session, and one of the lawyers on the panel said, consent is impossible; that's impossible, we can't do that. In the past, so much of what companies were doing was considered, quote-unquote, performance of a contract or legitimate interest. We know that as data has grown, companies have done other little business deals and other things with data that people didn't want them to do. And that's what I think a lot of these laws are responding to. So it isn't impossible, but it is difficult, right? Because software and data systems are typically built to remember data, not to forget it. So you're basically swimming upstream when you're trying to get to a situation where, okay, someone gave me data, and now I have to get rid of it, or someone revoked their consent, so how do I do that? What are your thoughts on that?


Vivek Kumar  22:56

So I think this goes towards building a consent lifecycle that has to be integrated with your consumer rights management workflow and process, because I look at it in three ways. One is, are you able to build a centralized mechanism where you have a central view of whatever data is coming into your environment, so you know it has been consented to; building those checkpoints, building consent into your data collection points, those checkboxes, or implementing the capability to ensure that any website collecting data in your organization captures consent. That is one of the key requirements. The second aspect is, are those consent mechanisms integrated with your consumer rights, or data subject rights, workflow? If somebody has not consented or is requesting some right, do you have a team and process to handle those requests in an integrated way? I see a long road of gaps where these two things don't talk to each other: you have a human resources department sitting there getting requests and not talking to the privacy department or the central team sitting in another part of the world. Then you have marketing, still a gray area, who run a lot of marketing campaigns and collect data, and then they get requests, hey, do not disturb me, why are you sending me emails; but they don't connect to the mechanism that would give them an integrated process as far as DSRs are concerned. The third aspect is awareness, which I say repeatedly: the more awareness we keep building across the organization, the more people know, okay, a request may come in; we have to know what to do, whom to reach out to, and who can help us align with this. I have seen those touchpoints where someone sitting in Europe gets a request, hey, delete the data for this applicant, and then you have the team sitting in the Philippines that manages the DSR workflow, and those two groups are not talking to each other. The local team looks at it as a normal business-as-usual process and says, oh, okay, I don't know what to do, when actually they need to align with the privacy requirements for how those requests should be handled. So what's your thought?
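
The integration gap Vivek describes, where a deletion request lands in HR or marketing and never reaches the DSR team, can be pictured as a missing routing layer. Here is a minimal sketch, not from the episode; the request kinds, queues, and department names are all hypothetical.

```python
# Hypothetical illustration only: one shared entry point any department can
# call, so a request is forwarded to the team that owns the DSR workflow
# instead of stalling in a local inbox.
from dataclasses import dataclass

ROUTING = {
    "delete": "privacy-ops",               # central DSR team
    "access": "privacy-ops",
    "opt_out_marketing": "marketing-compliance",
}

@dataclass
class DSRRequest:
    subject_id: str
    kind: str          # "delete", "access", "opt_out_marketing", ...
    received_by: str   # the department the request landed in first

def route(request: DSRRequest) -> str:
    """Return the owning queue; unknown kinds still go somewhere central."""
    owner = ROUTING.get(request.kind, "privacy-ops")
    print(f"{request.received_by} forwarded '{request.kind}' "
          f"for {request.subject_id} to {owner}")
    return owner

# A request that arrives in a European HR inbox gets routed, not stranded:
route(DSRRequest(subject_id="applicant-42", kind="delete", received_by="hr-europe"))
```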


Debbie Reynolds  25:35

Yeah, I totally agree. Corporations, in my view, have been developed almost like Santa's workshop, where everyone does their little part, and magically the toy pops out at the end. And then we have something like privacy, which is an issue that cuts horizontally across organizations, where almost any type of data you're looking at could possibly have some type of regulation or other obligation tied to it. It just becomes really hard to understand, or to communicate to people, what their responsibility is and who needs to do what. Because I hear a lot of companies say, hey, we need a policy; we're going to write this policy; we're going to do this. And then it's, okay, so who's going to do this? Who's going to do that? And they're like, well, we haven't thought about that yet. So part of compliance, to me, is doing, right? It is action, not inaction. If you're not taking action, then you're out of compliance, in my view. What are your thoughts?


Vivek Kumar  26:38

Well, I have seen it where you have a policy sitting in some portal in the company, but then who is educating people on the policy? You have a central team and a central awareness program, but then you have to work closely with the business so they know, first, what the policy is, and then tell each department and team, hey, this is what you need to do. The way I look at it, the more privacy sits with the business teams, operations teams, IT teams, your incident teams, marketing teams, your HR teams, and, very importantly, the finance team, the more confidence you get in how things are building up, because it has to sit with them rather than in some central place across different locations. So those team structures and channels still need to grow, where privacy is managed in a very localized and regional way rather than, you know, one size fitting all. I think that's something that is growing, I would say.


Debbie Reynolds  27:53

Perfect. I agree with that. I don't know what your thoughts are; this is an executive question, executive advice. Let's say someone's a privacy person, and maybe they're the only person, or maybe there's a small group within an organization; how do they develop champions in other divisions of the company that aren't in privacy? How do you reach out to HR, Marketing, or IT folks to try to get champions who can help you do your job? What are your thoughts?


Vivek Kumar  28:27

So, I think we are working in a tiered, layered approach, I would say, in a fairly generic way. The top layer is your core privacy team, who are very, very skilled, educated, and trained on advanced privacy aspects, regulations, and understanding. The second layer is a train-the-trainer model. What that means is departmental spokes, dedicated points of contact, who understand their roles and responsibilities towards Data Privacy. They get specialized, continuous training, monitoring, and awareness workshops through various channels, where we continuously cascade what is happening in the privacy world, what is coming up, what is growing, and what regulation applies to them, building those champions and putting privacy in their hands. What that means is, I fundamentally believe, give them the charter: this is your business, this is your area, this is what applies to you, and you are responsible for driving it, right? The way we are building a cross-functional culture is that each point of contact, each POC of a respective department, takes that charter and drives the compliance while we support them through different channels: through policy and procedure, through awareness and training, through handholding, through guidance. They take charge, working with the privacy team to ensure that the solutions they're building, the customers they're handling, and the requirements they have from the customer are being fulfilled. That also requires working closely with the legal and compliance teams, and with HR and IT, and building that thinking in those departments: why privacy is important for them, and how the solutions we build can bring more trust to customers. So that's the layered approach we're trying to build as a culture.


Debbie Reynolds  30:45

That's a great approach. Thank you for that. Talk a little bit about privacy-enhancing tech. Back when you were building your first privacy program, I'm sure there was no privacy-enhancing tech around. Now we're seeing more people and more companies trying to work in this space. How has that changed over the years? Is this a good change? I would imagine it's a good change. What are your thoughts?


Vivek Kumar  31:13

No, absolutely, it's a good change. With AI and the way digital solutions are growing, technology for enhancing privacy is growing too. And when I say technology, there are solutions, vendor-led solutions and capabilities, being built and enhanced around how we implement the data minimization principle, and how we implement the purpose limitation principle of privacy, with technical capability. There are a lot of terms now being used which we had never heard and a lot of people aren't familiar with. Like federated learning, where you try to build an AI solution in a way that helps you minimize the data and ensure the data you use stays tied to the purpose for which you collected it. There is another concept now being developed, ZKP, zero-knowledge proof, where, in effect, you work on dummy data rather than the actual data, so you have basically no knowledge of the actual data behind it. And there are a lot of other concepts coming: differential privacy, and how privacy needs to be embedded in a cloud environment. So the solutions are getting there. But at the same time, you need a deeper understanding before deploying them, because there is no one size fits all; these solutions work well when you use two or three of them in combination and then build your own solution. I have not seen, at least in my view, organizations adopting privacy-enhancing technology very aggressively and making sure it is backed by the policies and procedures one needs to deploy it. I think this is still maturing, in my view.
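
Of the concepts named above, differential privacy is the easiest to show in a few lines. Here is a textbook sketch, not from the episode and not a production implementation: the Laplace mechanism adds noise calibrated to a query's sensitivity and a privacy budget epsilon, so an aggregate can be shared without exposing any one individual's contribution.

```python
# Hypothetical illustration only: the classic Laplace mechanism.
import math
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Noisy count: the true count plus Laplace(sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means more noise and stronger privacy:
print(dp_count(1000, epsilon=0.1))   # e.g. 987.3
print(dp_count(1000, epsilon=5.0))   # e.g. 1000.2
```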


Debbie Reynolds  33:22

Yeah, I think it's definitely maturing. There's always this push-and-pull thing that happens with new software or new software areas, right? Some tools want to do everything, and other tools want to be best of breed. But what that means for a company is that you may have multiple solutions, depending on what your issue is. So that's always a challenge: should I get this thing that does ten things, but only does one thing really well and the other nine not that great? Or should I get the best in a particular category? I feel like a lot of companies don't know the right question to ask in order to get the right tool. So they think they've solved their problem, but then it's like, oh, this doesn't do what I thought it did, and they're back out in the marketplace to find something new. There's definitely a learning curve there. And I think what AI has done is really accelerate these conversations, where before it was, let's just wait and see; now AI is pushing on the door, trying to get in, while companies try to keep it out and keep an eye out for different things, and that's not working. So hopefully this will start more conversations around privacy-enhancing tech. But back to one of your earlier points, which I think is vital: in the future, data lineage will be that much more important, right? Who is the data from, who does it belong to, and can we track that data all the way through the data lifecycle? That's something organizations have never had to do, and I think it's going to be necessary in the future. What do you think?


Vivek Kumar  35:05

Yeah, I think with AI, one of the discussions now involves privacy skills; I'll talk about the privacy team members' skill set as a second aspect. But privacy has now moved more towards a data governance aspect, where you need to look not just at privacy from the fundamental perspective but at building processes for data as a whole, the whole ecosystem of data in the company, and building data lineage practices to align those privacy principles across the lifecycle. And with the cloud, where almost the entire world is moving now, you need a good handle on the data; privacy has to be set by design as part of cloud adoption, getting more visibility into the data. One thing I've seen is companies implementing a lot of data discovery and classification practices, which is the first step towards getting very granular visibility of the data in your environment. It also helps you create a data map showing where the data is coming from and where it is moving, how the data is cataloged within your entire cloud or on-prem environment, what categories of data you have, what data elements you have, what regulations apply to the data, and what data protection policies and baseline controls, such as anonymization, you have in place. So that is one aspect. But I would also ask: well, you get the visibility of the data, but then what are you going to do about it? I have seen companies building processes for risk assurance aligned with their data governance practices, so that after getting the visibility, you have effective risk management in place working hand in hand with those data governance structures.
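
A toy version of the data discovery and classification step described above, not from the episode: scan text for likely personal-data elements so they can be cataloged in a data map. The regex patterns are illustrative assumptions; real discovery tools are far more sophisticated.

```python
# Hypothetical illustration only: naive pattern-based PII discovery.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def classify(text: str) -> dict[str, list[str]]:
    """Return the data elements found, keyed by category."""
    found = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: hits for name, hits in found.items() if hits}

sample = "Contact jane.doe@example.com or +1 (555) 123-4567; SSN 123-45-6789."
print(classify(sample))
# Note: the loose phone pattern also flags the SSN; real classifiers
# disambiguate overlapping categories and score confidence.
```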


Debbie Reynolds  37:08

You mentioned the business, and I think that's very important. Some people have an almost religious idea of privacy, and I'm just not one of those people, right? I tell people: you advise your company on what they should do, you help them put plans in place, but they have to decide their risk appetite, they have to say how they want to implement, and they have to decide what they want to purchase and things like that. So what are your thoughts about that? I feel like some people are like, oh, let's do everything my way, or it's terrible. What are your thoughts?


Vivek Kumar  37:51

No, I think you're absolutely right. If you look at the way privacy has evolved, you still do not have a standard like you do in information security, one that can fit any organization in any country across the globe; there you have a lot of ISO standards, a fairly uniform system you just adopt. But in privacy, look at GDPR: the EU has a very stringent, complex set of principles and requirements, whereas other countries have tried to align more with GDPR but with somewhat different aspects coming up. And when you look at the Asia side of it, China, again, aligns privacy very much with its own culture. So I think it's very important that the foundation of privacy is built on the culture of the country, and then the implementation of privacy has to be built with the culture of the company, right, and the organization. An effective risk appetite framework and process has to develop too. Whoever I speak to, I tell them that the privacy team's work is hands-on with the different departments, telling them why it is important and what the risk associated with it is, and then leaving it there, letting them take the decision, letting them make the call on how they want to go about it. The privacy team works more as an advisor, more as a consultant, more as a helping hand towards implementing those practices and solutions. That's how I look at the privacy team and how it should be structured in a multinational company. Say you have a Philippine office with a dedicated privacy team that is very mature in privacy, and then an office in India where privacy is still not getting as much importance as in the Philippines or Europe; where Europe treats privacy as the first priority, elsewhere it's the third. So that level of cultural understanding of the organization is still very important if you want to drive an effective privacy program within your company.


Debbie Reynolds  40:19

I agree with that wholeheartedly. So if it were the world according to Vivek, and we did everything you said, what are your wishes for privacy anywhere in the world. So whether that'd be regulation, human behavior, technology, what are your thoughts?


Vivek Kumar  40:35

I think, over the past few years, privacy has grown more than I could have imagined. If I look at it, around 70% of the world's population is protected by privacy law; with India's law passed, I would say closer to 90%. Now, at least, these people have rights. Individuals have been gaining more importance: okay, my data is valuable to me, and I have the right to know what's happening to my data. I always say to everyone that you need to look at it from the individual's perspective; only then can you build an effective Data Privacy program. At the end of the day, the individual is the customer; this is who we are protecting. So it is evolving. And with the latest growing technologies and digital solutions, it's AI today and will be something else tomorrow, so we will have challenges. But fundamentally, we should have a balanced approach: meeting people with empathy, building the frameworks, and ensuring that, yes, we are serving a customer and have the processes in place that bring more confidence to the customers and individuals we are serving.


Debbie Reynolds  41:50

That's a great list. Well, you definitely have your head on right; you're very level-headed and pragmatic about the way you approach privacy. And I'm really happy that you were able to be on the show. You're great. Thank you so much; it's been a great episode. And yeah, I'm looking forward to us being able to collaborate again sometime.


Vivek Kumar  42:16

Thank you. It was nice to be part of this session today, Debbie.


Debbie Reynolds  42:22

Yeah, definitely, definitely. I'll talk to you soon. Thank you.