Debbie Reynolds “The Data Diva” talks to Prashant Mahajan, Founder & CTO of Privado. We discuss the company’s growth and its commitment to helping organizations prevent privacy challenges at the code level, the problems Privado solves for organizations, how to get companies to be proactive about privacy, the conflict between data uses and compliance, addressing speed challenges in operations, why the majority of data privacy issues are missed due to manual processes, how privacy changes affect AdTech and his company’s ability to help, the ad industry today and its need to change course, how Privado makes the developer’s life easier, what he sees as future issues, why harm is irreversible and thus worth preventing, his clients’ statements of the benefits to them, how companies benefit from Data Privacy automation and transparency, why relying on memory brings risk to organizations, and his hope for Data Privacy in the future.
Support the show
data, companies, privacy, happening, organizations, processes, engineering team, metaverse, customers, compliance, collecting, teams, people, tools, developer, device id, debbie, business, code, audit trail
Debbie Reynolds, Prashant Mahajan
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
Debbie Reynolds 00:15
Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show today, Prashant Mahajan. He is the Co-Founder and CTO of Privado, a Data Privacy tech company. He's an AdTech veteran who has been an engineer for over 15 years and has experience in building and scaling systems. He's also a frequent Data Privacy conference speaker. I also want to say a bit about Privado. Privado bridges the gap between privacy and engineering by giving privacy teams real-time visibility into engineering systems. Privado helps make privacy proactive by detecting privacy issues even before software changes or new products are shipped. So thank you so much, Prashant, for being on the show.
Prashant Mahajan 01:23
Yeah, Debbie, thank you so much for having me. I'm the Co-founder and CTO. And you are one of the few experts that we reached out to when we started this company. And your inputs have really helped us shape the journey so far. I am very excited to be on your podcast and looking forward to our discussion.
Debbie Reynolds 01:43
It's been a thrill for me to see all the progress that you've made, Prashant, with Privado. And I also want to congratulate you on your recent Series A funding announcement. So can you tell us a bit more about that?
Prashant Mahajan 01:59
Yeah, it is a very exciting time for us. We are backed by investors like Sequoia and Insight Partners. And it's really a validation of our team and the progress we have made so far. And this funding will really allow us to invest more aggressively in r&d, build more open source tools for our community, and really expand into sales and marketing. And our goal has been from day one to bring happiness into the lives of privacy and data teams.
Debbie Reynolds 02:35
Excellent. Excellent. So tell me your story in terms of why you feel like privacy is an area that you wanted to go into. And what problems do you feel like your company is solving for organizations?
Prashant Mahajan 02:56
Yeah, so I'll just briefly talk about how I got into privacy. Before founding Privado, I was a founding engineer at an advertising technology company, where we leveraged personal data to show relevant ads to users. I was there for 14 years, and I built the infrastructure to serve billions of ads across six petabytes of data. And I find it very ironic and funny that once upon a time, I was using personal data to make users click on ads, and here I am on the extreme other end, asking people to minimize the use of personal data. So I've come a long way. And what really changed is when GDPR came in 2018; you know that the advertising business is all about data, right? So we had to form a team of 50 engineers and reorient our production processes and tools to put appropriate privacy controls in place. And that's when it really dawned on me that these privacy regulations and laws are going to have a huge impact on the way tech companies are collecting, processing, and applying data to solve for different use cases. So I started digging deeper, and in the process, I realized how intrusive these data collection practices were for consumers. And that really inspired me to be part of something where I can help businesses do the right thing for their customers.
Debbie Reynolds 04:28
You're very smart to have figured that out. I think before GDPR came out, a lot of companies weren't really thinking about this, even though the GDPR isn't that different from the previous data directive. But we know that because the GDPR has these really strong fines, it's become a C-Suite issue for companies to figure out the best way to solve their privacy issues. So tell me about how you approach companies to get them to understand why they need to be proactive rather than reactive when dealing with privacy.
Prashant Mahajan 05:13
Yeah, yeah, sure. So I will first take a stab at what problem we are solving, and then we'll get into why a proactive approach makes sense, right? And I'm going to demonstrate it to you live. I'm going to ask you a question, and you need to tell me what you have understood. Are you ready for this, Debbie?
Debbie Reynolds 05:40
Oh my goodness, a Q&A. Okay. Yes, yes, I'm ready.
Prashant Mahajan 05:44
Yeah. So Debbie, (Prashant speaking in Hindi). So I just spoke in Hindi, my native language. And what I said was, “Debbie, you're very smart, but you're not going to understand a thing because you don't speak Hindi”.
Debbie Reynolds 06:11
Prashant Mahajan 06:12
So when you send privacy assessments to engineers in legal language, this is exactly what happens. They don't really understand anything, because whatever tools were built four or five years back were built for lawyers. So the language is legal, and the whole experience is optimized for lawyers. And when you use the same tools to communicate with engineering, that creates a problem: engineers don't really understand legal language. And then you have a problem; if your compliance work is getting stuck with engineering, it is because you're speaking to them in a language that they don't understand. That's exactly the problem we are solving at Privado. And now, I'll tell you why a proactive approach makes sense. Digitization has accelerated a lot in the last couple of years. Every company is now a tech and data company, and, as they say, data is the new oil, right? So a large part of your workforce is on the technology team. And inside this technology team, there are various other teams, such as front end, backend, data analytics, machine learning, and Big Data. These smaller teams are empowered and autonomous. What that means is that they can make their own decisions: they can choose their own tools, their own languages, their own processes, and their own third-party software. As a result, in any organization, you end up with a very diverse tech stack, comprising tens of languages, several different cloud-native and on-prem data stores, and thousands of third-party libraries. And these teams are continuously pushing code to production on a weekly or bi-weekly cadence. Often, they end up pushing changes that are incompatible with what you have stated in your privacy policies and data processing agreements. And if you look at the privacy, compliance, and risk functions today, they sit outside of this engineering work.
And that's a big problem; they cannot really do their jobs effectively. That's why you need to start shifting privacy left: go where developers are, integrate with their core workflows, such as CI/CD and pull requests, and continuously scan the code so that you get that much-required visibility. Then you can collaborate more efficiently with the engineering teams to take care of two things. The first is identifying and mitigating privacy risks that are in the code, which you can tackle before your software reaches customers. The second is automating your compliance work, which involves generating your data maps, data flows, RoPAs, and Article 30 reports. And that's exactly the platform that we have built at Privado.
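To make "continuously scan the code" concrete, here is a toy sketch of what a shift-left privacy check over a pull-request diff might look like. This is purely illustrative and not Privado's actual analysis; the keyword patterns and the unified-diff assumption are ours.

```python
import re

# Toy illustration of a shift-left privacy check: scan the lines a
# pull request adds for identifiers that suggest new personal-data
# collection, so reviewers are alerted before the change ships.
PII_PATTERNS = {
    "device identifier": re.compile(r"device_?id", re.IGNORECASE),
    "geolocation": re.compile(r"geo_?location|latitude|longitude", re.IGNORECASE),
    "email address": re.compile(r"email", re.IGNORECASE),
}

def scan_diff(diff_text: str) -> list:
    """Return (line_number, category) findings for added lines in a unified diff."""
    findings = []
    for number, line in enumerate(diff_text.splitlines(), start=1):
        # Only inspect lines the pull request adds; skip the "+++" file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for category, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                findings.append((number, category))
    return findings
```

A real tool would of course trace data flow rather than match keywords, but even this sketch shows the idea: the check runs where the developer already works, on every change, instead of in a yearly questionnaire.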
Debbie Reynolds 09:21
Yeah, I'd love to talk with you about when you engage with a company, what types of friction do you end up seeing in terms of the friction between technology and the actual data, operational data uses that are happening in the organization, and the legal compliance part? Tell me a little bit about that.
Prashant Mahajan 09:48
So one of the most common use cases: organizations want to generate their RoPA reports, and they struggle a lot. Take the example that I gave, right? You send assessments using tools that were built four or five years back, which use legal language, and your engineers don't really understand that language. So that's the first problem. And then there's the frequency of it, right? You conduct that exercise once a year. It takes you forever to get it done. And even if you get it done, it is not up to date, because engineering teams are constantly pushing code to production. They are working in an agile fashion, and they are a very diverse set of teams, huge teams, continuously coding and putting changes into production. So organizations are not able to keep their processing records up to date, and they're also not able to tackle the privacy threats. These are the two problems that organizations come to us with. And I can give you an example from one of our customers: they were trying to do a data map, and they struggled for a whole year. They couldn't get one response from engineering because they were talking in a language the engineers didn't understand. When they started using Privado, within two months they could get their whole report done, and they started engaging more proactively with engineering to ensure that things are implemented in a privacy-preserving manner.
Debbie Reynolds 11:30
Yeah, let's talk about speed, as you mentioned; I think it comes up a lot. A lot of times, when organizations are dealing with these privacy issues, whether in the technology or when they're pushing out updates and so on, it creates a huge slowdown or bottleneck in their ability to push out products to serve their customers. So I think this is a real problem, not just from a legal and compliance point of view; it creates a huge bottleneck for companies trying to push out updates and products in a timely manner. What are your thoughts?
Prashant Mahajan 12:15
Exactly, I think you're spot on when you say that. If you look at the current tooling, there is some tooling available that can do data discovery and cataloging. These are very efficient tools that solve real problems, but the challenge is that they are very reactive in nature, right? The data has already been collected, processed, and used to solve for various use cases, and your developers continue to access that data through the code. So that's one part of it: you are doing things when something has already happened, when data has already been collected. The second part is during the development cycle. You have some processes wherein you might engage with the engineering team to do design reviews, or some organizations have a very top-down PIA process where you're relying on the judgment of people: you expect people to do the right thing, you expect your product and technology teams to come to you for reviews, and you expect this whole process to work very smoothly. But in reality, it often falls short. I'll give you an example, right? Assume that your engineering team is implementing a rate-limiting feature. Rate limiting is a feature you use to protect your infrastructure against an attack that wants to bring your servers down. From a remote machine, I can fire millions of requests, exhaust your server resources, and bring it down. A very simple thing, right? And how do you circumvent this? You basically keep a map of the device ID and IP address against the number of requests originating from that machine, and if it crosses a certain threshold, you start blocking those requests. A simple change, right? So what if your engineering team now starts collecting the device ID? And let's assume that they were also collecting geolocation as part of the same API. With geolocation and device ID together, you can now precisely track the movement of the user, right?
So it has a huge privacy implication. What if your engineering team does not know that this trivial change of theirs poses such a privacy risk? What if they decide not to file a DPIA? What if they file a DPIA but forget to mention that they have started collecting the device ID? How do you ensure that your processes are working smoothly? That's where you need automation, so that you can proactively engage with the engineering teams and intervene at the right time, when the cost of fixing is very low. And it also helps the engineering team. Going back to your question: a lot of the time, these last-minute surprises create a lot of chaos, where you are engaging engineering leadership and product leadership, and you're having discussions about whether to allow the release or not. This heartburn can be avoided if you take a more proactive approach and use tools like our platform, where you're continuously scanning the developers' work and engaging them at the right touchpoints.
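The rate-limiting scheme described in this answer, counting requests per device ID and IP and blocking above a threshold, can be sketched roughly as follows. This is a simplified in-memory illustration, not any production implementation; the window and threshold values are made up.

```python
import time
from collections import defaultdict
from typing import Optional

# Illustration of the rate limiter described above: count requests
# per (device ID, IP address) key and block a key once it exceeds a
# threshold within a sliding time window.
THRESHOLD = 100       # max requests allowed per window (made-up value)
WINDOW_SECONDS = 60   # length of the sliding window (made-up value)

request_log = defaultdict(list)  # (device_id, ip) -> list of request timestamps

def allow_request(device_id: str, ip: str, now: Optional[float] = None) -> bool:
    """Return True if the request is allowed, False if rate-limited."""
    now = time.time() if now is None else now
    key = (device_id, ip)
    # Drop timestamps that have fallen outside the sliding window.
    request_log[key] = [t for t in request_log[key] if now - t < WINDOW_SECONDS]
    if len(request_log[key]) >= THRESHOLD:
        return False  # over the threshold: block this client
    request_log[key].append(now)
    return True
```

Note that even this tiny sketch quietly persists a device ID, which is exactly the kind of incidental new data collection the speaker is saying a privacy review should catch before the change ships.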
Debbie Reynolds 15:44
My experience is that companies do, for example, records of processing activities, or RoPAs, manually; then, when they have the opportunity to engage a company or use a tool, they find out they missed the majority of the data processing. That happens because, a lot of times, depending on who is doing these processes, they don't understand the technology and they don't understand all the different data flows in their organization. So tell me a little bit about that: a company that previously tried to do this manually, and how much it helped them to have a tool that creates some automation and also helps them see what's actually there versus what they think is there.
Prashant Mahajan 16:38
Yeah, I think that's an excellent point, Debbie. What we are seeing with our customers is 40% discrepancies between what Privado has reported and their existing RoPA inputs. 40%, can you imagine? That's a lot. And the important reason is that with their existing data collection, they're relying on human processes: they go and talk to product leaders and engineers, ask these questions, and then rely again on human judgment and recollection of what data is actually being processed. It is practically impossible to scale that and to ensure that everything is up to date. So that's what we are seeing: at least a 30 to 40% discrepancy between the existing reports our customers had and what we reported once they started using Privado.
Debbie Reynolds 17:40
You had mentioned that you have experience in the ad tech industry, in terms of how people advertise. Now we're seeing so many changes, not just GDPR, but also in the US and other countries, where they're trying to limit the data that third parties can get about people; even Apple, with their App Tracking Transparency. So I think that at this time, we're going to start to see a lot more companies rely on organizations like yours that help them at the front end, because a lot of times, companies will end up in a lot of trouble if they just do business as usual. Business as usual, in terms of collecting all this data, sharing it, doing different things with it: those times are ending now. So I think organizations, especially in the ad tech industry, really need to take a deeper look at their data stack and the types of things they're doing with it, because the things they're accustomed to doing may not be legal going forward. What are your thoughts about that?
Prashant Mahajan 19:05
Absolutely, I think the ad tech industry is going to go through turmoil. Their whole business model is being questioned and is under threat from various developments. There are privacy regulations coming up all over the globe. And then there is platform pressure: Google is getting rid of third-party cookies, and Apple has made changes that restrict the passing of user IDs, right? So these are turbulent times for ad tech. It's going to be increasingly challenging for them to adapt to the changes that are happening, and increasingly difficult to justify the overuse of data in retargeting users, so they will have to evolve their business models. Some changes have already started happening, where ad tech companies are now tightly integrating with publishers. Rather than relying on third-party data, they've basically started partnering with publishers to activate first-party data, so that it remains confined within the boundaries of the publisher and is used in the context in which consent was given. So those changes have started happening, but it will be interesting to see how it unfolds, because there are changes involved across the entire stack: from publishers to sell-side platforms to demand-side platforms, ad servers, ad agencies, and advertisers, right? There is a whole chain involved. It will be interesting to see how this unfolds.
Debbie Reynolds 21:05
I agree with that. I think the ad industry, in my view, underestimates how much it has to change. These are not small, insignificant changes; these are going to be foundational, industry-wide changes in terms of how they deal with the future and handle data. So tell me a little bit about how you make the developer's life easier, because I'm sure there's frustration, and there can be friction, when they are asked to do certain things, or when they don't understand the legal ramifications of changes they're making while they're actively developing code and creating products. So tell me about that.
Prashant Mahajan 21:54
So developers' goals are aligned with the business goals: they need to deliver quality code on time to meet the business goals. And they're hardworking people, just like all of us; nobody wants to ship code with issues and security vulnerabilities. Their challenge is that time is finite, and your compliance and risk requirements are competing with business requirements at the same time. So what you really need is very lightweight tooling that allows developers to take care of this peripheral work without really taxing the speed of delivering business value. If you look at security, they have done this exceptionally well, and that's exactly what we are doing with our analysis: developers get the whole context of where the changes are, and they can tackle those problems then and there. I think that's very important. Our focus has been to build a world-class developer experience; we've built a product that helps reduce friction between the compliance and development teams.
Debbie Reynolds 23:13
What is happening in privacy and data right now, or what do you see coming in the future, that makes you say, wow, this is going to be a big issue? What are your thoughts about that?
Prashant Mahajan 23:28
I think the Metaverse is going to be very, very interesting. It increases the number of data points multi-fold, by maybe a thousand times, right? You're monitoring a lot of movements: your location, your gestures, your body movement. So you're ingesting a lot of data, and that's going to create a whole set of other problems, because I can easily correlate all those data points and identify a user. The correlatability is very high in that case, and that poses a significant risk. So I feel that the Metaverse is going to be probably a hundred times more complex when it comes to respecting users and giving those controls back to them over what is being collected and how it is being used and shared.
Debbie Reynolds 24:44
Basically, I think I agree with you that shifting privacy left, focusing on trying to prevent harm, is really important. So when you're working with teams, you're trying to help companies shift the privacy focus left to eliminate the privacy issues companies may have before they become a crisis, right? I think that's something really vital, especially as companies look to move into the Metaverse and these different technologies that, as you say, will collect even more data and be more complex. Do you agree?
Prashant Mahajan 25:34
Yes, I do. Things are going to get a lot more complex as the Metaverse evolves. There are different dynamics involved, and the sooner we take a proactive approach, the better. In the case of the Metaverse, it is really good to see that there are some proactive efforts being made to define standards from the beginning, and that is really going to help us take a proactive approach. That's where I feel that doing things before they hit your customers, before something goes live in production, is so important: do things on the left-hand side of the cycle, when the software is being developed, right? That's the right time to intervene, engage with the right stakeholders, and ensure that appropriate controls are in place and the right thing is done for your consumers. And the cost of doing that at those touchpoints is very low compared to doing it when something has already happened and you are in crisis mode; then it becomes a hundred times more difficult to get things under control.
Debbie Reynolds 26:52
You know, tell me a little bit about the stories your clients tell about how using your technology has helped them not only streamline their operations but also gather more trust from their customers when they use these types of products.
Prashant Mahajan 27:19
Yes, Debbie. By tackling privacy proactively through code scanning, you are also able to address some of your SOC and other compliance-related controls, and that is reflected in the certifications that you obtain. So that builds credibility and trust in the minds of the consumer. That's one way it is helping. The second way: as you take a more proactive approach and address all the risk and compliance work before something goes to production, that again translates into the customer experience, right? When your users are registering for your service, they can see how your consent is implemented. Are you following any dark patterns? Those are the signs that customers are looking for when it comes to evaluating a company from a security and privacy standpoint. This proactive approach definitely helps, because all of this is reflected in the certifications that you seek, HIPAA and other kinds, and also in the final product that your customers are using.
Debbie Reynolds 28:48
Also, I think it helps a lot with transparency. Based on what you and I are both seeing in the marketplace, when companies try to do a RoPA or track what they're doing on a manual basis, they're missing so much that if something happens with a customer, the customer may say, well, we don't trust you because we feel like you're hiding things. And it may just be that the person doing these processes manually doesn't have visibility into what's happening in the background. So it just makes the company look bad, because you're scrambling, trying to ask people these questions. So I think being able to bring a new level of transparency, where companies aren't caught out, where they have a full and deep understanding of what's happening with data within their organization, means they can, in turn, be more transparent with regulators and with customers about how they're handling data. And I think that definitely builds trust. What are your thoughts?
Prashant Mahajan 30:00
Debbie Reynolds 31:25
I will say probably one of the messages here is that companies shouldn't try to rely on the memory of people within an organization about how data is handled, because memories can be faulty, and not everyone understands everything that's happening with data. So being able to have tangible evidence, so to speak, about what's actually happening in an organization really greatly reduces the risk. And I agree with you about this dating app and this other app that said they weren't collecting this data but actually were; those are situations where you can tell that the compliance part of the organization is probably disconnected from its operational portions. So having more tangible evidence about how you're handling data greatly reduces that risk for companies. I don't think anyone was doing that on purpose; based on their best efforts, they probably thought, okay, we have everything covered. But then, as you say, they get found out by a customer, because you can't cover up your digital footprints and the types of things that are happening in that data flow. So what are your thoughts about that?
Prashant Mahajan 32:54
Yeah, you're right. I think auditing, keeping an audit trail of what changes have happened to the system, how they happened, who approved them, and what kind of action was taken, is very, very important. And it goes back to the question you asked: how can organizations build trust? I feel that's an important use case that we saw, because using our system, you have a detailed audit trail of what changes were made, when, who made them, and why, right? Everything is well documented, and the actions you have taken are also well documented. If tomorrow some agency comes, if there is some incident and they start asking you questions, you have proof that you are using automation and proactively doing things for your customers, and you can show that audit trail and build your case.
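A minimal sketch of the kind of audit trail entry described here might look like the following. This is illustrative only; the field names are our assumptions, not Privado's schema.

```python
import json
from datetime import datetime, timezone

# Illustrative append-only audit trail: each entry records what changed,
# who made and approved it, when, why, and what remediation was taken,
# so the history can later be shown to an auditor or regulator.
audit_trail = []

def record_change(what: str, who: str, approved_by: str, why: str, action: str) -> dict:
    """Append one immutable entry describing a change and its approval."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "what": what,
        "who": who,
        "approved_by": approved_by,
        "why": why,
        "action_taken": action,
    }
    audit_trail.append(entry)  # append-only: existing entries are never edited
    return entry

def export_trail() -> str:
    """Serialize the full trail, e.g. to hand to an auditor."""
    return json.dumps(audit_trail, indent=2)
```

The point of the design is the one made in the answer: each record answers what changed, when, who made it, who approved it, and why, and the trail is only ever appended to, so it can serve as evidence after the fact.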
Debbie Reynolds 34:00
That's very important; I hope a lot of people heed that advice. So if it were the world according to you, Prashant, and we did everything you said, what would be your wish for privacy anywhere in the world, whether in technology, regulation, or human behavior? What are your thoughts?
Prashant Mahajan 34:22
Yeah, interesting question. I think there is a growing influence of algorithms and machine learning on our lives: our political opinions, the content that we consume, what we like. It's like our lives are really dictated by the algorithms, right? And not a lot of people realize this. It's a very huge problem. There are some efforts, a start at tackling this problem, but I feel we have a long way to go. We need very strong regulations that will give those controls back to the user. I want to basically dictate, or decide, what algorithmic decisions I want to be part of. So I feel that is one area that really bothers and concerns me: you don't know how algorithms are manipulating your life. And this is really well captured in the Black Mirror series. Have you seen it?
Debbie Reynolds 35:33
No, I haven't; I've heard about it, though. Very, very interesting. I agree. For me, I prefer to have a life unfiltered, right? I want to be able to see all the choices and make my own choices. But with algorithms and AI, there is personalization: basically, what they're doing is suggesting things to you, and they're only showing you what they want you to see. So that could be a form of manipulation, where maybe you would have made a different choice, right?
Prashant Mahajan 36:03
Debbie Reynolds 36:08
Yeah. Very cool. Well, this is great. I'm so glad that you have been on the show. I'm really proud of you guys and the stuff that you're working on. And, you know, I'm glad investors agree with me that you're doing a great job over there. I'm happy to see that we can collaborate in the future, and this is great. Thank you so much for coming on the show.
Prashant Mahajan 33:34
Absolutely. I was honored to be on the show, and I really enjoyed the conversation.