"The Data Diva" Talks Privacy Podcast
The Debbie Reynolds "The Data Diva" Talks Privacy podcast features thought-provoking discussions with global leaders on data privacy challenges affecting businesses. This podcast delves into emerging technologies, international laws and regulations, data ethics, individual privacy rights, and future trends. With listeners in over 100 countries, we offer valuable insights for anyone interested in navigating the evolving data privacy landscape.
Did you know that "The Data Diva" Talks Privacy podcast has over 480,000 downloads, listeners in 121 countries and 2,407 cities, and is ranked globally in the top 2% of podcasts? Here are some of our podcast awards and statistics:
- #1 Data Privacy Podcast Worldwide 2024 (Privacy Plan)
- The 10 Best Data Privacy Podcasts In The Digital Space 2024 (bCast)
- Best Data Privacy Podcasts 2024 (Player FM)
- Best Data Privacy Podcasts Top Shows of 2024 (Goodpods)
- Best Privacy and Data Protection Podcasts of 2024 (Termageddon)
- Top 40 Data Security Podcasts You Must Follow 2024 (Feedspot)
- 12 Best Privacy Podcasts for 2023 (RadarFirst)
- 14 Best Privacy Podcasts To Listen To In This Digital Age 2023 (bCast)
- Top 10 Data Privacy Podcasts 2022 (DataTechvibe)
- 20 Best Data Rights Podcasts of 2021 (Threat Technology Magazine)
- 20 Best European Law Podcasts of 2021 (Welp Magazine)
- 20 Best Data Privacy Rights & Data Protection Podcast of 2021 (Welp Magazine)
- 20 Best Data Breach Podcasts of 2021 (Threat Technology Magazine)
- Top 5 Best Privacy Podcasts 2021 (Podchaser)
Business Audience Demographics
- 34% Data Privacy decision-makers (CXO)
- 24% Cybersecurity decision-makers (CXO)
- 19% Privacy Tech / emerging Tech companies
- 17% Investor Groups (Private Equity, Venture Capital, etc.)
- 6% Media / Press / Regulators / Academics
Reach Statistics
- Podcast listeners in 121+ countries and 2,641+ cities around the world
- Over 468,000 downloads globally
- Top 5% of 3 million+ globally ranked podcasts of 2024 (ListenNotes)
- Top 50 Peak in Business and Management 2024 (Apple Podcasts)
- Top 5% in weekly podcast downloads 2024 (The Podcast Host)
- 3,038 average 30-day podcast downloads per episode
- 5,000 to 11,500 average monthly LinkedIn podcast post impressions
- 13,800+ monthly Data Privacy Advantage Newsletter subscribers
Debbie Reynolds, "The Data Diva," has made a name for herself as a leading voice in the world of Data Privacy and Emerging Technology with a focus on industries such as AdTech, FinTech, EdTech, Biometrics, Internet of Things (IoT), Artificial Intelligence (AI), Smart Manufacturing, Smart Cities, Privacy Tech, Smartphones, and Mobile App development. With over 20 years of experience in Emerging Technologies, Debbie has established herself as a trusted advisor and thought leader, helping organizations navigate the complex landscape of Data Privacy and Data Protection. As the CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC, Debbie brings a unique combination of technical expertise, business acumen, and passionate advocacy to her work.
Visit our website to learn more: https://www.debbiereynoldsconsulting.com/
"The Data Diva" Talks Privacy Podcast
The Data Diva E24 - Kavya Pearlman and Debbie Reynolds
Debbie Reynolds, "The Data Diva,” talks to Kavya Pearlman, Founder, and CEO of XRSI, a Cyber Safety organization dedicated to Virtual Reality, Mixed Reality, Augmented Reality, etc. We discuss the XRSI Privacy Framework as it relates to Virtual and Mixed Reality, the ability to discuss Privacy with people at different tech levels, the different skill sets needed in Privacy in various applications, diagnose and articulate Privacy issues where no law exists, consumer association of VR with gaming, how new technologies will get people’s attention, the possibility of immediate harm from VR and other technologies, the difficulty of being an informed consumer, XRSI Privacy Framework, and her idea for data privacy in the future.
Kavya Pearlman
42:23
SUMMARY KEYWORDS
people, data, privacy, framework, technologies, XR, reality, VR, inferences, create, consumers, virtual reality, safety, unintended consequences, important
SPEAKERS
Debbie Reynolds, Kavya Pearlman
Disclaimer 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
Debbie Reynolds 00:11
Hello, my name is Debbie Reynolds. They call me "The Data Diva." This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest here with me today. It took us quite a while to get this interview going, but I'm so happy that we were able to connect. I have Kavya Pearlman, who is the founder and CEO of XRSI, the XR Safety Initiative. XRSI has to do with all types of reality, right? Virtual reality, augmented reality, mixed reality, any type of reality that we can think about. In addition to that, Kavya is leading this particular initiative, which is fascinating, and she was kind enough to invite me to participate in it. I thought it was just fascinating, and I'd love to talk about it. So this is a privacy framework for XR, all types of reality. But I would love for Kavya to introduce herself and tell us a little bit about this initiative and her work in this space in terms of technology, safety, and privacy.
Kavya Pearlman 01:34
Thank you, Debbie, for having me over. I know it's taken us a while to have this conversation, but it's a good thing that we are working so closely on the privacy and safety framework, and I'm really honored to be working so closely with you on this effort. About me, where should I start? Well, let's talk about my tech journey, because oftentimes I get comments and feedback that people find it very inspiring, and I hope it helps other people feel enabled and empowered to do what I did: I had the audacity to transition my career from hairstylist to Cybersecurity professional. As I did that, I was head of security for a virtual reality company, for the oldest existing virtual world, Second Life. Then I got this idea that all these unintended consequences are coming for the industry, and I started this XR Safety Initiative. But just a little bit of going back: in 2007, I had moved to the United States, and I was like, hey, this is a free country, you can be anything that you want to be. So I decided to be a cosmetologist and was working as a hairstylist for about four or five years, until one day I read this book called "Cyber War" and simply decided that this is my calling, this is my journey, and I enrolled in a master's in Network Security. After I graduated, I moved to California and worked for a corporate immigration law firm for about a year. In about 2016, I found myself working for Facebook doing third-party security during a very interesting time, which was the US presidential election of 2016. That gave me a sort of knowledge of how to onboard all sorts of flavors and types of technologies as a third-party security adviser. Then I moved to this virtual reality and virtual world company, Linden, that made the oldest existing virtual world, where I was protecting two virtual economies and doing some compliance work. So this has been quite a journey. And I'm really glad that this whole notion of an XR safety initiative came to me. In fact, I was looking for somebody who was doing similar work, and that's when I found Ibrahim Baggili, now our senior advisor, who was conducting research and had discovered novel cyber attacks in VR. So together we started to work towards this initiative, and now we are building standards and putting guidelines together. There have been many, many projects that have stemmed from it, and there are multiple programs. So I'm just honored and privileged to be sitting here near Berkeley, California, where the internet was born, and now I get to proactively propose what could be done better, based on what we did before with the making of the internet.
Debbie Reynolds 04:56
Wow, that is amazing. I did not know any of that, so I am definitely a fan of you and what you've done. One thing that I can say about you is that there are very few people like you, I will say, that I know. There's something you can do that very few people can; I could probably count five people who can do what you're doing right now. You have a deep understanding of technology, you know how to talk to people at different levels, and you also know how to organize or orchestrate or operationalize things, and most people just don't have all of that. That's like five people right there. So you're kind of like five different, highly skilled people rolled up into one. And believe me, I know, because I work with people all over the world, in different types of industries, and you just really stand out.
Kavya Pearlman 05:59
So thank you so much. That's very validating and empowering to hear from you. But I have to say, I have this other skill, which is probably the skill of finding people like you. As you know, I'm kind of like a pest; I found you, and I was like, no, I really need you. So I have this ability to bring these brilliant people together, and they are the ones that help me excel and help enable the missions, like proactively discovering some problems that people may not even think about for the next two years. But then the two years pass by, and they're like, oh my God, this is a problem. And just like that, COVID happened, and now everybody's moving to VR. So we are already geared up and preparing for what the unintended consequences are, what the bad thing is that could come along. So yeah, thank you so much for validating that.
Debbie Reynolds 06:56
That's amazing. And you're doing something else that is very important for privacy folks to really understand, and that is, we have to break down those silos. All of us have our different skill sets and things like that, but in terms of privacy, in order for us to really be able to solve or face this challenge, we have to break down those silos and reach across the table to people that maybe we would not otherwise even have been in contact with. And I feel like you're creating that environment with this project, which is very important.
Kavya Pearlman 07:35
Yeah, absolutely. And oftentimes we hear this whole idea of multidisciplinary, or multifaceted, or multi-skilled, but I really took to it. Because if you look at extending reality, which is the notion of using technologies to extend your perceived reality by means of head-mounted display goggles, and pretty soon augmented reality glasses that could look very similar to traditional eyewear, we are extending our perceived reality into a new realm, where you could have this other software-coded, data-represented, visualized version of reality. When you do that, you're practically inventing a whole new world, and it is influencing every domain: how we travel, how we resume education, how we learn, remote work, how we conduct surgery or train clinicians. There is a whole gamut of domains impacted by this phenomenon of extending realities. So it's really important that all of those people pay attention to the privacy impact, because every domain is going to experience it differently. For example, in the medical XR domain, you may not be super concerned about people's data being available and open, because you need that data for diagnosis. You need all of this data. So how do we create those boundaries in different domains? It absolutely requires all these multidisciplinary thought processes, bringing in medical or education or all these experts. I'm really glad that people have started to notice this effort, because it is very important and imperative that we take a proactive stance; once these issues are baked into some of these products, it is going to be impossible to undo them. Especially when you talk about the cross-section of artificial intelligence and the digitization of all of our movements, gaze, poses, all this data: once it is digitized, once it's in touch with the algorithm, it is almost impossible to rectify the unintended consequences, the biases that go in. So yeah, I'm really honored that all of this is coming together and that we're able to be a little bit proactive about this.
Debbie Reynolds 10:19
I agree. A lot of the things being created with technology right now, there really aren't even laws for this stuff. So being able to articulate what things do, why they do what they do, and what's important matters. Just like you said about medical: in some applications it may be important to delete the data right away, but in a diagnostic situation you may have to keep the data over time, because there has to be a comparative analysis there. So thinking about it in that way, I think, is really interesting. When I think about people who aren't in this realm, just consumers in general, when they think of virtual reality, they automatically think of gaming, right? They think of the kind of goggles that maybe their kid or someone uses to play video games. But virtual reality has so many different applications that I feel like people aren't really thinking about it in the right ways. What's your thought about that?
Kavya Pearlman 11:30
Yeah, you're right. Most people, for whatever reason, whether it's lack of exposure, lack of availability, or just lack of desire to really venture out of their own comfort zone, think that avatars and this VR or AR are only related to gaming, that it's all about gaming. But just this morning, in a VR environment, I was interviewing Valentino McGill, who is conducting therapy using VR, and we were literally just talking about what developmental issues could emerge from exposing children to VR at an early age and what research needs to happen. So virtual reality and augmented reality, this whole XR domain, is way beyond gaming. Imagine you could literally see an apartment that you want to purchase, along with the light fixtures, the lighting, and everything, by visiting it virtually instead of taking a trip to LA, for example. Or my mom, who is in India, could potentially see my house or my environment just using VR and feel like she's actually here. So we are moving towards this idea of teleportation, not transportation. This was something Zuckerberg said, and it was one time I would agree with him: we should all be teleporting and not transporting, because it also eliminates the carbon footprint. It speaks to a lot, especially now with COVID, when we have this necessity to be together but still be apart. So VR plays a huge role, and COVID is acting as a catalyzing event for the adoption of this technology. What I encourage people to do is, if you stumble upon this acronym, XR, extended reality or simulated reality or virtual reality, dig deeper, because there are lots of opportunities and a lot of challenges coming along. And not just because of virtual reality; there is a phenomenon of convergence. This technology is converging with lots of different technologies, such as artificial intelligence, or 5G for network infrastructure, or brain-computer interfaces for providing input into these devices, creating multimodal input mechanisms and realities. The list is endless, robotics being one of them: you could potentially use virtual reality goggles to manipulate or move a forklift, or move a person who could potentially be in robotic form. These things are still converging, still on the horizon; not all of it is right here. But this is exactly where we've got to learn from history: we have to stay ahead of what is to come, because there is a notion of what we call the Collingridge dilemma, where once you have these massive changes baked in, they can be potentially irreversible, and it would be too late. So in this context, what XRSI recommends is doing recursive design, or smaller sprints, so that if there are irreversible, really bad consequences and you recognize them, you can backtrack, or you can upgrade, or fail forward, fail safe, that kind of thing. So those are things that are emerging, converging, and opening up so many opportunities, but at the same time bringing so many challenges, which we're advocating to stay ahead of.
Debbie Reynolds 15:21
Yeah, I feel like, you know, the past and the present have been about keeping people's attention.
Kavya Pearlman 15:29
Right.
Debbie Reynolds 15:30
You know, whether it's notifications or whatever else is getting your attention. But just talking to you and thinking about the technologies that are coming about, I think now our gaze is what is desired, right? What do you think about that?
Kavya Pearlman 15:49
Oh, yeah. I would point to my colleague Avi Bar-Zeev; I highly recommend following him on Twitter. He has written extensively about this; one of his articles was "The Eyes Are the Prize," about eye-tracking technology being the advertising holy grail. It's a good thing and a bad thing, because from the eyes, and from the sensors that collect the data of where you're positioned, like body motion data or pose data, what's happening now is that all of this data is digitized and recorded, and it's being shared with various different entities. So yes, this is a highly concerning scenario, because some inferences can be good, but some inferences can be kind of risky. I'll take an example: with the launch of version 1.0 of the XRSI Privacy Framework, we outlined this idea of biometrically inferred data. As you say, Debbie, gaze data can probably help attribute what you love, not just what you like: what do you love, what instances do you love. Perhaps a biometric identity, like a fingerprint, can be captured just from the way you move your head or your motion. People can contextualize your cultural background based on your voice imprint, or your cognitive processes, whether you have slept enough, the amount of cognitive load, or your skills and abilities. Fairly recently, there was research indicating that from typing patterns you can determine how fast or slow somebody's multiple sclerosis, MS, is progressing. So imagine we have this type of sophisticated metadata that can be attributed to all of these personality traits, or mental health, and a whole plethora of other inferences. What if that data is shared with Blue Cross Blue Shield or any other insurance provider, and they come to know proactively what's happening and deny you coverage? And it's not even "they"; perhaps there is an AI algorithm designed to keep track of it, to bring down costs or to stay ahead of increased costs. That's a problem. Likewise, if some of this data ends up in, let's say, a recruitment interview type of algorithm, you could be denied an interview based on something you don't even know about. You might be changing your lifestyle, such as with a pregnancy, and if somebody else knows first, you could be denied some of the opportunities. This is just the very basics of it. But when you talk about a large amount of data, we've already seen it; let me bring up the example of Cambridge Analytica. In 2016, during the presidential election, this third party of Facebook, Cambridge Analytica, used about 5,000 data points to micro-target American citizens. Each American citizen had 5,000 data points on them, and they were micro-targeted and influenced; there is tons of research already done showing there was a massive influence campaign, and it is seen and talked about as the biggest data scandal. Now compare that to research from 2019, from Stanford University, probably with the convergence of other technologies too: in 20 minutes in a VR simulation, you can record over 2 million unique recordings of body language. 5,000 data points did that.
Now we have this constant capture of our reality, which is going to be possible with these augmented reality glasses, and in VR you can record that many data points. So we really have to ask: what does the next Cambridge Analytica look like? Or is it already here? We just recently saw the day of the insurrection, and that had a huge component of all the data and social media biases. All of this is building towards the kind of massive event that could really diminish our trust in these technologies. And that's where I try to go: enable innovation, share the data, but share it in a way that incorporates and enables trust. That's how I'm going about solving it. And you're absolutely right about gaze, pose, and body movements. Now we are going to be talking about brain-computer interfaces, so you're talking about EKG and EEG, medical-related data that used to be limited to doctors, and now it's available to developers who are just thinking, hey, I'm going to develop an app for mental health, and now they have all of your biometrically inferred data. That's an issue that nobody, not even at the congressional level, actually understands. So we're still uncovering all these issues, and then we have to go all the way to the policy side so that someday we can have regulations that actually comprehend and talk about this type of data and put restrictions in place, so that vulnerable populations can benefit, or at least are not put at risk. All of these considerations are going into this framework, and hopefully we'll get somewhere with this.
Debbie Reynolds 21:41
Yeah, that's great. I'm so glad we got a chance to talk about that. It just occurred to me as you were talking, because I'm seeing that a lot of people want to use different biometric markers, especially retina scans; that's going to be the hot new thing. So being able to get that picture of someone's eye, and, you know, people's eyes change over time, obviously, too, so that may be an issue as well. When we think about things like virtual reality and augmented reality, because they are so attached to our being, our bodies, and what we're doing, there are many reasons why what you're doing is very important. But one of the big reasons why it's important to do it in a proactive stance is because the harm can be almost instant, right? It's not a situation where you can say, well, we'll just let the courts handle it; the harm has already happened, and the redress may not be adequate, you know what I mean, or timely enough to actually address the issue. So in some ways, the law isn't enough to protect people after the fact, because a lot of how we think about laws is: harm has happened, let's create a law, and then hopefully it won't happen again. But this is a situation, especially when you're dealing with someone's body and their life and their mode of living, where it's important to protect that at the onset and think about those things. What do you think makes it important for businesses to understand that? Maybe this is just a classic case where some people think privacy is kind of a tax on what they're doing, or an additional burden, where I think more proactively, as you do: let's prevent the problem, let's prevent the harm, let's think about the best way to do it, so that down the line you're not dealing with this on the back end. What are your thoughts?
Kavya Pearlman 24:01
Yeah, and you hit the nail on the head; prevent is the keyword here, and as you know, we're advocating about it very deliberately. There are certain shifts that need to happen in the thought processes around how we approach technologies. One of the things we need to understand is exactly what you said about prevent: we need to prevent harm before something is done, especially when you talk about children's brains or vulnerable populations, which include women, minorities, or even just a normal person who, by means of existing in a country where there are no human rights or very few of them, such as Iran or China, can really be put at risk. It could even be life-threatening, because perhaps their identities are revealed. So when it comes to proactive mechanisms, we have to really be proactive, and that's why we've deliberately made prevent a focus area, where you're not just protecting, you're actually preventing harm in terms of harassment. For example, we've never before seen any framework address harassment at the level of a safety effort or at the level of creating a framework. So harassment, bullying, content moderation: why do we do content moderation? Because we want to proactively get ahead of the problems that will stem from it. Identity is a huge issue, because some people believe that in VR or AR you can be whoever you want to be. But as I explained, biometrically inferred data can create an exact imprint, like a fingerprint. In fact, there was other research done by Stanford, I believe in 2020, yes, last year, where they sampled about 500 known identities. Within this subset of 500, when a subject was shown a 360-degree video, which is not your coded VR environment but still a 360-degree video of an environment, within five minutes of watching it, the researchers were able to establish exactly who this person was. That should tell you that the identities of individuals are highly at risk. So now comes the whole notion of, okay, we need to tell people: informing people, but with context, like, hey, you're in this particular context, and this is what's happening to your data; giving people a choice of what they can or cannot do with the data; giving them control over the data. All of this is part of the shift that we need to make.
One other shift in the thought process is about child safety. If you look at today's legal landscape for children's safety, COPPA is the law, and, say, GDPR says nobody below 13 or 16 should be there, or you should gather consent if they are. So what does that consent look like? People put a checkbox, hey, are you over 13, 16, or 18, or whatever age they decide based on the jurisdiction, but people just check the box and move on. And the next thing you see, there are so many children putting on these goggles and exposing themselves to these XR environments, where we don't have any guidelines or research as to what an appropriate age is. Even Facebook says 13 and above, but that's written in tiny print over there on the box. "13+" is not advocacy for children's safety. That is not explaining to parents that if you're putting your kids under 13 in these environments, you're practically experimenting with their brains. And I see educated people doing the same thing; I see people who don't know any better; and then there are some experts who basically go around saying, oh, it's okay to let kids into VRChat, which, to me, can sometimes be a little troll fest. Why would you want to expose your children to that? So child safety is one of the major concerns, and I anticipate that one of the very first laws we will see around virtual reality and augmented reality will be related to children's safety and to conducting more research on children's safety. Then there is the special data type consideration, as we talked about: gaze, pose, biometric inferences. On the regulation side, we're just stuck with PII, personally identifiable information, or personal data, or, at most, personal health information, PHI. But we're not thinking about all this other metadata that can actually put people at risk, that can produce all sorts of inferences. Perhaps some inferences are good, like shopping suggestions from Amazon or Facebook; that's okay. But to what extent do we allow all these medical and other inferences to go into these algorithms and be used against us? All of these key points have to change, and we have to shift our thinking, to go from fighting the fire to proactively preventing the harm. So hopefully we'll be successful in shifting this thought process, shifting the narrative, the way the industry pursues safety, privacy, and Cybersecurity.
Debbie Reynolds 30:00
Yeah, that's excellent. What do you think about the consumer side? I feel like there's a chasm, right? We have people creating these sophisticated technologies, and they want to get them into people's hands and into their homes. But I feel like people don't know how to protect themselves; they're not informed consumers, right? So let's say, for instance, the device does the three things that I wanted it to do, and I don't care about the other seven things it does. But maybe those other seven things, if you knew about them, you'd make a different decision, or you'd change the way that you use it. What are your thoughts about that?
Kavya Pearlman 30:39
Right, and it's a really, really good question. What about us as consumers? Should we be responsible for the decisions these companies are making, or should they be responsible for the decisions they're making? I remember the kickoff of this privacy and safety framework: my very first slide was "privacy is a shared responsibility." And that's what I would hammer on. From the consumer perspective, yes, you do want to trust these ecosystems. But I believe we are at a cross-section where we can no longer just blame everything on the companies. We have to own up to the responsibility: it's my responsibility to make sure that I'm not allowing my child under 13 to give away all this data. What is my responsibility when I'm inside these environments, or when I'm extending my reality? What about user interaction, how I'm interacting in the world and even outside? I mean, think about bystanders. Once people with augmented reality start to capture our reality, there are going to be people who are just around you, so what about their consent? So we have to advocate this notion of shared responsibility, because companies can give you these controls; that's the advocacy we do as well, like, hey, organizations, please give consumers these rights. But even if you have the right to delete, or the right to be forgotten or something, you have to actually exercise that right. So consumers have to be made aware of, first, the consequences, and second, the control mechanisms that are actually being built so that people can be a little bit more in control of their data. So I feel like there needs to be a little bit of awareness, and we are working on it: we have our media platform called Ready Hacker One, where we're trying to gear stories towards news that you can trust and benefit from, the knowledge that people in the XR domain have, and then share it. It will start with awareness. It should start with the awareness of consumers, so they become familiar with these new technologies, familiar with the convergences that are taking place and the impact they will have on consumers. Thereafter, once we have created some sort of awareness, we can go about creating adoption of different frameworks, and potentially enforcing these types of frameworks, so that, just like GDPR, the General Data Protection Regulation in the EU, we can say, hey, if you don't follow XYZ guidelines, which could potentially put people at risk, then you will be fined or reprimanded in some other way. So all of this is building towards what the consumer is going to get out of it, which is not just the gains but also a handle on a lot of the other unintended consequences. We just need to create awareness and then go after that.
Debbie Reynolds 34:12
Yeah. I also want to talk with you about the XRSI Privacy Framework, which we're working on together and which you invited me into; I'm really happy to join you in this effort. The interesting thing about the framework, which I think is different, and maybe I'm exaggerating in my own mind, is that a lot of times when people create frameworks, they create them and then don't do anything with them for a year, so they don't change for many, many years. So it's interesting to me that on the heels of releasing version 1.0 of the framework, you went directly into 1.1, which I think is brilliant. Tell me your thinking about that.
Kavya Pearlman 34:53
Right, and thank you for asking that question; it is a very significant effort. It's a community-driven, solution-oriented, very agile, and iterative thing that we are trying to create together with so many experts, including you, Debbie. This framework is free, globally accessible, and has a sort of layered structure. When I say layered structure: first, we identified the focus areas, four focus areas, in simple language so that even a CEO or a receptionist, anybody, can talk about it. Underneath them are various functions. In version 1.0, we had 14 functions; in 1.1, we could have more. The reason it needs to be iterative like this is that the technology is changing; we're still developing features. In this amazing virtual world, where you're using software as a building material, inventing touch, inventing senses, inventing hand tracking and eye tracking, and embedding them into different use cases, things are still evolving. So yes, we need a proactive framework, but it has to be iterative, so it can evolve with the technologies, because sometimes the risks will diminish, and sometimes we will inevitably have to encompass new risks. It also allows us to focus on things that are of the essence. For example, in the current effort, we are focusing on higher ed as well as the medical XR domain, because we feel this is where a lot of the industry leaders are looking for solutions. After the launch of the Quest 2 headset from Facebook, there were a lot of concerns from higher ed and researchers about privacy and about identity. And there is a regulation, FERPA; I forget the acronym, let me quickly look it up: the Family Educational Rights and Privacy Act. Its purpose is that you should not share students' data. But when you use this headset, there is the issue of Facebook identity, and that raises the question of where the data is going. Is it going to Facebook or their third parties? So all of these issues emerge, and some of the industry leaders I talked to really needed a solution. So we're like, okay, let's work on this. And then comes medical XR, where patient safety is primary. If we don't have any standards, guidelines, or framework, what we are doing is experimenting on patients using these massive, amazingly new technologies, and that should not happen. We should have some baseline measures. I'm aware that the FDA is working on things like that, but we still need supplemental guidance, and that's where XRSI fits in.
Debbie Reynolds 38:15
Yeah, that's excellent. That's excellent. Well, I'm proud to work with you on this. I obviously really love the way you're thinking and the way you're managing this, and I'm happy to be able to help you in this endeavor. The last question I always ask is: if it were the world according to Kavya, and we did whatever you said, what would be your wish for privacy or Data Privacy, either in the US or around the world?
Kavya Pearlman 38:46
So I think I can only give the answer for today; I don't know what the future looks like, and perhaps my answer would differ then. In an ideal world, given the kind of technologies we are building, we would be very clear on our expectations of privacy. Yes, we will have constant reality capture; yes, we will have tons of data going back and forth. But if we could have clear expectations, clear control, and clear accountability around privacy, we would solve a lot of problems and build a lot of trust. So companies being more accountable; people who are impacted by this technology, even if they're not using it, even if they're just standing to the side, knowing what to expect in terms of privacy from these technologies. Life would become simpler, and the mechanisms would be evident, because they'd know, hey, I'm standing here, and that means some of my data is going into this. And accountability means companies will give them the option to opt out, or not opt in, based on their reasonable expectations. So if I could really wave that magic wand, I would hope that all the regulators have good laws around privacy, that companies create reasonable privacy control mechanisms, and that people understand their expectations of privacy as the context changes. That would be an ideal world, at least at the moment.
Debbie Reynolds 40:27
That's actually really good, because I like the fact that you're talking about accountability of the individual as well, as opposed to just making it someone else's responsibility. So I agree that there is a shared responsibility, and I love the fact that everyone answers this question differently. Well, thank you so much for being on the show and for letting me interview you. I'm happy that we were able to get this arranged; I was happy to see this on my calendar. So thank you so much, Kavya.
Kavya Pearlman 40:59
Thank you so much. And if I may just say, I really appreciate you bringing in more people to get involved with this framework. I appreciate all the support that you have lent, and you're now even handling an entire section of the framework on compliance, which is remarkable. So thank you for leading this space together with me; I'm really happy that you've arrived in XR and are helping me out here. Anyone else wanting to get involved, please contact Debbie or me, or go to xrsi.org, or find us on LinkedIn and Twitter and just DM us. We are still working on this; it's an open project, and it will commence in May. We would love anyone who's interested to get involved.
Debbie Reynolds 41:46
That's amazing. Yeah, I would love that. So definitely, I'm trying to wrangle people, and anyone who's really interested in this space, I really want you to go to Kavya's website, xrsi.org, correct? Yes. Take a look at what's there and see if that's something you'd like to get involved in. I would love to see other people join in. So thank you so much again; it's so sweet that you were able to do this for me, and we'll talk soon.
Kavya Pearlman 42:13
Thank you, Debbie. Thank you.