"The Data Diva" Talks Privacy Podcast

The Data Diva E186 - Timothy Nobles and Debbie Reynolds

May 28, 2024 Season 4 Episode 186

Debbie Reynolds, “The Data Diva,” talks to Timothy Nobles, Chief Commercial Officer at Integral. We discuss the intersection of AI and healthcare, with a particular emphasis on Data Privacy and compliance. Timothy discusses the intricacies of data de-identification and the challenges posed by HIPAA regulations in the US healthcare system. We explore the expert determination and Safe Harbor methodologies for preventing re-identification of data, and the limitations of publicly accessible data in certain scenarios. We discuss the importance of retaining data fidelity while protecting privacy and the need for active responsibility in data usage and consent. The conversation also touches on the increasing demand for de-identifying datasets and the influence of GDPR on personal data rights. Timothy highlights the challenges posed by state-specific regulations in the US, particularly around de-identification standards, and anticipates a shift towards broader definitions of de-identification. We also explore the ethical considerations of data rights and privacy in utilizing AI technology, emphasizing the need for a balanced understanding of its potential impact. Finally, we discuss data compliance challenges and the need for a business case to facilitate data sharing. We emphasize the significance of chain of custody and data provenance in Integral’s solution, which aims to alleviate the complexities and liabilities associated with compliance. We also highlight the frequent lack of in-house expertise for handling these tasks, underscoring the value of external support in navigating data compliance challenges, and Timothy shares his hope for Data Privacy in the future.

33:40
SUMMARY KEYWORDS
data, identification, hipaa, regulations, compliance, identify, ai, datasets, understand, privacy, fidelity, point, care, types, patient, increasingly, produce, computers, certifier, love
SPEAKERS
Debbie Reynolds, Timothy Nobles

Debbie Reynolds  00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show today, Timothy Nobles; he's the Chief Commercial Officer at Integral. Welcome.

Timothy Nobles  00:36
Thank you for having me. I'm very excited to be here.

Debbie Reynolds  00:39
Well, I always love to talk to geeky tech people like me at geeky tech companies, and I love the stuff that you guys are doing around sensitive data and de-identification. But before we get started, why don't you tell me a little bit about your career trajectory and how you ended up being the Chief Commercial Officer at Integral?

Timothy Nobles  00:59
Yeah, thank you. That is a super fun question. In practice, I joke all the time that I started adult life as a professional rock and roller. I was a bassist by trade and spent about 10 years on the road doing all of that. Somewhere along the way, I decided I should actually get a real job, which found me on a marketing track. Along the way, I used data extensively as part of the formula for success in the various marketing responsibilities that I had, and I was forced to grapple a little bit with the ethical concerns around it and what I was comfortable with and what I was not. As I matured through my professional career, I definitely started to realize that there's a pretty gratuitous abundance of data available on us, and I wasn't really sure how I felt about it. About that same time, I found my way into the healthcare side of my career. That's when PHI and HIPAA and the ideas of data compliance, privacy preservation, and privacy-enhancing techniques were really introduced, and that just became increasingly important to me overall. Along the way, I had the opportunity to serve as Chief Product Officer for Agricultural and Health, where we built a ton of very sophisticated predictive analytics, doing so in a compliant way that still gives you very high-fidelity outputs and confidence in what the data itself is saying. Getting the data to a state of compliance actually ended up being one of the little things that really excited me. So when I had the opportunity to connect with the founders of Integral, I pick on them all the time and say it was love at first sight; we were able to hit it off. I really believed in what they were trying to accomplish, and I really appreciated the way in which they were trying to solve the problem. That brings us to here, and I'm glad to unpack any of that you feel is appropriate.

Debbie Reynolds  02:43
Well, we have an international audience, so not everyone understands HIPAA or why it's challenging for us in the US, especially because in the US we don't have universal health care. 

Timothy Nobles  02:54
Yeah, good point.

Debbie Reynolds  02:55
But we have more challenges around this data portability and transferring data from one place to the next. But tell me, what is it about HIPAA that makes this data challenge unique?

Timothy Nobles  03:07
HIPAA is a set of regulations really intended to protect the privacy of individuals. The genesis of it is really around preventing discrimination based on a health condition, but also, overall, protecting a patient's privacy in the US. From that, there's a fairly fuzzy set of outlines about what constitutes being in compliance with HIPAA. There are a few spirits to this. First, there are methodologies for preventing the risk of re-identification of a dataset. Those methodologies are referred to as expert determination and Safe Harbor, and the big differentiation there is that expert determination is a statistical model, and Safe Harbor is a redaction-heavy model. With Safe Harbor, the easiest way to think about it is that if you just remove the data, there's no re-identification concern; it ends up being a pretty heavy, blunt instrument, removing immense fidelity and immense usefulness from a dataset under the guise of care and treatment or support of improving outcomes for a patient. With expert determination being a statistical model, the goal there is basically to drive the probability of re-identification down. In normal speak, that's: how do you get as close to a zero risk of re-identification as you possibly can, accepting that, it being a statistical model, there's no way to purely get to zero? From that, the goal is to really make the data useful and portable in support of a concept we love to talk about here in Nashville; one of our favorite words here is interoperability. How do you make this data useful? How do you share it? How might you produce commercial value from the data without actually giving up an individual's privacy in so doing?
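
To make the distinction concrete, here is a minimal sketch of the two pathways in Python. The pandas column names are hypothetical, and the risk measure is only the textbook 1/k intuition behind expert determination, not a full statistical model.

```python
import pandas as pd

# A few of Safe Harbor's 18 identifier categories, mapped to assumed column names.
SAFE_HARBOR_DROP = ["first_name", "last_name", "ssn", "home_address",
                    "phone", "email", "medical_record_number"]

def safe_harbor_redact(df: pd.DataFrame) -> pd.DataFrame:
    """Redaction-heavy model: simply remove the identifying columns."""
    return df.drop(columns=[c for c in SAFE_HARBOR_DROP if c in df.columns])

def worst_case_risk(df: pd.DataFrame, quasi_identifiers: list[str]) -> float:
    """Expert-determination intuition: the worst-case re-identification risk
    is 1/k, where k is the size of the smallest group of records sharing the
    same quasi-identifier values. The risk is nonzero but can be driven low."""
    k = df.groupby(quasi_identifiers).size().min()
    return 1.0 / k
```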

Debbie Reynolds  04:40
I think there are two challenges in HIPAA; you tell me whether I'm wrong or not. One is more of the statistical challenge: people's data going into datasets in a statistical manner, and trying to make sure we de-identify people that way. Then there's also more of the substantive matter, maybe redaction, or trying to de-identify datasets where maybe it's a diagnosis or some other qualitative information that is known about someone. So, to me, those are different. But tell me, what are your thoughts?

Timothy Nobles  05:16
Yeah, that's a really constructive question, and one that is definitely chewy, for lack of a better way to put it. In practice, taking a statistical approach, one way to think about it is that you're trying to preserve as much fidelity within the data as possible. Let's use an ICD code or a zip code; we'll actually reference both of those. Statistically speaking, if you can produce a large enough aggregate or a large enough audience, depending upon how you like to think about that patient panel, once it gets large enough, the ability to pinpoint an individual starts to go down. So, for instance, a full ICD-10 code, which is a diagnostic code describing what a patient is dealing with, carries a pretty high degree of fidelity in terms of what's going on. However, there's the idea of truncating it to its first three characters. It's very similar to the way GPS coordinates work: the more digits you have behind the decimal, the higher the precision of that location, and the closer to the decimal point you get, the broader and broader the radius becomes, and it becomes increasingly difficult to pinpoint an individual. The zip code is pretty much the same way: a nine-digit zip versus an MSA or a CBSA, a core-based statistical area, which is just a much larger geographic definition. Again, you're opening up, the population is getting bigger, and the sensitivity to a condition or a disease type is lessening, because you have more people who probably fit that criteria; especially using the three-character ICD code, you'll have more people that ultimately aggregate into that. The other part, from a statistical point of view, is the idea of small cohorts. CMS, the Centers for Medicare and Medicaid Services here in the US, has this notion of small cell suppression, which is the small cohort thing: they suppress anything with fewer than 11 individuals, and for geographies, you typically want to keep them north of 20,000 in population. Said another way, in some of our rural parts of America, even when the geography is really big, there are more cows than people, so you have to make sure you're always watching out for that. In some cases, there are just too few people no matter how broad you get, in which case those records automatically become excluded. Then, thankfully, on the data supplier side, there are very specific types of scenarios, heaven forbid a violent crime or terrible trauma like an automobile accident, that have other publicly accessible data around them; those don't actually show in the data at all, they're typically not provided in the first place. So there's already some level of redaction on extremely sensitive matters like that.
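
The generalization and suppression moves he describes can be sketched in a few lines of pandas. The threshold of 11 follows the CMS cell-size rule mentioned above, while the column names and the three-digit ZIP choice are illustrative assumptions.

```python
import pandas as pd

MIN_CELL_SIZE = 11  # CMS-style small-cell suppression threshold

def generalize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Truncate the diagnosis code to its three-character category,
    # e.g. "E11.65" -> "E11" -- like dropping digits off a GPS coordinate.
    out["icd"] = out["icd"].str[:3]
    # Widen a nine-digit ZIP to its three-digit prefix area.
    out["zip"] = out["zip"].str[:3]
    return out

def suppress_small_cells(df: pd.DataFrame) -> pd.DataFrame:
    # Exclude any (diagnosis, geography) cohort too small to hide in --
    # the "more cows than people" problem.
    sizes = df.groupby(["icd", "zip"])["icd"].transform("size")
    return df[sizes >= MIN_CELL_SIZE]
```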

Debbie Reynolds  07:41
Very good.

Timothy Nobles  07:43
Okay, so then, to flip to your redaction point: the goal here is that as you're aggregating, and I would call it softening the edges a little bit, you're actually retaining a lot of fidelity and a lot of use in the data, so that the data itself remains descriptive. If you're trying to pursue a therapeutic design or something of that nature, the more fidelity you can have, the more you can understand about comorbidities, environmental conditions, etc., around these populations, and the more useful that becomes in the design of a therapeutic. Whereas when you take the Safe Harbor approach, which is to just go cut the data out, you end up with fragments, and there are a lot of holes and a lot of inferences you end up having to make from a data analysis point of view, which can be very challenging. It's also a lot easier to, I'm going to stop short of saying get it wrong, but it gets a lot easier to skew it and lose a lot of that core thread as a consequence of that data being removed.

Debbie Reynolds  08:43
Correct, right, because you want to retain that usefulness, but you also want to protect the identities of the people from whom you're collecting that data.

Timothy Nobles  08:54
Yeah. So, for example, with HIPAA, you can't have first name, last name, Social Security number, home address; none of that's admissible. That's all gone. There's really robust tokenization technology that's considered privacy-preserving, or in some cases privacy-enhancing, where an individual is obscured into a very complex token. It's actually derived from a host of inputs, but it's so wildly obscured that even on the same dataset, if you purchased the same dataset as I did, we would have different token sets on each of them. Very advanced technologies like that have helped bring us along, so that the idea of longitudinal records can be better understood across multi-year patient care and treatment without ever getting the ability to re-identify an individual.
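
One common way such tokenization can work, as a sketch rather than a description of Integral's actual method, is a keyed hash over identifying fields with a different secret per data recipient: tokens stay consistent within one buyer's dataset, enabling longitudinal linkage, but differ across buyers, so two copies cannot be joined.

```python
import hmac, hashlib

def patient_token(recipient_key: bytes, first: str, last: str,
                  dob: str, ssn: str) -> str:
    # Normalize the inputs, then HMAC them under the recipient's secret key.
    material = "|".join([first.strip().lower(), last.strip().lower(), dob, ssn])
    return hmac.new(recipient_key, material.encode(), hashlib.sha256).hexdigest()

# Same patient, two recipients: stable within a dataset, unlinkable across them.
t_a = patient_token(b"secret-for-recipient-A", "Jane", "Doe", "1980-01-02", "000-00-0000")
t_b = patient_token(b"secret-for-recipient-B", "Jane", "Doe", "1980-01-02", "000-00-0000")
assert t_a != t_b
```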

Debbie Reynolds  09:43
So tell me what Integral does that is different from other people, or companies.

Timothy Nobles  09:50
We really focus on the idea of compliance as a service directly. The traditional approach has always been: you buy all this data, and then it's okay, I've got to ship it off to the compliance side of the house. There it wanders around in the wilderness, in some cases for months, and eventually you get it back, and a whole bunch of stuff has been asked to be taken out of the dataset. Then you have all these remediations you have to go through, and so anywhere between 6 to 12 months later, you finally have a production dataset you can work with. The approach we've taken is that, through advanced algorithms and automation, we've been able to basically take that down so that within 24 hours, in most cases, with a few exceptions of course, you've got the opportunity to look at what the top-end privacy concerns are within the data. Within roughly 48 to 72 hours, you're actually looking at remediation recommendations, and the important thing there is that they're actually recommendations. It's not do-this-or-else, which is the conservative posture pretty much any consultant in the space takes, rightfully so. I've personally purchased a number of those, and they're effective; they do exactly what you need them to do. But along the way, from our founders' experiences and my path in healthcare data, the hope was that this could become a more collaborative process, in which case we try to put that information face up and make it a conversation, so that we can ensure the compliance is, one, being held to the strictest standard, and, two, being as supportive as possible to the use cases of the customer.

Debbie Reynolds  11:10
I like that approach because, and I don't know what you think, but I think, especially with artificial intelligence, even though it's very helpful with datasets, sometimes it's more of a machete than a scalpel. So I think that there has to be more conversation. What do you think?

Timothy Nobles  11:28
I totally agree. Algorithms are, well, I'm just going to say computers, to be a little more generalized; computers are really brilliant at math and detection and classification. One of the really important things about what we do is that we not only use the computer, we also keep a human in the loop. We have human certifiers who are checking our work and running in parallel with this. All of the certifications we produce on behalf of our clients are signed both by us at Integral and by an expert certifier, who will have a very deep background in biostatistics or something of that nature. Our head certifier is Dr. Bradley Malin; he's actually written a lot of the guidance on how to interpret HIPAA, so that helps a lot. We constantly rely on them to help interrogate the results that we're producing. Because computers are good at science and people are really good at art, and you put those two things together and you have something quite powerful. Otherwise, to your point, computers are kind of dumb and dutiful, even AI. We support a lot of LLM, large language model, datasets; we have a number of customers who have asked us to take unstructured data and get it compliant in such a way that the model development they're doing is at zero risk of unintentionally inheriting something that is personally identifiable, or that through its development could produce identifiable data, which is really cool. That actually helps support some early-stage routing for healthcare; you're seeing a lot of this in the mental health domain, and you're seeing a lot of this with on-call nurses here in the States, where that approach is being taken. But to that point, these models are still largely using math to guess what the next most probable thing is, which is genius in its own right, don't take me wrong; but at the same time, the risk therein is really so much about the inputs associated with AI, more so than what AI is going to give you back. Your output in AI has so much to do with the input. That's why we've taken a great deal of care to be able to support unstructured data, in pursuit of what inevitably is going to be lots and lots of AI development over time.
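
As a toy illustration of scrubbing unstructured text before it reaches model training: the patterns below catch only a few direct identifiers, whereas production pipelines, presumably including Integral's, combine statistical NER models with the human review he describes.

```python
import re

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[MRN]":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(note: str) -> str:
    """Replace a few direct identifiers in clinical free text with placeholders."""
    for placeholder, pattern in PATTERNS.items():
        note = pattern.sub(placeholder, note)
    return note

print(scrub("Pt seen today, MRN: 4481923, callback 615-555-0100 re: A1C results."))
# -> "Pt seen today, [MRN], callback [PHONE] re: A1C results."
# Names, dates, and rarer identifiers still require NER plus human certification.
```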

Debbie Reynolds  13:24
Yeah, I think the inputs are key. Unfortunately, some people have terrible data, and they think that somehow, magically, AI is going to make it better, when it will actually just create more junk in the end.

Timothy Nobles  13:37
Yeah.

Debbie Reynolds  13:37
So, making sure that the data is good before it goes into those models is really key.

Timothy Nobles  13:41
I kind of liken it back to when Google really made search ubiquitous for pretty much everybody. As a point of opinion, to me it's really important that before you input something, you think about the context of what you're doing. Many of my peers and I always joke about Dr. Google, the great symptom checker. But when you do that, think about what Google actually knows and understands about you, especially if you're signed in using Chrome; that's a pretty high degree of fidelity about what they understand. Then, if you're sitting there doing a search on a particular type of pain or trying to understand a condition, you inevitably are starting to just voluntarily give away PHI, and Google makes it easy and very transactional. So if I stub my toe, I'll look up the symptoms of a broken toe; I'll do it myself. But at the same time, how can we think before we act when it comes to these models, so that we're taking active responsibility and making sure we're not unnecessarily volunteering information that, through the Terms of Use, we are consenting to being used at scale by these other companies?

Debbie Reynolds  14:47
Yeah, I will just pass on two things that are happening, and I want you to definitely correct me if I'm wrong. Obviously, there is personal health information collected about people that is derived outside of a patient-provider relationship, so a lot of it falls outside of HIPAA. But then also, a lot of privacy laws now, in different US States and different locations around the world, are calling for or recommending that companies de-identify datasets. So I think that's going to lead to more organizations leaning on people like Integral who understand de-identification, even though it may not be HIPAA-related. But what do you think about those two?

Timothy Nobles  15:35
I think you're spot on. For our European friends, the GDPR is really, in my opinion, a very brilliant precedent for personal rights around data. Like any legislation and regulation anywhere, there are some things about it that are super strong, and some things that are still shaking out as we go. What we've seen is a pretty significant amount of inspiration from that idea on the consumer side. So California, Colorado, and other States are enacting very similar definitions of the consumer and the notion of the right to be forgotten, as well as extending into the territory of health. As for how we interpret that here in America, many of the States default to the HIPAA standard; however, there are some, like Washington with its recent My Health My Data Act, which is very complicated but very thoughtful, that have actually taken a considerably more conservative stance on the definition of de-identification than even HIPAA. With that, I definitely think we're going to see a tendency to move towards these much broader definitions of de-identification; thankfully, there seems to be a lot of technology in place to keep pace with that. I think businesses are going to have a very unique challenge if they manage a national dataset, which many of the commercial data providers in America do: if you run national sets of data that are now being used in these geographic territories, like Washington State or California, Colorado, Vermont, fill in the blank, what are the governance rules around that? So I anticipate seeing a lot more focus on the structures of governance at a company level and the ability to showcase them through processing controls: in addition to the data at rest being considered compliant, also mitigating the possibility of producing a dataset under a lesser definition of de-identification than Washington's and having that data go into Washington State, and preventing those types of scenarios.

Debbie Reynolds  17:29
I think what you're describing is just the complexity that we have around data and trying to extrapolate what that means in terms of regulation in different jurisdictions; the US has not so much 50 States as 50 different countries. So that's right, it's really very challenging. It's very, very challenging. What is happening in the world right now that's concerning you, whether privacy or technology, health-related or not? What do you think?

Timothy Nobles  18:04
That's a lovely loaded question. I think that, historically, in the world I live in, the umbrella reach of a BAA, a business associate agreement, also known as a subcontractor agreement, has really held the industry together: this contractual handshake that says, okay, it's cool for you to have this data at this high level of fidelity, for instance potentially straight out of a health records management solution or whatever practice management solution, in which case you can share fully identified patient-level data with someone. I think as these new regulations come out, a lot of that is actually being challenged very aggressively. I'm not sure that we've quite come to terms with what we're going to do about that yet, which is good, because it's time for that to ultimately update. Don't get me wrong; at the same time, HIPAA has the spirit of what's referred to as minimum necessary. That means you don't give up any more data than is absolutely essential to do the job at hand. I think as these regulations come on, it's really bringing a little more attention to that idea of minimum necessary. Do you have to have the whole dataset in order to do this? Probably not; you probably need, like, a third of it, maybe a half. So I think that's going to end up being really interesting. Then I also think one of the other big challenges we're facing is each State ultimately writing its own legislation in its own language. There are scenarios that haven't been thought through yet; with the California act, for instance, there was this moment of, okay, is a physician a consumer, and can you actually assign a physician as rendering care? The further into the States we get, the more of this stuff will get ironed out, but there's still probably a good five-year horizon, maybe more, as all of these unique exceptions, or lack of clarity around what is or isn't the case, get a voice to clarify them and set a standard or at least establish precedent. That is going to, at least in my opinion, probably cause a lot of thrashing along the way.
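
The minimum necessary principle he mentions translates naturally into code: each approved use case is granted only the columns essential to the job, never the whole dataset. The use-case names and columns below are illustrative assumptions.

```python
import pandas as pd

MINIMUM_NECESSARY = {
    "readmission_model":  ["icd", "age_band", "zip3", "discharge_date"],
    "therapeutic_design": ["icd", "age_band", "comorbidity_flags"],
}

def release_for(df: pd.DataFrame, use_case: str) -> pd.DataFrame:
    # A KeyError on an unapproved use case is the point: no approval, no data.
    allowed = MINIMUM_NECESSARY[use_case]
    return df[allowed].copy()
```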

Debbie Reynolds  19:56
I agree with that. Well, outside of health, we have industries that are accustomed to collecting as much data as possible, indiscriminate data collection, and then really not understanding what has to be done to truly de-identify it. Just taking someone's name out or putting their last initial on something isn't sufficient.

Timothy Nobles  20:18
No. Actually, just to riff off of that, I think the other thing, too, is if we start looking at more AI agent-based care, and again, let me just say mental health, for instance: voice is an identifier, and then you have all the phone-level data and all these types of things that could be captured. So I think there are ultimately also going to be some very, very difficult decisions to make around what the data rights are and how much of that you actually need to retain. In order to maintain any model you develop, you have to continuously give it new information. But how can you do that with this unstructured data? How and where can you use technologies that create a synthesized voice instead of the actual voice, one that strips the idea of gender and strips the identifiable attributes that can be found within a voice, or the same thing for a video telehealth conference? How might we be able to use AI to create synthetic composites of the patient for training purposes, so that academic medical institutions could use this to help train future providers, where they can actually go watch the encounters, learn what was happening, and understand the inflections, but again without actually portraying the identity or personal attributes of an individual?

Debbie Reynolds  21:29
Yeah, I read an article that said that there's a technology that exists. I don't know how well it works. But it said they can take a sample of someone's voice and tell certain health conditions that they may have, just based on their voice.

Timothy Nobles  21:43
Yeah, I mean, I wouldn’t be surprised. I don't know that personally. But I wouldn't be surprised.

Debbie Reynolds  21:48
Yeah, it's a Wild Wild West in terms of AI and privacy. Where do you think we're going with AI? Obviously, AI has been around for quite some time, and a lot of industries use it quite a bit. But when AI burst out into the public consciousness, we started seeing stuff from different ends of the spectrum. One end is that the world's going to end in two years because of artificial intelligence; I actually read an article that said that. Then the other is that it's going to cure cancer. It's like, no, it's not going to do either. So, what do you think about this wide spectrum of conversation about artificial intelligence?

Timothy Nobles  22:30
My dearest brother was a sci-fi guy, so I got steeped in futuristic considerations and have a slight appetite for a dystopian bent. But with that, I think there's a little bit of truth in all of these things. Computers getting smarter is not Armageddon; it's not quite the Terminator narrative. But at the same time, solving world peace isn't necessarily true either; it's a cheesy, sensational headline as written. What we have seen is computers and algorithms getting increasingly good at helping identify gene sequences and things like that, which can actually contribute to much faster breakthroughs in medical science. Now, has it replaced the human's ability to interpret that and actually sort out what it looks like? No, and it'll be a long while yet. But does it mean it will end up being an increasingly creative assistant to those professionals? Absolutely. So I think that's actually where the blend of this sits: it's okay for us to be a little bit shocked by it. A dear friend of mine always made the joke: oh, Facebook changed the color of the button, we all hate Facebook now. If something changes just a little bit, we all hate it; give it a couple of weeks and we'll all be okay with it. Right now, it feels like we're going through one of those moments where there's change afoot and we don't really know what to do with it yet. But if we spend a lot of time with it, we'll realize where and how it can be of use to us. At the moment, the goal is really probably for it to be more assistive. There are tons of things we can use it for to unburden ourselves, tactically speaking, from the little mundane chores. As an example, I use one to help build homework schedules at my house. It's like, all right, what's the day, here's the time, let's go build it out. The crew loves that. It's really fun that the technology is out there helping. It helps us get done in, like, three minutes something that before might have taken 20, and that's great. That's cool. But does that mean it's going to go do all the homework for us? No, not yet. I could go on for an hour about this, but I'll stop there.

Debbie Reynolds  24:28
Well, it's fascinating, and I'd love your thoughts here. I'm really thinking about a lot of companies that have data right now. Maybe they're concerned or afraid to use the data because they're worried about doing it in such a way that they're not running afoul of certain legislation. But they really want to take a stab at de-identification, because we know that if a dataset is sufficiently de-identified under certain regulations, it will be easier for them to do that data sharing. Do you talk to someone about that at a business-case level?

Timothy Nobles  25:12
Yeah, so one of the things we really think about a lot is this idea of chain of custody, or data provenance, as we like to think about it. Part of the way we've designed our solution is to actually be one of the transport rails: how do you get the data from the source to your customer's infrastructure and data storage? Meaning, we've really taken an approach where we can do all the compliance and the remediation in the pipes, so that by the time you take receipt of it, you're already in a good way relative to the regulation. Part of why we've taken that approach is really the exact spirit of what you're getting at: there's a lot of complexity, there's a lot of intricacy, and there's also a lot of liability associated with it, and most executives are tasked with trying to figure out how to grow the business, not with following these little minute details of, oh, Nebraska has issued something new; so what the heck does that mean? We'll handle that. The idea of compliance as a service is about putting it in with the flow of the data. As silly as it might sound, it's really not a whole lot different than the chain of custody on a gallon of milk: the second it leaves the dairy to get into the cooler at the local market, the integrity of that trip is very important, because every degree of temperature the milk might lose is actually days off the expiration, and so the quality of the product decays very fast. Our goal is to be able to pick this up and give it to you as quickly as possible in a compliant manner, so that your teams can act on it as quickly as possible in service of the business or strategic intent.
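
A chain of custody like the one he describes can be kept tamper-evident by hashing each processing step together with the previous entry. The field names here are illustrative, and this is a sketch of the general idea, not Integral's implementation.

```python
import hashlib, json, time

def add_custody_entry(chain: list[dict], actor: str, action: str,
                      data_hash: str) -> list[dict]:
    # Link each entry to the one before it, so rewriting history breaks the chain.
    prev = chain[-1]["entry_hash"] if chain else "genesis"
    entry = {"actor": actor, "action": action, "data_hash": data_hash,
             "timestamp": time.time(), "prev_hash": prev}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return chain + [entry]

chain: list[dict] = []
chain = add_custody_entry(chain, "supplier", "extracted",     "sha256:ab12")
chain = add_custody_entry(chain, "integral", "de-identified", "sha256:cd34")
chain = add_custody_entry(chain, "customer", "received",      "sha256:cd34")
```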

Debbie Reynolds  26:36
Also, I think one thing that's happening, and it's something that you all help with, which I'm glad you're doing as a service, is that a lot of people don't have the skill in-house to do this. So even if they have this task they have to do, it's hard to have someone do it who doesn't have the training or expertise in it. What do you think?

Timothy Nobles  26:56
That's exactly right. Also, if you think about it from an overall legal perspective, a broad privacy perspective, there's a lot of value in having an objective party look at it. Because really, at the end of the day, as long as it's ethical, we don't really have a specific concern about what the data itself is. We have a concern about whether or not it's satisfactorily de-identified and in support of the use case the customer is pursuing. With that, to your point, there's not a whole ton of people around the globe who actually have the qualifications and the interest and the willingness to tackle these types of problems. Typically, what you find is a group of academics and then some very specific, very high-level biostatisticians or statisticians who love this kind of stuff. Again, you're dealing with a very rare community around the globe. On top of that, a lot of times, from an enterprise perspective, there's not actually a reason to try to own that resource: they're costly, and at the end of the day, their outputs could very easily be considered potentially contentious. Whereas those of us in the space who focus on the idea of compliance have the ability to go back and deliver that news, and we can deliver it in such a way of: we just need to make sure you understand where these risks are, and what this is and what that is; now, let's make the right decision to serve the best data possible while maintaining the strictest compliance.

Debbie Reynolds  28:12
Now, I definitely think that's the right approach. I foresee definitely more people going this route, even if they don't have HIPAA concerns, because they really don't truly know how to do de-identification.

Timothy Nobles  28:26
Yeah, and to your exact point, consumer rights are getting increasingly strict; finance around the world is another really great example, with very similar-in-spirit compliance frameworks. We're going to see, ultimately, more and more of this over time. Your point is exactly right: at the end of the day, corporate responsibility is going to be really about protecting customers, and part of that is not always having their full identity floating around 50 different places, or hundreds of different places. It's going to be this idea of data responsibility and ethics around that. Selfishly, I hope to see considerable increases in transparency from companies about how our data as individuals and consumers is actually being used and what is actually supporting their business. We're starting to see that in some of the more bespoke and digital-forward businesses, and we're also starting to see larger brands start to take a little more of that responsibility. I hope that becomes considerably more pervasive. Versus now, when we hit the accept-cookies button, how long before the brokers have that? Within seconds, that information has been sold off somewhere. Hopefully, we can get to where that's a much more transparent process.

Debbie Reynolds  29:29
Yeah, I agree. I was actually quoted in an article in Legal Tech News not long ago. The reporter asked me about anonymization and de-identification. Some big brand, I can't remember which really big one, had said, oh, don't worry about the data you give us, because it's de-identified or anonymized. I'm like, well, what does that mean, exactly? To your point about transparency, that can mean a thousand different things. So, yeah, I agree with you that it does need to be more transparent. But how do you think it should be more transparent?

Timothy Nobles  30:08
Well, thankfully, oddly enough, I think this is where regulation could be a friend as the regulations go along. Certifications are not much different than being a B Corp or meeting other types of environmental and green standards, of which there's an ever-growing collection. There are all sorts of ways in which governing third-party bodies can say, hey, you're abiding by these standards; these are the behaviors that you are consistently demonstrating with respect to a topic. How could an expert determination report, for instance, serve as one of those, along with the fact that you can comply across all of these States? Yes, that does put a little bit more work on a corporation, but at the same time, relative to the value it could earn them, and very likely the customer loyalty it would produce, that's a very minuscule moment of effort. How that shakes out, I don't know; honestly, I haven't really thought that far ahead. But I do think there are existing frameworks we could reference for how a business could approach saying: no, we're ahead of this, we care about you, these are the things that we're doing, and, in plain English in our Terms of Use, this is what this means.

Debbie Reynolds  31:13
Yeah, I agree with that completely. So if it were the world according to you, Timothy, and we did everything you said, what would be your wish for privacy anywhere in the world, whether that be regulation, human behavior, or technology?

Timothy Nobles  31:29
So fun. I think, in general, and as I say this, I have zero idea how to actually pull it off, but I think it'd be really great for us to have direct control and access over all of our data. It could be as simple as the little toggles we're used to; how could we have that level of granular control? So if you sit down to watch a program on Netflix, and you don't want Netflix to know you want to go buy ice cream, then fine; personally, I don't care who knows I like ice cream, but I like being able to participate in those decisions. Along the same line, I would love to see how consent-based participation around data could activate a lot of use cases that don't exist right now. One of my pie-in-the-sky ones would be, especially in America with a commercial payer like UnitedHealth, etc.: what would it mean if a patient were to consent to allowing their data, holistically, to be used for premium offsets? Meaning, could you actually get a plan for free, where instead of paying a monthly premium, you actually now have money to go seek care? To you, that exchange was worth it, to be able to say, cool, you can have my grocery shopping data, no secrets. Then, meanwhile, that gave me back whatever, several hundred dollars a month, to actually seek care and pursue that healthier lifestyle, so that I'm not showing up in emergency departments with high-acuity situations.

Debbie Reynolds  32:56
I never thought about that. That will be interesting. So, thank you so much for being on the show. This was great; I love this conversation. I'm sure a lot of companies are thinking more about this as they start to really dig into a lot of these State regulations, and also internationally, as we think about de-identification and trying to actually use data in ways that actually protect people.

Timothy Nobles  33:22
Thank you for having me. It's been an absolute delight.

Debbie Reynolds  33:24
Thank you so much, and we'll talk soon.

Timothy Nobles  33:26
Thank you. Take care.