"The Data Diva" Talks Privacy Podcast
The Debbie Reynolds "The Data Diva" Talks Privacy podcast features thought-provoking discussions with global leaders on data privacy challenges affecting businesses. This podcast delves into emerging technologies, international laws and regulations, data ethics, individual privacy rights, and future trends. With listeners in over 100 countries, we offer valuable insights for anyone interested in navigating the evolving data privacy landscape.
Did you know that "The Data Diva" Talks Privacy podcast has over 250,000 downloads, listeners in 114 countries and 2,407 cities, and is ranked globally in the top 2% of podcasts? Here are some of our podcast awards and statistics:
- #1 Data Privacy Podcast Worldwide 2023 (Privacy Plan)
- The 10 Best Data Privacy Podcasts In The Digital Space 2024 (bCast)
- Best Data Privacy Podcasts 2024 (Player FM)
- Best Data Privacy Podcasts Top Shows of 2024 (Goodpods)
- Best Privacy and Data Protection Podcasts of 2024 (Termageddon)
- Top 40 Data Security Podcasts You Must Follow 2024 (Feedspot)
- 12 Best Privacy Podcasts for 2023 (RadarFirst)
- 14 Best Privacy Podcasts To Listen To In This Digital Age 2023 (bCast)
- Top 10 Data Privacy Podcasts 2022 (DataTechvibe)
- 20 Best Data Rights Podcasts of 2021 (Threat Technology Magazine)
- 20 Best European Law Podcasts of 2021 (Welp Magazine)
- 20 Best Data Privacy Rights & Data Protection Podcast of 2021 (Welp Magazine)
- 20 Best Data Breach Podcasts of 2021 (Threat Technology Magazine)
- Top 5 Best Privacy Podcasts 2021 (Podchaser)
Business Audience Demographics
- 34% Data Privacy decision-makers (CXO)
- 24% Cybersecurity decision-makers (CXO)
- 19% Privacy Tech / Emerging Tech companies
- 17% Investor Groups (Private Equity, Venture Capital, etc.)
- 6% Media / Press / Regulators / Academics
Reach Statistics
- 256,000+ Downloads
- Listeners in 114+ countries
- Top 50 in Business and Management 2023 (Apple Podcasts)
- Top 5% in weekly podcast downloads 2023 (The Podcast Host)
- 1,000 to 1,500 - Average weekly podcast downloads
- 2,500 to 5,500 - Average weekly LinkedIn podcast post engagement
- 12,450+ - Monthly Data Privacy Advantage Newsletter subscribers
- Top 2% of 3 million+ globally ranked podcasts of 2023 (ListenNotes)
Debbie Reynolds, "The Data Diva," has made a name for herself as a leading voice in the world of Data Privacy and Emerging Technology with a focus on industries such as AdTech, FinTech, EdTech, Biometrics, Internet of Things (IoT), Artificial Intelligence (AI), Smart Manufacturing, Smart Cities, Privacy Tech, Smartphones, and Mobile App development. With over 20 years of experience in Emerging Technologies, Debbie has established herself as a trusted advisor and thought leader, helping organizations navigate the complex landscape of Data Privacy and Data Protection. As the CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC, Debbie brings a unique combination of technical expertise, business acumen, and passionate advocacy to her work.
Visit our website to learn more: https://www.debbiereynoldsconsulting.com/
"The Data Diva" Talks Privacy Podcast
The Data Diva E204 - David Evan Harris and Debbie Reynolds
Debbie Reynolds, “The Data Diva,” talks to David Evan Harris, Chancellor's Public Scholar at the University of California, Berkeley, and named to the Business Insider AI 100. We discuss civic engagement, election integrity, responsible AI, and governance, and Harris brings a wealth of experience and insight to our conversation.
Throughout the episode, Harris delves into the profound implications of AI technology on democratic processes, particularly its impact on elections. He underscores the urgent need for legislative frameworks to mitigate the risks of AI manipulation and preserve the integrity of democratic institutions. Drawing from his experiences, Harris advocates for robust privacy protections, positioning privacy as a fundamental right in the digital age. He emphasizes the importance of transparent privacy settings and user consent mechanisms to empower individuals and safeguard their personal data from exploitation.
Beyond his advocacy for privacy rights, Harris explores the ethical responsibilities of technology companies in developing and deploying AI systems. He challenges the industry to prioritize ethical considerations and accountability, urging policies that ensure technology serves societal good while respecting individual freedoms. Harris shares insights from his engagements in public policy, highlighting efforts in California and Brussels to strengthen regulations around AI, privacy, and social media rights.
Throughout the conversation, Harris' reflections are punctuated by notable quotes that encapsulate his stance on data privacy and ethical AI practices. He stresses, "AI companies shouldn't see the world's data as theirs for the taking. Privacy should be a right, and consent and compensation should be key principles in data usage." Harris also questions the status quo of privacy settings, advocating for defaults prioritizing user privacy and clear, accessible explanations of data practices.
This episode offers profound insights and thought-provoking discussions for listeners interested in the evolving landscape of AI ethics, the impact of technology on democracy, and the future of data privacy. Harris' expertise and advocacy provide a compelling narrative on the complexities of AI governance and the imperative to balance technological advancement with ethical considerations. He also shares his hope for Data Privacy in the future.
43:36
SUMMARY KEYWORDS
ai, companies, work, elections, privacy, people, team, called, democracy, thinking, california, systems, social movements, data, platforms, facebook, ways, world, silicon valley, ethics
SPEAKERS
David Evan Harris, Debbie Reynolds
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello. My name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show all the way from Berkeley, California. This is David Evan Harris. He is the Chancellor's Public Scholar at the University of California, Berkeley, and an AI expert. Welcome.
David Evan Harris 00:45
Thanks so much, Debbie; it's an honor to be here with you.
Debbie Reynolds 00:49
You have so many different accolades, but one I'll point out here is that you received a Business Insider AI 100 award.
David Evan Harris 01:01
Yep, that's true. Thank you so much for bringing that up. Yeah, I was pretty surprised and honored to get recognized on that list with a lot of people who are a lot more impressive than I am. But yes, that's true.
Debbie Reynolds 01:16
Well, tell me a little bit about your career trajectory and how you got into AI. Obviously, this is very timely technology in terms of people's real interest in it; geeks like me were working on it before people got super hot about it. But this is a very exciting time to be in the space. So tell me how you got into this tech career around AI.
David Evan Harris 01:40
Yeah, well, I was really lucky early in my career to have a job at a think tank in Palo Alto called the Institute for the Future. I wasn't working on AI when I joined there in 2008, but I was really lucky to get put on a team of people researching the future of video, and that was something I was really excited about. I was doing some documentary film work and video art at the time, and I got asked to be part of this research team looking at the next 10 years of video as a medium. In 2009, the research team I was on put out this report that talked about how the world could be different 10 years out if everyone had access to the tools to produce computer-generated images or computer graphics. That was the kind of thing back then that we were really only used to seeing coming out of Hollywood studios and movies like Star Wars, and so we were thinking back then about what that would mean for human rights, for documentation of crime, or for forging evidence of crimes. Just a couple of years after that, I was really lucky to be running the fellowship program at the Institute for the Future, and I was able, around 2012 and 2013, to bring in a fellow named Sam Gregory, who's now the head of a nonprofit called Witness. He was working on the question of deep fakes and validated images long before anyone else. Now he's regularly testifying before Congress and is just a huge international voice on this, but he was thinking more than 10 years ago about what it would mean to validate images, to use them as documentation of human rights atrocities, and how we could be sure that that could be done in a way that wouldn't compromise the privacy of the human rights defenders or of the people who are documenting abuses, especially in parts of the world where it could be dangerous to reveal your identity as someone documenting human rights abuse. So again, that was where I was thinking about it a lot, and then I think 2017 was the first time that I really led a research project that was focused on AI. That was this great opportunity: I had a major hardware manufacturer ask the Institute for the Future to put together a workshop all about AI ethics, and I was selected as the lead of that project. At that point in 2017, it was what you could imagine, in the sense that the big questions around AI ethics were around self-driving cars and how they make decisions if they have to choose whether to kill a pedestrian or a cyclist or kill the person who's in the self-driving car itself, the classic trolley problem. There were also, back in 2017, already significant questions around bias and the ways that AI can be used to make important decisions about people's lives, in criminal justice and sentencing. It was already becoming clear that that was an issue back in 2017, and also with loans, deciding who gets the loan, and with job applications back then. We were already seeing early AI tools, but it wasn't this generative AI wave that everyone was a part of. So I'd say 2017 was the first time I really dug into this field of AI ethics, and then I really brought it into the classroom a lot at Berkeley. I've been teaching at Berkeley since 2015 and teach a class called Scenario Planning and Futures Thinking, where AI is a regular theme. I also have a class at Berkeley called Social Movements and Social Media, where we looked at this from early on.
I even brought in a co-instructor, Aisha Nazar Khan, who led a team at Twitter that was working on AI and ethics. She brought a really strong technical understanding; she's an engineer, and she brought that into the classroom to help us think about and understand the role of AI systems and algorithms in determining which social movements succeed and which social movements fail. During that class, we went through a different movement every week: Black Lives Matter, Me Too, the Arab Spring, and we tried to really dissect which AI systems and which algorithms were determining factors in the success of each of those movements. So anyway, those are a couple of my earlier experiences working on this topic.
Debbie Reynolds 06:44
It's a tremendous opportunity now that people are really talking about AI to be able to get these themes out there, especially in more of a wide-scale public way. Tell me a little bit about what's happening now in the world that's concerning you as you think about AI.
David Evan Harris 07:05
Yeah, I mean, there's so much, and I'll talk a lot about elections and AI, because that's just a topic I've worked on a lot. But before I do that, I know that you spend a lot of your time specifically thinking about data and privacy, and I think this is actually a really important topic that I haven't had that much of a chance to work really directly on. But I do think about it a fair bit, and I talk about it in some of my classes, and I think that there is this moment that we're in right now, where in the European Union, people have different rights with regards to data and privacy and AI than they do where I am, where we are, here in the United States. It's really frustrating to me. In particular, what we saw last month was a number of companies, the highest profile of which, I think, was Meta, my former employer, where I worked for almost five years, sending notices to people in Europe and, I believe, also in the UK, telling them that they had an opportunity to object and opt out of having the content that they produced on Facebook and Instagram used for training AI systems that were going to be produced by those companies. Now, this went really viral; I first saw it in a TikTok video that someone made with instructions for how to opt out. Then I tried to follow the instructions on TikTok, and I couldn't figure out how to opt out. I don't want all my personal photos, I don't want all my personal Facebook posts, to be used as training data, but there was no way to do it. And it turned out I wasn't the only one who was frustrated, because that interface, that ability to actually opt out, simply was not available to me in the US. Then a similar thing happened on LinkedIn. Someone wrote a great post, again with instructions for how to do it. And I tried to follow the instructions. And sure enough, there's no way to do it unless maybe you're in the EU, or you created your account in the EU and said that's where you live, or some combination of those things. But I couldn't do it. And thousands upon thousands of people, maybe hundreds of thousands, maybe millions of people, saw those posts on TikTok and LinkedIn, and probably others on Instagram and Facebook and other channels, and it's just so frustrating that there's nothing we can do in the United States. We do not have a right to object to the use of our content as training data for these AI systems. Now, there are a couple of different ways you could get around that. You could just make every post that you've ever made private on different websites. I think a lot of these platforms are maybe drawing a line where they'll train on your publicly visible posts, but not your private posts. But if you use Instagram and you want to actually use the product to reach an audience, setting Instagram to private kills the impact of the product, and that's similar for a lot of other platforms. I think it's such an important conversation right now to think about privacy and AI and your right to not be trained upon, not to be food for these systems.
And part of the reason that I think that's so important is that once these systems are trained, it's very hard to get that personal data out of the systems, and there have been quite a few people who've demonstrated the ability to get AI systems to inadvertently or accidentally spit out people's private data. The fact that we don't really know how these AI systems work, and we don't know exactly what the tricks are that might be able to get one to give up your address or give up your phone number, that's really scary too. Maybe you'll correct me if I'm wrong here, but I think we haven't yet seen a situation where an AI company has been told that it has to destroy an entire AI system because it was trained on illegally obtained training data. But I hope that we get to that place, because I don't think it's appropriate that generative AI companies just see the world's data as theirs for the taking, for the scraping, and do that without properly disclosing that they did it. Luckily, the EU AI Act is going to require that disclosure, and there are some laws being proposed in the US at the Federal and State level that would require some level of disclosure on the part of AI companies about what kind of copyrighted information they use in the process of training. I really hope that we get to the point where taking training data is something that you have to do with consent. You have to compensate people for it if they don't want to give their consent for free. So, I thought maybe I'd start out in that direction. Maybe that's something that you've thought about, that your audience has thought about.
Debbie Reynolds 12:28
Yeah, absolutely; you brought up quite a few issues. I think in the US, one of the main issues we have is that our regulation is very consumer-based, as opposed to human rights-based, which is different from the EU, and so that leaves a lot of gaps, because not every human is a consumer. I like to say that in the US, we don't have a right not to share. That's something they have in the EU that we don't have. That's why, when you go to websites and try to opt out or exercise your rights, you don't have the same rights as someone in the EU has. And with AI, because it's developing so rapidly, it's so easy for these companies with these huge tech capabilities to be able to take this data. They have enough money to fight these cases in court until the earth crashes into the sun. So for them, it's like, yeah, let's take the data, and then we'll fight it out in court. The problem for the individual is: can you fund a multi-billion-dollar lawsuit as an individual? So then it becomes incumbent upon maybe some of these other rights groups to gather people in class actions and try to fight that. So, in my view, regulation is important, but it may not be adequate in terms of redress for individuals, and it doesn't happen fast. I think those are two big issues. What do you think?
David Evan Harris 14:01
Yeah, I completely agree, and I really like your characterization that the companies will spend money on lawsuits until the earth crashes into the sun, because it is a Silicon Valley strategy to not just innovate on technology, on building new things, finding new ways to make devices and screens and people connect and do innovative, scientifically boundary-pushing things. Silicon Valley innovation has for a long time also been about doing stuff that is maybe illegal or at the edge of legal, or operating in a space where laws have not yet been clearly created. There's one analysis I remember reading a long time ago where it was pointed out that PayPal was illegal in all 50 states when it started, and in some ways, that was not something to be ashamed of. It was something to think about as just a strategy: what was it going to cost to overcome the banking industry of that time, which had no interest in a new Silicon Valley company showing up and allowing people to make these easy electronic payments? What was it going to cost? How much investment would they have to get, not just to run their servers, but also to run their legal and public policy divisions? And if you look at Airbnb, it's a similar story. If you look at Uber, it's a similar story. These companies go into these markets that have legal systems that have been crafted for decades, or even maybe more than a century, with taxis or hotels; the legal systems have been crafted by these existing incumbent industries, and Uber wants to take them out, and Airbnb wants to take them out. I've used Uber more times than I can count. I've used Airbnb quite a few times. But I think the important thing to think about is: how do we as people, as scholars, as researchers, you as a public figure talking about this stuff, get people to wake up to the fact that Silicon Valley is able to change laws? They are able to invest all that money, and we as citizens need to think about whether we really want them to change those laws in exactly those ways. What does Uber do not just to make it easier for me to get a ride somewhere, but what does it do to drivers' income and salaries? What does it do to the ability of professional drivers to get access to health care and benefits for themselves and for their families? What does Airbnb do to the average rent for people who need housing in these big urban areas? And I think that it is becoming very clear with AI that there's a large set of public policy conversations that need to be had about how to update our legal systems to make it really clear that the companies can't just do whatever they want. Now, I live in the San Francisco Bay Area, and so at different social events I hear a lot from people about what's being talked about in different startups. Something I heard recently, just in the last six months, is from two friends who work a lot in the startup community, who go to hackathons with people from lots of other startups and invest in startups or do engineering work. I've heard from both of them that a common thing they're hearing in the startup community is: we've got to build these AI products fast, before they're illegal. And to me, I'm just slapping myself in the forehead saying, what? They're racing to build things that they are confident, or have some degree of confidence, will be illegal soon. Like, where did they get their morals and ethics from? Why would you work as fast as possible to make stuff that you think is going to be illegal?
I think a lot of people who aren't deep in this Silicon Valley scene don't realize that calculations like that are actually made explicit, and are part of the explicit strategy of companies right now: do it quickly, before it's illegal.
Debbie Reynolds 18:46
The ethics were definitely lacking there, yeah, for sure. My concern, and I want to get a bit into your work on elections because I think it's very timely, my concern always is that I think companies should make money. I'm not against that, but I don't think that people should be harmed in the process. So, finding ways to leverage technologies in a way that can help people, and not make people and their data food for AI systems or take advantage of them in ways where they have no redress or recourse and no way to stop whatever those harms are, is really important. But tell me a bit about your work on AI and elections; it's very important.
David Evan Harris 19:38
Yeah, yeah, happy to do that. So, I've been interested in and concerned about elections and the future of democracy for a long time. For me, it really hit home while I was teaching this class, the one I mentioned to you, Social Movements and Social Media, at Berkeley, and for a while I'd also been involved with another class called Civic Technology. There was never a point in my life where I would have raised my hand and said, I am a techno-optimist. But if I look back on myself retrospectively, between 2008 and 2015, I was overly techno-optimistic, because I subscribed to this belief that I think a lot of people had: that if you gave everyone in the world a cell phone, and everyone in the world had a video camera on that cell phone to document any kinds of abuses of power, and they could use that cell phone to participate in democracy and share their ideas and create social movements and build power for communities, that was going to make us a better world. And that was even though I held my nose at the advertising-based business model that was making so much of that possible online. I've long been a subscriber, actually a lifetime subscriber, to a magazine you might have heard of called Adbusters magazine. It's the magazine that was credited with coming up with the Occupy Wall Street movement in a poster they made. So I had critical ideas about advertising fueling all this technological innovation, and I thought, overall, yeah, ads are bad; it's not the best place to power all this stuff. But overall, I really did think during that period that it was just a given that when people get access to all this tech, it's going to make the world a better place. Then I read this book called Twitter and Tear Gas by Zeynep Tufekci, and I think it's a fantastic book. She coined this term, networked authoritarians, I think it's her term, and she talks about all the ways that authoritarian leaders, and people that they hire or get involved with, can actually sometimes use the tools of social networks and technology and mobile devices better than grassroots movements and activists, when they are shameless about doing things like deliberately manipulating people, deliberately distributing misinformation, and using network technologies not just to promote ideas, but to surveil activists and to spread vicious rumors or false information about their opponents. So I teach with that book, Twitter and Tear Gas, in my classes at Berkeley, and I think reading that and watching what was happening with Brexit, with Cambridge Analytica, with the Russian hacking of the US 2016 election was a real wake-up call to me that things weren't really going the way I thought they would. And it wasn't that the technologies weren't as effective as I thought they would be. I think it really is this idea that there are people who are shameless and have no ethics about how they use these tools. Putin is a perfect example, but there are many other authoritarian situations where leaders are using these tools, and also censorship; you can't forget good old censorship as part of that set of tools with technology. They use these to stay in power, to get power in illegitimate ways, and to really damage democracy around the world. So I got really frustrated at that point, and in my classes I had always tried to bring in guest speakers from tech companies so my students could really hear straight from the people who work in the trenches what it's like developing these technologies.
I just decided in 2016 that I really wanted to work inside Facebook, because I was really worried that there were going to be some decisions in particular made inside of that company that would determine the future of democracy, if there was going to be a future of democracy on these platforms. At least at that time, back in 2016 and 2017, there was very little momentum around meaningfully regulating these companies. Now, I would say there's a lot more momentum and interest in regulating them, and real regulatory achievements in Europe, but back then it just seemed like the only way I could have an impact on the future of democracy was to get inside. So, in 2018, I joined Facebook to work on what was then called the civic engagement team. I joined as a researcher and eventually went on to manage teams of researchers within the company. I started out working on a product called Town Hall, which was a tool to help people communicate with their elected officials on Facebook, and I worked on another product called Community Actions, which was basically a petition platform, again to help community organizers and social movements grow on the platform. But as the year unfolded, it became really clear that what Facebook needed to do was not produce more of these civic engagement tools to help community organizers, but to really clean up what was going on on the platforms in terms of Cambridge Analytica stealing people's data. I think in retrospect there's a pretty broad consensus that Cambridge Analytica, as a company, definitely stole people's data; that's a fact for sure. But then there was their marketing pitch that they knew the personality types, in really granular ways, of almost every voter in the United States and in other countries too, and that they could use those tools to target advertising very specifically, in ways that would play on people's personality types and get them to vote in certain ways. I think people think they were overstating what they could really do, because of their marketing and their need to pitch themselves. Facebook really needed to clean that up. Facebook needed to clean up the Russian hacking. There were two different groups in the 2016 election that were really active. There was something called the Internet Research Agency, which was in an office building in St. Petersburg; you can see pictures of it online, and reportedly 1,000 people went to work there. It was their full-time job to go hack elections all over the world and make fake content, but they were especially focused on the US election. Then there was also the Russian military intelligence unit, the GRU, that was manipulating people through producing fake content, posting ads and boosting content, and even engaging with individual activists in the US, trying to create fake events, protests, and counter-protests, so that people would hate each other more. That whole thing became such a big issue that the civic engagement team at Facebook, basically most of the people working on it, myself included, got shifted into what was called the Civic Integrity team. So I worked on that team, I think, starting in 2018 until sometime in 2020, and I did research about how people were trying to interfere in democracy all over the world. A lot of the stuff I worked on was highly confidential, and then it became not at all confidential because of Frances Haugen, the Facebook whistleblower, who was a coworker of mine.
We were on the same team, that Civic Integrity team, and you know, she leaked 20,000 documents, so I can say a little bit more now that those documents are out there than I could have at the time. I was building one program to do research in what we called at-risk countries, and those are countries at risk of civil war, genocide, and election interference, and a lot of them are countries where it's too dangerous for Facebook to even have an office, or have employees on the ground, or even send us to travel. I did a lot of work just looking all over the world at where the worst situations were, where the platforms were being used to manipulate democracy. And of course, this is AI, because the people who were trying to interfere in the 2016 election were manipulating the AI that determines what people see in their newsfeed. Really, that's two things. It's the newsfeed ranking algorithm that chooses what you're going to see first when you open the Facebook or Instagram app, what you're going to see second, and what you're going to see third. But then it's also the AI systems that are used in targeting of advertising, and those are two very much interconnected types of AI systems that have a huge role in elections. After that team, I went on to work on another team called the Social Impact team, again doing work on developing products for social movements, for organizers, and for nonprofit organizations to recruit volunteers. Then the last team that I worked on was called the Responsible AI team at Facebook. On that team, I looked a lot at bias. I was managing researchers working on AI fairness and inclusion, and also another group on AI governance and accountability, and in those teams we were also thinking a lot about a lawsuit: Facebook had been sued by the US Department of Housing and Urban Development for facilitating housing discrimination through its advertising targeting tools. So that was a big thing that that team had to deal with. Also, we were thinking about how AI could continue to be used in different ways in elections, and while I was on that team, we were seeing the beginnings of generative AI. I was testing out things like Stable Diffusion, and we were looking at large language models and thinking about how these things could be used to manipulate elections and how they could be used in bad ways. Since then, I've been out of the company for about a year and a half, and I've been really focused on public policy, because looking back, two of the teams that I spent the most time on inside of Facebook, the Responsible AI team and the Civic Integrity team, no longer exist. Those teams were not mandatory for the company. Strangely, in the United States at least, the company doesn't really have a strong legal obligation to even assess in a systematic way the impacts of its products on elections and democracy. There are civil rights laws in the United States, and that's why the Department of Housing and Urban Development was able to sue; it wasn't under any laws about AI specifically. This work isn't mandatory, it's voluntary, and Elon Musk came in and bought Twitter and laid off somewhere around 81% of the company, which is the number that's been floated around the most. Then, after that, Mark Zuckerberg said that he admired what a lean company Elon had been able to make Twitter.
And so we're in this situation where the social media companies are abdicating responsibility for election integrity on their platforms, and now we're seeing this really big bifurcation in what the companies have to do in Europe to comply with the Digital Services Act, which requires that companies do a lot to assess the risks to democracy and elections that their platforms pose. Because of the EU Digital Services Act, they also have to mitigate those risks and make risk mitigation plans. They have to get a third-party auditor to come in and audit their risk mitigation and assessments, and then they also have to open up to outside researchers and academics in Europe to let them research and study what's going on with democracy on those platforms. And it's not just democracy; there's a list of different areas under the Digital Services Act, mental health, public health, children, protection of minors, those are on that list too, and the companies really have to do a lot on those issues to demonstrate that they're doing everything they can to protect society on those topics, and they just don't have to do that in the US. So by and large, many of these companies, the biggest ones, are saying, well, we're just not going to do that in the US. And to be fair, it's a minefield for them, because whichever direction they step, they risk making people unhappy on both sides of the political aisle in the US. But I don't think that's a good excuse to divest from the work. So since I worked at Meta, I have been focused a lot on public policy, and I'm figuring out ways to make policies that could require companies to do everything they can to protect democracy and protect elections online. Mostly I'm working in California and in Brussels on this, because I actually spent some time last year in Washington, DC. I talked to all the people I know and trust about AI legislation in Washington and where it's going to go, and I asked a lot of people point blank. I said, hey, I'm in academia. I don't have a jurisdiction. I just want to be impactful, and I really think we need laws about AI, and laws about AI and democracy and misinformation especially. Where should I spend my time? Should I be coming to DC once a month and trying to figure out how I can influence Congress or influence executive decision-making, or should I stay in California, where I live, or should I go to Brussels? And the answer I seemed to get everywhere was California and Brussels, that's where it's at; DC is gridlock. We don't know if there's going to be any meaningful legislation. I mean, with privacy, we don't even know how many years ahead California is, because the California Consumer Privacy Act was passed in 2018. APRA has been proposed at the Federal level, but we don't know if it's going to pass at all. So California is arguably at least six years ahead of Washington on privacy, and in terms of elections, democracy, and AI, we can only guess that maybe it'll be something similar to that. Who knows? So, I've been focusing my energy on helping the EU. Number one, I've gone to Brussels twice in the last year, and I've helped give advice, and was asked to produce some memos and give them my input as an expert about the EU AI Act and how the AI Act should treat certain types of challenges.
One was around how the AI Act should deal with general-purpose AI systems and how it should set thresholds for what kinds of general-purpose AI systems are high-risk and which kinds aren't. Also, I was able to give them a lot of input about how the AI Act treats open source, which I like to put quotes and asterisks on, because open source is not a term that cleanly applies in the world of AI yet; there's not an agreed-upon definition of it. I've been thinking about AI and elections in particular, though, in California, and here I've been thinking about and working on helping write some legislation. I'm part of the team at an organization called California Common Cause that has an initiative called CITED, the California Initiative for Technology and Democracy. I've been working on this legislative package, and we've got two bills that are about deep fakes, making it harder for people to distribute damaging deep fakes that could interfere in elections, and another bill that's about provenance, authenticity, and watermarking of AI-generated content. I've been greatly privileged to be able to work with a lot of lawmakers on this. I've testified six or eight times in the last six months or so in different committees of the California Senate and Assembly, and I'm really excited about the potential for some of these bills I'm working on to hopefully pass. I'll be testifying again in August about the California social media bill of rights, trying to really push forward legislation and statements about the rights that people should have on social media, and that social media bill of rights has language specifically about privacy in it as well. But yeah, elections, I think, are just so important. We're already seeing elections being impacted by deep fake election interference, in the form of the Biden robocall in New Hampshire, and there are examples of interference from Slovakia last year in a really tight election, with an audio deep fake, and so much more. But I'll stop there.
Debbie Reynolds 37:32
That's tremendous work that you're doing. I think all of these things will come together, and I'm happy to see that California is, and has been for many decades, really taking the lead, taking the reins on privacy, because State action is where it's at right now, not the Federal level. So I'm glad to see that you're advocating for those things in California, in the US, and in Brussels, where it's very much needed. So if it were the world according to you, David, and we did everything that you said, what would be your wish for privacy or AI, whether it be regulation, human behavior, or technology?
David Evan Harris 38:15
Wow. I think privacy, as you said earlier, is not a consumer issue; it should be treated as a right. In the social media bill of rights that I've been working on in California, one of the rights that we have been talking about is that people should have the right to have their privacy settings set at the maximum level of privacy as the default, and then, if they are offered something that makes them really want to give away some of their privacy rights, they should have the right to have that explained to them in a really clear way, and then have the ability, in an informed manner, to consent to giving away some of their personal data or personal information. I was very excited when I saw Apple introduce this new feature where, when you install an app, you get to choose whether the app gets to track you, but I hate the language around it, and the language around it honestly scares me, because the button you get to click is "Ask App Not to Track." Why do I have to ask the app not to track me across other applications? I would like to tell the app not to track me. Why am I in this position where I have to politely ask, and I have no ability to know if it has decided to grant my request? I mean, why ask? I know it's progress to even have the ability to do that, and so now, every time we install an app, or almost every time, we get this option, and the app developers slip us these questions: in order to provide you with a great user experience and to give you all these great features, we really want you to click on allow, to let us track you across all your other apps and everything you do in other places, all over the Internet. Why do I have to answer that question every time? Why not just at the device level? I'm going to tell it I'm never saying yes to that question. And why, with cookies, do I have to navigate through that thing every time, through these misleading, what are called deceptive design patterns? They're designed to get you to take all of the cookies and to give away all your privacy. Why do I have to do that? Why can't I just set that at the browser level, at the device level, and just say, no, never, I don't want to be tracked, I don't want any of these cookies that are not absolutely essential, and I don't want to be misled 10 times a day? And also, there's always the X, and you never know, when you click the X on the cookie thing, did the X mean I did consent or I didn't consent? It's a bizarre situation that, years into this, we are still just being taken advantage of; the way that the companies are allowed to engage with us and talk to us, as people who should have rights to privacy and rights to our data, is just trying to confuse us into giving these rights away. I just don't think it should be like that. So my wish would be that privacy is a right, that we have a right to automatically decline all of the cookies that we don't want, and that automatically no app should be able to track us anywhere else, and probably shouldn't be able to track us in that app, unless we give them permission. And companies shouldn't be able to come to us trying to deceive us into giving that stuff away; they should have to ask us very clearly, and maybe pay us. I think it's not unreasonable that people should get paid when their data is accessed, not that we should have to pay not to have our privacy violated, which is a model that a lot of people are advocating for.
Debbie Reynolds 42:44
I agree wholeheartedly, so I support you in your wish. I co-sign that for sure.
David Evan Harris 42:51
Thank you so much. Debbie, I appreciate that.
Debbie Reynolds 42:54
Well, it's been amazing to have you on the show, and I love the work that you're doing. I'd love for us to be able to find ways we can collaborate in the future.
David Evan Harris 43:04
Yeah, thank you, I'm looking forward to it, and thank you for the great work that you do, bringing easy-to-understand information about these really challenging topics to a broad audience. I love what you're doing; keep up the good work, and thank you so much for having me on.
Debbie Reynolds 43:23
Thank you again, and we'll talk soon.