"The Data Diva" Talks Privacy Podcast

The Data Diva E231 - Soribel Feliz and Debbie Reynolds

Debbie Reynolds Season 5 Episode 231

Debbie Reynolds “The Data Diva” talks to Soribel Feliz, an AI governance and national security expert, AI career coach, ex-Meta, and former diplomat. We discuss artificial intelligence policy, governance, and its societal implications. Soribel shares her unique career journey, beginning as a U.S. diplomat serving in Europe, South America, and Washington, D.C., before making a bold transition into the tech industry. She provides a behind-the-scenes look at her work at Meta, where she contributed to election integrity and content moderation, and later at Microsoft, where she helped shape the company’s response to the emergence of ChatGPT. She also discusses her time in Congress as a Rapid Response AI Policy Fellow, where she played a crucial role in helping lawmakers understand and regulate AI, leading to her current work in the U.S. government on AI compliance and governance.
Throughout the conversation, Soribel examines the necessity of AI guardrails to mitigate potential harms while fostering innovation. She challenges the notion that regulation stifles technological progress, arguing that responsible AI development is essential to prevent unintended consequences and protect vulnerable populations. She also provides insight into the growing efforts within Congress to improve technological literacy, including specialized fellowships and collaborations with think tanks to ensure more informed policymaking.
Debbie and Soribel also discuss the broader global impact of AI regulations, particularly the EU AI Act, which has set a precedent for risk-based governance. They explore the challenges of implementing age verification laws, weighing the benefits of child protection against the privacy risks and potential barriers to access that such laws may create. Soribel emphasizes the importance of workforce adaptation, noting that as AI reshapes industries, professionals must explore new career paths and leverage transferable skills to remain competitive. Drawing from her expertise as a career coach, she offers valuable advice on transitioning into emerging fields without the need for a complete restart.
The conversation highlights growing concerns over AI’s effects on employment, economic inequality, misinformation, and data privacy. Soribel underscores the importance of making AI discussions more accessible to the public, avoiding overly technical jargon, and focusing on real-world impacts. She warns of the dangers posed by unchecked AI development but also encourages a balanced perspective that acknowledges both the risks and opportunities presented by the technology.


Soribel shares her vision for a future where AI’s economic benefits are more equitably distributed and where technological advancements align with sustainability efforts. She advocates for a more responsible and ethical approach to AI development—one that prioritizes fairness, transparency, and societal well-being.


This episode offers an in-depth look at the most pressing AI policy challenges and the evolving role of governance in shaping the future of technology.

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:13] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:25] Now I have a very special guest on the show, all the way from the East Coast of the US, Soribel Feliz. She is a Senior AI and Tech Policy Advisor.

[00:38] Welcome.

[00:39] Soribel Feliz: Thank you so much. Thank you for having me.

[00:42] Debbie Reynolds: Well, I'm excited that we get to chat today. We've been connected on LinkedIn for quite a number of years and I really like to see the things that you post and the things that you talk about on LinkedIn, especially as it relates to data privacy and responsible AI.

[00:59] Give me a background of your career trajectory and how you became a senior AI Tech Policy Advisor.

[01:08] Soribel Feliz: Yeah, well again thank you for having me. I'm really proud to be here.

[01:14] So a little bit about me: I'm a Senior AI and Tech Policy Advisor. How I got here is very interesting. Not a typical career path, not a traditional career path at all.

[01:27] So my career actually started in diplomacy.

[01:31] So I was a U.S. diplomat for eight years, right out of grad school. That was my career path. And in that career path, you stay there until you're 65 and then you retire with a pension.

[01:45] I stayed for eight years. I served overseas in Europe and South America, and in Washington, D.C. But after the eight-year point, I wanted to get much more involved in tech policy, both as it related to geopolitics and as it related to us as users and consumers.

[02:07] So I jumped into the fire. I got a job offer to go work for Facebook (Facebook at the time, Meta now), and I was working in the Trust and Safety org on election integrity, civic integrity, and crisis management.

[02:27] And then that role evolved into a regulatory compliance role. And on that team, I was introduced to algorithms and machine learning. I was working for the team that built the content moderation algorithm, and I was on the team training this algorithm that was doing a lot of the content moderation alongside human content moderators.

[02:54] So to me that was fascinating, because you have these algorithms making decisions about what goes on the platform and what doesn't. I really wanted to learn more, really educate myself on this technology.
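
For readers curious what a human-in-the-loop moderation setup like the one Soribel describes can look like, here is a minimal illustrative sketch in Python. The thresholds, the toy flagged terms, and the score_content stand-in are all hypothetical assumptions for illustration, not Meta's actual system.

```python
# Minimal sketch of a human-in-the-loop moderation pipeline.
# The thresholds, toy terms, and scoring stand-in are hypothetical;
# this is not Meta's actual system.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "remove", or "human_review"
    score: float  # estimated probability of a policy violation

def score_content(text: str) -> float:
    """Stand-in for a trained classifier returning P(violation);
    a real pipeline would call an ML model here."""
    flagged_terms = {"spam-link", "scam-offer"}  # toy features
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.45 * hits)

def moderate(text: str, remove_at: float = 0.9, review_at: float = 0.5) -> Decision:
    """Auto-act only on confident scores; route the uncertain middle
    band to human moderators, who also produce new training labels."""
    score = score_content(text)
    if score >= remove_at:
        return Decision("remove", score)
    if score >= review_at:
        return Decision("human_review", score)
    return Decision("allow", score)

print(moderate("great election coverage!"))          # -> allow
print(moderate("scam-offer: click this spam-link"))  # -> remove
```

The design point is the middle band: the model acts alone only when it is confident, and uncertain cases go to the human moderators Soribel mentions, whose decisions can feed back as training labels.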

[03:12] And so I was there for almost two years, actually, and then I went to work for Microsoft shortly after ChatGPT was released. I was in, kind of, their PR shop, observing and helping determine Microsoft's response to the world's response to ChatGPT.

[03:36] Well, that was very interesting as well. Shortly after that, I saw an opportunity to apply to the American Association for the Advancement of Science. They had a Rapid Response AI Policy Fellowship, basically trying to place AI experts in Congress to address that lack of expertise on AI.

[04:01] So I applied and I got it. And it was definitely rapid response, because I had to move to Washington, D.C. within a week of getting the fellowship. So that was really fun.

[04:13] I was there for a year.

[04:15] Fascinating work; I got to do a lot of cool things in Congress.

[04:20] Congress is a very fun place to be. And so then I just wanted to keep doing this, keep working on this. And now I work for the Department of Homeland Security, also on AI policy, governance, and compliance.

[04:37] Debbie Reynolds: Wow, I had no idea. That's such a cool path. I didn't know you were a diplomat. That explains a lot.

[04:44] I want your thoughts especially around Congress. So I think a lot of times people are frustrated when they see laws and regulations, especially the tech people. And we feel like we really need more knowledge in congressional spaces or for lawmakers or policymakers around technology.

[05:07] But tell me your thoughts on that.

[05:11] Soribel Feliz: Yeah, I know exactly what you mean.

[05:14] And I agree, our congressional leaders, the people that are making our laws, should be very knowledgeable about what's going on in the world, what's going on with these technologies.

[05:26] And I understand that there is a public perception that our lawmakers are not up to date with the latest technology.

[05:35] Like, they need to be more involved and understand it. I think that perception came from those infamous hearings back after the 2016 elections, when one of the senators asked a tech CEO, how do you make money?

[05:55] He did not know that Facebook makes money using ads. Right? So I think that has marked us and has left the perception that Congress is not knowledgeable enough about technology.

[06:11] But I think Congress also knows that that is the public perception, and they are definitely trying to break that perception; they don't want this to happen again.

[06:23] And so now there is a lot of interest, a lot of self-education or actual education.

[06:34] A lot of congressional staff are seeking knowledge and education. They are doing what they're supposed to do in terms of AI technology and all of these things that are happening.

[06:49] Also, in Washington, D.C., there's a whole industry of nonprofits and think tanks that provide free education to congressional staff. They know the members of Congress are busy and they know the staff are busy, but they're like, come to this boot camp, come to this policy event, learn.

[07:14] They go to our offices, they brief us, they leave a lot of literature.

[07:21] So we have plenty of resources that we can tap to get educated and understand what's going on.

[07:30] A lot of the staff are actually pursuing degrees, master's degrees, in these areas. So I know the perception, but Congress also knows that perception and they're trying to change it.

[07:42] They're doing their best. There's also the program through which I was able to join Congress. The Rapid Response Fellowship is one of those programs that places people like me and other tech experts into congressional offices so that we serve as a resource that is free to the member.

[08:07] We are senior advisors, we are trusted people that can help them make decisions, help them think through legislation.

[08:15] And there are lots of fellowships where we can come in and serve as these strategically placed tech experts. So I think it's a different world right now. They took note of that kind of embarrassing interaction, and they don't want to repeat it.

[08:37] Debbie Reynolds: Yeah, I can see that.

[08:39] Definitely. I want your thoughts about this argument. I call it a strawman argument, where some say we don't need guardrails for artificial intelligence because they'll stifle innovation.

[08:53] And this, to me, is a strawman argument because I feel like it's one of those circular arguments that never gets solved. But a lot of people bring it up.

[09:01] But I want your thoughts about that and how that plays into responsible AI.

[09:06] Soribel Feliz: Well, I can definitely provide counterarguments to that, coming first of all from AI ethics and responsible AI. We absolutely need guardrails. We need to protect vulnerable populations that can suffer disproportionate harms from any technology, as has already been the case.

[09:40] And with this very powerful technology, it's still going to be the case.

[09:47] We need to prevent catastrophic risks. So there are some very powerful emerging technologies out there that can have very, very significant unintended consequences that could impact humans, that could impact our civilization, our society.

[10:09] So we need to prevent those things.

[10:12] There are the ethical considerations: individual privacy, safety, our societal well-being. We have seen what happens when there are unchecked and unregulated changes in our society. We have seen how that turns out for us.

[10:29] So of course we need guardrails. I think the real argument is: how can we put guardrails in place while also encouraging innovation, while not preventing or stopping innovation?

[10:46] And I think that's going to be a very delicate balance that we need to find. I don't think it's an impossible thing to do. I think it's just taking a nuanced approach to how we develop and deploy AI.

[11:01] Debbie Reynolds: Very good.

[11:03] So the European Union has been first in terms of creating the most comprehensive artificial intelligence regulation with the EU AI Act. Tell me, what do you think the impact of that act will have on the US or other places globally?

[11:22] Soribel Feliz: Well, the EU is definitely a trailblazer, right?

[11:27] As you know, the EU was the one that set the stage with GDPR for how we approach privacy. Right.

[11:36] So I think the EU AI Act is going to be a little bit similar to GDPR in that it's going to set the tone and it's going to provide a regulatory framework.

[11:49] It provides one way that we can approach AI regulation: a risk-based classification system, placing different AI systems into different risk categories. And so it's a good approach.

[12:08] It provides a regulatory framework. Now, with AI, it may be a little bit different and more difficult, because think about the level of development of AI at the time of the EU AI Act's approval in March 2024.

[12:31] You tell me, we are at a different level now. A lot of improvements, a lot of new developments have happened between March 2024 and December 2024.

[12:47] And so it's just very hard to pass legislation, pass regulation, that will capture these new developments in AI.

[13:00] Having said that, it's good for us to have that framework, that base, that benchmark, that approach, those high-risk requirements. I think it's good for us to think about it, though.

[13:14] I think it's good for us to have risk assessments and transparency requirements and testing obligations and documentation obligations. I think it's good to think about what we will prohibit, what we will accept.

[13:31] It's a little bit different than privacy because privacy is privacy.

[13:35] Debbie Reynolds: Right.

[13:35] Soribel Feliz: It's a more stable discipline. But AI is very unruly, a more abstract kind of technology to regulate.
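
To make the risk-based classification Soribel describes concrete, here is a small sketch of the EU AI Act's four broad tiers (unacceptable, high, limited, minimal) as a simple lookup. The tier names and the flavor of the obligations follow the Act, but the example use cases and the string-matching "classifier" are simplified illustrations, not legal guidance.

```python
# Illustrative sketch of the EU AI Act's risk-based classification.
# The four tier names follow the Act; the example use cases and the
# matching logic are simplified, and this is not legal advice.

RISK_TIERS = {
    "unacceptable": [  # prohibited practices
        "social scoring by public authorities",
        "manipulative techniques that cause harm",
    ],
    "high": [          # allowed, but with strict obligations
        "cv screening for hiring",
        "credit scoring",
    ],
    "limited": [       # transparency obligations (e.g., disclose the bot)
        "customer service chatbot",
        "ai-generated content",
    ],
    "minimal": [       # largely unregulated
        "spam filter",
        "video game ai",
    ],
}

OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high": "risk assessment, testing, documentation, human oversight",
    "limited": "transparency: tell users they are interacting with AI",
    "minimal": "no new obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Map a described use case to (risk tier, obligations)."""
    for tier, examples in RISK_TIERS.items():
        if use_case.lower() in examples:
            return tier, OBLIGATIONS[tier]
    return "unclassified", "assess against the Act's annexes"

print(classify("credit scoring"))
# -> ('high', 'risk assessment, testing, documentation, human oversight')
```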

[13:46] Debbie Reynolds: What are your thoughts about the impact that all these kinds of age verification laws will have on how jurisdictions try to regulate that, or how companies address the age verification issue?

[14:05] Soribel Feliz: Yeah, that's a good question.

[14:09] I think as with any laws, right, there are good intentions, there are positive consequences, and there are some negative impacts. So let's look at it first from the intent, the intentions, of age verification laws.

[14:26] The intention is good. We want to protect children. We know that children have, in a way, been victimized by all of these new technologies, right? So we want to protect them.

[14:37] We want to protect them from online predators, from inappropriate digital interactions. We don't want predators and pedophiles and just bad people having access to our children. We don't want them to access inappropriate content, whether that is nudity or adult sexual content or any other harms that can befall children when they are left unsupervised online.

[15:09] We want to give parents more control, more ability to restrict what their children experience online.

[15:20] There is a statistic, a very interesting statistic, that says that companies already have parental controls, and parents have access to them, but only about 2%.

[15:32] Please don't quote me on this.

[15:34] It could be a little bit higher than 2%, but it's a very low percentage. Only about 2% of parents actually use those parental controls.

[15:42] Now, that's a very low number. So we want to give parents more tools and also increase transparency and increase awareness about the existence of these tools.

[15:56] However, there are potential negative consequences. There are privacy concerns, because you require the collection of very sensitive PII, right?

[16:09] And data breaches are out there, everywhere, all at once. So it's just more data out there that parents have to put out, that children have to put out.

[16:23] We Americans are a little bit touchy about data tracking, right? So there's even more potential for data tracking, because in certain instances, parents have to provide their ID for their kids to be able to access this game or this content.

[16:41] And so it's just one more piece of data that you have to give up. Another challenge is that this is very hard to do, very hard to implement.

[16:51] It's hard.

[16:52] I worked on some protection laws while I was in the Senate, and I got very educated about it. It's really hard, and companies definitely don't want to take on that responsibility.

[17:06] So the social media platforms want the app stores, things like the Apple App Store and Google Play, to take care of that. They don't want to be the ones who do all of this, because that creates friction for their products.

[17:23] So it's difficult to implement, and nobody wants to take responsibility for it. And also, kids are very smart. Kids can easily circumvent anything; they can provide false information, they can get around it.

[17:40] And then there are digital equity issues, right? For example, there are a lot of people that don't have an ID, a state ID or whatever. So that can exclude marginalized populations.

[17:54] It could create barriers to access. There's already a lot of digital inequity in this country. So if a kid needs a parent's ID to access educational content, but the parent either doesn't have the ID or they're working and can't be bothered, then that's one more barrier for people who are already a bit marginalized.

[18:19] It's hard. And there are also the free speech complications, because the First Amendment is a very important principle for us. What if there is over-censorship, and what if there is information restriction and we're not able to access information?

[18:34] So it requires a very nuanced approach to protecting children. Good intentions, but definitely some complications out there.
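
One pattern that can blunt the PII concern Soribel raises is verify-then-discard: check the birth date or ID once, keep only a boolean attestation plus a timestamp, and never persist the underlying document. A minimal sketch, where every name and the extract_birthdate step are hypothetical assumptions rather than any particular law's requirements:

```python
# Sketch of "verify then discard" age verification: the service keeps only
# a yes/no attestation plus a timestamp, never the ID document or birth
# date itself. All function and field names here are hypothetical.

from datetime import date, datetime, timezone

def extract_birthdate(id_document: bytes) -> date:
    """Stand-in for the ID-parsing / document-verification step;
    a real deployment would use a vetted verification provider."""
    raise NotImplementedError("illustrative stub")

def attest_age(birthdate: date, minimum_age: int = 13) -> dict:
    """Compute the age check, then retain only the result."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return {
        "over_minimum_age": age >= minimum_age,  # the only fact stored
        "checked_at": datetime.now(timezone.utc).isoformat(),
        # deliberately NOT stored: birthdate, ID number, ID image
    }

# Usage: parse, attest, and let the raw document go out of scope so
# nothing sensitive is persisted.
print(attest_age(date(2012, 6, 1)))
```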

[18:44] Debbie Reynolds: Yeah, I agree with that wholeheartedly. So what's happening in artificial intelligence right now that's concerning you most?

[18:53] Soribel Feliz: Give me a second to think about that.

[18:56] I think, oh, a few things. A few things.

[19:00] So one thing is mass unemployment and mass automation.

[19:06] I don't think we're there yet, but I think we should be thinking about this. All of us should be thinking about this.

[19:13] With AI having the potential to wipe out a lot of white-collar professions, it is displacing workers at a time when workers are already getting displaced. As you know, the tech job market is very hard right now.

[19:31] What if that extends to other professions, including law, health, finance, and banking? So I think we should start thinking about that. And as a segue to that, there's also economic inequality.

[19:47] As you know, this technology will make a lot of people rich, but as I mentioned, it can also cause mass unemployment, a lot of job losses.

[19:59] So imagine that: some people getting very rich and other people losing their jobs. That is going to be a very hard pill for our society to swallow.

[20:11] And it's hard on the country's wallet, right, because you lose a big chunk of the tax base. So we definitely have to invest in workforce retraining, adapting to the new economic reality, and strengthening the safety net.

[20:31] There's also the potential for misinformation, disinformation, and manipulation: the deepfakes out there, the videos, the synthetic information.

[20:42] There's a lot out there. And personally, I can no longer tell you that I know how to distinguish a real video or photo from a fake one. Your eye has to be trained for that.

[20:58] And I work in tech, so if my eye is not trained for it, imagine my mom, or someone who's older and has less exposure to technology.

[21:09] So that is very concerning for me. And there are also the privacy and surveillance concerns. Our personal data is out there, and it's being used, sometimes without our consent. Not everyone will go into, let's say, LinkedIn and change their settings to say, don't take my information.

[21:33] Most people won't do that. I did it. But most people won't because people are on LinkedIn because they want to network, they want to get a job, they want to provide a resume, they want to talk to the hiring manager.

[21:45] That's just not on people's priority list. And so our information is out there, being taken advantage of. That's very concerning to me.

[21:57] Debbie Reynolds: I agree with that. How do we get people to care about responsible AI? I feel like for a lot of people, AI right now is like a gold rush, right?

[22:08] Where people are trying to make money and trying to get in a position to, you know, champion one tech over another.

[22:17] But I feel like when we do that, some of the responsibility gets lost, because we don't really have good guardrails in place. And for me, AI is very different in terms of harm: I think there will be harms for which there won't be adequate legal redress, even if we do get laws passed.

[22:41] So if someone lost a job, or they're put on a certain track in education, because of some error or some bias in an AI system.

[22:53] I feel like unless you have the money to actually fight the system, it's hard for an individual to get any type of suitable redress for those types of harms.

[23:05] But I want your thoughts.

[23:08] Soribel Feliz: Yeah. So how do we get people to care? I think the more we can talk about responsible AI in less abstract terms and more concrete terms, I think that's how we win people over.

[23:24] Right. Because, what is responsible AI? It just sounds so squishy, right? Like Reid Blackman said, it just sounds very squishy. What is ethics? What is that? So I think once we are able to tell a story of the consequences of AI gone wrong, of how it has affected real people and can affect them in the future as well.

[23:52] I think that's when people take a step back and say, whoa, that could happen to me too. I could be misidentified by an algorithm. Oh, I could lose my job too.

[24:07] Oh, it could happen to me that a driverless car hits me because, I don't know, I was jaywalking and it didn't see me as a person; it saw me as something else.

[24:19] You know what I mean? So I think once we provide real life examples of how it could affect us personally or our families, that's when people will start to care a little bit more.

[24:34] I think we should quit the jargon, right? Talking about large language models, I think that's as technical as we should get. Then we start talking about RAG and multimodal and this and that, and we lose people. We lose them.

[24:53] So I think we should just explain it like, explain it to me like I'm five. I don't need to know all the technology behind it; tell me what it could do for me.

[25:03] Yes.

[25:04] But also tell me what it could do against me, right? And just to wrap it up, I think we should also refrain from doomsday talk. We should not go all negative or all positive.

[25:22] We should take a balanced approach to it because it is a powerful technology. It is. It can give you superpowers. I know it can. And we should not shy away from embracing it and from taking advantage of it.

[25:37] But just make sure people understand what they're getting into.

[25:45] Debbie Reynolds: All right, Soribel, so I know that you talked a bit about there being a need, because of AI, for people to upskill. So I want your thoughts about how to get people to move into new career paths, maybe moving from a non-technical path to a technical path.

[26:09] What are your thoughts about that?

[26:11] Soribel Feliz: Yeah, I think it's going to be a necessity, right? A few things are going to be deemed obsolete due to AI. So as I mentioned, there's going to be a lot of automation and some job losses, but not all is lost.

[26:29] Right. I think it's a great time for people to start thinking about the careers of the future. And there are quite a few career paths that could work for different people, and not all of them have to be technical.

[26:46] I'm not a technical person at all. And you know, I've actually changed careers twice. I changed from a diplomatic career to a tech role in trust and safety and regulatory compliance.

[27:00] And then I changed from that to being a policy advisor to lawmakers. And now I work in govtech, government technology.

[27:09] So my advice would be: take an inventory of your skills, your education, and the work that you've done, and know what you're willing to do in terms of upskilling. Know your limits.

[27:23] If you know coding is not for you, don't go into that; study something else.

[27:29] There are new areas of tech that are in high demand, like cloud computing and data analysis.

[27:38] Software development is in a bit of a transition right now, but there's AI, machine learning, quantum. So there are lots of areas of growth where you can enter relatively easily because there's low competition.

[27:55] So I'm actually a career coach, and I do a lot of work with my clients, personalized career coaching where in six months they can transition from where they are to where they want to be.

[28:11] I provide, you know, resume review, LinkedIn optimization, and just coaching in general on how they can transition from one career to another without having to start over from scratch. Because your current skills are very much transferable.

[28:28] They're transferable and they are worth a lot. And so, you know, I do that kind of work. I love doing that kind of work because sometimes career transitioners think, oh my God, what have I been doing for the last 10 years of my life?

[28:45] I have no skills.

[28:47] And that is absolutely not true. It happened to me when I was transitioning, but then I realized that I had done quite a bit in my decade in government, and I could bring those skills to any tech company, any private company, and have great impact.

[29:09] Debbie Reynolds: Very good. People should definitely reach out to Soribel for that; you're really good at that.

[29:16] So if it were the world according to you, Soribel, and we did everything that you said, what would be your wish for privacy or artificial intelligence anywhere in the world, whether that be regulation, human behavior, or technology?

[29:31] Soribel Feliz: I would go with human behavior. In an ideal world for me, we would implement a system where the wealth created by AI is either equally, or at least not so unequally, distributed, rather than, let's say, 90% of that wealth going to the top 1% and the leftovers going to everyone else.

[30:08] I would prefer at least a 60-40 split, some kind of wealth distribution where we don't just see a few people getting filthy rich from this technology; we all profit from it.

[30:25] We all prosper from it. I would also love to see us find harmony between AI development and the amount of natural resources that AI consumes.

[30:42] We could use the technology itself for better energy consumption, for more efficient energy use.

[30:49] That would be a dream of mine.

[30:52] Debbie Reynolds: Those are good.

[30:54] I really like those. Well, thank you so much, Soribel, for being on the show. I really appreciate it. And it's been a joy to watch you blossom and grow on LinkedIn and be able to share your knowledge.

[31:06] So it's great. Thank you.

[31:08] Soribel Feliz: Thank you. Thank you for having me.

[31:10] Debbie Reynolds: Yeah. Well, we'll talk soon for sure.

[31:12] Soribel Feliz: Okay.

[31:13] Debbie Reynolds: All right. Thank you.