"The Data Diva" Talks Privacy Podcast
The Debbie Reynolds "The Data Diva" Talks podcast features thought-provoking discussions with global leaders on data privacy challenges affecting businesses. This podcast delves into emerging technologies, international laws and regulations, data ethics, individual privacy rights, and future trends. With listeners in over 100 countries, we offer valuable insights for anyone interested in navigating the evolving data privacy landscape.
Did you know that "The Data Diva" Talks Privacy podcast has over 480,000 downloads, listeners in 121 countries and 2,407 cities, and is ranked globally in the top 2% of podcasts? Here are some of our podcast awards and statistics:
- #1 Data Privacy Podcast Worldwide 2024 (Privacy Plan)
- The 10 Best Data Privacy Podcasts In The Digital Space 2024 (bCast)
- Best Data Privacy Podcasts 2024 (Player FM)
- Best Data Privacy Podcasts Top Shows of 2024 (Goodpods)
- Best Privacy and Data Protection Podcasts of 2024 (Termageddon)
- Top 40 Data Security Podcasts You Must Follow 2024 (Feedspot)
- 12 Best Privacy Podcasts for 2023 (RadarFirst)
- 14 Best Privacy Podcasts To Listen To In This Digital Age 2023 (bCast)
- Top 10 Data Privacy Podcasts 2022 (DataTechvibe)
- 20 Best Data Rights Podcasts of 2021 (Threat Technology Magazine)
- 20 Best European Law Podcasts of 2021 (Welp Magazine)
- 20 Best Data Privacy Rights & Data Protection Podcast of 2021 (Welp Magazine)
- 20 Best Data Breach Podcasts of 2021 (Threat Technology Magazine)
- Top 5 Best Privacy Podcasts 2021 (Podchaser)
Business Audience Demographics
- 34% Data Privacy decision-makers (CXO)
- 24% Cybersecurity decision-makers (CXO)
- 19% Privacy Tech / Emerging Tech companies
- 17% Investor Groups (Private Equity, Venture Capital, etc.)
- 6% Media / Press / Regulators / Academics
Reach Statistics
- Podcast listeners in 121+ countries and 2,641+ cities around the world
- Over 468,000 downloads globally
- Top 5% of 3 million+ globally ranked podcasts of 2024 (ListenNotes)
- Top 50 Peak in Business and Management 2024 (Apple Podcasts)
- Top 5% in weekly podcast downloads 2024 (The Podcast Host)
- 3,038 average 30-day podcast downloads per episode
- 5,000 to 11,500 average monthly LinkedIn podcast post impressions
- 13,800+ monthly Data Privacy Advantage Newsletter subscribers
Debbie Reynolds, "The Data Diva," has made a name for herself as a leading voice in the world of Data Privacy and Emerging Technology with a focus on industries such as AdTech, FinTech, EdTech, Biometrics, Internet of Things (IoT), Artificial Intelligence (AI), Smart Manufacturing, Smart Cities, Privacy Tech, Smartphones, and Mobile App development. With over 20 years of experience in Emerging Technologies, Debbie has established herself as a trusted advisor and thought leader, helping organizations navigate the complex landscape of Data Privacy and Data Protection. As the CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC, Debbie brings a unique combination of technical expertise, business acumen, and passionate advocacy to her work.
Visit our website to learn more: https://www.debbiereynoldsconsulting.com/
"The Data Diva" Talks Privacy Podcast
The Data Diva E202 - Meghan Anzelc and Debbie Reynolds
Debbie Reynolds, “The Data Diva,” talks to Meghan Anzelc, President and Chief Data and Analytics Officer of Three Arc Advisory and Chief AI Product Officer. We discuss her expertise in data and AI and the importance of integrating these capabilities into organizations responsibly. She stresses the need to align AI with business strategy and problem-solving rather than succumbing to the hype surrounding AI.
The conversation also explores the evolving dynamics of board composition based on organization size, emphasizing the critical role of technologists in larger organizations. Additionally, the importance of a diverse blend of expertise in the boardroom and the need for continuous learning and supplementation of skills and experiences were emphasized.
The discussion also touches on the multifaceted privacy concerns related to AI tools, the critical role of data provenance and lineage in AI governance, and the challenges and best practices for implementing AI in organizations. Anzelc and Reynolds emphasize the importance of documenting data and building governance muscle, articulating problem-solving approaches, defining metrics and KPIs, and implementing monitoring frameworks to ensure AI solutions' successful implementation and ongoing performance. The conversation provides valuable insights for organizations navigating the complexities of AI implementation and the responsible and ethical use of AI, and closes with Anzelc's hope for Data Privacy in the future.
Many thanks to the Data Diva Talks Privacy Podcast Privacy Visionary, Smartbox AI, for sponsoring this episode and supporting our podcast. Smartbox.ai, named British AI Company of the Year, provides cutting-edge AI solutions. For more information about Smartbox AI, visit their website at https://www.smartbox.ai. Enjoy the show.
40:56
SUMMARY KEYWORDS
ai, organization, data, people, tools, companies, board, talk, privacy, capabilities, governance, nuance, thinking, ways, important, piece, outputs, technologist, experiences, industry
SPEAKERS
Meghan Anzelc, Debbie Reynolds
Debbie Reynolds 00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world, with information that businesses need to know now. I have a very special guest on the show, Meghan Anzelc. She is the President and Chief Data and Analytics Officer of Three Arc Advisory. Welcome.
Meghan Anzelc 00:40
Thanks, Debbie. I'm really glad to be here.
Debbie Reynolds 00:42
Well, it's very nice to meet a fellow Chicagoan. We happened upon each other on LinkedIn. I had been a guest on Pamela Isom's AI or Not podcast, and you made a comment about something I said, which was so cool. I looked up your background and thought, oh my God, this is exactly who we need on the show: someone who's an expert in AI and governance, which I think is what we're lacking right now. But why don't you introduce yourself and tell me about your career journey into your current role?
Meghan Anzelc 01:19
So my whole career has been in data and AI and bringing those capabilities into organizations in ways that are both responsible and that add actual value. My educational background is all in physics, so I'm a scientist by background; I have a PhD in particle physics. I then worked in the financial services and insurance industries doing data science and predictive modeling. That's an industry that's been using AI for 30 or 40 years, for everything from pricing insurance products to claims fraud modeling to operations analytics. Then I switched industries and went to executive search, not as a recruiter, but really to help figure out how to leverage the data they had to better serve clients and candidates. That's a less regulated space, but one where, in my opinion, you have to be thoughtful and careful about how you use data to predict things about people, in particular because at those executive levels, you're talking about historical data that's overwhelmingly white and male, so it's very easy to uncover proxies for gender and ethnicity when what you really want to get to is the experiences and skills that matter. What I'm doing today is consulting with companies on these capabilities, helping organizations wrap their arms around what to do and how to get started. Lots of people are, I think, overwhelmed and struggling to figure out what to do, and given my practitioner background, we have a very pragmatic and practical approach. We generally will tell you that AI is not the right solution for most problems; it's really only appropriate for a small subset. We really encourage organizations to focus on their business strategy and the problems they're trying to solve, and then figure out where AI can be used. In the hype cycle, there are a lot of folks saying, oh, I want to use AI, let me go find a place to put it, whereas we really recommend starting with the problem and then working backwards to see if AI is the right solution.
Debbie Reynolds 03:25
Wow, I think you're answering all my questions there. Especially because of the democratization of AI, it's now normal parlance for people, and they feel like they can reach out and touch it when they're using things like Generative AI tools, and they hear a lot about it in the news. I think it's getting companies excited about it in a way that may not be helpful; as you say, they're excited, they want to use it, but they don't really have a good use case for it. What do you think?
Meghan Anzelc 04:01
Yeah, I definitely think that happens, and I think there's a part of that that's okay. I wouldn't want to discourage anyone from experimenting or trying out these tools, because to your point, that democratization is actually really important and really different from where we were just a few years ago; you no longer have to be a really large organization with a big budget and a huge team to utilize these capabilities. So that part is really positive and exciting. Where organizations aren't sure what to start with, but they do have that excitement and interest, to me it's about channeling that interest into a safe area to play in. For example, could you try using some of the text Generative AI tools, ChatGPT or Gemini or Claude, to help you draft internal communications? There you have a human in the loop, you're not automatically sending things out, but you're testing and trying out tools for a particular purpose, even if internal communications isn't a real problem for your business, even if it's not a real value add.
Debbie Reynolds 05:17
Absolutely. So you touched on something I would love to dig deeper into, and that is risk. I tell companies this all the time: start with these lower-risk, smaller use cases where you can dip your toe in the water. I see people in these news articles saying, yeah, AI is going to cure cancer, or it's going to end the world in two years, or something like that, as opposed to asking, what are the practical problems they have, or what are things that can be automated in ways that maybe they weren't able to do before because it was too expensive? What do you think?
Meghan Anzelc 06:02
I think absolutely that starting small, with a few pilots or proofs of concept, can be a really good way to get started. When you're thinking about risk for the organization, to me there are also a couple of other dimensions. In addition to starting with the business strategy and what problems AI can help you solve, that business framing around what's important for your organization, I think another key piece a lot of people miss is really thinking about the risk tolerance of your organization. This is something that's important for board directors and the executive team to consider, just as they would when the organization is considering other kinds of strategic moves. How does the organization think about risk? As a simple example: are you comfortable as an organization partnering with early-stage startups, seed, Series A, or Series B, or is that too risky for you, so that you only partner with vendors and organizations that are more well established and have a longer, more stable track record? So I think that risk tolerance framework is an important one as well. The third thing organizations should be thinking about is that the underlying risks of AI themselves are not particularly new. It's still the same sorts of things we've been talking about forever, frankly, around privacy and intellectual property leakage and the potential risk of miscommunicating or having an unintended outcome. What is different with AI is that the likelihood of different risks is different, and so the ways you've managed risk may need to be adjusted, because risks that used to be very low on your priority list might now need to be moved up. And the pace of change and evolution of these capabilities is so rapid, and continues to be so rapid, that I think you need to revisit that view of the risk framework every so often, probably more frequently than you used to, just to make sure that your approach to managing risk still holds today as well as it held six months or a year ago. So those are a few things that organizations sometimes miss and that, in my opinion, they should be thinking about.
Debbie Reynolds 08:32
Yeah, I agree with you wholeheartedly. Let's talk a little bit about governance. Governance, for many years and for some companies, has been a dirty word, because when they hear governance, they don't hear profit. They hear, oh my God, we have more money we need to spend, or is it more compliance-based? But I think we're moving into an age where we need to reimagine what governance means, especially as more people take on AI, using data in ways they have never used before and gaining capabilities they never had before. What are your thoughts?
Meghan Anzelc 09:12
Yeah, I think there are three things around governance to highlight. One is that a lot of the conversations I have with organizations are to help give them some of the history of how these capabilities have been used. As I mentioned before, in the property and casualty insurance industry they've been used for 30 or 40 years. That's also true in other parts of financial services. Last year, Visa celebrated its 30th year of using machine learning algorithms. AT&T has spoken publicly about using AI for many decades, and those are all highly regulated companies and industries. So when I'm talking to organizations, they often find it reassuring to hear that AI can be used in a way that's responsible and ethical, that complies with regulation, and that adds financial value to organizations. So one piece is demystifying the idea that profit isn't possible with governance; it is, and there are many, many organizations proving that. The second piece is that I also talk to lots of organizations who tell me, AI is not part of our strategy, we don't see it adding value, we don't have any plans to use it, therefore we don't need to do anything about it. Maybe AI shouldn't be a critical part of your business strategy, but the reality is that most knowledge workers, I think the most recent study I saw showed over three quarters, are using AI tools in their day-to-day work. Whether or not their organization approves those tools, whether or not there are any guidelines, employees are using AI tools in their work because they see the benefit; it makes things less frustrating for them. So even if you don't have AI as a key part of your business strategy, at minimum, in my opinion, as a board you should be providing some light governance for how AI can and should be used at the organization, and where it shouldn't, so that there are some guardrails and the organization is protecting itself. Because the reality is people are using these tools, whether you know they are or not. Number three is that I sit on the advisory board of the Athena Alliance, which is a for-profit, private organization, and as part of my work with the Athena Alliance, I co-chaired their AI Task Force and was one of the co-authors of their AI governance playbook. As a resource to folks, it is freely available to download from their website, and I'm happy to give you the link for the show notes. It is a very practical and pragmatic guide across a handful of pillars, really directed at board directors and executive teams and focused on folks who aren't technical by background, to help them start thinking about what some of the considerations are, the questions the board should be asking of themselves, and the questions board directors should be asking their management teams. So that might be a useful resource for listeners if they're also struggling with where to get started or how this applies to them; there are some very practical, actionable pieces of that guide that might be useful to folks.
Debbie Reynolds 12:42
Yeah, the Athena Alliance, that's Coco Brown; I met Coco in Spain. Tell me a little bit about boards. There has been a lot of talk over the last several years about board makeup and having people on the board who really understand technology, not just the bread-and-butter business of the company but actually the operation. So there's been talk about having more people on boards who understand cyber, more who understand privacy, and also people who understand AI. How do you see that shift? Or is it shifting?
Meghan Anzelc 13:28
Yeah, I think it is shifting, but I do think there's some differentiation depending on the type and size of the organization. What I have found is that most smaller organizations where data or AI is really the focus of the company, a lot of the startups in this space, generally aren't looking for a technologist on their board; instead, what they really need is a technologist who serves as an advisor to the organization, or perhaps consults on technology, but they don't actually need a fiduciary board director with that background. When you get into medium and larger-sized organizations, particularly where some form of technology, whether it's AI or cloud or something else, is really critical to the business strategy and the long-term value creation of the organization, that's where I think having technologists on the board is really critical. The other piece is that nobody wants someone who's a one-trick pony, who only does one thing. So the other piece is for technologists to make sure they're making clear to boards what they offer. Part of what I talk about is, yes, the technology of AI is important, but sometimes we focus only on the technology when, in fact, from my perspective, boards should also be spending time thinking about the talent strategy implications, because those are huge, both in the near term, around how you give people guidance on what to do and what not to do, and for the longer-term talent strategy and workforce planning of the organization. How might roles change? How might entry-level jobs be very different in how you train and move up in an organization? How might your talent strategy needs shift over time as AI becomes more embedded in the way that we work? So I think that's another thing for folks to consider: what are the different aspects of your experience that you can bring to a board that are valuable? In particular, I think that blend, as you were highlighting, Debbie, of understanding the actual business, the actual operations of the organization and how it works, is really critical to have in the boardroom. And then for boards, I think it's also important to keep in mind that no one technologist can know everything about every type of technology. I know a little bit about cyber, but that's not my area of expertise; I know enough to start asking the right kinds of questions and then to know where my knowledge ends and where the board might need to bring in someone with deeper expertise for a particular issue. So the other piece is that blend of who you need actually sitting on your board, and how you add to and evolve the learning of the board over time, with specific topical experts or consultants coming in at times to help supplement the skills and experiences of the board members.
Debbie Reynolds 16:47
Excellent. I know people are going to love this, because I get a lot of questions about boards, women on boards, and how to get on boards, and I think you've laid it out very well and very succinctly, so people can really understand it. Tell me a little bit, from your experience, about how privacy intersects with AI, because I think a lot of people don't understand the interplay between those two things.
Meghan Anzelc 17:16
Yeah, I think there are a few different dimensions to it. One is thinking about the data that AI tools are trained on and how privacy folds into that. We've seen things around intellectual property concerns, for example, in Generative AI tools that do image generation, where there are concerns about copyright infringement, but privacy can also be an issue, depending on where the data is sourced from and whether it was sourced in responsible and ethical ways. Then there are the actual outputs from the AI tools and how those get used, and particularly how they're designed to ensure that privacy is not violated in some way outside the intent of the tool, whatever it was built to do. So another piece is the people who are actually building the tools thinking about privacy and how to protect it in appropriate ways as they construct, design, and implement the tool. And then third is the users of the tools. We've seen examples where people can effectively hijack tools, particularly Generative AI tools. There have been news stories about somebody who managed to buy a car for $1 through a dealer website chatbot because they convinced the chatbot to do something it clearly wasn't intended to do. So you can envision ways to convince a tool to give you private information that it's not supposed to, if it's not designed with the appropriate guardrails in place. Beyond the direct implications of AI for privacy, I think the other part is the complexity of the landscape in terms of thinking through and understanding how different tools may be using your data or your content for other purposes, in ways that may feel to you like a privacy violation even if they follow the terms of service. That gets really complicated and difficult pretty quickly. Zoom is actually one example: they changed their terms of service, I think, three or four times in the summer of 2023, and we ended up stepping away from a paid subscription to Zoom after that final version last summer still allowed Zoom to store and access the content of meetings. They made clear that they wouldn't use the data to train AI models, but I think there's a misunderstanding there: that doesn't mean they can't use the data to run AI models. There's some technical jargon that sometimes gets thrown into terms of service and privacy policies where, if you don't really understand the details of how these tools are built and utilized, it might sound like it's okay to you and meets your criteria around privacy, but sometimes there's nuance where you really need more expertise to help you weigh in and make decisions about what you want to use. I bring up that example specifically because Zoom is not an AI-centered tool; it's not a tool we think of as an AI capability, so you can think about this across any tool you use every day. Obviously, none of us has the capacity, or necessarily the training or the interest, to go read 500 different tools' terms of service. So I think this is going to continue to be a challenge, both for organizations and for us as individuals: how we make choices about what we want to use and what we don't, when it can be tricky and difficult to really dig in and understand what you're actually signing up for in plain English, rather than the legal jargon of a lot of the disclosures.
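To make the guardrail idea above concrete, here is a minimal sketch of one kind of output check a chatbot deployment might run before sending a reply. The patterns and function names are illustrative assumptions, not any specific product's design:

```python
# Minimal sketch of an output guardrail: scan a chatbot reply for
# personal identifiers before it is shown to a user. The regex patterns
# here are illustrative only; a real deployment would need far more
# robust detection (and input-side checks against prompt injection too).
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def safe_to_send(reply: str) -> bool:
    """Return False if the reply appears to contain private identifiers."""
    return not (SSN_PATTERN.search(reply) or EMAIL_PATTERN.search(reply))

reply = "Sure! That customer's email is jane.doe@example.com."
if not safe_to_send(reply):
    print("Blocked: route to human review instead of sending.")
```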
Debbie Reynolds 21:40
I want to talk a little bit about data provenance and data lineage. I know you heard me talk about this on that podcast, but I think AI is changing, and will change, the way organizations think about governance in that way, because there is a lot of data in organizations. A lot of times, companies lose track of data as it moves through the organization: it gets duplicated, it gets split out in all these different ways, and people don't know where data is. To me, that's one of the biggest challenges companies have, tracking that data. In an AI world where companies are using this data, and there's more regulatory scrutiny, especially around higher-risk uses of AI, they really have to track that data all the way through the data lifecycle. I just want your thoughts on that.
Meghan Anzelc 22:35
Yeah, I think here again, looking at some of the highly regulated industries can be really helpful, because they have often spent a lot of time figuring out how to do that better than others, and I agree with you. Frankly, if AI usage helps companies better keep track of their data, that's better for everybody and has benefits beyond any AI application specifically. I think you're right, too, that especially with a lot of the evolution of AI capabilities over the past few years, it really allows organizations to unlock value from the unstructured data they have that they may not have made use of in the past. If you're starting to track, okay, where is this data being sourced from, even inside the organization? Where am I pulling it in from? Where am I storing it, what am I doing with it, and where do the outputs go? That documentation, as I think you and I know, is really critically important, and hopefully it will help build the muscle in organizations to understand that the ability to trace backward really matters. Having been in some of those highly regulated spaces, you don't want a call from a regulator and then not be able to dig that out, and I've been in the position of having to dig through code that someone before me had written with nothing documented, which is one of my least favorite things to see, because then you don't know where the data went or what it was. So I think some of it, too, is the experiences, and some of those bruises and scars, of people who have been through this before. That's another set of experiences and skills that can be really valuable in organizations. As they're thinking about how to build some of this governance muscle and how to encourage people to document the data, it's not the most interesting work, but it's important and valuable, and it can be helpful to draw in talent from organizations and industries that have more experience with this, especially people who have had to deal with the fallout when it wasn't done correctly. I always find those people are the best at setting up the frameworks, training others, and bringing other people along, because they've had to live through the experience of things not being in place and scrambling to deal with it when it became important to the organization.
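As a concrete illustration of the documentation habit described above, here is a minimal lineage-record sketch in Python. The field names, dataset names, and structure are assumptions for illustration, not a specific tool or standard:

```python
# Minimal sketch of a data lineage record: note where data came from,
# what was done to it, and where its outputs went, so you can trace
# backward later. All fields and names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    dataset: str                 # e.g., "claims_2023_q4" (hypothetical)
    source: str                  # where the data was pulled from
    storage_location: str        # where it now lives
    transformations: list = field(default_factory=list)
    downstream_outputs: list = field(default_factory=list)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Record each hop as the data moves through the organization.
record = LineageRecord(
    dataset="claims_2023_q4",
    source="warehouse.claims_raw",
    storage_location="s3://analytics/claims/2023q4/",
)
record.transformations.append("dropped direct identifiers; normalized dates")
record.downstream_outputs.append("fraud_model_v3 training set")
print(record)
```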
Debbie Reynolds 25:16
Yeah, I want your thoughts about the care and feeding of AI systems in organizations. I think this is especially important for companies that may be dipping their toe in the water of AI and thinking, okay, we'll set this tool up, and we'll set it and forget it; I don't need any help from an AI person, I don't need someone with special skills. That's just not true. These systems need care and feeding. They need tuning. They need a lot of expertise that companies new to the AI space may not have internally. But give me your thoughts on that.
Meghan Anzelc 25:58
Yeah, so I'd say the first step is to go back to the things we've talked about a couple of times: what is the problem you are trying to solve, and how does this solution help solve that problem? Really spend time thinking that through and articulating it upfront, before you even design the solution, much less build or implement it. In addition, think about the metrics and KPIs you're going to use to measure the outcomes once it's implemented, and again, spend time thinking about and articulating what those should be before you start building the tool. So actually think about how you will measure and monitor. To your point, Debbie, that should include both how you know whether you've been successful and are actually achieving the outcomes you intended, and how you monitor the outputs to make sure they are still working the way you thought they would. That can cover anything from model drift, to data feeds that broke so the model stopped receiving whole data sources, to coding changes in some field somewhere that the model doesn't know how to handle, so you stop getting outputs for some subset of your business; I've seen all of these. There can be a range of reasons why you want to do that monitoring, but it is important to think about it and to make sure the requirements for monitoring are built in, that it is part of the project. I see lots of organizations do exactly what you were describing: building, or partnering with a vendor and implementing something, and only after the fact thinking about how they might want to measure or monitor, and it's much, much harder to design and build those monitoring frameworks after the fact. The other piece is that I think a lot of people are doing what you said, that set-it-and-forget-it, saying, oh well, I'm partnering with a really blue-chip vendor, they've got a great responsible AI framework, clearly they know what they're doing, I don't need to do an independent evaluation or independent monitoring. I think that's a mistake for a couple of reasons. One is that no one knows your organization better than you; the vendor is going to be designing solutions that work as best they can for your organization and your industry, but they won't know all the nuances, and so things can pop up. The other is that sometimes there's nuance in how things are constructed and how they're used that isn't obvious, even if you've got a great legal and compliance team you're partnering with. As an example, we were talking with a Fortune 100 company that had partnered with a blue-chip vendor, a name all of us know and probably nearly all of us use every day, on an internal deployment. What they found, after a month or so, is that there was material non-public information that, one, was being fed into the tool that they didn't know the tool was going to access. Then, two, that material non-public information ended up being made available to others internally at the company who did not have the right to access it. Immediately, there was fear that the material non-public information might also have been disclosed externally, so they were doing a lot of backtracking to try to put stops in place, back things off, and rethink things through. This is a Fortune 100 company.
It's not like they don't have people at the organization who know how to read through the terms of service and so on, but there is real nuance in how some of these tools are built and deployed, and you really do need somebody who's been there, done that, to help you kick the tires and make sure that the way you're using these solutions fits your organization. Now, that doesn't mean you necessarily need to hire a Chief AI Officer or build up a big team, but making sure you're bringing in expertise for specific areas or specific designs is, I do think, really critically important. So that's where I think there can be a bit of misunderstanding, this presumption that I can just partner with somebody who's well known and they'll take care of everything for me. We've also seen companies end up with the underlying data changing, and I think this has shown up in the news a couple of times, where your vendor may deploy a new model version, and if you haven't frozen to a particular version on your side, you may end up with outputs today that don't match the outputs you were getting yesterday. That can cause production issues and workflow issues, and issues with customers and suppliers and clients and employees, all of that. So there are some nuances to these tools that I do think really require deep expertise. But that doesn't mean you necessarily need to hire a whole bunch of people; it depends, again, on how much AI is embedded in your organization's strategy and how critical those capabilities are for your organization's future and success.
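A small sketch of the version-pinning point above, using an OpenAI-style chat API as the example vendor; the snapshot name is one published example of the pattern, and other vendors offer similar pinning:

```python
# Minimal sketch of freezing to a particular vendor model version.
# A floating alias (e.g., "gpt-4o") can silently move to a new model;
# a dated snapshot stays fixed until you deliberately upgrade, so
# yesterday's outputs and today's stay comparable.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PINNED_MODEL = "gpt-4o-2024-08-06"  # dated snapshot, not the moving alias

response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[{"role": "user", "content": "Summarize this policy change."}],
)
print(response.choices[0].message.content)
```

Pinning does not replace monitoring; it just makes a version change a deliberate, testable event rather than a surprise.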
Debbie Reynolds 31:34
For organizations that may be newer to AI and to using it in these new, interesting ways, how do you feel they're tackling training their employees?
Meghan Anzelc 31:51
Yeah, I think that's still a real challenge for organizations. I haven't talked to anybody yet who feels like they've really figured it out. I think there are multiple levels of training, but really the first one is making sure that the board and the executive management team have their own learning and understanding of AI: the capabilities, the risks, the governance. Really starting there is, I think, very important, because how do you train employees at your organization on a capability and on some of the guardrails if you yourself and your peers don't have that understanding to begin with? So I think that's one piece that's really critical. The other part, similar to any kind of wide-scale training program, is to think about what people really need to know in their day-to-day work and for their particular role in the organization. Hourly employees at a store who might be using an AI tool in a chatbot format to help answer customer questions may need very different training than, say, your HR talent acquisition team, where AI capabilities are being embedded into some of the recruiting tools the company is using. You may need different training, delivered in different ways, for different types of employees in different roles in the organization. So I think that can get very complex pretty quickly. But I do think it starts with the board and the executive management team, then really thinking through what the organization needs to know and how to stage rollouts of training so that it's actually achievable. And again, the same thing we said before about measuring outcomes: how do you know that the training achieved the objectives you intended, and are you building in those feedback loops so that you can adjust accordingly?
Debbie Reynolds 34:05
Very good. Let's talk about end of life. Data has, or should have, an end of life. Companies should have a good strategy around data retention and when to get rid of data. I think that's going to become a huge challenge for companies, especially as they deal in AI systems, because there may be data that's old, no longer needed, or ages out, and you need to be able to take it out of these tools and refresh it on a regular basis. But this is another area where I feel like, in traditional data systems, companies haven't done a great job on end of life, because they were keeping everything and they think everything is important. We know that if you really want your AI systems to work well, you want them to have the most vital, most important, most up-to-date information, and you don't want them dragged down with information that is not as valuable, not as important, maybe not even true. So what do you think?
Meghan Anzelc 35:09
I think you're right. It's an important consideration, and I agree with you that a lot of organizations don't give it the attention it deserves. I think there are a few ways organizations can start to think about these things, or improve if they're in the early days. One is even just making this part of the project scope: what is the lifecycle of the solution we're building? How long do we think it will be in place? How will we monitor to know whether we should sunset it sooner or later? Even just some articulation of that intent upfront can be valuable, as well as some sense of when you will revisit it. Will you revisit in a year, or in two years? Even that small habit, I think, can make a difference. Another piece is that if there is data you need to store for a long period of time for a particular purpose, you may be able to find ways to set that data aside, restrict access, or store it in a different way. This is going to vary widely depending on the actual use case, the volume of data you have, and the kinds of systems you're in, but thinking through some of those characteristics can be helpful, as can talking to whoever provides your data storage, whether that's a cloud provider or somebody else, to optimize how you're storing data, particularly for very long-term storage that may not be accessed very often but needs to be kept around for some purpose. Again, it can be complex, but thinking through those different pieces is important, both to make sure you're storing things appropriately and to optimize the spend on storage, so that you don't have everything at your fingertips in the most expensive storage solutions. Then, to your point, when will you actually delete data, and what does that look like? Again, some articulation of the principles you have around that, and of when you will have another review. I think those can be important things to consider and put to paper.
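As a sketch of "articulating the intent upfront and putting it to paper," here is one way a retention policy and review date might be written down in code. The datasets, durations, and tier names are illustrative assumptions, not recommendations:

```python
# Minimal sketch: a retention policy that records how long each dataset
# is kept, where it is stored, and when the policy is next reviewed.
# All values below are illustrative assumptions, not recommendations.
from datetime import date, timedelta
from typing import Optional

RETENTION_POLICY = {
    # dataset: (retain_for_days, storage_tier, next_policy_review)
    "support_chat_logs": (365, "hot", date(2026, 1, 1)),
    "closed_claims": (7 * 365, "cold_archive", date(2026, 1, 1)),
}

def retention_action(dataset: str, created: date,
                     today: Optional[date] = None) -> str:
    """Say what the policy requires for a dataset of a given age."""
    today = today or date.today()
    retain_days, tier, review = RETENTION_POLICY[dataset]
    if today - created > timedelta(days=retain_days):
        return "delete"
    return f"keep in {tier} storage (policy review due {review.isoformat()})"

print(retention_action("support_chat_logs", created=date(2024, 1, 15)))
```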
Debbie Reynolds 37:35
Excellent. So if it were the world according to you, Meghan, and we did everything you said, what would be your wish for either AI or privacy anywhere in the world, whether that be regulation, human behavior or technology?
Meghan Anzelc 37:53
That is a huge question. I think there's an enormous opportunity for AI to have meaningful benefit for organizations and for people, for humanity broadly, in ways that are very, very positive. My hope has always been that in this unique moment in time, when there are so many people interested in this topic and talking about this topic, we can find a way to work together so that AI is really used responsibly and ethically. I really do think that opportunity is there, and we have lots of history of organizations using these capabilities well to point to, so there is a path to achieving that. The other part is to keep an open mind and to continue to adapt as things evolve. David Bowie was interviewed by the BBC in, I think, 1996 about the Internet and what it was going to do to the music industry. The interview is fascinating, but there's a point at which David Bowie says something along the lines of, we don't know, and we can't possibly envision, where this will go, and so we have to keep an open mind. I thought that was absolutely brilliant, because in 1996 David Bowie could never have envisioned Spotify. So I think the other piece, for all of us, is to keep our minds open to the possibilities and the opportunities and to continue to adapt as things evolve over time.
Debbie Reynolds 39:30
That's amazing, and I love your David Bowie quote; I love David Bowie. Well, thank you so much. This is unbelievable. I know this is the right message at the right time for people who are really trying to sort it out and figure it out. Also, I think you're a tremendous example of women on boards who can really stand their ground and add so much value to organizations. So, what's the best way for people to get in touch with you if they want to use your services?
Meghan Anzelc 40:06
Sure. I'm easily findable on LinkedIn, so that can be a great way. Our company website is 3arcadvisory.com, that's the number 3, then arcadvisory.com, and you can get hold of us through there too.
Debbie Reynolds 40:22
Excellent. Well, I'm excited to have met you on LinkedIn, and I'm happy that we're both Chicagoans, so hopefully we'll be able to meet in person, and I look forward to possibly collaborating with you in the future.
Meghan Anzelc 40:36
Yeah, love all of that. Debbie, thanks so much for having me. It was great talking to you.
Debbie Reynolds 40:40
You're welcome, it was my pleasure, talk to you soon. Bye.