"The Data Diva" Talks Privacy Podcast

The Data Diva E193 - Nneka J. McGee and Debbie Reynolds

July 16, 2024 Season 4 Episode 193

Debbie Reynolds, “The Data Diva,” talks to Nneka J. McGee, Ed.D., J.D., former Chief Academic Officer of San Benito Consolidated Independent School District (CISD) in Texas and an Artificial Intelligence (AI) in education researcher and advocate. We discuss the critical topics of artificial intelligence (AI) and privacy in education. Nneka McGee shares her career journey, influenced by her parents, both mathematicians educated in the Jim Crow South. Her path took her from teaching mathematics to a deep dive into the potential and challenges of AI in education. The conversation opens with the importance of protecting the privacy of young students, particularly those under 13. Nneka stresses the importance of educational institutions and parents being vigilant about terms of service and data-sharing agreements to safeguard children’s privacy.

The discussion then explores AI's role in education, highlighting the fourth industrial age driven by AI and automation and its impact on teaching and learning. Nneka elaborates on the complexities of digital contracts, touching on various regulations like FERPA and COPPA in the US, and GDPR in Europe, that schools must navigate. She shares insights into the broader implications of data breaches and privacy violations, emphasizing schools' need to precisely understand and negotiate terms to protect all stakeholders.

A significant theme is AI’s transformative potential versus its risks if it is not appropriately managed. Nneka expresses her wish for a balanced approach to AI in education, advocating for decision-making that includes diverse perspectives, especially from educators. She underscores the importance of teaching students technological skills, critical thinking, and agility to prepare them for future advances, such as quantum computing, and shares her hope for Data Privacy in the future.

Many thanks to "The Data Diva" Talks Privacy podcast supporter Integral, a group that is revolutionizing health data compliance. Top tech and pharma leaders trust Integral's Privacy Workbench platform to simplify and speed up the expert determination process, ensuring compliant de-identification of sensitive datasets. No more guesswork about privacy risks or remediation options: Integral’s continuous monitoring keeps your data consistent and secure. Curious to streamline your data collaboration efforts? For more information about Integral, visit their Data Diva link: https://why.useintegral.com/thedatadiva




33:42
SUMMARY KEYWORDS
artificial intelligence, ai, people, impact, terms, data, district, kids, trauma, ensure, education, thinking, systems, understand, coppa, language, years, privacy, models, technology
SPEAKERS
Debbie Reynolds, Nneka McGee

Debbie Reynolds  00:00
Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello. My name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show, Nneka McGee. She is the former Chief Academic Officer at San Benito Consolidated Independent School District. Welcome.

Nneka McGee  00:40
Thank you, and I appreciate the opportunity. I recently transitioned from the district, but there's an awesome team there. I enjoyed my time there and wish them the best.

Debbie Reynolds  00:52
I saw you on LinkedIn because you put a lot of interesting information out there about children's education and Artificial Intelligence. So I thought, wow, we should really talk about this, because I feel like this is a super hot area where emerging technology is intersecting with privacy, especially as it relates to children: children of all ages in K through 12, but especially those who are under 13. Pretty interesting. But tell me about your career trajectory; I would also love to know how you got involved in Artificial Intelligence.

Nneka McGee  01:29
Well, definitely. Thank you again for having me today. I always start with my parents. Both of them were mathematicians, and they were actually educated in the Deep South during Jim Crow, so I did not start out thinking that I would end up teaching math, which is what I did. But I tell a story about my dad: when I was in college, he would sometimes drive to New Orleans, where I went to school, and pick me up. One time we were heading to Mississippi, and I was like, where are we going? The pathway was different, and he actually took me to see his math teacher, and I'll never forget it. I shared that story when I transitioned into education. Within three years, I was nominated as a District Finalist for Teacher of the Year, and I told that story because only years later, after transitioning from another career, did I realize the impact of that meeting, what it meant, and how somewhere out there in the space it was understood that one day I would become an educator. And then there are my parents: my mom actually was an administrator, an Assistant Dean in Engineering at The Ohio State University, so from an early age I was exposed to computers and exposed to engineering. I have this picture of myself in her office when I was young, and I tell myself, this girl always had a chance; I was always immersed in that field. When I was 10 years old, I begged and begged: I needed a computer, I had to have one. So they purchased what was called a portable computer, an IBM 5155 that weighed 27 pounds, and boy did I try to lug it around anyway. Back then you had to code; it was BASIC, and in order to make your computer work, you had to know what to do, and from there I've always had a love of computers. As I transitioned into education, I saw that students in the places I served didn't have the exposure I had, so I would work with people to get computers and set up different types of labs in the schools that I served. With Artificial Intelligence, I was always looking at different ways that my kids could learn and be excited about learning. I tapped back into Artificial Intelligence in about 2017. I had been looking at it; it ebbed and flowed. It was exciting for a little bit, then it died down. Once I started looking for lessons, I saw a couple of things that people were doing, very few in the States, much more outside of the United States, and then I started building lessons. I found a course from ISTE; they were doing an explorations course in 2018. I decided to take it, and once I did, I was like, wait a minute, I'm doing all this work on Artificial Intelligence in education and how to integrate it into classroom instruction; I would love to explore this in a doctoral program. So that led me to work on a dissertation, which I defended at the end of last year, exploring the lived experiences of teachers who are implementing or preparing to implement Artificial Intelligence in K-12 classrooms in the United States.

Debbie Reynolds  04:54
That is tremendous. That's exciting. I love the story about your parents. It's so interesting, the things you look back on that happened as a result of your parents planting those seeds, and how they have grown in you. That's phenomenal. I want to talk to you a little bit about what you said, and it's true, that the interest in Artificial Intelligence has ebbed and flowed over the years; it never really went away, but people get hot about it and then not so hot about it. Now, because of the general purpose large language models and chatbots, I feel like what it has done is really reignite that interest in Artificial Intelligence, and done it in a way that helps democratize Artificial Intelligence in some way. A lot of times, when we thought about Artificial Intelligence before these general purpose models came onto the scene, these were things in backrooms, back offices, big corporations, things that you couldn't really see or feel or touch. But now, because people can actually see, feel, or touch these things, I think it becomes more relevant in how we try to figure out ways to use it. So tell me your thoughts about the fact that people are so interested in Artificial Intelligence all of a sudden.

Nneka McGee  06:18
I have to admit, I love it. I think that it has so much potential. I looked at the world before Generative AI exploded onto the scene, not that it wasn't there, but, like you said, we're talking large language models now. Before, it was a small group of people interested in it, asking, how is it going to impact us? We see it's coming. Then for it to explode and have so many people interested in engaging with the technology, which has the potential to be very transformative, is remarkable. My research actually showed that it has driven us into what is called the fourth industrial age, where we're looking at the impact of Artificial Intelligence and automation on our society, within our career flows, our home flows, everything. So I think that it's amazing. I think there's so much promise. I call myself a cautious advocate. Someone out there is like, did you coin the term? No, I didn't; obviously, it came from somewhere else. But I'm an advocate because I understand the potential; I understand the promise that is inherent in emerging technologies that can be transformative. But there's also the peril, and the caution of ensuring that we do it right now, while we have the power to do it right. I feel like if we don't take control, for lack of a better word, or be very intentional about how we continue to deploy AI, particularly in educational systems, where my heart lies, we can lose out, and we can't be saying 5 or 10 years from now, oh no, what happened? How did we get it wrong? The foundation is being laid right now to get it right.

Debbie Reynolds  08:16
I agree with that. We chatted a little bit before we started recording about things like contracts, Terms of Service, Terms of Use, and data use, and this is a huge issue, not just in education. In education, the danger is more apparent to me because you're dealing with children who are developing; their lives are being formed right now, as opposed to an adult's. I had a chat with another woman I know, a very good attorney, and she talks a lot about Terms of Service and these agreements that we enter into when companies or organizations want to adopt Artificial Intelligence. So tell me a little bit about that area. I think a lot of people will be interested in hearing about this.

Nneka McGee  09:05
Yes, well, definitely, you're absolutely correct when it comes to school districts, in particular on the impact of Terms of Service, memorandums of understanding, and data sharing agreements. School districts have to comply with several regulations. I'm thinking of the United States; obviously, in Europe, they have GDPR. But particularly in the United States, we have FERPA, which deals with personally identifiable information. We have COPPA, which addresses parental consent for children 13 years or younger. Then we have CIPA, which focuses on libraries, and obviously classrooms, but in particular on libraries and funding, to ensure that the materials our children are exposed to are not indecent, et cetera. With that, I'll provide an example. When ChatGPT from OpenAI first came onto the scene, their terms of service said 18 years or up. So why was there so much focus on plagiarism and cheating when it first came out? I made the decision in our district to restrict access to ChatGPT because of those terms of service. How many of our students were actually 18 years or up and could use it? So while it was exciting, and everyone wanted to dive right in, we had to pay attention to the terms of service. Some of those terms of service, for example, will say that kids 13 years or younger cannot use the service. Some of them say you need parental consent. Some of them say, if you want to use our service, we cannot be held liable. Some of them will say your data will be shared with third parties, and if something happens with the third parties, we won't be liable. People have to be aware of what language is governing their information; that is essential. No one's hiding it per se; they post it. Some of them are long; they can seem very unwieldy to look through. I would say to school districts: definitely tap into your legal services. In Texas, we have a School Boards Association that reviews documents and policies for alignment and adherence. Without that, once we enter into those agreements, we are beholden to them, particularly if we don't negotiate and protect our interests, especially when it comes to our students and our staff, and not just because of AI. When we think about Terms of Service in general, like you said, it happens in every market. Let's say there's a data breach. We've all seen the news; you're a privacy expert. Many school districts, including our own, have had a data breach. Ours had nothing to do with Artificial Intelligence, but we did have a data breach. We did have people trying to hold us for ransom. This is real for districts, and then we're having to navigate these agreements and send out 20,000 letters telling people what happened. In particular, one time last year, I received a letter. I never knew that my data was being used by a third party, and yet I got a letter saying, in effect: we were a third party with a company that you directly shared your data with; we were using the data; we had a data breach. Can you imagine a family going through that and having to navigate it? In our district, we had to hold in-person meetings where we informed families not only of the issue but of their rights, and had to ensure that they had protections in place in terms of monitoring services and all of that.
It took a lot for us, and at its heart, it comes down to ensuring that we first understand the agreements we enter into and how they can impact us in the event of an unfortunate incident like the one I described.

Debbie Reynolds  13:07
Yeah, oh my goodness. And then, for example, with a data breach, a lot of the organizations say, okay, well, we'll just give you one year of free credit monitoring. It's like, well, if you're 13, you don't have credit.

Nneka McGee  13:23
Not yet, not until they give you credit, or until someone takes it, and you're 13 with a Mercedes, right?

Debbie Reynolds  13:38
Very true, very true. I want your thoughts on some of the talk about raising the age in things like COPPA to 16 or 18. We already know that organizations have trouble navigating COPPA, and a lot of them have gotten fined for that. So, to me, if these laws change and they say, okay, we're still going to have COPPA, but instead of 13 being the cut-off, it'll be 18, what do you think that's going to do to technology and education?

Nneka McGee  14:19
It will be interesting to try to navigate that for kids. Does that mean we start using more closed systems where data isn't shared or can't be released, and what does that look like? Is that even possible, given the interconnectedness of our networks? Right now, in terms of data in our district, you have to integrate with our student information system; what would it mean to not be able to do that? Protection is definitely needed; I think studies have shown the negative impact social media and too much screen time have had on our kids. It has been interesting to see that, now, Gen Z are becoming parents, and they're saying, you know what? I know what the screen time did to me. I know what being online did to me. So it's a problem that may take care of itself. However, I feel like it can have a chilling effect. What does it look like to verify? We're starting to see these age verification systems online where we're having to show our IDs and verify information, and then that data gets taken. So have we really solved the problem? Yes, we're trying to protect student data, but now, in order to verify ages, we're having to collect all of this data, which has the potential to be breached at some point if the system is not secure, or even if it is secure, somebody may be able to overcome it. So, while I do think there needs to be modification, I'm not sure what the impact of 16 or 18 will be. Does that solve the problem? Is that the problem? Or is there another problem? Is it the symptom that we're addressing and not the condition?

Debbie Reynolds  16:11
Yeah, I agree with that. I personally am always concerned when someone says that in order to solve our problem, we need more data. Me, I don't think that's true; I don't think we need to take on more risk in order to solve this problem. So I'm hoping that more technology will come along, and more people will think through these issues, in a way where we're not creating more risk for ourselves in trying to do these verifications.

Nneka McGee  16:39
I would love for them to talk to more educators, particularly because we're dealing with education. If it were healthcare and I were a physician, I would say I would love for them to talk to more physicians. And trust me, they talk to some, but I don't think enough educators, particularly teachers, are at the table to provide their input when these decisions are being made. Everyone has the best intentions; I just think there needs to be a positioning of more people at the table who are in this work, so that we can see the impact of these decisions on the front end, and not find out on the back end because we haven't taken the time to invest.

Debbie Reynolds  17:18
I agree, and it's a people process as well. I feel like sometimes we think technology can solve all of our problems, so maybe we actually need more humans in the loop to be able to do this, yeah.

Nneka McGee  17:31
Most definitely.

Debbie Reynolds  17:32
So, what's happening in the world right now that's concerning you as it relates to privacy?

Nneka McGee  17:39
Well, again, I think that right now, because of the increased interest in AI, particularly in the field of education, there are a lot of voices out there, some more knowledgeable than others, and misinformation is of great concern to me. I see people providing policy advice who've never worked in policy when it comes to privacy, and not that you can't, but if you have school systems trusting them, and you're seeing that the information is inaccurate because you've been in the field and done the work, it's concerning. My hope is not to diminish anyone else's voice but to encourage everyone to become knowledgeable about best practices when it comes to policy, because the result is kids who are negatively impacted, and that's something we can't afford. I definitely see the potential for a world of AI haves and AI have-nots, and not only in financial terms. There are some districts that are able to afford all the bells and whistles, and their kids are engaged with the appropriate and best uses of Artificial Intelligence, whereas other districts cannot afford that. Some districts don't even have computer labs, and people are shocked. Some districts still don't have district-wide Internet connectivity, and people are like, really? Yes, really. It's actually scary. What does that look like when we say we're in the fourth industrial age, that we will have a society dominated by AI and automation, and yet our students are not prepared for whatever field they want to go into, because a district has not been able to give them the appropriate preparation due to financial strains? With that come the AI haves and have-nots in terms of access. With large language models, I think it's well known that there is an inherent bias in these AI models due to how they were trained, and when I say access and opportunity, I liken it to this: we could have a one-to-one district with all the LLMs, all the tools in the world, but if a student enters an input and the output is not reflective or representative of their experience, have we really provided access, or are we reinforcing that this individual doesn't have a place in society? That is also concerning to me, because, once again, we have this opportunity to ensure that the models we're using, particularly for Generative AI, or the data sets we're using for machine learning, are reflective of the society in which we live. So those are the areas that concern me right now as it relates to Artificial Intelligence; those are two big ones.

Debbie Reynolds  20:43
Yeah, I am very concerned. The first thing you mentioned, I've spoken about as well, and that is the impact of having people who really don't exactly know what they're talking about. I'm like, you can hurt someone; there is harm here. So, to me, I take it very seriously: you want to have the right advice, you want to have the right knowledge, and you want to have people with the right skill set, because this is not for play. These things will determine the trajectory of people's lives, so it's very important that it be accurate. And around the bias issue, for people who've not experienced it: yes, it's absolutely there. If you type into a Large Language Model and ask it to show you a picture of a CEO, everybody's white, and they're all men, hardly any women. If you type in something about showing me a group of women, the women aren't working; they're hanging out with dogs or doing makeup parties. Those types of biases already existed before AI systems were created, but I think what AI systems can do will really exacerbate those harmful impacts, because they can do it so fast, and a lot of people aren't checking. They're like, okay, well, this is fine. I think as people, especially people of color, we see and feel that every day. So that's why I think it's very important to be in this space right now, and the one thing I am happy about is that all this stuff is in the code. We don't have to guess why these things happen. We can look at the code and find out why it is that way. So I'm like, bring it.

Nneka McGee  20:47
Definitely, and I'll add to that. That's so powerful, Debbie, because the other thing is that AI is not alive. AI can't think. AI is math at its core, and we attribute, what is it, anthropomorphism? I just want us to get beyond that as well, because I think it's becoming more of an issue. The other thing that is being reinforced, when there isn't that knowledge base, is the conflation of large language models with all of AI. When I'm talking about things like machine learning, deep learning, and computer vision, people are like, huh? It's like, yeah, that's Artificial Intelligence as well. And because there's still so much fear associated with Artificial Intelligence, one exercise I do when I'm providing training or facilitating professional development is to ask, how many of you use AI in your life beyond Generative AI? People will say, oh, I don't know, and I'll start to show them. Did you use your phone today? Do you use a translator to translate material? Have you used a recommender system? Have you even done a search today for information? All of those things are powered by some type of AI system. People are like, really? And it's like, yes. Unfortunately, because of the misinformation, you have way too many people, in my view, still thinking that AI dropped onto the scene in November of 2022 and that it has some type of human quality, so that's why people trust it. Oh well, Susie, or whatever the name is, told me this. And it's like, Susie is a pattern, and at the end of the day, it predicted what the output should be based on the input.

Debbie Reynolds  24:14
Yeah. I tell people, especially for Generative AI, these systems are made to give answers, but the answers don't have to be correct. You use them at your own risk, and they even have all these disclaimers now: oh, this may be wrong, or whatever. So you definitely have to check the work and understand that they're not human; they'll never be human. They're not sentient; they'll never be sentient. I get so upset when I hear people even talk about this. That is so pie in the sky, just bananas, bonkers. So we need to get down to the reality of a thing and be able to deal with it and understand not only the benefits but also the risks.

Nneka McGee  24:52
And I think it's happening. I put up a post on LinkedIn that said, we're in a bubble, and people really expressed themselves in terms of their views of the bubble, and who was who, and what was what. Again, I understand when someone says, well, this person is not an expert; why are they talking about this? I'm like, bring them to the table. We don't want to exclude anybody, because I first want to understand their viewpoint, and then I want to help educate them. We can educate each other. If I tell this person you shouldn't be doing this, they're still going to do it, but then they're going to perpetuate misinformation. So I really try to be as open as possible and as curious as possible about the voices. But definitely, that dynamic is happening. I forget the name of the hype cycle, but somebody was like, we're just going into the trough of disillusionment, and I was like, yes, we are. So I think some of the people who may have been in it for the wrong reasons will start to filter out, and then we'll come to a place where we're starting to see more research, more best practices, more things that'll help get us to that plateau. It's actually a well-documented cycle; after someone mentioned it during one of the posts, I went to look, and I was like, yeah, this is pretty much it. This is where we are, and we need to get to that place, for the reasons that I said: we cannot afford to look back 5 to 10 years from now and say, how did we get this wrong? It doesn't have to be that way at all, particularly with the amount of influence, for lack of a better word, power, that we all have to impact change and ensure AI is implemented in the right way.

Debbie Reynolds  26:39
Yeah, I love the fact that I'm seeing people like you and others who are really telling these stories. You probably know Renee Cummings; she's a senior fellow at the Brookings Institution, and she's brilliant. One of the things she talks about is data trauma. I love that phrase, because I think it perfectly encapsulates the fact that not all of us experience data in the same way, and that is the issue I have with AI, where people are like, okay, everything's all cupcakes and unicorns. For me, when I use AI, I'm like, I know people who have been harmed by Artificial Intelligence systems that weren't even as advanced as the systems are now. So, what are your thoughts about that?

Nneka McGee  27:25
The harms or the data?

Debbie Reynolds  27:29
Just the data, that idea of data trauma, where people experience data in different ways?

Nneka McGee  27:35
Oh, I agree with you, and it goes back, inherently, to what we talked about earlier: what is true access, and does the data reflect us? I mean, you see what's out here with the deepfakes now, and the trauma. Going back to kids, whose brains aren't fully developed (our brains aren't fully developed until we're, like, 25 years old), they don't understand the consequences of putting these types of pictures out. It's just an enhancement of cyberbullying, and as these advancements come out and are released, there are times, even now, when people can't discern truth from fiction. So now you have something on the Internet that can impact a kid for their entire life. The hurt, the trauma, doesn't end when it's put out; the trauma is long-lasting, because they say one of the hardest things to do is to defend against a lie, and I'm sure you've experienced that. It's actually true. So can you imagine that? The harm, and the potential for harm, is there, and how do we combat it? I don't have that answer. I know the AI literacy conversation is happening; I think that eventually it'll just be a part of digital literacy, encapsulated there. I don't think it's necessarily something separate. But will that be enough? Because we've had digital literacy, we've had schools' codes of conduct, and still people find ways to do harm. Before, that harm was analog harm; you might be able to end it, squash it. But as we know, the Internet is forever, and that's what I fear: with the amount of data and how data is used, the propensity for bad actors to use data in the wrong way, or, unfortunately, for people who don't understand that what they're doing is wrong and can impact someone for the rest of their lives. That's chilling to me.

Debbie Reynolds  29:29
Right, and to me, part of not understanding that impact is that if you're in a group that is not negatively impacted, you assume that other people have the same experience, and we know that's not true. So, if it were the world according to you, Nneka, and we did everything you said, what would be your wish for either privacy or Artificial Intelligence anywhere in the world, whether that be technology, human behavior, or regulation?

Nneka McGee  30:01
So, my wish is that we can all breathe, take a step back, and take a measured approach that provides everyone the potential to benefit from AI as intended. That is the world I would love to live in, where we take a step back, get together, collaborate in a collective fashion, bring more people to the table, be more reflective of our society, and make decisions in a measured way for our kids. Because, for example, you had your shot, I had my shot. We're in our respective positions; we're at different levels of success as adults in society. Our kids need their shot, and right now, we are in a place, more than ever before, where things outside of their control can impact their shot, so we have an obligation, I believe, to ensure that doesn't happen. The final note I'll make on that is about Artificial Intelligence, and I'll continue to push this: yes, we have to teach our kids about it, but it's more important that we teach our kids to think and become agile thinkers, with complex thinking, critical thinking, creative thinking, computational thinking, and collaborative thinking. The reason is that Artificial Intelligence is not the end of the road. Quantum computing is coming, and if we think we're blown away now, just wait until quantum computing is here. If we only teach kids that Artificial Intelligence is the end and this is everything, they will not be prepared for quantum computing. So, even more than this space we're in now, which we have to pay attention to, we have to ensure our kids know how to think, so that they are agile in the world that is being prepared for them.

Debbie Reynolds  32:09
I agree with that wholeheartedly, and I support that as well. Education, I think, is going to change tremendously because of all these new innovations in technology, so, yeah, we have to be prepared. We have to be ready for the future. I agree: teach people not what to think, but how to think. So, yeah, perfect. Well, thank you so much for being on the show. This is great. I was so happy to have you on the show. Thank you so much.

Nneka McGee  32:34
Well, thank you for having me. I definitely appreciated being a part of the show. I follow you. Definitely a fan of what you're doing out in the space, and so I'm very appreciative.

Debbie Reynolds  32:46
Yeah, and I'd love people to follow you and ask those deep questions. I actually saw that post about the bubble. That was a good one.

Nneka McGee  32:53
Thank you. There was some heat on there.

Debbie Reynolds  32:57
It was, it was.

Nneka McGee  33:00
Good heat, good heat. Very respectful, I think, for what it was worth; very respectful back and forth. But it got a little heated, and I had to step back a little bit, yeah.

Debbie Reynolds  33:14
Well, that's good. That means you're getting people really thinking and talking; that's what that dialogue really does for people. Thank you so much. I really appreciate it. I will look forward to seeing more of your work on LinkedIn.

Nneka McGee  33:27
Thank you. You as well. Likewise.

Debbie Reynolds  33:29
All right, bye, bye.

Nneka McGee  33:30
Bye, bye.