"The Data Diva" Talks Privacy Podcast

The Data Diva E167 - Kurt Roosen and Debbie Reynolds

January 16, 2024 Season 4 Episode 167
"The Data Diva" Talks Privacy Podcast
The Data Diva E167 - Kurt Roosen and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds, "The Data Diva" talks to Kurt Roosen, Head of Innovation, Isle of Man, Government Digital Agency. We discuss various topics related to technology, privacy, and ethics. We cover Kurt's career trajectory, his work with the Isle of Man Government Digital Agency, and the challenges of balancing government regulation and innovation while protecting citizens and the environment. We also explore the implications of AI on society, emphasizing the importance of understanding the risks and opportunities of AI and the need for education and critical thinking to avoid over-reliance on technology.

The discussion also touches on online vulnerability and how companies have taken their bad habits from the analog world and moved them into the digital realm and the cloud. We discuss the concept of security through obscurity and how it has impacted companies, and share examples of companies that hackers have outmaneuvered despite their strong defenses. We also discuss the challenges of balancing responsible and irresponsible use of AI, citing examples of bias and unintended consequences in Amazon's hiring algorithms and in AI-based X-ray analysis.

Throughout the episode, Kurt and Debbie emphasize the importance of transparency, accountability, and privacy in creating a more just and equitable society. We discuss the potential risks of using data to make inferences about people's beliefs and identities and the need for relevant and ethical protections under GDPR. We also highlight the role of human rights lawyers in defending individuals' rights. Ultimately, the episode provides a thought-provoking discussion about the complex issues of privacy, ethics, and technology, and Kurt's hope for Data Privacy in the future.




43:08

SUMMARY KEYWORDS

ai, people, data, Isle of Man, government, call, thought, bias, privacy, innovation, talk, internet, information, understand, organizations, citizens, society, world, middle ground

SPEAKERS

Kurt Roosen, Debbie Reynolds


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show all the way from the Isle of Man, Kurt Roosen; he is the Head of Innovation for the Isle of Man Government Digital Agency. Welcome.


Kurt Roosen  00:43

Very pleased to be here. And that's quite a mouthful to say in one go there.


Debbie Reynolds  00:49

You and I are connected on LinkedIn, and I thought that your background and the things that you do are really interesting. I'm very interested in emerging tech, things like that. But I would love your perspective, especially since you work with governments on innovation. I don't think people associate governments with innovation, so this is even more interesting to me. But then also, it's so funny, because a lot of times when people talk about countries that have EU adequacy, the Isle of Man is on that list, right? And a lot of people don't even know where that is; they don't know what you guys are doing. So, what was your career trajectory into the role that you're in now?


Kurt Roosen  01:33

Ok, I've got a kind of long history in the IT industry. I'd like to think I was one of the first people who went through university and did an IT course to be an IT professional, more years ago than I want to admit. When I graduated, I went straight into the IT industry. I'll frame my university journey by saying this was before the Internet existed. But what that gives you is a really interesting perspective, because you were there when things were created, and you could see how things were constructed. And much as people like to say everything we see is new and glossy, everything has its roots back somewhere else. And if you go back far enough, you've kind of seen things before, just called something different. For example, we talk about Artificial Intelligence now being this wonderful new thing. But if you go back even further than me, it was the late 1950s when robo-advice or bots were actually invented and started being put into use, and Deep Blue, we know about, was playing chess with the Russians many, many years back. So I like to think that I'm from the first batch of IT professionals, people that didn't come from somewhere else, that weren't scientists that moved across, or accountants that had moved, or people of that nature. And that meant that my kind of training was really about how to make use of IT, as opposed to just having it; taking it out of academia and bringing it into the real world, so to speak. And I think that journey has really given me quite a lot of empathy for the people who actually use it, and a lot of concern about the bad use of it, and the bad use of data. And that's really what got me into the whole privacy thing, as we call it: really, the whole of how you put security together, how the economics work. I'm a great fan of a lady called Shoshana Zuboff, whose name is a very difficult thing to say, who kind of coined the phrase surveillance capitalism. And I think that she's right; she talked about a shift, of people or consumers of technology not being customers, not being employees, not being products, but actually being the raw materials of the economy, that people were using their information to sell, and to define that kind of economic strategy. That's great when it works for you. It's very, very bad when it works against you. Just to add to that flavor, in the many roles that I've had: in 2019, whilst head of IT for a bank here, we went through a data hack that was one of the biggest in the region. And I'm one of those very, very rare IT directors that was there at the start of the hack and was still in the same place two years later, remediating it. So I got to see the whole thing. And that was, even given the fact that I've been in IT for so long, a really scary thing; it was really a war at that point. A tiny little bank in the middle of nowhere that people had never heard of was essentially attacked by the Russian state, as it turned out. And you then wonder, why did they do that? Because they wanted some information. They didn't want money. They didn't want to do anything to the bank, but they wanted to know about somebody who was at the bank. And that's what really brings home to you, when you're going through that really horrible process, how important the privacy aspect is to everybody, and why it's so important to protect it.
And that's not just from those bad guys, the hackers, but sometimes from what we perceive as the good guys, the big corporates that want to use your data, sometimes not for the best of purposes. That's kind of my journey going into government and innovation. Yes, I'm often told by my colleagues that I am the only person in the Isle of Man government who has the word innovation in their job title, so there's a kind of heavy responsibility to do something with it. And I'm very much about positive change for our citizens here. Actually, when we talk about innovation for government, sometimes it can be very self-centered innovation: how can we do something better, where it's for our own benefit? Whereas I'm also looking at how we can do things better and more rationally for our citizens' benefit, and, to some extent, how we can set precedents, because we're a small government that can be quite nimble, and those precedents can be used elsewhere. We're an odd place here. I don't know whether you're aware of the term biosphere, the UNESCO Biosphere, which is a status that's normally given to kind of nature reserves and things like that. Well, the Isle of Man is the only entire jurisdiction, the entire state, that is a biosphere. So we kind of have a responsibility to look at how we merge, or how we balance, people, the environment, and also the economy within the space where all the normal economic activities take place, which is entirely different to the way you look at things elsewhere. We don't just look at the money, the capitalism, but we do look at all of the ethics and things that go around it. And that just naturally takes you, when you're in the IT world, into: how do you handle data? How do you manage it properly? How do you make sure it's secure? How do you make sure it's used for the purpose it's intended and not for an unintended purpose? And this is where you find it's hard to stop me talking. So, I'll pause.


Debbie Reynolds  07:23

I love it. That's fascinating. I love to hear your backstory. I, too, started in data fields before the commercial Internet. So, I understand exactly where you're coming from. As you were talking, I was thinking about something that maybe people don't understand, and I think it's having a huge impact on companies and the way that they do business now, in that they're having a hard time adjusting to cybersecurity and privacy. I'm going to call it security through obscurity. Where before, let's say, no one cares about this document, it's just in the back room, and I could just walk out the door with it, and nothing's going to happen. The Internet really changed that. And I feel like a lot of companies that are having trouble in the digital realm sort of took their bad habits from the analog world and just moved them into the digital and into the cloud. What are your thoughts?


Kurt Roosen  08:25

Yeah, absolutely. We have another perspective here in the Isle of Man: there's a whole load of people in the world that have no idea where it is or what it is, and that gave people an added sense of security in the old world, because we've got 30 miles of sea around us. So you kind of feel safe and protected, because no one's going to come here and attack you in any form. And then the Internet came, and of course that changed, but the attitude was even further behind here than you might have expected elsewhere. So something like that cybersecurity attack, that hack, comes as a great shock to people; they go, well, why is anyone attacking us here? Why would that be the case? Not really understanding that it isn't actually geographical anymore, and they're not picking on the Isle of Man; they're picking on the bit of the Internet that happens to be there. And that kind of attitude, "I'm safe because I can see my environment," really does change quite substantially when you go onto the Internet. And I think you're right. There are a lot of organizations that really haven't figured that out. They really don't understand that being online means being vulnerable, for everybody, however big an organization you are. I was in a bank that was a highly protected organization, and the hackers that actually found their way into the organization took four years to do it. That was four years of effort. I couldn't devote four years of effort from the bank to defending against that. So we were kind of being out-maneuvered. And there was an old adage that said that, for security purposes, you just need to be stronger than everyone else, and the hackers will go away if your defenses are too strong, because they'll go, well, there are easier things to do. In our particular circumstance, they did exactly the opposite. They said, all these defenses are very strong, so that means there must be something interesting behind them, so we'll actually continue that way. So that thing about not being caught by the bear because you can run faster than your friend really didn't work out for us. I do think, even though we've now been in this situation for so long, there is a vast number of organizations and companies and individuals who really don't understand what they're participating in. And I think that the biggest danger is that lack of understanding; the lack of transparency that would help people to understand creates problems that people aren't aware of. And that's the biggest danger in life, really, isn't it? If you're walking into something you don't understand the risks of, then that's always going to create a problem.


Debbie Reynolds  11:10

I know, for example, in AI, I tell people that people treat AI like a teddy bear, and it's actually a grizzly bear. So, if someone puts you in a room with a grizzly bear, I'm sure you'd have questions. Maybe you'd do things differently, yeah?


Kurt Roosen  11:27

No, no, no, I think, like a lot of things in IT, there's always a good side and a bad side. And that's no different from life, really. If you look at things like AI, you go, well, this can be used for fantastic things. But ultimately, you have to understand the risks; you have to understand what it can do and what it can't do before you can properly participate in that world. And if you don't, it's very dangerous: you can do things that you don't want to do, you can reveal things that you don't want to reveal, and it can introduce bias because, ultimately, it's taking its feed from us. And we haven't worked out bias, be it unconscious or conscious bias, or discrimination; if it's taking its cues from us at this moment in time, it hasn't worked that out yet either. On top of that, it doesn't have empathy, or the kinds of faults that we have. I actually started writing a book a few months ago, and I was getting into the mode of thinking that if AI draws its intelligence from what we provide to it, it's our original thought that it kind of masses together to come up with its thoughts. Then, as we start depending on AI more and more, we'll stop having original thoughts, and therefore it will have exactly the same base of information to work on. So, actually, we'll get dumber, and so will AI at the same time. That's a horrible thing to think about us doing to ourselves: that we put ourselves into a position where we stagnate because we stop thinking. And I think that the huge danger of AI is that we rely on it too heavily, and we believe it without question, without interrogation, without critical thinking. I do have a huge concern that, just generically across our society, we're moving away, from education upwards, from that necessity to criticize, to question, and to actually think: is this right or is this wrong? And to understand how to do that. There's a great book, and I'll probably quote the title wrong, which is Why Do I Need a Teacher If I've Got Google? If you read that, it sounds very threatening, although I've seen it on a lot of teachers' desks, probably looking for how to combat it. But what it really says is that you've got this information available; we should be teaching our children how to use it properly, how to be critical of it, how to go and ask, is that right or is that wrong, to be able to tell the difference between good information and bad information. And because this has all been driven by data, AI is really amplifying that. It's taking it forward quicker than we've reacted to it at all. That's where I see the danger lying. I want AI to do smart things, but I don't want people to stop thinking, because once they do, then we're on a hiding to nothing.


Debbie Reynolds  14:44

I agree with you completely, and I can't wait to read that book for sure, your book, the book that you're writing. I think about this a lot. I guess this is a corollary to it: my sister and I laugh about this a lot because of my parents. I guess they were polymaths. I didn't know that at the time; I didn't know what that word was. But I feel like I learned more from them than I did at school, right? Because they taught us how to think, taught us how to reason, not just that George Washington chopped down a cherry tree, right? It was how to do things. And so, in the digital age now, we laugh about the fact that a lot of people don't know how to read maps anymore, because they use Google, or they use Uber, or whatever it is. You still have to know how to read a map; those tools aren't infallible. They make mistakes from time to time: GPS tells me to turn here, and it's like, oh, turn into the lake? No, I can't do that. Right. So I think that you're right; I am concerned that people will use AI as a crutch in a way that makes them think, okay, I don't need to learn anything because this technology does that for me. What are your thoughts?


Kurt Roosen  15:57

And if we go to one side of that as well, we have the kind of fake news piece, where things are inserted into that. I go back to when my kids were slightly younger, and they started using Facebook. And I remember my daughter coming to me and saying, I need to send some money somewhere. And you go, why do you need to send some money somewhere? She says, because there's this person in Africa who needs our help to do this, and we need to send them some money. And I said, well, how do you know that? She says, because Facebook said so. And you go, well, have you checked it? Have you gone and found another source? That obviously hadn't occurred to her; she took it at face value, as a trust. That was a bit of an eye-opener for me, and a kind of lesson for what needed to be taught. When generative AI first came in, someone asked, I think it was ChatGPT, or a variant of it, to create a picture of salmon swimming upstream. So it went away and thought about this, and it came back with a picture of fillets of salmon actually swimming upstream, because it looked at the data, at the Internet, and there were more images relating salmon to the things that you bought in the shops than to the actual fish. So if you think that one through: if you've got enough computing power and ability, you can insert things into the Internet, and those bots will use them; they'll take them as a truth, but it can be a complete myth. If you do it enough times, if you have enough power, if you do all of those things. So I was making the point to somebody: if China decided they didn't like me, and they said Kurt Roosen is an idiot enough times on the Internet, then if you asked AI what it thought of Kurt Roosen, it would say Kurt Roosen is an idiot, because that was the base of its knowledge. That's a really scary element that can be brought in there, especially when you look at the international flavor of it, what countries can do and the efforts that they put into these things, how they try to affect elections, etc. There's a real chance that they can affect the questions and answers that are coming out of AI, to bend it in a particular way. And I find that quite challenging and something we need to think about.


Debbie Reynolds

Yeah, I love the way you're thinking about this problem. I agree 100%. Because I think, for some reason, maybe this is a bias in and of itself, where people think something coming out of a digital realm is more true than something that maybe someone told you, right? Well, I've heard that a lot. I think doctors have this problem, where you go to the doctor and bicker with them about something that you saw on Google, even though your doctor knows you, and they know your results, stuff like that.


Kurt Roosen

Think of it another way: here we're running an innovation challenge, a competition. And one of the subjects we've got in there is AI. So we're trying to think of some cool things to do in a competition, and one of the things we're putting in there, as something someone might enter into the competition, is an AI politician. Actually, when you think about that, if you've got AI absorbing all of your legislation in a very impartial way, it can tell you what the law is; it can tell you what you're able to do and what you're not, without any emotion and without any bias in there. It's just taking it from what it's seen.
Because you have to be very careful about what it ingests and where that comes from. We're kind of hopeful that we can take all of that legislation and put it in, and that this AI politician that you can ask questions of can be a kind of political adviser, but without the bias. That's actually quite a cool thing to do. But the key to it is knowing what data is going into it. If anybody had the ability to insert data into there, you can imagine how quickly that would get out of hand, because it would introduce the biases that people wanted to introduce. So, as I said, there are always two sides to the coin on this. IT and technology can do wonderful things. But I think it's not realistic to say that it can self-regulate, that you can just let businesses do what they want and they'll always do things for the right reasons. We've never really found that to be true in the past, so I don't know why it should be the case in the future. But similarly, you don't want to over-regulate and stifle these things. So probably, we're in a place now where that balance is the toughest it's ever been. But it requires the kind of collective brainpower of people who are in that space to put their heads above the parapet and say, no, no, we have to do things this way, and this is the right way to do it, as opposed to just letting it happen.


Debbie Reynolds  21:04

Yeah, I agree. I guess, a lot of times, maybe it's just the nature of the Internet, or the nature of algorithms, that they want people's eyeballs and attention. I feel like I hear a lot of those extreme arguments on one side or the other, never regulate or regulate everything, and I don't hear enough about that middle part: let's calm down, think it through, and figure out what makes the best sense for what. Hopefully, we'll have a lot more of those conversations and more level-headed folks being involved in these decisions, for sure.


Kurt Roosen  21:40

I'm relatively new to being in government. I kind of avoided it all of my career, and then got toward the end of my career and said, I need to try that. So I've been inside the government for 18 months, and it does change your perspective slightly. Because if the government is doing its job correctly, it's obliged to do things to protect its citizens and to intervene in some way. It can't just hold its hands up and say, whatever happens to you happens to you, and that's not our problem, because the government is there to take its society forward and look after its people. And yet, at the same time, the government also has to earn the income from taxes and all of those things that business generates. So you are absolutely, totally stuck in that middle ground, where you have to develop the economy, and at the same time you have to take into consideration all of those things that people want you to do to protect them and the environment and everything around them. And it's actually quite a responsibility. It's a really tough job, tougher than I'd considered when I was sitting out in the private sector saying, why isn't the government doing this? And why aren't they doing that? You start to see from the inside the real dichotomy you're stuck in, and how to deal with it is not simple.


Debbie Reynolds  23:01

Excellent. Talk a little bit about privacy; how does privacy play into the spaces that you work in?


Kurt Roosen  23:09

Well, obviously, every government is probably the largest holder of private information one way or another, when you put all the bits and datasets together. And it's also the one that's going to be most heavily criticized if it fails to protect it, from a security perspective and also a privacy one. The problem, I guess, is that a number of circumstances exist where governments create their own problems by almost breaking their own rules, because they consider, for whatever reason, whether it be national security or some other reason, that they kind of have to break into people's privacy. And first of all, I'd say that's a really tricky balance; when you are trying to protect the security of society against the privacy of everybody, there is a balance there as to how that's done. But we're kind of fortunate here in the Isle of Man; we only have 82,000 citizens. It is such a tiny little country, only a very small landmass, 29 miles high by nine miles wide; it's an island. So everybody knows everybody. There are no secrets here. And I've talked about this in the UK at government conferences: we can't hide behind a huge government and be anonymous; it's not possible. So actually, we have to take almost everything to the people. The government has to be transparent and put its hands up and say, yeah, we're doing this, and we're doing this because of these reasons, not wait for that to be found out. Because it will be, because it's a small nation. But that's actually quite refreshing, because you get people to buy into the problem and say, this is how we're solving it. You may not agree with that, but at least you know about it; it comes back to that knowledge of what's happening to you. Again, a few years back, we had a citizen confidence survey; governments do this every now and again, just to kind of play with their own heads. But we had a 75% trust rating from our citizens. So 75% of our citizens said they were comfortable with our government doing whatever it's going to do, because they trust it to do the right thing. There was a caveat to that, which I kind of inserted at the time, which was that I think perhaps that 75% of the populace thought we were so incompetent that even if we had the data, we wouldn't know what to do with it. But I do believe, even in government, that there is value to that transparency, to saying, this is what we do, and this is why we do it. If you object, you object; we take that on the chin, but we're going to try to convince you that we're doing this for the right reason, rather than keeping everything secret. And, as I say, letting people find out by themselves is, I think, a disastrous thing to do.


Debbie Reynolds  26:14

I agree with that. I think transparency for organizations, whether they be governmental bodies or businesses, is the future, right? Even if they haven't been transparent in the past, that's what is expected of them now, especially when they handle individuals' data.


Kurt Roosen  26:34

We have GDPR, as you mentioned; the Isle of Man has an equivalent of that, so it is pretty much the same as European GDPR with a few changes. But there are some little-known parts of it; people consider that it's all to do with data, but there are actually some aspects of GDPR which are about processing. For example, there is a clause that if a decision about you is made by a machine in its entirety, you have the right to know that; you also have the right, if that is the case, to ask for that decision to be reviewed by a human. That is already written into GDPR, but not many people pick it up. And it actually runs quite contrary to AI and robo-advice. That is one of the things that the EU is wrestling with at the moment, with the new AI Act and things like it: do we keep that or don't we keep it? Do we find a middle ground? Certainly here, because we have to revise our GDPR laws as well, what we'll be desperately trying to do is find that middle ground, the one that says, in certain circumstances, yes, you can make these decisions using AI, but someone has to be accountable for that; someone has to own that process. And I think one of the things with AI that's a bit disappointing is when you look at where AI is constructing books in the style of somebody: it is basically reading their books and almost copying, plagiarizing, to a large extent. You go, well, someone set it that task. So whoever set that task did that, and should have had the foresight to understand the implications, and therefore should be liable for it, not the AI. The AI is a tool. It's a very fast, very sophisticated tool, but it doesn't wander around the Internet by itself looking for things to do. People tell it to do things, and they should be held accountable for what they do. And the companies that install these things should be accountable for them, I think, the same way they're accountable for their employees and their actions in relation to other people.


Debbie Reynolds  28:43

I agree with that. Yeah, right now I think people are doing the "dog ate my homework" thing: I don't know what happened; this bad thing happened, and I'm not responsible. And it just can't be that way; it shouldn't be that way in the future. And as you say, the tools do what we tell them to do, so if a tool is doing things you don't want it to do, there's a problem. There was a story I read about: some researchers said that they had AI looking at X-rays, and the AI could tell the race of a person without actually seeing them or knowing any other information. When the researchers were asked, so why does it do that, the answer was, well, we don't know. That's not acceptable. If it's doing something that it was not intended to do, you need to look at that. That's your responsibility. It's not the AI's responsibility.


Kurt Roosen  29:37

Amazon went through that, where they were using AI to filter job applicants. And what they discovered from looking at the statistics after the fact was that there was a significant bias against women; it was hiring men. So they went back to the AI to try to figure out how it was doing that, or what was in there, because that wasn't the intent. And they went to the extreme of filtering out all of the gender information, so the AI didn't know whether an applicant was a man or a woman. But because it had access to, and was going and finding, information from elsewhere, it could figure it out. So, in the end, they concluded that they couldn't get the bias out; they couldn't figure out where the bias was coming from. So they stopped using it. That's the responsible use of this: they were checking what they were doing, they found that the results weren't what they should be, and when they couldn't fix it, they removed it from the process. But a lot of people aren't doing that, and that's the problem at the moment.


Debbie Reynolds  30:40

That's true. That's true. Also, in that Amazon example, the system was denying so many candidates that, at that rate, they would have run out of people; there would have been no one left that they could possibly hire, because it was just weeding people out, so they had to figure out what was happening with the algorithm. But you're right, you definitely have to look at that. Also, one other thing that concerns me about AI that I'm seeing, and it's not a major move yet, is that I'm hearing people say, okay, we want to use AI to do things autonomously, which I think you should not do. You need to have a human who understands what the AI is doing, not have AI go and, say, install applications. Some researcher had AI doing these autonomous tasks, and the AI decided that it was going to deny him access to the system because it felt he was getting in the way of its objective. We saw that in a movie, right?


Kurt Roosen  31:40

Absolutely, yeah. But it is kind of getting to those sorts of points. And that's why I say I think we're in a challenging moment around getting that balance between, let's call it, good and evil. Sometimes it can be stark, because if we can't intellectually figure that out, then we shouldn't be setting AI off and using that kind of lack of foresight as an excuse. I do believe we're at a turning point. I do believe that if we do it right, this can be incredibly positive. If we do it wrong, I don't think it's doomsday and disaster, but I think there are aspects of our lives that could be badly affected. And I particularly worry about this inherent bias that could exist here. Again, when I was thinking about writing the book, I was trying to think about how you arrive at some of the decisions that you make as a human. And if you think about it, there's a whole range of things in there, and I was trying to build up, okay, why did I make this decision at this particular point in my life? What affected that? And what you're looking at is actually an imperfect mind: you don't remember everything that you do; you remember certain bits of what you do. There's an example I have in the book, which was a bit of a film that I remember; the rest of the film, how it arrived there, or where it went to, I don't remember, but something stuck in my head, and that affected the way I thought about something further on. If you look at AI, it operates in a perfect world, not an imperfect one. It doesn't remember fragments. So it will come to different decisions than we would, because it's looking for the perfect answer, when none of our answers to anything we do are perfect. We do have biases, some of them positive ones. And, you know, should we do the right thing? Shouldn't we do the right thing? What determines what's right and wrong in there? What's a good thing? What's a bad thing? All of those are very subjective, and they're built up in layers and fragments of what we get over our lives. So when you start making the perfect thinking machine, actually, that imperfection is lost. And that loses what I suppose we call humanity.


Debbie Reynolds  34:03

That thing that you're talking about, exactly, I call it wisdom. I don't think AI systems can ever be wise. Just ingesting data doesn't mean that you have deep enough knowledge or understanding of it.


Kurt Roosen  34:20

On LinkedIn, I asked the question, how do you define intelligence? And the best answer I got back was that intelligence is defined by being able to derive original questions. And I thought that's quite deep, because that's not what AI does. AI doesn't come up with the questions; it comes up with the answers. Okay, that I can get; that I can figure. So if you then take that as a definition of intelligence, we're not actually creating Artificial Intelligence. We're creating machine learning. It's giving what it thinks we want to see as the answer, but it wouldn't come up with an original question. I think that's quite a cool way of thinking about it.


Debbie Reynolds  35:05

Oh, and it, too, is trying to give you an answer based on the past, and the future is not going to be like the past, so it can't know what that's going to be. I've heard someone that I know say that instead of calling it Artificial Intelligence, they call it Artificial Capability. And I think that's probably closer to what it's doing. So it's not really smart in that way. It's just doing things that would take a human longer, or doing them a different way. I'm concerned about inference in AI systems. When I bring up the Amazon example, I'm not picking on them exactly. But to me, I feel like AI systems are fancy math. So they look for patterns or different things like that, and it's the inferences I'm concerned about. Let me give an example; oh, this is a good one: Cambridge Analytica. So, Cambridge Analytica, when that thing happened, one of the researchers that was on that project said that they had a thing called the KitKat Project. What they found in the data is that, among the people that they interacted with on Facebook, if they put out an anti-Semitic message and people liked it, or something like that, they correlated that with their other data, and they said that those people also tended to like KitKat bars. So that's why they called it the KitKat Project. So does that mean that if you like KitKat bars, you're an anti-Semite? You know what I'm saying? So, I think those inferences can be made just by over-collection of data and looking at the wrong thing. What are your thoughts?


Kurt Roosen  36:48

Oh, yes, I know; there's a lady called Susie Alegre, who's a human rights lawyer but has also been involved in the data world. And she gave a talk fairly recently, where she was talking about the capabilities she had seen that had to be kind of reined back. One of them: there was a system that looked at people's faces on the street, and it could tell, with a high degree of accuracy, their political leanings just by looking at their faces. You'd think that sounds impossible, but that's what the statistics were saying. And you think, well, what's that useful for? Why would you want to do that? I had one entry into my innovation challenge last year which was about detecting race at borders. We don't have borders here, because we're kind of linked to the UK, so it was irrelevant to us anyway. But again, why would anyone want to know someone's race? What's the purpose of that? And if you go back to GDPR, and the kind of protections that are put in there, relevancy: you can hold data, but you have to show that it's relevant to something that you're doing, and you have to show why. And there are circumstances in which the Data Protection Commissioner could say, well, that's not a valid thing to do. And then, of course, layering on top of that: what do we think is ethical there? You know, why would they be doing that? China uses this kind of race detection in its community, and without knowing or investigating, I'm sure other countries do as well. But they shouldn't; we as a society see that as unacceptable. And that's where it comes back to, again: who's the society? Is it the government? Is it the people? Is it people like Susie, the human rights lawyers, who are kind of defending those boundaries? It's a tough one. But it's a battle that has to be consistently fought, and people have to be prepared to do it, to say, I'm not sure about this, and to go back to that accountability: I am going to hold somebody accountable if this happens, whether it's been raised openly or has been happening in secret. I think sometimes technology can just reflect some of the problems we have in society in general, and I think even more so now, in that we're using AI to take in information to form its view of society, and that information may not be random, may not be unbiased.


Debbie Reynolds  39:35

That's tremendous. Well, yes. Susie Alegre, she's been on the podcast, so you're in good company.


Kurt Roosen  39:42

Oh, good. Do you know she's also from the Isle of Man?


Debbie Reynolds  39:47

I did not know that. Oh, it's so cool.


Kurt Roosen  39:51

She lived back here until a few years ago, and her siblings live here, which is how we get her to come and do talks for us without having to pay a big fee.


Debbie Reynolds  40:00

Oh, wow, that's amazing.


Kurt Roosen  40:02

Yeah, she was the Interception of Communications Commissioner for a number of years. So we do have a body here that you can actually appeal to if your phone is tapped or anything like that and you think that wasn't done in the correct way. So yeah, she's a very interesting lady.


Debbie Reynolds  40:20

Very, yeah; totally, totally agree with that. So, if it were the world according to Kurt, and we did everything that you said, what would be your wish for privacy, or innovation, or IT, or cyber, anything, anywhere in the world, whether it be regulation, human behavior, or technology?


Kurt Roosen  40:41

I would kind of hop back to that: I want technology to work for us, and that's for people. I accept the fact that it has to work for corporations, because there's an economic aspect to it, but within boundaries; ultimately, nothing we do should really take away from the fact that there are people at the other end of these things, as I see it. And there's a kind of Isle of Man perspective in there. Whilst I'm not an advocate of secrecy, because that goes against transparency, I am ultimately an advocate of privacy; people have a right to privacy. And ultimately, every element of government and society should be attuned to protecting it; that should kind of be its job, and it should help people who feel aggrieved by a process or something that's happening. So, for me, it would be a kind of realization, and this sounds like something that someone would say in a beauty pageant; it's not quite world peace. But it's understanding that, actually, we'd all get on a lot better if we were a lot more transparent about what we did, and took responsibility for the things we did wrong and admitted to them, as opposed to trying to hide them.


Debbie Reynolds  42:03

I agree with that completely. Well, this has been a tremendous episode; I am so happy that we were able to connect. You know, I love your perspective. And I can't wait for your book to come out.


Kurt Roosen  42:16

Yeah, I keep saying that; I'm gonna have to stop talking about my book. I'm going on holiday soon, and part of that is, well, there's a hotel I'm going to that has igloos. So I'm gonna sit in an igloo and write the rest of the book.


Debbie Reynolds  42:31

Well, we'll be waiting. We'll be waiting whenever you're ready. So thank you so much for being on the show. This is tremendous. I'm sure that the audience will really love your insights. Spot on.


Kurt Roosen  42:45

Thank you very much for inviting me. I hope to speak to you again.


Debbie Reynolds  42:50

Yeah, absolutely. Talk to you soon.