"The Data Diva" Talks Privacy Podcast
The Debbie Reynolds "The Data Diva" Talks Privacy Podcast features thought-provoking discussions with global leaders on data privacy challenges affecting businesses. The podcast delves into emerging technologies, international laws and regulations, data ethics, individual privacy rights, and future trends. With listeners in over 100 countries, we offer valuable insights for anyone interested in navigating the evolving data privacy landscape.
Did you know that "The Data Diva" Talks Privacy podcast has over 480,000 downloads, listeners in 121 countries and 2,407 cities, and is ranked globally in the top 2% of podcasts? Here are some of our podcast awards and statistics:
- #1 Data Privacy Podcast Worldwide 2024 (Privacy Plan)
- The 10 Best Data Privacy Podcasts In The Digital Space 2024 (bCast)
- Best Data Privacy Podcasts 2024 (Player FM)
- Best Data Privacy Podcasts Top Shows of 2024 (Goodpods)
- Best Privacy and Data Protection Podcasts of 2024 (Termageddon)
- Top 40 Data Security Podcasts You Must Follow 2024 (Feedspot)
- 12 Best Privacy Podcasts for 2023 (RadarFirst)
- 14 Best Privacy Podcasts To Listen To In This Digital Age 2023 (bCast)
- Top 10 Data Privacy Podcasts 2022 (DataTechvibe)
- 20 Best Data Rights Podcasts of 2021 (Threat Technology Magazine)
- 20 Best European Law Podcasts of 2021 (Welp Magazine)
- 20 Best Data Privacy Rights & Data Protection Podcast of 2021 (Welp Magazine)
- 20 Best Data Breach Podcasts of 2021 (Threat Technology Magazine)
- Top 5 Best Privacy Podcasts 2021 (Podchaser)
Business Audience Demographics
- 34% Data Privacy decision-makers (CXO)
- 24% Cybersecurity decision-makers (CXO)
- 19% Privacy Tech / Emerging Tech companies
- 17% Investor Groups (Private Equity, Venture Capital, etc.)
- 6% Media / Press / Regulators / Academics
Reach Statistics
- Podcast listeners in 121+ countries and 2,641+ cities around the world
- Over 468,000+ downloads globally
- Top 5% of 3 million+ globally ranked podcasts of 2024 (ListenNotes)
- Top 50 Peak in Business and Management 2024 (Apple Podcasts)
- Top 5% in weekly podcast downloads 2024 (The Podcast Host)
- 3,038 average 30-day podcast downloads per episode
- 5,000 to 11,500 average monthly LinkedIn podcast post impressions
- 13,800+ monthly Data Privacy Advantage Newsletter subscribers
Debbie Reynolds, "The Data Diva," has made a name for herself as a leading voice in the world of Data Privacy and Emerging Technology with a focus on industries such as AdTech, FinTech, EdTech, Biometrics, Internet of Things (IoT), Artificial Intelligence (AI), Smart Manufacturing, Smart Cities, Privacy Tech, Smartphones, and Mobile App development. With over 20 years of experience in Emerging Technologies, Debbie has established herself as a trusted advisor and thought leader, helping organizations navigate the complex landscape of Data Privacy and Data Protection. As the CEO and Chief Data Privacy Officer of Debbie Reynolds Consulting LLC, Debbie brings a unique combination of technical expertise, business acumen, and passionate advocacy to her work.
Visit our website to learn more: https://www.debbiereynoldsconsulting.com/
"The Data Diva" Talks Privacy Podcast
The Data Diva E210 - Nigel Scott and Debbie Reynolds
Debbie Reynolds, "The Data Diva", talks to Nigel Scott, Director, X-Digital Pty Limited, Digital Strategy, Project Management & Marketing (Australia). We discuss his career journey and share his insights on artificial intelligence (AI), particularly Generative AI. He emphasizes the need for dedicated individuals to truly excel in utilizing this technology and questions the widespread appeal of generative AI. Nigel and Debbie discuss the implications of Generative AI for data systems and organizational productivity, emphasizing the importance of teaching people how to think and ask complex questions when using generative AI.
The conversation also delves into the impact of privacy on data in the web era, prompting contemplation about the future implications of these interconnected elements. Nigel emphasizes the importance of finding a "lazy way" to come to a solution in user experience design in order to ensure high user satisfaction and adoption. Debbie initiates a discussion about data deletion and the complexities of privacy laws, particularly referencing the "right to be forgotten" in Europe. The conversation concludes with excitement for the future possibilities and anticipation of the episode's release.
Nigel and Debbie discuss the future of technology and trust, particularly focusing on the role of AI in negotiating trust between parties. Nigel emphasizes the importance of trust over privacy and highlights the potential for AI to foster an environment of trust in the digital space. They also touch upon the challenges regulators face in understanding the complexities of technology, the need for a shift in mindset to embrace the potential of AI for the benefit of humanity, and Nigel's hope for Data Privacy in the future.
E210---Nigel-Scott
45:59
SPEAKERS
Nigel Scott, Debbie Reynolds
Debbie Reynolds 00:00
The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva." This is "The Data Diva" Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know. Now I have a very special guest on the show, Nigel Scott. He is the Director of Digital Partners in South Perth, Australia. Welcome. Yeah, thanks for asking me on board. Been looking forward to it. Yeah. Well, I'm excited that we get to talk today. You are very prolific on LinkedIn. I love all the charts that you put together
Nigel Scott 00:51
about marketing and different trends. And I would love to know, you know, how did you get started in your career, and how did you come to this particular line of work? Well, rolling back the years, I studied architecture and then design, and ended up doing computer graphics, computer animation, and I won a number of filmmaker awards while I was still at university, and ended up making special-effects TV commercials, so Australian audiences of a certain generation would know of my work, because it summed up as a who's who of the major advertisers in Australia. Anyway, from there, I ended up being asked by the Education Department to write a course in digital media. This was in the mid to late 80s, so this was just at the point where the Macintosh was a useful tool for graphic designers, and we spent a lot of money, and we built a big school in Perth and equipped it with rooms full of computers and built a TV studio there. And, yeah, I spent five years running that course, and those students ended up all around the world working for advertising agencies and film studios and so forth. Some are now at that age where they're managing directors of ad agencies in various locations. And after that, in the early 90s, just as the internet was launching, one of the big mining companies over here in Australia asked me to jump across and get into interactive training, so to take those television skills and then apply them to interactive media. And that kind of snowballed into a decade of e-commerce, some very cutting-edge R&D for clients all around the globe, including people like Sheraton. And it all culminated, in around 2000, in working almost exclusively in banking and finance. And so I shifted from essentially an advertising and marketing background into basically systems integration, because we were building forex trading systems, wealth management systems, retail banking systems, commercial loan systems, insurance, all that kind of stuff. And then I shifted sideways again into defense. And there we were building capability forecasting systems around projects and finance and resourcing and all that kind of stuff for the military, essentially trying to find out what happened to all the money so they could go to the Senate and say, well, this is where the money went. And so that was kind of the journey between marketing and data. So that's kind of the background to where all the charts and the insights come from. And since then, it's been a lot of e-commerce. And seven years ago, I ended up going to Hong Kong with Accenture for their FinTech lab, because we'd taken some of the military technology and applied it to the challenge of forecasting cyber risk. And that was a journey in itself, which took me to London and New York and San Francisco. So, yeah, it's been a meandering path. But all those elements come together with the stuff that I throw up on LinkedIn, to see what bites. And the LinkedIn goes back to a blog I ran around 2010, around the GFC time. We'd just finished a whole stack of work for one of the larger telcos in Australia around the time of the iPhone launch, predicting or forecasting what people were going to buy and what people were going to use and those sorts of things. And so those elements became the backbone of the blog. The blog became a Twitter thing, and then I jumped over to LinkedIn just to see what was over there.
And LinkedIn is probably more like the blog, and the blog was really just a whole bunch of random notes and posts to see if anybody's actually remotely interested in the connections which are out there, if you go looking for them.
Debbie Reynolds 05:54
Well, I definitely could tell that you were a smart cookie, and I love the things you post, and I'm always chuckling, because you always have such wit and humor when you put out these posts. And as a data person myself, who's also had a meandering path through different types of jobs and different types of careers, I completely understand, and I think it's so cool that you're applying all the things that you know in all these different spaces. I think that's amazing. Let's talk about one of the topics that you talk about a lot on LinkedIn, which I love to hear you talk about, which is artificial intelligence. So, artificial intelligence: we know that artificial intelligence isn't new, but now, because of things like ChatGPT and people getting super hot about large language models, it's kind of democratized the conversation around artificial intelligence, and there's just a massive amount of misinformation about what it does and what it can do. And you do all these models of predictions. You did this pixie dust model thing recently, which I thought was hilarious. But tell me a little bit about these charts and graphs and things that you put together, like predicting things around artificial intelligence and generative AI.
Nigel Scott 07:18
That has two sources. On LinkedIn and Twitter, I read a lot of gurus who say you can do this, and because you can do that, this means this, and so in 10 years' time it's going to be this big. And so what I do is say, okay, this is what you say you can do. Let's run a thought experiment, or better still, a data experiment, to see whether we can actually replicate what you're saying. And I can guarantee, nine times out of 10, a lot of the stuff which everybody's saying you can do can't be replicated. That's not to say that there aren't some magical things that happen with generative AI if it's in the right hands. But for me, as a tool at the moment, generative AI is where synthesizers were when I was at uni, at that point where they had just become a thing. And I say just become a thing because you no longer needed 10 million in a room to put a synthesizer together to make a noise. You could get one off the shelf and start tapping away at the keyboard. So what you had were workshop labs there where kids could play with the synthesizers. And let's say there were 25 kids in the class, but there were generally only one or two who really got hooked on the synthesizer, and at the end of the three years, the body of their work was the stuff they were producing from those machines, and it was just that experimental play time which set them apart. I think Gen AI is going to be exactly the same. I think there are going to be some exceptional outliers, and I think they're going to be 18 to 25 years old, and they're just going to get so hooked on this stuff that they're going to become immersed in it, and they're going to do some extraordinary things. But I think putting generative AI in front of the average person, it's all going to be too hard, because it's no different to the synthesizer. One, you're going to have to learn how to play keyboard. Two, you're going to have to have a musical ear, and you've got to have that urge to keep coming back and tinkling the keyboard and hunting for new sounds. And so, for the most part, as I've been saying the last couple of weeks, I think the biggest barrier to generative AI adoption is that, at the moment, I can sit on my mobile phone or my tablet and just use my thumb or my finger, and I can keep scrolling through an infinite amount of stuff. Most of it's garbage, but occasionally you'll find something which grabs your attention. But the point is, the amount of effort to do that is negligible. It's no different than sitting in front of the TV with a remote control and switching to another station when the ads come on. And so what you have is this inertia, and generative AI actually requires you to think. And I have a big question mark about how many people out there actually want to apply the energy to come up with complex questions to ask a machine to actually get something positive out of that system. And so what you're going to get is a bunch of outliers, a bunch of people who do put that energy in, who do go that extra step, and do spend all that play time to find things that people aren't even thinking about going looking for. But for the rest of us, it's going to be, hey, actually, I'm happy sitting here on the couch or on the train or wherever, just scrolling through my phone hoping I can get a cat video. And I think that's a huge hurdle.
And I look at the valuations of the AI companies, and I say, well, that's how much you think you're worth, but you haven't solved that user experience, that UI problem, to justify that valuation. Because, to be honest, the technology at the moment, in terms of having any practical application in any home, is a bit like saying, here's a do-it-yourself nuclear reactor kit. You can create free energy in your own home if you're willing to put in the time and effort to figure out how to build it. And I just don't see people going there.
Debbie Reynolds 11:41
I like that insight. Also, one of the things that I saw about the valuations, and I want your thoughts: my view is that part of these sky-high valuations is that, you know, the market definitely favors applications where they can get more eyeballs on things. So the fact that they have so many users that are providing data and information that they can actually use and test on, while they're actually, you know, charging money for it, I think that's part of that valuation. What do you think?
Nigel Scott 12:15
Well, the thing that interests me about ChatGPT is two things. Obviously the narrative which launched it, which was the huge PR campaign back in November, December 2022; that's interesting in itself. But the other thing that's interesting is, if you look at the growth chart, you had this Apollo rocket ship which was taking off in December, and it just kept going and going and going. And then somehow, by March, it started to go towards the horizontal, and by May it was horizontal, and for all intents and purposes, hasn't moved. It's flatlined. And that probably gives you some sense of just how many people in the universe, or at least around the globe, were willing to put in that amount of effort to come to that thing on a daily basis and actually feed it with difficult questions. I know there's an argument to say, well, they launched the API, so everything's happening in the background, and it's all the developers and everything else. But you've got to remember that the problem with the API is that this thing isn't an Apple iPhone. People aren't going to be spending $1,500 buying a piece of hardware to subsidize everything that's happening in the background. The reality is, this API is sitting on top of everything else, and generally speaking, if somebody's buying into an API, what they're trying to do is buy something which essentially is unitised data, which is going to keep doing the same thing over and over again. So if I buy an API which is, say, the Yahoo stock finance feed, then what I'm expecting to see is, every day, on the hour, on the minute, on the second, those numbers which I want to feed my system, so that when I'm running a dashboard I can make those decisions. Now, the key aspect of the Gen AI thing is that it's probably a lot more experimental, whereas most API data feeds are actually unitised data, where you're buying the consistency of the data feed. There is no argument to be made at the moment that this thing offers consistency. What it does offer is creativity, and expansive opportunities to do things differently. But is that a useful API, in the sense that most businesses and commercial organizations, and arguably government, essentially are systems which have been built so that you can plug and play people in and out? You know, any business which is reliant on superheroes to keep the thing afloat is going to die very quickly, because you've got a key-man or key-woman risk. The idea of what you're doing with an API is actually feeding data into a system which is a flow of information and a flow of activity which is repeatable, because that's where the margins are. The business makes its money by being repeatable and cutting the cost of delivering that repeatable unit. Putting something in which essentially generates stuff on the fly, creatively, is like saying, well, I've got this really nice engine which runs at 300 miles an hour and pulls heavy loads behind it, and what I'm going to do is try a new fuel, which basically is a chaos agent, and see what happens. And you just don't know what's going to happen. Maybe it might reduce your fuel costs, might allow you to tow heavier loads, but it also might blow up the engine.
And this is the big unknown which is hanging around the whole generative AI API concept: is it stable enough to put into a stable system and keep that stable system operational and functional and profitable at the end of the day?
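To make that contrast concrete, here is a minimal Python sketch with invented names and data (it is not the Yahoo feed or any real generative API): a unitised feed returns the same typed, parseable record on every call, which is what a repeatable dashboard pipeline is built on, while a generative endpoint returns free-form output that varies from call to call and cannot simply be dropped into that kind of system.

```python
# Hypothetical sketch: consistency (unitised data) versus variability (generative output).
import random
from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    price: float  # USD

def unitised_feed(symbol: str) -> Quote:
    """Stand-in for a conventional data API: same schema, same meaning, every call."""
    fixed_prices = {"ACME": 101.25, "XYZ": 42.10}
    return Quote(symbol, fixed_prices[symbol])

def generative_feed(symbol: str) -> str:
    """Stand-in for a generative endpoint: free-form text that varies call to call."""
    templates = [
        f"{symbol} is trading around the low hundreds today.",
        f"Sentiment on {symbol} looks mixed; price near recent highs.",
        f"{symbol}: no strong signal, roughly flat.",
    ]
    return random.choice(templates)

def dashboard_row(quote: Quote) -> str:
    # Downstream systems are built on the repeatable unit: a typed, predictable record.
    return f"{quote.symbol:6} {quote.price:8.2f}"

print(dashboard_row(unitised_feed("ACME")))  # stable, parseable, automatable
print(generative_feed("ACME"))               # useful, but not a drop-in data feed
```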
Debbie Reynolds 16:39
That's true. That's true, right? Because data systems are typically curated data sets, and they do a certain thing, and you want it to do that certain thing the same way all the time, as opposed to, you know, you ask it the same question, but it gives you a different answer, or now you have a blob of information that may not even be related, and then you're kind of scrambling it together. So I think that's a problem, especially when I see companies say that they're building things on top of these Gen AI models, because I'm concerned about them being stable, the model collapsing or forgetting. I did a video about that, catastrophic forgetting. Because even as I play with some of these tools, depending on how you're doing it, it's hard to have it remember base facts for you to build on top of.
Nigel Scott 17:35
But it gets back to the old data warehousing paradigm, and that is, in the data warehousing world, you had maturity. So you had the small business that ran on a spreadsheet, and then you had the next level, which was a whole bunch of spreadsheets somehow gathered together into a pyramid, and then you went into a data mart, and then you went into your more sophisticated data warehouse, and then you went into data lakes. The key to understanding that maturity was always about how successful the organization was in cleaning and keeping quality control over the data. Because anybody who's ever run a data warehouse knows that at least 80% of the work, if not 90%, is actually just data cleansing and getting rid of the garbage and then matching the patterns and parsing the data so it'll actually work together. Because essentially, when you bring five systems together, you can guarantee that what means something over here actually means exactly the same thing over there, but the terminology and the framing are totally different. So somebody has to sit there and match these things together. I think the challenge with generative AI is simply this: in any organization, what you have is the spreadsheet jockeys, you know, the Excel gurus, the go-to guys who, whenever there's a special problem outside of the normal run of the business, get tasked with building models to find answers to point solutions which pop up over and above the day-to-day activities. I think generative AI sits in that space. And that doesn't mean it's going to take over the world. What it means is it's Excel 2.0 or 5.0 or whatever number you want to put on top of it. But essentially, generative AI in the hands of a spreadsheet jockey is going to produce some amazing things if they couple it with spreadsheets. The same thing's going to happen with video editors. The same thing's going to happen with graphic designers, and honestly, probably the same thing is going to happen with copywriters and people like that. People who are at the top of their game can see some value in the tool, adding a new dimension to the toolkit that they already have. But putting generative AI across the board and then suddenly expecting everybody to go from using Excel to count numbers in rows to being spreadsheet jockeys is, I suspect, totally unrealistic, because the headspace isn't there. And at the end of the day, generative AI is only as clever as the questions you ask of it. And I think this is the big thing a lot of the people buying into this story are missing, and that is, it isn't about training the system. It's about training the guys and the girls who actually ask the questions. Because if they're asking dumb questions, you're going to get dumb answers. And I can guarantee there are an awful lot more dumb questions out there than there are smart questions, because it's just the nature of the world.
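As a small aside on the cleansing and matching work described above, here is a hypothetical Python sketch (all field names and values are invented): two source systems record the same customer, country, and revenue under different names and formats, and someone has to reconcile them into one canonical record so that what means something over here means exactly the same thing over there.

```python
# Hypothetical illustration of data-warehouse cleansing: reconciling two source schemas.
from typing import Any, Dict

# Source system A calls the customer identifier "cust_no" and stores country codes.
record_a = {"cust_no": "00042", "country": "AU", "rev": "1,250.00"}

# Source system B calls the same identifier "client_id" and spells countries out.
record_b = {"client_id": "42", "nation": "Australia", "revenue_aud": 1250.0}

COUNTRY_CANON = {"AU": "Australia", "Australia": "Australia"}

def canonicalise_a(rec: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "customer_id": int(rec["cust_no"]),             # strip the leading zeros
        "country": COUNTRY_CANON[rec["country"]],       # map code to canonical name
        "revenue": float(rec["rev"].replace(",", "")),  # parse the formatted amount
    }

def canonicalise_b(rec: Dict[str, Any]) -> Dict[str, Any]:
    return {
        "customer_id": int(rec["client_id"]),
        "country": COUNTRY_CANON[rec["nation"]],
        "revenue": float(rec["revenue_aud"]),
    }

# After cleansing, the two systems finally agree on what they are talking about.
assert canonicalise_a(record_a) == canonicalise_b(record_b)
print(canonicalise_a(record_a))
```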
Debbie Reynolds 21:01
I'm hearing people say that companies who are excited about AI, maybe they're trying to implement it, are really pressuring people within the organization: now that we have this tool, you're gonna suddenly be more productive. And it just doesn't work that way. Like you say, people need to know how to use it, and very few will get the best use out of it, because if you don't really understand data, or understand data systems and how data systems work, you're going to have a hard time really leveraging this tool. And like you say, it'll be a small few people in the organizations who can do those special things. But I think, in order for it to be a useful business tool, the types of things it needs to do need to be narrow. So, like, fewer parameters, smaller models, smaller tasks. Because, to me, that's the way AI systems were built before this, right? They were purpose-built for a certain thing, and it just did that thing, and it did it really well, and that was fine. But now you have these models that can do a multitude of things, and people don't even know how to use them, really, and then you're pressuring them: okay, make something magic happen. But traditionally, companies haven't done a great job in educating people on how to use these systems, so they kind of fall back on what they know already.
Nigel Scott 22:30
But if you look at the launch of generative AI and ChatGPT in November 2022, as soon as that thing was launched, you looked at what it could do, and you played with it. And my initial reaction was, if this thing is going to be a thing, then what I should be seeing in the next three to six months is an explosion in courses teaching people how to think. Because if they don't know how to think, and more importantly, if you don't have the ability to trigger their curiosity and their intuitive ability to want to play with ideas, then the value they're going to get out of this system is negligible. And I didn't see that happen. All I saw was a whole bunch of academics coming out with studies saying, well, we plugged this in, gave it to some hotshot consultants, and some of them got better, and some of them stayed the same. But there was never any kind of, okay, if this study is going to have any value, let's actually test the intuitive nature of the people who are being tasked with using the tool. The benchmark would be: let's see the gamut of people and their ability to think and process ideas and create questions, and then you start running your tests over who got more out of it and who got less out of it. I mean, if generative AI can be put into schools and teach kids how to ask more complex questions, then for me, it's done its job. I don't care if they're using it to cheat at homework, because what I'd be looking for is these kids asking more complex questions than they were three months ago, and that would be my true measure of success with this technology in schools. Because if I can see that, then I could see this logarithmic growth curve where, if they had five to seven years with this technology, high school students and primary school students would be off the charts. They'd be thinking like university and postgraduate students before they even got anywhere near university, because the tool taught them how to think, because it challenged them. Again, I haven't seen any studies looking at those questions. All I'm seeing is people passing exams, and for me that's a redundant question, because this technology isn't about answering known questions; it's about using a technology to discover unknown questions. And only then will you find unknown answers, and only then will you be able to actually measure whether having artificial intelligence is actually useful in your context. Because what's the point of introducing a technology to come up with answers to stuff you already know the answer to? It's illogical, isn't it?
Debbie Reynolds 25:36
It is. I agree. I want your thoughts about privacy as a data problem. I always like to say that privacy is a data problem that has legal implications, not a legal problem that has data implications. So, from your point of view, how does privacy play into data work?
Nigel Scott 26:00
The key to the privacy thing goes back to Andreessen and Netscape and the whole circa-1995 "this is the future of the web" moment. Because, to be honest, for the web to actually gestate and flower and grow and become what it wanted to be, something had to be sacrificed, and what was sacrificed was privacy. And I had those discussions inside the banks at that stage. I mean, most people probably don't go back that far, or if they do, they can't remember that far back, but the discussions I had inside the retail banks in Australia when they were building their websites were that the security around this technology is non-existent. It's built on the premise that information should be free. The whole premise of the web was, everything's out there, everything's going to be laid out there, and it can be scraped, and you can do anything you want with it. So the whole Wild West approach of the origins of the web was: information is free and in abundance, and it's through that abundance that we'll be able to build other things on top of it. And that has happened, but to achieve that abundance, something had to be sacrificed, and what was sacrificed was privacy. Now you can't somehow magically roll that back, unless you try and do what Apple does and say, well, actually, we've gone beyond the web, we're going to close the whole thing down, unless you want to stay in our universe. So if you want to close privacy down, you're essentially saying, well, what we're going to do is run 180 degrees to where the web is taking us. And so you've got this magical trade-off, and this is essentially at the heart of the Gen AI question: did they pinch the IP, or have they fairly leveraged the IP which was out there for free anyway? And this will be the argument played out in court. That is: the stuff was there, so we just took it, because it was there. Otherwise they would have put it up behind a locked door. But because it was unlocked, we walked in and we took it. And so Gen AI is actually the culmination of this web idea that information should be free, because by being free, it allows us to build these bigger and bigger universes of information, because we can make connections between bits of information, which then creates new information. And so we get this bigger and bigger bubble of information, which new things have come out of, things which wouldn't have been achieved if everything was behind closed doors. So you either sacrifice privacy for this ability to share, and therefore the endless combinatorial ability to make something new from the data, or you say we've got to roll the whole thing back and make it private, and lose that endless ability to make those combinations. I mean, you could probably make an argument to say there is a sweet spot in the middle, but I don't see how you regulate that, I don't see how you enforce that, I just don't see how you make it work, because the two are in friction with one another. So at the end of the day, the world made a choice and sacrificed privacy for the web, because the web would never have emerged in its current form without that sacrifice. At least, that's my view.
Yeah, when we were building cyber risk models, which we took to the big insurers and stuff, you know, I said, I can put a landscape up there, and I can predict with a fairly high degree of accuracy who is going to get hacked, and who the single points of failure in the network are. Things like the CrowdStrike outage, telcos getting hacked, hospital databases getting hacked, government getting hacked: that was never a question of if, it was always a question of when. And to be honest, the amount of information which is in circulation today almost makes the privacy thing redundant, because there's already too much information out there anyway. I mean, the only way you're going to solve this problem is to draw a line and say, well, we're going to have to come up with a whole new set of privacy identifiers and start all over again, because we can't put the genie back in the bottle. And there's probably a question there, which says: could AI in its generative form, with large language models, help us solve the privacy problem by being at the root of the problem? Is there somehow, at the heart of it, a magic switch which actually allows us to invert the problem and therefore find a solution? I don't think anybody's looking at that. And I say that because we've had so many iterations. We had the Andreessen "Bitcoin is the future of trust" argument, which was around 2014, and so often the tech falls well short of the dream of what it could possibly do for us. But the reality is, the one thing the last 50 years of tech has proven is, if there is a lazy way of doing something, that is the way the water will flow. You know, the hyperlink got usurped by the scroll and the swipe, because the scroll and the swipe are a lazier way of navigating the web than trying to click stuff with a mouse and jumping to web pages and waiting for them to load. And so, you know, the first lesson of user experience design in this environment is: is there a lazy way of coming to a solution to this? Because if you can find it, then you can guarantee user satisfaction will be high and adoption will follow.
Debbie Reynolds 32:49
Yeah, I agree with that. Fascinating. Since you're a data person, I always like to ask my data people this question, and that's about deletion. Certain privacy laws have stipulations around certain data that has to be deleted. And even in Europe, you know, they have something called the right to be forgotten. And there's a lot of debate online, especially between kind of legal people and data people. And I try to tell people that deletion is almost impossible. You know, you can suppress things; there are many different ways you can make data not available, and it's not as good as deletion. But I think people think they press a button and the thing goes away, and it doesn't exist anymore, and that's just not how data systems work. So I want your thoughts, if you can explain this to people.
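A minimal, hypothetical Python sketch of the point Debbie is making (the store and field names are invented, not any particular product): what most systems call deletion is really suppression, a flag that hides the row, and even a physical delete only touches one store while backups and downstream copies persist.

```python
# Hypothetical sketch: suppression (soft delete) versus a physical delete in one store.
from datetime import datetime, timezone

# Live store: one user record, plus a backup copy taken before any deletion request.
users = {101: {"email": "person@example.com", "suppressed": False}}
backup_snapshot = {101: {"email": "person@example.com"}}

def suppress(user_id: int) -> None:
    """Soft delete: the row stays, but is flagged so normal reads hide it."""
    users[user_id]["suppressed"] = True
    users[user_id]["suppressed_at"] = datetime.now(timezone.utc).isoformat()

def hard_delete(user_id: int) -> None:
    """Physical delete from this store only; copies elsewhere are untouched."""
    users.pop(user_id, None)

suppress(101)
print(users)            # the record still exists, it is just flagged
hard_delete(101)
print(users)            # gone from the live store...
print(backup_snapshot)  # ...but this copy knows nothing about the request
```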
Nigel Scott 33:53
I think the trade-off with privacy on the web is a really simple one, and that is, people have got used to things like being able to click and buy on e-commerce and Facebook, and if you run an e-commerce store, you become addicted to this idea that Facebook can find your customers, and all you do is feed the slot machine and people arrive and people pay for stuff, and you make a margin, and everybody's happy. The reason that happens is because privacy has to be suspended to allow that to happen. As soon as you impose privacy on top of that model, it's so fragile, there are so many break points, and it takes one of those break points to snap and suddenly the model becomes unviable, because the cost of customer acquisition for the e-commerce shop owner becomes twice what the price of the goods is. Suddenly, people find the ads they're getting in their feed have got no resemblance to anything they're remotely interested in. The system just falls into chaos. Because the whole point is, it's this roughshod riding over privacy which actually makes the system work. As soon as you try and switch that off, the whole thing becomes just a collection of random nodes. And it's a strange one. People struggle to get the idea that the web is a bunch of nodes, and the thing that makes it work is the connections. But at the heart of those connections is that privacy issue, and you can either allow privacy to make those connections or you can allow privacy to break those connections. But as soon as you change privacy to break those connections, the web becomes redundant, because there's no way of jumping from node to node to get from point A to point B through C, D, E, and F. And this is what I'm saying about that trade-off at the very beginning. You can't have one without the other, because as soon as you forego that privacy thing and make it too hard for the guys who are operating the system: I mean, let's say you're running Facebook, and you have a jurisdiction that says we're going to make it so hard you're going to have to forget everybody who actually comes onto your website to use your website, and then you've got another jurisdiction that says, well, you can keep doing business as usual. What are you going to focus your energy on, and what are you going to let wither and die on the vine? And so what happens, and this I think is the challenge for the EU, is: if e-commerce, and more specifically B2B commerce, represents somewhere between 20 and 30% of your economy, are you really willing to hit that toggle button and say, well, we don't want that anymore, because we're going to sacrifice that part of the economy for privacy? Or are we actually going to think like big people and figure out a way of making it work? Because at the end of the day, all these things are not an A or B choice. They are an A plus B equals C. And I just don't think people working in that space have the imagination to understand the complexities of it. Does that make sense?
Debbie Reynolds 37:31
Absolutely right. And adding AI into it makes it even that much more complex and more difficult, right? Because you're throwing that on top of everything else.
Nigel Scott 37:43
Well, AI becomes a razor, because it basically says you're either going to be in the future and adopt AI, or you're going to stay in the Stone Age, because AI isn't going to work if you're going to use privacy as a barrier to building AI. And so it really is a regulatory question about where the future is heading for you as an economy. And the discussions I've had with regulators over the years about these things: they tend to be poorly informed, they struggle to get the people around them who ask the difficult questions, and they're looking for simple answers in a space where there is no simple answer. Because if there was a simple answer, it would have been found a few decades back, and it would have been thrown in the mix. If you look at the history of this stuff so far, it's really been a case of do it and ask for forgiveness. And the forgiveness part becomes so complex, people can't even be bothered trying to get them to do the forgiveness part. Because at the end of the day, the products and services which have emerged out of saying, well, the hell with it, we're just going to do it and worry about what happens afterwards, which by and large are free, are so overwhelmingly loved by consumers that the question of forgiveness is forgotten in the benefits which have emerged from what's been developed. And so, yeah, I think there's a lot of hand-wringing, but there's not a lot of thinking around the hand-wringing. And certainly, if I had an answer, I'd say, hey, this is the way I would do it. But at the moment, I've got to be honest, I've been thinking about it for years, and I just don't see how one works without the other.
Debbie Reynolds 39:55
Well, if it were the world according to you, and we did everything you said, what would be your wish for privacy or technology in the future, whether that be regulation, human behavior, or technology?
Nigel Scott 40:12
I think the key to it is this: I think trust is more important than privacy, because what is dysfunctional about the world today is, to be honest, there's not a lot of trust out there. Whether social media is sort of compounding that or not, I'm not sure. I think a technology, and potentially it could be AI, which can actually create an environment of trust is far more valuable than, say, privacy. That's not to say that privacy isn't an issue, but the reality is that the glue that holds society together, and holds commerce together and everything else, is centered in this ability for human beings to trust one another: that if you say you're going to do something, you're actually going to do it, and we've come to an agreement and all of this. My good friend Steven Scott, who I met at the FinTech lab in Hong Kong, has spent his career focusing on this issue of trust and how different digital trust is from everyday, real-world trust. I think the great thing that AI could bring is this ability to negotiate trust between parties in the absence of the parties. Does that make sense? Yes. And I think in that context, AI would be great for humanity, so long as humanity is willing to say, okay, I've let my AI negotiate with their AI, they've come to an agreement which is in the best interest of both parties, therefore we'll agree that this is the best solution for all the parties involved. Which probably gets back to Andreessen's ideas about Bitcoin. The thing I loved about the Andreessen Bitcoin argument was that it assumed commerce was an environment where nobody trusted anybody, and so you had a universe of generals who could not be trusted; therefore, you had to come up with a system that allows everybody to be untrustworthy but still come up with a trusted result. I think AI trumps that, in the sense that it can actually render that whole concept redundant, so long as the objective of the AI we build is to actually get that outcome right. Does that make sense? But I'm not talking in terms of a god-like AI that essentially becomes a communist overlord, where everybody gets down on one knee and bows and scrapes to the machine that we've created. I'm talking about an AI that works in partnership for the best of humanity, for the best of what we believe and strive for. So it isn't an AI that breaks barriers; it's an AI that builds connections.
Debbie Reynolds 43:40
I love that. So, AI basically brokering a relationship that can be trusted. Yep, yeah.
Nigel Scott 43:49
And that AI may take many forms, because relationships take many forms, but in the end, that would be the true expression of what the web set out to be, because the web was about connecting all these disparate points. Yeah, and you will never traverse all of those points, but you can get from A to B if you go looking for it. And I think AI is just a friend to help you walk that journey, if that makes sense.
Debbie Reynolds 44:26
Oh, it does, absolutely, yeah. I'm excited to see what we can do. I was always excited when I saw some of these tools come out. I'm excited to see what people can do, how creative they can be. Like you say, it's very important that people learn how to ask questions in digital systems, period. I don't think we're that great at that anyway, so that is definitely a skill that people need to be able to get so that they can get the most out of these types of architectures. But thank you so much for being on the show. This is great. I really appreciate you getting up early in the morning to be able to do this session with me.
Nigel Scott 45:04
No, thanks for casting me on board. It's been a fun chat. It's always good to go to places you weren't expecting.
Debbie Reynolds 45:14
Well, thank you for your sage wisdom. And I'd love people to follow you on Twitter and definitely on LinkedIn, because I just love the charts and things you put out. It's amazing.
Nigel Scott 45:24
Thanks for that. It's fun.
Debbie Reynolds 45:28
All right. Well, I'll talk to you soon. Thank you so much.
Nigel Scott 45:31
See you. Bye.