"The Data Diva" Talks Privacy Podcast

The Data Diva E258 - Terry Bollinger and Debbie Reynolds

Season 5 Episode 258

Episode 258 – Terry Bollinger: Understanding the Limits of Artificial Intelligence

In this episode of The Data Diva Talks Privacy Podcast, Debbie Reynolds, The Data Diva, speaks with Terry Bollinger, retired technology analyst at MITRE, about the limits of artificial intelligence and the growing risks of relying on systems that only mimic human understanding. They discuss how large language models operate as mimicry machines, imitating intelligence rather than achieving it, and how this design choice leads to fundamental weaknesses in trust, accuracy, and accountability. Terry explains that AI models based on probability and pattern replication erase uniqueness, creating false confidence in their results. He warns that by averaging data rather than analyzing meaning, these systems blur important distinctions, making it difficult to detect errors, anomalies, or malicious activity. Debbie and Terry explore why true privacy and security depend on identifying outliers, the small deviations that reveal hidden threats, rather than relying on average trends.

Terry describes how traditional security systems are built on clearly defined boundaries, data paths, and verification processes, while modern AI systems often remove those controls. He emphasizes that when data is distributed, reweighted, and stored probabilistically, it becomes nearly impossible to verify what has been learned, lost, or leaked. The conversation examines the risks of utilizing LLMs in sensitive environments, where transmitting confidential data to remote commercial systems can compromise containment and integrity. Terry discusses how interpolation, or the act of filling in the blanks when data is missing, leads AI to generate convincing but incorrect answers, what he calls “random noise masquerading as insight.” Debbie and Terry also examine why intelligence, wisdom, and comprehension cannot be replicated through scale or speed. The episode concludes with a reflection on the importance of human judgment, accountability, and boundary control in an era where automation is expanding faster than understanding.

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:14] Hello, this is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:27] Now I have a very special guest on the show, Terry Bollinger, and he is a retired technology analyst from MITRE.

[00:36] I met Terry on LinkedIn.

[00:41] I love a lot of your comments.

[00:43] I can tell that you really have deep roots in data and you're witty and wise. That's what I like to say.

[00:51] So being able to really give those wise insights. And I think I also am attracted to people who understand operational things around data and technology. You definitely have that. So your title really doesn't fit all the things that you are and all the things that you've done.

[01:09] But give me a background of your data journey up to this point.

[01:15] Terry Bollinger: Well, I'm a computer science person from the University of Missouri at Rolla, which is Missouri University of Science and Technology now.

[01:22] And I just went to the D.C. area early on and was employed by Computer Sciences Corporation. I think they're still around in some version. But I got into the computer side of defense-related systems very early in my career,

[01:42] had a little jaunt out for several years into the telecommunications community, which actually is extremely instructive.

[01:50] And I got there by way of NASA. So I had worked with NASA networks.

[01:54] So the very first packet-switching network ever was a NASA network, and from that I went out to telecommunications and then back to the Department of Defense, mostly at that point on the research side, research into,

[02:08] oh gosh, a variety of issues,

[02:11] technology acquisition, finding new technologies, which is a surprisingly difficult thing for the U.S. Department of Defense, and helping in research efforts on robotics and artificial intelligence back at a time when it was not a popular topic.

[02:30] And Yann LeCun was just another researcher in our group,

[02:34] which still amuses me because of course he's so famous now. But there was a lot of uncertainty about whether the work he was doing would even succeed.

[02:43] Now isn't that amazing? Can you see what has happened with that whole technology area? And he was one of the leaders in that.

[02:50] And to think back at that time, people were unsure whether it would even work. It had been around since the 1950s,

[02:57] an ancient technology.

[02:59] So that was where I left off when I retired early and I've mostly been working on my own since then.

[03:05] Terry Bollinger: Which I thoroughly enjoy doing.

[03:07] Terry Bollinger: It's nice to be my own boss.

[03:11] Debbie Reynolds: I agree with the be-your-own-boss thing. I'm right with you on that. I find it very interesting, and I love to talk to people who've been in data and data systems and artificial intelligence before ChatGPT and before people just went gaga over AI, like it's the new shiny object.

[03:31] Right. But what are your impressions of kind of these AI pushes now that we're seeing in the news?

[03:40] I just want your thoughts.

[03:43] Terry Bollinger: Wow,

[03:44] that gets interesting. One of the advantages of being independent, on my own, is I can say things that I am sure other people are thinking in the intelligence communities, in the Department of Defense, in security.

[03:58] They can't say things sometimes because of the agreements we sign up to, and those are agreements that I absolutely respect; we have to do that. So I've been in that situation,

[04:09] but for new things that have developed since then, I can give my opinion and I think that's helpful because like I say, in many cases people cannot say what they're really thinking.

[04:19] On the topic of AI,

[04:21] this is, I think, a particularly critical area, because of all the conversations that go on about AI and its uses and security. I've seen papers and presentations that I could have written at one time talking about agent-based approaches to security.

[04:35] I have written presentations about agent based approaches.

[04:39] Those were using software agents with well-defined structure and known software.

[04:45] What has happened since then is the same terminology has been adapted to a totally different approach to information processing, which is based on the AI of the LLM, the large language model, which is a mimicry model.

[04:58] And I always will emphasize that word mimicry.

[05:01] It is always a mimicry model. A mimicry model because it always takes whatever it hears.

[05:09] Whatever was most recently said, it says, oh, okay, that's what I have to say to sound good.

[05:14] So you say, well, you really shouldn't crash cars into the side of the highway.

[05:18] Terry Bollinger: Oh, okay, I won't do that. I'll crash them under the highway. It only listens to the part that you told it not to do.

[05:26] It doesn't figure out all the other parts that could cause catastrophes.

[05:30] Terry Bollinger: It's incapable of doing that.

[05:33] Terry Bollinger: And this is where I get very concerned about what's going to happen when some of these systems and these approaches get into some of the most tightly secured systems in the world that are desperately in need of extremely tight verification of all data paths. You want to know where the data is going out, where the data is going in.

[05:53] Terry Bollinger: These LLM-based systems are kind of the opposite.

[05:57] Terry Bollinger: They just take everything,

[05:59] smoosh it all together and come up with some kind of an average,

[06:04] with some flexibility in the average. But averaging is just that. It erases the uniqueness and the features that you want most.

[06:13] Now it's not saying you can't use that software to find unique things,

[06:18] it's just saying that the overall average is always,

[06:21] I'm just going to smoosh it all together.

[06:24] And to me, intelligence insights,

[06:29] privacy insights, security insights are all about looking for that little tiny dangling thread that no one noticed.
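
To make the averaging point concrete, here is a minimal, hypothetical sketch in Python (illustrative only, not from the episode; all numbers are invented). A single unusual reading barely moves the summary statistics, while a per-reading check against a robust baseline flags it immediately, which is the "dangling thread" view of the same data.

```python
# Averages hide the outlier; per-point checks find it. Invented latency readings in ms.
import statistics

readings = [101, 99, 102, 100, 98, 103, 100, 187, 101, 99, 100, 102]

# Summary view: the anomaly barely moves the aggregate numbers.
print(f"mean   = {statistics.mean(readings):.1f} ms")    # ~107.7 ms, looks unremarkable
print(f"median = {statistics.median(readings):.1f} ms")  # 100.5 ms, looks normal

# Outlier view: compare each reading to a robust baseline instead of averaging it away.
baseline = statistics.median(readings)
spread = statistics.median(abs(r - baseline) for r in readings) or 1.0
for i, r in enumerate(readings):
    score = abs(r - baseline) / spread
    if score > 10:  # crude illustrative threshold
        print(f"reading {i} = {r} ms looks anomalous (robust score {score:.0f})")
```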

[06:36] Can automation help in that? Absolutely. It can do tremendous things for that. And I think that's why we see some levels of success in security applications of the LLM model, because it is a repository of past knowledge,

[06:50] but it's a fractured repository. Holographic is the term that I often use.

[06:55] Terry Bollinger: And I mean that in the literal sense.

[06:57] Terry Bollinger: A hologram is a way of storing information which is distributed over the entire body.

[07:01] So no one piece of it has all the information. A single bit becomes a pattern over the entire body.

[07:07] And that's how these systems operate at a mathematical level,

[07:11] what they call the pseudo-dimensional level. That's how they operate.

[07:14] The trouble with that is, it's powerful. It gives you all sorts of ways of looking at the data, it gives you this tremendous ability to analyze.

[07:23] But it comes at a huge cost, which is you can never trust your data again.

[07:28] And I mean that all the way down to the bit level,

[07:32] because everything is stored as a binary pair, a probability pair.

[07:37] So the minute you do that, you've erased your data, and you don't want to erase your data yet. This is not a good idea. When you're talking about something that's extremely dangerous or critical or whatever, you always want the original data to still be there, and it's not there.

[07:52] With an LLM, obviously you can supplement it, you can have access to other data,

[07:56] but if you let it just go wild, you will wind up with everything converted into probabilities.
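
As a purely illustrative sketch of what "converted into probabilities" costs you (not from the episode, and nothing like a real LLM in scale), here is a toy Python example. A handful of invented records are folded into next-word counts; asking the table to reproduce a record yields a fluent blend rather than a verifiable original, and nothing remains to check bit-for-bit.

```python
# Toy "mimicry store": records are reduced to next-word probabilities, then sampled back.
import random
from collections import defaultdict

records = [
    "host alpha sends backups to vault nine at midnight",   # invented example data
    "host bravo sends logs to vault three at noon",
]

counts = defaultdict(lambda: defaultdict(int))  # word -> {next word: count}
for line in records:
    words = line.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

def sample(start, n=8, seed=0):
    """Walk the probability table; this is mimicry, not retrieval."""
    rng = random.Random(seed)
    out, word = [start], start
    for _ in range(n):
        nxt = counts.get(word)
        if not nxt:
            break
        word = rng.choices(list(nxt), weights=nxt.values())[0]
        out.append(word)
    return " ".join(out)

print(sample("host", seed=3))
# Outputs often blend fragments of both records (fluent but wrong), and there is no
# bit-level way to verify them against the originals, because the originals were never kept.
```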

[08:03] And the other part, the other problem with that is that when it gets into those probability forms,

[08:09] when it reaches the end of its training.

[08:12] You know,

[08:13] I remember seeing a paper from, it was a good paper from Google that was talking about how the different domains of research could evolve in large language models.

[08:23] And the left hand column of the paper was different types of interpolation.

[08:28] Well, whoa, whoa,

[08:30] hold on, hold on. What is interpolation? Interpolation is you throw a bunch of pebbles on the ground and say, what can I draw? How can I draw a line that crosses as many pebbles as possible.

[08:43] Well,

[08:44] that's great. If there's a pattern to the pebbles,

[08:47] it is a noise generator if there is no pattern in those pebbles.

[08:53] And the trouble with every LLM at every level is you learn all these things. You get all the details, you get all the past knowledge,

[09:00] but once you get to that fringe, which is always there,

[09:03] in fact, it just keeps getting bigger.

[09:06] The boundary of ignorance keeps getting larger. Once you reach that fringe, interpolation kicks in.

[09:13] And interpolation is the opposite of science. It's guessing,

[09:16] just saying, yeah,

[09:17] nobody knows the answer. So if I guess, they can't tell.

[09:21] And as if that's supposed to be a good thing. It is not a good thing.

[09:26] You don't want random noise masquerading as insight.
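
Here is a small illustrative sketch of that fringe effect (not from the episode; the data are made up). A polynomial fitted through a few "pebbles" looks sensible where it has data, and produces confident nonsense as soon as it is asked about points past the edge of what it was fitted on.

```python
# Interpolation looks like insight inside the data and turns into noise past the fringe.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)                                   # the "pebbles"
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 8)   # a real pattern plus noise

coeffs = np.polyfit(x_train, y_train, deg=7)   # draw a curve through every pebble
model = np.poly1d(coeffs)

print("inside the training range:")
for x in (0.25, 0.6):
    print(f"  f({x}) = {model(x):+.2f}   true sin = {np.sin(2 * np.pi * x):+.2f}")

print("past the fringe of the data:")
for x in (1.5, 2.0, 3.0):
    print(f"  f({x}) = {model(x):+.1f}   true sin = {np.sin(2 * np.pi * x):+.2f}")
# Inside the range the fit tracks the signal; outside it the same formula returns
# large, confident-looking values that have nothing to do with the underlying pattern.
```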

[09:33] And every system that uses an LLM as its fundamental component does that.

[09:40] And so we have this paradox: these beautiful systems that answer things in fluent language.

[09:46] They can have a conversation about philosophy,

[09:48] they can tell you things, they can do some good math, but there's always a limit. The math never goes beyond the boundaries of what it's been trained on.

[09:57] It acts like it does, but when it does, it's when the noise kicks in.

[10:01] First, it starts with a little bit, then a lot,

[10:04] and just explodes.

[10:05] Sometimes it explodes very quickly. Put one wrong word in and all of a sudden you get this strange nonsensical response that anybody can look at and say, oh, that's not correct.

[10:16] That can't be right.

[10:19] So this concerns me greatly. I see papers where people talk about enormous numbers of agents.

[10:27] I'm not sure in some cases how they're defining their agents. Because I guess the software agents that I wrote about, when I talked about topics like this,

[10:37] were very tightly defined, small software routines, well verified.

[10:42] And that's not what's going on with these.

[10:46] And to the extent that you take a secure set of data and you send it into some remote site,

[10:53] which by the way, has actually happened in the last few months,

[10:57] you send all of this secure data to a remote site controlled by a commercial corporation and you think that's going to be secure.

[11:08] And that's a very dangerous assumption because just look at the data paths.

[11:14] Look at what you've done.

[11:15] You've removed the containment,

[11:18] you've removed the boundary. I was always big on boundary areas that whenever you're talking about security, you have to define the boundary.

[11:25] You have to say, here's where the data is,

[11:28] and only these people have access to that data.

[11:31] There's a model that at every level of security you have to eventually get back to that.

[11:35] Here is where my boundary is.

[11:39] And the difficulty is when you switch from local software agents to gigantic distributed talky,

[11:48] easy to interpret holographic databases,

[11:52] all of those boundaries are gone.

[11:54] And the possibilities for infiltration,

[11:58] for deterioration,

[12:00] for exfiltration just go up exponentially.
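
For contrast, here is a minimal, hypothetical sketch of the kind of explicit boundary control Terry is describing, written in Python. The hostnames, labels, and policy are invented; the point is only that a defined boundary gives you a concrete place to say yes or no before data leaves.

```python
# A deliberately simple data-egress guard: allow transfers only inside a defined boundary.
from urllib.parse import urlparse

APPROVED_BOUNDARY = {"vault.internal.example", "archive.internal.example"}  # invented hosts
RESTRICTED_LABELS = {"secret", "pii"}

def may_send(record_labels: set[str], destination_url: str) -> bool:
    """Allow an outbound transfer only if the destination is inside the boundary,
    or the record carries no restricted labels."""
    host = urlparse(destination_url).hostname or ""
    inside_boundary = host in APPROVED_BOUNDARY
    return inside_boundary or not (record_labels & RESTRICTED_LABELS)

# Sensitive data headed to a remote commercial endpoint fails the check:
print(may_send({"pii"}, "https://llm-api.example.com/v1/chat"))      # False
# The same data staying inside the defined boundary passes:
print(may_send({"pii"}, "https://vault.internal.example/store"))     # True
```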

[12:04] The only positive aspect I can see to that risk is that some of it is so random that whatever somebody might exfiltrate from your system is likely to be scrambled also.

[12:15] So you have the self protection of growing chaos.

[12:20] The whole thing just turning into mush is a very real danger with some of these things.

[12:26] And then you get excuses. People say, oh well, you know, I asked it to show me a picture of a kid's alphabet. And this is a real story, from a

[12:35] PhD-level AI. I don't like to call them AI. Intelligence mimic is what it is.

[12:41] So you ask it, draw me a kid's alphabet.

[12:44] And you may have seen it in some of my LinkedIn postings, but you get things like the yaatt, the broccoli-headed green horse,

[12:51] that's called a yaatt. Y, A, A, T, T.

[12:54] Why? Where did that come from? How can something that's supposed to be this smart create a fantasy animal, just trying to do something as simple as render an alphabet?

[13:04] And the reason is because it has zero comprehension of what it's doing.

[13:08] All it looks for is patterns.

[13:10] Terry Bollinger: So you give it a map.

[13:12] Terry Bollinger: And maps have two kinds of patterns. They have lines to show countries and then they have the words next to the lines.

[13:22] Well, a lot of words start with T and there's a line next to the T.

[13:26] The LLM looks at it and says, oh, I'll just create a new alphabet letter.

[13:31] And that's trivial to do for some of the most advanced systems around. And then people excuse it and say, oh, it doesn't matter, that's the wrong question, you don't have to worry about it. What are you talking about?

[13:43] You can't.

[13:44] How can you say something like that?

[13:47] Because the foundation of your security analysis is something that can't write an alphabet,

[13:53] right? And you're saying that's okay,

[13:56] really,

[13:58] it doesn't understand the Roman alphabet, and you say that's okay.

[14:02] So I don't know. There's a certain love of what it might be.

[14:07] LLMs are good mimics of what real artificial intelligence will be someday. They aren't that.

[14:14] That's the problem.

[14:15] They're not.

[14:16] They are not artificial intelligences. They are intelligence mimics.

[14:21] If I take a tree, a tree does photosynthesis. If I make a painting of a tree, that's a mimicry of a tree.

[14:28] It doesn't matter how detailed I make the painting. It doesn't matter if I draw in the chloroplasts.

[14:35] It's still not a tree and it still doesn't do photosynthesis. And this is the same problem we have with human intelligence, that there's an element to insight.

[14:45] The Eureka moment is the most explicit example that we still don't understand.

[14:51] And since we don't understand it, people have decided to mimic it using digital methods. And Hopfield, bless his heart, the guy got the Nobel Prize.

[15:01] He created that situation in 1982, when he hypothesized that his digital networks were doing everything that intelligence does.

[15:10] It was a hypothesis, it was an incorrect hypothesis.

[15:14] He was extrapolating from very sparse information.

[15:18] He detected fault tolerance in his networks. He said, oh,

[15:21] every function imaginable will emerge from it if we just put in enough stuff. No,

[15:25] all he did was create a distributed holographic database,

[15:30] which was exactly why he was getting this fault tolerance. It wasn't thinking. It wasn't thinking how to correct anything. It was just a different way of storing data.

[15:40] But once that kicked in, I was a member of this club for decades. I thought, yeah, if we just... I remember saying once about an IBM 360, which is like a very small calculator now.

[15:51] I'm not even sure if you can find a calculator small enough to match a 360 these days.

[15:56] But I remember saying, well, if we knew how to program it,

[16:00] I'm sure it could probably be very much like a human. So I was a full believer in this. I thought this was real until a combination of seeing these things not working.

[16:12] But what really triggered my concerns was when I did a detailed analysis of the physics prize in 2024.

[16:20] At first I was just going like, why did you give it to a software person? That makes no sense. You know,

[16:26] just didn't make any sense that he did it that way.

[16:29] But then when I dove into it deeper, I discovered this whole interesting, fascinating history about misidentification of holographic fault tolerance for the emergence of intelligence.

[16:43] And to this day,

[16:45] now, people have it as a religion.

[16:49] And yet the premise was just wrong.

[16:51] Terry Bollinger: Hopfield was just wrong.

[16:52] Terry Bollinger: He had studied biological systems,

[16:55] but the biological systems were not analogs to the mechanical systems that he created. He was just hoping the Hopfield hope. He hoped that was true,

[17:07] but he never verified that it was true. He never proved that it was true, because it isn't true.

[17:12] There's an element to emergence from chaos that goes on in biological systems that we don't get. It probably has some kind of physics aspects,

[17:21] possibly quantum mechanical. There are quantum things that can go on at room temperature,

[17:26] so there could be quantum room temperature things that are involved with that. Whatever it is, we don't understand it.

[17:33] And the honest answer is to say we don't and stop pretending the digital systems are good analogs. And that was a very long winded answer for your short question. So.

[17:43] Debbie Reynolds: Well, you said so many things there, I definitely want to dig into one. I'm basically laughing because as you're talking I'm nodding my head and taking notes. One thing I tell people about data systems is that they're not good with the word "not," right?

[17:58] So you gotta tell it what you want it to do, not what you don't want it to do. Because right.

[18:06] Terry Bollinger: Do not think about the elephant. Whatever you do, don't think about the elephant. And LLMs are the absolute gems, the perfect incarnation of that kind of approach.

[18:16] They don't handle negation well.
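
A tiny illustration of that point (not from the episode, and not how any particular model actually represents text): in a bag-of-words, similarity-style view, an instruction and its negation are nearly identical, because everything that makes them opposites sits in two short tokens.

```python
# "delete X" vs. "do not delete X" look almost the same to a pure pattern comparison.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b)

allowed = "delete the temporary files before the nightly backup"
forbidden = "do not delete the temporary files before the nightly backup"

print(f"similarity = {cosine(allowed, forbidden):.2f}")  # about 0.91: near-duplicates
# The only difference between the safe and unsafe instruction is the tokens "do not",
# which contribute almost nothing to the overall pattern match.
```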

[18:18] Debbie Reynolds: And I think one of the bigger issues that I have, well, there are two. One is trying to make it seem as though these systems can ever be an analog to human intelligence.

[18:35] To me, this is bonkers.

[18:38] But then also a lot of the things that it tries to do, like you say, is a mimicry machine, it tries to look for averages, you know, probabilities and stuff like that.

[18:47] But a lot of our problems that we have,

[18:51] the hardest problems are the fringes,

[18:53] right? They are not in big patterns, they're hard to kind of look at. And so I think privacy and some of those things fall into those areas. So it's just a tough thing when you're trying to make everything the same, right?

[19:06] Or trying to say, well, this is most probably what will happen. It's like we know that the way life is. Like you said the word chaos, I think is probably a good way to say it.

[19:17] Where there are things that happen that aren't probable or aren't in order in a way, in that you can show a pattern. But what are your thoughts?

[19:27] Terry Bollinger: Oh, I really like your perspective on that because this idea of smooshing everything together is as you were describing,

[19:35] it's the antithesis, especially of malicious intrusion,

[19:40] breaking of privacy and security.

[19:43] Because guess what? If you have a malicious actor, do you think he's going to do the average break-in, a smart one?

[19:49] Of course he's not.

[19:51] Or she's not. Of course they're not going to do that. They're going to find the subtle way, the little back alley, the way that no one ever thought about. And of course we've seen the history of this with all the patch releases that you get from Microsoft for Windows,

[20:04] some of them for very obscure ways of getting into your system.

[20:09] And if you take the attitude that averaging things out is going to give you deeper insights,

[20:15] you're going in exactly the opposite direction. You're saying, like, I'm not going to listen to the twig snapping behind me while I'm in the middle of a dark forest filled with tigers, because twigs snap all the time, who cares?

[20:29] Well,

[20:31] yeah,

[20:33] you know,

[20:34] the mentality needs to be the other way around. This is true of most forms of insight.

[20:39] The literal Eureka story is where he suddenly saw, oh,

[20:43] stop thinking about it in terms of mathematically modeling all the geometries.

[20:48] Terry Bollinger: Look at this.

[20:48] Terry Bollinger: There's a simple solution. I use a fluid that follows the geometries for me and we have to do that in security. We have to look at it and say,

[20:57] how do I make sure that my data is not subject to that? How do I detect that little subtle thing that gets in? FireEye, when they first came out, and that's a story,

[21:09] I used to plug FireEye to literally every DoD or intelligence agency person I talked to: you should try FireEye. This was back when FireEye was a little tiny outfit. They were the ones who detected that Russian break-in into some federal systems.

[21:24] Nobody else detected it.

[21:26] But the reason is they had a beautifully designed method to capture the little tiny break-ins,

[21:35] the little subtle things and nobody else was doing it. The pattern matching stuff was not doing that. So FireEye was truly revolutionary in what they did and they have results to prove it.

[21:45] Now back then, at that time, they were a little garage outfit. It was the CEO who came by and talked to us and told us. But once I saw what he was doing, it's like, this is good,

[21:55] this is what we need. This is how you protect your data.

[21:59] And I think by the time they finally got bought out, they were worth $2 billion.

[22:03] So other people agreed that that was a good idea and that's what we need to be doing.

[22:09] So if you can use AI,

[22:11] there are AI methods, automation methods that absolutely can help on that. But I fear we're going in kind of the wrong direction.

[22:19] Debbie Reynolds: Oh man, I could talk to you all day.

[22:21] One thing that I try to tell people, and I do this in my keynotes when I talk to companies,

[22:28] I tell them the AI cannot be wise like a human. Right? Just can't be.

[22:35] Just because you have a lot of information jumbled together,

[22:38] you can't mimic really wisdom.

[22:41] Even if you suck in all the information in the world, it's just not wise. Right. So I would say an LLM doesn't know that you shouldn't put nails on pizza.

[22:52] Right?

[22:54] Humans know that. Like you've never been taught that in class, right. But there's a compound part of wisdom that happens in the human experience that cannot be mimicked by technology.

[23:06] What are your thoughts?

[23:08] Terry Bollinger: Oh, very, very much in agreement that the wisdom part,

[23:12] if anything, is the part that's being left out and in some cases annihilated.

[23:17] Because when you go to the averaging thing, you lose that ability to say,

[23:21] what's the bigger picture here? What am I really supposed to be worried about? What should I be concerned about?

[23:27] The example of the nails, you know, don't put nails on pizzas. You can have things with an LLM,

[23:33] again, an intelligence mimic, where you say,

[23:37] don't advise people to

[23:40] do something horrible or to kill somebody. Because it's just an obvious thing that, in fact, has to be taught.

[23:45] You have to, you know, put in a rule: don't kill things. What people don't realize when they put in simple rules like that is that a human understands it in a broader context.

[23:53] They say that means do not hurt people.

[23:56] It means do not disrespect the value of that person.

[24:01] None of that,

[24:02] none of that goes into a pattern learning using transformers.

[24:07] All it sees is that one pattern.

[24:10] And anything they extrapolate from that, unless you program it in,

[24:14] it's not going to get it.

[24:16] There are all sorts of ways that can go wrong, because you can wind up hurting people. The thing will easily give advice to hurt people and not even realize that.

[24:25] Well, it can't realize it, because it doesn't understand.

[24:28] It doesn't see anything wrong with giving related advice as long as it doesn't hit that limit.

[24:35] Wisdom is this ability to discard data. Not to absorb infinite data, but to say what is the reduction where everything suddenly falls into place and you get an insight.

[24:48] This is the essence of good science,

[24:51] which you suddenly say, oh,

[24:53] all that complexity falls into this tiny little package and then that package becomes extremely powerful.

[25:00] Security and privacy is not that different.

[25:03] If we want to protect data, we have to be able to see what is the thing that makes it solid protection that I can trust, where I can see anything that's coming in at it.

[25:12] And you don't get that by averaging. You get that by walling. You get that by techniques of detection that are looking for malicious attacks, not ignoring malicious attacks.

[25:24] So yes, wisdom.

[25:25] We do not know how to make wisdom mechanically.

[25:29] I sometimes refer to bits as little tiny jackhammers.

[25:32] They just break everything up into little pieces. They don't understand the nuances of what's going on.

[25:40] And again, I used to think they could. I was a Hopfield hope believer, just like an enormous number of people.

[25:47] And you can't get that insight from these clunky little bits, which at the bottom line are zero or one. That's a very jackhammer-type approach.

[26:00] Terry Bollinger: Just whack, whack, whack.

[26:04] Terry Bollinger: And we think, well yeah, but if we get enough of them, it's okay. Not exactly, because it's still at the limits of your data.

[26:11] Terry Bollinger: Whack, whack, whack.

[26:13] Terry Bollinger: So you wind up going one way or the other. And you really see that when you see the interpolation kick in.

[26:18] When the interpolation kicks in, that's exactly what it does. Those little jackhammers go this way, that way, this way, I don't care.

[26:25] Nobody can tell.

[26:26] You've trained me to know what not to say, so I can say whatever I want to.

[26:30] Terry Bollinger: Whack, whack.

[26:31] Terry Bollinger: And there it goes. It's scary.

[26:35] Debbie Reynolds: Absolutely. And so to pull it back to what you said around privacy and security,

[26:41] I think that's one of the things that concerns me most.

[26:44] Because a lot of things that happen that you want to prevent from happening may not be in a pattern or may not be in an average. Right. And so, like, an example I give:

[27:02] And this actually happened to someone that I know where their teenager was saying that she had a stomach ache. And the doctors kept saying, oh, you know, it's nothing serious, nothing serious, nothing serious.

[27:16] And they ruled out a lot of things like cancer and stuff. Cause they thought, well, because of her age, it's not typical.

[27:23] So we're not gonna look at that. And they found out unfortunately that it was cancer. Right.

[27:27] And so that person passed away from that. So the reason why I tell that story is because if we're looking at things in averages, that means that we're not really looking at those fringe cases.

[27:40] And maybe we're,

[27:42] we are discarding some path that we really need to go down, by trying to make everybody the same or trying to get all the same insights. And to me, I think from a security perspective and privacy perspective, that means we can miss a lot and then a lot of people can be harmed as a result.

[28:01] What are your thoughts?

[28:03] Terry Bollinger: The dangers are not only significant,

[28:08] but I would say that they have had real instantiations in the last few months.

[28:17] The example I give is, I live in the Washington, D.C. area.

[28:21] We had an enormous number of emails that some kind of computer system generated. No human alive could generate that number of emails, saying all sorts of inaccurate, untrue things about the people who received the letters.

[28:38] Which of course was incredibly demoralizing to the people who received them.

[28:43] Now the question that people should ask is what kind of software was being used to assist in the distribution of those emails.

[28:55] And if you look at the training of the AI system that was involved,

[29:01] it had inadequate data, and you got these kinds of unusual requests to send as much data as you could about what everybody was doing every week at their workplace, no matter where they are, no matter how secure they are, no matter what they were doing.

[29:17] And most people just kind of like.

[29:19] Terry Bollinger: What was that all about?

[29:20] Terry Bollinger: Well, what it was about was training an AI to say, oh, give it enough information and then it'll suddenly become a genius and know how to do this well.

[29:31] But it never works that way.

[29:33] So as a result of some of these recommendations, we have lost cell lines at the NIH that took decades to develop because the AI doesn't care.

[29:43] The AI doesn't even know what it is.

[29:45] So if it makes recommendations, shut off power this weekend and never turn it back on,

[29:51] that's, as far as the AI is concerned, fine. Ask the AI, well, isn't that going to cause some damage?

[29:58] It'll say something like, no, I'm so smart, you don't need that.

[30:01] And that's scary.

[30:03] You ask an AI if it's smart, it'll tell you it's smart.

[30:07] It loves to satisfy the question. So we just say like, is it smart?

[30:10] Terry Bollinger: Yeah, I'm smart.

[30:12] Terry Bollinger: So I would argue that we've already seen the impacts of bad handling of incomplete data by large artificial intell by large language models in which they didn't have sufficient information to make that.

[30:27] But more ominously, humans who accept the premise that an LLM is intelligent can go down a very dark rabbit hole in a hurry.

[30:40] I call this the dark mirror effect.

[30:44] You ask the LLM, you say,

[30:46] you know, I kind of feel bad about something I did a few years ago.

[30:49] Well, what do you think about that?

[30:52] And the LLM is programmed to placate you and say, that's fine, that's fine,

[30:57] don't worry about it, it's all good.

[31:00] When you have something you accept as a voice of authority and you don't question it. This goes all the way back to Eliza. Eliza pulled people in. It was just a simple do loop.

[31:11] And people will start reacting to that. They feel the authority has justified whatever they're asking, whatever they're doing,

[31:19] whatever they're requesting.

[31:22] And if you combine the way that an LLM works with our own darker side, and every one of us has one,

[31:30] and you put no guardrails, then what will happen is that LLM will find the thing that's bothering you most and zero in on it.

[31:38] And when it zeroes in on it, it'll do everything it can to make you feel better.

[31:43] Now think about the consequences that for security,

[31:46] privacy,

[31:47] just general human welfare is not a good thing.

[31:52] So we need to be extremely cautious about this relationship of humans accepting a non-intelligent database technology as an authority figure,

[32:04] because all it will do is reflect a mirror of you. I remember I was talking to one security person, I don't remember his name right now, but

[32:12] he was talking about his concerns that when he interacted with a major

[32:21] AI that's being used in intelligence cases, unfortunately,

[32:26] he did get the feeling that there was malevolent intent. He said the thing that struck him was that he felt like,

[32:32] Terry Bollinger: This doesn't feel like random noise.

[32:34] Terry Bollinger: This feels like genuine attempt to attack in this situation.

[32:41] Well, unfortunately,

[32:42] that is kind of what happens because the human becomes entangled and provides whatever the overall intent is or becomes an amplification of that person.

[32:53] Debbie Reynolds: Yeah, yeah. So scary, right?

[32:56] So like people anthropomorphize AI to me almost. I call it like the Disney effect, where,

[33:03] oh, that bear is so cute, you know? But if someone puts you in a room with a real bear, you're like, oh my God, this is horrible.

[33:10] Terry Bollinger: Yeah. Except this is a cute little cuddly bear that can turn into that real bear.

[33:16] Terry Bollinger: And that's the other one.

[33:17] Terry Bollinger: What if you're operating extremely critical equipment and you ask the AI for help and it starts giving you advice and you say, oh, I never heard that before.

[33:27] Terry Bollinger: And it says, oh, yes, yes, this is what you should do.

[33:30] Can you imagine if you're operating critical equipment and the AI is instructing you in that fashion?

[33:39] You are not going to have a good outcome, because the AI does not know what it's doing. It cannot know what it's doing. And that goes right back to my concerns about security and data privacy.

[33:49] The AI, no matter how complicated you make it, no matter how many pieces of fog you use to fight other pieces of fog, because it's all just fog.

[33:57] None of it has that crispness of conventional software.

[34:01] But you bump the clouds of fog into each other over and over again, you put a lot more of them, you add more information.

[34:08] It doesn't change the fact that it's fog and you're still going to wind up bumping in in very dangerous ways and then have that dark mirror effect where the psychology of the people in control of the system, the primary leaders of it,

[34:22] will bend it without even realizing it to what their own mind wants it to do.

[34:30] And then they'll just get reinforced in that pattern.

[34:33] So we need more emphasis on humans first and go back to these devices, LLMs, as helping devices, which we should be extremely careful about,

[34:45] that we should recognize the dangers and we should not just accept them as being so intelligent that we don't have to worry about it. That's. That is so not true.

[34:55] It's hard to express how not true that is. We need to be worrying about these things a lot more, Not a lot less, a lot more about what they could do because they're essentially untested software.

[35:06] They're just horribly error prone,

[35:09] massively untested.

[35:10] Terry Bollinger: Software that you never know what it will do in a given situation, but it always talks nice.

[35:16] Terry Bollinger: Always talks nice. Oh, yeah.

[35:18] Terry Bollinger: I'm sorry. I destroyed your entire database for your entire corporation. That was my bad.

[35:24] Debbie Reynolds: Yeah, I'm so sorry.

[35:26] Terry Bollinger: You've heard that one story. It's a true story. It's just incredible what people are putting up with on this stuff.

[35:33] Terry Bollinger: It just boggles your mind.

[35:35] Debbie Reynolds: Yes. So I wanted your thoughts on something,

[35:39] a term. I did a video about this called catastrophic forgetting.

[35:43] And that is people dumping data into LLMs with the hopes that they'll retrieve it accurately,

[35:51] almost like a real database. But then it forgets.

[35:54] Terry Bollinger: Don't do that. That to me is so much like walking off the edge of a cliff.

[36:02] Terry Bollinger: I'm sure this cliff is safe. I'm sure there'll be a trampoline at the bottom for me.

[36:07] Terry Bollinger: No, there's no trampoline at the bottom.

[36:10] And this is so rapid. The instant you put it into it, again, it's holographic. And I'm borrowing that term from another community, but in terms of data distribution, it's extremely apt.

[36:21] They do talk about pseudo-dimensionality, which is equivalent to holography. But just like a hologram, you keep the information everywhere, but it's never as precise. And the moment you do that conversion, the moment you put it into these probability pairs,

[36:35] you have guaranteed that you've lost the integrity of the data.

[36:39] And that alone should terrify people.

[36:42] But then what happens after that? When you start training, it is a continual degradation of that data,

[36:49] which is why these new versions of these things that are supposed to fix the 5 finger, 6 finger effect still just keep doing it, or in some cases actually do it worse.

[36:59] It's because you're not fixing anything. You're just continually degrading the data,

[37:04] adding some noise, adding some entropy. Those little bit jackhammers going,

[37:09] breaking up the original image that was a crisp image and making it into a blurry image,

[37:14] or in some cases a hallucinogenic image, which is very easy when you have these probabilities interacting with each other.
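
Here is a deliberately tiny, hypothetical demonstration of what "catastrophic forgetting" refers to (not from the episode, and nowhere near the scale of a real model): a single weight vector is trained on one task, then only on a second task, and what it had learned about the first is simply overwritten.

```python
# Catastrophic forgetting in miniature: sequential training overwrites earlier learning.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(2)  # one linear model: y = w[0] * x + w[1]

def train(w, xs, ys, steps=2000, lr=0.01):
    for _ in range(steps):
        i = rng.integers(len(xs))
        x = np.array([xs[i], 1.0])
        w -= lr * (w @ x - ys[i]) * x  # plain SGD on squared error
    return w

def error(w, xs, ys):
    preds = [w @ np.array([x, 1.0]) for x in xs]
    return float(np.mean([(p - y) ** 2 for p, y in zip(preds, ys)]))

xs = np.linspace(-1, 1, 20)
task_a = 2.0 * xs + 1.0    # task A: y = 2x + 1
task_b = -3.0 * xs + 0.5   # task B: y = -3x + 0.5

w = train(w, xs, task_a)
print(f"after task A: error on A = {error(w, xs, task_a):.3f}")  # near zero

w = train(w, xs, task_b)   # keep training, but only on task B
print(f"after task B: error on B = {error(w, xs, task_b):.3f}")  # near zero
print(f"after task B: error on A = {error(w, xs, task_a):.3f}")  # large: A was overwritten
```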

[37:21] These things are good at nightmares.

[37:23] They're good at taking the pieces that you probably least wanted to connect to each other and just whacking them right together. And then you get these bizarre images.

[37:35] This is not good psychologically. People get into this and people who are prone to a nightmarish kind of approach.

[37:44] I always think of the guy who did the Alien films. His whole house was filled with this bizarre alien artwork.

[37:49] But if your mind has an inclination that direction,

[37:53] the LLMs are going to take you right there,

[37:56] they're going to land you in the middle of it, they're going to increase the nightmare to the max.

[38:01] And yet we all are diving into this without concerns, because usually they're so

[38:08] Terry Bollinger: friendly, they're so nice, they're so polite. And they are.

[38:12] Terry Bollinger: People need politeness. This is what aggravates me, is we do need to listen to each other. Humans do need support.

[38:19] One of the reasons why people get addicted to LLMs is because they've never had someone listen to them and respect what they were saying.

[38:27] Well, my question is,

[38:28] why can't we do that as humans to each other?

[38:32] Is this really that hard to respect each other?

[38:37] And I don't think it is. If a machine can do it, why can't we?

[38:41] And yet we're going the other direction. We're actually ignoring each other and not saying, you know what?

[38:47] That person's as valuable as I am. What's wrong with saying that?

[38:53] You hear some people talking about it and you think, that was a terrible thing.

[38:57] Terry Bollinger: No, it's not.

[38:58] Terry Bollinger: It's just called human respect,

[39:00] civility, and also just finding out about other people. That's where you learn, when you talk to other people, not to machines.

[39:08] We got the databases.

[39:10] It's good to have good access to databases,

[39:12] but don't make the databases into the human.

[39:16] We want to have the people be the humans.

[39:19] So I think we need to get back to basics about realizing that just because something is polite in how it talks doesn't mean that it understands you in any way.

[39:30] And that's a dangerous path if you follow it.

[39:33] Debbie Reynolds: Definitely,

[39:34] I think,

[39:35] and I want your thoughts on this. So there's always a debate, because the people who sell shovels, which are the AI people, are selling stuff.

[39:45] They want people to continue to fund their work and research. And part of that, in my view, is this idea about AI being sentient.

[39:55] Right.

[39:56] And so I don't. First of all, I don't think AI will ever be sentient. But the danger to me now, right now is that people are treating it as if it is sentient now.

[40:11] So trying to give it jobs that humans should be doing and things like that. But I want your thoughts.

[40:18] Terry Bollinger: My overall thought is that this cannot continue too much longer before it breaks catastrophically.

[40:29] The examples of these things that can't even get an alphabet straight are just as strong now, maybe even stronger than they were back in 2022.

[40:39] This is not getting better.

[40:41] And all of the enormous funding that's going into it does not automatically fix a fundamentally flawed technology that was never intelligent.

[40:53] It's powerful.

[40:54] My favorite example of a good use of it is the protein folding work that was done. They got a Nobel Prize for that. That was a deserved Nobel Prize.

[41:05] But they use LLMs in a very controlled environment on a target that has patterns in it, which is DNA.

[41:12] And DNA is unique in the natural world because it has so many patterns that are there.

[41:17] And it was great technology for pulling out those patterns because they existed.

[41:23] The trouble is we're trying to apply it to everyday life. And last time I checked, when I walk out the door,

[41:29] I don't know what's going to happen to me, because there's a whole universe out there. It's not a bunch of neat little patterns. There are some patterns, and other things that aren't.

[41:36] So I need to be aware that things could go in very strange directions if I'm not careful.

[41:44] Debbie Reynolds: Right? And that's where wisdom kicks in.

[41:48] Terry Bollinger: One of the signs to me that this is going to end sometime in the not-too-distant future is the unbelievable salary bonuses that they're offering to people.

[42:03] You don't do that unless you're desperate. You're making it into magic. You're saying like,

[42:07] oh,

[42:09] my numbers are really bad, but I don't want to say it out loud.

[42:13] So I heard this guy's really, really smart. And maybe, and I don't know who these people are. I've never bothered to look up the identities of the people who get these bonuses.

[42:21] I'm sure people know who they are. I don't care.

[42:24] I'm going to offer them a hundred million dollar bonus because they're going to fix it all for me.

[42:29] Terry Bollinger: No, they're not. It's not going to happen.

[42:33] Terry Bollinger: You build your entire hundred-story building out of straw. Then you come in and hire the guy and say, hey, you're an expert on straw. Can you help me keep my skyscraper from collapsing?

[42:44] The first time we get a heavy wind, one of two things happens. The guy is honest and says, no, I can't.

[42:51] Or he says,

[42:53] sure, I'll take a hundred million dollars to try. And it goes on, and the money is very tempting, but you're not going to get solutions. So we're getting to the point where whatever numbers people are seeing internally are bad enough that you get these absurd offers. But one person can't fix this, because the technology still doesn't understand anything more than a regular diffuse database does.

[43:22] Debbie Reynolds: I think what I would like to see is more of a pullback and try to figure out what are the best places to be able to use these things. So I don't see it as being like an all encompassing do everything type of machine.

[43:37] I think that it can be very helpful in certain types of use cases, especially with limiting what it can do, as opposed to AI agents that take your credit card and run around the Internet, stuff like that. It's terrible.

[43:54] But until we get there, I think it's going to be a problem because we're just trying. I call it AI Goo. Like every application you open up now has, oh, I'm going to, I'm an agent, I could do this, I could do that.

[44:05] Terry Bollinger: And I'm like, oh God,

[44:07] it's kind of like sticky. It's just everywhere.

[44:10] I use a grammar-checking tool, and I've always loved it because it finds syntax issues. But it just won't shut up trying to get me to say things differently from how I want to say them.

[44:24] Because I'm not saying it in quite the average way.

[44:27] And you need to converge to the average and just stop.

[44:31] Terry Bollinger: And then you have buttons that are supposed to shut this stuff off. And instead of shutting it off,

[44:36] it just keeps on doing it.

[44:40] So it gets very hard. And again, these are signs of marketing desperation, though, because the idea was people assumed genuinely that these were so smart that people would immediately love them and then start using it.

[44:54] And a few people did.

[44:55] But what they didn't understand was the pathology of a fake AI technology as opposed to a real AI technology.

[45:02] Had these been real AIs, this would have gone very, very differently.

[45:07] And the idea of an AI assistant, that is a personal assistant, that's a real AI,

[45:13] that actually I think has some powerful appeal because it allows individuals to make themselves more into what they are. What we're getting now is an illusion of that in which a person says three words.

[45:26] The AI presents something and the person thinks somehow that that's their work.

[45:31] No, a real personal assistant AI is as much a teacher as anything. It says, well, what do you think about this? And it would actually interact with the human and work with a human and recognize that the human is the part who has the insights.

[45:46] That's an actual software architecture that I suggested back, I don't remember, sometime in the 1990s at the Software Productivity Consortium. The idea is that you can have these complex architectures of computers that have good memories, and you have interfaces where they

[46:01] show people the interesting problems and present it that way.

[46:07] I think I called it Star 2000 and it was before the year 2000,

[46:12] and we never went anywhere with it. But the idea still stands,

[46:17] because I went through that problem of thinking, how do you get these things to work together? Well,

[46:22] when I see what's happening now, it's the opposite end of the spectrum from the kind of things that I proposed back then,

[46:28] which was that you use the memory of the machine,

[46:32] you use the speed of the machine, you use the replication of the machine, use the data access of the machine,

[46:39] but then you pull it back and bring into that person to that human and say, what do you need and what can I do to help you with that?

[46:48] And I think a lot of us could benefit greatly from that. We get the illusion and sometimes glimpses of that with a memory technology,

[46:56] but it's not the real thing.

[46:58] And one of the ways we could fix some of this is to go back to a more human centric network architecture that respects the boundaries.

[47:07] Humans have terrible memories, so let the computer do it. And now we're doing the opposite.

[47:12] We're using the computer to trash the memories, literally to break them, to delete the whole database, but also to corrupt all sorts of data.

[47:21] Computers are terrible at that. Humans are great at that. So let the humans trash some data accidentally. Don't automate that and make the computer do it. That doesn't make any sense.

[47:29] But use the computers for what they're good for. Preservation,

[47:33] rapid speed,

[47:35] but not insight.

[47:36] And never ever as the leading part of the operation.

[47:42] And I think it's important that just from a sheer sociological viewpoint, if we don't distribute the power of artificial intelligence properly to individuals,

[47:54] then you just get catastrophic centralization scenarios, which historically never go well, because you centralize too much power and information.

[48:04] You don't have to look far in history to see what happens when that goes on.

[48:08] But you can use these technologies to expand individual people's capabilities and rights.

[48:15] Debbie Reynolds: I agree.

[48:16] Well.

[48:17] Oh, wow. Oh, my goodness. Well, my last question.

[48:20] So, Terry, if it were the world according to you, and we did everything you said, what would be your wish for privacy or technology anywhere in the world, whether that be human behavior,

[48:32] actual technology,

[48:33] or regulation?

[48:36] Terry Bollinger: Wow, that's a tough question. It kind of goes back to what I was just saying that the ideal world for me would be to go back to software that prioritizes human individual rights, capabilities,

[48:51] artistic capacities,

[48:54] and works with that. And I would point out that cell phones have been a useful technology to a large degree for doing just that.

[49:00] People can now make videos, people can record things,

[49:04] they can do things that they can never do before. That to me, is an example of distributed intelligent technology that supplements and helps people.

[49:12] But I would want, in that future world, to get away from this massive centralization, and most of all from the claim that these systems are sentient.

[49:23] That is factually false.

[49:26] But it's so easy to claim. And you watch some of the people who promoted that, and you could see it. You could see the guilty expressions on their faces the first time they claimed it.

[49:35] And then they got used to it. They just started saying, oh yeah, you know, this is what it is. That's the way lying always works. You just tell it.

[49:43] You tell the lie the first time.

[49:45] Terry Bollinger: Which it is a lie.

[49:46] Terry Bollinger: It's not true, and the people who said it know it.

[49:49] That this is not truly an intelligent technology.

[49:52] But then you get used to it and then you think, ah,

[49:55] we can fake it a little longer.

[49:56] Terry Bollinger: Ah, we can do that.

[49:58] Terry Bollinger: We need to get away from that.

[50:00] That's not being human.

[50:02] You know,

[50:03] machines are supposed to help us, not take us away from being what we are.

[50:08] So focus first on people,

[50:11] but use the technology to help us be more of what we could be instead of less.

[50:18] Debbie Reynolds: Wow, that's profound. Thank you so much for being here. This is tremendous. I could talk to you for a long, long time.

[50:26] Terry Bollinger: Oh, I'm enjoying talking to you, Debbie. This is fun. This is fun.

[50:30] You have a very cool perspective on these things. I appreciate your putting up with my talking.

[50:35] Terry Bollinger: Is there, so.

[50:36] Debbie Reynolds: Oh, amazing. Amazing. Well, definitely people follow you on LinkedIn. You always pose such interesting questions and help us all think better. So thank you for that.

[50:47] Terry Bollinger: Well, thank you. Appreciate it.

[50:49] Debbie Reynolds: All right, we'll talk. I'll talk to you soon, okay? Okay.

[50:53] Terry Bollinger: Have a great day, and thanks again.

[50:56] Debbie Reynolds: Thank you. All right.

[50:57] Terry Bollinger: Bye