"The Data Diva" Talks Privacy Podcast

The Data Diva E98 - Bogdan Grigorescu and Debbie Reynolds

September 20, 2022 Season 2 Episode 98
"The Data Diva" Talks Privacy Podcast
The Data Diva E98 - Bogdan Grigorescu and Debbie Reynolds
Show Notes Transcript

Debbie Reynolds “The Data Diva” talks to Bogdan Grigorescu, VP, Head of Quality Assurance Engineering & AI Systems, Scaling Automation at Afiniti. We discuss his interest in data, AI, and automation and his career trajectory; the human impact of automation; the effects of automation on Data Privacy and security; the need for trust when data is gathered; how those without access can slip through the digital divide; his post on “Automating Harm”; the need for human control and judgment in AI systems; the Google researcher who claimed a chatbot had become sentient; the importance of acknowledging psychological considerations in automated systems; and his hope for Data Privacy in the future.


51:32

SUMMARY KEYWORDS

people, data, automate, automation, systems, talking, real, access, ai, health hazard, virtual reality, computer vision, understand, bots, called, output, day, human, posts, years

SPEAKERS

Debbie Reynolds, Bogdan Grigorescu


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show, Bogdan Grigorescu. He is in London, and he is the VP and Head of Quality Assurance Engineering and AI Systems at Afiniti. Welcome.


Bogdan Grigorescu  00:46

Thanks, Debbie. Hello all, and thanks for having me on your great show. Looking forward to a great discussion.


Debbie Reynolds  00:52

Thank you so much. I love your content; I like the things that you put out. You have so much common sense and deep knowledge of technology. Even before we started recording, we were having really deep conversations, and I thought it would be wonderful to have conversations like that to share with the audience. Why don't we start by you telling me about your interest in data, AI, and automation, and your career trajectory up to where you are now?


Bogdan Grigorescu  01:28

Yes, I would automate all the time. You know, it doesn't have to be that intelligent to help you a lot. Small things that you do day in and day out become boring, but they're necessary, so it makes sense to automate them. But obviously for the right reasons; just because you can automate something doesn't mean you just do it. You have to have that purpose in mind. So whether rule-based or not, you always have to have a case for automation. And when you try to automate something, you have to go deep. Of course, you have to understand what you're trying to solve, like in any domain; this is no exception. You have to understand really well how it's supposed to work, what you are using, who you are talking to, how you collaborate, and so on. When you automate, you have a lot of failure at the beginning because there's a steep effort; then it pays off very quickly. But at the beginning, it's very costly in terms of effort for a short while; it's a spike. That has to be very well understood. It's not something where you just spend an hour or two and somehow your problem is resolved; that's not going to happen. That's why you have to be careful about what you automate and when, because that will make or break it. But people have been automating things for centuries, millennia, in various forms and shapes. I started in electronics and telecoms engineering, and it was all about automation: making my life easy first, and then maybe making other people's lives easy and saving a lot of money in the process. Because in the old days, we were repairing computer boards; I'm not talking personal computers, I'm talking industrial robotics. I was in industrial robotics. Normally you would just replace the board; but what if you don't have a board? And most of the time, we didn't. So you have to repair it. What if you can automate certain tasks? Makes sense. So that's how I started: testing all the time, experimenting all the time. I was always a quality-mindset type of guy, trying to do the right thing, not always succeeding, of course; most of the time I didn't succeed at first, but little by little, getting there. And so it became a way of working, trying not to cut corners. I started learning more about the current state of AI systems about four to five years ago, with natural language processing and conversational experiences. Since then, it has been a roller coaster. I got involved with standards for AI systems, with the ethics aspects, deeply, and of course the data around it: how should we treat data, and especially personal data, in today's world that is so different from even 10 years ago.
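[Editor's note: a rough sketch of the break-even arithmetic behind the upfront effort spike described above; all numbers are invented for illustration.]

```python
# Invented numbers: automation costs a spike of effort up front,
# then pays for itself over repeated runs.
build_hours   = 40     # one-time effort to automate the task
manual_hours  = 0.5    # effort per manual run
auto_hours    = 0.02   # effort per automated run (just review the output)
runs_per_week = 25

saved_per_week = runs_per_week * (manual_hours - auto_hours)  # 12 h/week
weeks_to_break_even = build_hours / saved_per_week
print(f"break-even after {weeks_to_break_even:.1f} weeks")    # ~3.3 weeks
```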


Debbie Reynolds  04:57

So thank you for that. I think there's a lot going on. We have talked a bit, and I would love to get your thoughts on automation and its human impact. Because I'm seeing a lot of people try, I don't know, things like brain-computer interfaces, trying to free up that human element in different ways. What are your thoughts about automation and the human element?


Bogdan Grigorescu  05:37

It's more non-technical than technical, I'd say. I mean, the technical aspect is, of course, very big and very important and very complex. But the psychological aspect trumps it, because very small changes make a big impact on people but little impact on machines. Also, people's behavior changes, but machines don't change. So that interaction is more or less continuously reshaped in some way or form. And as I said, even if it's a small change, it usually has an important impact, positive or negative, on people. Case in point, perhaps, these VR headsets that became better and better year on year. I mean, they're still awful; that's the word, awful. They're a health hazard. If you wear one for a long time, obviously, not just two or three minutes but half an hour plus, you really get headaches, a number of symptoms; the eyes get affected, and so on. Plus, they're bulky. Also, there's a big question mark about these things having to communicate over the Internet, right? So you have a radio, similar to a mobile phone; not exactly the same, but nevertheless, it's right next to your brain continuously for hours. Is that good in the long run? Well, as we know, it's not good to sleep with a fridge in the same room; doctors will advise you against that, because in the long run, you can develop, well, cancer. It's a major health hazard. So what about a much smaller electromagnetic field, but right next to your brain for hours, five or seven days a week, for a few years? Could that be a health hazard? I think so, no matter what the hype says. So that's machine interaction right there. But done right, it could be very good; for example, doctors operating remotely. That's also human-machine interaction, but it has very positive results if it's done right, of course: easy to use, with very low latency, so doctors can see pretty much in real time what's happening; that's very important. High precision, so doctors get that feedback, and a little bit of help if they're just about to go amiss by a millimeter or so. That accurate real-time feedback is also a very natural part of human-machine interaction. But let's not get overhyped here. It's incremental steps. Every once in a while there's a breakthrough, but that's once in a while, right? Not every year or every two or three years. And it all adds up across the years; it's not going to be quick, and it's going to improve continuously. And pretty much like in automation: start a little bit small, but keep scale at the back of your head. Design for scale, but don't go all in at the beginning, because it's not going to work out. Start small, automate the small things that all add up, but do it in such a way that you can scale up big.


Debbie Reynolds  09:38

Excellent. What are your thoughts about the impact of automation on privacy and security? There's more data out in the world, there's more information, and there's more stuff that needs protecting, right? So I think automation has changed the way we think, or the way we should think, about privacy and protecting data and information. What are your thoughts?


Bogdan Grigorescu  10:05

In today's world, there are a few factors that enable planetary-scale automation. Infrastructure is one: the cost of hardware is very low today for the computing power you get, the storage, the capacity and performance of networks, and so on. Very, very low cost compared to, say, 10 or 20 years ago, let alone 50 years; even just 10, 15, or 20 years ago, things were more than 10 times slower in real terms. The other one is mobility. You can access a lot of stuff from almost anywhere in the world; that was a dream 20 years ago. We're talking video calls and downloading big files, even movies of 20, 30, or 50 gig, on the go; not in seconds, but certainly not in hours, much faster than that. Storage: my iPhone has 120 gigabytes of storage; 20 years ago, a powerful desktop computer had that much. So mobility is another factor. Then Wi-Fi, which is almost everywhere now, widespread, and we have Wi-Fi 6, with Wi-Fi 6E coming up, which is actually a breakthrough. IoT is coming up with edge computing, which means a lot of the computing happens not in some central place but on the device, and the device is usually what people have on them. So on the go, you have a lot of processing going on, and the latency is so low that, for people, it's essentially real time. Autonomous vehicles: they're not really autonomous, they're Level 2 or 3, but you get the point. Drones, semi-autonomous drones, and computer vision are much more developed now. Shell, for example, is using drones with computer vision and deep learning models behind the scenes to assess the state of pipelines. So instead of sending engineers for kilometers and miles to investigate, they go exactly to the point where there's a problem. And I think soon they will start using robotic drones to paint, or even to repair certain things, instead of sending people there. So that's also a form of automation: AI systems, computer vision, robotics, autonomous and semi-autonomous vehicles, deep learning. But that uses a lot of data, a lot of data points from the environment, and not just from the environment. And what that means for people is that a lot of personal data is being collected, is being processed, and is also being used for inference. For example, insurance companies get the data from their customers, mostly legally. But then, in order to assess the risk, they infer, using machine learning models and other techniques. They create data out of your data, and that is used to calculate your risk and your premium. There are no laws barring that, but is it fair? I think consumers are pretty doubtful of the fairness of that. The other thing is social media platforms, and trackers all over the place, and impersonation. You have your picture from some conference, or on your LinkedIn profile or your Facebook profile; that gets collected by automated scrapers that scrape the web day in, day out, and using a crude computer vision model, they can just impersonate you. Obviously, they will need a little bit of other personal data, not just a picture, but you get my point: you can be impersonated much more easily than, say, 20 years ago. That's happening. Then the bots; again, social media is probably the most prominent example.
It's full of automated bots spreading disinformation, misinformation, or just hype: fake likes, fake comments, just to raise awareness with the social media algorithms and make certain posts more prominent, so that the author of that post, who is a real person, gets more attention because of those bots. The fake reviews on Amazon, and so on, we know about, but that's all automated. The frauds of calling thousands and thousands of people through dialing bots, those are also up and coming. Full impersonation, so not just a picture or the voice but both, where you can create anyone, is coming too; the barrier to entry is still very high. Someone can create a video of me or yourself talking about something that actually goes against your beliefs, and it looks quite real. This is in its infancy and still done at a small scale. So what that means is that people have to be aware that their most important asset today is their personal data. Not their house, not the money in their bank account; by the way, that's all personal data too, the money in your bank account is personal data. But your personal data, you can't touch it; it's part of your identity, what makes you yourself. And it's out there for the taking. Yes, there is GDPR and the Data Privacy laws and this and that. But how do you know who accesses your personal data? Did you even give your consent? Because when it asks, okay, are you okay with giving your consent, and you say okay: does the social media platform get it, and just the social media platform? What about the ISP in between? What is the social media platform doing with it; do we actually know? It's selling it, legally, to whom you don't know. There are many entities, in fact, that will buy it in some form or shape: piecemeal, or the whole lot, wholesale. The fact is, that's a piece of you. Your personal data is a piece of yourself. So that's the situation right now.
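[Editor's note: a minimal, hypothetical sketch of the inference point above — how an insurer might "create data out of your data". The feature names and weights are invented for illustration and are not any real insurer's model.]

```python
# Hypothetical sketch: deriving new data (a risk score) from collected
# personal data. Features and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class CustomerData:
    age: int
    postcode_claim_rate: float  # inferred from where you live
    miles_per_year: int         # inferred from telematics / app data

def inferred_risk_score(c: CustomerData) -> float:
    """Combine collected attributes into a derived score -- data the
    customer never supplied directly."""
    score = 0.8 if c.age < 25 else 0.2
    score += c.postcode_claim_rate * 2.0
    score += c.miles_per_year / 20_000
    return score

def premium(base: float, c: CustomerData) -> float:
    # The derived score, not the raw data, ends up setting the price.
    return base * (1.0 + inferred_risk_score(c))

print(premium(500.0, CustomerData(23, 0.15, 12_000)))  # 1350.0
```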


Debbie Reynolds  18:12

So let's talk about trust. As these systems become more complex, they are gathering more data. The regulations are trying to create more agency for people. But in order for people to want to give data to these systems and to see them as benefiting them, a lot of this, in my view, has to have a trust factor. What are your thoughts about that?


Bogdan Grigorescu  18:45

Well, trust has been destroyed, like, completely destroyed, because of what we were just talking about: the masses of bots on social media, the impersonation, the deep fakes, audio, video, and now audio and video together, the fake reviews. I personally spotted bots on LinkedIn trying to connect with me; I'm not sure I got them all, but I certainly got quite a few. Also, as people become more aware of these shady practices of just taking the data, i.e., pieces of themselves, they just don't trust platforms that much anymore; certainly a lot less than three or four years ago. And access to data, in general, is the biggest driver for quality, but also the prime factor in the digital divide. There's siloization; some people talk about colonization, and yes, it is a form of colonization. It might sound preposterous, but it's real, because access dictates everything. Access in health care: no matter how advanced it is, if those advanced methods of curing bad diseases are really not accessible to most people, what's the value of them? They exist, but only one in a million can access them. It's the same with data. So who has access to data drives everything, and it's reliable access to data that matters, not patchy access; not, now you have access, tomorrow you don't really have it, and then you have it again. That has no value. Reliable access to average data is way better than unreliable access to high-quality data.


Debbie Reynolds  21:09

I agree.


Bogdan Grigorescu  21:11

But you see, that's how bots operate. They have reliable access to data from the masses. Most of it is not quality data, but their access is reliable; that's the key thing. So they scrape all this mass of data, of which they may use not even 1%. But they have it reliably, and they do something with it day in and day out. And that's why they have such a massive impact: because of the access that they have.


Debbie Reynolds  21:47

I agree. I agree. I think you touched on a couple of different points. One is the digital divide, and that's something I've talked about a lot. I think we're creating a digital caste system, where people who have access to data will have more opportunities and better insights to be able to do things, and people who don't have access won't have that, for sure. I would love to talk about...


Bogdan Grigorescu  22:20

Sorry to interrupt, but with that said, this digital divide is really evil, really, really evil. And it's bad for the vast majority of everybody on this planet; not for everybody, but almost everybody. It's like a classroom full of pupils. The pupils sit in rows, and those in the front row, or the second row, up there at the front, have better access to the teacher: they hear better, and they can make themselves heard much more easily. Those at the back, particularly if they're not that tall, are almost invisible. They're not heard. It's similar to that, obviously on a much, much bigger scale, but it's similar to that. Forgotten, forgotten unless useful. They don't have a say, and if they do, it is by being allowed: now you're allowed, now you're not allowed. If you fulfill a purpose, then you're allowed to speak, up to a certain point. If you don't, then you don't exist. That's what the digital divide does to us.


Debbie Reynolds  23:48

Yeah, I agree with that wholeheartedly. Absolutely. It's definitely a huge problem. I would love to talk about a Brookings Institution report that you posted about; I think it was called "Automating Harm". It talks about digital things, like IoT devices, collecting data. Can you talk a little bit about this? I thought it was a fascinating study.


Bogdan Grigorescu  24:21

It's all about caring, in the end. It's all about care. Do you care about the data subjects or not, more than just a little? Do you really care? Because if you don't, and you deploy these AI systems, which are essentially automation, intelligent automation, then what's going to happen is that you're going to automate harm. There will be major negative impacts that will scale by themselves because of the nature of the thing. AI systems do not have written rules; they're not rule-based. The output becomes input, or part of the output becomes input. I'm not going to go into technicalities, but the output has a major influence on the input, the input has a major influence on the output, and the output becomes input; you get my point. And when that is done at a really big scale, it means there will be unforeseen impacts in remote jurisdictions. For example, very much simplified but real nonetheless: a company in, say, the US, and it doesn't have to be the US, it could be any other Western country, has some troubles, and they have to make redundancies, or actually cut their orders significantly. Now, they have a number of vendors and suppliers, and those suppliers, at least some of them, rely on services from, let's say, Africa or Southeast Asia, remote jurisdictions. They have a partnership, but those companies in Africa have no idea who the end client is; their client is a vendor whose supply of services is just being cut in the US. And so they lose business, and they have to cut the orders back to the African service provider significantly. The African service provider has to lay off people en masse, and those people have no idea why; it just happened. Even the bosses have no idea why. They just say, we can't get orders anymore; we've done nothing wrong, we're always on time, on quality, and good on price; they don't have the money to place that many orders anymore, so we have to lay off people. But they have no idea why. And that's the nature of global economics. But when these decisions rely on output from AI systems, that's where the problem comes in: if there's no good observability there and the output is not actually understood, it is taken at face value. Those decisions might have been very different, or made in a very different way, or put into practice in a very different way, had that output been questioned more. Is it truly understood? If the answer is, yes, actually, this is the best way we have, we can't do it any other way, then at least try to explain, give some heads-up to people. But if it's done for speed, the computer says so, cut the orders in half, lay off thousands of people, and that's that, then the data subjects, the people that suffer, have no idea why. In, I think, 2020, there was a case in one of the US states where somebody was in a court of law, accused of doing something unlawful. It was not serious enough to end up in jail, but it did end in a criminal record. Now, you have a human right to challenge the decision of the judge and ask for clarification, how did you get to that conclusion, and the judge is obliged to explain. Here is the problem: in that case, the judge based his decision in part on the output of an AI system. He did not know why the AI system had that output, so he asked the company that was running it, so that he could explain his decision. And he was refused an explanation.
Because IP rights were involved: we can't tell you, because that would jeopardize our trade secrets. That's it. So the accused had his human rights breached through the invocation of IP rights, because the output of the AI system was not explained, which in turn would have explained the judicial decision, and may even have overturned it; you never know. And so that's another impact of automation and of the lack of regulation and laws with respect to IP rights.
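[Editor's note: a toy sketch of the "output becomes input" loop described above. The model, the numbers, and the 5% safety trim are invented for illustration.]

```python
# Toy feedback loop: a forecasting bot's output drives orders, and those
# orders become the next input, so a small real dip compounds on its own.
def forecast_next(history: list[float]) -> float:
    """Naive trend extrapolation -- a stand-in for any opaque model."""
    return history[-1] + (history[-1] - history[-2])

demand = [100.0, 98.0]                # one small real dip to start
for month in range(6):
    predicted = forecast_next(demand)
    order = max(predicted * 0.95, 0)  # system also trims 5% "for safety"
    demand.append(order)              # the output is now the input

print([round(x, 1) for x in demand])
# [100.0, 98.0, 91.2, 80.2, 65.7, 48.7, 30.0, 10.9] -- a 2% dip becomes
# a collapse that suppliers several links away feel with no visible cause.
```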


Debbie Reynolds  30:35

Yeah, with these systems, I feel like it has to be a human right. I think humans have to make the final judgment, right? Even if they use AI systems to gather information and insights, I don't think it's acceptable to say, well, the computer told me this, or the algorithm told me that. I mean, it has to be deeper than that. I'm hoping to see more human judgment at the top of these types of decisions. What are your thoughts?


Bogdan Grigorescu  31:05

Yeah, I mean, humans in the loop should, in my opinion, be legislated. Do not externalize decisions, no matter how small, to machines. The output can point in one direction, but decisions should always be human, and should not take that output at face value: the computer says so, so this is what I'm going to do, as a way of working. I think that should actually be legislated, because the risk is just too high, and because data travels across jurisdictions in unforeseen ways, the impact may be in a totally different part of the world. It's not always going to be local; you cannot foresee that. That's the point: because of the speed, because of globalized economies, and because data travels across countries and continents, and it does travel in unforeseen ways as well, nobody can actually understand the impact. So better safe than sorry: human in the loop at all times. I agree with that.
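[Editor's note: a minimal, hypothetical sketch of the human-in-the-loop pattern argued for above — model output stays advisory, and nothing executes without a recorded human judgment. The names and the console review step are invented.]

```python
# Hypothetical human-in-the-loop gate: the model recommends, a human decides.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    model_score: float
    explanation: str  # require one; unexplained output is refused outright

def human_review(rec: Recommendation) -> bool:
    """Stand-in for a real reviewer; here we just ask on the console."""
    print(f"Model suggests: {rec.action} (score {rec.model_score:.2f})")
    print(f"Because: {rec.explanation}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def decide(rec: Recommendation) -> str:
    if not rec.explanation:
        return "rejected: unexplained output cannot be acted on"
    if human_review(rec):
        return f"executed: {rec.action} (human-approved)"
    return "rejected by human reviewer"

print(decide(Recommendation("cut orders by 50%", 0.91,
                            "forecast predicts regional demand drop")))
```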


Debbie Reynolds  32:33

Well, let's talk about a story that's been in the news; you posted about this as well. I read it, and I just thought, oh, my goodness. This has to do with the Google researcher who thought that a chatbot had become sentient. You know, I know what you're going to say, but I'd love to hear your thoughts on this, and I'll give my thoughts as well.


Bogdan Grigorescu  33:04

Yeah, you know, when people use things, objects, tools, and so on, and they use them regularly, on a daily basis, they personify them; it becomes personal. And they're sad when something is broken and they have to let it go because it's beyond repair; they become fond of those objects. It's psychology. But the same is true with non-physical objects that we interact with, like, in this case, this chatbot, which actually has a large language model behind it, an AI system; but your interaction is through a screen, or you just hear the voice, which sounds quite natural. Also, if you want to be convinced of something you're not yet convinced of, it means you are already half convinced. It's a kind of bias, right? You're anything but neutral; very unscientific, of course, and not much critical thinking: you don't question much, because it's reinforcing your bias, it's reinforcing what you want to believe. Alright, it answers all these tough, human-like questions, surely it cannot know that? Well, why do you say it cannot know that? Put it through a Turing test; ask somebody that knows much more about Turing tests than yourself; talk to more people in the domain and ask their opinion. Experiment hundreds of times, thousands of times, in all sorts of contexts, the more diverse the better, before you actually reach a conclusion. So yes, you're going to have sentiments and feelings about your work; that's fine. But don't let them cloud your judgment, because that's very dangerous and unscientific. And the hype plays a lot into this. That's been going on for quite a while now, and it has reached absurd levels. Essentially, it pushed the science aside, swept it right under the carpet. You see it in the masses of papers that have dozens of authors; sometimes the list of authors is one or two pages long. How can you have 50 authors on a paper? It's beyond me. And you know how it goes: if you don't write papers by the dozen per year, you're not going to get funding for your lab, and therefore you're out of a job as an academic. So the academic is now pushed to be a salesman and just write, no matter what the quality is. And so, obviously, you have to put the science aside, because science means one or two high-quality papers a year. That's about it, because it takes time, because you have to experiment a lot, because you have to go through peer review multiple times, and you have to be open to criticism before you publish, so that you do the right thing. That is very time-consuming. I mean, how many papers did Marie Curie write per year? Probably not even one. Exactly. Because she was busy experimenting in the lab that she built with Pierre Curie, with her own hands, in her own free time, while she was teaching at a university. But that's a true scientist, isn't it? So the hype has a lot to answer for. And the other part is psychological. As I said, if you want to be convinced about something, you're already half convinced, and you will just try to reinforce your bias no matter what. So unless it's pretty obviously false, you're going to be inclined to believe it. We have to question everything, really.
As a scientific method, we have to question everything, because the aim is to do the right thing. We don't question because we can question; we question so that we are sure we are doing the right thing. And we may not be right, and then we have to change our opinions, and that's fine. Right? That's fine. But that's my opinion on this chatbot that some people believe is sentient and has feelings and all that.


Debbie Reynolds  38:30

There's a psychological element here as well, I think. To me, a parallel would be maybe in virtual reality, or people creating deep fakes, right? Sometimes your brain can't tell the difference between something that's real and something that's fake. So you may have real emotions and feelings and thoughts about these things, but that doesn't make them real. So being able to, like you say, do the science and not go on the feeling is very important. But one of the things I'm concerned about is when people are in systems like VR, where they're immersed and they're having reactions to or feelings about spaces that aren't real, but they're having real physiological reactions to them as if they were.


Bogdan Grigorescu  39:25

Yes, and I struggle with this so-called virtual reality. I mean, why would you even want that? There are, of course, some practical use cases, but not for the general public, if at all. For example, in research, medical research for example, and simulation; but simulation has been around for a long time, in flight simulation and space and so on. So it's not like it's useless. It is useful, but not for the general public. I mean, why would you stay glued to a screen of sorts, whether it's a VR headset or just a computer screen, where you're an avatar that is supposed to represent you; I mean, bytes representing you, come on. And in a very unsafe environment, too, where you might believe that it's totally safe. No, it's not safe at all; don't get me started on the network vulnerabilities, DNS and CDNs. It's just unsafe, believe me: technically it's unsafe, psychologically it's unsafe, yet it is presented as safe. We have already seen women being harassed in virtual reality, already! I mean, how safe is that? And the thing is, you are static, static for long periods of time, which is very, very unhealthy. And for what? What do you want to be in the end? Do you want to be a couch potato, just looking at a screen for hours on end, all day? Is this what you want? It's the ultimate question for humankind, right? Shakespeare put it as, to be or not to be, that is the question. What do you want to be? Not to do, not to achieve, but to be; they're not the same thing. Do you want to be a good person? Or do you want to be someone just enjoying comfort? Because comfort is a big risk, right? Comfort makes you lazy, procrastinating, and uncaring, and in the end, you're going to chase your comfort; you're going to do whatever it takes to get your comfort. It's kind of addictive if it's too much. Of course you need some comfort, for sure; but what if it is too much? I just fail to see how being immersed in virtual reality will make you a better person or help you become a better person. I just don't see it. I might be stupid, but I just don't see it. And if somebody can explain to me in plain English, you do at least two or three hours a day in virtual reality, and this is what it is going to help you become, not do, but become, right, and demonstrate that it's going to help me become a better person, then I'll spend three hours a day in virtual reality, no question about it. But allow me to doubt it quite a lot.


Debbie Reynolds  43:12

Wow. So if it were the world according to Bogdan, and we did everything you said, what would be your wish for privacy, AI, or automation? What would be your wish?


Bogdan Grigorescu  43:27

Caring. Care about your data subjects. Data subjects means the people that actually own the data that is used by AI systems to create jobs, reduce costs and inefficiencies, and make people happier because they don't have to do repetitive work and can concentrate on interesting topics and work, making life better. But you have to care. Everybody has to care, from the junior engineer to the business owner, to the venture capitalists that sponsor it, to the project manager, to the end user in the company. And the data subjects should care too. What is my data used for? By whom? Have a good idea about it; every single detail is not needed, but have a good idea about it. So caring, caring about it all, and not just about one element, whether it's increasing revenues or cutting costs, and that's it. Those, yes, but a few other things as well, in balance. That's my wish: for people to care more, and to care more about each other as well.


Debbie Reynolds  44:53

That's wonderful. Thank you so much. This is a great session. I'm sure people will love it as much as I do. You're such a deep thinker, and I'd love for people to follow you on LinkedIn and see your commentary and things like that.


Bogdan Grigorescu  45:07

Virtual reality, like, I just can't get it. So I talked to a few people in Africa, in Nigeria and in Ghana. I had read about the challenges, but when I talked to them directly, I got a whole different level of understanding, because it was coming straight from the source, right? Now, virtual reality is not the source. So ideally, I would travel to Nigeria or Ghana or some other African country, and just by seeing things, I'd get ideas and say, actually, you know, these people have it hard; they have to rely on the cloud to do even the most basic things, and they are very innovative because of that, right? Because not having, but wanting to have, makes you innovative. How do I get that understanding without talking to people who come from there? Oh, virtual reality will give you that? No matter how good it is, it won't. You know, it's like comparing coding and programming: coding is to programming what typing is to writing. Right? In the same way, looking at a screen with a lot of information is like reading an article in a newspaper; do you become knowledgeable? No, you become informed, but you don't get it. You don't get it until you emotionally connect with that environment, with those people. And let me assure you, it will be very relatable, always, because we're all people, and there's a lot more that unites us than divides us, a lot. And that's just good. But you're not going to get that from a screen. The screen is good when you cannot travel, and obviously you get informed; you do not get knowledgeable. In the end, you have to have that personal relationship, a personal conversation with somebody that is from there. And they will learn from you as well. It's a two-way highway.


Debbie Reynolds  47:30

Yeah, I agree. I think the thing that's missing too often is that we say, let's develop this tool or this technique because we can make money or drive revenue or something else, and it doesn't really bring the human in: how does that benefit me as an individual? You should really think about that. So having more of a human-centric approach, using technology in ways that help people and don't hurt them, I think, will be helpful.


Bogdan Grigorescu  48:49

Yeah: what does it mean for me, what does it mean for my customers, what does it mean for my department and for my team as well? For my team, does it mean that they will have to work 12 hours a day, 15 hours a day, for months on end? Or does it mean that we'll be bogged down doing mostly boring things, because we have to get an outcome but getting there means doing boring things all day long? Get that meaning before you do it. And if you're forced to do it, you still have to get that meaning; there's no other way, it needs doing, and that's the meaning right there. You have to sweat it. Okay, I get it; I'm going to sweat it, because I understand the purpose, I relate to the purpose. It's nasty, it's uncomfortable, but it's beneficial in the end. But you've got to relate to that; you've got to have that meaning. And if you don't get that meaning there, what meaning will you get from virtual reality? I struggle to understand that.


Debbie Reynolds  49:16

I totally understand. Oh, my goodness, we could probably go on for another hour, I'm sure. But thank you so much for being on the show. This was wonderful. I'm happy to chat more with you about some of your ideas, and hopefully, we can collaborate in the future.


Bogdan Grigorescu  49:33

Sure. Thank you very much. I really appreciate your show; I'm an avid reader of your posts and comments as well. And I hope it will get an even higher profile; you should be on TV, just because everybody understands what you're saying. It's high thinking; it's hard-hitting; it's deep; it's wide. And everybody understands it, unless they don't want to understand. Okay, if you're biased, if you really don't want to get it, you're not going to get it, of course. But other than that, everybody understands it. That is what I like. Most complex reports that are hard-hitting fail to reach people because people don't understand them. With shows like this, they at least understand the fundamentals of what it means for them; it educates them.


Debbie Reynolds  50:31

That's a high compliment. Thank you so much. Oh, wow. That's so sweet. Oh, yeah. I'd love to chat with you further. This is great. This is great. I'm sure people will really love this episode.


Bogdan Grigorescu  50:44

I hope they do. I hope they find it useful. And I'm always open to questions and collaboration.


Debbie Reynolds  50:51

Excellent. Thank you. Thank you.


Bogdan Grigorescu  50:54

Thank you again, Debbie.