"The Data Diva" Talks Privacy Podcast

The Data Diva E183 - Debesh Choudhury, Ph.D. and Debbie Reynolds

May 07, 2024 Season 4 Episode 183

Debbie Reynolds “The Data Diva” talks to Debesh Choudhury, PhD, Information Security Researcher (India). We discuss the application of image recognition for security, with both expressing concerns about the accuracy and sensitivity of the technique, particularly in matching images and creating databases of individuals. We also highlight the potential vulnerabilities in image security, emphasizing the need for complex hashes and the development of quantum-proof techniques. Additionally, the conversation touches on the limitations of voice recognition as a biometric factor for authentication, focusing on the ease of spoofing and the necessity of multiple factors in security measures. Debbie expresses deep concern about the rapid advancement and potential dangers of deepfake technology, citing a news story about a significant financial loss resulting from a deepfake video call. Debesh emphasizes the need for research to detect deepfakes and discusses the multifaceted impact of deepfake technology, including its potential positive applications in entertainment and the movie industry, as well as its negative implications in political scenarios and personal lives. The discussion also touches on the pressure for companies to adopt AI and the potential risks of overreliance on artificial intelligence, drawing parallels to historical industry shifts such as Microsoft's embrace of Linux. Debesh raises concerns about the widespread adoption of biometrics and its potential threats to privacy, citing examples of government orders and the impact of the pandemic. We discuss the negative impact of such technologies on individuals and advocate for their rights. We also discuss the need for logical thinking and education in the context of introducing AI and robotics courses in high schools, emphasizing the importance of nurturing critical thinking skills in students, and Debesh shares his wish for the future of Data Privacy.


37:17

SUMMARY KEYWORDS

biometrics, techniques, fakes, means, ai, work, application, image, technology, utilized, artificial intelligence, problem, deep, conference, security, privacy, change, good, spoofs, fingerprint

SPEAKERS

Debbie Reynolds, Debesh Choudhury


Debbie Reynolds  00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a very special guest on the show all the way from India, Debesh Choudhury, PhD; he's an Information Security Researcher. Welcome.


Debesh Choudhury  00:39

Hi, thank you, Debbie. Thanks for inviting me.


Debbie Reynolds  00:44

Wow, I'm excited to have you on the show. I love the things that you write, I see you're a podcaster as well with your own show, and you really dig deep on data issues and information security issues. So, I would love for you to introduce yourself and give us an idea of your career trajectory. How did you get to where you are right now?


Debesh Choudhury  01:10

Thank you. Presently, I am focused on information security, but I came to information security after working in various fields. After my Masters, I went to pursue doctoral research, a Ph.D. That was in holography, three-dimensional image processing, and a little bit of optical techniques, which are also relevant in security applications such as privacy protection. After completing my Ph.D., I joined a Defense Research Lab of the Government of India as a scientist; there, I worked on the development of electro-optical instrumentation, such as night vision devices. After that, I went for post-doctoral research in Japan at the University of Electro-Communications from 2000 to 2002. There, I worked on a project focused on wavefront sensing; wavefront means the shape of the light waves coming to our eyes and the sensors, which basically tells about the quality of the image. I also did some work on three-dimensional target recognition, an interest I picked up from a colleague who was working in a different lab. My professor permitted me, and I worked on an experiment on three-dimensional target recognition using fringe projection, like what you see in Apple's iPhone; the 3D Face ID works on a similar principle, in that Apple phones project an invisible wavelength of light, a structured light pattern, onto the human face, gather the information, and perform recognition. I did that type of thing for different three-dimensional objects. After returning, I went back to my regular work on electro-optical instrumentation. Besides that, I also pursued an interest in target recognition and applied it to three-dimensional human face recognition long ago; it started in 2003.
You can say Apple actually introduced that technique in 2016 with the iPhone, and ours was earlier, but we did not consider going for a commercial application. I was a little bit frustrated by the tolerance of biometrics to physical conditions. I also mixed the modalities, like face and fingerprint, so that the target recognition is more robust and stronger, increasing the strength of the performance, but still I found that it does not give the result, because it is not deterministic. So, after doing a lot of research, I presented papers on privacy protection, even face recognition, in China and also in the US, in Baltimore, at a conference on Defense and Security from SPIE. I understood that it is not a technique that gives a result that can be utilized for authentication in an application like banking; we need a deterministic answer, yes or no. In 2019, I had an interaction with Ito Shikoku, and because of my connection with Japan over many years, we got along quite well. I explored their technique and introduced myself as a learner of authentication using graphical images, and that is my present interest. On every occasion when I write about security, especially authentication techniques, I see many posts advertising biometrics. Because of my real hands-on experiments, I am convinced that it is not going to give the result which is required for such authentications, and they are proposing to apply it, say, for banking and other things. To tell you without naming companies: big companies request me to make some remarks on their face recognition improvements. It is a paid thing, of course, but I do not agree every time, because I know that it is a problem, and it is not actually about the money; it is a true requirement of something which matters in human society as a whole. Throughout the world, banking is a very important thing.
If biometrics is made the default password, then it will be very, very problematic. Whenever any bank tried it, the very first day they tried it, it was broken, and it caused problems. Presently, I am also trying to use graphical images in the cryptocurrency domain, where people need to protect their private keys. Once you lose the private key, you lose all the cryptocurrency assets. So I am trying to develop techniques using graphical images to protect and preserve private keys, not only for cryptocurrency but for any other blockchain-based techniques, because in decentralized applications, all the blockchains have an inherent issue; it is not really an issue, it is a trait: a private key, once lost, is lost forever. It cannot be reset or regenerated, because the system is decentralized, and that is their particular property, why they say it is very strong. That work I am also doing, and some paper presentations were also given at a conference in the US in 2022. I have a collaborator in the USA, a university professor, so you can see the technology interest. I also taught at a university as a professor; after quitting the defense job, I worked from 2007 to 2017 as a professor of Electronics and Communication Engineering. I took an interest in mentoring students, especially young students, high school students, toward taking a path which is correct. There is always hype. There is the hype of artificial intelligence; there was the hype and hysteria of biometrics and many other technologies.
Even today, in the morning, I visited a high school in Kolkata, a very big top high school; it is like a factory, I don't know how many students. They have to introduce a lab for robotics and artificial intelligence, because from the board, and worldwide also, there is an obligation, I would not say directly political, to accept the hype technologies. First it was blockchain, and blockchain is good, but they made it a hype, that blockchain means everything is good, that you can do anything with blockchain. Then came biometrics. They are saying that biometrics has got this much, say $144 billion, of investment. Investment actually doesn't prove the authenticity of a technique. Investors always invest because they think there is a potential market for business. So they invest, but investment doesn't actually prove whether the technique is foolproof or problematic. So I have said a lot of things. Please ask if you have any other questions from here or any other place.


Debbie Reynolds  09:00

Yes, I do have a question. I would love for you to expound upon what you said, which is true: biometrics are not deterministic. I think the problem we have is that a lot of people who are selling technology are selling it as very accurate, foolproof, you don't need other ways to verify or authenticate people, and that's just not true. So, explain for the audience the ways in which biometrics are not deterministic.


Debesh Choudhury  09:36

Yes. Firstly, biometric recognition depends on comparing biometric traits digitally with software, and biometric traits are always changing. Number one, the trait is changing, and it depends on how it is captured: say you are looking at the camera and they capture your fingerprint or face, and every time the camera captures images, the scene is dynamic, so it is changing. The picture changes, the dimensions change, many things change. While pursuing my Ph.D., as I mentioned, there was optical security, and there were many techniques. One professor, I don't want to name him, was also on a US Presidential Defense committee, like Homeland Security. He did very early work on optical security, and after that, there are many applications where they say that biometrics can be tolerant: it is rotation invariant, it is illumination invariant, it is scale invariant, like when I come closer to the camera, and many other things. Now, I understand that all these tolerances, developed by researchers worldwide, with innumerable research papers published in peer-reviewed, high-impact journals and presented at conferences, with many investments made along the way from the Department of Defense in the US and other countries; all these huge amounts of money were spent to develop techniques which will help the criminals. See, when it is tolerant, it means, say, recently in India, there is the Digital Identity Project; they call it Aadhaar. To Aadhaar they have attached the Aadhaar-enabled payment system using fingerprints, and all banks must use the Aadhaar-enabled payment system; that is ironic, I mean, I don't know why. One of the biggest banks, the State Bank of India, wrote to Aadhaar in 2017 that we don't want to put the Aadhaar-enabled payment system in our application, but Aadhaar replied to the State Bank of India that it is mandatory, because they have made a government act. So, that means they have opened the window for the criminals.
Now, every month, every day, there are cash transactions using fingerprint spoofs in India, and all the poor people, say they have only a meager 10,000 Indian rupees saved in their account, and at a time 30,000 Indian rupees can be withdrawn using the Aadhaar-enabled payment system. So that means all these developments, the tolerances and improvements of biometric recognition techniques, have actually, indirectly, helped criminals break the authentication systems' security and fake banking transactions, like this example here in India. That means, one point, spoofs are very easy to make, and another point is that it works on the principle of tolerance. When you say that a physical condition is granting the entry, with scale invariance, illumination invariance, and many other things, then people can use spoofs. Take Apple's 3D Face ID: one bank in Singapore tried it in 2017. There are huge reports from security researchers that they can use a face mask and break it; even, say, a son and mother who are similar looking can break the 3D Face ID. Now three-dimensional face recognition is being projected as a very improved thing, and almost every couple of months I get a request to please verify everything and give a comment. I even recently got a hybrid job offer from Europe to be an advisor on biometrics. But I am not doing it, because I tested these steps, and even the NIST standards that are referenced, reviewed by several professors of US universities, mention that there are inherent probabilistic issues. Issues means it is a trait; it is natural. Nobody can actually get away from the probabilistic processing of biometric security. That is a problem.


Debbie Reynolds  14:12

Yeah, and I'm very concerned, very concerned, with this area, so I talk about it a lot, especially because these systems work very poorly on people of color, very poorly. Some of the things you were talking about: in a lot of the research into these biometric tools, they're using photographs from driver's license bureaus and things like that. Those are very clear pictures, good light, and stuff like that. So it doesn't really apply to situations where the light isn't as good. A lot of times, they're trying to use this as evidence in cases, and there have been situations where people have been falsely accused and arrested as a result of the use of biometrics, because some people assume wrongly that these technologies are infallible and then misuse them in bad ways. One way that I've seen biometrics work best, if you use it at all; actually, to your point about Face ID, my sister's face can unlock her daughter's phone. So yeah, we tested that out. But one way that I've seen it work best is when someone has a clear picture of someone, and then they try to match it with a current picture of that person. To me, that's a pretty good use of it, where I feel like in some of these other uses, they're trying to create databases of people and trying to match them when they're not the same, and especially if they have darker skin, it's just terrible. What are your thoughts?


Debesh Choudhury  15:52

Illumination, color, yes, many things. Another point is that the graphical technique we mentioned actually depends on a particular image. If one pixel is changed, it means the password is changed. Here we are not using image recognition; the image is actually converted into a very complex, long text. Okay, is a man-in-the-middle attack possible? I actually talked to many network security professionals, and they said that a man-in-the-middle attack is not impossible. In our technique, we are utilizing, say, combinations of different images, and if somebody guesses the image, or the particular picture is changed, like somebody creates the same image or gets it from a different source, or even if you save the image with imaging software, "save as" with a different name, then the software processes it and the pixel values change. Our eyes cannot make out the difference, but in this technique it matters. That means when a graphical image is touched, the security is gone. So it is very, very sensitive, highly accurate, and deterministic.


Debbie Reynolds  17:14

Right. It changes the hash value.


Debesh Choudhury  17:16

Yes, it changes the hash, of course, yes. It is a simple technique, very simple; all the security depends on the hash. They can try to improve it with complex hashes, different qualities of hashes, and improve the strength, and meanwhile RSA and whatever techniques are being utilized for banking and other things are still in place. Of course, there are questions about, say, when the quantum computer comes, what will happen; the computer is fast, so they can try to regenerate the text. But we discussed that too, and image regeneration: I have also tried to make a conversion from text back to image. In this research it turned out not to be possible, because hashing is not a reversible process, right? So image-to-text works; text-to-image is still under research.
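The one-pixel sensitivity described here follows from how cryptographic hashes behave. A minimal Python sketch, using SHA-256 over raw pixel bytes (an illustration of the hash principle only, not the actual graphical-authentication system discussed), shows that a change our eyes could never see still produces a completely different hash:

```python
import hashlib

# Simulate a tiny grayscale "image" as raw pixel bytes.
image = bytes([10, 20, 30, 40, 50, 60, 70, 80, 90])

# Change a single pixel value by one -- invisible in a real image.
tampered = bytearray(image)
tampered[4] += 1

h1 = hashlib.sha256(image).hexdigest()
h2 = hashlib.sha256(bytes(tampered)).hexdigest()

print(h1)
print(h2)
print("hashes match:", h1 == h2)  # the two digests differ completely
```

Because hashing is one-way, the stored digest also cannot be run backward to recover the image, which is the irreversibility point made above.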


Debbie Reynolds  18:11

That's fascinating.


Debesh Choudhury  18:12

One paper we presented in 2022 at a conference, but it is not what we wanted.


Debbie Reynolds  18:19

Yeah, I saw a story recently; this is about voice recognition biometrics, where some bank somewhere, I don't know why they did this, was using voice as their only factor.


Debesh Choudhury  18:31

Yes.


Debbie Reynolds  18:32

For authentication. Obviously, it was spoofed. It's very easy to spoof voice, actually, and then they were upset that someone broke into people's accounts and stuff. First of all, as you say, it's not deterministic. That's the first thing. The second thing is, it definitely shouldn't be your only factor. It cannot be. If you're going to use it, you need to use multiple factors, right?


Debesh Choudhury  18:54

Yes, yes, yes. People should actually dedicate time to developing techniques which are quantum proof, like quantum-computing proof, and to improving the text-based passwords that exist. They should try some algorithm and change something so that it cannot be broken by a quantum computer. A meaningful, general-purpose quantum computer is far away, and maybe we will not see one in our lifetime, but someday it could come.


Debbie Reynolds  19:26

Yeah. Let's talk about deep fakes. Deep fakes are the next level of deception that we're dealing with in AI and it's coming rapidly.


Debesh Choudhury  19:37

Yes.


Debbie Reynolds  19:38

I feel like we weren't paying attention to deep fakes while the technology was developing. Now, it's a huge problem. There was a story in the news about someone being on a video call with people they thought were their co-workers, and they were tricked into transferring a lot of money. I think it came out to up to $20 million US; this actually happened in Hong Kong, I believe. So, a lot of the rage on the internet is about the person being fooled by deep fake technologies; what are your thoughts?


Debesh Choudhury  20:14

Yes, deep fakes are a concern. Actually, the research should be how to detect deep fakes, whether something is a deep fake or real. That type of research should be done. And deep fakes, of course, can be fun, say for creating animation, animated movies; even in the movie industry, they will have many complex things solved by deep fakes. Some celebrity actor is not available, so after some work they may try to put him in; I don't know whether it has already been used. Say the star cannot come on a certain date, or for a few seconds they want to make some change, so they try deep fakes, and a star cannot refuse, even if he or she knows. It is a financial problem too, because indirectly, the additional time the star would have given the producer could have given them some earnings, and that is instead solved by deep fakes. But otherwise, deep fakes also cause problems for people. In a political scenario, they can be used right before a vote: before people can learn it is a deep fake, just prior to the date of voting, something very abnormal comes from the mouth of a candidate, people watch it, the vote is affected, and nothing can be done afterwards. It is a very complex scenario. So the impact of artificial intelligence is more severe than we can guess, day by day. They are saying that ChatGPT can entertain people, it can enable people to create billion-dollar businesses and many other things, but they don't consider when their own company will be under threat from AI, and how they will compete when every company competes using AI in the market. Another thing, say, AI-generated content: I have just interacted with content by Robert Cialdini, the father of influence. If AI-generated content could deliver marketing that easily, this business could not have existed.

I have seen that a particular change of wording in digital marketing changes engagement, and that particular word comes from human thinking; artificial intelligence cannot create that type of pitch for marketing. It is very simple for us, but for a computer, I think it is very complex to generate, to reproduce.


Debbie Reynolds  22:52

The thing that concerns me, obviously, is that we're concerned about deep fakes, but I know that governments want companies to try to find ways to tag or flag something that's a deep fake. The issue I find there, and I want your thoughts, is: let's say you have a system that created a deep fake; maybe they could put something in the metadata, or maybe they could put some type of invisible watermark on a video. Then, depending on where it's shown, I guess it would need some type of system to read it, right? Let's say someone created a deep fake video and it was not in a system; it was exported, just like you said with "save as", a file that was just kind of free-floating; you would need a system to read that. So, how would a normal person read that? What are your thoughts?


Debesh Choudhury  23:52

No, the situation is complex for a normal person. As a normal person, I also have fun when artificial intelligence generates so many interesting things, like a blinking Mona Lisa, but in reality, when it comes to human beings, say something which affects our own family, then it is beyond imagination what people will think, when somebody's daughter, father, or mother is projected using deep fakes and the entire family gets affected. That is a concern. People are earning from and pushing AI even where it is not required; all the startups are trying to adopt AI because there is pressure: if they cannot say on their website that their algorithms are AI-supported, then they think the market will probably be lost. But in reality, my personal thinking is that even a big company like Microsoft will lose almost everything because of AI, because they are putting everything on AI; if AI is lost, they are nowhere. See, when Microsoft said Linux is a cancer, for how many years did they say Linux is a cancer? Bill Gates and his longest-serving CEO, I forgot his name, said that Linux is a cancer, and now their main revenue-earning business, Azure, a cloud, is running on Linux; more than 65% of the users are using Linux instead of Windows. Now they are trying to buy into Linux with money: they are becoming a platinum sponsor of the Linux Foundation, trying to vote, maybe trying to take control; they purchased GitHub. In the state, government orders are being pushed everywhere, and the citizens are not accepting them, but against the government you cannot do anything. They are saying you have to take this, you have to do this, and when people understand that these are false, fake things, then they will not do it.

After, say, one to three years, they will not do it. Take the pandemic: now people are immune to that type of disease because the fear has gone. It was a bad thing economically and in many other ways, but the good thing is that people realized, because of their bad experiences, what is true and what is not. That will come out in the long run; maybe we will not be there, but nature will clean the system. Somebody asked me, why are you saying that biometrics is bad? When the computer came, people said they would not use the computer, but now everybody is using computers. But that cannot be compared: the computer is a general-purpose utility, and biometrics is a specialized trait they are portraying as a better thing for a particular application, financial applications. That they have to consider.


Debbie Reynolds  27:13

Now, what is happening in the world right now in privacy that's concerning you most?


Debesh Choudhury  27:18

Yes, in privacy there are so many conferences, and recently in India, various specialty institutes and the Statistical Institute got to organize a conference. There is a conference called the International Conference on Pattern Recognition, ICPR; it is a very prestigious conference, and institutes bid to be the organizer, but it is difficult; after two or three rounds of bidding, some get it. So here they got it this time. I am amazed that out of the four tracks of the International Conference on Pattern Recognition, one track is biometrics, because biometrics investment is huge; they are getting sponsors, and that is why they have opened a track on biometrics and its applications. What is there to improve in biometrics that it has to be included in a conference like ICPR? That is a big thing; I mean, I don't understand. Maybe I will submit some paper on some other application and my paper will be rejected; I will not be unhappy about that, but I am unhappy that they take a subject which has several issues and project it to academicians, to young researchers: you work on this. That is very, very disheartening.


Debbie Reynolds  28:48

I feel like part of this is advocacy for people who are impacted negatively by these technologies; what are your thoughts?


Debesh Choudhury  29:04

Yes, people are affected by technologies, and people have lost jobs to AI. That is true, many people have lost jobs to AI, but the technology is coming, that cannot be stopped, and the research should be carried out. Say I will use artificial intelligence for some job which it does very nicely; I gave an example in one post: there is a photograph of a mass concert, a big concert, a photograph of the crowd from far away. Artificial intelligence can very easily estimate the number of heads in it. That type of application is good, where we cannot do it with other techniques and an approximate answer is a good one. Say, for biometrics, the police can utilize biometrics as additional information to get an idea of whether somebody may or may not be the criminal, but they cannot deterministically say this fingerprint matches, this is the person; though in the courts it is still accepted, it should not be that way. Artificial intelligence research should be done, should be continued, and should be utilized; applications should be found which give some real improvement. When I was in Japan, I asked my professor, shall I use some neural network here? At the time, he laughed and said that with neural networks you will get so much data, and most of the time it will be useless, plus the infrastructure, the computation, and other things needed to do it; after that, you will find that this learning technique will not give you anything. In that way, artificial intelligence requires a lot of resources, like computing power and data storage, big data, as they are saying, and people do not bother because data storage and computational power are increasing. But consider the resources, like the electrical energy being consumed, say, for running a ChatGPT server; huge amounts of resources are required.

Those resources they got because their backers, companies like Microsoft, bet that this will give them earnings. So they are putting in the money, and when it does not work, it will create a big ripple and chaos. Say, in academic institutions, teachers and professors say that assignments are being copied from ChatGPT. There is another application developed in Canada, GPT Zero, and that also will be sold. My son tested it: in a very big AI-generated text, some 1,000 words, if you change a few words in between, it reports that the text is not AI-generated, that it is 100% authentic, like human-generated. So those flaws are everywhere, and it generates money; it also gives opportunities to other people. Right, but they have to find that optimization, not just keep using resources; now they have money, but when the money is exhausted, what will they do?


Debbie Reynolds  32:28

Yeah, I think that's true. That's another thing: these technologies that try to tell whether a piece of text is AI generated are not very good. They can be easily fooled.


Debesh Choudhury  32:42

It is fun, of course, fun.


Debbie Reynolds  32:44

Yeah, it's fun. It shouldn't be used for high-risk or high-stakes things. To me, it's almost more of a novelty. I don't think people should be kicked out of schools because someone uses one of these tools and says, hey, we think your text is plagiarized, when it actually isn't. So the problem we're talking about with biometrics and deepfakes and new AI, to me, is really the abdication of human judgment to technology. That's not really the place of technology.


Debesh Choudhury  33:17

Yes, human society is now challenged to find out what is useful and what is not, what to adopt more and what to use less. That is a challenge. The answer may come when our children are taught to think logically. For example, I visited a high school, and the R&D advisor there agreed that they have to create an artificial intelligence and robotics lab, because the government is pushing schools to introduce these courses. But students should be given that knowledge in a way that lets them think logically; their capacity for logical thinking should not be suppressed. They should continue to think, everyone should.


Debbie Reynolds  34:12

Debesh, if it were the world according to you, and everyone did anything you said, what would be your wish for privacy anywhere in the world, whether that be regulation, human behavior or something in technology?


Debesh Choudhury  34:27

Regulation is there from the government. I don't like regulation; I support freedom. But there should be checks, by technology and by human staff, on what is being done. Because the number of citizens in the world is very large, human staff alone cannot do it. And if computers and digital techniques are used to regulate, there will be errors. As you mentioned, some students are penalized, but that could be an error, because sometimes the software detects authentic content as AI generated. So when regulation is done through automation, there will be problems. As a free citizen, I don't want regulation; people should be given freedom of their body and their thinking, as long as we are not harming other citizens. Privacy has actually become a problem because of the Internet: if you want your content to propagate more, then you have to give up some privacy. There is one person who wants complete privacy, I don't want to name him; he used to work at Microsoft, on Linux. Now he's not there. He is barely active on LinkedIn, and also on Twitter. He is private; he wants people to come to his own channels on a particular platform. So his propagation is restricted, and his engagement, supporters, and subscribers are also restricted. If somebody wants privacy, he or she is restricting his or her own propagation.


Debbie Reynolds  36:22

Well, thank you so much. It's been a pleasure to have you on the show. I love your writing. I'd love people to follow you on LinkedIn. The work that you do is tremendous, and I love the fact that you are saying things that maybe aren't mainstream but probably should be because I've seen technology impact so much of our lives. We need to know not only the good but also the dangers and the risks that we take. Thank you so much. I look forward to chatting with you.


Debesh Choudhury  36:55

Thank you so much for giving me this opportunity.


Debbie Reynolds  36:58

You're welcome. You're welcome. Have a good day.


Debesh Choudhury  37:01

Okay, thanks. Bye.