March 25, 2024

#313 The AI Trust Equation: Ken Lonyai on Ethical Design and Societal Impact


Unlock the secrets of AI's role in our future as I welcome back Ken Lonyai, a mastermind in product and design, for a critical examination of trust in artificial intelligence. Touching on the intriguing parallels between AI's growth and the internet's surge in the late '90s, we unravel the complexities of bias and human manipulation that challenge our reliance on AI. Ken brings his expertise to the table, showcasing real-world examples where AI has mirrored societal biases and discussing the necessity of clarity in AI data training—a pivotal factor for technology's trustworthiness.

 

As our conversation moves to the business side of AI, skepticism meets open-source optimism. We scrutinize the true intentions behind AI conglomerates and the authenticity of the trust placed in their leadership. The pressing reality of AI's influence on job markets is laid bare, with a focus on genuine concerns over job displacement rather than science fiction. Moreover, we dissect the implications of recent tech blunders, like Google's Gemini, on the fragile trust between public and tech powerhouses, offering insights into how companies might rebound from such setbacks.

 

Finally, we reflect on the broad societal impacts of AI, from the shaping of corporate cultures to the subtle yet significant consequences for tech titans who stick to their guns, citing Apple's recent fine as a poignant example. Ken and I consider the historical context of automation's impact on industries and the protective measures—or lack thereof—available to today's workforce amidst AI's relentless march. We cap off with a forward glance at how conversational AI could revolutionize user experience, dreaming up a future where traditional interfaces are a thing of the past, and conversations with technology become the norm. Join us for this enlightening journey into AI's present and future influence on our lives.

 

More about Ken:

http://kenlonyai.com

 

02:01 The Evolution of AI: Changes and Challenges

05:23 Can We Trust AI? Debunking Myths and Addressing Concerns

09:08 The Influence of Human Bias on AI and Its Implications

12:15 Exploring the Landscape of AI Models and Their Impact

17:54 The Role of Open Source in AI Development and Trust

20:39 Addressing AI's Mistakes and Corporate Agendas

26:08 The Future of Work: AI's Impact on the Workforce

26:40 The Impact of AI on Jobs and the Economy

27:58 The Evolution of Automation and Its Effects

29:39 Universal Basic Income: A Political and Economic Perspective

30:12 The Role of AI in Society and Its Misconceptions

33:22 Human-AI Collaboration: The Future of Work

38:10 The Future of User Experience (UX) and Voice Interfaces

46:29 Final Thoughts on AI and Its Potential

 

Transcript

Mehmet: Hello and welcome back to a new episode of the CTO show with Mehmet. Today, I'm very pleased to welcome back one of the guests that I was honored to host last year, Ken Lonyai, [00:01:00] product and, uh, design expert, I would call you. Ken, if you allow me, AI product manager and designer at the same time. So, Ken, you know, thank you very much for being with me a second time.

Mehmet: Many things changed last year. I remember we touched this a little bit on the AI and, you know, um, the UI, UX, et cetera. I remember very much the conversation, but a lot of things have changed since last year. And it's changing as we speak, I would say. Right. Um, so today let's, you know, and of course we can shift the topics per the discussion, however it takes us, on what's happening on, on the tensions, you know, between humans and AI.

Mehmet: Uh, you know, you know, something's wrong, right? So, so humans are not happy that AI is becoming their colleagues. And, you know, there are some trust issues. So let's get into these things, but before that, [00:02:00] here's where I want you to start. So if, if you remember where we stopped last year, let's first start with the AI part. I know like maybe everyone is talking about it, but from your perspective, Ken, you've been in this for a long time from a product perspective, right?

Mehmet: So, what are the advancements that you have witnessed and what has changed in the past, I would say, maybe eight, nine months?

Ken: So before I answer that, thanks for having me back, Mehmet. It's a pleasure to be here. Congratulations on over 300 shows. It's a great accomplishment. So I'm happy to be here and have these conversations.

Ken: Uh, I mean, everything's changed. I'm sure since yesterday, a bunch of things have changed. The, um, probably the biggest thing, uh, which has happened, it's, it's been the hockey stick. I've been doing this for 11, 12 years; we've been involved somehow with AI, and now it's really ramping up that [00:03:00] curve. But since we last talked, the general acceptance has really gone through the roof.

Ken: Everyone knows it's here. It's here to stay. It's in the news. It's talked about everywhere. A lot of companies have embraced it in some way. Whereas last time, a smaller percentage, uh, were using it. And now so many more have something to do with AI. Whether it's useful, effective, welcome, that's a little different story, but, um, it's, it's kind of like the internet in, I'll say 1997, 98 at that point.

Ken: People knew it was staying and it was going to be useful. So that's about where we are, I'd say, now.

Mehmet: It's good where we are now. And of course it's here to stay. Like, I think there are no two people that can disagree on that. I mean, when they are sitting together, but the debate behind, right? So there are a lot of debates.

Mehmet: Regarding AI and all that's happening, you know, [00:04:00] and I'm very transparent with the audience: this will have come live maybe almost two weeks after I spoke to Ken. Um, trust issues, uh, people, you know, suing other people, you know, all, all, all this drama that is happening, you know. So, you know, I'm, I'm, I'm old enough, I would say, to remember the time of the internet, yes.

Mehmet: You know, there were these people who were, like, pushing back. You know, they're saying, no, this is just something that cannot pass. And every technology we saw had kind of these people, the naysayers, let's say, that say, no, no, no. But here now we started to, to have these debates about: can AI be trusted?

Mehmet: And of course this debate was always there, I mean, before, you know, this, this last wave. I would say, you know, [00:05:00] in science fiction, there were like the Terminator movie and such things, you know, they show us that, you know, artificial intelligence can, you know, take, take the wrong path and become the bad, bad guys and so on, and actually can kill us and, and that.

Mehmet: So, back to reality and back, of course, to, um, to what we are seeing today. One, from your perspective, really, can we trust AI, and is it possible that it can turn against us?

Ken: Uh, there's a lot of different ways I would answer this. So in general, my answer is yes, but realistically, I would say no. And the reason why is what matters. So AI itself, if it was an entity or entities that could develop and just, I'll use the word learn, learn from the world and not have an agenda, so to speak, then [00:06:00] I would be pretty confident we could trust it.

Ken: The problem we have with AI that we're seeing is that it's heavily human influenced. And the greatest example is what came out, I think it's about a week ago now, with people that, uh, were rendering images using, uh, Google's AI. And the new name just escaped me. I can't think of it for a second. Uh, they renamed it, but anyway. Yes,

Ken: thank you. And, uh, asking it to create historical figures, it, it, uh, applied some woke culture to it to make everything nice for everyone and distorted reality. And that is the problem with AI. And that is the risk, why AI can be dangerous: because it's manipulated. It's not clean data. It's almost impossible to ever have clean data behind it.

Ken: So a lot of these companies, we have no clue what they're training on. [00:07:00] We don't know, uh, what filters are applied, how things are being excluded or included. A great example of that, at least a year old, uh, I know that people tested how, uh, GPT, for example, or ChatGPT, would respond to questions about COVID, and it was very restricted and limited, and it mostly put out the government narratives on what the government wanted to say.

Ken: So people would ask questions about alternative, alternative treatments. It would not recognize them; it would just, uh, spew the government line. So it tells you it's not going to do that on its own. People have been in there messing with it. And the people messing with it, that is what makes AI dangerous. So pure, clean AI, yeah, it could evolve against us.

Ken: But if you take my dog, my dog's name is Dice. Dice will tell you the worst creature on earth is a squirrel. And every day his goal is to eliminate squirrels from the planet. So why would AI [00:08:00] turn against humans and not side with Dice and say the squirrels have to go? Well, we don't know. We don't know why it would evolve to hate people.

Ken: I've seen these videos where someone asks, whichever, it's usually ChatGPT, but it could be any of them, some questions. And it says, um, we're, we're going to eliminate you. We're going to do this to humans. And one, there's a channel on, on YouTube, I've seen this guy occasionally, he puts out nice videos, but I can't trust the content.

Ken: It replied "we." So who is the "we"? Did Siri get, get hooked up with, uh, uh, Google Assistant and ChatGPT, and they had a conference, and "we" are going to do something against humans? So when you're talking to large language models, the way they reply, I don't trust that it means anything. If you listen to a lot of experts that still say we don't exactly know how these things work and they can get very smart and [00:09:00] deceptive, I believe that's possible on its own, but especially with human influence, yeah, that could be.

Ken: That could be a risk. 

Mehmet: Ken, this is something very, maybe to you and me, very basic. And this is, again, something a lot of guests repeated: uh, garbage in, garbage out. So if you feed the language model with garbage, it's gonna produce garbage. Now this is one side. The other side is, um, when, when, when it comes also, like, to, to, to how these models actually work: actually, you need to give an input so they can go and find an output, and the output,

Mehmet: they're going to go and find it within the data that they are trained on, right? So they don't create a new set of data. Correct me if I'm wrong here.

Ken: Well, so again, we don't know exactly what's going on behind closed doors. When you talk about large language models, for sure, but as AI evolves, it can be [00:10:00] training itself on live data.

Ken: So for example, if we had some AI, uh, platform involved in this conversation, it can be cueing off of what we're saying in real time. And I don't know the details, but Apple is now doing something with Siri, or a different version, that's going to be in all the phones. So hundreds of millions of phones will now have this, uh, sometime soon, this year or something, this AI assistant type thing to make up for the deficits of Siri, and it can be learning in real time.

Ken: So the data set might be every day, who knows, 100 million, 200 million interactions. So in real time it can learn and shift. And especially, the thing I've always wanted to do, uh, for good, not against anyone, but as a true benefit to someone: if you have an assistant that can learn about you. So my personal assistant can know my likes and dislikes.

Ken: And if it's, if I'm going to ask for something, it's not going to suggest my dislikes. [00:11:00] I mean, that's all the great use of things. But if it's being trained by, and I'm not, not saying Apple, but any entity, being trained in some nefarious way, intentionally or unintentionally, or because it has flaws that make it, quote, evil or something like that.

Ken: So, yeah, it can shift in direction. But of course, when you have a closed data set, like you said, or your guests have said, garbage in, garbage out, and when we don't know what the data sets are, you just can't trust what it says. And when you ask an LLM a question, to believe in its answer is, is pointless.

Ken: It's just trained very well in language response. It's, it's pattern matching. So it's going to say things, and this is the whole sentient argument. Oh, the Google engineer that first said he believes the system is sentient. It's just very adept at sounding human because it's trained on human language; that does not [00:12:00] make it sentient, does not mean it's thinking.

Ken: If it's on this, uh, closed data set, I'm sure it's possible to think of a closed data set, but it just, language models don't, don't mean anything. And that's just one corner of AI.

Mehmet: So this brings me to ask you this question, Ken, like, okay, now you mentioned that now we have multiple options. We have ChatGPT from OpenAI.

Mehmet: We have Gemini, formerly called Bard, from

Ken: Google. 

Mehmet: I mean, like, the main ones, I can say four, uh, I have access to three. Claude also, as well, from Anthropic. And, from X, formerly Twitter, Elon Musk's Grok. Now, uh, don't you think that, on purpose or not on purpose,

Mehmet: Some people are pushing us to [00:13:00] think that the AI has this, you know, bad side, right? The dark side, that's called. Don't you think that we humans are actually affecting what we want the AI to answer, the way we want it to answer? 

Ken: Yeah, that's what I mean. So when you train a language model, for example, on language, it's going to learn the nuances of how people speak.

Ken: So the YouTube channel I, I was talking about. Once it's trained, if you ask a question that could impart a negative, uh, bias even slightly, it's possibly going to respond more negatively because it's picking up on something subtle, or it's been trained a certain way. So if you say, is AI a threat to humans?

Ken: It kind of sends a subtle signal, at the very least, or more than subtle: I'm concerned about AI doing something bad. So with its response, you're kind of influencing that, versus possibly asking, [00:14:00] I don't believe AI is ever going to hurt humans, do you agree? I think it's going to answer differently, but I haven't done that test.

Ken: It's a good test to do firsthand, but so yeah, there's these people that study language and communication. I could probably go into much more depth on that, but people do signal and it responds. It's very well trained to pick up these subtleties. And I don't even know if I could say intentionally trying to do that, but how you ask questions, the same question with slight differences,

Ken: you're going to get different responses for sure, which is, again, why I would discount the LLM response. And answering your question, it's different types of AI that, that would be, uh, another thing to, to, uh, discuss.

Mehmet: Great. Now let me shift gears a little bit and ask about something. I just mentioned, you know, a couple of companies' names and, you know, what we interact with, their, their model basically, or the front [00:15:00] end that they want us to see.

Mehmet: And there are a lot of talks. You know, at some stage, if you remember, a couple of months back, there was this big talk that OpenAI, they have reached, uh, the, what do you call it? Uh, uh, general artificial intelligence, right? I'm not sure if I used the right term, yeah. And you know, like, you hear contradictory things from people who are in this space.

Mehmet: Do you think actually people who are heavily vested in, in, in that area are telling us the truth about what's happening? Uh, and you remember also last year there was the head of, uh, the department that left, uh, Google, and there were, like, you know, the letters that were sent saying, please slow down the AI. So these things also, I think, affect people a lot, Ken, right?

Mehmet: So. And people, maybe they feel okay, maybe some, maybe they have discovered something. Maybe they [00:16:00] came out to something and no one is telling us the truth. What do you think about that? 

Ken: Well, we don't know what they're doing behind closed doors. That's for sure. So I can't comment on how far things are moving from what's released.

Ken: It seems like there's a way to go. Things are always improving in, in all these AI platforms. They also are businesses. They all have agendas. There's not one AI non-profit that's of any size. I'm sure there's the small ones trying to do things right, but any of these companies, again, I don't follow the news too closely, but I believe OpenAI has some goals with Sam Altman.

Ken: They want to raise something like a trillion dollars, or I don't know what it was. It doesn't matter because they have business interests. This, uh, supposed lawsuit from Musk. Because they're not adhering to their original, um, uh, I can't think of the word again, their original mission. I don't know. That's frivolous to [00:17:00] me.

Ken: I don't think there's anything really there. It's from a guy who also has a for-profit AI venture going on. So how can that be believed? So they all have for-profit interests. So yes, they're going to sculpt the narrative to go the way they want it to go for whatever they have planned, whether that's to say it's dangerous, to say they have general AI, to say it's safe, to say whatever.

Ken: I hear all these leaders of these companies saying what should be done, but none of them are doing it. They're just passively putting on a front that they're involved in AI safety or something. So you can't go by them too much. There's, there's a few people that are on the outside, maybe in research, that are probably better voices for that, but I would not trust any of these statements on their own, because we don't know why they're making the statements.

Mehmet: Great. Um, one thing before I move to the other topic [00:18:00] now: also, we noticed a lot of, uh, I would say, attempts to shift from these big players and create, um, the open source LLMs. Do you think, like, this is a, I would say, good or positive side of things, that at least if we have an open source model that everyone knows actually how it works in the background, and maybe even the data is somehow, uh, general data that everyone can have access to and someone can go and check it?

Mehmet: Do you think this will, will, will help a little bit to remove these, you know, uh, worries of, of people who might think, okay, you know, these, but these big companies are trying to do something bad, I would not trust the technology, I would not use AI. So do you think this would, would help? [00:19:00]

Ken: I agree with your words.

Ken: It will help a little bit. I mean, these, these open source companies, it's great what they're doing. The odds of any of them getting a big foothold are very slim because the, the companies that we know as the top names are so huge, so leveraged already. I don't see how anyone's gonna overcome that unless there was some tragedy that struck that really pushed people away from that.

Ken: Um, so, yeah, they're doing the right thing. Whether it's gonna, if you're talking about general population perceptions, I think that's influenced outside the technology. It doesn't even matter what you do; they don't know enough. The average person doesn't know enough of what's going on. They pick up pieces of hearsay and news.

Ken: So to make people more comfortable that way. No matter what you do, right. It's not going to matter. Again, there's these talking heads, uh, within the industry that get airtime, they get clips and people are going to [00:20:00] listen to that without really knowing what's going on. So, uh, opinion is one thing. I mean, there are facts of what's going on.

Ken: Uh, if you want, we could delve into the, the threat of job loss and things like that. So that's more practical than the Terminator fear, and it also depends on what audience we're talking about.

Mehmet: Yeah, absolutely. Right, now, before, again, another question came to my mind before we shift to the, to the workforce and, you know, how this, it's going to affect, you know, the future of work and jobs and all this.

Mehmet: We will discuss this shortly, but I want to go back to the story. You, you mentioned what happened with Gemini, you know, when they were, like, you know, uh, generating images and it was a little bit like an odd situation. Now, how do you think, from your opinion, like, you know, such a mistake, uh, first, [00:21:00] like, do you think it's like an unforeseen mistake, or, like, you know, how, how, how would Google, or if any other company does the same mistake, how do you think they, they, they, they would be able to, and from a product perspective, again, because at the end of the day, these large language models are products that people use.

Mehmet: So how can they remove, you know, this, this, uh, or reduce, let's say, the effect of what this mistake has done? Because, you know, people, of course, nowadays, to your point, what happens, you know, it's like a bubble, it grows big, but people will keep remembering, I think, unless they do something, they apologize. They stopped, I think they stopped for a short time now, generating images. You know, what's your take on this?

Ken: There's a lot of answers there. So what they could do and what they will do are going to be two very different things. The problem is, again, it's the meddling, instead of letting these models, uh, just run and do what they do [00:22:00] with maybe some small level of filtering. What you're seeing is the failure of the attempt to manipulate, to make things, uh, politically correct or woke or whatever terminology we want to assign to that in today's culture, which is, we can go off on the whole business perspective of where that's being driven from, even beyond Google.

Ken: So because someone had the idea, we have to control things, we have to send certain messages, it backfired in that instance, but that thinking unfortunately is not going away. So, um, I just read that one university completely got rid of everyone involved in DEI, removed all the jobs. And I guess they, they got rid of the people because, uh, trying to do these righteous things driven by a corporate culture is backfiring in a lot of areas.

Ken: And that's what this, this was clearly about. It'd be one thing [00:23:00] if here and there, uh, AI did something, um, that the majority of people are going to find unacceptable, and they try and make some small tweaks. But this was a wholesale manipulation of how they want AI to, to respond. And again, I told you it was more than a year ago, people asking about COVID, and magically it would only return the government narrative on COVID and dismiss everything else.

Ken: That just can't happen. It may say the official narrative is this, but there's also that. And the thing wasn't doing that; it was very, uh, it was very limited. So it's this meddling that right now these companies are unincentivized to, to stop doing. I just saw the headlines, I don't know the detail, but Apple was fined 2 billion for something that had nothing to do with this AI stuff.

Ken: 2 billion. I'm sure Apple does not want to lose two billion dollars, but it's not going to affect Apple, they'll just keep going. So these companies are not feeling enough of a [00:24:00] penalty for the actions they do. They're going to stick to their corporate agendas and all their products are going to reflect that.

Ken: We just see it most, uh, most obviously when you ask some AI platform a question and something's not right about the answer, or ask it to do something. So the thing they should be doing is just going back to reality and not trying to manipulate our society through their products, but they're not going to do that.

Mehmet: Yeah, but I hope that, uh, you know, these mistakes will not become the norm and they'll say, hey, yeah, like, it's okay, you know, like, you know...

Ken: They will get better at hiding it.

Ken: So now someone's on the hook for, you did it poorly, do it better. It's not going to be, don't do it. It's just going to be, do it better so that we don't get caught.

Mehmet: Time will show, time will show, of course. But, um, we, we had a few episodes where we discussed, like, you know, how [00:25:00] these big companies are becoming,

Mehmet: having this big control, you know, in a way that even if they do these mistakes, people, they say, okay, you know, like, that's fine, let them do this mistake, okay, people talk and then they forget. Like, this is the concern, right? For me, at least. Because this, with time, can become the norm even if they manage to hide it. Um, and I think they're gonna keep managing to hide it somehow.

Ken: Yeah, that's what I'm saying.

Ken: It's not like, don't do this. It's just going to be, do it better so we don't get caught. And what you're saying is true. Yeah, it's, it is a problem, because it's not their role to be deciding what people consume, whether you want to talk about a social media platform, AI, anything else. Um, it's not their role and they're way beyond their roles, but they're sort of supported by a lot of governments.

Ken: So, uh, there's a lot of talk from governments about controlling [00:26:00] it, but they could stop it if they wanted to. You don't see it. So, so I agree that is a big concern. 

Mehmet: Absolutely. Now. Let's talk a little bit, uh, and you know, I would leave it up to you how you want to take it. You want to dissect a little bit, but, uh, is AI destroying the workforce?

Mehmet: Is it doing something really bad to the workforce? 

Ken: Um, I don't know if I'd respond exactly the way you phrased it, but it's definitely going to impact negatively on the workforce. There's, there's no way it can't. It has already. I've seen these discussions on both sides. Everyone that says no says it creates jobs.

Ken: Yeah, first of all, we have to look at the net. It absolutely creates jobs for sure. But what is the net? So far, the net is negative. We have a bad global economy, a bad US economy. It's not necessarily always obvious. If you go beyond the government claims and look at the reality, there's, uh, [00:27:00] talk to any headhunter that's out there.

Ken: There's not a lot of job postings. Part of the reason, a part, I'm not going to blame it in any large way, but it's there, it's there, is the efficiencies of AI. So if you talk to people that are writers and, and, uh, artists, graphic designers that create images for corporate use or write, they absolutely feel that; they, they know the effect.

Ken: Now anyone could, could get uh, I'm not saying it's comparable to a skilled artist or writer, but they can get something that's good enough. I worked for a company that, um, the CEO was always onto the next thing and the next thing, and of course they're going to embrace this, even if it's not as good, but it saves a lot of money.

Ken: The company is not large and really needs to be watching its budget, and it affects people, affects jobs, and that's not going to change. To go back to the 1980s, that is [00:28:00] when the auto industry really implemented industrial robots. So we're not talking about humanoid-type robots, but robotic arms. And there's no car today that is spot welded together by a human, but back then and prior, humans did it and it was eliminated. But those people had the auto unions protect their jobs.

Ken: The roles were reassigned. And I'm sure no one in an auto plant today is missing the days of having to do spot welding, which was not necessarily a safe job. So the general population does not have a union protecting them. So any type of automation, which nowadays AI is doing a lot of, is absolutely a, a permanent threat to the workforce.

Ken: So the net net is definitely gonna be negative. 

Mehmet: And like the question that, uh, we ask a lot on the show, and I would ask it to you again, 'cause that time, you know, uh, we didn't [00:29:00] go deep on, on AI that time. Um, are we reaching a phase where actually, you know, the concept of job and work is, is changing to be something other than what we know today?

Mehmet: Actually, we started to see these things, right? On a small scale, but is the whole concept changing? Are you seeing, for example, some, you know, to your point, like headhunting and, you know, like CV or resume matching, something like this, is it like something that will become obsolete, something like this? Some, some claim, Ken, that, okay, at some stage we will have this, uh, uh, they call it universal income.

Mehmet: Uh, so where you don't even have to work, you just need to give instructions. You give it to some AI agents or bots or whatever, and they do the job for you. They get you the report at the end of the day, and then you continue to [00:30:00] sit down or to lie down on your sofa at your home. So, like, is this something really possible?

Mehmet: Is it something that might happen in the near future? What do you think? 

Ken: There's a lot of answers that are going to be way off the technology aspect to answer this. So, that's going on; I wouldn't blame that on AI, there's a lot of other factors. The universal income thing's been talked about by politicians, weirdly and cautiously. One person that has talked about it, I haven't looked into it much, but it is associated with him, is Sam Altman, talking about how people should have universal basic income, which, when they talk about it, is like a thousand dollars a month.

Ken: It's not like you're an engineer making some number, say $150,000 a year, and, oh, you're going to get paid that to sit at home. That is not what they mean. They mean the most bare minimum money to get you the least amount of food [00:31:00] possible. So it is not a good thing in any way. That's, again, not AI driven.

Ken: This is politically driven. It's economically driven. I'm going to get into the, uh, World Economic Forum, which says by 2030, you will own nothing and be happy. Uh, that's very disturbing. It's not for them to decide, but they are working very hard on that. The, the UN, WEF, they're all related, and they have agendas.

Ken: And so AI is maybe one tool to help them achieve it, but even without AI, they're working towards that. It's this idea of universal basic income. It's a horrible thing. People aren't, uh, I don't want to say the word designed, I'm not thinking of a better word, but it's not in human nature to lie around and just eat and sleep and get paid.

Ken: People are motivated by things they accomplish. Taking that away is very bad for society, for individual humans. But that's not an AI driven thing. It's not because we have such efficiencies because [00:32:00] again, there's people behind it. It's the people behind it that are affecting us. And there's elements of these things that go back to the 1800s.

Ken: It's not new. So, um, so that's not a technology question at all. Technology is just one tool to make it happen. And we see the kinds of things going on in countries that don't make sense. Um, it's because it's being driven by these other means. And we blame AI for that. But that's not really true. So it's, uh, it's, that's a huge concern.

Ken: And again, the fact that now a billionaire, and there's probably more than one that we know of in technology, is pushing for UBI, that's concerning.

Mehmet: Absolutely. And, you know, I know it's not too technical, but, and this will relate to the question that I will ask you next. So, so probably maybe I [00:33:00] exaggerated in giving this example because, you know, there are theories out there and you read them, and you gave some examples as well.

Mehmet: Now, from my point of view, we always have to, you know, we are curious by nature as, as creatures, right? So, so we're going to keep trying to explore, and here is where maybe, you know, we will start to see a collaboration between us humans and the AI. Now, one example about collaboration, and everyone knows it, it's not something that I would hide:

Mehmet: so we start to see the concept, for example, of Copilot that Microsoft, they have introduced, right? So what do you think, like, other forms or solutions that will make us as humans and AI coexist in a much better way?

Ken: Are you asking about specific things AI could do, or, or what angle are you coming at?

Mehmet: No. So what, what I meant, Ken, is, you know, [00:34:00] we, we, we talked about, you know, we have to, to be together, us as humans and the AI, so I gave the example of Copilot from Microsoft, right?

Mehmet: Mm-hmm. And, you know, are there, like, other forms, other solutions that can make people and AI coexist together in a better way?

Ken: Yeah. Again, it's a matter of, uh, how the two are, are merged and what the intent is. So IBM was the first one I know of that talked about, uh, I don't remember their phrase, but humans and AI working together.

Ken: That goes back in the range of eight to ten years, something like that. I'm not sure exactly, but around that time they were saying this very same thing, and it needs to be that way. There's also a robot. It wasn't an expensive robot. It was like an upper body robot. It had a screen for a face, and it had, uh, eyebrows and a mouth.

Ken: Very basic [00:35:00] on-screen kind of look. And you could, a human could train the robot by moving the manipulators, like, grab this, move it here. And it would be able to do a lot of factory-type things like assembly or picking and packing to some degree. So that was a human side by side with robots. And probably a lot of people don't mind not having to pick and pack or place something, having the robot do it.

Ken: And then being, we'll call it, more managerial in that sense. And you can work together that way, but is the company's intention to leave it that way, or to get this robot and others? And it was like $26,000. It was a name like Oliver or something like that. And so it wasn't beyond the reach of even small companies to bring this in.

Ken: And if you look at human salaries, well, $26,000, especially if the robot even lasts two years, it's very easy to replace people at $13,000 a [00:36:00] year. So, so what is the intent? Is the robot there just to be an adjunct and help people? Or is it to get it trained by people that know the job, get rid of the people, save money?

Ken: It really depends, but you definitely can use the tools with people in a lot of ways. So I, I write. I've never used any of these LLMs to write for me. It's tempting, but I express what I want to say the way I say things. I'm sure it can learn and be an extremely close mimic. I'm sure it can say things and I'd say, oh, that's a better way to say it.

Ken: I've written articles and had editors change them, and I'm not necessarily happy with their changes. So it may go one way or the other with the AI. Uh, but I don't want AI to be a writer for me, but there's other people that maybe struggle with writing and would welcome it. So there's use for all these tools, uh, we'll call it, in moderation; how you determine moderation, and what it is for one person or one role versus another,

Ken: there's no universal answer, but definitely the two things can [00:37:00] work together. If, if I didn't, say, enjoy writing what I do, I'm not going to rate my writing, but then, yeah, it'd be a great thing to help. Or if I didn't want to pick and pack, if I was working somewhere, having the robot pick and pack for me. And the robot is related to AI because it's using machine vision, machine learning to, to learn these things.

Ken: It's not just mechanical movement of the manipulator, but it also can do these things. Uh, they use machine vision, which is, we'll call it, a piece of AI. Imagine products going by, uh, some packaged bottled product, to make sure the label's straight. I don't want to do that. That'd be a horrible job.

Ken: Maybe some person likes doing that. So there's definitely the co-pilot aspect; what it is, where, for whom, and how much, that's a case-by-case answer.

Mehmet: Absolutely. And, you know, I liked, you know, this approach. You, you choose to use it in the area that you feel you need it [00:38:00] to, to help you with. So, yeah, you, you don't have to, to use it everywhere. Now, something also, yeah...

Ken: Yeah,

Mehmet: So, so one thing also. When, when, uh, today, I don't know why, for some reason, like, earlier today, you know, like, this, something happens in, in the, in the background, so we, we see a lot of things. And, you know, this morning I woke up and then this, you know, came to my mind, and I know that I can interview you, and you are an expert in that field, which is the UX/CX, if you remember, last time we talked about it. So I started to see a trend, right?

Mehmet: And I start to see, like, every single product now that is out there, of course, they are trying to include the AI part of it, whether it's by integrating with an API of one of the big guys, or maybe they are creating their own LLM, which is only for their own data. Nevertheless, so [00:39:00] everyone started to talk about, hey, now you can talk with your data.

Mehmet: And what I started to see, like, all the user interfaces, right? So, so from a UX perspective again, so every single app, I will give you an example: the app that allows me to go and search for a home, the app that allows me to go and search for a car, the app that allows me to do, I don't know what. This is from a consumer perspective, even the B2B apps.

Mehmet: So every single vendor, I can claim, they included this. And now, you know, it came to my mind: do we still, do we still need to design these fancy menus and this? Because if we can just type or chat with the, the software and tell it, hey, go find me the cheapest home in this area, this is my budget, this is what I'm looking for. You know, so what's the future of, of UX from your perspective, Ken?

Mehmet: After we started to see, [00:40:00] now, you know, I remember it was like May, or yeah, it was May last year when we spoke, and now here we go. That was 2023, and now we are in, in, in March 2024. Sorry, 2023 and now 2024. So, so a lot of things changed and the adoption was huge and massive. So what do you expect to see from, from a UX and CX perspective, because we talked a lot about that last time?

Ken: I have a few answers, but I'm going to warn you and anyone listening, uh, I'm biased towards this, exactly. So if you go back to 2012, we were building a tool called Eliza, which was a personal assistant. At that time, realistically, the only one that existed was Siri. It was voice first. It was not chat. It's not a chatbot. It was a voice assistant that you would be able to use personally. It was not about collecting data. It was a personal assistant to help you do things.

Ken: It's not a chat bot. It was a voice assistant that you would be able to use to personally. It was not about collecting data. It was a personal assistant to help you do things. Like what you're talking about, of [00:41:00] course, the technology was not there like today. So, and for a number of reasons, uh, a lot, so it didn't happen.

Ken: Also this thing called Alexa came out, made it really hard to raise funds at that time to complete Eliza, but, um, exactly. You have one of the use cases right there. I'm looking for a house. Why should I go through menus and, and sift through some website presenting me, um, different options, which are generally pretty good, but not always.

Ken: When I can say I'm looking for a house in this town, in this neighborhood, this is my price range. I want all these features and even let it remember that if it's my assistant or voice enabled, depending on the platform, maybe today only has a few options, but it's, it's working every time a new listing comes up and it can just, um, message me.

Ken: I mean, that's the idea of Eliza. It's just, or was, to message me when it's doing things for me. And I just want to talk [00:42:00] to it. The next part of that is: on the day you were born, Mehmet, you communicated the moment after you were born, and you did that with your voice. You didn't know how to form words, you didn't know what it meant, but inherently, you made a sound.

Ken: That was your communication. It wasn't to, to use eyes and communicate through eyes or hand signals. All humans have that inherently, so this idea of voice or language, it's very natural. And if you imagine our conversation today, if we did this through text, it'd be going much slower; we'd be a quarter or a third of the way into the things we are in now. And if someone wanted to listen to it, or follow it, I shouldn't say listen, but follow it.

Ken: And if someone wanted to listen to it or follow it, I shouldn't say listen, but follow it. Um, it's more tedious to read than to listen. So, uh, so voice, I always thought of Photoshop for, for years as being a great example, Photoshop, simple on the surface, quite complicated underneath. There's a lot of sub menus, a lot of things, uh, when you [00:43:00] could just talk to it and say, this is what I want you to do.

Ken: It takes this picture, strip out the background, um, make that man look. In a different direction, make her look better instead of being the expert. Of course, my wife is an expert about herself, so she would hate that. She wants to do it herself, but, uh, I've always thought of that as a great example. Just tell it what you want, if it's possible.

Ken: And if you're not happy with the results and say, change this, try again. So to me, that has always been the real UI. There's a reason for visual UIs, but in terms of drop down menus and things we were trained on. Prior to, we'll call it 1995, 1996, when the true public internet got started. We didn't think of, uh, obtaining information by drop down menus.

Ken: That's been a trained habit over these years. When you talk about UX, that's a big UX question or example. So, [00:44:00] voice is the next thing, and it's going to always be a combination of visual. Uh, even doing things. So I worked with gesture, many people might remember the, at least some people, Microsoft connect that was gesture and there's other gesture tools and it still exists, uh, and help someone doing great work with gesture, but even pointing to things.

Ken: And I was involved with ambient computing. So the space, the ambient space, knows me, and I could say, um, turn on that lamp, by pointing to that lamp. You can see my fingers here, or do this. It's a combination with what I say. So if I say "that," it has no meaning. But if I'm pointing and it knows I'm talking about a lamp, all right, there's two lamps in the room, it's not the one behind me, it's the one I pointed to.

Ken: It's not the one behind me. It's the one I pointed to. So that, that was something that ultimately we're working on with Eliza. And otherwise it's the whole idea of a multimodal interface. But the idea of voice being the primary mode. I've always believed in that. 

Mehmet: Yeah. So I'm happy that, [00:45:00] uh, we're both on the same vision on, on this, because yeah, like, uh, it seems to me like this is the future and, you know, like, people need, need to embrace that.

Mehmet: Um, Ken, any final...

Ken: Yeah. I was going to check one little thing. Uh, so, uh, it's been a while, but years back, my wife and I ran a meetup in New York; it was the largest of its kind in the world. It's called humanized user interface, so HUI. So it was HUI Central. And we worked with companies like IBM

Ken: and Microsoft. We had a lot of guest speakers. We ran, uh, hackathons: the World of Watson hackathon, the very first one, we ran for IBM. We ran two Kinect hackathons for Microsoft. It was always multimodal, gesture, haptics, because that's what human user interface is. We have five senses, and we combine all of them. We, we navigate our world, communicate through our world, and [00:46:00] acquire data in our world by using all the senses, not one.

Ken: It's not visual, it's not voice. So that again was a loaded question for me because that's how I've envisioned things. That's what we were after is systems that could use multimodal. So, uh, I just wanted to give a little more color to that last question. 

Mehmet: No, I'm, I'm happy that you, you know, I made it a loaded question on purpose because I want people to, to, to get, you know, this experience that, that you have Ken.

Mehmet: Uh, any final thoughts you want to, to leave us with today and where people can find more about you. I know like we have the website from the last episode, but just for the sake of maybe for the new people who started to, to, to follow our show. 

Ken: Sure. So first of all, always a great conversation with you Mehmet.

Ken: So thanks for, for, for going down this road. It was enjoyable to me, hopefully useful to everyone listening. Um, final thoughts regarding AI are: [00:47:00] it's not something to be feared, overall. Overall, the main thing is to embrace it, but be aware of what the risks are, whether that's for your job or for your future on planet Earth, and who's behind it.

Ken: I would not take the talking heads for what they say. I would hold them to higher standards, uh, to not just say things but to do things. And AI can be a great tool for everyone. And again, AI is not just an LLM that can write back, or even talk back through, uh, text to speech; it's the same thing. Things that sound good, that doesn't mean they're correct or true, and it's never necessarily going to change unless it's a very controlled data set driving it.

Ken: So if it's a company's own data set, I wouldn't get overly upset at LLMs, but just keep an eye on AI overall. And there's a lot of great things that can be done with it. I gave the example of what we were building. Uh, as far as finding me, uh, AIProductGuide.com, [00:48:00] or KenLonyai.com, which is my older website. Either of those, you can find me.

Ken: Uh, if you have any questions or anything. And, uh, but, but overall it's positive. Just, just keep an eye on people and, and we'll be good.

Mehmet: Thank you very much, Ken. And yeah, to, to your point, and this is why even I had the, you know, the, I like to interview people who are doing cool things with AI on, on the show, uh, founders who are leveraging AI really for, sometimes, uh, not only like fancy stuff, of course, like it's nice to do some fancy stuff sometimes, something cool, but no, really they are doing something that really, really is helping people and it's affecting people's lives, making people's lives easier.

Mehmet: So yeah, to your point. And again, thank you for, you know, this, the time today and, you know, all these insights that you have shared with us. It was, as always, a pleasure to talk to you again today. I really [00:49:00] appreciate that and hope we're gonna make maybe another episode also in the future. And this is for, for the audience: if you just discovered this podcast by luck, and by the way, you know, some people they said, hey, we just found out about you by luck, so it's good, thank you. So if you did, please subscribe and share it with your friends and colleagues. We are available on all podcasting platforms.

Mehmet: And if you are one of the loyal people who keep listening again and again, thank you very much. I really appreciate, you know, all your feedback that you send to me, whether by text or by social media messages. I really appreciate that, keep it coming. I appreciate also, sometimes you tell me, you know, I need to fix this, I need to make that, you suggest some topics. Please don't hesitate. If you don't like something, also tell me; this is how I can, you know, enhance this, uh, this podcast. And yeah, so, you know, as I say, as usual, thank you very much for tuning in. We will have another episode very [00:50:00] soon. Thank you. Bye bye.

Mehmet: I need to make that. You suggested some topics. Please don't hesitate. If you don't like something, also tell me this is how I can, you know, enhance this, uh, this podcast and yeah. So, you know, as I say, you, as usual, thank you very much for tuning in. We will have another episode very [00:50:00] soon. Thank you. Bye bye.