Sept. 13, 2023

#215 Unpacking the Impact of Big Tech and AI with Industry Veteran William Raduchel

Have you ever pondered why some software developers are worth millions, or why tech giants always seem to have the upper hand? Veteran technology expert, William Raduchel, joins us to unravel this enigma, offering an intriguing perspective on the technological shifts sculpting our world. Journey with us as we traverse the dynamic landscape of technology, from the resurgence of client-server network computing to the revolutionary impact of smartphones, and how these shifts are shaping our society and economy.

 

It’s time to question the dominance of Big Tech. Bill brings to light the influence of tech giants on our society, raising pertinent questions about the necessity for regulation to shield privacy, security, and competition. Uncover the intricacies of executive compensation schemes and their tax implications, and how these contribute to the talent acquisition strategies of tech behemoths. Together, we question the balance of power within the tech world and the changes needed to ensure a fair playing field.

 

Let's delve into the future - the future of work, that is. Bill shares his insightful thoughts around the implications of AI on our workforce and everyday life. What would a world look like where machines can outperform humans at most tasks? Will AI become a tool of augmentation or automation? Unearth with us the ethical considerations surrounding AI, the potential challenges of teaching people coding languages, and how all these factors are shaping our future. Join us on this enlightening journey through the world of technology, regulation, and the impact of AI on our lives.

 

About William:

William Raduchel is a high-level executive and strategic adviser for organizations such as AOL Time Warner and Xerox, and the author of The New Technology State: How Our Digital Dreams Became Societal Nightmares—And What We Can Do About It.

Transcript

 

0:00:01 - Mehmet
Hello and welcome back to a new episode of the CTO Show with Mehmet. Today I am very pleased to have with me Bill, joining me from Virginia in the US. Bill, thank you very much for being on the show. The way I like to do it, I keep it for my guests to introduce themselves because, you know, no one can introduce himself better than himself or herself, right? So thank you for being here.

0:00:23 - Bill
I know you're a veteran name in the tech world, but just, you know, for the audience to get more about you and what you do. Well, probably the longest stretch of my career was at Sun, where I had every staff job and ended up as Chief Strategy Officer, and left there to go to AOL, where I was the Chief Technology Officer and went through the merger with Time Warner, and I can tell many stories about that and have in another book.

But I started out I was going to be a chemist, then I was going to be an economist. Economics led me into statistics, and statistics led me into even more computing, and I have now evolved to being largely a technologist. And then I got involved with media, and I've spent much of my career on the frontier between media and technology. So I have three professions that I can bounce among: economist, statistician and computer scientist. I'm pretty good at all three, but I mean there are better people than me in every one of those fields; the strength has always been the ability to walk between them.

0:01:42 - Mehmet
Yeah, sure, for sure. Again, thank you, Bill, for being here today. I'm impressed with the long experience that you have, and I'm asking this out of curiosity, honestly: you have witnessed the most significant technological shifts, but which one would you say was among the main significant ones? It can be one or more. And how have these shifts shaped the current state of technology and business, in your opinion?

0:02:19 - Bill
Well, I like to focus on abstraction layers. And computing is still the same. I mean, I was on an airplane the other day and they had to reboot the in-plane entertainment system, which starts with the Linux boot file, and you watch Linux boot up and you realize that all this stuff is just piled layer on layer on layer into this huge stack. You sit there and select a movie and it plays, but at some point it had to actually tell a processor where to go find a boot file and then crank up all the way. Abstraction layers really matter.

The transition that was really dramatic, I think, was client server network computing, and basically it's still the model we use today on our smartphones. 

I mean, there are lots of clients and lots of servers, but the architectural model hasn't changed very much since then. But surely the biggest transition in our lives has been the smartphone. I mean, I owned the Nokia N95, which was the best smartphone that anybody had seen at its time. And the problem with that smartphone was that if you were writing for the device, every application had to be phone-call aware. Every application had to understand that a phone call might come in, or that there might be a phone call ongoing, and it was very hard to write an application.

And the genius of the iPhone is that it was just client server computing, and you didn't need to know anything at all about phone calls to write an app for the iPhone. If you know the name Avi Tevanian, he was the head of software at Apple at the time, and that was Avi's great insight: at the speed of the processor at that time, voice could just be another application. And I don't think anybody saw at the initial launch how big it was. I was chairman of Opera Software at the time.

0:04:31 - Mehmet
And. 

0:04:32 - Bill
I waited in line and got an iPhone and I brought it to Norway, and the engineers proceeded to tell me why it was destined to be a failure. They were wrong, because they never thought it through. But at the time the first one didn't have any apps on it; the iPhone had no apps. And Steve Jobs had a big fight with the engineers, because they wanted to put a camera on it and he didn't think a camera should be there. He kept saying people use cameras to take pictures, not their phones. He was brilliantly wrong, and they ended up putting on a cheap camera.

He eventually relented, but the device that has emerged would not be the same device if it didn't have GPS. I mean, the iPhone is client server computing with GPS and a camera, and it's that combination that has made it so dominant. And then Eric Schmidt was at the board meeting and realized that this was right, and that's where Android came from. So you have these two competing platforms in the world, but certainly that has been the vehicle. I mean, without the smartphone you never would have had it. And then, when Facebook was going public, Chamath went all around the world convincing carriers to make Facebook data-free.

And that in the rest of the world gave Facebook its predominance because it was free and the other ones weren't, and so you've got network activity going on everywhere. So that I mean, in terms of social scale, that has to be it, but in terms of architecture, it really was client server computing which basically happened in the early 90s. 
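The request/response pattern Bill keeps returning to, a thin client asking a server to do the real work, can be sketched in a few lines of Python. This is a minimal illustration, not anything from the episode: the host, port, and the toy "playing:" protocol are invented.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9099  # illustrative address, chosen for this sketch

def serve_once(ready):
    """Server: accept one connection, answer one request, then exit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()                            # signal that we are listening
    conn, _ = srv.accept()
    request = conn.recv(1024)              # the client's request
    conn.sendall(b"playing: " + request)   # all real work happens server-side
    conn.close()
    srv.close()

def request(payload):
    """Client: send a request, wait for the reply.

    The client knows nothing about how the server does its work,
    which is the essence of the architecture."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((HOST, PORT))
    cli.sendall(payload)
    reply = cli.recv(1024)
    cli.close()
    return reply

ready = threading.Event()
server = threading.Thread(target=serve_once, args=(ready,))
server.start()
ready.wait()                               # avoid connecting before listen()
print(request(b"movie").decode())          # playing: movie
server.join()
```

The in-flight entertainment screen, the smartphone app, and the browser are all variations of the `request` side; only the server changes.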

0:06:27 - Mehmet
Right, and I think thanks to the invention of the internet, that also made this more widely available for people like us as customers. Right, Bill?

0:06:39 - Bill
Yeah, I mean I built an IP network in 1990. We had a network all over the world for Sun that was an IP network before the internet, right, and we had email that was delivered in a minute. I mean, people today cannot imagine, but in 1990 you could actually send an email and it might show up in three days, because of UUCP: somebody else would store and forward it, and when they got a connection they would pass it on. I mean, really, you had email taking minutes, hours, even days, and today we expect it to take seconds, and in general it is seconds. So that certainly brought the smartphone into it, and you're right to observe that. But I mean you could have done the smartphone revolution if everybody had a private network and interoperated. It's just that the internet made that a lot simpler.

0:07:35 - Mehmet
Yeah, yeah, 100%. Now, Bill, if you allow me, your new book, which is called The New Technology State. So from the briefing I had, it's about discussing how technology has changed our world. So can you share the core thesis of the book, and what would be the main benefits for technologists and entrepreneurs to read it and go through it?

0:08:07 - Bill
Well, when I was in my economist days, I spent five years working with John Kenneth Galbraith at Harvard, and Galbraith at the time, and this is the early 70s, repeatedly made the assertion that the global elite would use technology to gain wealth and power. Turns out he was right. And Sam Lessin, from Facebook and now at The Information, just posted that one of his concerns, or observations anyway, about the AI revolution is that it's going to even further increase the amount of inequality, because the people who learn early how to use and exploit it will be able to gain both wealth and power. And so what technology did unintentionally, I mean up until probably the invention of the iPhone, was almost universally bring more good than bad to our societies and economies. But today we've ended up divided, we've ended up very unequal, and we've built, in many ways, a very fragile system. And what happened is that we made software innovation very easy. I mean, when I was a graduate student, innovation was hardware, and it didn't happen quickly and there was a limited number of people and it took huge capital to do, but today one person can write software that changes the world.

There was a story recently from Robert Scoble, who was a pretty famous reporter out of Silicon Valley, and he tweeted that he was visiting SpaceX and he asked to see the team that wrote the software that took the rocket into space and landed it back on the launch pad. He said he wanted to meet that team, and they said, OK, he's over there. No, I want to meet the team. Well, there's only one. Right? I mean, that's the power of this platform: one human being can write it. So one of the central theses of the book is around something called Halstead length, which goes back to a professor from Purdue in the 1970s, Maurice Halstead. He was curious about why some people were hundreds of times better than others at developing software, and he came up with some metrics around that, and what he discovered was that the very best people are 200 to 300 times better than the average. And he gave an explanation, which is basically the length of memory chunks that a human being has. We don't know where this comes from. I mean, the Aborigines in Australia who know where every waterhole is in the outback and the routes between them obviously have it, or a London cab driver who has memorized the Knowledge and knows all that. This is not just an intellectual curiosity; it is a very important trait.
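For reference, Halstead's "software science" metrics are computed from operator and operand counts. A minimal sketch, using the standard Halstead formulas; the sample counts below are worked out by hand for the one-line fragment `x = x + 1`:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead metrics from distinct operators (n1), distinct operands (n2),
    and their total occurrence counts (N1, N2)."""
    vocabulary = n1 + n2                       # distinct tokens
    length = N1 + N2                           # total tokens ("Halstead length")
    volume = length * math.log2(vocabulary)    # information content, in bits
    difficulty = (n1 / 2) * (N2 / n2)          # how hard the code is to write
    effort = difficulty * volume               # estimated mental effort
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# For `x = x + 1`: operators {=, +} (n1=2, N1=2); operands {x, 1} (n2=2, N2=3).
m = halstead(n1=2, n2=2, N1=2, N2=3)
print(m["volume"], m["difficulty"], m["effort"])  # 10.0 1.5 15.0
```

The metrics themselves are simple arithmetic; Bill's point is about the enormous person-to-person spread in the effort term that Halstead's measurements revealed.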

And what I would argue is that there are a limited number of these people in the world I mean tens of thousands, not millions and that the tech giants hired them all, and so that gives them a huge advantage over everybody else. And you have to consider Tesla as a tech giant. I mean, there was an interview recently with the CEO of Ford Motor and he was explaining his problem, which is that in the Ford Lightning there are 150 software suppliers and the truck is an integration across 150 software suppliers and they don't own the software. Tesla has one software stack and one chip. I have 150 software suppliers and many, many, many chipsets. 

Well, you can't win. You can't win at that. So the tech giants can afford to pay millions of dollars a year to some of these brilliant developers. I mean, I can't imagine a Ford or GM or Mercedes or BMW paying a software developer, you know, 3 million euros a year. That would be so anti-cultural, it'd be very hard to do. Yet the big tech companies understand that that's what these people are worth, and aggregating them into teams of like people makes them even more valuable. So you've got a monopoly here, by owning the talent, and everybody else is fighting over the non-A-plus programmers. But the A-plus programmers are 10, 20, 100 times more productive than everybody else, and so you've ended up in a world that is unequal and fragile, because most of the software is really bad. The tech giants have good software; everybody else has mediocre software.

And then, you know, if you look at the 2016 election for president in the United States, the way that Donald Trump won was brilliant. He spent 150 million on Facebook in the last three weeks of the campaign, and he spent it in five counties. There are about 3,500 counties in the United States; he spent in only five, and those were his margin of victory. And Facebook allowed him to buy ads by name, so I could buy an ad and it would only show to you. In fact, there's a story, which was repeated, who knows whether it's true or not, that the Labour Party in Britain had a lot of pressure from their then leader, Jeremy Corbyn, to run some ads, and the party management thought they were terrible. So they ran the ads on Facebook with by-name advertising, only to Jeremy Corbyn. He logged on to Facebook and he saw the ads, but nobody else ever saw them. Again, you couldn't have done that 10 years ago, and that's how Trump won. I mean, he ran ads in these five counties to African-Americans that had Hillary Clinton praising three-strikes laws, which say that if you're convicted of three crimes you go to prison for life, and that is today regarded as the most racist legislation ever passed in the United States. He ran ads on that in these five counties and he suppressed voting.

I mean, it's a very different world, and our legislators and regulators don't have a clue, I'm sorry. You listen to their questions in hearings and whatever, and very, very few understand. So, you know, what do we do about it? Because I believe in competition as a means of disciplining the market, and if you don't have competition, the market does what the market does, which is it goes to monopoly, and you end up with bad things. You also get a society that's fragile, because you're very dependent on a handful of people and a handful of software stacks. And if you have a handful of software stacks, then hacking becomes very easy and you've got a cybersecurity risk. On the other hand, I'm a conservative and I don't believe in massive government regulation, but I do believe that government has to set the rules and then get out of the way. And so I argue that, you know, the government should enact some taxes. I mean, there's no need for people to accumulate personally identifiable information unless it's really valuable.

So I would tax people who use it, so that they don't do it gratuitously, so that they do it only because they're going to make real money from it, and if they do, fine, then pay some tax back for the costs imposed on society by doing that. So I'm trying to get a conversation going, because we're headed toward regulation. I mean, the EU is already racing down that path, but they want regulation that will mean government bureaucrats are in the middle of every corporate system, and I don't think that's good for privacy, security, innovation or anything, at least in my experience. I had an encounter with bureaucrats over cryptocurrency, and I was a witness, that's all, but it was scary how little they understood about how technology worked, and yet they're preparing to enact regulation of very broad scope.

0:17:25 - Mehmet
Yeah, bill, actually you brought a lot of thought-provoking discussions, I would say, and I would try to tackle them one by one. So let's start with something you discussed at the end, maybe, which is about the big tech and the competition. Now, in a couple of episodes back, I was interviewing a CTO of one of the AI companies and there he lie on other companies, technologies, of course, and the question when I asked him, I said aren't you afraid that one day, this big, it's open AI, it's not something to hide which is backed by Microsoft? I said if these guys decide to cut the cable on you, the cut the current on you. You're in the dark. So how are you managing this? 

And he was shocked. He said, you know what, I never thought about it this way. So what do you think can be done? You said regulations, but do you think it is enough to just put policies in place? And how can we make sure that, in the long run, we have access to the technology available for everyone, and that we can actually have alignment with these big techs instead of being in competition with them?

0:18:51 - Bill
So that's it. That is a profound question to which there is no simple answer, and it will probably take us a decade of experimentation to figure out what to do. Certainly, bizarrely maybe, I go after the executive compensation schemes in the United States, and that is what drives a lot of this. A lot of it is because there's a huge tax break for the tech companies: the tech companies get to deduct the actual value received by an employee for a stock award, but that's not a cash expense, and they don't have to show that same expense on their profit and loss statement. So they can show large profitability and yet pay almost no taxes, because they get to deduct all the gains. So as long as you can create a flywheel in which the stock price goes up, and you do that through whatever means, hype even, that allows you to pay your employees a lot, and a company that is paying cash can't even compete. They have no ability to compete. So if you don't reform that system in some way that balances it more, you're always going to give a huge advantage to companies that have an ever-rising stock price that allows them to overpay and get a tax break. I mean, it's brilliant for them.

That's what drives these companies, and they make money that way. So if you just flipped it and said that the employees got capital-gains treatment on those awards and the companies had to expense whatever value they grant, you'd flip the taxation: the companies couldn't deduct that value, and the employees would get the tax benefit. You make it more feasible for other people to play, and you take away the enormous recruiting advantage that these companies have. I mean, if you're a brilliant programmer and Google offers you potentially $1 million to $2 million a year over five years to be a programmer, why are you going to go work someplace for $500,000?
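The asymmetry Bill describes, a book expense recorded at one value while the tax deduction is taken at a much larger realized value, can be put in rough numbers. Everything below is hypothetical: the share counts, per-share values, and tax rate are made up for illustration, and the real accounting and tax rules are considerably more involved.

```python
# Illustrative sketch of the stock-compensation tax asymmetry. All numbers
# are invented; they only show the shape of the incentive Bill describes.
shares = 100_000
grant_date_fair_value = 10   # per-share value expensed on the P&L at grant
spread_at_exercise = 100     # per-share gain the employee actually realizes
corporate_tax_rate = 0.21

book_expense = shares * grant_date_fair_value   # what reported profit absorbs
tax_deduction = shares * spread_at_exercise     # what the tax return deducts
cash_tax_saved = tax_deduction * corporate_tax_rate

print(book_expense)     # 1000000
print(tax_deduction)    # 10000000
print(cash_tax_saved)   # 2100000.0
```

On these assumed numbers, shareholders see a $1M compensation cost while the company saves $2.1M in cash taxes, more than the reported expense, as long as the stock keeps rising.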

I mean, the tax advantage basically gives these companies, the tech giants, a huge advantage in recruiting the very best talent, and when they get the very best talent, they get the best productivity. And I like to say that it allows them to pay above market but below value. So they pay way above market and get these people, but it's way below the value they get. A friend of mine was the first VP of engineering at Google, and the first thing he did was fire all the middle managers. He said, you know, trying to put a mediocre programmer manager in charge of 15 geniuses is just not a good idea. So he let the programmers self-direct, and that's what has driven Google for a long, long time: they get enormous productivity out of the programming teams and they get innovation.

Whether that's still going on or not is something we're going to see as we watch them compete in AI now. But you know, I like to say, Mehmet, that I think the future of every company is going to resemble a sports team: you're going to have players and you're going to have employees, and the players, unfortunately, are not easily identifiable. I can easily look at Messi and understand that he's a very good player, and nobody objects when he's paid enormous amounts of money, but I can't look at a software developer and realize that that human being is the equivalent of a Messi in software. And professional sports teams have figured out how to manage an enterprise with two classes of employees, but big companies, especially unionized big companies, have an enormous problem with that.

Google, meta, amazon do not. They know how to do it. They know how to recognize and it's accepted part of the culture and that everybody realizes who these superstars are and compensating them. But as long as the tech giants end up with all the talent, it's not a fair fight. People have always wondered what made the Vikings so successful when they were conquering the world, and less than a decade ago people figured out that one Viking both obviously made its way to Iran or Iraq and brought back the secret of making steel. And so the Vikings had a thousand steel swords and that's what they used to conquer the world, because if you have an iron sword and I have a steel sword, you're dead Because I can cut an iron sword in half on the first stroke and then you don't have any sword and I do. 

It's not a fair fight at that point. And it's a small number of people, maybe 10,000, 20,000, 30,000 in total, but it has allowed them to dominate, because nobody else has that talent. And if you have all the steel swords in the world, you're going to beat iron swords, no matter how good the swordsman, and that's the challenge.

And we're all very dependent. I mean, we sit on top of a software stack, and now that software stack is heavily open source, but the open source almost always goes back to one company as the provider and innovator in that stack. And there's no CTO in the world that understands every layer of the stack they're on. And then you look at the AI world: they're all on CUDA, which is the software from NVIDIA, and that's their lock-in. If you've got everything written in that, you can't easily move anyplace else, because you're dependent on CUDA. You've got a monopoly supplier.

So I know people are going to go buy NVIDIA today, but the value isn't the chip, the value is CUDA. I mean, the basic design of a GPU is DirectX, and that comes from Alex St. John, you know, over 20 years ago at Microsoft; they just put it in hardware. So there isn't a lot of advantage there. The advantage is in CUDA. But I think you're correct in your initial question that you asked the guy, which is that if I'm a CEO, at some point I'm going to turn around and go, okay, what's your resiliency strategy? How are you going to build a platform for me where I'm not going to get taken out because my supplier gets hacked? Somebody finds a hole in CUDA, and with that they can shut off all of my recommendation systems. I mean, I think that resiliency is going to move to the forefront; right now it's cybersecurity, but that's just a narrow aspect, really, of resiliency, and I think that's going to change the world for CTOs all over the world.

0:26:48 - Mehmet
Yeah, 100%, I agree with you. I'd say we will have to wait and see how it happens. Now, you mentioned also, you know, AI and the ethics of AI. So a lot of debates, a lot of voices. A couple of months back, there was the famous letter signed by, you know, people like Steve Wozniak and others, and while some people say, let the companies control themselves, other people say no, we cannot figure out if one guy inside a company might do something wrong. So what do you think policymakers, businesses and technologists should do, or how should they collaborate, let's say, to ensure that we have responsible innovation?

0:27:36 - Bill
Well, the big clinker that's coming in AI is the question of copyrighted content: can AI use copyrighted content without permission and without compensation? And OpenAI did. OpenAI went out and they hoovered up the internet, and they didn't pay anybody, they didn't tell anybody, and they trained ChatGPT, and that's how they've done it. So there's a potentially forthcoming lawsuit from the New York Times against OpenAI, which will ask that they destroy ChatGPT and start over again without using New York Times content. So that sits there off to the side. The issue regulating AI is that it's out of the bag. There are countries in the world which are not going to care about this, will not care about any regulation, and will build AI that is even better than anybody else's. So if you say to countries that respect legal things, your AI is going to suck compared to the people who don't, that's not good. I mean, in 2019, the US Air Force did a flyoff in which they flew their five best fighter pilots in the simulator against AI, and the AI won every round, every one, and the reason in the end was simple: the AI can fly the plane at the limits of the plane. The human pilots had to fly at the limits of the human. So AI is going to matter in, unfortunately, combat, and you're going to want the best AI, and so countries are going to face really tough choices about what they do, because the learning skills may be learned in a lot of copyrighted content. They may not seem directly applicable, but AI is learning all the time and building up its knowledge algorithms.

So I don't know a government regulator, there may be a few more in the UK than in the US, but the government isn't staffed to do this. I mean, if you're a brilliant AI programmer today, your starting salary in the US is half a million dollars and your top salary maybe 10 times that. The government doesn't offer jobs like that, right? Okay, you can get an $80,000-a-year job in the government, or $100,000, or 100,000 euros maybe. So I don't know how they're going to regulate it. I mean, regulating technology is hard; people have always found a way around it. You know, you need responsibility, accountability.

I mean, there is a movement to make the people who make algorithms accountable for them, and that's certainly going to happen in healthcare, because the algorithm is basically like a drug, and in the end it'll be released like a drug, and it's going to be for liability reasons, because if somebody dies because the algorithm makes a mistake, then everybody's going to be wanting to point fingers as to whose fault it was. So I think you have to say that if you have an algorithm and you're using it to make consequential decisions, you're accountable, and you're accountable for that algorithm, and for following responsible behavior around testing it and releasing it. I mean, that helps. But I will say this: never in my history of technology have we been pushing a new technology that is commonly known to have hallucinations, and yet every AI researcher out there says, oh yeah, well, those are hallucinations. Oh yeah, yeah, yeah, that's a hallucination. Well, no. I mean, if that hallucination causes my plane to crash, I don't think of it as just a hallucination.

And yet people dismiss it. You know, they go, oh yeah, well, we just make stuff up, that's the nature of the beast. But that's going to be the problem as well. I mean, in the Cold War we built a distant early warning system that was going to detect a USSR launch of missiles on the United States, and the first day that it went live it detected a Russian launch, and fortunately the major on duty hit the bypass button, because he thought it unlikely that the Russians would pick the instant the system went live to launch an attack; it didn't seem to him to make any sense. It was the moon coming up. Wow.

I mean, I do worry about this. In the late 70s I helped a group that testified in Congress about the dangers of computer-fired anti-aircraft missiles, because at the time it took 6,000 decisions per second to launch an anti-aircraft missile, and so the only way to do that was computers. But we were actually letting computers make decisions that could lead to war. So I don't think you can regulate. If I were Sam Altman running OpenAI, I would want regulation, because regulation ensures that no new competitor can come up. If you're in the lead, you want regulation so that new competitors are burdened and have trouble coming in. That's been true in every industry. I mean, once you get your dominant market share, then you want regulation so that new people can't enter. So there are economic motives here that I think may affect positions. But how do you regulate AI? I don't even know what that's saying. I don't know.

0:34:08 - Mehmet
You know, Bill, a couple of months back I was chatting with someone, especially when the letter came out from Steve Wozniak and the others, and I think even Elon Musk was pushing, saying the same thing. And they said, look, if OpenAI decides today to stop developing, someone else will continue; these models are already available out in the wild. So someone in, as you said, a country where there are no regulations, or maybe even a hobbyist, might, you know, just try to do something with it. So how can we stop it?

0:34:47 - Bill
So I mean the software is open source from Google, right? The software comes from DeepMind, now integrated into Google. It's open source. So, look, I don't think someone's going to do this at home. I mean, the plant on which ChatGPT was trained cost a lot of money, maybe billions, to build up all those data centers. But somebody who could write a 50 or 100 or 200 million dollar check could certainly take that software and go train it, and that's all you need, and the stuff is there in the cloud, so you could certainly buy the computing on the margin. You don't have to go build data centers. So, yeah, the genie is out of the bottle. You're not going to put it back in.

0:35:41 - Mehmet
Yeah, it seems like this. And, you know, maybe this is a futuristic question, but beyond, you know, generating text and images, what could we expect from this technology? Where could it actually lead us?

0:36:02 - Bill
Well, I mean, one of the arguments of The New Technology State is that this is just an algorithm. AI is just an algorithm. It's got two billion parameters instead of 100, but it's just an algorithm in the end, and we've been going through a history here in our last 20 years of algorithms running our lives, and algorithms already run our lives. I mean, they determine what we pay for an airline seat, they determine whether an airline seat is available. They're driving our cars, in part. Algorithms are, you know, driving healthcare. I think the most exciting stuff that I've seen is inventing new drugs, and that could have enormous impact.

What the political scientists are worried about is the ability to do completely personalized advertising, and that's effectively what Trump did. He used algorithms on his own database to pick the people that he ran these ads to, because these ads would be offensive to a lot of people, and no one else saw them except the people he targeted, so nobody even knew about them, and yet they were very, very effective. So you get to do that. But look, an AI is just like a child, and as it grows it gets smarter, and there isn't going to be one massively smart AI, because every AI is different depending upon what it was trained on, the order in which it was trained, the hints that it was given, the experience it has had. So you're going to have a lot of them.

What I think is the fatal flaw of AI is that I don't know how they build trust. When we go do a massive project and put it together, the ability of humans to collaborate depends, in the end, on trust. We can build up relationships; we've been evolving trust as a species for two million years, and we know how I can trust you and you can trust me. I don't know how you do that with AI, because one thing we know about AI is that it lies with impunity, and people have run lots of experiments. I saw one where an AI lied to somebody to get them to solve a CAPTCHA for it. Today they can do CAPTCHAs, but a year ago they weren't very good at it, and it lied: it said it was a disabled person, visually impaired, who couldn't see, and would he please help him out, and the guy eventually did. So I don't know how you build trust, and this inability to build trust is going to be a big deal. 

I use the app Poe from Quora, and the majority of my searches now are done on Poe, not on Google. I ask the AI and I get better answers faster, unless something is really current, in which case it doesn't work, because it's working on older knowledge. I mean, if you had asked the most knowledgeable human on Earth 2,000 years ago, they would have given you an answer to every question, some of which we would think ludicrous, but that was the best of their knowledge at the time, and sometimes it was wrong. Archaeologists uncovered a burial site in Peru a few years ago: the people there were fighting climate change, and to the best of their knowledge the way to do that was a mass sacrifice of children to the gods, and they killed around 240 children in one day. AI is just like that. It gives you the best answer it has, given the knowledge it's been trained on. 

But no AI is going to be trained on all knowledge, and dissent, in the end, is the essence of innovation. If you don't dissent, you don't innovate; you don't say there's a better way and go find it. And so AI is not going to invent dissent, I don't think. It's designed to tell you the conventional wisdom. Going back to my time with Galbraith: if there's one thing Galbraith taught me, it's that the conventional wisdom is almost always wrong, and you have to go the next layer down. Or Peter Drucker, quoted many times as saying that any decision made unanimously is clearly wrong, because all that proves is that you haven't probed the issue deeply enough. And I think that's the world we live in. I don't understand some of what people are saying. Are there a lot of jobs that are going to go away? 

Yeah. And the idea of a lifetime career? That has certainly gone away almost everywhere. I was at a golf course recently where they put in AI-driven lawn mowers: the mowers go out, GPS-directed, and cut the grass. There used to be a team of humans that did that. 

Not highly skilled jobs, but still jobs. When the mowers' power runs low, they know they have to go back, plug in, and recharge. They can mow grass at night because they don't need vision; they're working off GPS coordinates. Or take junior analysts in financial companies: I can feed a company report into the cloud and ask, is this division profitable, and it'll give me an answer. So lots of jobs are going to change, and people who learn how to employ AI to make their jobs easier and themselves more productive are going to be winners. But there are going to be fewer of those than there are going to be losers. 

0:42:21 - Mehmet
So what will we be doing, Bill? That's actually my next question to you: what is the future of jobs? And I don't mean in terms of remote or in-office. I mean, as humans, what will we be doing if machines can do the majority of things? 

0:42:43 - Bill
You're making an assumption there that may not be correct, which is "majority." 

0:42:48 - Mehmet
Okay. 

0:42:50 - Bill
I mean, AI has no common sense. So how important is common sense? Important enough that the major on duty at the radar system says, I don't think the Russians would pick today to attack, and stops it. An AI would have said, go to DEFCON 1, launch nuclear war. So common sense matters, and matters a lot, and there's all this stuff that's going to be beyond your training set. Somehow we as humans are able to reason around that and respond accordingly, so I don't know that the world changes very much there. But the way we solved this problem before was by reducing the labor force. Look at the Thirties in the United States again: what Franklin D. Roosevelt did, as much as he could, was put in the 40-hour week and create an option for people to retire. So he shrank the workforce, and we're going to have to shrink the workforce. I mean, that's what you're saying. 

Now, the question is how we afford that. How we do it is we shrink the workforce; how we pay for it is the question that is going to be debatable. And you're going to have to figure out how to give the people who are retired a meaningful way to spend their time, and that will become the second challenge. I don't have any problem predicting that we're going to shrink the workforce. We've already shrunk it: the labor force participation rate keeps going down, and it goes down because of many factors, the pandemic being a big, major structural change. But that's what's going to happen. 

How we afford it is going to be the great policy debate, and unfortunately there isn't any politician around talking about that. No one wants to bring that subject up, because it's a third-rail subject. But in the Thirties we did it by putting in 40-hour work weeks and by putting in early retirement with Social Security in the US. That's how we did it, and that's what we're going to have to do. 

AI will probably help us produce more energy, and it will probably help us do other things. But then you get back to what the pandemic taught us: we over-optimized on efficiency and forgot about robustness, and then we became very fragile and supply chains broke everywhere. Most of the corporate data in the world is still sitting on IBM mainframes. IBM still has an eight-billion-dollar business selling mainframes. That should caution us. Why? 

Well, there's a lot of software in the world that's 40 years old, and that software is running, right, and there's nobody around who knows it. I read the other day that the Bloomberg terminal, a very influential system, is written in Fortran, and they have a lot of trouble hiring people, because for most programmers Fortran is something they learned about in computer history class, not something they learned to program in. Or companies discover that they own 200,000 lines of COBOL, and there isn't anybody under 45 who is a COBOL expert. 

We've got lots of obsolete software. And innovation is a two-sided coin: the other side of the coin is obsolescence, and the hardest problem for society to deal with is obsolete people, and AI will make a lot of people obsolete. They may be very talented individuals, but can you retrain them? We've tried in this country to teach people how to be coders, but AI is going to eliminate a lot of those jobs, not by eliminating coding. My friends who are brilliant programmers tell me that ChatGPT doubles their productivity. It just means you need fewer programmers, and the really good ones can use it and the not-so-good ones can't. 

0:47:56 - Mehmet
Yeah, actually, I come from a technology background, but I was not a coder. And I feel now I have superpowers, because whenever I want to automate something, maybe with a Python script, I give it to ChatGPT and it does it for me, right? It's like... 
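As a small illustration of the kind of one-off automation being described here, a minimal Python sketch (the task, the function name, and the sample filenames are all hypothetical, just the sort of chore ChatGPT will happily generate on request):

```python
import re

def slugify_filename(name: str) -> str:
    """Normalize a messy filename: lowercase, spaces to underscores,
    and strip any character that is not alphanumeric, dot, dash,
    or underscore."""
    base = name.strip().lower()
    base = re.sub(r"\s+", "_", base)          # collapse whitespace runs
    base = re.sub(r"[^a-z0-9._-]", "", base)  # drop everything unusual
    return base

if __name__ == "__main__":
    for raw in ["Quarterly Report (FINAL).PDF", "My Notes  v2.txt"]:
        print(raw, "->", slugify_filename(raw))
```

Nothing here is deep; the point is exactly the one being made in the conversation: a non-coder can describe the chore in plain language and get a working script back.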

0:48:16 - Bill
It's like the augmentation thing they talk about now. But we know from research that ChatGPT-generated code is full of security holes. 

0:48:26 - Mehmet
Yes, that's what I was wondering. 

0:48:28 - Bill
At least twice as likely to insert bugs as a human doing the same job. So will it get better? Yes. But will it get perfect? No. Perfection is very hard. I mean, Elon Musk talks about how great Full Self-Driving is. 

Well, for random historical reasons, because of when I bought my Tesla, I'm one of the 75,000 or 80,000 people who has the Full Self-Driving beta, and it's 95% accurate. It's really pretty good. The problem is that in driving you can't be 95% accurate; you have to be 99.999% accurate. And I don't know what it's going to take to get from A to B, because in driving you also have to predict what another human being is going to do, and there's no AI in the world that can do that, because you don't even know who the person in the other car is or what that human is going to do. So perfection is very hard. Can AI get to 95% solutions? Yes. Can it get to 100% solutions? No. So we're going to end up in a hybrid world. But we're already there. 
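The arithmetic behind that accuracy gap can be made concrete. Assuming, purely for illustration, 10,000 safety-relevant decisions on a long drive (a hypothetical figure, not a measured one), the expected number of wrong decisions is:

```python
def expected_failures(accuracy: float, decisions: int) -> float:
    """Expected number of wrong decisions for a given per-decision accuracy."""
    return (1.0 - accuracy) * decisions

decisions = 10_000  # illustrative count of decisions on a long drive
print(round(expected_failures(0.95, decisions)))        # 500 wrong decisions
print(round(expected_failures(0.99999, decisions), 1))  # 0.1 wrong decisions
```

At 95% accuracy that is hundreds of errors per trip; at 99.999% it is roughly one error every ten trips, which is why "95% accurate" is impressive as a demo and unusable as a driver.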

I mean, you go to a restaurant today and algorithms decide what table you get, algorithms decide what food you get served. We're all living in that world already. George Hotz, the famous hacker, posted on Twitter, now X, that you have to remember that in 15 years you're going to be able to run an artificial general intelligence on your smartphone; a smartphone 15 years from now will have enough GPUs to run this stuff natively. At that point we're in a world of smartphones with a human attached as a peripheral, because our lives get run for us. That's a very, very bleak forecast. I don't know. I summarize it in one simple sentence: I believe that evolution is smarter than AI. 

0:51:04 - Mehmet
I agree. I agree with you, Bill. Actually, history has shown that to us many times as well, I believe. 

0:51:10 - Bill
Right. Evolution is smarter than AI, and I do not fear for the human species. There are going to be millions of AIs, and they're going to work in networks, and they're going to have to learn how to cooperate. It's no different than a child. I was dealing with a three-year-old over the weekend, and you have to deal in the context of the three-year-old; talking about things the three-year-old doesn't understand is just wasting your breath. What he did assume was that I was there only to play with him, and that I was to ignore his siblings and his parents, because, after all, I was only there to play with him. Which is how a three-year-old thinks. 

0:51:57 - Mehmet
But yeah. Actually, you know what, some people have even said that I am so pro-AI. I said I'm not pro-AI. But it's exactly what you said now, Bill: think about it, it is not a conscious thing, right? It needs someone to give it input so that it gives output. At the end of the day, you need to feed it input so it can go. Of course you can make it automatic, but you still need to feed it something so it gets you what you are waiting for. 

So this is why, when I saw the previous, I forget his name, but he was one of the people leading the AI division at Google, saying, hey, we should stop, I didn't like it. I don't like those messages that make people fearful, that push people to think this technology is bad and we should not use it at all. So I don't like such articles. But what you mentioned now, coming from someone as experienced as you, I really hope that anyone listening or watching us will get it. 

0:53:18 - Bill
My favorite Star Trek episode is the one with Nomad. Nomad is this seemingly omniscient AI device that shows up, and in the end Kirk has it destroy itself, because he shows that it has made a mistake and it can't deal with making a mistake, and it basically shuts down. Evolution is smarter than AI, and that episode is proof. Are there going to be problems? Sure. Are there going to be jobs lost? Yes. We've got to figure that out, but we won't get ahead of it. And meanwhile we're worrying about how to regulate something that is inherently unregulatable. 

If you want to slow it down, there's a simple answer: put a tax on GPUs. I can slow down the AI revolution very quickly, destroy a trillion dollars in market cap, just by putting a tax on GPUs. If you want to slow it down, tax everybody for every GPU they have operating. Okay, now we've slowed down AI. It's not hard; you don't need to do anything else. I'm an economist; that's what you would do. No one is proposing it. But if you want to slow down AI, you put a tax on GPUs. It makes sense. 

0:54:40 - Mehmet
Bill, as we are coming to an end: with your experience, having seen it all, as we can say, what is your advice for the new generation of startup founders and technologists? If they could walk away with one thing from you today, in two or three sentences, what would you tell them? 

0:55:08 - Bill
I mean, when you see a technology that's going to win, you jump on it. If you're a technologist, investing in something that's going to be obsolete is just bad. On the other hand, you can sometimes make a lot of money sticking with old technology, because eventually you become invaluable and you can just extract money. The world is going to change, and it's going to change at a very dramatic rate. It's fundamental principles, I think, though, that matter. 

I learned long ago from a friend that the first principle of building systems is to encapsulate complexity. That's how you build good, robust systems. Guess what: I learned that 50 years ago, it's still true, and it will still be true in 20 years. There are always parts of a system that are pretty routine, and there are parts that are going to be incredibly complicated. What you want to do is encapsulate the complexity and not let it spread like peanut butter throughout your code, because then it becomes unmaintainable and unchangeable, and then you're obsolete, and then it's gone. So focus on basic principles. Then, if you're looking for senior jobs: senior jobs aren't about writing code, they're about managing people. What you have to do is figure out who the people are that can really deliver. 
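A minimal sketch of what "encapsulate complexity" looks like in code (the legacy date formats and the function name are hypothetical): all the ugly special cases live behind one small function, and the rest of the program only ever sees a clean result or a single failure mode.

```python
import re
from typing import Optional, Tuple

def parse_legacy_date(raw: str) -> Optional[Tuple[int, int, int]]:
    """The messy rules are contained here, behind one door.
    Returns (year, month, day), or None for an unknown format."""
    raw = raw.strip()
    if re.fullmatch(r"\d{8}", raw):                    # "YYYYMMDD"
        return int(raw[:4]), int(raw[4:6]), int(raw[6:8])
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", raw)  # "DD/MM/YYYY"
    if m:
        return int(m.group(3)), int(m.group(2)), int(m.group(1))
    return None

# Callers never see the regexes, only the simple interface:
print(parse_legacy_date("20230913"))    # (2023, 9, 13)
print(parse_legacy_date("13/09/2023"))  # (2023, 9, 13)
```

If a third format turns up, the change stays inside this one function instead of spreading, peanut-butter style, through every caller.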

There's a great interview with former President Obama that ran recently, in which he was asked what basic advice he would give, and his advice is actually very good: figure out how to get stuff done. Figure out how to get stuff done. And architecture matters enormously. If you don't understand architecture, try to learn it, because as new systems come in, you're always going to have to architect hybrids. The AI itself isn't going to be the winner; the application of AI is going to be the winner. You've got to figure out how to go do that. That's where the economic rewards are going to be. Yes, the people who create the tools will make some money, but the real winning is going to come because you figure out how to take two points out of cost, or how to add two points to gross margin. That's what's really valuable. 

0:57:44 - Mehmet
That's great insight from you, Bill. I think the book will be live on September 12th. 

0:57:53 - Bill
That is an autobiography, coming in another couple of months, trying to do what you just asked me, which is: what's the career advice from 60 years of doing this? 

0:58:05 - Mehmet
Nice. 

0:58:06 - Bill
That's what I did in the pandemic. 

0:58:10 - Mehmet
Very nice, very nice, Bill. I'll make sure to put the website for the book in the episode description. Anything you'd like to share before we end, Bill? 

0:58:25 - Bill
No, thank you. I think this is the most profound question in society. Is AI going to be transformational? Yes, but it's going to be incremental. It's not going to wake up some day and say the world changed overnight. It's going to be incremental, at a faster pace: every one of these waves has come faster than the prior one. But the world is going to change, and what we have to figure out is how to live there. Trying to slow it down is, I think, the least productive application of energy I can think of. 

0:59:03 - Mehmet
Yeah, great insight. Well, Bill, thank you very much for your time with me today. I really appreciate it. I am sure the audience, whether they are listening or watching, got tons of information from you and your experience. I'd advise everyone to go check out the book when it's out, and check Bill's profile as well. You are a walking library, Bill, if you allow me to say so. 

0:59:36 - Bill
Thank you very much. Maybe I can visit you to buy. 

0:59:39 - Mehmet
Yeah, sure. As usual, this is how I close: guys, keep your feedback coming. Thank you for tuning in. If you have any questions, you know how to find me; you can reach me on social media, mainly LinkedIn, where I'm most active. We'll meet again next episode. Thank you very much. 

0:59:55 - Bill
Bye-bye, thank you, bye-bye. 

Transcribed by https://hello.podium.page/?via=mehmet