In this episode of “The CTO Show with Mehmet,” we are joined by Brad Micklea, the CEO of Jozu. Brad shares his extensive experience in the tech industry, detailing his journey from leading the Amazon API Gateway to founding Jozu. Jozu is building what they believe is the first control plane for AI projects inside enterprises, aiming to help organizations accelerate their AI development and protect their operations.
Brad discusses the current challenges enterprises face in integrating AI projects into production. He emphasizes the importance of having the right strategy and organizational structure to execute AI initiatives effectively. Brad shares his insights on the need for a solid connection between AI teams and product teams, ensuring that data scientists are embedded within the product teams to better understand and address customer needs.
One of the core topics explored in this episode is KitOps, Jozu’s open-source project that aims to simplify the management of AI models and their deployment in production. Brad explains how KitOps provides a central registry for all AI project components, ensuring that organizations can manage dependencies and control versions effectively. This approach helps enterprises maintain control over their AI models, ensuring that sensitive data is protected and that AI-driven differentiation is retained within the organization.
Brad also highlights the similarities and differences between MLOps and DevOps, illustrating how KitOps bridges the gap between these two disciplines. He explains the importance of using existing tools while centralizing AI project artifacts to streamline deployment and management processes. This integration allows enterprises to maintain a high level of control and security over their AI operations.
Throughout the conversation, Brad offers valuable advice for tech founders and executives looking to integrate AI into their organizations. He emphasizes the importance of listening to customers and iterating on AI initiatives based on real-world usage and feedback. Brad shares his optimistic view of AI’s potential to unleash human creativity and transform various industries, drawing parallels to past technological disruptions like the internet and mobile computing.
More about Brad:
Brad is the Founder & CEO of Jozu and a project lead for the open-source KitOps project (kitops.ml), a toolset designed to increase the speed and safety of building, testing, and managing AI/ML models in production. This is Brad's second startup; his first, Codenvy (the market's first container-based developer environment), was sold to Red Hat in 2017. In his 25-year career in the developer tools and DevOps software market, he's been the GM for Amazon's API Gateway, and built open- and closed-source products that have been leaders in Gartner Magic Quadrants. In his free time he enjoys cycling, reading, and vintage cars.
https://www.linkedin.com/in/bradmicklea/
01:08 Brad Micklea's Background and Experience
02:41 Challenges in AI Implementation
04:16 Strategies for Effective AI Integration
04:54 The Role of Documentation in AI Strategy
05:48 Organizational Structure for AI Success
07:06 Moving AI Projects into Production
09:05 Managing AI Models in Production
10:22 Introduction to KitOps
12:27 Centralizing AI Project Components
20:37 KitOps and Enterprise Integration
21:37 Measuring AI Integration Success
25:45 Balancing Public and Private AI Models
27:12 Addressing the Skills Gap in AI
29:13 The Role of Open Source Models
30:41 Specialized AI Models: A Case Study
34:50 The Future of AI and Human Creativity
38:51 Advice for AI Startups
45:12 Building and Testing AI Prototypes
46:10 Conclusion and Final Thoughts
Mehmet: [00:00:00] Hello and welcome back to a new episode of the CTO Show with Mehmet. Today, I'm very pleased to have joining me Brad Micklea, who is the CEO of Jozu. Brad, the way I love to do it is I keep it to my guests to, a little bit, introduce themselves, tell us a little bit more about, you know, your background and, you know, what you're currently up to.
Mehmet: So the floor is yours.
Brad: Oh, thank you, Mehmet. Um, all right. So, I'm the founder and CEO of Jozu. We're an organization that's building out what we believe is the first control plane for artificial intelligence projects inside enterprises. Enterprises are struggling in a lot of cases to get their AI projects into production.
Brad: And we believe that part of that is down to the fact that they don't have the right controls in place and the right process in place to both accelerate their development and protect their organization. So that's what we're working on. Uh, at the core of this, we [00:01:00] have an open source project called KitOps, K-I-T-O-P-S, uh, which has been getting great traction within enterprises.
Brad: And we're building the kind of enterprise features around that under the Jozu brand. Now, before starting this, um, I was the general manager for the Amazon API Gateway, which is used by probably millions of people around the world at this point. I'm sure all of you will be familiar with it. I did that for two and a half years.
Brad: Before that, I was the vice president of developer tooling at Red Hat, um, there. Of course, big open source company, the biggest open source company. I came to Red Hat when the last startup I co-ran, Codenvy, uh, was acquired by Red Hat in 2017. So this is my kind of second foray into the, into the startup world.
Brad: So I'm based, uh, in Toronto and really looking forward to the, uh, conversation, Mehmet.
Mehmet: Great. Thank you, Brad, for this introduction. And, you know, I think we have lots and lots to cover today, [00:02:00] especially because you mentioned about the challenges that organizations are having today. I want to start from there, Brad.
Mehmet: So, of course, like, we keep mentioning this on the podcast that AI is not something new, but, you know, with what happened since end of 2022 and, you know, ChatGPT, Gen AI, and all these things. So organizations started to feel, yeah, like, hey, we're missing out on something here, and we need to bring our AI capabilities up to speed. And of course you have spotted some gaps over there. So you mentioned, like, you touched base very quickly, but I would love to hear from you: if I am the chief digital officer or maybe I'm the chief technology officer at the organization, so I got the mandate from the board.
Mehmet: I'm sure maybe people will agree with me that, okay guys, we need to [00:03:00] jump on this AI thing. We need to bring the organization up to speed; otherwise, we're gonna face issues. So from your perspective, Brad, what are, like, the first obstacles that you are trying to solve, of course, with Jozu and KitOps, that these organizations are facing today?
Mehmet: I love to tackle the problem first and then try to understand how we are solving that.
Brad: Love it. Love it. Um, I'll actually mention a couple of challenges really that are the first challenges. We're not going to really help solve those, but I'll just call them out for people just so they're aware of them.
Brad: Um, so I think the first thing of course comes down to the strategy and the organization. Really, you can't do anything in a business without having the strategy and an organization that is structured in order to execute that strategy. And I think that that is a place where some organizations are just kind of coming up to speed or perhaps even struggling a little bit, um, trying to understand where does AI really fit into their product vision, into their differentiation vision, their [00:04:00] competitiveness, into their future as an organization.
Brad: So I think that's the first thing: to really focus on that, getting solid with that. Personally, um, I must admit one of the things I loved about working at Amazon, and it's not for everybody, uh, was the documentation-centric culture. And one of the things I liked about that is that, unlike a slide presentation, where honestly you can create a very compelling slide presentation that only skims over kind of the surface of a problem and doesn't get into the guts, where really projects either succeed or fail.
Brad: When you write a doc, you know, three pages, four pages, five pages, whatever it is, it forces you to think more deeply about the problem and you can often tease out issues and then solve them before you go and bring it to a larger audience. And so my suggestion to people is always that if you think that you can work that way, try writing a document about what that strategy, what the goals for AI are in your company, because you may find [00:05:00] that actually helps you pull the answers out more than you expect.
Brad: So that's one. Two is the organization. A lot of folks, I think, are leaning towards hiring, uh, data scientists, ML engineers, and kind of putting them in their own team, kind of isolated over here, and saying, okay, you figure out all this AI stuff and then come back and tell us when you've got the answer.
Brad: I understand that. I think it's wrong. Um, it's appealing because it's an area that most people don't know very much about. And so this idea that they're just going to hang out on their own and magically come up with the answer is appealing, but that's not really how things tend to work, in my experience, in organizations large or small.
Brad: I think there still needs to be a fairly solid connection between that team and the teams that are talking every day to customers, because, fundamentally, most data scientists have not spent a huge amount of their lives interacting with enterprise customers. That's just not a world that they come from.
Brad: So they're going to need education there. And I think embedding them [00:06:00] into the product teams is actually a much more powerful way to get to the types of use cases that for any organization are going to launch things forward. So those are the first two things you've got to do kind of as the prerequisites.
Brad: Again, Jozu doesn't really help with that. Um, but that's my advice anyway for tackling this. Now you've got, let's say, an AI team. They're working on solving some problems. You've got a good vision. Everybody knows, okay, so for this product line, we want to do this. For that product line, we're going to do something different.
Brad: And for this third product line, it's a third thing. The next thing you've got to tackle is, can we use, you know, an OpenAI solution, can we use a Google solution that exists already out there, and we're just leveraging it via the APIs, the interface that they already provide? In most cases, my argument is no. That's probably okay for, like, a prototype just to see, hey, is this going to work?
Brad: Is it going to have value? Are people going to look at it and say, oh, that's cool? [00:07:00] Great. You know, that's a good way to do a prototype. But once you're past the prototype stage, ultimately, you need to own the intelligence that that AI has, because that is your company's differentiation. That's your competitive value, and giving it out to an OpenAI or Google or whomever means that their model gets smarter, and their smarter model now works just as well for your competitors as it does for you.
Brad: You're essentially training that model on your use cases, which helps you, but also helps your competitors. My recommendation would be: keep that AI intelligence in the organization, hold on rigorously to the gains you get there, because those are the gains that will beat your competition. So to do that last part, and then, sorry, it's a bit of a, it's ended up being a longer answer than I expected.
Mehmet: No, that's fine.
Brad: Then the last part is, once you have that AI working kind of as a prototype, now you need to be able to move it into production and actually have it [00:08:00] interact with your, your users or, or whatever target your company has. That's where things get a bit tricky, because the first couple times you do it, it's not that hard.
Brad: You're really just saying, hey, I've got version one over here. I'm going to move it through this process and it gets in production. Isn't that nice. And now I'm going to replace it with version two. It's all fairly straightforward. The challenge gets to be when you have multiple of these models serving multiple different purposes and across multiple different product lines in production at the same time. That's where things get complicated, because one of the things that is misunderstood or perhaps underappreciated about AI projects is that most AI projects aren't a single model that just does all the things.
Brad: In most cases, it's going to be specialized, smaller models that have to interact with each other to solve a larger problem for the customer. Some of those will be reused in multiple ways. Some of them will be very, very specific to maybe one product line, let's say. [00:09:00] So now you're talking about kind of a web of AI models.
Brad: Now, if one of them goes wrong, it's not necessarily as simple as just, oh, well, let's drop a new version in there, because you need to understand how that's going to affect multiple different projects in multiple different business units. Now you start to need to have more direct control, understanding of dependency trees.
Brad: You need to understand how to roll back safely, how to roll forward safely. You need to understand A/B deployments, blue-green deployments, canaries, all sorts of things which live in the production world and are not really part of what a data scientist does, um, when building the prototype. That production problem set, that's what Jozu really focuses on.
Brad: We're really focused on creating the tools needed for enterprises to be able to handle these AI agents or AI models in production safely, efficiently, consistently.
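(For readers who want to see what "rolling back safely" can look like in practice, here is a minimal, hypothetical sketch. It assumes the model is served from a Kubernetes Deployment named model-serving; the names and image references are invented for illustration, and this is one common pattern rather than the specific mechanism discussed in the episode.)

```shell
# Each model version ships as an immutable, tagged image, so a
# rollout is just pointing the Deployment at a known version.
kubectl set image deployment/model-serving \
  server=registry.example.com/churn-model-server:v2   # hypothetical image

# If v2 misbehaves in production, return to the previous revision:
kubectl rollout undo deployment/model-serving

# A canary is the same idea in reverse: run v1 and v2 side by side
# and shift a small slice of traffic to v2 before a full rollout.
```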
Mehmet: I got it. [00:10:00] Now, Brad, you mentioned, and just for the sake of, uh, you know, elaborating a little bit on what you mentioned: so when you mentioned the data scientists and the other team, this other team is, like, are they the applications team?
Mehmet: Like, who are they exactly?
Brad: That's right. They would be the more kind of traditional and, uh, software engineering team. So it's going to be a mix of folks who are doing application development. They're building the microservices that will actually allow the customer to interact with that model because the customer is not going to directly interact with that model.
Brad: There's always some kind of interface. It's also a platform engineering team, in all likelihood, who is there to make sure that all those technical audiences are able to build and deploy and manage safely, um, people working in DevOps, uh, SREs. It's all those different roles that will need to touch these projects in one way or another, documentation included.
Mehmet: I got it. Now, what you just mentioned, I think it's coming now, a new term, because we have these multiple [00:11:00] components related to the AI, which is MLOps, like machine learning operations. And I know, like, this is where maybe the KitOps project comes in, that you, uh, you started as an open source initiative. So, like, can you, like, tell us a little bit about, you know, how did you envision this project?
Mehmet: Um, in order to transform the landscape of, you know, AI and machine learning model management. So I'm really, you know, trying to understand, I can get that there are complexities happening there, and, you know, how you envision this would contribute in solving this complexity.
Brad: Yeah, it's a great question, Mehmet.
Brad: So I think when you look at the state of play today in the world of kind of enterprise AI projects, things are quite fragmented. Uh, the code that people are developing typically lives in a Git repository, GitHub, [00:12:00] GitLab, whatever it is that you use. And that's great, excellent for development. Um, now historically, you know, code has been 90 percent of what needs to get deployed to production, and it's 90 percent of where all the differentiation is, if not more.
Brad: And so those, those Git repositories have really been central in everything that the engineering team did. When you look at models, it's a little bit different though, because in general, data scientists are using tools like Jupyter notebooks, and Jupyter notebooks are a bit peculiar because they are both kind of a dev environment.
Brad: Um, but they also have a runtime associated with them. Their storing of state is a bit odd, um, and unusual. And so it's not that you can just go stick a, well, I mean, you can put a Jupyter Notebook into Git, but it's not a very useful thing to do because a lot of the changes happen internally. So, you've got that.
Brad: [00:13:00] You've got data engineers working on data sets, maybe from your BI solution or from databases or just flat files that you have around. So they're working in a set of tools, you know, that they have, um, that make their jobs easier. But all the data versioning is happening out there in those tools.
Brad: You've got your model versioning happening in Jupyter notebooks, kind of, um. And then you've got folks working on the features that the model has, on the parameters that the model will need. That's in a different set of tools, typically MLOps tools. And then you've got folks who need to deploy these and are building the deployment artifacts for all of these. That's in yet another kind of repository.
Brad: That's in yet another kind of repository. So you've got, in many cases, 3, 4, 5, 6, 7 different places where parts of the AI project, which all need to work together, are being separately iterated on, separately versioned, separately stored. And that's tricky, um, because no [00:14:00] one group is interacting with all of those elements, but ultimately, You need as an organization to be able to understand where all of those elements or artifacts were in what version, in what state at what time.
Brad: And the reason why that's important is because you need to make sure that when you're deploying something, you're deploying the right version of each of those things, or else you're going to get something blowing up in production. And that's problem one, but problem two can get more scary and a little bit more nuanced, which is this.
Brad: Imagine that somebody made a mistake and they accidentally left sensitive data inside of a data set. And that data set was used to train some number of models. And those models maybe have, maybe have gone to production, maybe haven't, we're not sure. A very natural question for a, an executive leader to ask is, okay, that happened.
Brad: That's bad. Which models were affected? And are any of them in production? Are we exposed? That's a really hard question to answer if you've got to go to five [00:15:00] different groups and say, well, wait, what version of the data did you use to do that? And, oh, well, did that change? Was the code changed then? How about the features?
Brad: Were they different? Like, it gets very, very kind of enmeshed and confusing. What KitOps does is it provides a central repository, a central registry, and takes all of those different components, and allows you to continue using the exact same tools you've used. So we're not trying to replace any of the dev tools, because they work.
Brad: They're good. The scattered artifacts are the problem. So there's a central registry. All the pieces get added to a single AI project version, that version goes into the registry, and now it's canonical. Now you know: this data at this version in this state was used to train this model, which resulted in this, uh, output with these features, these hyperparameters, and this code base.
Brad: So it is absolutely unequivocal. What's nice too is, [00:16:00] inside that registry, those artifacts are signed and they are immutable. So you know whether they've been tampered with, who last touched them, when they were last touched, were they changed. So you can create a very clear line of ownership and kind of change management, which helps protect your organization in the event of some of those questions.
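(To make this concrete: in KitOps, the single versioned package Brad describes is called a ModelKit, and it is defined by a small YAML manifest called a Kitfile that lists every artifact of the project. The sketch below is illustrative only; the project name, paths, and descriptions are invented, and the exact schema should be checked against the KitOps docs at kitops.ml.)

```yaml
# Hypothetical Kitfile: one versioned, signable package that holds
# every artifact of an AI project (names and paths are invented).
manifestVersion: "1.0"
package:
  name: churn-model
  version: 1.2.0
  description: Customer churn predictor
model:
  name: churn-classifier
  path: ./models/churn.onnx
  description: Serialized model produced by the training run
datasets:
  - name: training-data
    path: ./data/train.csv
    description: Exact data version used to train this model
code:
  - path: ./src
    description: Training and feature-engineering code
```

Packing and pushing this manifest produces one immutable, tagged artifact in the registry, so "which data trained which model" has a single canonical answer.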
Mehmet: So, you know, like, you explained it in a very nice way, Brad. I got it. So it's kind of, um, how would I say, like, it's kind of the single, um, I mean, the record that they will come back to, to track, you know, every single operation that happened, whether it was a training, whether it was, you know, someone altered the model. And if, in case, something wrong happens, so they can go back and they say; it's kind of like logging, but not in the sense of the system logging that we know. It's like logging the AI/ML operations that happened [00:17:00] within, you know, the organization.
Mehmet: So, and this will help them, as you said, to see if they did something wrong with some data, for a reason or another. So really, like, yeah, I can just make sense of this. And I like, you know, to repeat this because it's very important, I believe, you know, for organizations to understand. And this is where I would ask, you know, the question; I'm trying to relate things here.
Mehmet: So if I want to make a similarity with the world of DevOps, right? So how can we relate these? And do you think, like, there's kind of an intersection between, like, what the MLOps, uh, can do with the DevOps that we traditionally know?
Brad: Yes. In fact, that's a, I love that question. Um, because there are similarities and there are differences, and I think it's really important to understand.
Brad: So as I said, if you think of the world that most people live in today, really the code is the thing that is [00:18:00] being built, is being deployed, it is being managed in production. So what I really need to understand is what is the code version? What were the last changes? And is there any issue? It's a relatively simple problem set.
Brad: And so you have GitOps, G-I-T-O-P-S, um, which often people use, and you take changes that happen in the Git repository. Those are very clearly versioned. They have digests associated with them; you can be absolutely certain of what happened. And that then triggers the build, which triggers the deployment, which triggers the, you know, the existence in production.
Brad: It's a nice linear flow. What KitOps is doing is trying to bring that same level of certainty to the much more fragmented world of ML. So you continue to use Git, yes, for your code, but you're probably not storing your data sets in Git, because it's very heavy. It's not very efficient. Now [00:19:00] you have serialized models, and they exist somewhere else.
Brad: So we're pulling all of those into one place. And now you can think about using KitOps the way you would use Git for the deployment use cases. You continue to use Git, as I said, in development; KitOps doesn't really, um, change what happens in the development phase. It's more that once the development phase ends and everybody says, okay, now we have to test, build, deploy.
Brad: Um, that's where KitOps then takes center stage, because now you're not having to go back to five or six different places. You just go to one place and say, everything I need is here. For testing, maybe they need to grab the model and the data set, but they don't need the parameters. Um, or maybe they need the parameters and the model, but not the data set.
Brad: They can grab just those pieces. For deployment, I don't need the data. I just want to deploy the model with the parameters. Um, so that makes that nice and easy. What's even better, and kind of just to your last point, Mehmet, is Git, um, [00:20:00] KitOps, pardon me, stores all of this in an enterprise's existing OCI registry.
Brad: So the place where they're storing their containers is the same place where all of this goes. So they don't need to worry about creating authentication and authorization and change control for a brand new tool or a brand new, uh, storage place. They're using the one that they already have. So we're trying to kind of marry those two MLOps and DevOps worlds.
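(Here is what that selective pulling can look like with the KitOps CLI. This is a minimal sketch: the registry address and tag are invented, and the exact command names and flags should be verified against the KitOps documentation at kitops.ml.)

```shell
# Pack the project described by the Kitfile and push it to the
# enterprise's existing OCI registry (the reference is hypothetical).
kit pack . -t registry.example.com/ai-projects/churn-model:v1.2
kit push registry.example.com/ai-projects/churn-model:v1.2

# Testing might pull only the model and the data set:
kit unpack registry.example.com/ai-projects/churn-model:v1.2 --model --datasets

# Deployment might pull only the model:
kit unpack registry.example.com/ai-projects/churn-model:v1.2 --model
```

Because the ModelKit lives in the same OCI registry as the organization's containers, the existing authentication, authorization, and change-control setup applies to it unchanged.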
Mehmet: You know, that's a really great explanation again, uh, Brad. Now, one thing which I'm not sure if I should have asked before, or now is the right time to ask. So, so we went into, you know, the integration perspective; what I mean is, like, how we bridge the gap between both teams, data engineers and the rest of the people who take care of the applications.
Mehmet: But one, one thing, you know, I'm, I'm curious to know your opinion about it, because I know, like, this is something not necessarily that you do with Jozu [00:21:00] or KitOps, but from an organization perspective. Um, you know, saying that I have this AI initiative or machine learning initiative, okay, it's fine, I get it; we're going to still start to get something over there.
Mehmet: How could an organization measure that, you know, the AI integration that we have done, or the machine learning integration we have done, is actually fitting exactly the benefit that we were aiming for? In another sense, of course, I need to start from some place. I need to have, like, data in the first place,
Mehmet: from different sources. And then I want to start to implement whatever: is it integrating with a model? Is it, like, you know, maybe the data engineers are gonna do some, some data cleaning and all this stuff? If I want to measure this from a business perspective, is it like, am I really integrating the AI in the right application, or should I [00:22:00] have started, you know, from a different application?
Mehmet: I'm sure, like, maybe it doesn't, as I said, touch directly what you do with Jozu, but I believe maybe customers, they ask you this question, Brad, am I right?
Brad: Yeah. Yeah. And you're absolutely right. It's not something that we can directly impact with, uh, with Jozu or KitOps. Um, that's part of those kinds of prerequisites.
Brad: And I think that really was what I was trying to get at with the, the comment about needing to have the right strategy. Now, I think strategy has two parts, in my opinion, um, there's the kind of grand plan for what do we think AI can bring that is, you know, 5X, 10X better than where we are today. What can, where do we think that that can have that kind of impact?
Brad: But one of the things that keeps drawing me back to the startup world is that I've always believed that the only way to really prove that is to build it. And try it. [00:23:00] I think that's the key. And I think that's where sometimes larger enterprises in my experience struggle a little bit because it is a little bit harder for them to just try things, but I don't think it's impossible.
Brad: I think sometimes they talk themselves out of it. More than necessary. So I think what I would suggest and what I've done in similar situations, this is before AI, but you know, with other kind of, um, disruptive technologies, if you try and look for what is the kind of cheapest, easiest way to actually put something out in the world and then measure how users use it.
Brad: Don't just talk to them, because if you talk to a user, people are generally nice, and they generally don't want to tell you your baby is ugly. So if you tell them, hey, this is what we're thinking of doing, they'll probably tell you they're going to use it. But if you actually put something out in front of them, then either they will use it or they won't use it.
Brad: Nobody is going to use something because they don't want you to feel bad. You know, we're not that nice. [00:24:00] So I think you can often find ways to build, like I said, kind of lightweight prototypes; be very clear with your customer base that these are beta, but that you think that they could be groundbreaking or, you know, what have you. And, in my experience, some percentage of your customers, probably, you know, probably not more than 20%, but still, even a smallish percent of your customer base, can probably give you a pretty strong signal.
Brad: On, hey, they're using this thing consistently and increasingly; we're, we're really onto something, this has great value. Or, they've all logged in and checked it out, but nobody's really coming back, nobody's really consistently using it, and you should just ditch it at that point, because it's not, it's not really working.
Brad: Um, but yeah, to me, I guess, to just directly answer your point: nothing, to me, ever equals the value of looking at actual usage. You can come up with as many metrics as you want. Actual usage, to me, is what determines love or not love. [00:25:00]
Mehmet: Great answer. I would say, um, another thing, Brad, like, because we mentioned a couple of times about training on the external models and training on the internal models.
Mehmet: Are you seeing organizations, uh, struggling in, you know, because they have a lot of options? And again, I'm asking maybe a little bit generic questions, but I think this is important, and it touches a little bit on the machine learning operations. So, because I talked to some people who are, like, executives in this space, and they said, okay, so we have two challenges here.
Mehmet: So the first one, to your point: if we go and train the public model, there is a possibility that data might not be in the best hands, I would say. Uh, we could have sent, like, data which should have stayed in the data center or on, on, on our premises and so on. But at the same time, they said, like, we're lacking the, um, the [00:26:00] skills to deal with these
Mehmet: other models that we can put on premises, right, on, on our platforms. So how are you seeing organizations, like, managing this balance? You know, and again, like, what are the problems? Is it, like, lack of skills of people who understand how to deal with these models? Is it, like, because there are no good models that can fit every single type of business? What are you seeing in that space, Brad?
Brad: Yeah. So I think that, you know, it's, it's a little bit of everything. Um, you know, certainly I think in a lot of organizations there is a bit of a skills gap. Although, interestingly, in most of the folks I've been talking to and some of the research I've been reading, that doesn't seem to be the biggest problem.
Brad: Um, I mean, data scientists are out there and if you want to hire them, they're available for hire. Um, so I think that is a, a solvable gap for most organizations. I also, I can say just from personal experience back in 2017, when I was at Red Hat, [00:27:00] I had two data scientists in a team with over 150 engineers.
Brad: And those two data scientists still made a significant impact on our business. So it's not like you need 30 data scientists if you have 150 engineers; you know, start small. Um, the stuff that data scientists can do can be quite amazing. So, you know, you don't necessarily need a lot of them. The tooling is an issue for sure.
Brad: And obviously that is something that we're trying to solve. Uh, it is a, like I said, it's a very new area and in many new areas, what you tend to see is folks who are expert in that area are the ones who build the tools and they build them for people like themselves. And this is what we saw when we started really using models, uh, heavily, you know, whatever, five years ago.
Brad: is that a lot of these MLOps tools are built by data scientists for data scientists. And so they're great for that development phase, when the data [00:28:00] scientists are doing their work; they're fantastic. But they don't help on the operation side. They don't help on the production side. They're not great for DevOps, because that's not a world that those folks come from.
Brad: So there are those gaps, and those are the gaps that we're, that we are really trying to solve. Um, but I think that, overall, if you can get that strategy right, if you can hire even just a small, small, small team, maybe even one person, um, could be enough as a data scientist, they can start now working. I think the other thing that is helping a lot here, and that you're going to see helping more and more and more and kind of closing the gap, is the open source models.
Brad: They are really, really performing very well. Um, yes, they are always a little bit behind the latest and greatest from OpenAI or Google or, or, you know, whomever, but they are doing very well. And for most organizations, they don't need to be at that cutting edge of AI capability. If they are a couple of steps behind, it's still [00:29:00]
Brad: plenty good enough to have a big impact on their business. And the last piece is, as I said, I think sometimes there's a bit of a misunderstanding, people thinking that, well, I need to find the one model that is going to do all the jobs. The way I think about it is, a model is like a person in your organization.
Brad: You would never say, I need to go find a critical hire, and this hire is going to do all the things in my organization. It would be idiotic. You would of course say, well, no, I need to hire this person into my legal group, I need to hire this person into my finance group, this person into my engineering group, this person into my HR group: specialists in each of those areas to really kind of bring up the value inside each of those parts of my business.
Brad: That just makes sense. That's how we should be thinking about AI. Um, I need this model to do this kind of job really, really well, and I'm going to help train it to get there. And then this model is going to work on a different type of job. There's a great article by the AI [00:30:00] team at LinkedIn, they actually went through and talked about exactly how they built out a bunch of the AI functionality you can see in LinkedIn today.
Brad: And if I'm remembering correctly, I could be slightly off, but I think they had four separate models. One of them was specialized in processing and understanding, um, the profiles of everybody on LinkedIn. Another separate one was just about processing and understanding all the jobs that were posted to LinkedIn.
Brad: Those two things seem pretty related, right? But they were two different models. There was another one that was there in order to interpret and kind of orchestrate. So the user would actually go into this model. This model would then tease out: okay, well, we need to understand some profile stuff, we need to look at these five jobs, and we need to make these recommendations. And it would kind of farm out the jobs to then those little sub-models.
Brad: And we need to make these recommendations. And it would kind of farm out the jobs to then those little sub models. That, I think, is the kind of winning way to think about structuring AI. And this should be familiar to people because it's [00:31:00] conceptually very similar to how we think about microservices. And that shift from kind of monolith to microservices, these specialized services, think of them as specialized AIs.
Mehmet: You know, like, you, you describe it in a way which is, again, to your point, like, you cannot have one model that can do it all. Of course, like, like, even for example, till now, and I'm talking about the things that the public would know about. So when it comes to, for example, uh, image generation, so diffusion models, right?
Mehmet: Rather than the one that OpenAI has, the DALL-E 2 or DALL-E 3. So, you know, for example, we see some models that are good at doing some such tasks that the other doesn't do. And here, you know, the idea of having, to your point, like, microservices, or, like, you, you gave the right, the nice example of hiring.
Mehmet: So for example, if, even if I want to build a very simple thing today, [00:32:00] I would hire a guy for the front end and a guy for the back end, and then, you know, maybe another guy for doing the quality assurance. So, so very, you know, logistic, uh, logical. And the other thing you mentioned is about the idea of having them in kind of a microservices setup.
Mehmet: And I think even what we, we, we start to see, even since last year: people talk about having, like, the AI model work as an agent, and then get agents to, to talk to each other, and give each one of these agents, like, you know, what it has to do. And then, you know, combined with a super agent that can kind of, you know, supervise; maybe it's trained to be good at supervision and revision.
Mehmet: Right? So, so that's exactly
Brad: No, in fact, I'd go even one step further, Mehmet, because you can look at some of the, your listeners may have heard the announcements from Google about AlphaFold, which had done protein structure prediction that kind of [00:33:00] blew people's minds, how good it was doing at this. That AlphaFold model is totally different
Brad: than Bard, which is totally different than Sora, which is totally, like, these are very specialized models. Like, that's exactly the way that this will go. Yeah.
Mehmet: A hundred percent. Now, what is the thing that excites you most about all this, you know, very fast disruption that we are living? And I'm saying we are living because it's not like an event that happened and then we stopped.
Mehmet: So how are you seeing, you know, in general, the AI progression going on, and how do you think it's affecting, again, if I want to go back to the MLOps and all the things that we mentioned, how this fast-paced disruption is affecting, you know, the way people are thinking, the way organizations are integrating AI within their applications; kind of like future trends also, if you want.
Brad: Yeah. Yeah. Okay. Well, I am, [00:34:00] uh, I'm definitely not qualified to peer into the crystal ball, but what the heck, let's try. Um, I think one of the things that excites me about any of these big disruptions is the way that it tends to unleash human creativity. Um, that's the thing that, to be honest, gets me most excited, and why I've spent the last 25 years in technology: just because there are these disruptions.
Brad: And every time there's this new explosion of creativity, and you see people look at a problem from a totally new lens and come up with a great solution where you're just like, wow, how did I live without that before? Uh, I was actually talking to one of my daughters the other day and explaining that, you know, there was a time before Google Maps existed.
Brad: And when I went to go visit customers, I had to print out a paper map of where I was going when I landed in Boston's Logan airport and had to drive out into the middle of nowhere, Massachusetts. And if I missed a turnoff, I had to stop. I had to go to my map. I had to [00:35:00] try and figure out where I was. And it blew her mind, because Google Maps or Apple Maps or any of these are so much a better way to get from one place to another than having to look at a paper map when it's raining at 11 PM and you have no light.
Brad: Um, it's going to be the same thing with AI. There's going to be a time when we will look back and go, man, how did we do these things before we had these AI systems to help us? So I'm, I'm naturally a somewhat optimistic person, you can probably tell. So when I look at these things, I don't see doom and gloom.
Brad: I don't see everybody losing their job. Um, you know, people thought that everybody's going to lose their job when the internet was here. And then it was when mobile was here and every big change is scary. And so we always look to that thing, which scares us most. I don't think it's going to happen. These AI agents will be helpers for us.
Brad: And so I think that the companies that will win are the ones that will [00:36:00] keep their eye on the customer. And not just on the AI. I think that's always hard. That's maybe the hardest thing when you have these big disruptions, is there's so much news and so much focus on the technology, the technology, the technology, the technology.
Brad: But I think the best companies always remember that the technology lives in service of your customer and their goals. And so if you have the best technology in the world, but you've lost sight of what actually is going to help your customer, you're not going to win. Another competitor who has kept their eye on the customer and has listened to the customer and who's watched the customer and who's seen what customers use and don't use, not just what big ideas are capturing their imagination will be the one that wins.
Brad: And I think that's exciting because Just like a forest when it gets too big, naturally has a forest fire to kind of burn everything down and have everything restart new and fresh. It's a bit like one of those moments where you're going to have some big companies today [00:37:00] that will not make this transition well.
Brad: But the ones that don't make the transition well probably won't because they've stopped listening to customers and if they've stopped listening to customers, then good riddance, let's get, get, get them out of here.
Mehmet: Absolutely. Yeah, I'm like you, uh, Brad, uh, you know, I'm optimistic about what's happening, and I'm happy that you mentioned it, because I always repeat this.
Mehmet: I say, like, technology, it's a tool at the end of the day, whether it's AI, whatever it is. And very nice, like, you mentioned the example of Google Maps, because today when we want to give an example about any technology, we say, like, it helps us to get from point A to point B, which is what the maps do. And the same thing applies to AI.
Mehmet: Because people come and ask sometimes, okay, what can I do with this AI? I say, look at what is, what you're struggling with today and try to see if AI can fit. If AI is not the solution, don't [00:38:00] implement AI, because you'll be wasting time and resources for, for nothing.
Mehmet: So, you're right 100 percent on this. Now, as we are almost close to the end, Brad. Now, you mentioned, like, this is the second time, you know, you're, you're in kind of a startup operation, right? And as I was telling you, part of the audience are technology founders or to-be founders, let's put it in this way.
Mehmet: And if you want to give, like, a piece of advice from, from your own experience, um, what, what would it be, like, especially for people today that are very interested in being in this AI space? So what can you leave us with today?
Brad: Ooh, that's tough. Um, but let's, let's, let's see. Uh, I think that the main thing to try and keep in mind is a little bit like what I just said about any enterprise in any business, you've got to know your customer, you've got to be listening to your customer.[00:39:00]
Brad: And I think one of the hardest things about founding a startup is that you have to have a kind of split brain in some ways. You have to have so much self belief that it can sometimes come across as a bit of egoism. Um, but you've got to believe in what you're doing because if you don't, no one else will.
Brad: And lots of people will doubt you. You will have more people tell you you're wrong than you will ever have tell you you're right. Um, and so you've got to constantly keep that momentum going, but at the same time, you can't let that self belief blind you or deafen you to what your users are telling you, if your users are telling you you're wrong, then you really need to listen.
Brad: Um, and so this is interesting, because at the last startup, uh, we were building a web-based IDE. Um, this was in 2015, roughly, uh, and already in 2015 web-based IDEs were very [00:40:00] unusual. Um, they were generally considered toys. And so we were trying to build a web-based IDE usable in enterprises.
Brad: Developers like having a local ID. None of them will ever use a web based ID and certainly won't use it in an enterprise because they're toys. This is the dumbest idea I've ever heard. But we believed and we knew there was a way that we could make this valuable. Around the same time containers were just starting to get off the ground.
Brad: Docker was just starting to get off the ground. It was not really production-ready at that point. And so myself and the founder, um, well, but especially the founder, Tyler Jewell, who is now the CEO of Lightbend, very, very smart guy. He looked at this and said, you know what? We're just going to have to make a big bet.
Brad: We're going to build this whole thing on containers, even though they're not ready, because when they get ready, we'll be a year ahead of everybody else. And that was a scary call. Um, but luckily he was right. And [00:41:00] it took us a year of struggling. All of the competitors essentially were laughing at us, because they were like, look at these guys trying to build a SaaS web IDE that does compiled languages, uses containers. And honestly, our service tipped over semi-frequently, because containers were a struggle at that time.
Brad: But fast forward a year, 18 months, and we were there. We had learned so much about using containers and had so much deep integration between what we did and the containers that when containers did kind of blow up, we were the ones that everyone was suddenly looking at and going, like, wow, look at this amazing thing that these folks have built with containers and web IDEs.
Brad: And that was really what brought Red Hat to acquire us, um, and gave us such a great exit for ourselves and our investors: seeing where that, that thing was going to change. But a lot of it was listening to the customers. When we built it, it was built for Java, and we were sure that the audience that would pick this up would be enterprise developers working in Java, a compiled language, which you typically couldn't use in a web IDE
Brad: before that. [00:42:00] But we had a very lucky few conversations where we started to notice the people who were using this most were from these organizations that seemed to be more IoT-focused. And we were like, that's weird. None of us had ever done anything with IoT before, but we called them, and we actually got into conversations with these users and listened.
Brad: And they said, well, we're doing embedded systems and for embedded systems, it's really painful to try and set up your dev environment with different drivers and different Linux kernel versions and all these different libraries to test 10 different devices. That can take more than a day. And this web IDE you guys have built, it actually seems like we can probably containerize each of those environments and get through it in maybe an hour.
Brad: And although it was not where we had targeted the company at all, we heard that and we said, Oh, hold on, this actually makes a lot of sense. And so we started to look for what features would they need to be really successful doing that for embedded systems. And we ended up scoring Samsung as a client and that just really, really helped.
Brad: [00:43:00] And so it's one of those things where we went in one direction, but we were always listening and willing to kind of tweak, to hear what our users told us more than what investors or pundits or experts or anybody else told us.
Mehmet: You know, I think this is one of the greatest pieces of advice I've heard in a long time, because, you know, of course, like, we talk about some other aspects: how to build the organization, how to do this, how to do that.
Mehmet: I'm a little bit biased because I worked, and I still kind of do, consultation; I'm, I'm, you know, I do sales also as well. And I'm very biased because the customers come first all the time. And I think there's a very thin line, which you mentioned now, Brad, between, yeah, of course you need to believe in your own idea, but listening to the others and not having the ego.
Mehmet: No, no, no, I'm building the right thing, the right thing, the right thing, and then you figure out [00:44:00] you are building a product for no one. So I think, I think, you know, what you just mentioned is the customers. You know, someone asked me the other day how I know if my idea is, you know, worth building. I said, you just go and
Mehmet: talk to customers. You don't talk to your friends. You don't talk to anyone else. You talk to a potential customer that you have in mind, and then you would understand if someone will buy your product or service, whatever, from you or not. So thank you for mentioning this, Brad.
Brad: And to tie it back to AI, Mehmet, I would say one of the things that is really exciting to me about founders today and, I think, in the next few years, is that AI has dropped the barrier for building a prototype
Brad: very low. And so, even better than talking to customers, just use AI to build the prototype and try and get people to use it. And if they use it, then go, you know, find people who can build it properly. Cause the AI is not going to build the world's best software right now, but [00:45:00] that's okay for a prototype.
Brad: Who cares? Like, as long as it works, it doesn't matter. Um, and then you can find people who can build it properly, who can scale it and do all those things. But if it fails, you haven't spent very much, you know; very low cost. You can try something else and go on.
Mehmet: Absolutely. Yeah. So it's, it's, you know, the, the ability to build an MVP became so much easier, I believe.
Mehmet: Yeah. So, so another great advice from you, Brad. Finally, where can people find more about you and about Jozu and KitOps?
Brad: Yeah, absolutely. Absolutely. So, uh, Jozu is going to be launching our product probably in the next month. Uh, you can start to see a little bit of what we're thinking at jozu.com, J-O-Z-U dot com.
Brad: Um, if you're playing with AI projects today, and the idea of being able to have all those artifacts in one place, versioned, controlled, signed, and in an enterprise registry that you already own, [00:46:00] appeals to you, then look at KitOps, which you can look up, K-I-T-O-P-S, on GitHub, or go to kitops.ml, and you'll see more information there.
Mehmet: Great. So for the audience, you don't need to worry about, you know, following the, the links; you will find them in the show notes. So I will put the links in the show notes. Brad, again, thank you very much. You know, it was very, very valuable, uh, information you shared with us today, especially on a hot topic that, as we started with, is on top of mind of every CTO, CDO, even the board level.
Mehmet: So thank you for sharing this and, uh, you know, all the best for, for Jozu and for the KitOps project also as well. And thank you for being with me here today. And this is how usually I end my episodes; this is for the audience. If you just discovered this podcast by luck, thank you for passing by. If you did so, please give us a thumbs up, subscribe, share it with your friends and colleagues.
Mehmet: And if you are one of the people who keep coming back, thank you for sending me your feedback, your comments, [00:47:00] questions, suggestions; keep them coming. I really read all of them, and I take actions, guys, on that. So thank you for the encouragements also. And as I say always, thank you for your time. Let's meet again in a new episode very soon.
Mehmet: Thank you. Bye bye.