March 11, 2025

#445 AI vs. Cyber Threats: Alec Crawford on Governance, Risks, and the Future of Security


In this episode of The CTO Show with Mehmet, I sit down with Alec Crawford, an AI and cybersecurity expert with decades of experience in financial institutions like Goldman Sachs, Morgan Stanley, and Deutsche Bank. Alec is now leading AI-driven cybersecurity solutions, focusing on AI Governance, Risk, Compliance, and Cybersecurity (AI GRCC).

 

We dive into:

🔹 How AI is reshaping cybersecurity—both defensively and offensively

🔹 The biggest vulnerabilities companies face in the AI era

🔹 Why traditional security approaches aren’t enough anymore

🔹 How businesses can balance AI adoption with compliance and governance

🔹 The rise of AI-powered cyber threats, including phishing, data breaches, and zero-day exploits

🔹 The role of AI agents in securing enterprises

🔹 Why regulators are behind and what businesses must do to stay compliant

 

This episode is packed with actionable insights for CISOs, CTOs, tech leaders, and entrepreneurs looking to navigate the fast-evolving cybersecurity landscape.

 

🎙️ About Alec Crawford

 

Alec Crawford founded and leads Artificial Intelligence Risk, Inc., which accelerates Gen AI adoption through a platform ensuring AI safety, security, and compliance. The company achieved the top rank for both Gen AI cybersecurity and regulatory compliance from Waters Technology in 2024. Alec, an AI, investing, and risk management expert, shares insights through various media and has a rich history of leadership roles, including at Lord, Abbett & Co. LLC, where he managed global investment risks. His background spans prominent positions in financial services since 1988, including at Ziff Brothers Investments, Goldman Sachs, and Morgan Stanley. Alec holds a Computer Science degree from Harvard, where he specialized in artificial intelligence.

 

https://linkedin.com/in/aleccrawford

https://aicrisk.com/

 

 

🎯 Key Takeaways

 

✅ AI is both the biggest opportunity and the biggest threat in cybersecurity

✅ AI-driven cyberattacks—such as hyper-realistic phishing, deepfake fraud, and rapid zero-day exploits—are evolving faster than defenses

✅ Many companies are adopting AI without a clear security strategy, leading to data leaks and compliance risks

✅ Traditional cybersecurity models aren’t keeping up, and CISOs must embrace AI-driven defense mechanisms

✅ Organizations should focus on AI governance early to avoid massive compliance fines from GDPR, the EU AI Act, and emerging U.S. state regulations

✅ AI agents will automate threat detection and compliance monitoring, but adoption must be done securely

 

🎧 What You’ll Learn

 

🔹 How AI helps hackers launch more sophisticated cyberattacks

🔹 Why regulatory compliance for AI is still a gray area and what businesses need to do now

🔹 The future of AI in security—from AI-powered monitoring to real-time attack mitigation

🔹 What AI GRCC (Governance, Risk, Compliance, and Cybersecurity) means for organizations

🔹 Why cybersecurity needs to shift from reactive to proactive strategies

 

 

⏳ Episode Chapters (Timestamps for YouTube & Spotify)

 

00:00 - Intro & Welcome to Alec Crawford

02:00 - Alec’s Background: From Harvard Neural Networks to AI Security

05:30 - The Acceleration of AI & Cybersecurity Threats

08:45 - The Role of AI in Phishing & Data Breaches

12:15 - How Businesses Can Secure AI Adoption Without Breaking Compliance

15:30 - The Biggest Security Gaps in Enterprises Today

18:40 - AI in Governance & Regulatory Compliance

22:00 - Why Cybersecurity Teams Are Facing Alert Fatigue

26:15 - The Future of AI Agents in Cybersecurity

30:00 - AI-Powered Threat Detection & Ransomware Prevention

35:30 - The Next Evolution of AI in Security

40:00 - Final Thoughts & Where to Find Alec

[00:00:00]

 

Mehmet: Hello and welcome back to a new episode of the CTO Show with Mehmet. Today I'm very pleased to have joining me Alec Crawford. Alec, the way I love to do it is I keep it to my guests to introduce themselves. So tell us a little bit more about you, your background, [00:01:00] and, you know, what you're currently up to. And then we're going to deep dive into two very exciting topics, which are kind of now at the forefront: AI and cybersecurity.

 

Mehmet: So the floor is yours.

 

Alec: Great. Thanks. Great to be here. Well, look, my background is as a computer scientist. I was building neural networks from scratch in 1987 at Harvard. And then, you know, fast forward to today, and I run an AI software company that focuses on helping banks and other companies using high-risk AI, meaning they've got confidential data, or patient data for healthcare, you know, keep that safe and secure.

 

Alec: Uh, I call it AI GRCC, for governance, risk, compliance, and cybersecurity, but it's really an overall AI turnkey platform. So that's what I do today, but I spent, uh, decades in finance at the biggest banks, uh, Goldman Sachs, Morgan [00:02:00] Stanley, Deutsche Bank, Royal Bank of Scotland, um, kind of doing this, doing computer stuff.

 

Alec: Doing risk management, doing some compliance stuff, doing some AI stuff. And it's been an incredible, incredible ride. And obviously, you know, AI is going to the moon right now. And I don't think we're going to be able to disentangle AI from cybersecurity. I mean, I think it's a hundred percent: AI is the most important change to cybersecurity for the last 10 years and the next 10 years.

 

Mehmet: Great. And thank you again, Alec, for being with me here today. So you just mentioned something interesting. You've been doing this for a long time. Um, how fast are things developing with AI? We've seen a lot of stories, you know, and we are listening to a lot of analysts and experts talking about it.

 

Mehmet: You have this practitioner background also, Alec, and you see things on the ground. So tell me, [00:03:00] what kind of new threats are we seeing that AI has brought to the table?

 

Alec: Yeah, so it's a very good point. I think, look, things changed, from the public perception, slow and then fast, right? So I've been seeing it progress the whole time, you know, especially since the invention of the transformer in neural networks, or what we call deep learning now. But the public perception really came with the introduction of OpenAI, when all of a sudden everybody in the world could use AI, as opposed to, you know, a few hundred thousand experts, right? So that's been the perceived acceleration. And I would divide the threats from AI into two categories.

 

Alec: Things that the bad guys can do better or easier, and brand new things, right? So the things that the bad guys can do better and easier are things like phishing emails, right? Before, you'd get stuff that was misspelled, or like, [00:04:00] obviously, oh, this is totally bogus, right? And now they're very convincing, right?

 

Alec: Now it's like, oh, my boss needs me to do X, right? And if it's coming through email and it's urgent, it's probably fake, right? You know, and you've got to rely on more secure methods of communication than email. So that's super, super important. Um, in terms of brand new threats, I think it's really more about, um, speed and discovery of weaknesses.

 

Alec: So, for example, take us back, uh, 20 years ago, right? And, you know, what was our patching cycle like? Well, you know, once every month, every quarter, we patch stuff and we don't really worry about it. Now the patch cycle is 30 days; maybe we can get it down to 25 days. Uh, but the bad guys have gone from zero-day [00:05:00] exploits taking, you know, 20, 30 days to figure out, to a day.

 

Alec: Because they're using AI, right? They're like, oh, here's an exploit; how do we take advantage of this today, before it can be patched? So this is something that Cisco loves talking about, and claims to have some software around helping companies detect and stop these kinds of things.

 

Alec: But that's an example of something that is a race, uh, that the bad guys are currently winning, right? Because the patch cycle is not going to get, anytime soon, to the point where an exploit is discovered one day and you can patch it the next, right? It's just not available right now.

 

Mehmet: Absolutely. And what about also, Alec, you know, everyone, of course, has this, uh, sense of urgency to adopt AI, especially, you know, Gen AI. And we hear about how this is kind of a mandate for them [00:06:00] to keep up with everything that's happening around us. But at the same time, there are some threats that I believe the majority of companies are not aware of.

 

Mehmet: So what do you see as the biggest vulnerabilities here? And why aren't companies addressing them fast enough?

 

Alec: Yeah, that's a great point. So take us back, you know, almost two years ago, right? When ChatGPT became a thing. And all of a sudden, CEOs, whether of small or large companies, whether they're banks, or making wheel rims in the Midwest, or independent energy producers in Europe, or whatever, all of a sudden were getting huge pressure from their stakeholders to adopt Gen AI.

 

Alec: And I mean all stakeholders: boards, investors, employees. So, um, something unusual happened, which is, uh, CEOs and boards went to the technology teams and said, get me some Gen AI. I [00:07:00] don't really care what it does. And I'm not giving you a budget right now, right? Which is super unusual. So, obviously, that kind of short-circuited the whole planning and budgeting process.

 

Alec: And over the next year, uh, companies got a lot of stuff, which may not have been what they thought they were going to get, or, I mean, that just may not have worked, right? Or may have been just massive expense for not a lot of ROI. And if you think about the different risks that bringing on Gen AI generates, there are kind of risks now, risks in the near future, and risks further in the future, right? So risks now are things like data exfiltration and compliance issues and governance issues. Which is a classic, right? We saw a big retail company adopt Gen AI last year, and within one day, someone had gone into Copilot and said, hey, how much money does my boss make?

 

Alec: And found that out. Not only found that [00:08:00] out, but found out how much money, like, every senior manager made, right? Like, whoops. You know, huge data governance issues. So that's what's happening now. And people have to be super careful about, you know, AI governance, and who gets to do what with AI, and AI policies, and things like that.

 

Alec: And then in the near future will be, you know, compliance fines and things like that. Look, the rules are just proliferating. You know, the EU AI Act is basically aimed at, uh, consumer safety, but you've also got GDPR, which protects consumer data. Like, if you screw that up with AI, you're going to have millions of dollars of fines, right?

 

Alec: Uh, Amazon paid almost a billion euros in fines a couple of years ago for GDPR violations, as an example. In the US, there are some kind of federal rules; they're a little bit loosey-goosey, but the states have gotten way more aggressive. So, for example, Texas and [00:09:00] Colorado have very strict rules now around how you use and protect consumer data.

 

Alec: So, for example, your bank granting a loan: all kinds of rules around that. And they don't really care what the feds say; if you even have one customer in Colorado or Texas, you have to abide by these rules as a bank. And I've talked to a lot of institutions that have no idea these rules are even in force, right?

 

Alec: So it's crazy. It's like they're already breaking rules that they don't know exist, right? So this is going to be a big, big problem. And we're talking millions of dollars of potential fines here, right? So, um, that's what I'm seeing out there right now.

 

Mehmet: You know, it's kind of, um, I don't like to say this, but, you know, if I want to play the devil's advocate, right?

 

Mehmet: So now, for the traditional side of technology, and I'm saying traditional because it's been with us for a long time, let me take, for example, finance. So we have [00:10:00] PCI DSS, and we have some other frameworks, let's say HIPAA in the US for healthcare. Right now AI needs governance, but at the same time things are moving so fast.

 

Mehmet: So someone might say, hey, we can't wait until, you know, we have these frameworks set up so then I can go and implement. So for me as maybe the CIO, or maybe the CISO in the organization, how can we make sure that, you know, we are not slowing down because we are always scared that we should not implement these new technologies?

 

Mehmet: Now, I understand that there are some companies that are addressing specifically, you know, these use cases you just mentioned. Again, you've been in the industry long enough, and I myself as well. So how can we do this balance of adopting the technology while not, [00:11:00] you know, breaking the rules?

 

Alec: Yeah, I think it's pretty straightforward, right? If you're doing private, secure AI on your own private cloud with your own data, and you're not using, you know, APIs to go out there and do things. Because, look, you can deploy OpenAI to Azure for Azure OpenAI. You can deploy, obviously, the open-source models, Llama, Mistral.

 

Alec: You can use AWS Bedrock for tons and tons of models, and keep everything inside your firewall. That is huge, right? 'Cause, you know, that stuff's not leaking out there, as long as it's inside your firewall. In fact, my company has a process where we can deploy our software, the models connecting to our clients' data, and then pull the plug on the internet, and you can run it air-gapped in the Pentagon if you want to.

 

Alec: You want secure AI? That is secure AI, right? The most secure AI. Um, but I think that there are other kinds of common-sense things that we can do as companies and businesses, such as implementing the National Institute of Standards and [00:12:00] Technology (NIST) AI Risk Management Framework. Now, that's not super, super prescriptive, right?

 

Alec: It basically says you've got to set your AI policy, and you've got to manage it, and you've got to know what people are doing and monitor what they're doing. Um, it's very straightforward, and I think it can be implemented by any company, whether it's five people or 500,000, right? So that, I think, is a good strategy.
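For a sense of what "set a policy, manage it, and monitor it" can look like in practice, here is a minimal, illustrative policy skeleton organized around the four NIST AI RMF 1.0 core functions (Govern, Map, Measure, Manage). The specific entries and the helper function are hypothetical examples, not anything prescribed by the framework or discussed in the episode:

```python
# Illustrative AI-policy skeleton structured around the four NIST AI RMF 1.0
# core functions. Every concrete value below is a made-up example.

AI_POLICY = {
    "govern": {  # who owns AI risk and which rules apply
        "policy_owner": "CISO",
        "approved_models": ["azure-openai", "llama"],
    },
    "map": {  # where AI touches the business and what data it can see
        "use_cases": ["call-center assistant"],
        "data_classes": ["customer-pii"],
    },
    "measure": {  # which risk signals are tracked
        "monitored_events": ["prompt-injection", "data-exfiltration"],
    },
    "manage": {  # how reviews and incidents are handled
        "review_cadence_days": 30,
        "incident_contact": "security@example.com",
    },
}

def check_model_allowed(model: str) -> bool:
    """Enforce one governance rule: only approved models may be deployed."""
    return model in AI_POLICY["govern"]["approved_models"]
```

The point is not the data structure itself, but that even a small shop can write the policy down and enforce at least one rule mechanically, e.g. `check_model_allowed("llama")` returns `True` while an unvetted model is rejected.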

 

Alec: Um, but I think, you know, job one is: if your company is simply blocking AI, and a lot of companies are, just know that if you have more than 10 people working for you, someone's out there using, you know, rogue AI, right? They're doing it on their phone, they're emailing stuff back and forth, whether it's code or client data or earnings reports or whatever it is. It's just happening, right?

 

Alec: And the fastest way to stop that is to provide a secure alternative for your own employees. Right. Very straightforward. [00:13:00]

 

Mehmet: Right. Now, how much can AI help us in this? Specifically, you know, we started to see the talk about agentic AI, right? And everything becoming agents. So can CISOs and CIOs, and even maybe the board, you know, think about AI agents to facilitate this journey and making everything in shape, in your opinion?

 

Alec: Absolutely. I mean, I think that's going to be critically important. So, as we think about this first example I gave, you know, emailing back and forth code and things like that: right now, even regulated industries are checking a small fraction of the emails that are coming in and out, right? Maybe 1%, right?

 

Alec: As an example, one of the things that we've done for a client is set up a process and an agent [00:14:00] which can look at each email and say, hey, does this look like it has compliance violations in it, right? So, revealing confidential information, things like that. And it will score them as, you know, green, yellow, red. And if it's red, you know, it's going to a human to review, like, ooh, this looks really bad.

 

Alec: This is a problem. Uh, yellow? Yeah, it goes to a human, but maybe they look at it, maybe they don't. Red? They're definitely looking at it. So think about that for all kinds of things, whether it's inbound email that contains something that looks like a link to a phishing site, or inbound email that's in a very standard format for phishing.
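The green/yellow/red triage Alec describes could be sketched roughly as below. This is a hypothetical illustration: the scoring function stands in for what would really be an LLM or compliance-model call, and the signal lists are invented examples, not anything from the client system he mentions:

```python
# Hypothetical sketch of traffic-light email triage. In a real system,
# score_email would call an LLM or compliance model; here it is a
# keyword stand-in so the routing logic is runnable on its own.

RED_SIGNALS = ("wire transfer", "gift cards", "salary", "ssn")
YELLOW_SIGNALS = ("confidential", "urgent", "password")

def score_email(body: str) -> str:
    """Return 'red', 'yellow', or 'green' for a message body."""
    text = body.lower()
    if any(signal in text for signal in RED_SIGNALS):
        return "red"
    if any(signal in text for signal in YELLOW_SIGNALS):
        return "yellow"
    return "green"

def route(body: str) -> str:
    """Red always goes to a human; yellow is queued; green passes through."""
    verdict = score_email(body)
    if verdict == "red":
        return "escalate_to_human"
    if verdict == "yellow":
        return "queue_for_review"
    return "deliver"
```

For instance, `route("Your boss needs gift cards immediately")` escalates to a human, while an ordinary message is delivered untouched, which is exactly the "red goes to a person, green flows through" split described above.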

 

Alec: Like, hey, it's your boss, I need gift cards, I'm in, uh, Switzerland, or whatever, right? All this stuff that should be obvious to many, many people. But look, people get a new job, they get an email from their boss, who's really not their boss, and it seems urgent, and they just act, right?

 

Alec: Because, look, [00:15:00] if those emails didn't work, they would have stopped sending them out, right? So the short version is: yes, I think that agentic AI can help a lot with this. Um, that being said, it hasn't quite, you know, caught up yet. And if you're a regulated institution, like a bank, look, you've got to do first things first, right?

 

Alec: You've got to have AI governance, risk, compliance, and cybersecurity in place, and make the regulators and your board happy, before you can really do anything, right? And that's something that my company helps a lot with. Uh, but, you know, it's hard. It's not just like, hey, let's get an OpenAI enterprise subscription.

 

Alec: That doesn't work. It's not going to satisfy the regulators.

 

Mehmet: Absolutely. Um, now, one thing also about the power of AI. So we've seen a lot, and you just mentioned that, you know, phishing attacks, for example, still work because email still works.

 

Mehmet: Right. [00:16:00] But we are seeing now it's going to the next level, because the threat actors are utilizing AI themselves. So how can we keep up in this battle? Every single cybersecurity expert, and, you know, I work in cybersecurity also myself, says it's like a cat-and-mouse game all the time.

 

Mehmet: So we're trying to upskill ourselves, and they do the same, now with AI. You know, what do you expect the next level to be? We've seen deepfakes on the rise as well. And the outcome is always, of course, to do the exfiltration and encrypt, which is ransomware. So where do you see this cat-and-mouse game going, you know, with the threat actors themselves

 

Mehmet: leveraging AI? And I'm sure maybe they have their own [00:17:00] trained models by now, utilizing them left, right, and center.

 

Alec: Absolutely. Yeah. I think, look, deepfakes are definitely a thing. I'm a little less concerned about that; it takes a lot of work to do, and that's really about education of your team.

 

Alec: Like, hey, if something sounds really weird, even though it looks like your boss asking you to wire money somewhere, call him up, you know. Use Teams, use a secure channel, ask if that's really him, right? Don't be fooled by something that sounds really urgent but sounds really odd, right?

 

Alec: Like, please wire money immediately, you know, please wire 5 million somewhere immediately. But I think that's still relatively rare, whereas every company is getting inundated with phishing emails and various penetration types of attacks, right? And that's going to get worse and worse, right?

 

Alec: As you point out. Um, and I think [00:18:00] the piece that's going to be important... So, two basic tenets. One is, if you're not keeping up with technological advances, obviously you're becoming more and more vulnerable to attackers, number one. But number two, as companies adopt Gen AI for the first time, and broadly across their companies, they're opening up new attack surfaces.

 

Alec: So why do we encrypt customer data, right? So that if someone gets in there and grabs the database, they can't really do anything with it. All right, well, guess what? We're enabling our teams, like the call center, to get access to customer data through AI. Now, normally, it's a little bit of an issue, right?

 

Alec: Like, okay, they can pull down a customer record; they put a customer name in, get information; not that big a deal. If they get hacked, they'll get a few customers, whatever. Okay, that's all well and good. But if a hacker comes in, or a threat actor comes in, and jailbreaks the AI and then exfiltrates all your customer data, [00:19:00] that's not encrypted, right?

 

Alec: And they have it. And now it's a ransomware attack, right? So that's what I mean by a new attack vector. And, as you know, it's not a matter of if, but when, these different AIs get hacked, and you need day-zero detection, right? None of these companies do that right now. So if someone comes in and jailbreaks your AI, you may figure out 17 days later that someone's exfiltrated all your client data. That's one of the things that we focus on: hey, was there any kind of hacking attempt or weird behavior with AI, whether it's data exfiltration or a DAN-style attack or whatever? And also the ability for the cybersecurity team to dial what I'll call paranoia up or down, based on, you know, what the AI or the agent has access to.

 

Alec: Is it very secret data? Then make it very paranoid: anything that even smells like an attack gets flagged to the [00:20:00] cybersecurity team, right? Something that has access to things that are already public? Whatever. You know, you want to detect an attack, but you're not as concerned if data leaks out.
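That "paranoia dial" idea could be sketched as a sensitivity-dependent threshold: the more sensitive the data an agent can reach, the lower the suspicion score needed to flag activity to the security team. The threshold values and the upstream suspicion score here are invented for illustration, not from any product discussed in the episode:

```python
# Hypothetical sketch of a "paranoia dial". An upstream detector is assumed
# to produce a suspicion score in [0, 1] for each prompt or action; the
# flagging threshold shrinks as data sensitivity grows.

THRESHOLDS = {
    "public": 0.9,    # only near-certain attacks get flagged
    "internal": 0.6,  # moderately suspicious activity gets flagged
    "secret": 0.2,    # anything that even smells like an attack gets flagged
}

def should_flag(suspicion_score: float, data_sensitivity: str) -> bool:
    """Flag activity when its score crosses the sensitivity's threshold."""
    return suspicion_score >= THRESHOLDS[data_sensitivity]
```

So a mildly suspicious prompt (score 0.3) is flagged when the agent touches secret data but ignored for merely internal data, which is the dial-up/dial-down behavior described above.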

 

Mehmet: Great. Now, in terms of the current solutions that organizations have, do you think the vendors are doing a good job in lifting, you know, the current things? Especially, I've talked last year and the year before to a lot of leaders in cybersecurity, like CISOs and CIOs and risk managers and compliance managers, and the thing that they were telling me is: we have a fatigue because of, you know, all the alerts, and we have the fatigue because everyone comes and positions the same thing to us.

 

Mehmet: So now, with AI coming, it's, oh, we have an AI-powered system for you. So are we doing the thing the [00:21:00] right way now, you know, uplifting the infrastructure that we have, the network that we have? Or do you still think there's work to do? Because some of my guests, and it's not my own opinion but part of what they say, is that it's partially us.

 

Mehmet: I mean, all of us as organizations and individuals, who did not update our infrastructure, and even sometimes the basics of networking and the basics of some of the protocols, to a level that actually became more secure. And, you know, with AI it's becoming easier for the bad actors to get in, even if you have a firewall and all the other things that you might have today in a modern, let's call it secure, whether private or public cloud. So are we doing enough to up the game ourselves, [00:22:00] in a sense, you know, getting rid of some of the legacy that we have?

 

Alec: Yeah. Look, I think it's going to get worse before it gets better. And I think, um, one of the issues is just the budgets. Everybody, management, looks at information security as a cost center, right?

 

Alec: And that's going to have to change, you know, pretty soon. If you can wrap up AI with cybersecurity in terms of an overall budget, and use that AI as part of your cybersecurity solution, I think that's kind of the win-win for the technology team. And I'll give you a good example of that.

 

Alec: All right, so who goes through all their server logs, right? Thousands of pages, or tens of thousands of pages. One of our clients built out an AI agent on our platform that goes through their server logs and says, hey, this is unusual, or this is different, or you should take a look at this. And it's, you know, a handful of things, not a hundred things or a thousand pages, right?
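In its simplest form, that kind of log-review agent can flag lines whose shape is rare relative to a baseline of past logs. The sketch below is a hypothetical stand-in for the client's actual agent, using template frequency instead of an LLM, and all the log lines are invented examples:

```python
# Hypothetical sketch of an agent that surfaces "unusual" server log lines.
# Lines are normalized into templates (numbers masked) so that repeated
# messages group together, then today's lines are flagged if their template
# was rarely or never seen in the baseline.

from collections import Counter
import re

def template(line: str) -> str:
    """Normalize a log line by masking digit runs, so repeats collapse."""
    return re.sub(r"\d+", "N", line.strip())

def unusual_lines(baseline: list[str], today: list[str],
                  max_seen: int = 1) -> list[str]:
    """Return today's lines whose template appeared at most max_seen times
    in the baseline; everything routine is suppressed."""
    seen = Counter(template(line) for line in baseline)
    return [line for line in today if seen[template(line)] <= max_seen]
```

Run over thousands of routine health-check lines, this returns only the handful of never-before-seen messages, which is the "a handful of things, not a thousand pages" outcome described above.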

 

Alec: So [00:23:00] those are the kinds of things that I think are going to be helpful, because I agree with you: there is alert fatigue, as well as fatigue in general in the cybersecurity community. And if they can go back to management with, like, hey, we can help with overall AI as well as the security stuff, and kind of add value to the business and really deeply understand what the business needs, that, I think, is the win-win for AI, the CTO, and cybersecurity.

 

Mehmet: Cool. Tell me now more about your patent-pending AIR GPT. What is it exactly, Alec?

 

Alec: Yeah. So it's interesting, because, look, a lot of companies are out there doing one piece of the puzzle, right? They're like, oh, we do AI cybersecurity; we detect DAN-style attacks on AI. Okay, that's nice. Others are more focused [00:24:00] on compliance, although typically not the kind of bank regulatory and HIPAA compliance which we do as well.

 

Alec: Similar things in Europe. Um, and then also there's the stuff that we've been doing on the internet for decades, right? Like blocking not-safe-for-work topics and websites and things like that. You've got to do that in a corporate environment too, right? You don't want people playing games with AI, right?

 

Alec: Like, what? You know? And then there's the governance part, which I think is misunderstood, and super important. So there are a couple of aspects of governance, right? One is, by policy, what are you going to allow people to do with AI, right? And you set up the use cases and agents and connections to data for that.

 

Alec: And the other is also making sure that permission awareness is maintained, meaning that when someone logs in on an AI system and is looking at some data, they don't all of a sudden magically become a superuser and see everything that everybody else sees, right? So, a lot of important things around governance, which a lot of these other systems kind of haven't figured out, right?

 

Alec: Um, so I call that AI GRCC, or Governance, Risk, Compliance, and Cybersecurity. So combine that with an entire platform for generating no-code AI agents, and that's the holy grail for banks and healthcare right now. And that's what we're doing. I think we invented the category; I haven't seen anyone else with a complete solution there.

 

Alec: Mostly it's just a piece here, a piece there. And if you're a bank, you know, would you rather hire one company and have turnkey AI in a day, or go hire seven different companies and pray that they all work together? Right, pretty straightforward: you want the first one. And I think where this is really going, uh, over

 

Alec: the next year even, is fully autonomous AI agents, right? So, kind of what we were talking about earlier: if I could have an agent [00:26:00] that, you know, you start and it just goes, and all it's doing is reviewing each incoming email and saying: this looks like spam; this looks like a phishing attack; the website attached here is clearly an attack.

 

Alec: Okay, this is the language of a DAN-style attack, right? And just categorizing and blocking those things before they even get to a human. Even something as simple as that, as an autonomous agent, would be hugely valuable for most organizations. And I've noticed that even today, the basic tools that are filtering my email into the three buckets, like you should read this,

 

Alec: other, and spam, are terrible. They're just terrible, right? I've got, you know, important stuff in my spam email and total junk in my regular email. It's just pathetic, right? So that's just going to have to get better. And it's got to learn, right? If I'm opening an [00:27:00] email from, you know, Joe Blow every single day, tomorrow it shouldn't be putting it in my spam folder, right?

 

Alec: Like, it's obviously important to me. It's got to learn. So that's something I think is going to be good. And I agree with you. Look, it's not just about email and attacks on AI. It's in general about how even someone with no coding skills can now run out there and use AI to build software to attack different attack vectors, right? Or you could take someone with no experience; they could figure out how to do a DDoS attack in a week, right? So that is a problem, right? And that's a problem that's going to be very, very hard to fix, because even if we go and kind of bulletproof a whole bunch of these models and say, okay, we're not going to allow it to write code for [00:28:00]

 

Alec: Uh, guess what? There will always be some other model out there that allows it. I mean, DeepSeek has the worst scores for that right now. You can basically go into DeepSeek and ask it to do anything you want in terms of coding; it's just not going to say no, right? And it's out there as an open-source model.

 

Alec: It's kind of, you know, the genie's out of the bottle.

 

Mehmet: Oh, this is always the term that I use: the genie is out of the bottle, indeed. And this is why, if you remember, Alec, a couple of months after OpenAI released ChatGPT, there was a call from some executives in the tech space saying we need to slow down.

 

Mehmet: My point was exactly this: someone has access now to an open-source model that they can keep optimizing. Now, regardless of how they do that, we can't, I mean, force people not to do research on it, because the [00:29:00] papers, the scientific papers, at least, are out in public, and you can get access to them; even if they are behind a paywall, you can get access to them.

 

Mehmet: We know this for a fact. And you mentioned at the beginning the transformer model, and it came out of a scientific paper that was done at Google. We all know this by now, so it doesn't stop anyone from going and trying, but things are getting much, much easier. Now, here's the thing: we talked about agents, and now we have deep research as well, which is... I don't know what's your point of view, Alec, but I can think about it also.

 

Mehmet: It might be utilized in both ways, to find, you know, maybe new methodologies, maybe new attack surfaces, because it can do kind of this deep search in the background. And from what I've seen so far, I didn't try OpenAI's one yet myself, but I've seen it in other fields, excelling. I tried [00:30:00] Perplexity today for a couple of use cases, not related to cybersecurity.

 

Mehmet: It's not bad. It didn't wow me, but it's not bad. And Google is working on that, and other AI companies are working on it too. So from your point of view, how will this affect things?

 

Alec: Yeah, I agree. I think with deep research, originally one might have looked at it and said, eh, this is okay.

 

Alec: It's getting really good. And we talked a little bit about the speed of development. My running joke is that speed correlates with the amount of money being spent, and there's a lot of money being spent on AI right now. OpenAI is losing more money every day than you could fit in my house, right?

 

Alec: It's crazy. So it's going to continue to accelerate until the money slows down, and I don't see the money slowing down anytime soon. Remember when [00:31:00] DeepSeek came out and people really started to panic about it? The stocks dropped in the stock market and things like that.

 

Alec: And then they've come back since, because all these big guys came out and said, we're not going to spend less money because of DeepSeek. That's where we are. So if, a couple of years from now, the tune changes and they're spending less money, then things will slow down. Until then, we're going to be accelerating super fast.

 

Alec: That being said, look, I think there are limits on large language models. They are statistical models. These are not computers that are thinking, as much as that one Google engineer was convinced the computer was thinking and conscious. It's not. These are what are called statistical models.

 

Alec: In the future, yes, we may get to models of the human brain that use deep learning and quantum [00:32:00] computing to literally create something that someday could be artificial general intelligence, or AGI. Is that in my lifetime? I don't know. Is it some form of composite AI? Maybe it is. But we will have something that looks and feels like AGI pretty soon, right?

 

Alec: It will be able to fool a lot of people. It won't necessarily be AGI, but it will fool a lot of people. Eventually, though, I think the human race is very resourceful and will create it. And now it's also been framed as a kind of national race, meaning, oh, the free world had better get it before the not-free world.

 

Alec: And governments are throwing money at it. I think that just makes AGI more likely over time, whether that's a few years away, a few decades away, or a few centuries away. I don't know, but it's going to happen at some point.

 

Mehmet: Yeah, some people think it's happening very near, [00:33:00] even this year. Let's see. But to your point.

 

Mehmet: People mix things up, and I think you agree with me. AGI is not what people think it is. To your point, it's not an AI that thinks by itself and starts to take actions without us giving instructions. Maybe in the future it will have some kind of sensors connected to IoT devices and start to act based on what it sees in the environment.

 

Mehmet: Yeah, that's another topic. Maybe we can even do it with APIs today, but it doesn't have consciousness. And AGI, to my understanding, can solve complex problems. It can be more creative than what we have today, because today you have to write the prompt in a proper way so the model can [00:34:00] understand exactly what you want to say. But I agree with you on this. And thank you, because I was asking what you imagine this AI to be in the future, and you just gave me the answer. One thing we didn't discuss yet, though, very quickly, Alec: the ability for people now to write code by themselves.

 

Mehmet: We've seen Replit, Lovable, Cursor, and all these tools that now allow anyone to write code. And the first thing that came to my mind: back in the day, even if I wanted to be a newbie hacker, I needed to go find some kits, download them, and understand them.

 

Mehmet: Now maybe AI can help me with this, because I can say, hey, I want a landing page that looks exactly the same as FedEx's, for example. [00:35:00] And I want you to design me a form where people put in their credentials. So do you think hackers' lives are becoming easier today?

 

Alec: Yeah, look, all developers' lives are easier, right? I'm a developer, but I didn't know Python. I know C#, C, and eight other languages, and I thought, ah, but I want to prototype my software in Python. So I learned Python, and whenever I had a question, I just asked ChatGPT.

 

Alec: I use GitHub Copilot. I think that probably doubled the pace of development just for me, by myself, as I was starting this company a few years ago. So obviously it's going to help hackers, the bad guys, too. And in some ways it may even help them more, because you've got people who don't know how to code at all who all of a sudden can do bad things, right?

 

Alec: As well as just make things look so much more realistic. [00:36:00] Copy this website, copy this form, things like that. So it becomes more and more difficult for victims to detect that, hey, this looks a little fake, a little off, or this doesn't make sense. Unfortunately, it's just going to get worse from here.

 

Alec: And I think the solution may be partly Web 3.0: verification that when you get an email, it's secure, it came from who it claims to come from, all those kinds of things. Either we're going to do that or not. I mean, email is a terrible form of communication right now. Honestly, for most people, me included: I get a hundred emails, and there are five things I need to read; the rest is just spam or garbage or junk.

 

Alec: So that's got to change. I'll take you back to 1984, when I got my first email account at Harvard. I would get one email a month, and I was [00:37:00] so excited. Oh, my friend at Carnegie Mellon just emailed me, this is incredible! It only takes seconds to reach me, whereas if I mailed him a letter, it would take days.

 

Alec: And now, obviously, email is the bane of people's existence. I've got to delete a hundred things to see the five I need to see. So I'm sure there will be solutions for that, hopefully. The other thing I'd observe, good news and bad news, is that if you're a small business relying on email outreach for your business...

 

Alec: ...it doesn't work that well anymore. Maybe a couple of people might open your email, and that's it. People are sick of email at this point. And as it becomes less and less valuable and works less and less, hopefully people will stop using it and move over to more secure channels.

 

Mehmet: [00:38:00] This is exactly my point of view and my opinion. Of course, there are people who still believe in it: just personalize it, do this and that. I say, yeah, but you're teaching this literally to thousands of people, if not tens or hundreds of thousands, and every single one of them is going to go and customize it the way you teach them.

 

Mehmet: And again, I can still see these emails, don't get me wrong. Sometimes I open one just because I'm bored, not because it works. And I just love it when I see one, like the one I received today: "Dear," and then you can see the brackets, customer name or prospect name. Good luck; you still don't know how to do mail merge the proper way.
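[Editor's note] The "Dear [customer name]" failure Mehmet describes happens when a template is sent without its placeholders actually substituted. As an illustrative aside (the template text and field names here are hypothetical, not from the episode), a fail-closed merge in Python raises on a missing field and scans for leftover placeholders instead of sending a half-merged email:

```python
# A minimal sketch of a "fail-closed" mail merge: refuse to proceed when any
# placeholder is left unfilled, so a literal "Dear [customer name]" never
# reaches an inbox. Template text and field names are hypothetical.
import re
from string import Template

TEMPLATE = Template("Dear $first_name,\n\nThanks for checking out $product.")

def render(fields: dict) -> str:
    # Template.substitute raises KeyError on a missing field, unlike
    # safe_substitute, which would silently leave "$first_name" in the body.
    return TEMPLATE.substitute(fields)

def has_unfilled(body: str) -> bool:
    # Final guard before sending: catch leftover bracket-style
    # ("[customer name]") or dollar-style ("$first_name") placeholders.
    return bool(re.search(r"\[[^\]]+\]|\$\w+", body))
```

The design choice is simply to make the merge loud rather than quiet: a raised exception or a failed placeholder scan stops the send, whereas the common default (silent substitution) is exactly what produces the emails Mehmet is laughing at.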

 

Mehmet: And to your point, and this is part of the legacy we're talking about, it's not only the email system; I think humans are chatty by nature. The other thing is that we are still [00:39:00] relying on the same protocols. Back in 1984 you were probably already using the same SMTP protocol to send email, right?

 

Mehmet: So we're still relying on these protocols as well, which I think need some revamp. Anyway, maybe that's a discussion for another day. As a final call to action, Alec, what can you tell the audience, and where can they find out more about you and get in touch? And by the way, I also wanted to mention the podcast.

 

Alec: Awesome. Yeah. So look, if you want to learn more about AI, private, secure, compliant AI, you can go to our LinkedIn page, Artificial Intelligence Risk, Inc., or you can follow me on LinkedIn: Alec Crawford, Alec with a C. You can go to our website, AIC for corporate risk dot com. And then I've got a Substack under my name; that's called Sustainability, Technology, AI [00:40:00] and You. And then the podcast is called AI Risk Reward, and that's everywhere you can find podcasts, and on YouTube. We just became a top one percent podcast this year.

 

Alec: We've got great guests, so I hope you listen, every week on Tuesday, whether you need it or not.

 

Mehmet: Great, that's great. So for the audience: you don't need to go and find all these links yourself; I'll make life easy. You'll find them in the show notes if you're listening on your favorite podcasting platform, or in the description if you're watching on YouTube.

 

Mehmet: I really enjoyed the discussion with you today. It's very important, I think, for every business owner, for everyone actually, to learn more about AI and the cybersecurity risks around the technology, because AI is mainstream today. I'm no longer differentiating AI from any other tech; it's something we use in our daily lives, even [00:41:00] outside of work. So thank you for shedding light on all these risks, on how we can mitigate them, and on what's happening and what's waiting for us. And this is how I usually end my podcast. For the audience: if you just discovered this podcast by luck,

 

Mehmet: Thank you for passing by. I hope you enjoyed it; if you did, please subscribe and share it with your friends and colleagues. I release two episodes a week: Alec does his every Tuesday, and I do mine every Tuesday and Thursday. So please keep tuning in. And if you are one of the people who keep coming back again and again, thank you very much: because of you, this year, 2025, for the first time ever, the podcast entered the top-200 charts in multiple countries at the same time. We used to chart in one country every now and then, but the January and February numbers are out, and we are hitting the top 200 in multiple countries at the same time.

 

Mehmet: So thank you for your support, and thank you also [00:42:00] for making The CTO Show one of the top 40 business and tech podcasts listened to in Dubai. The rank keeps swinging between 14 and 15, so if I say 14 and you find 15, don't get me wrong; we sit between those two ranks.

 

Mehmet: So thank you very much for all your support. And as I always say: thank you for tuning in, and we'll meet again very soon. Thank you. Bye-bye.