In this episode of “The CTO Show with Mehmet,” we dive deep into the world of AI and blockchain with our special guest, Jordan Miller. Jordan, the visionary behind the SatoriNet project, shares his ambitious journey to create a decentralized network of computers aimed at predicting the future. We explore how Jordan’s background in blockchain and his fascination with distributed consensus led him to develop a system that gathers and analyzes vast amounts of global data—from economic indicators to environmental metrics—to foresee future events.
Jordan explains the philosophical and technical underpinnings of SatoriNet, comparing the network’s predictive capabilities to the human brain’s ability to anticipate the future based on sensory inputs. We discuss the potential to reduce the frequency of “Black Swan” events—unpredictable occurrences that have significant impact—through the intelligent correlation of global data streams.
The conversation also touches on the ethical and practical implications of AI, including the challenges of ensuring that AI predictions are based on reality rather than biased inputs. Jordan shares his thoughts on the future of AI and blockchain, addressing the role of quantum computing and the potential for decentralized AI to democratize access to powerful predictive tools. Additionally, we delve into the concept of governance tokens in the SatoriNet ecosystem, highlighting how token holders can influence the direction of the network.
Jordan emphasizes that while the project is still in its early stages, the goal is to create a system that benefits both society and individuals by making accurate predictions accessible to all. He also shares his thoughts on the importance of open-source development in maintaining transparency and trust in AI systems.
More about Jordan:
Jordan is the driving force behind the Satori Network. As the lead developer of Moontree, a crypto startup, Jordan has blended his blockchain expertise with his passion for AI in founding Satori, a pioneering crypto AI endeavor designed to decentralize the power, benefit, and control of AI. See more at satorinet.io
Jordan's goal is to expose people to Satori.
Satori's primary objective is to predict the future of everything we humans find valuable to foresee. Its predictions are free and open to everyone. Anyone can participate by downloading the Satori Neuron software from the website.
For more information on it go to http://satorinet.io
01:06 Jordan's Background and SatoriNet Project
01:51 Interest in Blockchain Technology
02:49 Predicting the Future with Blockchain
08:20 Philosophical Insights on Intelligence and Prediction
32:35 Challenges and Future of SatoriNet
40:11 Conclusion and Final Thoughts
[00:00:00]
Mehmet: Hello and welcome back to a new episode of the CTO show with Mehmet. Today I'm very pleased having with me Jordan. Jordan, thank you very much for being with me on the show today. The way I like to do it is I leave it to my guests to introduce themselves. So tell us a bit about you, your [00:01:00] background and what you are currently up to.
Mehmet: So the floor is yours.
Jordan: All right. Uh, thanks for having me on. Um, my background, I'm Jordan. Um, I started the SatoriNet project. That's a project that, um, brings together, uh, a bunch of computers on a network across the globe to focus on predicting the future. And so we've just launched, um, and we're just getting started.
Jordan: So that's kind of, uh, that's kind of a little bit about me. Before this, I was a lead developer at a blockchain startup called Moontree.
Mehmet: Cool. So, Jordan, what brought you to, to blockchain and, you know, this, this area of, of technology?
Jordan: Uh, I was always interested in blockchain technology. Uh, when I first heard about it, I actually, the very first time I heard about it, I kind of didn't [00:02:00] believe it was possible. Um, I thought, Oh, okay. Uh, I, I didn't assume that distributed consensus was something that could actually exist.
Jordan: And then when I discovered, oh, this is how it works. Um, I, I kind of fell in love with the technology. Cause I saw that it could allow us to kind of come into consensus with each other and know the truth about what we all believe and what we all think in a way that we didn't rely on any single individual.
Jordan: So I found that to be extremely intriguing when I very first found it. And, uh, ever since then I've wanted to work in blockchain, do things in blockchain and, and bring that world about, I guess.
Mehmet: That's fantastic. Now, when you were mentioning, you know, your, your current, uh, startup, SatoriNet, you said like blockchain to predict the future, like, [00:03:00]
Jordan: right.
Mehmet: To me, this is like something exciting. Like, can you, you know, walk me through how blockchain can predict? What's the philosophy behind it? Like, how did you come up with such a concept? And, you know, what kind of future are we talking about, future in general? Is it like predicting, for example, uh, how the stock market would, would be, let's say, in the coming days, months or so on.
Mehmet: So I would love to hear more from you, Jordan.
Jordan: Okay. Yeah. You know, uh, I had this idea before I even heard of blockchain. And so when I finally kind of looked at blockchain, I realized it was a way to, uh, create this decentralized network. Okay. So the idea was: why don't we create a network of computers on the earth that are always, always watching various metrics out there in the real world, that are watching the real world, um, [00:04:00] government statistics, uh, economic prices, um, uh, like maybe metrics about our physical environment, you know, ocean temperatures, like everything. Let's just watch all the metrics that we can gather.
Jordan: And, um, let's have this network focus on prediction of the future of each one of those metrics. That was my idea, quite a while ago. I was, I was in college learning about how the brain works. And, uh, we are only intelligent because our brain is so efficient. And it's, it gets its efficiency in large part by predicting the future of everything that it sees and hears.
Jordan: We have a lot of data flowing into our bodies at any time, through our eyes, through our ears, touch, everything. And, um, all that data is being anticipated before it ever comes. [00:05:00] We, we try to see the future. That's what our brain is subconsciously doing all the time, about everything. And so I said, well, why don't we start trying to implement that pattern in computers on the earth, not some centralized supercomputer, but why don't we just try to do it on a large network?
Jordan: Um, and so that was kind of the inspiration, and that was over 10 years ago. Uh, when, when I saw blockchain and I saw that this might be possible, I did try to implement it. But I had the wrong approach back then. I tried to implement it as a blockchain, uh, distributed consensus technology. And that was really not quite the right approach because it was, uh, too difficult to do, um, too difficult to get that to scale, uh, just wasn't right.
Jordan: So I put it on the back burner and thought, well, maybe there will come a time when I can see a different way [00:06:00] to create this. And about two years ago, I started again with, um, I think, the right approach.
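To make the idea concrete: each machine on the network watches one or more data streams and continually forecasts the next value. Below is a minimal sketch of that pattern, using exponential smoothing as a stand-in for whatever model the Satori Neuron actually runs; all names here are hypothetical, not Satori's real API.

```python
# One hypothetical "neuron" watching a single data stream and predicting
# its next value with exponential smoothing. Illustrative only; this is
# not the Satori Neuron's actual code or API.

class StreamPredictor:
    """Online predictor for one metric, e.g. an ocean-temperature feed."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha  # smoothing factor: weight given to new data
        self.level = None   # current smoothed estimate of the stream

    def observe(self, value: float) -> None:
        """Fold one new observation into the running estimate."""
        if self.level is None:
            self.level = value
        else:
            self.level = self.alpha * value + (1 - self.alpha) * self.level

    def predict(self) -> float:
        """Forecast the next observation from the current estimate."""
        return self.level


temps = StreamPredictor()
for reading in [14.1, 14.3, 14.2, 14.6, 14.8]:  # toy temperature readings
    temps.observe(reading)
print(temps.predict())  # smoothed forecast for the next reading
```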
Mehmet: you know, a lot of exciting things still to me, you know, um, so are we trying here, Jordan, to simulate the way that us humans, we predict the future, by getting this data to this decentralized network of computers on, on, you know, on the network, and then try to, to give us insights.
Mehmet: Is it like, this is the way you're trying to do it?
Jordan: That's right. Uh, I'm trying to get the computers to be able to watch everything and then correlate everything and say, okay, this is correlated with that and start to figure out how to predict the future. Okay. Because, you know, what I saw, this is, I was, um, in college during the [00:07:00] financial crisis, the housing crisis.
Jordan: And so, uh, I remember looking at the brain, looking at the economy and saying, well, why didn't we see this coming? You know, I mean, there were laws that had been passed. There were there were economic forces at play, you know, regulations and banks. There were things that if you could see the whole picture, you could see it long before it actually happened.
Jordan: Long before it came to fruition and, and, and was an actual disaster. You could see it, because some people did, and they could see the whole picture, and they made money off it. So some people did, right? But we, we don't have the bandwidth to figure out who knows what about what, to, to kind of, um, figure out where the real experts are and, you know, who's making predictions that are not so good versus who's not. But if we had all the bandwidth in the world and we could just not sleep and not eat and just think about the [00:08:00] future 24/7, like computers could.
Jordan: Then we could, we could have seen that coming, and we would have been warned, and it never would have been a disaster. So, yeah, exactly what you said. Uh, we want the network to exist so that we can anticipate events that we couldn't anticipate otherwise.
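The "see the whole picture" idea Jordan describes amounts to correlating many streams so that movement in one warns about another. Here is a toy sketch of that step, with made-up numbers and an arbitrary threshold, not SatoriNet's actual method:

```python
# Flag strongly related pairs of metric streams so a predictor for one
# stream knows which others to watch. Data and threshold are invented.
import numpy as np

streams = {
    "mortgage_defaults": np.array([1.0, 1.2, 1.5, 2.1, 3.0, 4.4]),
    "housing_prices":    np.array([100, 104, 106, 103, 97, 88]),
    "ocean_temp":        np.array([14.1, 14.0, 14.2, 14.1, 14.3, 14.2]),
}

names = list(streams)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = np.corrcoef(streams[names[i]], streams[names[j]])[0, 1]
        if abs(r) > 0.8:  # arbitrary cutoff for "worth watching together"
            print(f"{names[i]} <-> {names[j]}: r = {r:.2f}")
```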
Mehmet: Cool. Now, putting the technology aside for the time being, and taking it a little bit from, let's say, a philosophical point of view, right?
Mehmet: So, and you mentioned the intelligence here. Now, of course, nowadays everyone talks about artificial intelligence and, you know, all these things. Now, I'm not saying this, Jordan, I'm not asking this as kind of a challenge or trying to, you know, because I had a very interesting, um, discussion a couple of days back with someone, you know, about, you know, the concept of the black swans, right?
Mehmet: So, so a black swan, right? So it's something that no one can predict, actually. And, you know, like maybe they [00:09:00] said COVID is something like this, you know, and the financial crisis, they said it was something like this. Now, if I want to put this, from a philosophical point of view, not technology, into what you're trying to do.
Mehmet: So do you think like this will make the theory of the black swan something like not true? Because it's a theory after all, you know, and as we know, like, a theory is just a theory. Like, it's not like something proven. It's like someone who claims something, they have some, you know, facts, and they say, yeah, based on these facts, I think it is. It's a theory.
Mehmet: I think it is. It's a theory So from your point of view from a philosophical point of view and with the intelligence that we're talking about which is now given to the machines Do you think that this is, can, do you debunk this, this, the theory? What's your point of view?
Jordan: Um, I don't. I think the Black Swan events are always going to stick with us.
Jordan: And, uh, I think it's [00:10:00] impossible to get rid of them. But, they are a, uh, you know, their, their rate of occurrence is a function of what we understand about the universe. And so like if you take a baby, a baby, just a newborn infant, you know, the brain is in that baby's head and it's trying to figure out what is going on because it's getting a bunch of data, right?
Jordan: It's getting data from your eyes, uh, from the ears, everywhere, right? It's just, it's bombarded with data, and everything that it experiences at all times is a black swan event. It couldn't have predicted any of it. But by the time you're an adult, you kind of understand everything about the world. You kind of, you know, you get up in the morning, you know, you got a routine, everything's predictable.
Jordan: You've put order to your world as much as possible. And so, um, I think as a society, and as kind of like a world, we're probably closer to the [00:11:00] infant stage, where we just don't know what's going on about anything. And, uh, but with these kinds of tools, which, you know, will actually probably come to fruition far in the future.
Jordan: Um, I think we can, we can get a handle on, on what's real pretty quickly, and, and we'll never get away from black swans completely. But, uh, we can lessen their occurrence, um, in a major way.
Mehmet: So we're going to reduce them. Of course, like, again, funny enough, you know, the, the friend was telling me about, yeah, it was a black swan event.
Mehmet: Yeah. But if it was a black swan event, we should have known about it. So what you're saying, we might be able to reduce these events, you know, as, as humanity, um, based again on the data. Now you mentioned, you know, letting the machines, of course, learn the same way as what, [00:12:00] what happens to us. You just mentioned like, yeah, when we are kids, we're getting bombarded by data. Now, to you, like, these machines, they need, of course, we, we know like machine learning now, and everything we see in AI is based on, of course, like machine learning.
Mehmet: You give, I mean, we train, you train the machine, and then you try to let the machine decide, or maybe give you, uh, some insights sometimes, you know, and forecasting and all this. But now with this amount of data and the correlation, because you mentioned something about correlation, Jordan, um, and because you're using the blockchain, which is, you know, it gives like something very interesting here.
Mehmet: So can I think about that in the future? Like the concept of agents, I mean, AI agents that they, for example, there's an agent responsible for, let's say, ocean temperature. There's an agent responsible for, you know, the humidity. There's another agent responsible, I don't know, like, uh, how far the [00:13:00] stars are from us at the time being and so on and so on, right?
Mehmet: And then collectively, they do the work. Like, is this, is this the idea, to divide it even between, between the machines? Or is there still kind of a central, of course, it's a blockchain, I know, but like, there must be some, some, you know, command and, uh, you know, something that tells them what they should ignore and what they should not. You get my point, right?
Jordan: I see where you're coming from. Yeah.
Mehmet: Yes.
Jordan: Um, that's a, that's a very important point. Um, the blockchain actually serves the purpose of directing the network, uh, and that is because those that hold the token of the blockchain have the voting power for deciding what's a valuable data stream to watch and predict and what's not.
Jordan: So, um, we [00:14:00] have a situation right now in AI where AI is created by us, and I'm talking specifically, you know, these days, AI means, uh, language models, basically. Um, cause that's the latest phase. So it seems like we build these language models, and we build them on our own language, so on our understanding of the world, and then we,
Jordan: we kind of don't like everything that they say, because we curate our language very carefully. And so we retrain them, and we make sure that they kind of are nice and don't say anything mean and, and, you know, try to take everybody's perspective into, into account. And so they approach our collective, and collective is often groupthink, uh, view of what reality is.
Jordan: Which is okay, [00:15:00] but that's not what the reality actually is. So I think it's important to have AI machines, bots, algorithms that are looking at the real world directly, not our language about the real world, but looking at the real world and, and trying to predict it directly, and not according to our preferences, um, as far as, as the result that they predict, but according to our preferences as far as
Jordan: what they're actually looking at. You know, we might not care how far away Alpha Satori is, uh, right now, or what the prediction is going to be next year of how far away it's going to be. We might, or we might not. And so, but we really might care about, you know, our ocean sea temperatures or something like that.
Jordan: So we want the machine's attention to be allocated according to [00:16:00] our value hierarchy. Um, we don't, we don't necessarily want to be giving it all the answers, though, uh, because, because that's not the truth. That's, that's just our opinion.
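One way to read "attention allocated according to our value hierarchy": divide a fixed pool of predicting machines across streams in proportion to the votes each stream receives. This is a hypothetical sketch; the numbers, names, and the proportional rule itself are assumptions, not Satori's documented mechanism.

```python
# Split a fixed pool of neurons across data streams, proportional to the
# community's votes. Purely illustrative.

def allocate_attention(stream_votes: dict[str, float],
                       total_neurons: int) -> dict[str, int]:
    """Return how many neurons each stream gets, by vote share."""
    total_votes = sum(stream_votes.values())
    return {
        stream: round(total_neurons * votes / total_votes)
        for stream, votes in stream_votes.items()
    }

votes = {"ocean_temperature": 500, "grain_prices": 300,
         "alpha_satori_distance": 5}
print(allocate_attention(votes, total_neurons=1000))
# most neurons go to the streams the community actually values
```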
Mehmet: Now, another word that we kept repeating, uh, you and me, uh, Jordan, which is intelligence, right?
Mehmet: So the word intelligence by itself. Like, I mean, again, we were going a little bit philosophical, probably, here. Um, but there's a huge difference between us getting data, putting it inside the machine, and then writing a code. Of course, it's a complex code. It's not like the day-to-day code that people know about.
Mehmet: And then trying to extract some data out of the data, which is mainly what the LLMs do. Now, are we talking here about another kind of intelligence, which is a little bit similar to [00:17:00] what our conscious mind does? So for example, you know, of course, if someone asked me to do something with instructions, I, I will go and do it, because it became, as you mentioned, it became our routine, right?
Mehmet: So wake up in the morning, we know how to step out of the bed. We know, like, if there are stairs in the house, we know how to go down without falling, and so on and so forth. Now, when I need to do something completely new, so I need some kind of consciousness that, you know, takes this data and does something that I never did before.
Mehmet: And this is what happens, for example, when we learn things for the first time. Is this the intelligence we're talking about here, Jordan? Is this like the, I would say, the main thing that we need to be able to solve, so really the machines can predict the future?
Jordan: Hmm. Uh, yeah, there is a big difference between, um, even, even modern-day
Jordan: AI technologies and what the brain is capable of. It's [00:18:00] radically, it's radically more efficient, but that doesn't mean that we can't make huge, um, huge models. ChatGPT, for example, is an example of, uh, a model that knows a lot about a lot of things. And so, um, I think what the paradigm is, is that big models are good enough.
Jordan: Like, that's all you need. And, and that's comparable to our human brains once it gets big enough, and maybe. But the way I see it is that there's a difference between how we do AI today and how our brains have evolved to do it. And the biggest difference that I see is that our brains are always online learners.
Jordan: So, um, they take time into account. Uh, they [00:19:00] don't batch process data. So if you're going to build ChatGPT, for example, you're going to take months to, uh, get the data and then to curate it. And then you're going to take months to train the model. And then after a year or something, you're going to realize that the language has shifted, our technology is better, and we're going to rebuild the whole model.
Jordan: We're going to do all that again, and we're going to come up with a new model. And that's kind of the iterative feature of it, so it's iterating on, like, a yearly schedule, right? Um, not very fast. Our brains are rewiring themselves all the time. Um, and, and so we do things in real time. And that's, I think, the main difference between the technology that we have today and, uh, the ideal form of [00:20:00] intelligence: that we're not just building one big model
Jordan: once. We are always updating it, and updating it incrementally. So, um, yeah, I think that's probably the biggest difference, and probably what gives us, um, whatever kind of humanness we have over, over machines.
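The batch-versus-online distinction Jordan draws can be shown in a few lines: instead of retraining from scratch on a yearly schedule, an online learner takes one small gradient step per incoming sample. A minimal sketch with plain SGD on a one-feature linear model; illustrative, not how any particular product trains.

```python
# An always-online learner: the model rewires a little with every new
# sample instead of being rebuilt in periodic batch runs.

class OnlineLinearModel:
    def __init__(self, lr: float = 0.01):
        self.w = 0.0  # slope
        self.b = 0.0  # intercept
        self.lr = lr  # learning rate

    def predict(self, x: float) -> float:
        return self.w * x + self.b

    def learn(self, x: float, y: float) -> None:
        """One gradient step on one (x, y) pair, in real time."""
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

model = OnlineLinearModel()
for t in range(1000):          # data arriving as a live stream
    x = float(t % 10)
    y = 2 * x + 1              # the underlying (toy) pattern
    model.learn(x, y)          # predict, compare, adjust, repeat
print(round(model.w, 2), round(model.b, 2))  # approaches 2.0 and 1.0
```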
Mehmet: That's really interesting, Jordan, because, you know, when you were mentioning how our brain functions, so, if we want to simulate, let's say, the same thing on machines, I think there would be a need for huge compute power behind this, right?
Mehmet: Because I think I saw a video a couple of weeks back, you know, where they tried to simulate the worm. Right. So there's even, I think it's a Python script where you can simulate a worm's life, and you know, and so on. And they said like it consumes a lot of compute power. So, and this is just, I think it's a one-cell worm, something like this, right?
Mehmet: [00:21:00] Yeah. So if we're going to do the same thing, so does it mean like we are still have a lot of time in front of us until we reach, because this is another debate, Jordan, and I'm sure like you're aware of about it, about the concept of AGI, right? So, you know, and people sometimes get scared from AGI and, you know, are we far?
Mehmet: Are we close? Some people, they say it's like just in two years time. Some people, they say, no, we still need like probably maybe 10, 15 years to reach it. So from what you're currently doing and because, you know, you immerse yourself in this today. So, so what's your point of view?
Jordan: I think that we're probably a long way off if, uh, if you define it in a particular way.
Jordan: So it really just depends on your definition. I think, uh, maybe the simplest definition of, um, AGI is just: I can talk to a computer and it's pretty much [00:22:00] able to do and say, uh, anything that a human could. Uh, that's probably pretty close, um, years away, right? Not, not decades, uh, maybe a decade, maybe. Um, but, but that could just be, you know, it doesn't mean it's architected the same way as our brain.
Jordan: So to get it, to get it to that level or beyond, I think that's probably quite a ways away. Um, but I, I don't know. I don't, I don't think I could put years on it or anything, but I think, for the pure intelligence solution, that's a ways off.
Mehmet: But as humans, should we be, again, to your point, it depends how we define it, but should we be worried, you know? Because another thing that comes up is, you know, the ethical aspects of such intelligence, uh, in [00:23:00] the sense that it can take, you know, decisions.
Mehmet: It can, you know, do a lot of things. It's gonna be super powerful. Actually, what we have today is super powerful if I compare it to what we used to have just like five years ago, right? So what are, like, you know, the things that really we should be worried about, beyond, you know, what the media says and how they try sometimes to scare people?
Mehmet: But, you know, from real scenarios that can happen, what can go wrong, and why should we be, you know, aware of these threats, so at least we can be prepared from now?
Jordan: That's a good question. Uh, I, I, I'm not sure we can anticipate. I think one thing we can see in the near future is that everybody, uh, treats
Jordan: our large language models as sources of truth, which is okay. Uh, it's the typical, typical pattern. Um, you're born into a society, you have a culture, [00:24:00] that culture has been around a while, it's had a lot of brains in it, and so we typically tend to outsource whatever thinking we feel like we can to the culture, so that we don't have to do it, because it's, it's work, right?
Jordan: So if you can outsource it, you can, uh, get something for nothing. It's, it's great. So we, we tend to do that. But what we're doing now is treating these large language models, which essentially are, um, explicitly defined groupthink now, uh, we're treating them as sources of truth. And they're not. They're, they're sources of our opinion.
Jordan: And so I think that's kind of an immediate threat. Um, that's not great, I think. But, um, as far as long term goes, I think what we need to do is make stuff that can anticipate the [00:25:00] future better than we can, like Satori. And that's kind of the reason: the environment is evolving really fast. Uh, the technological environment. It's faster than any environment we've ever lived in.
Jordan: Um, it's faster than our social environment evolved. It's faster than our physical environment evolved before that. And now we're in this, you know, it's faster than the industrial revolution environment, you know, living in large nation states and all that. And that was dangerous. We developed the atomic bomb and all that.
Jordan: But now we're in a new phase, where it's the technological phase and everything is way faster, and so we can't anticipate that future very well. Um, and so I, I kind of think this is, this is the reason we need this. But I would also say, [00:26:00] I don't think, um, we should be afraid. I don't think we should be fearful until we actually see a reason. And then, as soon as you see a reason, you just, you just change it as fast as you can.
Jordan: So, uh, fear, I don't think is, is a good default mode.
Mehmet: Absolutely. And to your point, Jordan, just, I want to, you know, uh, highlight something important you said about, you know, this source of truth. And if we think, like, let's forget AI for a moment, search engines, and Google specifically, became a source of truth, because, you know, when you ask someone about something: Hey, go and check it on Google, right.
Mehmet: Or, you know, if people, they don't like Google, they can go on another search engine, but I mean, they take the information that comes out from there for granted. And, you know, they, they don't go and check on it. And I think what's happening with AI is the same thing. Oh, because you know, the AI said this.
Mehmet: Okay, this is, this is true. And [00:27:00] again, and this is what we kept saying here, at least on, on my podcast: like AI, I mean, the, the large language model, is, you know, garbage in, garbage out. So if you train it on wrong data, it's going to give you wrong data, and you need to go and check the source.
Mehmet: And this brings me to the next question, Jordan. There's also a discussion about the source of these large language models. So, you know, there's a debate that happened recently, and we know that there are like other reasons behind it, you know, about making these large language models open source, so everyone can go and check, you know, from where they get the data.
Mehmet: And, you know, Elon Musk, he, he was very harsh on, on Sam Altman and, you know, saying like, yeah, you call it OpenAI, but you have a closed source, and we don't know actually what data you put in. And I'm asking this question because you're doing it on a blockchain, and blockchain is a very transparent, uh, technology, because you can see who's [00:28:00] doing what at any point in time.
Mehmet: So is that also something, you know, with what you're currently doing with SatoriNet, making sure that whatever comes to these, uh, you know, let's say machines, that they're doing, they are part of the massive intelligence that you are aiming to build with this network, to make it also open source, so everyone knows from where this data came?
Jordan: That's right. Yeah, it is open source. Yeah, we, we've already open-sourced the neuron you can download. It's open source, so you can see all the code.
Mehmet: Okay, cool. Now another thing I want to ask, which is more related to the blockchain part. Now, as you know, Jordan, whenever we see a new blockchain project, it's associated with tokens, right, and cryptocurrency, in that, you know, [00:29:00] arena, if I might say. How actually does this work? Because, you know, is there a, a, you know, alt currency that's gonna be, you know, getting, uh, distributed to the network?
Mehmet: How would that exactly work? And who, who gets these, uh, these coins?
Jordan: Um, so I think the blockchain can allow us to decentralize AI as well. And so, um, we want to decentralize its production so anybody can download, uh, the Satori neuron to their machine at home or, or whatever, um, and run it on their machine.
Jordan: And so it's actually using their cycles, their CPU, their RAM. It's using their bandwidth. Um, uh, so this way we distribute the, the load, the work that has to be done. And for that, they receive a [00:30:00] token, or the, the neuron receives a token, earns a token. That token is a token of governance. So, like I said, I think the point of, of crypto in general should be governance.
Jordan: And so it, it has a governance token over the network so that it can, it can be used to vote on which data streams are important and valuable. And I think that's, um, kind of the role of the token is to decentralize the benefit of AI, decentralize its production and decentralize its control. Because we need a unit of control over the more intelligent, um, worldwide network.
Jordan: Uh, and that's the currency. That's the, the, the governance token.
Mehmet: And who will be doing the voting, Jordan? Us? Like, I mean, the, the users of the, of the network?
Jordan: Anybody who has [00:31:00] the token. Yes.
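Here is a sketch of what token-weighted governance over data streams could look like: each holder splits their vote across streams, and votes count in proportion to balance. The structure and numbers are hypothetical, not the actual Satori voting mechanism.

```python
# Token-weighted voting on which data streams deserve the network's
# attention. Hypothetical holders and balances.
from collections import defaultdict

holders = [
    {"balance": 120.0, "votes": {"ocean_temperature": 0.7, "grain_prices": 0.3}},
    {"balance": 40.0,  "votes": {"grain_prices": 1.0}},
    {"balance": 15.0,  "votes": {"ocean_temperature": 1.0}},
]

weight = defaultdict(float)
for h in holders:
    for stream, share in h["votes"].items():  # shares sum to 1 per holder
        weight[stream] += h["balance"] * share

total = sum(weight.values())
for stream, w in sorted(weight.items(), key=lambda kv: -kv[1]):
    print(f"{stream}: {w / total:.0%} of the vote")
```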
Mehmet: Okay, that's really, really interesting. Now, coming back to the speed, and, you know, this question just popped up in my head now, because we're discussing the speed and the amount of data that needs to be processed and so on.
Mehmet: And we know, like, even, even if we have a lot of machines which are part of the network, still, we're gonna need more. Do you anticipate, like, if we have a breakthrough in some other technologies, mainly, everyone nowadays talks about, you know, quantum computing, for example, do you think, if we have a breakthrough where we can bring the quantum to, to the, uh, to the endpoint, if, if that is the correct term, this will be another, you know, uh, opportunity
Mehmet: to, maybe, stream more data, maybe, you know, take faster decisions? How do you anticipate, you know, also the future of this? Because, you know, to me, [00:32:00] like, this is, maybe it looks futuristic, but it's very logical. And I start to try hit my head to see where other use cases can fit in there.
Jordan: Yeah, it seems like quantum computers could, um, radically change how we do AI, but I don't know very much about quantum computers or quantum computing or its state of how close it is to, um, uh, its next big breakthrough.
Jordan: Um, I really don't know. Yeah.
Mehmet: Yeah. So it's just an idea that popped up in my head. The other thing, Jordan, where do you see the benefits of, you know, what you're doing currently? Do you see it only for individuals? Do you see it for the whole society? Or do you see it for maybe companies as well? Or is it a mix of everything?
Jordan: It's a mix of everything. Uh, if you can predict the future on a global scale, you can help society. You can also help companies, [00:33:00] because they can make their systems more efficient. If you can predict it on an individual scale, like, uh, instead of broadcasting all predictions out for free, um, you give certain predictions to people for a fee, a private offering, essentially, then, uh, they can make their processes, their internal processes, more efficient as well.
Jordan: Um, so I think increasing the efficiency, uh, increases our wealth and increases everybody's standard of living and, uh, helps everybody. So I do see that as well.
Mehmet: That's, that's interesting. And when you started to talk to people, Jordan, and I know like this is part of the journey when you're building anything, regardless.
Mehmet: So what were, you know, the main things that people were skeptical about when you started to talk about your project? And, you know, were you in a situation where, you know, like really you felt like, okay, I think this will not work, or, you know, and then something happened and then you overcame this challenge?
Mehmet: So I'm interested to hear about like also this part of the story.
Jordan: Well, like I said, I tried to build this back in like 2013, '14, and just couldn't, couldn't get it, because what I was trying to do was use blockchain as the very base layer, um, build a, uh, build a competing algorithm to proof of work, maybe proof of prediction.
Jordan: And, uh, it was too, too complicated. The cool thing about proof of work is that it's simple, and so, um, it can be easily distributed. A proof of prediction, it's too nuanced, too complicated. So, um, I wasn't able to. What I wanted to do is build a blockchain that would manifest the rest of the network
Jordan: all by itself. Uh, I wasn't able to accomplish it in that order. So I just had to. I never thought it was a bad idea, but I did think, I must not understand something about what needs to be done here. I must not have the right approach. And so I just had to put it on the back burner and wait until I figured it out.
Jordan: So, um, I came at it from the AI angle this time. This time I said, okay, I see what we can do. We can build an AI engine, distribute that, and attach the blockchain to it. And that way, um, we can have the decentralization of the blockchain, but we start with the AI instead. And so that, I think that's the right approach.
Jordan: Um, it may [00:36:00] be possible to do it in the pure fashion that I wanted to do it in the first place, but I, it's beyond me, and it might be, be beyond everybody. I don't know.
Mehmet: Yeah. Where do you aim to take this, Jordan? Like, you know, where do you anticipate SatoriNet to be?
Jordan: Um, as far as timeline? Gosh, I really, I don't know, because these things take a long time.
Jordan: Um, and, and they always seem to take way longer in the short run than you expect. And then in the long run, they just, they just snap, they just pop. And so I really, I don't know. But what I would like to see is that we're able to, um, get it running well and get it up so that it's making predictions, um, reliably and consistently [00:37:00] about the things that we all care about.
Jordan: So I'd like to see the public good, the open predictions, public good kind of come to its full fruition, um, within a few years. That's what I would think is ideal. And then on top of that, we can increase the efficiency by doing the private good as well. But we don't want to start that until after the public good has come to fruition.
Jordan: Uh, but that's kind of the timeline. I think it'll probably take longer than what I'd like to see. Uh, but that's okay. You know, as long as we're making progress towards the goal, that's kind of my goal.
Mehmet: Now, uh, before I ask you the final question, I know, like, you're doing this, uh, offering it open for everyone, and it's free.
Mehmet: So, you know, is it like a, uh, an impact project? Like, like, you know, are you [00:38:00] planning to monetize it later? Just out of curiosity, I'm asking you this question.
Jordan: Yeah, I think it can probably, uh, stay afloat on its own just through, uh... So, so the token that gets minted, the token has an attribute where part of it goes to a dev fee, and that's to support the development of the network.
Jordan: And this isn't, uh, this isn't something new. A lot of projects do this kind of thing. What they usually do, though, is they set it at a number, or they have some predefined thing, and what we wanted to do is give the community of token holders the ability to define it. So give them control over it through a vote.
Jordan: It's a governance token after all, right?
Mehmet: Right.
Jordan: So, um, if, if the community thinks that the developers are not doing a good job, [00:39:00] they can vote the dev fee down. And if they think they're doing a good job, then they can vote it up. And so, really, it just depends on how, how much the community appreciates the development that's occurring.
Jordan: Um, and so I think that's sustainable that way, that way it can, it can exist in perpetuity as long as the community likes what's going on. Um, but as far as, so, so the, the organization is set up as a nonprofit, so it doesn't really make a profit, but, um, you could build ancillary products on top of it eventually.
Jordan: Um, but that's not really my goal. My goal is to focus on the, uh, the public good, you know, predicting the future of everything for everybody. So, um, those other things might come later, but, but that's [00:40:00] not what I'm focused on.
Mehmet: Cool. Cool. It's good to know, Jordan. And, you know, I, um, I always appreciate anyone who's trying to do something for the public good.
Mehmet: So, so that's, that's fantastic. Um, finally, Jordan, like, uh, based on the discussion that we, we had today, anything you want to tell the listeners about, you know, AI and the future and intelligence that maybe I didn't ask you about? And also, I know you have put the website there, so I'm just asking how people can get in touch and know more about the project.
Jordan: Yeah. Uh, satorinet.io is the website. They can just go there, look at it. It's, it's, it's still in its infancy. You know, the website is kind of simple and, and not very flashy. So you can go there, but you can see what it's about. There's a video. Um, and you can see what we're trying to build. You can even [00:41:00] download it, uh, right now. It's available for download for free.
Jordan: You can try it out if you like. Uh, one thing we want to do with this, this piece of software that, that people can download, is we want to make it so that people don't have to use it just for Satori. You know, if we make it for free and, um, and we put it out there, most people that, that turn it on will have it work for Satori a little bit, right?
Jordan: But we also wanted to make the ability, uh, that anybody can use it as their own tool to predict their own data. You know, like they could route their own spending habits, their own health metrics, their own data to their own, um, uh, private software that's running on their own machine, and, uh, get the future prediction out of that.
Jordan: And so we want to build that [00:42:00] tool as well. And that, that doesn't have to, it doesn't have to make those predictions public. That can just be for anybody who wants to try it out.
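For the private, local use Jordan mentions, the same predictive machinery just points at your own files and publishes nothing. A minimal sketch, assuming a locally exported CSV; the file name, column name, and the naive last-week-average forecast are all placeholders.

```python
# Forecast a personal metric from a local CSV without sending anything
# to the network. Everything here is hypothetical.
import csv
from statistics import mean

def forecast_next(csv_path: str, column: str, window: int = 7) -> float:
    """Naive forecast: mean of the last `window` values of a column."""
    with open(csv_path, newline="") as f:
        values = [float(row[column]) for row in csv.DictReader(f)]
    return mean(values[-window:])

# e.g., a CSV exported from a budgeting app, kept entirely on-device:
# print(forecast_next("spending.csv", column="daily_spend"))
```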
Mehmet: That's cool. Really, really cool, Jordan. I really enjoyed the discussion with you today. And, uh, you know, it's very exciting, because, you know, we're talking about something that can touch
Mehmet: all of humanity's life. So, um, again, um, thank you for putting in the time to work on, on a project like this. To your point, it's still in its infancy. Let's see how it goes in the future. And, you know, usually this is how I end my episodes. This is for the audience. Uh, if you liked, you know, this discussion today, which I hope you did, uh, please subscribe to the, uh, podcast wherever, on all podcasting platforms, and we are available also on YouTube.
Mehmet: And if you are one of the people who keep sending me their comments and suggestions, please keep doing so, [00:43:00] and I read all of them, you know. I take your, uh, you know, suggestions and, uh, you know, even feedback into consideration. So thank you for doing that. And as I say always, thank you very much for tuning in.
Mehmet: We'll meet again very soon. Thank you. Bye bye.