#7 - Bryan Johnson's Blueprint for Longevity
[00:00:00] Balaji: Bryan, welcome to sunny Singapore. The next episode of the Network State podcast. This is Crips and Bloods on the same side. We're doing technology plus longevity. It's great to be here. Yeah. I think everybody is a fan of what you've been posting online. I don't know — the naked selfies part, that part, some people are fans of that. But you know, it's, it's all cool.
[00:00:17] Balaji: I’m posting my data. You’re posting your data. There you go.
[00:00:19] Bryan: Exactly. All forms of data. Yeah, the nudes are posting data. There you go. There you go. P O A S T I N G. That’s right.
[00:00:24] Balaji: And people know you don't skip leg day. Exactly. That's right. Right? A few years ago — when did you begin your whole transformation kind of process? Three years ago. Three years ago.
[00:00:33] Balaji: Okay. So we'll put a photo of what Bryan looked like three years ago and give people the background. You were a tech guy. You, you know, you founded Braintree. Give, give, give
[00:00:42] Bryan: the spiel. Yeah. At 21, 21 years old, I had a goal that I wanted to make a whole bunch of money and then direct that attention to trying to change the human race for the better.
[00:00:50] Bryan: And I didn't know what to do. I didn't have any skills that I was necessarily good at. And so trying to acquire resources to then find a purpose was what made sense. And so I built Braintree Venmo. I sold it [00:01:00] for 800 million in 2013. And then my energy was entirely focused on this thought experiment, which was: imagine being present in the 25th century, observing the early 21st century, and commenting on what we did in this moment that allowed intelligence to thrive in the galaxy.
[00:01:16] Bryan: What did we do? And so I was basically trying to go through this exercise of mustering as much clarity of thought as possible. And it was, I think, one of the most, uh, challenging endeavors of my entire life, because when you start posing that question, you have to tease apart, uh, all of reality. It's almost like Plato's cave.
[00:01:36] Bryan: You have to try to imagine whether you're seeing the wall or you're seeing the broader view of it. You have to continually unpack. And so I spent quite a bit of time — uh, you know, I started investing. Well, I took a momentary path where I was investing in companies in deep tech doing genomics, synthetic biology, computational therapeutics.
[00:01:53] Bryan: I wanted to— This is mid-2010s? Yeah, uh, yeah, exactly. Yep. And, uh, I basically was working on this [00:02:00] hypothesis that, uh, we now have this ability to engineer reality at every level — from atoms, like Lego-like atomic structures, to biology, to intelligence. And so it's programmable existence.
[00:02:16] Bryan: And when you have these abilities, Uh, it remaps the possibility space of being human, unlike anything we’ve ever had before. And I wanted to learn the specifics of what does it mean to program reality at the atomic level, at the biological level, and at the intelligence level. So, I invested in companies doing synthetic biology.
[00:02:33] Bryan: Ginkgo Bioworks was one of my best investments. Uh, then also matter: a company called NuMat, which is doing metal-organic frameworks. They're the leader in the world. They're basically doing, uh, atomic construction at the nanoscale, precision chemistry. And then I started Kernel, which is basically trying to do, uh, global-scale brain measurement.
[00:02:52] Balaji: Uh It’s like brain machine interface? Like
[00:02:54] Bryan: Yeah. Or different? It’s like an fMRI that you can put on your head. Got it. So, basically bringing brain measurement to the [00:03:00] masses. So
[00:03:00] Balaji: people have, you know — just to talk about that for a second, there's been the work that, you know, Miguel Nicolelis and Krishna Shenoy did a long time ago on brain-machine interfaces.
[00:03:09] Balaji: Elon is commercializing that with Neuralink. And then there’s also been the fMRI work, which is like, related to it, but it’s non invasive, so it’s on a parallel track to it. And, there’s been some conflict in psychology, I’m not deep in the literature, but as to whether that signal is real, or whether it’s just artifactual, is it getting something?
[00:03:28] Balaji: And people have also tried to make like, circlets that you can use to like, control video games, and you know, things of that nature, like people have tried to do stuff like this. Where’s the state of that right now? I’m not sure where
[00:03:37] Bryan: it's at. Yeah — when I came into the game, um, most people's intuitions were about how to take signal from the brain and then control an external—
[00:03:46] Balaji: Like, basically
[00:03:47] Bryan: telepathy, essentially. Exactly. Technological telepathy. So, imagine moving an object. Or telekinesis, I should be more clear. Exactly. And then you look at the space of brain interfaces, and you look at this trade-off, where we have almost 100 billion [00:04:00] neurons. And you can say, I'm going to implant something in the brain, and you get really high precision with 100 neurons, or 1,000 neurons, or 10,000 neurons.
[00:04:09] Bryan: And you can do amazing things in control in that environment, but you miss out on 99-point-something percent — you know, the remaining billions of neurons. And so you miss a very large portion of cognition. Hmm. And so, on the other side, you can do non-invasive. Is that, is that
[00:04:22] Balaji: due to the fact that the invasive stuff can’t read from that many neurons at the same time?
[00:04:27] Balaji: Is that the
[00:04:27] Bryan: technological limits right now? Yeah, you need to implant the technology. You need to put the actual electrodes in the brain matter. Right. So it has proximity to the neuron. You have to be in a certain proximity to pick up the signal. And it’s hard to do implantations across the entire skull.
[00:04:42] Bryan: You can do one, maybe two, but you just can't push the boundaries. And so, implantables have been a highly specific technology for a certain condition. Like somebody's disabled or something like that, right? Oh yeah, or like Parkinson's. You know, there's an implant where you have a very specific two-to-three-millimeter zone of the brain.
[00:04:59] Bryan: [00:05:00] You have your implant there, it does its thing. Hmm. And, or paralysis. Uh, but it’s a very narrow and specific application of what you’re trying to do. It’s not this thing where it’s like, I’m gonna download all the knowledge in the world into my brain. So,
[00:05:12] Balaji: you know, maybe this is a dumb question, but how did people get better at that?
[00:05:15] Balaji: Because trial and error, every trial has, like, well, okay, that guy’s lobotomized, okay, next one, you know, like, how, I mean, obviously you can do a certain number of monkey experiments, but Um, the human brain is going to be different than the monkey in some ways. So, how do people improve on that? Or is it still very rudimentary?
[00:05:32] Balaji: You know, obviously, reading the brain — there you've got more debug cycles, because you're getting a signal out of it and you can look at that non-invasively and so on and so forth. But, you know, knowing where to put the probe and so on — poking around in a new person's brain seems like a pretty error-prone kind of thing.
[00:05:51] Balaji: I’d love your thoughts on that.
[00:05:52] Bryan: Yeah, it's very much so. In fact, when I was beginning to build Kernel, I went and saw a brain implant surgery at [00:06:00] USC. It was a company called NeuroPace. It was a 52-year-old woman who had debilitating epilepsy. And they brought her in — you know, she's under general anesthesia, laying on the bed — and I just watched the entire thing, where they, you know, carve open the scalp, pull back the skin, pull out the saw, and you grind away, you pull out a portion of the skull, and then the dura's on top of the brain, you just kind of cut open the dura and there's the brain.
[00:06:24] Bryan: Pulsating. It was one of the most significant moments of my entire life — you see the brain hardware. It's real, right? Right there. Right. We're not accustomed to seeing live brains. And yes, it's this woman, and, you know, they laid the electrode over the brain — they had identified ahead of time that this was the source of the epilepsy — and so, uh, they were doing the stimulation. But it was really, uh, an important moment for me, because I was trying to decide: do we build invasive or do we build non-invasive? And invasive has highly specific [00:07:00] applications, like Parkinson's or epilepsy or, uh, you know, trying to deal with paralysis.
[00:07:05] Bryan: The non invasive, what I was trying to do is I was modeling out saying, if we can program reality from the atomic scale all the way up. What can we do in terms of reliably, scientifically, methodically engineering our cognition? And so if you think about an example where if we buy our washer and dryer, we have a pretty high degree of confidence it’s going to fit through our front door.
[00:07:27] Bryan: We don't typically say, can you measure that please to see if it's going to fit? Because we've built engineering standards to say these washers and dryers should be able to fit through a common door. And the same thing when you buy a car: you trust it's going to fit in the lanes of the road. You're not worried that it's going to, you know—
[00:07:43] Balaji: There's a fun video online of, um, a particular bridge that's a little too low, and it chops off the tops of lots of trucks. Did you see this? It's a famous viral video — I'll put that up. Yeah, so that's the exception that proves the rule, to your point. Yeah, most trucks can go on most highways and under most bridges [00:08:00] because they're built with that tolerance in mind, and otherwise it would be a failure of
[00:08:03] Bryan: expectations.
[00:08:04] Bryan: Otherwise, yeah. We are in this epic moment of Homo sapiens transitioning to Homo deus, or whatever we become. And the most important thing is we are using our intelligence to build a new intelligence. Yeah. And we don't have high-quality ways to structurally and methodically improve our own intelligence.
[00:08:28] Bryan: We commit the same errors thousands of times throughout our life. We can't fix our biases. We rarely can fix our indulgences. We maneuver through our lives in, like, this serpentine way. And so the contemplation with Kernel was: if we could have a system on the head that anyone could wear, and you get global-scale measurement — it's cheap enough to do that — you basically feed society this data, and then you can [00:09:00] build standards around that.
[00:09:01] Bryan: And so basically, you level up humanity in a structured, methodical way, the same way we build everything else. We engineer society based upon data and getting feedback in that loop. We don't have it for our brains — it's one of the only things. And so this seemed to me a gaping hole: the importance of refining the intelligence that is building the next level of intelligence.
[00:09:27] Bryan: So like,
[00:09:28] Balaji: uh, let me see — like a mirror for your brain. Right — like actually getting some signal. Because you have a mirror for your face, a mirror for your body. Yeah, we have our Fitbit or Apple Watch; it'll give you an ECG now. There's more quantified self. So finding out what your brain activity is actually doing — it's kind of like your heart rate.
[00:09:46] Balaji: Are you really agitated? It's sometimes good to know, right? For example, if your heart rate is like 120, then you're probably not working out hard enough. If it's 200, you're probably working out maybe a little too hard, right? So you have this non-invasive thing. Is it like a [00:10:00] circlet? Is it a cap? What does it look like?
[00:10:01] Bryan: Think of like a bicycle helmet. Bicycle
[00:10:04] Balaji: helmet. Okay, so it’s fairly, it’s like, it’s like a thing right now. You, you wouldn’t, you wouldn’t look normal
[00:10:09] Bryan: wearing it outside. That’s right. Yeah. It’s meant for, uh, clinical conditions. Fine. Okay. Like at home use. At home use. Here’s an example. I did, uh, the psychedelic, I guess it’s not psychedelic.
[00:10:17] Bryan: I did ketamine, uh, as a pilot participant for Kernel. We wanted to pose this question. What do drugs do to the brain? Exactly. What happens to the brain when you do ketamine? Okay. And so you, we wanted to show this because if you can answer what happens to the brain when you do ketamine, you can pose that question for thousands of things.
[00:10:35] Bryan: What happens when and then fill in the blank with everything we do in society. Right. And so what we did is we measured my brain for 10 minutes a day for five days before I did ketamine, during ketamine, and then for 30 days after ketamine. And it was the first time in the world somebody had created a longitudinal map of what actually happens in the brain.
[00:10:54] Balaji: Do you know Judea Pearl’s work on causality? Yeah. You’re familiar with that, right? Yeah. Yeah, like you know, his whole [00:11:00] thing is that a lot of studies that are just purely observational studies do not actually tell you what the causal relationships are. So he introduces this thing, the do operator. And so you have a model of the world, and you infer that if you do X, then Y happens.
[00:11:16] Balaji: But you have to actually do X, right? And so you consciously put in the stimulus, set up the monitoring to see what the response is, and then you can, you know, actually figure that out over time. And this, I think, is the big shift in biomedicine, where we go from people being just research subjects — you know, I was in academia; all these papers talk about "subjects" — to basically, you know, participants in their own health.
[00:11:43] Balaji: They're doing self-experimentation, and they aren't like a row in a table for a scientist — they are the scientists, and they are, you know, essentially generating their own data on themselves. And I think this is the shift of the biohacker, the self-experimentation, and you're part of that general kind of trend.
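A minimal sketch of that do-operator-style, n-of-1 design: deliberately apply the stimulus, measure a marker before, during, and after, and compare the phases. The structure mirrors the ketamine protocol described in the conversation (five days before, during, thirty days after), but every number below is simulated purely for illustration:

```python
# Hypothetical n-of-1 "do(X)" design: simulate a daily biomarker across
# pre/during/post phases of an intervention and compare phase means.
import random

random.seed(0)

def simulate_day(phase: str) -> float:
    """Fake daily biomarker reading; effect sizes are invented."""
    effect = {"pre": 0.0, "during": 15.0, "post": 5.0}[phase]
    return 100.0 + effect + random.gauss(0, 3)

# 5 days before, 1 day during, 30 days after -- the same shape as the
# ketamine measurement protocol described above.
phases = ["pre"] * 5 + ["during"] + ["post"] * 30
readings = [(p, simulate_day(p)) for p in phases]

for phase in ("pre", "during", "post"):
    vals = [v for p, v in readings if p == phase]
    print(f"{phase:>6}: mean={sum(vals) / len(vals):.1f} (n={len(vals)})")
```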
[00:11:59] Balaji: So what happened? Did you [00:12:00] live?
[00:12:01] Bryan: I did. It was an amazing experience because, you know, when you do something like ketamine, Somebody may inquire, Hey, how was it? And then you basically try to generate words to explain the situation. And language is a totally inadequate form factor to convey what it’s like.
[00:12:17] Bryan: You could say, well, I felt like I was in another dimension, or, you know, whatever — but it really is such an imprecise, uh, way to understand what really happened. Like, tell me, uh, what happened to your blood glucose level when you ate the following foods? And it's like, I don't know. It spiked.
[00:12:35] Bryan: Probably. Yeah, you just don't know — you're making it up. Or tell me the health of your heart by how you feel. I don't
[00:12:40] Balaji: know. "It was beating fast" — you know, maybe a qualitative kind of thing, but
[00:12:44] Bryan: not exactly. But it's so far removed from accuracy. And so we saw, for example — what was cool is the map we created. Think of, like, planet Earth.
[00:12:53] Bryan: You've got airports around the entire Earth, and you see traffic from Tokyo to the US and to [00:13:00] New York, and it's a pretty substantial leg, right? Then you see smaller areas where there's less traffic. The same is true with our brains. We have high-traffic and low-traffic nodes, and the way our brains structure these nodes tells a lot about us.
[00:13:13] Bryan: And what ketamine did is it basically washed all my nodes. So it's almost like taking all the airports on planet Earth and just putting them in random places, and then network activity starts building among the nodes again. And so you have this two-or-three-day window where things are pretty open to being restructured — which is called the therapeutic window — where you have this opportunity to remap your beliefs, ideas, emotions. And then over time, by day three, four, or five, we saw my nodes starting to cement again in certain patterns.
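One way to picture the "airport network" analogy in code: treat brain regions as nodes, estimate a connectivity "route map" as pairwise correlation between their activity, and compare the map before and after a perturbation. This is a toy model with simulated signals, not Kernel's actual pipeline:

```python
# Toy "airport network" model: nodes are brain regions, routes are
# correlations between their time series. Simulated data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_samples = 8, 500

def connectivity(signals: np.ndarray) -> np.ndarray:
    """Pairwise correlation between node time series (the route map)."""
    return np.corrcoef(signals)

# Baseline: node 1 strongly coupled to node 0 -- a "high-traffic" route.
baseline = rng.standard_normal((n_nodes, n_samples))
baseline[1] = 0.8 * baseline[0] + 0.2 * rng.standard_normal(n_samples)

# "Washed" state: the same nodes with the coupling scrambled.
washed = rng.standard_normal((n_nodes, n_samples))

print("route 0-1 before:", round(float(connectivity(baseline)[0, 1]), 2))
print("route 0-1 after :", round(float(connectivity(washed)[0, 1]), 2))
```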
[00:13:44] Bryan: What we said is, okay, that's cool, but highly specific for an application. We would rather build a technology that can cover the entire cortex. Okay. And let's call it 15 millimeters deep. So we're going to say — let's say that represents, I'm making this up, five billion neurons. Okay, 10 [00:14:00] billion neurons. Fine.
[00:14:00] Bryan: We want to see the whole system at play, not just one little cluster. And if you look at the entire system — even though, with the technology we chose, you can't see deep brain structures — what we do get is a system that costs, you know, orders of magnitude less than an fMRI machine, and can be scaled globally, because it can be used in any kind of environment.
[00:14:20] Bryan: You're not sitting in a coffin-like environment like you are with MRI. So we basically went through all these trade-offs of what technology, for what resolution, under what use case, and then we looked at the laws of physics and we said, what paths do we think are doable? And that's what we spent seven years
[00:14:36] Bryan: Building the technology. And we built the entire thing. We built a custom chip from the ground up, which is really hard to do. Yes, really. Why do you need that? Just to do it fast enough. Yeah, the technology is called time-domain functional near-infrared spectroscopy. So
[00:14:53] Balaji: it's, it's something like a — like, is it a transform kind of thing, or what are you — what exactly are
[00:14:57] Bryan: you doing?
[00:14:57] Bryan: So basically the way the technology works is we [00:15:00] pulse light into the brain. And then certain photons will go in and scatter about, and then a few will come back out. And the detector needs to pick up the small number of photons that come back out. And when those photons come back out, because you're dealing in a time-domain scenario, you can get depth.
[00:15:17] Bryan: And then you reconstruct the photons when they come back out. So it's like, um, holding a flashlight to your cheek. We're looking at the hemodynamic signal, so we're not looking at neurons. So I should think of it as imaging? Yes. We did a study with alcohol. The question was — um, we had, uh, placebo (no alcohol), low-alcohol, and medium-alcohol conditions.
[00:15:36] Bryan: What we found is when someone drank a minimal amount of alcohol, uh, they were impaired. We could see that with the, uh, Kernel system, but their brain could compensate for the impairment, so the behavioral tests showed no impairment. So if you just ask, are you impaired from the alcohol you've consumed — behaviorally, you'd say, no, you're not.
[00:15:56] Bryan: You’re fine. Right. Looking through kernel, you could see where there was something [00:16:00] on, on the hardware
[00:16:01] Balaji: issue that you’re compensating for in
[00:16:03] Bryan: software. Exactly. Right. And then if you move to the higher-alcohol condition, the brain can no longer compensate for it. Right. Interesting. So there, behavioral measures can pick it up.
[00:16:12] Bryan: And we're basically now in clinical trials looking at dementia, to track people's decline.
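For intuition, here is a loose, toy sketch of the time-domain idea: pulse light in, record each returning photon's time of flight, and bin the arrivals — later bins correspond, on average, to photons that scattered deeper. The depth range, timing model, and thresholds below are all invented numbers, not Kernel's actual physics:

```python
# Toy time-domain fNIRS: bin returning photons by time of flight.
# Later arrivals have, on average, travelled deeper. All numbers invented.
import random

random.seed(1)

def photon_flight_time_ps() -> float:
    """Fake time of flight: deeper scattering paths take longer."""
    depth_mm = random.uniform(0, 15)                # assumed ~15 mm sensing depth
    return 50 + 6 * depth_mm + random.gauss(0, 5)   # toy picosecond model

arrivals = [photon_flight_time_ps() for _ in range(10_000)]

bins = {"early (shallow)": 0, "middle": 0, "late (deep)": 0}
for t in arrivals:
    if t < 80:
        bins["early (shallow)"] += 1
    elif t < 110:
        bins["middle"] += 1
    else:
        bins["late (deep)"] += 1

for name, count in bins.items():
    print(f"{name:>16}: {count} photons")
```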
[00:16:19] Balaji: That's a good one. Alcohol is — ketamine might seem like too heavy a drug for people to test normally. But, I don't know, I mean, like, how legal is it? Is it like— It's
[00:16:29] Bryan: um, In some states I’m sure. Yeah, in the US it’s being used quite a bit now.
[00:16:33] Bryan: It can be used in clinical settings. Fine,
[00:16:35] Balaji: ok. So, alcohol — in theory, they could buy a Kernel helmet. Can you, can you buy it? Or can you get it in a store? Uh, you're gonna get it for
[00:16:43] Bryan: medical? Yeah, we're only doing clinical right now. Okay, fine.
[00:16:45] Balaji: So, alcohol would be a way, though, you could test it. If you want to do the causality analysis, you would be able to see this.
[00:16:52] Balaji: And so that'd be a fun drug to try — that anybody could try at home. And the reason I say that is, a lot of [00:17:00] biomedical papers are "scientists at Harvard discovered X or Y or Z," and you can't replicate them. That's prestigious citation, which isn't science; independent replication is science. You know, that's why with computer science, we can all go and download something from GitHub or Hugging Face or something like that, and you can just run it locally and replicate it yourself, right? That's why I'm asking about the data sets and so on, and how portable the thing is. Can you just buy it and try it at home? What can people try at home? What they can try at home is the diet stuff and so on — we'll come back to that in a second.
[00:17:29] Balaji: Anyway,
[00:17:30] Bryan: finish up with Kernel. Yeah, so basically the idea with Kernel was: if we are giving birth to a new form of intelligence, it would maybe be helpful if we could methodically build and improve our own intelligence, right? And so you can ask these questions — how do we address depression or anxiety, how do you address anger or bias — you know, thousands of questions — and it would be great to work from a trusted system of science.
[00:17:58] Bryan: And so that was the idea behind Kernel. [00:18:00] And so all this time, going back to this thought experiment: I was in the trenches with a bunch of entrepreneurs building deep tech, nanotech, synthetic biology, genomics. I was then working on brain tech, trying to ask, can we scaffold human cognition? And then I was continuing to think about this 25th-century thought experiment.
[00:18:18] Bryan: And, uh, this is where Blueprint came up, where I thought maybe the observation they make about us is that technology reached a certain point in the early 21st century where death went from inevitable to a maybe, right? And I wondered, I thought — when, when,
[00:18:39] Balaji: when did that, uh, like I’ve been posting on this for a while and I think your interests and my interests are like lining up.
[00:18:45] Balaji: Obviously you've done a lot, you know, personally and whatnot. Uh, and maybe we'll have more to announce in the weeks and months to come. But when did that — like, when did you turn from Kernel to this? Three years ago, four years ago?
[00:18:56] Bryan: Is that right? It was in tandem. I was doing both Kernel and Blueprint at the same time, and they [00:19:00] actually overlapped really nicely, because
[00:19:01] Bryan: Kernel greatly improved my ability to do things with Blueprint. Oh, why is that? Uh, just because I needed, uh, you know— The creativity of it or something? Well, with Blueprint, actually, you know, we took this approach where we said, in order to do this properly — uh, so basically, okay, let me step back a few steps here.
[00:19:19] Bryan: We said: is the fountain of youth here right now? Mm hmm. And we just don't know it. Exactly. And we don't know about it. And so we said, okay, what we're going to do is look at every scientific paper ever published on lifespan and healthspan. We're going to rank them according to effect size and then how strong we think the evidence is.
[00:19:35] Bryan: There’s some
[00:19:35] Balaji: mouse studies that are very, very convincing. I mean, like, they actually have real results and so on. Human studies have a lot more confounds, and that's probably because we can't run the same kinds of experiments. But go ahead.
[00:19:47] Bryan: So we looked at all the evidence and we said, okay, what happens if you actually structure the most robust evidence and then apply it to one person?
[00:19:55] Bryan: Mm hmm. Uh, no one had ever done that before — the structure plus the application. So [00:20:00] Kernel
[00:20:00] Balaji: gave you the push to go and do that diligence on the literature? Well, I was
[00:20:03] Bryan: doing both in tandem. And Kernel was nice because we were trying to address every single organ in my body. Because typically, when you approach this problem — to understand aging, and for this entire endeavor, you have to bio-age every organ of my body; you can't just look at me as a whole.
[00:20:21] Bryan: You can look at my entire body, right? But also it's helpful to look at every single organ. Yep. And so when doing that, looking at the brain, it was really helpful to have a functional measurement of my brain. Because measuring blood glucose was easy. Measuring my weight was easy. Getting a blood draw was easy.
[00:20:38] Bryan: So this is what I mean, a
[00:20:38] Balaji: mirror for your brain. Exactly. Like, this is another quantified-self device, in a sense, that's giving you readouts on another obviously very important organ — where we can do that already for skin or the heart and so on, but
[00:20:51] Bryan: not the brain. Exactly. It's one of the only missing pieces we have of the body. But arguably the most important one. Exactly.
[00:20:58] Bryan: Yeah,
[00:20:59] Balaji: actually, [00:21:00] now — you know, people have been working on circlet MRI, that's the term that I've heard for a long time, and that just didn't work, for whatever reason. Or maybe it is working now; I'm not familiar with where it's at. Circlet MRI — like, you know, a circlet, like a king or queen's kind of thing — that would be something you could wear and walk around. I guess the bicycle helmet is almost as good as circlet MRI.
[00:21:22] Balaji: Yeah. Yeah, that's cool. So, uh, okay. So now let's come to the present day, which is what people know you for, right? Anything else you want to say on Kernel or whatever? I mean, that's good. That's helpful. Yeah. So, um, starting in 2019, you decided to — uh, you've seen Rocky IV? I have. Yeah. So like, obviously I like Rocky, but as a kid, I also had a soft spot for Drago's training regimen, which was quantified self
[00:21:49] Bryan: before the name.
[00:21:50] Bryan: That scene? Yeah. It was, as a kid watching that thing, you want to be Drago. Well,
[00:21:55] Balaji: so you want — exactly. That's right. So you want to be him — until they give the injection. Until they give the injection. Yeah, yeah, [00:22:00] yeah. Exactly. Right? So there's something, in my view, about the combination of the Rocky and Drago styles, the muscles and the quantified self, where the combination beats both, right?
[00:22:11] Balaji: This is what I tweeted the other day, when someone was like, oh, Sol Brah will beat Moon Brah or whatever. And I was like, look, there's a lot of wisdom, right, in, you know, back-to-basics and sun-and-steel, all that type of stuff — a lot of wisdom in that. But a caveman isn't going to pull off a moon landing, right?
[00:22:28] Balaji: Like, you can only go so far with the primitive caveman stuff before you hit a wall — and look, we live in a technological society; you're not gonna make progress without doing math, right? That's also a portion of human greatness, the moon landing, right? So I think what you've done over the last three, four years is actually a great fusion of these. So why don't you give the recap: you were a tech guy, and now — what'd you do over the
[00:22:50] Bryan: last three years?
[00:22:50] Bryan: What we tried to do is pose this question: could we build an algorithm that could take better care of me than I can myself? [00:23:00] And so what we're trying to do is look throughout history at the moments when technology demonstrated superiority to a previous system run by Homo sapien intelligence. And so, like, just take a few examples — like, the elevator
[00:23:16] Balaji: operator turning into — like, that's a very small example, but — Exactly.
[00:23:19] Balaji: You don’t need an operator anymore.
[00:23:21] Bryan: Exactly right. Okay. And so we've seen more of these. Like, you see that when the telegraph message was sent, you know, the Pony Express was dead. When we started having GPS navigation, the paper map on the lap was dead. So continually, we say yes to solutions that are either more efficient or help us achieve our goal at a lower cost. So I was basically posing this question about this sacred idea that we are the only arbiters of what we eat, when we go to bed — all these nuanced decisions that we make on a daily basis.
[00:23:52] Bryan: And I was posing this question: whether the 25th century would see this as inevitable, and whether we could identify it. And [00:24:00] so, yeah, over the past three years we have built an algorithm, and I said yes to the algorithm. Basically, I said yes to this entire process: we looked at all the evidence; we do a measurement across my entire body, every organ; and then we do the protocol, and I do exactly what the protocol says.
[00:24:14] Bryan: Mm hmm. And in doing that, I made trade offs. Like, I would go to bed when it, you know, when the data said I’d have the optimal sleep. And I broke all kinds of social norms in doing this. Right. And I think a lot of
[00:24:27] Balaji: people, uh Like, like, for example, like, you can’t drink alcohol when you go out. You often can’t eat when you go out to dinner.
[00:24:33] Balaji: Like, you’re just drinking water and everybody else is eating. Uh, you have to go to sleep much earlier or at a certain time. Um, you can’t travel as much because that screws up your sleep. These are some of them or?
[00:24:43] Bryan: Yeah, exactly. Yeah. And what we did is we, uh, we reframed this idea that because I became the most measured person in human history over the past three years, no human has more data than myself, uh, measured for
[00:24:56] Balaji: measurement.
[00:24:56] Balaji: There's a guy named Larry Smarr. There's a great article — do you know this guy? [00:25:00] Yes. Okay — "The Measured Man," in the Atlantic in 2012, where he was measuring himself. And now you've taken that to the next level.
[00:25:05] Bryan: Yeah. Go ahead. Yeah, so we basically said, uh, if we have all this data, and we have an algorithm that's running — um, what I think a lot of people are confused by is they see this, and then they think it's some vain attempt at something. There are so many misconceptions. They don't understand that what I'm really trying to do is a scientific exploration.
[00:25:26] Bryan: Oh, yeah. And a technological exploration to say: where are we at as a species now? If this is true — that this algorithm can in fact take better care of me than I can myself — even though it does mean that we have some trade-offs, the point is established. Yes.
[00:25:47] Bryan: Great. But it's basically: if the telegraph just does it better, the Pony Express is not going to be around, no matter how much we love the horse and no matter how much we love the riders. We're going to use the telegraph, right?
[00:25:58] Balaji: And the thing [00:26:00] is, when you say — as you've said a few times — that the algorithm takes better care of you than you would have yourself, meaning your sort of, uh, let's call it intuitive way, the way you normally live your life,
[00:26:12] Balaji: would give you a certain output X, and then with this particular regimented discipline you get Y. Right. Yeah. And it's interesting to put it that way, because it's as if you're talking about the algorithm being outside your body. Whereas I might rephrase it as a way of life that is better than the current fast-food, McDonald's, sugar, you know, kind of lifestyle of American health in the, you know, 2020s.
[00:26:45] Balaji: It’s interesting to call it an algorithm. People might call it a lifestyle or they might call it a way of life or a set of recipes or something like that. But when you call it an algorithm, it’s suggestive because it indicates that the sensors are pulling the [00:27:00] data off. And so that’s actually a question like, how dynamic was it?
[00:27:04] Balaji: You’ve got all these, you’re doing all this measurement, right? You’re also, you’ve got this rigorous diet and exercise and sleep program. To what extent is this a constant versus is it a variable where it’s like, you know, open loop versus closed loop control, right? So you’re getting all the signal coming off you, right?
[00:27:20] Balaji: It’s simple to just do the same thing every single day. But did this signal affect what you did? Yes.
[00:27:27] Bryan: Okay, what kinds of things did it do? We make modifications every day. Like what? Uh, for two years we were messing around with diet. Okay. Like, basically, uh, the diet I had was initially 1,950 calories.
[00:27:41] Bryan: Mm-hmm. I was on a 20% caloric restriction, and because it was so low, uh, every calorie had to fight for its life. There was not a single calorie in there that was nice-to-have, cool-to-have, trendy. It had to serve a specific purpose in the body, and the markers had to show it. Uh, 1,950.
[00:28:00] Balaji: Yep. Is normal, like, 2,500?
[00:28:03] Balaji: Uh, yes, exactly. So you're 550 below. 550 below got you to that level of budget.
[00:28:09] Bryan: Is that right? Exactly. And so that's what we built out on the evidence of caloric restriction. And we've since modified that, like we have almost everything. I'm now at 2,250, so I'm at 10 percent caloric restriction. And we saw no change in my markers from giving up that 10%.
[00:28:24] Bryan: So we didn't see more benefit from dipping lower. So these are, like — we've discovered, no, not discovered, we have experienced, you know, a hundred things like that, where we've put out a hypothesis, we've tested it, we get the data back, and that's just done continually on every factor. And the important thing on this is, the evidence we use is population-level studies, so it's not me-specific.
[00:28:46] Bryan: So we think that this is much more applicable at the population level — you know, to everyone — than it is customized for me.
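As a quick worked example, here is the arithmetic behind those caloric-restriction figures (the 2,500-calorie maintenance baseline is the rough figure used in the conversation):

```python
# Worked arithmetic for the caloric-restriction figures above.
def restriction_pct(baseline: float, intake: float) -> float:
    """Percent restriction relative to a maintenance baseline."""
    return 100 * (baseline - intake) / baseline

BASELINE = 2500  # rough maintenance estimate from the conversation

for intake in (1950, 2250):
    print(f"{intake} kcal -> {restriction_pct(BASELINE, intake):.0f}% restriction")
# 1950 kcal -> 22% restriction (the "roughly 20%" phase, 550 below baseline)
# 2250 kcal -> 10% restriction (the current phase)
```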
[00:28:54] Balaji: Do you know who Mike Snyder is? Yeah. Okay. So, you know, he did the integrative omics kind of thing, so that's another thing this reminds me of — [00:29:00] Larry Smarr, Mike Snyder. You know, I think it was 10 years ago. He took every test — just for the audience,
[00:29:05] Balaji: you may not know this: Mike Snyder is a prof at Stanford. He took every test that was available at that time. It was like, you know, genome sequencing; it was, like, transcriptome with RNA; it was all this other stuff, all this so-called omics, which is, you know, doing everything at once. And he measured himself, or had his lab do it,
[00:29:25] Balaji: And he, for example, he got sick in the middle and he could actually see from the, you know, mRNA analysis that he was getting sick before he felt sick. That’s right. Right? And moreover that it was pushing him into a body state that was at the level of expression vectors more similar to like a diabetic or pre diabetic state.
[00:29:45] Balaji: And so there’s like some correlation in infection and diabetes or what have you that he hadn’t known before. And so that’s a lens that I’m familiar with. Which is like the omics type stuff, like, you know, DNA, RNA, proteomics and so on and so forth. Did you, [00:30:00] what kinds of tests, you said you did all these measurements, right?
[00:30:02] Balaji: Did you get your genome done? I did. You did. And you, what other kinds of tests did
[00:30:06] Bryan: you do? Yeah, so an example of the Mike Snyder-like thing is we've been using this DNA methylation clock, uh, DunedinPACE, uh, based upon a longitudinal study out of New Zealand. And it's looking at the methylation patterns. It's a Gen 3 clock.
[00:30:20] Bryan: And we’ve been doing it for two years now and we’ve seen how my speed of aging has changed in response. Oh. Which is
[00:30:28] Balaji: really cool. That’s really interesting. So you had like, you had a curve that was going like this and then it flattened out.
[00:30:36] Bryan: Yeah, so there’s a few things. Do you have a graph? Did you publish that graph?
[00:30:39] Bryan: I will soon. Okay, great. Yeah, we haven’t published it yet. So for example, this would be a great
[00:30:43] Balaji: n-equals-one study — you know the term n equals one? Yeah. Right — like, in the sense that an n-equals-one study is a longitudinal study on one person. This would be a great thing to put on, like, bioRxiv or something like that, if you — we
[00:30:53] Bryan: intend on it, yeah. Okay, that'd be awesome.
[00:30:54] Bryan: Yeah. So we saw, for example — I did two therapies that we had identified as [00:31:00] worthwhile. One was dasatinib and quercetin, which is used to clear senescent cells. So when a cell stops dividing, it becomes zombie-like and spits out these cytokines. It's bad. And so you want to clear the senescent cell load.
[00:31:13] Bryan: And so we used dasatinib, which is a drug for leukemia, and then quercetin, and it's a protocol where you do it three days at a time, for three months. Mm hmm. And so we did that, and, interestingly, when I did that, my speed of aging spiked. So the therapy that was intended to do the clearance had this relationship where, on this clock, my speed of aging increased.
[00:31:35] Bryan: Mm-hmm. So it was like a game of whack-a-mole: it's not like you do one thing in the body and only that happens — it's doing a whole bunch of things. It's a complicated interaction with all kinds of things, emergent properties. We saw the same thing when I did, uh, human growth hormone to regenerate my thymus. So I did, uh, HGH for a hundred days, and according to MRI, we did in fact, uh, rejuvenate my thymus by seven years, [00:32:00]
[00:32:00] Bryan: by changing the fat fraction, based upon three MRIs. Interesting. So it was a huge success — in that, uh, we're cautiously optimistic. You know, like, this is new, so. Right. But it was still three MRIs, and the outcome's interesting: that also dramatically increased my speed of aging. And then it came back down after I had discontinued the HGH.
[00:32:20] Bryan: And so it's cool now that we have enough data, over enough time, from enough vantage points — MRI, ultrasound, DNA methylation, blood, saliva, stool sample, fitness test, cognitive test, all of these things — that we can see this picture, which is becoming increasingly interesting: the state of play in the body as we go about these various interventions.
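A tiny sketch of the "whack-a-mole" pattern just described — distinguishing a transient spike in a longitudinal marker (such as a speed-of-aging estimate during the dasatinib/quercetin or HGH windows) from a persistent shift. The classification rule and every reading below are hypothetical:

```python
# Hypothetical classifier for an intervention's effect on a longitudinal
# marker: did it spike and recover, or shift the baseline for good?
def classify(pre: float, during: float, post: float, tol: float = 0.05) -> str:
    if abs(post - pre) <= tol:
        return "transient spike" if during > pre + tol else "no effect"
    return "persistent shift"

# Invented speed-of-aging readings (1.0 = aging one year per year).
print(classify(pre=0.80, during=0.95, post=0.79))  # -> transient spike
print(classify(pre=0.80, during=0.95, post=0.92))  # -> persistent shift
```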
[00:32:39] Balaji: so I love this.
[00:32:40] Balaji: I love this. And you know, what I'd love to see are graphs which are like, you know, the hammer of Thor: your intervention hits at t equals zero or t equals five or whatever, and then you see the thing go up like this or down like this, and then you see maybe some recovery back to [33:00] baseline
[00:33:04] Balaji: Uh, or you see a permanent change, you know, based on that intervention. Just seeing those graphs would be really cool — like the thymus thing you're talking about. Or conversely, you know, here's a graph: start working out here, and then it slopes off, right? You're changing derivatives, you're changing absolute values. I do believe that the way medicine is currently practiced — this is a whole long topic, and I'm sure you have opinions on this, but as somebody — like, I did clinical genomics for a long time, you know, developed tests that were prescribed — I've interacted with lots of doctors.
[00:33:26] Balaji: I'm technically Dr. Srinivasan, actually, but a PhD, not an MD. Okay, but I work with many MDs, right? The whole thing is based on you getting sick and your body breaking down, and then maybe a surgeon or imaging or, you know, some palliative care is rolled in. It waits for you to break first. But, you know, an ounce of prevention is worth a pound of cure. What we really want to do — and I've said this for a long time, way before I met you — is we need a lot more investment in diagnostics up front, to detect when something's going wrong first.
[00:33:59] Balaji: And the [00:34:00] Doogie Howser, M.D. types — and all the money — should be going into diagnostics, as opposed to what it currently is. It's like a pinball machine where all the payouts are for surgeries or, uh, you know, very expensive drugs after things are already broken. It's like a pinball machine that's set up wrong, you know.
[00:34:15] Balaji: And even like the, the best doctor should be your internist or primary care physician. Or nowadays, and this is going to be a breakthrough potentially, your AI doctor that’s triaging upstream and getting you to do the right thing before it breaks, right? So what you’re doing has a lot to say, not simply about your personal fitness, but also like for how medicine is practiced and how people take care of their own health.
[00:34:38] Balaji: I love your thoughts. That’s
[00:34:38] Bryan: exactly right. We have such a strong bias for action in the therapy category — addressing something that's broken — and we dramatically discount the diagnostics. And that's why Blueprint has been so expensive. It has been 2 million a year because we invest so heavily in the [00:35:00] measurement.
[00:35:00] Bryan: And so people look at that and they say, I'm not going to do it because I don't have 2 million a year. But what they don't realize is the actual practice of Blueprint is like a thousand to 1,500 a month, including groceries, and I've open-sourced everything. So with Blueprint, I've taken the cost and lowered it by orders of magnitude — distilled all of our knowledge into this open-sourced, you know, very low-cost process.
[00:35:23] Bryan: But yeah, it’s really, the cost has been the scientific research and the diagnostics and the data analysis of trying to piece together. Um, basically informing what we should do and why. So
[00:35:34] Balaji: here's — let me frame it in a way, maybe, which is — and that's really good, and it's awesome — open-loop Blueprint is a thousand dollars a year.
[00:35:44] Balaji: Closed-loop Blueprint, where you have the entire metrics thing coming out and rejiggering what your diet is and so on — that's still on the order of a million dollars a year. Which is fine, right? It just means there's the scalable [00:36:00] technology that's out there now that people can do, based on your self-study, and then, to further customize it to themselves,
[00:36:06] Balaji: we will want to take all the body monitoring and measurement stuff that you've done and start miniaturizing it, or otherwise making it accessible, and bring it down by another one, two, three orders of magnitude — similar to what happened with cell phones, right? Like, cell phones were really expensive in the 80s; you know, they were like these bricks. And then they became something that now the poorest of the poor around the world can afford. But we knew that that was desirable first, before we did it.
[00:36:31] Balaji: So, would you agree with that? Like, the non-customized version is something that you can do at home, and that's just X thousand dollars a year or something like that. The custom version with all the sensors, people cannot yet do at home — or they can only do a subset of it at home — and that's something where we want to bring the cost down.
[00:36:46] Balaji: Is that right? Yeah, that’s,
[00:36:47] Bryan: that's correct. I'd say there's some power laws at play, where some of the sensors are much more valuable than others. You don't need the whole thing. The open loop, I'd say, is probably like an 80/20 rule. And then you step up to where I'm at, where [00:37:00] getting to 99 percent is probably the 2 million a year. But really, I think the cool thing is that you have an 80/20 power law for the lowest-cost version.
[00:37:12] Bryan: And that's where I think Blueprint is basically misunderstood. I'm so misunderstood; this product is so misunderstood. That is the cleanest way someone can understand it: it's the power law, available for everyone. Yeah, and it's all for free. I think there are a lot of people who are apprehensive because they think it's too expensive, or they can't get the benefit, or they don't realize the power laws at play. But to me, the most exciting outcome of this entire thing is that one can meaningfully improve their health and wellness and extend their life by doing the basics.
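To illustrate the 80/20 claim with a toy model: rank interventions by modeled benefit per dollar and see how much of the total the cheap basics capture. Every name, cost, and benefit figure below is invented for illustration, not Blueprint data:

```python
# Toy 80/20 illustration: how much modeled benefit do the basics capture?
# (name, annual_cost_usd, modeled_benefit_units) -- all values invented.
interventions = [
    ("sleep schedule",   0,         30),
    ("basic diet",       5_000,     25),
    ("exercise",         500,       25),
    ("full diagnostics", 2_000_000, 20),
]

total = sum(benefit for _, _, benefit in interventions)
basics = [item for item in interventions if item[1] <= 5_000]
captured = sum(benefit for _, _, benefit in basics)

print(f"basics capture {100 * captured / total:.0f}% of modeled benefit")
# -> basics capture 80% of modeled benefit
```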
[00:37:48] Balaji: I love it. I mean, the way I'd put it is: you've computed a solution, and that solution will probably work pretty well for most people. If people want to recompute a custom [00:38:00] solution for themselves, it might be even better, but it'll be significantly more expensive, and it's only worth it if they've got disposable income and want to go from 80 to 100.
[00:38:07] Bryan: And the yield is honestly pretty low. Probably
[00:38:09] Balaji: low. Now, I will poke a little bit on this. And the reason I’ll poke a little bit is, um, you know the concept of nutrigenomics? Yeah. Right. So, one thing that I believe is that the reason we have so many studies that say X is good for you, X is bad for you, Y is good for you, Y is bad for you, is that you have essentially confounding genetic variation behind the scenes, potentially, where there’s cohorts that are like, you know, a small example.
[00:38:36] Balaji: There's a paper that I, you know, did the stats for many years ago. We can put that up on screen, but it's, um, warfarin dosing. Okay. So warfarin is a blood thinner, and you could have a huge football player and they could bleed out from warfarin, but an 80-year-old grandmother could take, like, a big dose. And that's because there are, like, two alleles — [00:39:00] VKORC1 and CYP2C9, if I remember correctly — you know, in addition to the normal kinds of things, which is like how big and beefy somebody is, or how small and light somebody is.
[00:39:08] Balaji: These, you know — your genotype actually significantly influences your tolerance to this, and there's a whole database of this called PharmGKB, which has your background genetic variants and then what drug dosage is optimal for you, right? Like, I absolutely do believe that what you've done here, certainly with eating vegetables and so on, will improve things for a lot of people. But as an example: if somebody's a caffeine non-metabolizer, if they're an alcohol non-metabolizer, if they're lactose intolerant or lactose tolerant, that optimal diet might change, right?
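A hypothetical sketch of genotype-conditioned dosing in the spirit of that warfarin example — the same drug, a different starting dose depending on variant status. The table values below are invented for illustration and are not clinical guidance (real genotype-to-dose mappings live in resources like PharmGKB):

```python
# Genotype-conditioned dosing lookup, in the spirit of the warfarin
# example. Dose values are INVENTED for illustration -- not clinical.
WARFARIN_DOSE_MG = {
    # (VKORC1 status, CYP2C9 status) -> illustrative daily starting dose
    ("normal",  "normal"):  5.0,
    ("variant", "normal"):  3.0,
    ("normal",  "variant"): 3.5,
    ("variant", "variant"): 1.5,
}

def starting_dose(vkorc1: str, cyp2c9: str) -> float:
    """Look up an illustrative starting dose for a genotype pair."""
    return WARFARIN_DOSE_MG[(vkorc1, cyp2c9)]

print(starting_dose("variant", "variant"), "mg/day (illustrative only)")
```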
[00:39:40] Balaji: So, you know, here's something, maybe, that's in between. By the way, I love the 80/20 — I'm not saying not to do that. It's great. It's good that it's there. But I mean, in between the 80 and the n-of-1 might be population-specific breakdowns. For example, you know, Northern Europeans need, you know, less vitamin D than South Asians, right?
[00:39:59] Balaji: As a South Asian — [00:40:00] lots of Indian computer guys need vitamin D supplementation, because with dark skin you're supposed to be in the sun more, and you're not, right? So that kind of thing might be where population-level segmentation is the next level of resolution, where it's still cost-effective to go and do these studies, right?
[00:40:16] Balaji: But it’s kind of like what we do with the Human Genome Project. There was like one reference genome that was done. Then there’s like the HapMap and Thousand Genomes Project where we’re sequencing subgroups from different populations, and you get better resolution, but you’re not all the way down to the level of an individual yet.
[00:40:30] Balaji: Let me know your thoughts. Yeah. Yeah, I agree. That might be a follow up study where you have like representatives from different
[00:40:34] Bryan: populations. Yeah. So I guess, as I have evolved through this project — we went to the absolute extreme of what can be done in the early 21st century. And where I've come back to, with everyone else, is: even at that level of optimization, you're still at this far end of the extreme curve.
[00:40:52] Bryan: And really, what I've been seeing in talking to people about this is: we have this plague of self-destructive behavior. [00:41:00] We eat too much sugar, we don't sleep well, we eat junk food, we eat too much food. We are a society addicted to addiction. You
[00:41:08] Balaji: know, so the thing is, I would actually, I agree with you.
[00:41:11] Balaji: I have started to think, though — as capitalist as I am — about what's happened as you went from, like, home-cooked food, right, where either you made it or, like, a close relative made it, to, for example, the rise of restaurant culture, you know, which is actually a relatively recent phenomenon. We can put that graph on screen.
[00:41:29] Balaji: Then all of these businesses that were not going to pay for your health care bills could make the food salty, sugary, tasty, huge portions and get more money. But they put the externality of your health care bills on you. And so it’s not exactly purely self destructive, it’s partially self destructive, but it’s also something where there’s entities that are disaligned with your health interests that will make more money if you’re less healthy, you know?
[00:41:59] Balaji: It's [00:42:00] like — have you seen the snack foods thing? They put something in snack foods where you just eat the whole box, you eat the whole thing, and you want more, and so on. And they have all these chemists and all these people who are doing packaging, and they're putting it right in the front of the store — because that's no accident.
[00:42:16] Balaji: It's right there in the checkout line, where you have to see it. You know, Dungeons and Dragons — we were talking about that, right? You have to roll, like, three saving throws and be like, I'm not eating the cookies, right? So it's not entirely — it is partially self-destructive, right? But it's also partially that the environment isn't helping us.
[00:42:32] Bryan: Entirely. And so — I'd love your thoughts on that. Yeah, if I'm piecing together all these different pieces: so, sold Braintree Venmo; tried to understand deep tech to program reality; worked to make the brain a methodical scientific endeavor to improve our intelligence; and then looked at where we're at with the fountain of youth, and tried to explore whether we've reached that critical point where an algorithm is better at taking care of us than we can ourselves.
[00:42:55] Bryan: And you bridge it all together — that, to me, outputs Don't Die. So if you [00:43:00] say, okay, we have these systems operating society, and we're part of it — what is the objective function of society? What game are we playing? What are the rules? What are the incentives, visible and invisible? And that's where I think this is.
[00:43:16] Bryan: Uh, what I’ve been working for since the age of 21 is this idea of don’t die, which is basically when intelligence gets to a certain point of capability, the only foe becomes death. Mm hmm. It no longer makes sense to raise an army and conquer territory. It no longer makes sense to have huge disparities of wealth, uh, you know, accumulation.
[00:43:40] Bryan: It's a different state of existence. And so Don't Die is: don't die individually. Don't kill each other — so don't build things that have people killing themselves, and don't kill each other. Don't kill the planet, because we treat our bodies like we treat planet Earth; it's an identical relationship.
[00:43:59] Bryan: And build [00:44:00] AI around it, to align it with Don't Die. And that is my best guess on what the 25th century will say Homo sapiens figured out: that the philosophical and mathematical structure of existence was don't die, at every layer of existence. You know,
[00:44:19] Balaji: I'll give a couple of riffs on that, if you want.
[00:44:21] Balaji: One is the zeroth commandment: the first commandment is thou shalt not kill; the zeroth commandment is thou shalt not die. Is that good? That's very good. So use that one, right? Um, uh, second thought is, um,
[00:44:36] Bryan: Well played. Was that good? Yeah, that was good. Really well played.
[00:44:38] Balaji: Alright, good, good. So, uh, second thought is, um — you know, I had this clip from when I was on, uh, Lex Fridman's podcast, where, you know, I talk about the prime number maze.
[00:44:49] Balaji: Are you familiar with that? No. It's just a different way of, I think, coming at some of the same concepts. The short version is: you can teach rats to navigate mazes with, like, even turns, right? This [45:00] example comes from Chubbs. You can do even turns, maybe every third turn. But if you start putting them in a maze where they have to navigate a prime-number sequence of turns, the level of abstraction is too high — and frankly, it's too high for, like, 99 percent of humans.
[00:45:11] Balaji: But something that I've thought about for a long time is that that pattern is still very simple in a sense, right? And so how much are we like rats in a prime-number maze, where just one notch of abstraction above, we could just see the grid, we could see the structure, you know? And physics is very appealing in this way, because, like, you know, Maxwell's equations are so beautiful and so simple in a sense, and yet they describe a variety of different phenomena. And we can't see these electric and magnetic waves — but then maybe you can with iron filings, or maybe you can with an oscilloscope or something like that, right?
[00:45:49] Balaji: Well, what's next? You know, you built the brain-measuring device; you measured everything on yourself; you used that to solve, essentially, an equation — what [00:46:00] is the optimal health, diet, etc. routine that you can do, right? — and you adjusted those parameters based on this. Now you're doing what?
[00:46:09] Bryan: What’s this?
[00:46:10] Bryan: You’re selling
[00:46:11] Balaji: olive oil. That was a V1. You just put it on the website and it was like, Oh, he’s just selling olive oil. I’m like, I’m sure that’s just the prelude to something bigger. So, you know, now maybe we can talk about what that bigger thing is. This is going to come out right after your launch.
[00:46:23] Bryan: Yeah, so we do a blueprint and we say, Okay, now we’ve figured out some power laws.
[00:46:28] Bryan: And diet is a significant power law that is sometimes outside the control of the person. You know, a person can choose when to go to bed — they can have control over the bedtime. Uh, and
[00:46:41] Balaji: to some extent, not if they’re in like the military or if they’re
[00:46:44] Bryan: to some, to some degree, there’s like some control around sleep.
[00:46:48] Bryan: Diet is very hard, because you're subject to what's available in your area. Uh, you have prep requirements. So it's very hard to control diet. So we basically said, what are some power laws of [00:47:00] improving our life and extending life? We said diet is, you know, one of the biggest things that we could actually help people solve.
[00:47:06] Bryan: And so we said, okay, what if we take Blueprint and make it into a format that we could get everyone to have access to, and make it lower-cost than fast food? And that's what we set off to do nine months ago, and we've done it. So we're going to drop the product in January. And, uh, we're competing to make the most efficacious product in human history — that it will beat everything ever built.
[00:47:29] Bryan: And initially it will be just a few products: it'll be olive oil, a six-ounce drink, eight pills, and then Super Veggie and Nutty Pudding — these things I eat. Yep. We'll expand out to, uh, covering the entire caloric intake for the entire day, and then add more variety and texture and fun and stuff like that. But we're basically, um — we're trying to solve for what society has done.
[00:47:52] Bryan: The exact opposite of society has built itself to addict us to self destructive behaviors, and they’ve driven the [00:48:00] price down. So it’s very hard, uh, from a, the, you know, from being addicted to the food, but also from an economic perspective to make it the easiest option on multiple fronts. Yep. And we’re trying to, I would love to, to transform the industry.
[00:48:16] Bryan: Uh, to make fast food a bygone era.
[00:48:20] Balaji: I love it. And basically, it's kind of like that saying: what beats a bad guy with a gun? A good guy with a gun. So we talked about the bad capitalists, and what beats a bad guy with a business is a good guy with a business, right? And, you know, it's funny: Warren Buffett, who I respect as an investor and so on, calls Bitcoin "rat poison squared," okay?
[00:48:38] Balaji: But actually, some of Buffett's companies, like See's Candy or Coca-Cola, are actually the ones selling rat poison squared: chocolates and sugars and sugary sodas and all this type of stuff. Now, you're selling, in a sense, defense, shields against that offense. And to the point you make about how fast food is cheap: [00:49:00] it also stores. People can keep chips on the shelf and they'll store for six months, whereas fresh food you have to go and get anew each day. So the cost isn't just the cost of the vegetables, which is maybe sometimes higher than fast food; it's a cost in time and attention to go and get it. Your supply chain becomes much more complicated when you decide to eat fresh food all the time, which of course you're aware of. So what you're doing, in a sense, is solving that implicit supply chain problem: unhealthy carbs sit on the shelf, they last longer, and they're easier to get. You're solving the cost problem and the attention problem. It's essentially subscription health, right?
[00:49:42] Balaji: So you just get, essentially, a meal kit. Is that right? That's right. And it's a meal kit for the busy healthy, and obviously you have to do all the exercise and all the other stuff that you do. And it includes the supplements in there? It does. Okay, great. Yeah. So, that's cool.
[00:49:58] Balaji: And this launches?
[00:49:59] Bryan: [00:50:00] January 5th. Yeah, they'll roll out in the first few weeks of January. Great. Good.
[00:50:05] Balaji: So by the time you see this, it should have launched. Yeah. And I'm going to be the very first customer. I love this kind of stuff.
[00:50:13] Balaji: I was the first investor in Soylent, so I'm just sold. But I think this is the next iteration of that, where it's not just saving you time. It's for the busy healthy who want to get into tip-top physical condition as well, live as long as they can, and maybe live forever.
[00:50:30] Balaji: Okay, awesome. So that was good; that covered a lot of ground. Anything else you wanted to talk about?
[00:50:35] Bryan: Yeah. Going back to the previous point about the conversation that has spun up around Blueprint: I did this project, and really, no one paid attention for two years.
[00:50:46] Bryan: I published the entire thing online. Oh, really? For two years.
[00:50:49] Balaji: And it was basically when you did the Bloomberg thing or something, the selfies? Exactly. The shirtless selfies proved that it worked. Is that right?
[00:50:54] Bryan: Honestly, probably, yes.
[00:50:56] Balaji: Yeah, that's right. I think that's what it is. Proof of workout, right?[00:51:00]
[00:51:00] Balaji: I mean, the funny thing about it is that there is a logic to the illogic of it, right? Basically, if it doesn't visibly work for the person doing it, why would it work for anyone else? What you're doing is almost the inverse. You know this guy Mark Milley? No. He's a senior general in the military, right?
[00:51:16] Balaji: And supposedly the military has certain physical fitness requirements. And here's something I was unaware of: I thought generals were exempted from them, because some generals are out of shape, and he's particularly out of shape. But somebody in the military told me: no, actually, he's not exempted from it.
[00:51:30] Balaji: And the fact that he's out of shape like that shows it's just flagrant abuse of the political system, where a private is punished for not passing these criteria, and a private is punished worse for that than a general is punished for not winning a war, right? So it's actually good when leaders are able to practice what they preach, and so on and so forth.
[00:51:54] Balaji: So I think that's why there's some value in people actually seeing the results. They're like: okay, now I'm interested.
[00:51:58] Bryan: That's right. [00:52:00] What I like about this entire thing is that if, in fact, an algorithm is better at taking care of me than I can take care of myself, then it invites this delicious conversation about the future of being human and the future of society.
[00:52:16] Bryan: And it basically teases out everything that is sacred about our existence. It pulls it out, and it feels offensive; it feels like you're being assaulted. Hmm. For some people. Uh, yeah. For some, yeah. The majority. And so I've been hosting these dinners at my house for the past two years.
[00:52:36] Bryan: It takes two and a half hours to have this conversation.
[00:52:38] Balaji: Of why Don't Die is a reasonable thing?
[00:52:43] Balaji: I always thought it was the most intuitive thing. The reason is: I have so many math textbooks to get through. I want to be able to get to, I don't know, Hungerford's Algebra. You need time to be able to go through these.
[00:52:56] Balaji: Some of these things could take weeks or months to go through, right? [00:53:00] So I thought that was the most intuitive thing ever. But a lot of people are like: oh no, no, death is so sacred, and they have this weird stuff around it.
[00:53:08] Bryan: It's remarkable. Yeah. So it's basically a walkthrough, because society is changing at a speed it's never changed at before, and change is scary for humans, all of us.
[00:53:19] Bryan: And what I try to do is walk people through what's going to happen when things change fast. When what we value is taken away from us because things have changed, and we're given uncertainties and unknowns, how do we psychologically navigate this transition? That's what the two-and-a-half-hour conversation is about.
[00:53:38] Bryan: And the majority of the people that show up comment that it was the most significant conversation they've had in their entire life, that they'll ruminate on it for, oh, even two years. I still get messages about it.
[00:53:50] Balaji: What are their arguments? I mean, obviously I know some of them, but I'd love to hear you recapitulate them.
[00:53:54] Bryan: It basically goes in five stages. In the first stage, I say: if you had access to an algorithm [00:54:00] that could give you the best health of your life, physically, spiritually, mentally, and in exchange for that you would need to do what the algorithm said, go to bed when the algorithm said to, would you say yes or would you say no?
[00:54:13] Bryan: Now, on purpose, it's very high level, and there are a lot of unknowns. People want to know, "But what if blank?" and "Can I have blank?" So it's purposely left abstract. Yes. And then everyone at the table goes around and offers their perspective, like, "I would say yes" or "I'd say no." The majority of people will say no, for a variety of reasons.
[00:54:34] Bryan: And then phase two is I flip the script and say: okay, we just all offered our opinions on this thought experiment. Let's now imagine the 25th century is looking at us and observing, from our answers, what are the things we care about? What are our values? What are our norms? What are our beliefs?
[00:54:50] Bryan: So let's look in the mirror: what are we in this moment? And then they're invited to reflect on that. Their viewpoint is a snapshot in [00:55:00] time, but clearly society is going to move forward, and those values and ideas are going to shift. So how a person answers that question in 2023 is not how they'd answer it in 2040.
[00:55:10] Bryan: So it invites them to be reflective. And now they're in a situation where they're not being defensive about their own positions anymore; they're trying to be aware of what's happening now. They have this incentive of not being stuck in the past; they want to be clever now about what puzzle we're actually trying to solve.
[00:55:28] Bryan: So the next phase is: what is happening in society right now? What are the major trends? How can we say, in simple terms, what's really happening? It's basically this: Homo sapiens have been the dominant form of intelligence, artificial intelligence is now rising, and it's significantly contributing to certain fields.
[00:55:45] Bryan: In some ways it's becoming better than humans. If we map this out over some duration of time, you can see where this goes: it transforms reality in ways we can't understand. And the concept is this: first-principles thinking is where you gather all your knowns, everything you know, and you [00:56:00] branch out your next step.
[00:56:01] Bryan: Zeroth-principle thinking is the unknown unknowns. Examples are the special theory of relativity, where Newtonian physics changes because you have a new dimension to play with, or germs, where beyond the resolution of the eye you have these tiny things causing infection. Those are things you can't deduce from first principles; they just change reality. And so AI is going to introduce a bunch of zeroth-principle changes in society, and that's going to scramble our realities at speeds faster than ever before, creating discomfort around the speed of change and how we deal with it.
[00:56:36] Bryan: And so now, basically, this is the first existential crisis people experience, because now they're like: oh shoot, I've been working my entire life for the following objectives, for this status, this power.
[00:56:50] Balaji: It's a Truman Show kind of experience. All the things fall down and they're like: oh, where am I? What am I doing?
[00:56:54] Balaji: It's like the end of the Soviet Union, from a totally different angle.
[00:56:58] Bryan: Yes. So this is the existential [00:57:00] crisis. Everything they valued: does any of it matter anymore? And then you say, what do they get in return? A bunch of zeroth-principle new changes. So they get everything they care about taken away from them, and they're given a bunch of uncertainty.
[00:57:15] Bryan: So you hit it from both sides, and it creates this catastrophic feeling of: I don't know if I can do this or not.
[00:57:22] Balaji: Right. This is the future shock, the technological shock, as we were talking about. And that's hitting Asia in a totally different way. Asia was under communism or socialism; they were at zero and now they've gotten to one, in the sense that technology, the smartphone, all this stuff, is associated with the rise of China and the rise of India.
[00:57:45] Balaji: So generally speaking, people out here are more optimistic about the future. And in many ways, what you were talking about, that uncertainty and that feeling of decline, is why I think the terms first world and third world, I'm not sure we talked about this, are [00:58:00] no longer applicable.
[00:58:00] Balaji: Instead, I talk about the ascending world and the descending world. That makes sense, right? Because it's about rates of change, about what people feel. People in Brooklyn are wealthier than people in India, but people in Brooklyn are much less optimistic about the future, because they're in a descending world: the legacy media is being competed against by tech, they're losing their influence, etc.
[00:58:22] Balaji: Whereas a villager in India, even though he's at a much lower absolute base, his trajectory is upward, so he feels great about the future. He's optimistic, he's positive, right? And what I like about what you're doing is that it's a relatively simple recipe, in the literal sense, right?
[00:58:47] Balaji: One that people can follow to at least take themselves and point themselves toward ascent. It's also meta-libertarian. You know why? It's opting into constraints, right? Libertarianism says no constraints. [00:59:00] Meta-libertarianism says: I might opt into this algorithm that reduces my freedom in return for a benefit, right?
[00:59:08] Balaji: It's kind of: take away all the constraints, then figure out, okay, this is actually a constraint that does work for us, and opt back into it, you know? This is like bundling, unbundling, rebundling. Everything was stuck on CDs, then you unbundled them into MP3s, then you rebundled them into playlists, right?
[00:59:24] Balaji: And so we had a bunch of conventional ways of eating: you go and get, I don't know, a Big Mac with fries, and this was conventional, and millions of people did it, and it was on TV, so how bad could it be? Or you have the food pyramid, which is horrible with all the grains and so on. So we unbundle that.
[00:59:41] Balaji: We see that the carbs are terrible for you. Then we rebundle it into something like Blueprint.
[00:59:44] Bryan: Yeah, that's exactly right. We're trying to build a new FDA.
[00:59:47] Balaji: Yeah, exactly. We can talk a lot about that.
[00:59:50] Bryan: Then the fourth and the fifth stages. So once you're at that point, people say: what do I do in this circumstance?
[00:59:55] Bryan: Then we talk about practical things that people can do. [01:00:00] Fourth or fifth stage of what? Oh, yeah, the dinner. Oh, the dinner. Yeah. So basically: given that this is the state of play, how might one succeed in this new environment? Yep. And then there's an example I explain of how you build a business, or do work in life, with a first-principle reality versus a zeroth-principle reality.
[01:00:21] Bryan: So imagine: I was in the Middle East talking to a country leader, and he was telling me about his 2030 plans. This was in the year 2017. And I said, that's remarkable. I'm imagining being you and planning 13 years ahead; I don't know how I could do that. The world's going to change so many times.
[01:00:40] Bryan: Yeah, exactly. And once I put that out there, like, "I don't know," he said: okay, how would you think about it? So I did this thought experiment with him on the spot. I said, okay, let's imagine we have two robots in front of us. On the left here, we're going to give the robot a topographical map of the sand dunes.
[01:00:56] Bryan: And we're going to say: go to that end point. You do that, you set the robot [01:01:00] off, and the robot stalls very quickly, because the sands have shifted and the map has changed. On the right-hand side, you've got another robot. You just give that robot the tools to navigate any change in the sand, and you just give it the GPS coordinates of the end point.
[01:01:14] Bryan: The one on the left is a first-principle robot, because you're saying: what can I know about the terrain? What can I know about the robot? And you give it the instructions. But it doesn't take into account zeroth-principle change.
[01:01:30] Bryan: On the right-hand side, you don't really care how the sands shift. You just care about giving the robot the ability to navigate.
[01:01:34] Balaji: I mean, I would call that open-loop versus closed-loop control, right? In the first one, you are not incorporating feedback during the process, and you have no budget for uncertainty. You assume a static environment, and if all of that is true, then you can just shoot for the goal. But in the second, you've got onboard sensors and some ability to turn that sensing into actuation. You have your initial heading, and then you update it, and you keep [01:02:00] moving back toward that direction. That's closed-loop control.
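A minimal sketch of that distinction in code, under a hypothetical one-dimensional "shifting sands" model (the goal position, step count, and gain are illustrative values, not anything specified in the conversation): the open-loop robot replays a plan computed from a stale map, while the closed-loop robot senses the remaining error each step and actuates against it.

```python
# Sketch: open-loop vs. closed-loop control in a 1-D world with unmodeled drift.
import random

GOAL = 10.0   # the "GPS coordinates of the end point"
STEPS = 50

def drift() -> float:
    # The sands shifting: a disturbance the planner's map doesn't capture.
    return random.uniform(-0.5, 0.5)

def open_loop() -> float:
    # Plan once against a static map: move GOAL/STEPS per step, never re-sense.
    pos = 0.0
    for _ in range(STEPS):
        pos += GOAL / STEPS + drift()   # disturbances accumulate uncorrected
    return pos

def closed_loop(gain: float = 0.5) -> float:
    # Sense position each step and actuate in proportion to the remaining error.
    pos = 0.0
    for _ in range(STEPS):
        error = GOAL - pos              # onboard sensing
        pos += gain * error + drift()   # actuation keeps correcting toward the goal
    return pos

if __name__ == "__main__":
    random.seed(0)
    print(f"open-loop final position:   {open_loop():.2f}")
    print(f"closed-loop final position: {closed_loop():.2f}")
```

Run it a few times: the open-loop robot wanders off target as the disturbances pile up, while the closed-loop robot hovers near the goal, because each step burns down the accumulated error.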
[01:02:03] Bryan: Yeah, exactly. And the basic idea on this is that every generation previous to us could reasonably look at the past and say: I've seen the past.
[01:02:14] Bryan: I'm going to model out the future of my life and make reasonable assumptions. Say, for example, I'm going to become a college professor. You can say that at age four: I'm going to study in school, study the following thing, apply, get tenure, do this thing, and then retire.
[01:02:27] Bryan: You can plan out a 70- or 80-year lifespan. That is not the case right now, with the way technology is moving.
[01:02:36] Balaji: And political change, and, you know, a few other things.
[01:02:37] Bryan: Yes. So my son is now a first-year in college. It is impossible for him to map what the world is going to be like in four years. What is he going to study?
[01:02:48] Bryan: He's currently doing physics, math, and CS.
[01:02:51] Balaji: Good fundamentals. So it's funny you say this, because I think math isn't going to change. [01:03:00] Physics and math are always good, but I do think the two things you should get good at today are computer science and stats: computer science and algorithms, and statistics.
[01:03:08] Bryan: And data structures.
[01:03:09] Bryan: Yeah, it's a language. But I guess what I tried to do with this group is to say: for the first time in Homo sapiens' history, the wisest answer to a question may be "I don't know." Which is the biggest shift of intelligence ever, from "I can map out the past, model out the future, and put probabilities on the outcomes" to, given the pace of change and the dynamic range of potential change, the wisest answer may be "I don't know," because the risk is that you bring over your first-principle priors and they lead you astray.
[01:03:49] Balaji: We have complementary lenses on the societal and global situation now. One thing I think a lot about is whether it's exactly the year [01:04:00] 1913 or just comparable to it. In 1913, monarchy still existed: you had kings and nobles and regal people all around Europe, and the U.S. and Russia were basically on the sidelines of the main event, which was Europe.
[01:04:09] Balaji: And yet all this technological change was bubbling underneath. You had factories, you had automobiles, and you had communism bubbling, right? Ideologies. All these things were bubbling.
[01:04:28] Balaji: Yet over it all were kings and queens and so on. And all of that just collapsed in fire and blood from 1914 to 1918 with World War I, which started with guys on horseback and ended with machine guns and passports and, you know, the modern world and the collapse of the Ottomans. All of this change that had been held back just went, voof, like this.
[01:04:51] Balaji: And, you know, Lenin, who was not a good guy, but he did have a good saying: there are decades where nothing happens, and then there are weeks where decades happen. That's right. I also do feel [01:05:00] like the level of change that happened over a few weeks with COVID, for example, was something where normal life was on hold and lots of things were changing really, really fast.
[01:05:09] Balaji: And whether it's a sovereign debt crisis, whether it's physical conflict with China, whether it's political conflict in 2024, whether it's all of the above, whether it's that plus technological shocks, obviously with AI, but I think AI plus crypto, especially if things rise.
[01:05:29] Balaji: All that together is a lot of stuff to handle at the same time. Exactly. That's a lot of stuff. Even for guys like us who ride the lightning and surf vol for a living and so on and so forth, it's a lot of information to manage, right?
[01:05:40] Bryan: 100%. This is where I come back to Don't Die. It's deceptively simple, and it took me a decade to come up with it, but you take this entire environment.
[01:05:50] Bryan: Yeah. You distill it down, and you basically say: okay, what do we do as a species, given this? Right. What do we do? Who has an [01:06:00] answer they want to offer up? Hey, religions, where are you at? What do you have to say? Right, right, right. Hey, capitalism, do you have something to offer up?
[01:06:05] Bryan: Right. Socialism, communism: who has a framework to say what we care about, why we exist, under what terms we work together, how we deal with a potential climate that may not be sustainable for us? How do we understand it? Because right now we have the objective function of capitalism.
[01:06:24] Bryan: Capitalism is the dominant ideology of existence. It's the thing that creates the objective function for all things. And if that remains so, it's going to take us in a certain direction.
[01:06:34] Balaji: I like capitalism, but I don’t love capitalism. I like it because I think you need to make money and you have to balance the books and so on.
[01:06:43] Balaji: But capitalism isn't going to get you to the moon landing, right? That is to say, you need meaning as well. You don't just need to make money; you need to make meaning. Do you know this guy? Gosh, I'm going to misremember his name. He's a senior guy in China; [01:07:00] I think it's Wang Huning or something like that.
[01:07:03] Balaji: He wrote, like 30 years ago, because he was a Chinese guy who came to America, and he's now a very senior advisor to the Communist Party of China and has been one of the most influential thinkers there. And he basically said the thing that balances capitalism in the U.S. is Americans' vision of the future.
[01:07:23] Balaji: That futurism is enough of a spiritual thing, in a sense, a vision of what is possible, that it balances against the extreme practicality of the here-and-now material capitalism of the small business owner. Which is fine; I don't dislike those people. But they're running in a loop, and they're not thinking about really radical change.
[01:07:45] Balaji: Right. Whereas a tech founder is thinking about truly new things, and yeah, there's a vehicle to raise money, to do things, and to reward people in the here and now, but we are thinking about something that is beyond, right? Would you agree with that?
[01:07:59] Bryan: [01:08:00] I do. Yeah. Basically, if I try to map my own intelligence: I make decisions every second of every day. What are the influencing forces
[01:08:11] Bryan: inviting me to do certain things? Sometimes it's my biochemical state, sometimes it's my genetics, sometimes it's capitalism. Yep. So I have nothing against capitalism. I'm just acknowledging that it's a system we've all signed up for that has certain positives and certain negatives, and it leads to certain outcomes.
[01:08:29] Bryan: And what I'm hypothesizing is that in this moment, this special moment after 4.5 billion years, we're baby steps away from superintelligence.
[01:08:38] Balaji: So it's funny, because the way I think about it, and this may be reductive, is that the new capitalism versus communism is tech versus woke. There are different framings: effective accelerationism versus effective altruism, hypergrowth versus degrowth [01:09:00] and deceleration, transhumanism versus anarcho-primitivism. I say it's Uncle Fred versus Uncle Ted, meaning Friedrich Nietzsche, we need to become 2.0, or Ted Kaczynski, we need to destroy industrialized civilization, right?
[01:09:13] Balaji: That is actually, I think, the real axis, not really left versus right. It is: do we ascend to the stars and become superintelligent and immortal and unlock the secrets of the universe? Or, out of a fit of self-hatred, do people destroy industrialized civilization and go back to being apes and, you know, cannibals or whatever, right?
[01:09:38] Balaji: Or just basically animals. You know, is man more than an animal, or is he just an animal? And it's funny, because you will find people who really believe in the natural, the Gaia, et cetera kind of thing. What they don't think about is that what defines humanity is tool use. That's what branched us off from the other primates, many, many generations [01:10:00] ago.
[01:10:00] Balaji: We are fundamentally tool users. Why do we not have fur? Because we invented clothes. A guy named Richard Wrangham wrote a book arguing that because we used fire to cook, we didn't need the same energy cost to metabolize things, and that energy could go to the brain instead. So we externalized various things, and humans can't live without their tools.
[01:10:19] Balaji: Tool use defines humanity, so take it to the next level. There are a lot of people who think we can just go and be apes in the jungle again, right? And I think that is the argument we're going to have. Not everybody is going to agree with us, but we just need to build enough energy that if they want to go live in the jungle, be my guest.
[01:10:34] Balaji: Knock themselves out. Be Uncle Ted, right? Go live in the jungle. The problem is they want to stop us from getting to infinity, and that's where the conflict arises.
[01:10:42] Bryan: So we could be just baby steps away from the most extraordinary existence that's ever happened in the entire galaxy, and we're consumed with our internal squabbles and drama and personal vendettas and [01:11:00] interests.
[01:11:00] Bryan: Is there an opportunity that we realize this moment for what it is, and we sober up and are equal to the moment? This is the conversation I want to have with everything we're doing. I'm trying to invite this: can we step up to this moment?
[01:11:23] Balaji: There are several aligned things here. Even YIMBY is actually somewhat aligned with this, right?
[01:11:29] Balaji: Like, meaning, build, construction, you know? So, as Andreessen says, it's time to build. YIMBY, effective accelerationism, transhumanism, what I call human self-improvement, what you're doing with longevity and Blueprint, what I think we're doing with network states and startup societies: all of those are not exactly the same, but they have a significant overlap in terms of a positive vision for the future. One that is based on building, that has new challenges, that reduces all the way down to an individual action, like literally eat this today versus [01:12:00] that, but scales all the way up to a civilizational goal of don't die, get to Mars.
[01:12:05] Bryan: Explore the stars.
[01:12:06] Bryan: Yeah. It's interesting: if you frame it that way, there's basically a new group of answers for what we do and why. So I think what these endeavors are acknowledging is that the current whys don't cut it. They're not up to the task, and we need to create new ones.
[01:12:30] Bryan: Yes. And so I think that's probably right. It's this amalgamation of various ways of understanding, if we can weave it into a tapestry.
[01:12:40] Balaji: It's funny. Maybe I'll give one one-liner here. One of my big things is history running in reverse, like we were talking about earlier.
[01:12:47] Balaji: In the late 1800s, Nietzsche wrote that God is dead. And why did he say that? Because enough educated people no longer believed in God, and so the church, which had been the organizing principle for everything, was no [01:13:00] longer powerful enough. And so he envisioned a future of giant wars; he envisioned the 20th century, where people didn't believe in God.
[01:13:06] Balaji: And when they didn't believe in God, they also lost... actually, do you know what the first hit on Google for "eternal life" is? Oh, I can't wait to know. Last I checked, it was Christianity. Okay: because he loved you so much, he gave you eternal life, and so on, right? So when you took away God, you took away eternal life from people, so they had to fill that with something like communism, which promised them at least plenty on this earth, and so on and so forth, right?
[01:13:30] Balaji: Communism, Nazism, and then democratic capitalism, which was the best of those three ideologies. But that was the 20th century. Now, I think, AGI is on the horizon, right? Already, arguably, GPT-4 is a form of AGI, where it can write better than many people, and AlphaGo is certainly a form of it in its domain, where it's better than any human, okay?
[01:13:52] Balaji: Put all that together and you have both the generation and the planning. It's still a digital intelligence, though, and one of my big points of departure [01:14:00] from what people call AI doomers, or decels, or degrowthers, or people who are anti-AI, is that the AI still needs actuators. It needs humans, or it needs autonomous robots, and we don't have enough of those yet.
[01:14:12] Balaji: So it can't just, like, how is it going to stab you, or whatever? It doesn't have the hands to do so, and it won't for a long time. But in the sense of a superintelligence, of something that's wiser than us, smarter than us: well, if Nietzsche said God is dead, now technology is saying God is back, right? And if eternal life went away, right? Yeah.
[01:14:29] Balaji: Okay, so it's in the sense of something that is smarter than us, that we look to for guidance, right? And if the death of God also took away eternal life and replaced it with the state, now the state is failing, and the network is giving us both God and eternal life, right? Now, I know a bunch of people will be extremely offended by all of that. But if you can engineer something that is smarter than any human and that knows all of your historical culture, and you can also engineer a way around death, well, technology can give us what scripture maybe didn't, right?
[01:14:59] Balaji: And, you know, [01:15:00] I believe in the polytheistic version of this, where many different communities will have their own kind of oracle that they crowdfund. And just like priests would maintain the incense at a church, the engineers will maintain the code of this AI, which will be, for example: what would Jesus do? But also: what would Lee Kuan Yew do, or what would Gandhi do, or what would Krishna do, or what would Thor do?
[01:15:23] Balaji: You know what I mean, right? You can imagine that today's AI is just a text box, but tomorrow's AI is a 3D avatar that speaks to you, maybe in VR, in your voice, that knows your history and so on and so forth, and gives you personalized guidance, the same way people use Google, but next, next level.
[01:15:41] Balaji: Yeah. And combining that with this, you know, that might give, I don't know, maybe better adherence to the algorithm, right? If there's an AI personal trainer telling you to do it, right? What would Bryan Johnson do? The AI version of that, right? So anyway, I think some interesting things are [01:16:00] happening.
[01:16:00] Bryan: Yeah, I agree with you on those things. So, if you're listening, this is a spoiler alert for my book, Don't Die, so don't listen if you don't want to hear this. The most radical idea I put forward in this book is that it seems likely, maybe even inevitable, on this path, that with algorithmic ability improving over time, we will no longer have the functional elements of free will.
[01:16:28] Bryan: Whatever your opinion of free will is, whether you think we have it or don't, that's beside the point. We will opt in to the system, and whether or not we still feel like we have it, it's like: yeah, I've got free will, but we're really being run by these larger computational systems.
[01:16:48] Bryan: But as a species, we want that. That path is actually the greatest source of liberation we could ever imagine. And so it's very counterintuitive, and it touches the most [01:17:00] sacred thing we have as humans. When we think about ourselves, we think about our intellect, what we know, what we can say, how we feel, what we can express as a preference.
[01:17:08] Bryan: And our perceived autonomy to do any given thing at any given moment.
[01:17:15] Balaji: This is similar to what we were talking about earlier, meta-libertarianism, right? Now, of course, what people will ask in all this is: okay, who's running that algorithm? Am I running the algorithm? And so you're opting into that constraint.
[01:17:25] Balaji: I think the high-level ethical justification for it is that it's like going to boot camp, right? Like Marine boot camp: you're opting into that constraint, and you know you'll want to opt out in the middle of it, but you've opted in. It's almost like signing a social contract.
[01:17:38] Bryan: Yeah, basically this is the same thing as taking Ozempic, because you're saying: I'm going to take this drug, it's going to modify my conscious experience so that I no longer experience hunger, and that's going to have this positive effect where I lose weight, side effects aside.
[01:17:54] Bryan: You're basically saying: I'm willing to do this to modify my biochemical state, [01:18:00] because I have this objective. And so it's not a far-fetched idea; we already do it in so many ways of life.
[01:18:06] Balaji: It's sort of meta-free-will, in a sense. I mean, there are a few different ways of thinking about free will, right?
[01:18:10] Balaji: The first is: how predictable is one's behavior? There are MRI studies where, a small time interval before someone raises their left hand or their right hand, you can predict which one they're going to raise.
[01:18:30] Balaji: Which arguably means they don't have free will, in some sense, during that time period: once that signal fires, you know which hand is going to be raised. But there's another concept. Have you heard of higher-order wants? Yeah. So it's like: I want to eat this cupcake, but my higher-order want is to not want to eat this cupcake, right? So you're talking about operating at the level of the control plane here, right?
[01:18:48] Balaji: Yeah. Which is, again, the meta version, right? So anyway, that's cool.
[01:18:53] Bryan: Yes. Actually, Sapolsky's new book, Determined: he did a phenomenal job. After reading that book, I [01:19:00] basically learned to never again express an opinion on free will. He does such a marvelous job of examining it, from quantum theory all the way to genetics.
[01:19:10] Balaji: Does he actually believe that quantum effects are influential for free will? Or is it just, like, the Schrödinger-type stuff with observers?
[01:19:16] Bryan: Yeah, he basically threw it in the ring. He said: to understand the free will argument, you need to look at it as a neuroscientist, as a physicist, as a geneticist, and so on. Here are all the frames, and then you have to bundle them all together.
[01:19:31] Bryan: And this is how you understand the free will discussion.
[01:19:32] Balaji: With something like China's data sets, for example, with WeChat, I wouldn't be surprised if you could predict life histories. And this is the kind of stuff where obviously there are privacy considerations and so on, but China doesn't care about that.
[01:19:46] Balaji: So just in the sense of, is it technically even possible? I bet they could predict a lot of life outcomes from that data. And here's the Western version of that: you take all of [01:20:00] people's past eating history or something, and, you remember the thing you were saying about methylation, it's like: your fork in the road is here, and now you can knock it over here. You're like: this is the trajectory you're on, this is what we're predicting; you intervene with Blueprint or something like that, and now you're on this other trajectory, you know? So, anyway.
[01:20:14] Bryan: Yeah. Cool. Wrap? Yes. Wrap. Boom. That was so great. That was fantastic. I really enjoyed hanging out.
[01:20:19] Balaji: Yes. And people, this is available online. Where is it, blueprint.com, is that right?
[01:20:24] Bryan: blueprint.bryanjohnson.com.
[01:20:25] Balaji: blueprint.bryanjohnson.com. Okay, great. See you guys later.