Transcript for #2117 - Ray Kurzweil
SPEAKER_00
00:04 - 00:12
The Joe Rogan Experience. Train by day, Joe Rogan podcast by night. All day.
SPEAKER_03
00:12 - 00:21
Nice to see you, sir. Great to see you. I was just telling Tony before, I'm admiring your suspenders, and you told me you have how many pairs of these things? Three of them. Yep.
SPEAKER_01
00:21 - 00:24
How did you know? I wear them every day. Do you really? Every day.
SPEAKER_03
00:24 - 00:28
Why do you like suspenders? Is it a practicality thing?
SPEAKER_01
00:29 - 00:47
No, it expresses my personality, and different ones have different personalities that express how I feel that day.
SPEAKER_03
00:48 - 00:50
I see. So it's just another style point.
SPEAKER_01
00:50 - 00:57
Yeah. See, the thing is, you don't see many hand-painted suspenders. Have you ever seen one?
SPEAKER_03
00:57 - 01:13
I don't know. I would not have noticed. I only noticed because you were here. I'm not really a suspender aficionado. But the reason why I'm asking is because you're basically a technologist. I mean, you know a lot about technology, and you would think that suspenders are kind of outdated tech.
SPEAKER_01
01:17 - 01:19
Well, people like them.
SPEAKER_00
01:19 - 01:19
Clearly.
SPEAKER_01
01:19 - 01:30
Yeah. And I'm surprised they haven't caught on. But you have to have somebody who can actually paint them. I mean, these are hand-painted suspenders.
SPEAKER_03
01:30 - 01:33
So the ones that you have right here, these are hand-painted?
SPEAKER_02
01:33 - 01:33
Yeah.
SPEAKER_03
01:33 - 01:39
Interesting. Okay. So that's part of it. So you're wearing art. Exactly. Got it.
SPEAKER_01
01:39 - 02:13
So art is part of technology. We're using technology to create art now. That's true. In fact, I've been in AI for 61 years, which is actually a record. And the very first thing I did was create something that could write music. Writing music with AI is a major field today, but this was actually the first time it had ever been done.
SPEAKER_03
02:13 - 02:23
Yeah, that was one of your many inventions. That was the first one, yeah. So why did you go about doing that? What was your desire to create artificial intelligence music?
SPEAKER_01
02:24 - 02:59
Well, my father was a musician, and I felt this would be a good way to relate to him, and he actually worked with me on it. And you could feed in music, like if you feed in, let's say Mozart or Chopin, and it would figure out how they created melodies and then write melodies in the same style. So you can actually tell this is Mozart, this is Chopin. It wasn't as good, but it's the first time that that has been done.
SPEAKER_03
03:01 - 03:08
It wasn't as good then. What are the capabilities now? Because now they can do some pretty extraordinary things.
SPEAKER_01
03:08 - 03:21
Yeah, it's still not up to what humans can do, but it's getting there. And it's actually pleasant to listen to. We still have a while to go on art, both visual art, music, and so on.
SPEAKER_03
03:23 - 03:58
Well, one of the main arguments against AI art comes from actual artists, who are upset because essentially what it's doing is, you could say, create a painting in the style of Frank Frazetta, for instance, and what it would do is take all of Frazetta's work that he's ever done, which is all documented on the internet, and then create an image that's representative of that. So in one way or another, you're kind of taking from the artist.
SPEAKER_01
03:58 - 04:14
Right. But it's not quite as good. It will be as good. I mean, I think we'll match humans by 2029; that's been my prediction. It's not as good yet.
SPEAKER_03
04:14 - 04:16
Which is the best image generator right now, Jamie?
SPEAKER_00
04:17 - 04:28
They really change almost from day to day right now, but Midjourney was a much better one at first, and then DALL-E, I think, is a really good one too.
SPEAKER_03
04:28 - 04:41
Midjourney is incredibly impressive. Incredibly impressive graphics. I've seen some of the Midjourney stuff; it's mind-blowing. And it's not quite as good, but boy, it's so much better than it was five years ago. That's what's scary. Yeah.
SPEAKER_01
04:41 - 04:49
It's so quick. I mean, it's never going to reach its limit. We're not going to get to a point where, okay, this is how good it's going to be. It's going to keep getting better.
SPEAKER_03
04:51 - 04:59
And what would that look like? If it can get to a certain point, will it far exceed what human creativity is capable of?
SPEAKER_01
04:59 - 05:16
Yes. I mean, when we reach the ability of humans, it's not going to just match one human. It's going to match all humans. It's going to do everything that any human can do. If it's playing a game, like Go, it's going to play it better than any human.
SPEAKER_03
05:16 - 05:29
Right. Well, that's already been proven, right? They have invented moves. AI has invented moves that have now been implemented by humans. Right. In a very complex game that they never thought AI was going to be able to play, because it requires so much creativity.
SPEAKER_01
05:29 - 05:36
Right. In art, we're not quite there, but we will be there. And by 2029, it will match any person.
SPEAKER_03
05:42 - 05:47
That's it, 2029. That's just a few years away.
SPEAKER_01
05:47 - 06:25
I'm actually considered conservative. People think that will happen next year or the year after. I actually said that in 1999. I said we would match any person by 2029, so 30 years. People thought that was totally crazy. And in fact, Stanford had a conference; they invited several hundred people from around the world to talk about my prediction. People came in, and people thought that this would happen, but not by 2029; they thought it would take a hundred years.
SPEAKER_03
06:25 - 06:35
Yeah, I've heard that, but I think people are amending those predictions. Is it because human beings have a very difficult time grasping the concept of exponential growth?
SPEAKER_01
06:36 - 07:00
That's exactly right. In fact, economists still have a linear view. And if you say, well, it's going to grow exponentially, they say, yeah, but maybe 2% a year. It actually doubles in 14 years. And I brought a chart I can show you that really illustrates this.
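As a side note, the relationship between an annual growth rate and its doubling time, which is the linear-versus-exponential point being made here, can be checked in a few lines of Python; the exact figures quoted are hard to make out in the audio, so the numbers below are illustrative rather than a transcription of the math discussed:

```python
import math

def doubling_time(annual_rate: float) -> float:
    """Years for a quantity growing at `annual_rate` (0.02 = 2%) to double."""
    return math.log(2) / math.log(1 + annual_rate)

def rate_for_doubling(years: float) -> float:
    """Annual growth rate implied by doubling every `years` years."""
    return 2 ** (1 / years) - 1

# A steady 2%-per-year growth rate doubles in about 35 years...
print(round(doubling_time(0.02), 1))          # 35.0
# ...while doubling every 14 years implies roughly 5.1% annual growth.
print(round(rate_for_doubling(14) * 100, 1))  # 5.1
```

Either direction makes the same point: small, steady percentage growth compounds into doublings on a fixed schedule, which is what a linear intuition misses.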
SPEAKER_03
07:06 - 07:20
Is this chart available online so we can show people? Yeah, it's in the book, but is it available online, so that Jamie can pull it up and someone could see it? Just so the folks watching the podcast could see it too. Or I could just hold it up to the camera.
SPEAKER_00
07:20 - 07:22
It says "Price-Performance of Computation, 1939 to 2023."
SPEAKER_03
07:29 - 07:34
You have it? Okay, great. Jamie, you have it. Yeah, the climb is insane.
SPEAKER_01
07:34 - 10:23
It's an interesting chart. It's an exponential curve, and a straight line represents exponential growth. And that's an absolute straight line for 80 years. The very first point, this is the speed of computers, was 0.000007 calculations per second per constant dollar. The last point is 35 billion calculations per second per constant dollar. So there's a 20 quadrillion-fold increase in those 80 years. But the speed with which it gained is actually the same throughout the entire 80 years. Because if it was sometimes better and sometimes worse, this curve would bend. It would bend up and down. It's really very much a straight line. So the speed with which it increased was the same regardless of the technology used. And the technology was radically different at the beginning versus the end. And yet it increased in speed exactly the same for 80 years. In fact, for the first 40 years, nobody even knew this was happening. So it's not like somebody was in charge and saying, okay, next year we have to get to here, and people would try to match that. We didn't even know this was happening for 40 years. Forty years later, I noticed this, and for various reasons I predicted it would keep increasing at the same speed each year, which it has. In fact, we just put the last dot on like two weeks ago, and it's exactly where it should be. So technology, and computation is certainly the prime form of technology, increases at the same speed. And this goes through war and peace. You might say, well, maybe it's greater during war. No, it's exactly the same. You can't tell where there's war or peace or anything else on here. It just matches from one type of technology to the next. And it's also true of other things. For example, getting energy from the sun, that's also exponential. It's also just like this. We're now getting about a thousand times as much energy from the sun as we did 20 years ago.
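The chart arithmetic described here can be reproduced from the two endpoints quoted; the first figure is garbled in the recording and is taken here as 0.000007 calculations per second per constant dollar, so treat the output as illustrative rather than exact:

```python
import math

# Endpoints as quoted in the conversation (the first value is hard to hear
# and may be imprecise): calculations per second per constant dollar.
first_point = 0.000007   # 1939
last_point = 35e9        # 2023

fold_increase = last_point / first_point        # total improvement
doublings = math.log2(fold_increase)            # number of doublings
years_per_doubling = (2023 - 1939) / doublings  # implied doubling period

print(f"{fold_increase:.1e}")        # 5.0e+15
print(round(doublings, 1))           # 52.1
print(round(years_per_doubling, 2))  # 1.61
```

With these endpoints the fold increase comes out near five quadrillion; the steadiness of the implied doubling period across 84 years, not the exact headline figure, is the point the chart is making.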
SPEAKER_03
10:23 - 10:39
Because of the implementation of solar panels and the like? Has the efficiency of it increased exponentially as well? Because what I had understood was that there was a bottleneck in the technology as far as how much you could extract from the sun with those panels.
SPEAKER_01
10:40 - 10:59
No, not at all. No. I mean, it's improved 99.7% since we started. Right. And it does the same every year. It's an exponential curve. And if you look at the curve, we'll be getting 100% of all the energy we need in 10 years.
SPEAKER_03
11:01 - 11:15
The person who told me that was Elon, and Elon was telling me that this is the reason why you can't have a fully solar powered electric car, because it's not capable of absorbing that much from the sun with a small panel like that. He said there's a physical limitation in the panel size.
SPEAKER_01
11:15 - 11:50
No. I mean, it's improved 99.7% since we started. Since what year? That's about 35 years ago. Thirty-five years ago, and 99 percent? And 99 percent, both the ability of it as well as the expansion of use. I mean, you might have to store it. We're also making exponential gains in storage of electricity. Right, battery technology. So you don't have to get it all from a solar panel that fits in a car.
SPEAKER_03
11:52 - 12:05
The concept was, like, could you make a solar paneled car? A car that has solar panels on the roof, and would that be enough to power the car? And he said, no, he said it's just not really there yet.
SPEAKER_01
12:05 - 12:09
Right, it's not there yet, but it will be there in 10 years.
SPEAKER_03
12:09 - 12:19
You think so? Yeah, he seemed to doubt that. He thought that there's a limitation on the amount of energy you can get from the sun, period, how much it gives out, and how much the solar panels can absorb.
SPEAKER_01
12:20 - 12:25
Well, you're not going to be able to get it all from the solar panel that fits in a car. You're going to have to store some of that energy.
SPEAKER_03
12:26 - 12:40
Right, so you wouldn't just be able to drive indefinitely on solar power. Yeah, that was what he was saying. But you can obviously power a house, especially if you have a roof; Tesla has those solar roofs now.
SPEAKER_01
12:40 - 12:56
But you can also store the energy for a car. We're going to go to all renewable energy, wind and sun within 10 years, including our ability to store the energy.
SPEAKER_03
12:57 - 13:02
All renewable in 10 years. So what are they going to do with all these nuclear plants and coal power plants?
SPEAKER_01
13:02 - 13:14
That's completely unnecessary. People say we need nuclear power, which we don't. I mean, you can get it all from the sun and wind within 10 years.
SPEAKER_03
13:14 - 13:24
So in 10 years, you'll be able to power Los Angeles with sun and wind. Yes, really. I was not aware that we were anywhere near that kind of timeline.
SPEAKER_01
13:25 - 13:30
That's because people are not taking into account exponential growth.
SPEAKER_03
13:30 - 13:49
So the exponential growth also of the grid? Because just to pull the amount of power that you would need to charge, you know, X million vehicles, if everyone has an electric vehicle by 2035, let's say, then just the amount of change you would need on the grid would be pretty substantial.
SPEAKER_01
13:49 - 13:53
Well, we're making exponential gains on that as well. Are we? Yeah.
SPEAKER_03
13:53 - 14:05
Yeah. I wasn't aware. I had this impression that there was a problem with that, and especially in Los Angeles, they've actually asked people at certain times not to charge.
SPEAKER_01
14:05 - 14:14
I'm not looking at now. That's true now, but it's growing exponentially. In every field of technology then, essentially.
SPEAKER_02
14:14 - 14:14
Yeah.
SPEAKER_03
14:16 - 14:28
Is the bottleneck battery technology? And how close are they to solving some of these problems with, like, conflict minerals and the things that we need in order to power these batteries?
SPEAKER_01
14:28 - 14:40
I mean, our ability to store energy is also growing exponentially. So putting all that together, we'll be able to power everything we need within 10 years.
SPEAKER_03
14:41 - 14:51
Wow. Most people don't think that. So you're thinking that based on this idea that there isn't a limit, that the computation grows like this.
SPEAKER_01
14:51 - 15:21
It's just continuing to do that. And so we have large language models, for example. No one expected that to happen, like, five years ago. And we had them two years ago, but they didn't work very well. So it began a little less than two years ago that we could actually do large language models. And that was very much a surprise to everybody. So that's probably the primary example of exponential growth.
SPEAKER_03
15:22 - 15:41
We had Sam Altman on, and one of the things that he and I were talking about was that AI figured out a way to lie. They used AI to go through a CAPTCHA system, and the AI told the system that it was vision impaired, which is not technically a lie, but it used that to bypass the "are you a robot" check.
SPEAKER_01
15:41 - 16:09
Well, there's no way now for large language models to say they don't know something. So you ask it a question, and if the answer to that question is not in the system, it still comes up with an answer. It will look at everything and give you its best answer, and if the best answer is not there, it still gives you an answer. But that's considered a hallucination. A hallucination? Yeah, that's what it's called.
SPEAKER_03
16:09 - 16:12
So, AI hallucination. So they cannot know that they're wrong?
SPEAKER_01
16:13 - 16:24
They can't, so far. We're actually working on it being able to tell if it doesn't know something. So if you ask it something, it can say, oh, I don't know that. Right now it can't do that.
SPEAKER_03
16:24 - 16:26
Oh, wow. That's interesting.
SPEAKER_01
16:26 - 17:18
So it gives you some answer. And if the answer's not there, it just makes something up. It's the best answer, but the best answer isn't very good, because it doesn't know the answer. And the way to fix hallucinations is to actually give it more capability to memorize things and give it more information, so it knows the answer. If you tell it the answer to a question, it will remember that and give you that correct answer. But these models don't know everything. We'd have to be able to scan in the answer to every single question, which we can't quite do. It would actually be better if it could answer, well, I don't know that.
SPEAKER_03
17:19 - 17:33
Right. Like, in particular, say, when it comes to exploration of the universe: there's a vast amount of the universe we have not explored. So if it has to answer questions about that, it would just come up with an answer.
SPEAKER_01
17:33 - 17:36
Right. It'll just come up with an answer which will likely be wrong.
SPEAKER_03
17:36 - 17:46
Hmm. That's interesting. But that would be a real problem if someone was counting on the AI to have a solution for something too soon. Right.
SPEAKER_01
17:47 - 18:12
Right, they don't know everything. Search engines are actually pretty well vetted, and if one actually answers something, it's usually correct, unless it's curated. But large language models don't have that capability. So it'd be good, actually, if they knew that they were wrong; they'd also tell us what we have to fix.
SPEAKER_03
18:13 - 18:23
What about the idea that AI models are influenced by ideology? That AI models have been programmed with certain ideologies?
SPEAKER_01
18:23 - 19:01
I mean, they do learn from people, and people have ideologies, some of which are not correct. And that's a large way in which it will make things up, because it's learning from people. So right now, if a model has access to a good search engine, it will check with the search engine before it actually answers something, to make sure that it's correct. Because search engines are generally much more accurate.
SPEAKER_03
19:02 - 19:31
Generally. When it comes to this idea that people enter information into a computer and then the computer relies on an ideology, do you anticipate that artificial general intelligence will be agnostic to ideology? That it will be able to reach a point where, instead of deciding things based on social norms or whatever the culture currently accepts, it would look at things more objectively and rationally?
SPEAKER_01
19:32 - 20:36
Well, eventually, but we'd still call it artificial general intelligence even if it didn't do that. People certainly are influenced by whatever the people they respect feel is correct, and it will be as influenced as people are. I'd still call it artificial general intelligence. We are starting to check what large language models come up with against search engines, and that's actually making them more correct. But we have to continue on this curve. We need more data, to be able to store everything. There's not enough storage to hold everything correctly; there's a large amount in large language models for which we don't have storage for the data.
SPEAKER_03
20:36 - 20:38
So that's what's holding us back: data and storage.
SPEAKER_01
20:38 - 20:51
Yeah, and we also have to have the correct storage. So that's really where the effort is going: to be able to get rid of these hallucinations.
SPEAKER_03
20:51 - 20:55
That's a funny thing to say, hallucinations, in terms of artificial intelligence.
SPEAKER_01
20:56 - 22:15
Well, humans usually come up with wrong things too. "Large language model" is not really the correct way to talk about this. It does know language, but there are a lot of other things it knows. We're using them now to come up with medicines. For example, the Moderna vaccine: we wrote down every possible type of medicine that might work. It was actually several billion mRNA sequences. And we then tested them all and did that in two days. So it actually came up with, tested several billion, and decided on one in two days. We then tested it with people. We'll be able to overcome that as well, because we'll be able to test it with machines. But we actually tested it with people for 10 months, and that was still a record.
SPEAKER_03
22:15 - 22:38
So for machines, when they start testing medications with machines, how will they audit that? The concept would be that you take into account biological variability, all the different factors that would lead a person to have an adverse reaction to a certain compound, and then you program in all the known data about how things interact with the body.
SPEAKER_01
22:38 - 22:43
Right. I mean, you need to be able to simulate all the different possibilities. Right.
SPEAKER_03
22:43 - 22:48
And then it would come up with, like, a number of how many people will be adversely affected by something.
SPEAKER_01
22:48 - 22:50
That's one of the things you would look at.
SPEAKER_03
22:50 - 22:53
And then efficacy based on age.
SPEAKER_01
22:53 - 22:58
But that could be done literally in a matter of days rather than years. Right.
SPEAKER_03
23:03 - 23:23
But the question would be, who's in charge of that data, and how does it get resolved? If artificial intelligence is still prone to hallucinations and they start using those hallucinations to justify medications, that could be a bit of an issue, especially if it's controlled by a corporation that wants to make a lot of money.
SPEAKER_01
23:23 - 23:27
Well, that's the issue: to be able to do it correctly.
SPEAKER_03
23:27 - 23:36
So there's going to have to be a point in time where we all decide that artificial intelligence has reached this place where we can trust it implicitly.
SPEAKER_01
23:36 - 24:06
Right. Well, that's why, for now, they take the leading candidate and actually test it with people. But we'll be able to get rid of the testing with people once we can rely on the simulation. So we've got to make the simulations correct. But right now, we actually test with people, and that took about 10 months in this case.
SPEAKER_03
24:07 - 24:24
When you look at artificial intelligence, and you look at the expansion of it, and the ultimate place that it will eventually be, what do you see happening inside of our lifetime, like inside of 20 years? What kind of revolutionary changes will this have on society?
SPEAKER_01
24:24 - 25:14
Well, one thing I feel will happen in five years, by 2029, is we'll reach longevity escape velocity. So right now you go through a year and you use up a year of your longevity; you're then a year older. However, we do have scientific progress, and we're coming up with new cures for diseases and so on. Right now, you're getting back about four months. So you lose a year, but through scientific progress, you're getting back four months, so you're only losing eight months. However, the scientific progress is progressing exponentially, and by 2029 you'll get back a full year. So you lose a year, but you get back a year, and you're pretty much staying in the same place.
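The escape-velocity arithmetic here can be sketched directly from the two figures given, roughly four months returned per year today and a full twelve by 2029; the exponential interpolation between those two points is an illustrative assumption matching the "exponential progress" claim, not something stated in the conversation:

```python
# Two stated data points: ~4 months of longevity returned per year now
# (assumed to be 2024), and a full 12 months by 2029. Interpolate
# exponentially between them (an assumption for illustration).
start_year, end_year = 2024, 2029
growth = (12 / 4) ** (1 / (end_year - start_year))  # ~1.25x per year

for year in range(start_year, end_year + 1):
    returned = 4 * growth ** (year - start_year)  # months given back that year
    net_aging = 12 - returned                     # net biological aging (months)
    print(year, round(net_aging, 1))
```

The printed net aging falls from 8 months per calendar year down to 0 at 2029, which is exactly what "longevity escape velocity" means in this framing: the year you spend is fully paid back.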
SPEAKER_03
25:14 - 25:17
So by 2029, you'll be static.
SPEAKER_01
25:17 - 25:23
And past 2029, you'll get back more than a year.
SPEAKER_03
25:23 - 25:24
Can I be a baby again?
SPEAKER_01
25:27 - 25:32
No, but in terms of your longevity, you'll get back more than a year.
SPEAKER_03
25:32 - 25:41
Right. So you'll essentially be able to go back in biological age: lengthening of the telomeres, changing the elasticity of the skin.
SPEAKER_01
25:41 - 25:58
Eventually you'll be able to do that. It doesn't guarantee you living forever. I mean, you could have a 10-year-old, and you could compute that he's got many decades of longevity, and he could die tomorrow. So, sure.
SPEAKER_03
25:58 - 26:03
But overall, there'd be an extension of the age at which most people die.
SPEAKER_01
26:04 - 26:24
Right, and that's something that we're going to get. And it's also using the same type of logic as large language models, but that's not language; you're actually creating medications. So we should call them large event models, not large language models, because it's not just dealing with language. It's dealing with all kinds of things.
SPEAKER_03
26:25 - 26:31
When I talked to you 10 years ago, you were telling me about this pretty extensive supplement routine that you're on.
SPEAKER_01
26:31 - 26:59
So I'm trying to get to the point where we reach longevity escape velocity in good shape. Right. And yes, I do follow that. I take maybe 80 pills a day. Wow. Some injections and so on. So far it works. And have you ever gone off of it to see what you feel like normally?
SPEAKER_03
27:01 - 27:03
No. Why do that, right?
SPEAKER_01
27:03 - 27:12
Yeah. I mean, it seems to work. There's evidence behind it. How old are you now? 76. 76.
SPEAKER_03
27:12 - 27:17
You look good. You look good for 76, man. That's great. So it's doing something.
SPEAKER_01
27:17 - 27:20
Yeah. I think it's working.
SPEAKER_03
27:21 - 27:31
And so your goal is to get to that point where you live a year, you stay static, and then eventually get back to youthfulness.
SPEAKER_01
27:31 - 27:39
Right. And it's not that far off. If you're diligent, I think we'll get there by 2029. Not everybody's diligent.
SPEAKER_03
27:39 - 27:50
That's right. Of course. Now, past that, this is life extension, which is great. But what about how AI is going to change society?
SPEAKER_01
27:52 - 30:44
Yes, well, that's a very big issue. And it's already doing lots of things that make some people uncomfortable. What we're actually doing is increasing our intelligence. I mean, right now you have a brain. It has different modules in it that deal with different things. But really, it's able to connect one concept to another concept, and that's what your brain does. We can actually increase that, for example, by carrying around a phone. This has connections in it. It's a little bit of a hassle to use; if I want it to do something, I've got to kind of mess with it. It would actually be good if this actually listened to your conversation. Oh, it does. And without you saying anything, you're just talking, and it says, oh, the name of that actress is so-and-so. Yeah, but then it's a busybody. It's like interfering with your life, talking to you all the time. Well, there's a way of dealing with that too: you shut it off. But we don't. So we haven't done that yet, but that's a way of expanding your connections. That's what a large language model does. It has connections in it as well. And in fact, it's getting now to a point that's fairly comparable to the human brain. We have about a trillion connections in our brain. Things like the top model from Google or GPT-4 have about 400 billion connections, approximately. There'll be a trillion probably within a year. That's pretty comparable to what the human brain does. Eventually we'll go beyond that, and we'll have access to that. So it's basically making us smarter. And if you have the ability to be smarter, that's something that's positive, really. I mean, if we were like mice today, and we had the opportunity to become like humans, we wouldn't object to that. In fact, we are humans, and we don't object to that. We used to be shrews. And this is going to basically make us smarter. Eventually, we'll be much smarter than we are today. And that's a positive thing. We'll be able to do things that today we find bothersome in a way that's much more palatable.
SPEAKER_03
30:45 - 30:50
The idea of us getting smarter sounds great. Great. It'd be great to be smarter.
SPEAKER_01
30:50 - 32:00
Right, but people reject it because it's like competition. In what way? Well, I mean, Google has, I don't know, 67,000 programmers, and how many programmers exist in the world? How much longer is that going to be a viable career? Because large language models already code. Not quite as good as a real expert coder, but how long is that going to last? It's not going to be 100 years. It's going to be a few years. So people see it as competition. I have a slightly different view of that. I see these things actually adding to our own intelligence. And we're merging with these kinds of computers, making ourselves smarter by merging with them. And eventually, it'll go inside our brain and be able to make us smarter instantly, just as if we had more connections inside our own brain.
SPEAKER_03
32:01 - 32:26
Well, I think people always have reservations when it comes to great change. And this is probably the greatest change. The greatest change we've ever experienced in our lifetimes for sure has been the internet, and this will make that look like nothing. It'll change everything. And it seems inevitable. I understand that people are upset about it, but it just seems like what human beings were sort of designed to do.
SPEAKER_01
32:27 - 34:02
Right, we're the only animal that actually creates technology. Yeah, it's a combination of our brain and something else, which is our thumb. So I can imagine something: oh, if I take that leaf and that tree branch, I could create a tool with it. Other animals actually have a bigger brain, like whales, dolphins, elephants; they have a larger brain than we do, but they don't have something equivalent to the thumb. A monkey has what looks like a thumb, but it's actually an inch down; it doesn't actually work very well. So it can actually create a tool, but it doesn't create a tool that's powerful enough to create the next tool. So we're actually able to use our tools and create something that's much more significant. So we can create tools, and that's really part of who we are. It makes us that much more intelligent. And that's a good thing. I mean, here's U.S. personal income per capita. So this is the average amount that we make per person in constant dollars.
SPEAKER_03
34:02 - 34:08
And it's over here, it's on the screen. It shows we make a lot more money, but things cost a lot more money too, right?
SPEAKER_01
34:08 - 34:09
No. It's constant dollars.
SPEAKER_03
34:09 - 34:14
Constant dollars. Constant dollars in relation to inflation.
SPEAKER_01
34:14 - 34:26
Yeah. So it's not just showing you inflation; these are constant dollars. And so we're actually making that much more each year on average.
SPEAKER_03
34:26 - 34:33
So if I remember, it doesn't take into account inflation, correct? So it's not taking into account the rise in the cost of things. No, it is taking that into account.
SPEAKER_01
34:33 - 34:45
Oh, it is. Okay. So we're making that much more in constant dollars. If you look over the past 100 years, we've made about 10 times as much.
SPEAKER_03
34:45 - 35:22
I wonder if there's a similar chart about consumerism, just about material possessions. I wonder how much more we're purchasing and creating. I've always felt like materialism is one of those instincts that human beings sort of look down upon, this aimless pursuit of buying things. But I feel like that motivates technology, because the constant need for the newest, greatest thing is one of the things that fuels the creation and innovation of new things.
SPEAKER_01
35:22 - 35:30
But if you were to go back a hundred years, you'd be very unhappy. Oh, yeah. Because you wouldn't have, I mean, you wouldn't have a computer, for example.
SPEAKER_03
35:30 - 35:34
You wouldn't have anything. You wouldn't have most things you've grown accustomed to.
SPEAKER_01
35:34 - 35:52
Yeah. And that's why you want it. Also, we didn't live very long before medical advancements. The average life was 48 years in 1900. It was 35 years in 1800. Right. Go back a thousand years, it was 20 years.
SPEAKER_03
35:58 - 36:10
That takes into account child mortality too, though, right? But it's also injuries, death. Some people did live long. Like, there were people back then, if nothing happened to you, you did live to be 80 like a normal person.
SPEAKER_01
36:11 - 36:13
That was actually very rare.
SPEAKER_03
36:13 - 36:23
I mean, because things happen to most people. Most people, by the time you get to 80, you've had at least one hospital visit. Something's gone wrong. Broken arm, broken this, broken that.
SPEAKER_01
36:23 - 36:28
It was very rare to make it to 80. Right. 200 years ago.
SPEAKER_03
36:28 - 36:32
But the human body was physically capable of doing it. Right.
SPEAKER_01
36:32 - 36:48
Well, your body can go on forever if you fix things properly. There's nothing in our body that means that you have to die at 100 or even 120. We can go on really indefinitely.
SPEAKER_03
36:48 - 36:57
That's the groundbreaking work today, right? They're treating disease, or excuse me, age, as if it is a disease.
SPEAKER_01
36:57 - 37:04
Right, not just an inevitable consequence. The FDA doesn't accept that, but it's actually beginning to accept it now.
SPEAKER_03
37:04 - 37:28
Why do we get older? Yeah, exactly. They're forced into it. The concept of artificial general intelligence scares a lot of people, also because of Hollywood, right? Because of the Terminator films and things along those lines. Like, how far away do you think we are from actual artificial humans, or will we ever get there? Will we integrate before that takes place?
SPEAKER_01
37:28 - 39:11
I mean, all of the artificial intelligence that we're creating is something that we use, and it's just like it came with us. So we're actually making ourselves more intelligent, and ultimately that's a good thing. And if we have it, and then we say, well, gee, we don't really like this, let's take it away, people would never accept that. They may be against the idea of general intelligence, but once they get it, nobody wants to give it up. And it will be beneficial. The Luddites started 200 years ago because the cotton gin came out. And all these people who had been making money before the cotton gin were against it. And they would actually destroy these machines at night. And they said, gee, if this keeps going, all jobs are going to go away. And indeed, the jobs that existed then did go away. But we actually made more money, because we created things that didn't exist then. We didn't have anything like electronics, for example. And as we can actually see, we make ten times as much in constant dollars as we did a hundred years ago. And if you had asked, well, what are people going to be doing, you couldn't answer it, because we didn't understand the internet, for example.
SPEAKER_03
39:11 - 39:16
And there's probably some technologies down the pipe that are going to have a similar impact.
SPEAKER_01
39:16 - 39:20
Exactly. And they're going to extend life, for example.
SPEAKER_03
39:20 - 39:25
But are they going to create life?
SPEAKER_01
39:25 - 39:46
Well, we know how to create life. We don't? Well, that's an interesting question. What do you mean by create life?
SPEAKER_03
39:46 - 40:31
What I think is that human beings are some sort of a biological caterpillar that makes a cocoon that gives birth to an electronic butterfly. I think we are creating a life form, and that we're merely conduits for this thing, and that all of our instincts and ego and emotions and all these things feed into it, and materials feed into it. We keep buying and keep innovating. And technology keeps increasing exponentially. And eventually it's going to be artificial intelligence. And artificial intelligence is going to create better artificial intelligence, and a form of being that has no limitations in terms of what it's capable of doing, that's capable of traveling anywhere, without any biological limitations.
SPEAKER_01
40:31 - 40:41
But that's going to be ourselves. I mean, we're going to be able to create life that's like humans but far greater than we are today.
SPEAKER_03
40:41 - 40:54
With an integration of technology. If we choose to go that route. But that's the prediction that you have, that we will go that route, like a Neuralink-type deal, something along those lines, so that we don't see this as competition.
SPEAKER_01
40:54 - 41:11
Like the things you're going to know. No, I don't think it's competition. Well, I think it's just... well, it'll just seem like that. I mean, if you have a job doing coding. Right. And suddenly they don't really want you anymore because they can do coding with a large language model. It's going to feel like it's competition.
SPEAKER_03
41:11 - 41:51
Well, there's an issue now with films. Tyler Perry owns, and was building, an $800 million television studio, and he stopped production when he saw... what is it called, Sora? Is that what it's called, Jamie? He stopped production when he saw the capabilities of AI for just creating visuals, scenes, movies. There's one that's incredibly impressive. It's Tokyo. They're walking down the street of Tokyo in the winter. So it's snowing, and they're walking down the street, and you look at it, you know, this is insane. This looks like a film. See if you can find that film, because it's incredible.
SPEAKER_01
41:51 - 41:56
But would you want to get rid of that? Get rid of what? The capability.
SPEAKER_03
41:56 - 42:48
No. No, I don't want to get rid of the capability. But people that make movies do. People that actually film things with cameras and use actors are going to be very upset. So this, this is all fake, which is insane. "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes." And this is what you get. I mean, this is insanely good. The variability, like just the way people are dressed. If you saw this somewhere else... look at this, a robot's life in a cyberpunk setting. If you saw this, you would say, oh, they filmed this. And think of what they'll be able to do with animation and kids' movies.
SPEAKER_01
42:48 - 42:55
Yeah, it's going to get better. Yeah, it's just incredible. I mean, it's a new art form.
SPEAKER_03
42:55 - 42:56
So right there, the smoke looks a little uniform.
SPEAKER_01
42:57 - 43:02
But yeah, I mean, there's some problems with this, but not much.
SPEAKER_03
43:02 - 43:18
And you imagine what it was like five years ago, and then imagine what it's going to be like five years from now. Absolutely. And it's insane. No one took into consideration the idea that kids were going to be cheating on their school papers using ChatGPT, but my kids tell me that's a real problem in school now.
SPEAKER_01
43:20 - 43:24
Yes, definitely.
SPEAKER_03
43:24 - 43:37
So no one saw that coming, no one saw this coming, and what we're at now is ChatGPT-4, right? 4.5? So what is it? Well, 4.5 is coming. 4.5 is coming. 5 is supposed to be the massive leap.
SPEAKER_01
43:40 - 43:49
It'll be a leap, just like three to four was a massive leap. But it's going to continue. It's never going to be finished.
SPEAKER_03
43:49 - 43:55
Right. It'll keep going. And it will also be able to make better versions of itself, correct?
SPEAKER_01
43:55 - 43:59
Yes. Well, we do that. I mean, technology does that already.
SPEAKER_03
43:59 - 44:05
Right. But if you scale that out a hundred years from now, what are you looking at? You're looking at a God.
SPEAKER_01
44:07 - 44:08
Well, it'll be less than 100 years.
SPEAKER_03
44:08 - 44:11
I mean, so you're looking at a God in 50 years?
SPEAKER_01
44:13 - 44:27
Less than that. I mean, once we have the ability to emulate everything that humans can do, and that's not just one human, but all humans. Yes. And that's only like 2029. That's only five years from now.
SPEAKER_03
44:27 - 44:39
And then it will make better versions of that. So it will probably solve a lot of the problems that we have in terms of energy storage, data storage, data speeds, computation speeds, and also medications.
SPEAKER_01
44:41 - 44:43
For us, for humans, yeah.
SPEAKER_03
44:43 - 44:51
Wouldn't it be better, Ray, to just download yourself into this beautiful electronic body? Why do you want to be biological?
SPEAKER_01
44:51 - 46:22
I mean, ultimately that's what we're going to be able to do. You think that's going to happen? Yeah. So do you think that we'll be able to... I mean, the singularity is when we multiply our intelligence a millionfold, and that's 2045. So that's not that long from now. That's like 20 years from now. Right. And therefore most of your intelligence will be handled by the computer part of ourselves. The only thing that won't be captured is what comes with our body originally. We'll ultimately be able to do that as well. It'll take a little longer, but we'll be able to actually capture what comes with our normal body and be able to recreate that. So that also has to do with how long we live, because if everything is backed up... I mean, right now, any time you put anything into a phone or any kind of electronics, it's backed up. So, I mean, this has a lot of data. I could flip it and it ends up in a river and we can't capture it anymore, but I can recreate it because it's all backed up. That's going to be the case with consciousness. That's going to be the case with our normal biological body as well.
SPEAKER_03
46:22 - 46:50
What's to stop someone like Donald Trump from just making 100,000 versions of himself? If you can back someone up, could you duplicate him? Couldn't you have three or four of them? Couldn't you have a bunch of them? Couldn't you live multiple lives? Yes. Would you be interacting with each other while you're living multiple lives? Having consultations about what is St. Louis Ray doing? Well, I don't know. Let's talk to San Francisco Ray. San Francisco Ray is talking to Florida Ray.
SPEAKER_01
46:53 - 47:02
It's basically a matter of increasing our intelligence and being able to multiply Donald Trump, for example, that comes with that.
SPEAKER_03
47:02 - 47:08
Do you think there'll be regulations on that? To stop people from making 100,000 versions of themselves in perpetuity?
SPEAKER_01
47:08 - 47:20
There'll be lots of regulations. Look at the regulations we have already: you can't just create a medication and sell it to people, claiming it cures a disease. We have a tremendous amount of regulation.
SPEAKER_03
47:20 - 47:27
Sure, but we don't really with phones. Like, with your phone, you could essentially, if you had the money, make as many copies of that as you wanted.
SPEAKER_01
47:30 - 47:43
There are some regulations, we regulate everything, but you're right, generally electronics doesn't have as much regulation.
SPEAKER_03
47:43 - 47:46
And when you get to a certain point, we will be electronics.
SPEAKER_01
47:48 - 48:01
Yes. I mean, certainly, if we multiply our intelligence a millionfold, everything in that additional millionfold is not regulated.
SPEAKER_03
48:01 - 48:20
Right. When you think about the concept of integration and technological integration, when do you think that will start taking place and what will be the initial usage of it? Like what will be the first versions and what would they provide?
SPEAKER_01
48:20 - 48:23
Well, we have it now. A large language model is pretty impressive.
SPEAKER_03
48:23 - 48:29
And if you look at what they can do... I mean, I'm talking about physical integration with a human body, like a Neuralink-type thing.
SPEAKER_01
48:31 - 48:42
Right. Some people feel that we could actually understand what's going on in your brain, and actually put things into your brain, without actually going into the brain, versus something like Neuralink.
SPEAKER_03
48:42 - 48:45
So something that like sits on the outside of your head?
SPEAKER_01
48:45 - 49:35
Yeah. It's not clear to me whether that's feasible or not. I've been assuming that you actually go in. Neuralink isn't exactly what we want, because it's too slow, but it actually will do what it's advertised to do. Like, I actually know some people like this, who were active people, and they completely lost the ability to speak and to understand language and so on. And so they can't actually say anything to you. And we can use something like Neuralink to actually have them express something. They could think something and then have it be expressed to you.
SPEAKER_03
49:35 - 49:43
Right, and they're doing that, right? They had the first patient, and apparently that person can move a cursor around a screen.
SPEAKER_01
49:43 - 49:54
Right, and it can do that, but it's fairly slow. Neuralink is slow. If you really want to extend your brain, you need to do it at a much faster pace.
SPEAKER_03
49:54 - 50:01
But isn't that going to increase exponentially as well? Yes, absolutely. So how long do you think it'll be before it's implemented?
SPEAKER_01
50:01 - 50:19
Well, it's got to be by 2045, because that's when the singularity happens and we can actually multiply our intelligence on the order of a millionfold.
SPEAKER_03
50:19 - 50:29
And when you say 2045, what is the source of that estimation?
SPEAKER_01
50:29 - 50:58
It's based actually on this chart, and also on the increase in the ability of software to expand. We'll be able to multiply our intelligence a millionfold, and we'll be able to put that inside of our brain. It will be just like it's part of our brain.
SPEAKER_03
50:58 - 51:21
So this is just following the current graph of progress. Yeah, exactly. So if you follow the current graph of progress, and if you do understand the exponential growth, then what we're looking at in 2045 is inevitable. Right. Does that concern you at all, or are you excited about it? Do you think it's just a thing that is happening, and you're part of it and you're experiencing it?
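The "millionfold by 2045" arithmetic being discussed can be sketched quickly. This is a back-of-the-envelope illustration only, not the data from Kurzweil's actual chart; the assumption that compute price-performance doubles roughly once a year is the hypothetical input here.

```python
import math

def years_to_multiply(factor, doubling_time_years=1.0):
    """Years for an exponentially doubling quantity to grow by `factor`.

    With one doubling per year, a millionfold gain takes about 20 years,
    since 2^20 = 1,048,576 -- roughly the 2025 -> 2045 window discussed.
    """
    return math.log2(factor) * doubling_time_years

print(round(years_to_multiply(1_000_000), 1))       # ~19.9 years at one doubling per year
print(round(years_to_multiply(1_000_000, 1.5), 1))  # ~29.9 years if a doubling takes 18 months
```

The point of the sketch is how sensitive the timeline is to the assumed doubling time: stretch each doubling from 12 to 18 months and the same millionfold gain slips by a decade.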
SPEAKER_01
51:22 - 52:10
I think people will be enthusiastic about it. Imagine if you were to ask a mouse, would you like to actually be as intelligent as a human? I don't know what it would say, but generally, that's a positive thing. Generally. Yeah. And that's what it's going to be like. We're going to be that much smarter. And once we're there, no one's going to say, I don't really like this, I want to be stupid like human beings used to be. Nobody's really going to say that. Do human beings now say, gee, I'm really too smart, I'd really like to be like a mouse?
SPEAKER_03
52:12 - 52:26
Not necessarily, but what people do say is that technology is too invasive. And it's too much a part of my life, and I'd like to sort of have a bit of an electronic vacation and separate from it. And there's a lot of people that I know that have done that.
SPEAKER_01
52:26 - 52:35
But nobody does that. Nobody becomes stupid like we used to be when we were mice.
SPEAKER_03
52:35 - 53:03
Right, but I'm not saying stupid. I'm saying some people just like being a human, the way humans are now. Because one of the complications that also comes with the integration of technology is what we're seeing now with people. Massive increases in anxiety from social media use, being manipulated by algorithms. The effect that it has on culture, misinformation and disinformation, propaganda. There are so many different factors at play now that make people more anxious and more depressed, statistically, than ever.
SPEAKER_01
53:03 - 53:09
I'm not sure we have more anxiety today than we used to have. You're not sure?
SPEAKER_03
53:13 - 53:23
Well, we certainly had more when the Mongols were invading. We certainly had more anxiety when we were worried constantly about war. But I think people have a very heightened level.
SPEAKER_01
53:23 - 53:40
I mean, 80 years ago, we had a hundred million people die in Europe and Asia from World War II. We're very concerned about wars today, and they're terrible. But we're not losing millions of people.
SPEAKER_03
53:42 - 53:52
Right, but we could. We most certainly could, with what's going on with Israel and Gaza, what's going on with Ukraine and Russia. It could easily escalate.
SPEAKER_01
53:52 - 53:57
But it's thousands of people. It's not millions of people for now.
SPEAKER_03
53:57 - 54:02
Yeah, but if it escalates to a hot war where it's involving the entire world,
SPEAKER_01
54:03 - 54:30
What could really cause a tremendous amount of danger is something that's not artificial intelligence; it was invented when I was a child, which is atomic weapons. Right. I remember when I was like five or six, we would go under our desks and put our hands behind our backs to protect us from nuclear war. Yeah, drills. And it seems to have worked; we're still here.
SPEAKER_03
54:31 - 54:34
Do you remember those things where they told kids to get under the desk?
SPEAKER_01
54:34 - 54:36
Yes, that's right.
SPEAKER_03
54:36 - 54:41
We went under the desk and put our hands out, which is hilarious, as if a desk is going to protect you from a nuclear bomb.
SPEAKER_01
54:42 - 54:45
Right, but that's not AI.
SPEAKER_03
54:45 - 55:18
Right. No, but AI applied to nuclear weapons makes them significantly more dangerous. And isn't one of the problems with AI that AI will find a solution to a problem? So if you have AI running your military, and AI says, what do you want me to do, and you say, well, I'd like to take over Taiwan, AI says, well, this is how to do it. And it just implements it with no morals, no thought of any sort of diplomacy, just force.
SPEAKER_01
55:18 - 55:35
Right. It hasn't happened yet, because we do have people in charge, and the people are enhanced with AI, and AI can actually help us to avoid that kind of problem by thinking through the implications of different solutions.
SPEAKER_03
55:35 - 55:47
Sure, if it has some sort of autonomy. But if we get to the point where one superpower has AI, artificial general intelligence, and the other one doesn't, how much of a significant advantage would that be?
SPEAKER_01
55:51 - 56:12
I mean, I do think there are problems. Basically, there are problems with intelligence, and we'd like to stay stupid. But actually, it's better to be intelligent. I believe it's better to have intelligence, for sure.
SPEAKER_03
56:12 - 56:48
Right. But my question was, if there's a race to achieve AGI, how close is this race? Is it neck and neck? Who's in the lead? And how much capital is being put into the companies that are in the lead? And whoever achieves it first... if that is under the control of a government, it's completely dependent upon the morals and ethics of that government. What is its constitution? What if it happens in China? What if it happens in Russia? What if it happens somewhere other than the United States? And even if it does happen in the United States, who's controlling it?
SPEAKER_01
56:49 - 57:19
I mean, the knowledge of how to create these things is pretty widespread. It's not like somebody can just monopolize the way to do it and nobody else understands it. Knowledge of how to create a large language model, or how to create the type of chips that enable you to create this, is actually pretty widespread.
SPEAKER_03
57:19 - 57:33
So do you think essentially the competition is pretty even in all the countries currently? And there's also probably espionage.
SPEAKER_01
57:33 - 57:55
And in terms of differences, the United States actually has superior AI compared to other places. Well, that's good for us. I mean, we're actually way ahead of China, I would say.
SPEAKER_03
57:55 - 58:01
Right, but China has a way of figuring out what we're doing and copying it. Pretty good at that.
SPEAKER_01
58:01 - 58:02
They have been, yeah.
SPEAKER_03
58:02 - 58:45
Yeah. So do you have any concern whatsoever about the idea that AI gets in the hands of the wrong people? So when it first gets implemented, that's the big problem. Before artificial general intelligence really exists, it doesn't, and then it does, and somebody has it. And then once it does, can that AGI stop other people from getting it? Can you program it to make sure? Could you sabotage grids? Could you do whatever you can to take down the internet in these opposing places? Could you inject their computations with viruses? What could you do to stop other people from getting to where you're at, if you have an infinitely superior intelligence?
SPEAKER_01
58:47 - 58:53
First, if that's what your goal is, then yes, you could do that. Are you worried about that at all?
SPEAKER_03
58:53 - 59:10
I'm worried about it. What is your main worry? I'm worried about the implementation of artificial intelligence. What's your main worry?
SPEAKER_01
59:10 - 59:37
I mean, I'm worried that people who have a destructive idea of how to use these capabilities get in control. Right. And that could happen. And I've got a chapter in the book about perils that are like what we're talking about.
SPEAKER_03
59:37 - 59:41
And what do you think that could look like, if the wrong people got hold of this technology?
SPEAKER_01
59:43 - 01:00:24
Well, if you look at who actually controls atomic weapons, which is not AI: some of the worst people in the world. And if you had asked people right after we used two atomic weapons within a week, 80 years ago, what's the likelihood that we're going to go another 80 years and not have that happen again? Everybody would say zero. Right. Right. But it actually has happened. Shocking. Yeah. And I think there's actually some message there.
SPEAKER_03
01:00:24 - 01:01:26
Mutually assured destruction. But the thing is, with artificial general intelligence, would that not happen? Right, it has not happened yet. But would artificial general intelligence in the control of the wrong people negate that mutually assured destruction that keeps people from doing things? Obviously, we did drop bombs on Hiroshima and Nagasaki. We did. We did indiscriminately kill who knows how many hundreds of thousands of people with those weapons. We did it. And human beings were capable of doing it because no one else had it. If artificial general intelligence reaches that sentient level and is in the control of the wrong people, what's to stop them from doing it? There's no mutually assured destruction if you're the one who's got it. You're the only one who's got it. And my concern is that whoever gets it could possibly stop it from being spread everywhere else and control it completely. And then you're looking at a completely dystopian world.
SPEAKER_01
01:01:28 - 01:01:33
Right. So that's, if you ask me what I'm concerned about, it's along those lines.
SPEAKER_03
01:01:33 - 01:01:48
Yeah, that's what I always want to get out of you guys, because there are so many people that are, rightfully so, so high on this technology and the possibilities for enhancing our lives. But the concern that a lot of people have is: at what cost, and what are we signing up for?
SPEAKER_01
01:01:49 - 01:01:57
Right. But I mean, if we want to, for example, live indefinitely, this is what we need to do.
SPEAKER_03
01:01:57 - 01:02:23
We can't do... What if you're denying yourself heaven? Have you ever thought of that possibility? I know it's a ridiculous abstract concept, but if heaven is real, if the idea of the afterlife is real, and it's the next level of existence, and you're constantly going through these cycles of life, what if you're stepping in and artificially denying that? That's hard to imagine. It is hard to imagine, but so is life. So is the universe itself. So is the Big Bang.
SPEAKER_01
01:02:23 - 01:03:12
My father died when I was 22, so that's 50, 60 years ago. It's hard. He was actually a great musician, and he created fantastic music, but he hasn't done that since he died. And there's nothing that exists that is at all creative based on him. We have his memories. I actually created a large language model that represented him. I can actually talk to him. Did you do that? Yeah. It's in the book.
SPEAKER_03
01:03:12 - 01:03:21
When you do that, have you thought about implementing some sort of a Sora type deal where you're talking to him?
SPEAKER_01
01:03:21 - 01:03:24
Well, you can do that now with language.
SPEAKER_03
01:03:24 - 01:03:27
Right, but I mean, basically, like looking at him like you're in a Zoom call with him.
SPEAKER_01
01:03:29 - 01:03:37
That's a little bit in the future to be able to actually capture the way he looks, but that's also feasible.
SPEAKER_03
01:03:37 - 01:03:44
It seems pretty feasible. It could certainly be something representative of what he looks like, based on photographs that you have, right?
SPEAKER_01
01:03:44 - 01:04:15
So things like that are a reason to continue, so that we can create that and create our own ability to continue to exist. You talk to people and they say, well, I don't really want to live past 90, or whatever, 100. But in my mind, if you don't exist, there's nothing for you to experience.
SPEAKER_03
01:04:15 - 01:04:38
That's true. In this dimension. My thought on that, people saying I don't want to live past 90, it's like, okay, are you alive now? Do you like being alive now? What's the difference between you now and at 90? Is it just a number? Or is it deterioration of your physical body? And in that case, how much effort have you put into mitigating the deterioration of your natural body so that you can enjoy life now?
SPEAKER_01
01:04:38 - 01:05:27
Exactly. And we've actually seen who does want to take their life. People do take their lives if they are experiencing something that's miserable, if they're suffering physically, emotionally, mentally, spiritually, and they just cannot stand the way life is carrying on. Then they want to take their lives. Otherwise, people don't. If they're enjoying their lives, they continue. And people say, I don't want to live past 100, but when they get to be 99.9, they don't want to disappear, unless they're suffering.
SPEAKER_03
01:05:27 - 01:05:43
That's what's interesting about the positive aspects of AI. Once we can manipulate human neurochemistry to the point where we figure out what is causing great depression, what is causing anxiety, what is causing schizophrenia in a lot of these people.
SPEAKER_01
01:05:43 - 01:05:47
And we definitely had that before. We didn't have the terms, like schizophrenia. Right.
SPEAKER_03
01:05:47 - 01:06:15
People definitely had it, for sure. But what if we get to a point where we can mitigate that with technology? Where we can say, this is what's going on. That's why we're continuing. Right. I would say that's a good thing. That's a positive aspect of this technology. And think about it also more profoundly: think about how many people do take their lives, who with this technology would not just live happily but also be productive, and also contribute to whatever society is doing.
SPEAKER_01
01:06:15 - 01:06:24
That's why we're carrying on with this. Yes. But in order to do that, we do have to overcome some of the problems that you've articulated.
SPEAKER_03
01:06:26 - 01:07:06
I think what a lot of people are terrified of is the people that are creating this technology. There's oversight, but it's oversight by people that don't necessarily understand it the way the people that are creating it do, and they don't know what guardrails are in place. How safe is this? Especially when it's implemented with some sort of weapons technology, you know, or some sort of a military application, especially a military application that can be insanely profitable, where the motivations behind utilizing it are profit, and then we do horrible things and somehow justify it.
SPEAKER_01
01:07:06 - 01:07:32
I mean, I think democracy is actually an important issue here, because democratic nations tend not to go to war with each other. And, I mean, you look at the way we're handling military technology. If everybody was a democracy, I think there'd be much less war.
SPEAKER_03
01:07:32 - 01:07:36
As long as it's a legitimate democracy, it's not controlled by money.
SPEAKER_01
01:07:36 - 01:07:36
Right.
SPEAKER_03
01:07:36 - 01:09:25
As long as it's a legitimate democracy, it's not controlled by the military-industrial complex, or the pharmaceutical industry, or whoever puts the people that are in elected places there. Who puts them in there? How do they get funded? And what do they represent once they get in there? Are they there for the will of the people, or are they there for their own career? Do they bypass the safety and the future of the people for their own personal gain, which we've seen politicians do? There are certain problems with every system that involves human beings. That's another thing that technology may be able to do. One of the things, if you think about the worst attributes of humans, whether it's war, you know, crime, some of the horrible things that human beings are capable of: imagine that technology can find what causes those thoughts and behaviors in human beings and mitigate them. I've joked around about this, but if we came up with something that would elevate dopamine just 300 percent worldwide, there would be no more war. It'd be over. Everybody would be loving everybody. We'd be interacting with each other. Well, that's the point of doing this. But there would also be no sad songs. Well, you need some blues in your life. Need a little bit of that, too. Or do we? Maybe we don't. Maybe that's just a byproduct of our monkey minds, and one day we'll surpass that. We get to this point of enlightenment. Enlightenment seems possible without technological innovation, but maybe not. I've never really met a truly enlightened person. I've met some people that are pretty close. But what if you could get there with technology, if technology just completely elevated human consciousness to the point where all of our conflicts cease?
SPEAKER_01
01:09:25 - 01:09:57
Just for starters, if you could actually live longer, quite aside from the motivations of people. Most people die not because of people's motivations, but because our bodies just won't last that long. And a lot of people say, you know, I don't want to live longer, which makes no sense to me. Why would you want to disappear and not be able to have any kind of experience?
SPEAKER_03
01:09:57 - 01:10:37
Well, I think some people don't think you're disappearing. I mean, there's a long-held thought in many cultures that this life is but one step, and that there is an afterlife. And maybe that exists to comfort us, because we deal with existential angst and the reality of our own inevitable demise. Or maybe it's a function of consciousness being something that we don't truly understand, and what you are is a soul contained in a body, and we have a very primitive understanding of the existence of life itself, and of the existence of everything.
SPEAKER_01
01:10:37 - 01:10:43
Well, I guess it makes sense. But I don't really accept it.
SPEAKER_03
01:10:43 - 01:11:09
I mean, if there's no evidence, right? But is there no evidence because we're not capable of determining it and understanding it yet? Or is it just because it doesn't exist? That's the real question. Is this it? Is this everything? Or is this merely a stage? And are we mucking with that stage by interfering with the process of life and death?
SPEAKER_01
01:11:11 - 01:11:16
Well, it makes sense. Yeah, but I don't really see the evidence for that.
SPEAKER_03
01:11:16 - 01:12:03
I could see that from your perspective. I don't see the evidence of it either, but it's a concept that is not impossible. Look, just when you start talking to string theorists and they start talking about things existing and not existing at the same time, particles in superposition, you're talking about magic. You're talking about something that's impossible to wrap your head around. Even just the structure of an atom. Like, what is that? What's in there? Nothing. How much of it is space? The entire existence of everything in the universe seems preposterous. But it's all real. And we only have a limited grasp of understanding of what this is really all about and what processes are really in place. Right.
SPEAKER_01
01:12:03 - 01:12:15
But if you look at people's perspective: if somebody gets a disease and they know they can only live like another six months, people are not happy with that.
SPEAKER_03
01:12:16 - 01:12:23
No. Well, they're scared. They're scared to die. It's a natural human instinct. That's what kept us alive for all these hundreds of thousands of years.
SPEAKER_01
01:12:23 - 01:12:35
Yeah, but very few people would be happy with that. And if you then had something: we have this new device, you could take this and you won't die. Right. Almost everybody would do that.
SPEAKER_03
01:12:36 - 01:12:48
Sure, but would they appreciate life if they knew it had no end? Would it be the same thing? Or would it be like a lottery winner just goes nuts and spends all their money and loses their marbles because they can't believe they can't die?
SPEAKER_01
01:12:51 - 01:12:55
Well, first of all, it's not guaranteed that we'll live forever.
SPEAKER_03
01:12:55 - 01:13:06
Sure, you can get in an accident. Something can happen. You get injured. But if we get to a point where you have automated cars that significantly reduce the amount of automobile accidents.
SPEAKER_01
01:13:06 - 01:13:13
Well, also, we can back up everything, everything in our physical body as well. How far away are we from that?
SPEAKER_03
01:13:13 - 01:13:30
That idea of, I mean, we don't really truly understand what consciousness is, correct? Right. So how would we be able to manipulate it or duplicate it to the point where you're putting it inside of some kind of a computation device?
SPEAKER_01
01:13:30 - 01:14:20
Well, we know how to create a computation that matches what our brain does. That's what we're doing with these large language models. And we're actually very close now to what our brain can do with these large language models; it will be there, like, within a year. And we can back up the electronic version, and we'll get to the point where we can back up what our brain normally does. So we'll be able to actually back that up as well. We'll be able to detect what it is and back that up, just like computers.
SPEAKER_03
01:14:20 - 01:14:27
So we'll create it in the form of an artificial version of everything that it is to be a human being.
SPEAKER_01
01:14:27 - 01:14:27
Right.
SPEAKER_03
01:14:27 - 01:14:30
In terms of emotions, love, excitement.
SPEAKER_01
01:14:30 - 01:14:35
That's going to happen over the next 20 years. It's not a thousand years.
SPEAKER_03
01:14:36 - 01:14:57
But will that be a person? I mean, or will it be some sort of a zombie? What motivations would it have? If you can take human consciousness and duplicate it, much like you could duplicate your phone, and you make this new thing, what does that thing feel like? Is that thing living in hell? Like, what is that experience like for that thing?
SPEAKER_01
01:14:57 - 01:15:03
What about my large language models? Do they really exist? Can they actually... I mean, they can talk.
SPEAKER_03
01:15:03 - 01:15:14
They certainly do, but would you want to be one? Are we different than that? Yeah, we're people. We shake hands. I give you a hug. You pet my dog. You listen to music.
SPEAKER_01
01:15:14 - 01:15:15
You're able to do all of that.
SPEAKER_03
01:15:15 - 01:15:27
Right. But would you want to? Would you even care? The thing is, a lot of what gives us joy in life is biological motivations. There are human reward systems in place that are part of who we are.
SPEAKER_01
01:15:27 - 01:15:43
We'll be just like that. And we'll also have our physical bodies as well. And those will also be able to be backed up. And we'll be doing the things that we do now, except we'll be able to have them continue.
SPEAKER_03
01:15:43 - 01:15:57
So if you get hit by a car and you die, there's another Ray that just pops up. Oh, we've got the backup Ray. And the backup Ray will have no feelings at all about having died and come back to life.
SPEAKER_01
01:15:57 - 01:16:05
Well, that's a question. Yeah. I mean, why wouldn't it be just like Ray is now?
SPEAKER_03
01:16:05 - 01:16:49
Why wouldn't it? But if we figure out that biological life is essentially some kind of technology the universe created, and we can manipulate that to the point where we understand it, we get it, we've optimized it, and then replicate it, physically replicate it. Not just replicated in the form of a computer, but an actual physical being. Right, well, that's where we're headed. Do you anticipate that people will be happy with whatever they have? If you decide, I don't like being five-six, I wish I was six-six. I don't like being a woman, I want to be a man. I don't want to be Asian, I want to be, you know, whatever. I want to be a Black person.
SPEAKER_01
01:16:49 - 01:16:56
We'll actually be able to do all of those things, simultaneously and so on.
SPEAKER_03
01:16:56 - 01:17:41
We're not going to be limited by those kinds of happenstance. Which is going to be very strange. Like, what will human beings look like if you give people the ability to manipulate their physical form? We do things now that were impossible even ten years ago. We certainly do, but we don't change race, size, sex, gender, height. We don't do all of it. And then the radical increase in just your intelligence, like, what is that going to look like? What kind of an interaction is it going to be between two human beings when you have a completely new form? You know, you're much different physically than you ever were when you were alive. You're taller, you're stronger, you're smarter, you're faster. You're basically not really a human anymore. You're a new thing.
SPEAKER_01
01:17:41 - 01:17:46
I mean, we're expanding who we are. We've already expanded who we are from, you know...
SPEAKER_03
01:17:47 - 01:17:54
Sure. Right. Over the course of hundreds of thousands of years. Well, that's going from being Australopithecus to what we are now.
SPEAKER_01
01:17:54 - 01:18:10
That has to do with the... pace at which we make changes. Right. And we can make changes now much more quickly than we could, you know, 100,000 years ago.
SPEAKER_03
01:18:10 - 01:18:19
Right. But if we can manipulate our physical form with no limitations, I mean, are we going to have six-armed people that can fly? Like, what is it going to look like?
SPEAKER_01
01:18:19 - 01:18:22
Well, do you have a problem with them? Yeah.
SPEAKER_03
01:18:22 - 01:19:12
I would discriminate against six-armed people that can fly. That's the one area I allow myself to be prejudiced in. Okay. Seven-armed people would be okay. Yeah, seven-armed people are cool, because it's like, you know, maybe five on one side, two on the other. No, I'm just curious as to how much time you've spent thinking about what this could look like. And I just, I don't think it's going to be as simple as, you know, it's going to be Ray Kurzweil, but Ray Kurzweil as, like, a 30-year-old man, 50 years from now. I think you're probably going to be all kinds of different things. You could be kind of whatever you want. You could be a bird. I mean, what's to stop it, if we can manipulate the physical form and we can take consciousness and put it into a physical form?
SPEAKER_01
01:19:12 - 01:19:28
But this is the description, I think, of something that's positive rather than negative. You could be a giant eagle. I mean, negative is people that want to destroy things, show power. Sure. And that is a problem.
SPEAKER_03
01:19:28 - 01:19:31
Well, it's certainly an improvement in terms of the viability.
SPEAKER_01
01:19:31 - 01:19:46
Seven arms, and being like an eagle, and so on. I mean, and you can also change that. Right. So I think that's a positive aspect, and we will be able to do that kind of thing.
SPEAKER_03
01:19:46 - 01:20:00
Sure, if you want to look at it in a binary fashion of positive and negative, but it's also going to be insanely strange. Like, it's not going to be as simple as there'll be people that are living like it's 2016.
SPEAKER_01
01:20:00 - 01:20:10
Once it's first reported... if it's been reported now for five years and people are constantly doing it, you won't find it that strange.
SPEAKER_03
01:20:10 - 01:20:23
It'll just be life. Yeah. Yeah. So that's what I'm asking. Like when you think about the implementation of this technology to its fullest, what does the world look like? What does the world look like in 2069?
SPEAKER_01
01:20:23 - 01:20:49
I mean, the kinds of things that you can imagine right now, we'll be able to do. And it may seem strange when it first happens, but when it happens for the, you know, millionth time, it won't seem that strange. And maybe you'll, like, be an eagle for a few minutes.
SPEAKER_03
01:20:49 - 01:21:06
It's certainly interesting. It's certainly interesting. I just wonder how much time you've spent thinking about what this world looks like with the full implementation of the kind of exponential growth of technology that would exist if we do make it to 2069.
SPEAKER_01
01:21:06 - 01:22:00
Well, I did write a book, Danielle. And this young girl has fantastic capabilities, and no one really can figure out how she does this. She actually takes over China at age 15. And she brings about... she makes it a democracy. And she actually becomes president at age 19. They create a constitutional amendment so that she can become president at 19. That sounds like what a dictator would do. Right, but unlike a dictator, she's very popular and she writes very good music.
SPEAKER_03
01:22:00 - 01:22:03
And this is an artificial intelligence creature.
SPEAKER_01
01:22:03 - 01:22:03
Yes.
SPEAKER_03
01:22:03 - 01:22:04
And how was she created?
SPEAKER_01
01:22:05 - 01:22:18
It never says how she gets these capabilities. I didn't want to spell that out, but AI would be the only way that she could do this.
SPEAKER_03
01:22:18 - 01:22:22
Right, unless it's some insane freak of genetics.
SPEAKER_01
01:22:22 - 01:22:26
And she's like a very positive person. She's very popular.
SPEAKER_03
01:22:28 - 01:22:43
Yeah, but she's the only one that has that. Yeah. Right. It doesn't get given to everybody. Which is, which is where it gets really weird. You have a cell phone. I have a cell phone. Pretty much everybody has one now. What happens when everybody gets the kind of technology we're discussing?
SPEAKER_01
01:22:43 - 01:22:50
Well, it shows you the benefit of her having it. And if everybody gets it, that would be even more positive.
SPEAKER_03
01:22:51 - 01:22:59
Perhaps, yeah. I mean, that's the best way of looking at it: that we become completely altruistic, positive, beneficial to each other.
SPEAKER_01
01:22:59 - 01:23:08
Well, that's an idea... I mean, that is a great mindset. And if you have more intelligence, you'd be more likely to do this.
SPEAKER_03
01:23:08 - 01:23:13
Yes. Yeah, for sure. That's the benefit.
SPEAKER_01
01:23:13 - 01:23:23
Yeah. Yeah. So we live longer. We're also smarter, making more rational decisions towards each other.
SPEAKER_03
01:23:23 - 01:23:30
So overall, when you're looking at this, you just don't concentrate really on the negative possibilities.
SPEAKER_01
01:23:30 - 01:23:49
Well, no. I mean, I do focus on that as well. But you think overall it's net positive? Yes. It's called intelligence. And if we have more intelligence, we'll be doing things that are more beneficial to ourselves and other people.
SPEAKER_03
01:23:49 - 01:23:51
Do you think that the experiences that we're having right now?
SPEAKER_01
01:23:51 - 01:24:37
I mean, like right now, we have much less crime than we did 50 years ago. And if you listen to people debating presidential politics, they'll say crime is worse than ever. But if you look at the actual statistics, it's gone way down. And if you actually go back, like, a few hundred years, crime and murder and so on were far, far higher than they are today. It's actually pretty rare. So the kind of additional intelligence that we've created is actually good for people. If you look at the actual data, sure.
SPEAKER_03
01:24:37 - 01:24:47
If you look at Steven Pinker's work, right? Scaled from a few hundred years ago to today, things generally always seem to be moving in a better direction.
SPEAKER_01
01:24:47 - 01:25:06
Right. Well, Pinker doesn't credit this to technology. He just looks at the data, and it's gotten better. What I try to do in the current book is to show how it's related to technology: as we have more technology, we're actually moving in this direction.
SPEAKER_03
01:25:06 - 01:25:10
So you feel it's a function of technology that we're moving in this direction. Absolutely.
SPEAKER_01
01:25:10 - 01:25:27
I mean, that's why... I mean, look at the technology in 80 years. We've multiplied the amount of computation 20 quadrillion times. And so we have things that didn't exist two years ago.
SPEAKER_03
01:25:30 - 01:25:49
When you think about the idea of life on earth, and that this is happening, and that we are on this journey to 2045, to the Singularity, do you consider whether or not this is happening elsewhere in the universe, or whether it's already happened?
SPEAKER_01
01:25:49 - 01:27:00
Yeah, we see no evidence that there's any form of life, let alone intelligent life, anywhere else. And you can say, well, we're just not in touch with these other people; it is possible. But it seems, I mean, given the exponential impact of this type of technology, we would be spaced out over a long period of time. So some people that might be ahead of us could be ahead of us by thousands of years, even millions of years. And so they'd be, like, way ahead of us, and they'd be doing galaxy-wide engineering. But wherever we look out there, we don't see anybody doing galaxy-wide engineering.
SPEAKER_03
01:27:00 - 01:27:08
Maybe we don't have the capability to actually see it. I mean, the universe is, what, 13.7 billion years old or whatever it is?
SPEAKER_01
01:27:08 - 01:27:18
But even just incidental capabilities would affect galaxies. We would see that somehow.
SPEAKER_03
01:27:18 - 01:27:34
Would we? If we were at the peak... if there is intelligent life in the universe, some form of that intelligent life has to be the most advanced. And what if we're underestimating our position in the universe, and that's what we are?
SPEAKER_01
01:27:34 - 01:27:35
Well, that's what I'm saying.
SPEAKER_03
01:27:35 - 01:27:37
But maybe there's something that's, like, 10 years ahead of us.
SPEAKER_01
01:27:37 - 01:27:44
I mean, there's an industrial age. I think that's a good argument that we are ahead of other people.
SPEAKER_03
01:27:44 - 01:27:58
But we don't have the capability of observing the goings on of a planet 5,000 light years away. We can't see into their atmosphere. We can't look at high resolution video of activity on that planet.
SPEAKER_01
01:27:58 - 01:28:02
And if they were doing galaxy wide engineering, I think we would notice that.
SPEAKER_03
01:28:02 - 01:28:09
If they were more advanced than us, maybe we would. But what if they're not? What if they're at the level that we're at? Well, that's what I'm saying. What if we're at the peak?
SPEAKER_01
01:28:09 - 01:28:12
And this is like, I think it's an argument that we are at the peak.
SPEAKER_03
01:28:13 - 01:28:25
What if it gets to the point where artificial intelligence gets implemented and then that becomes the primary form of life and it doesn't have the desire to do anything in terms of like galactic engineering?
SPEAKER_01
01:28:28 - 01:28:34
But even just incidental things would affect whole galaxies.
SPEAKER_03
01:28:34 - 01:28:46
Like what? Things like we're doing? Are we affecting the whole galaxy? No, not yet. Right, but what if it's like us, but it gets to the point where it becomes artificial intelligence, and then it doesn't have emotions, it doesn't have desires, it doesn't have ambitions?
SPEAKER_01
01:28:46 - 01:28:49
So why would it decide that? Explain why it would not have those things.
SPEAKER_03
01:28:50 - 01:29:09
Well, we'd have to program it into it, but it would probably decide that that's foolish, and that those things have caused all these problems. Of all the problems in the human race, what's our number one issue? War. What is war caused by? It's caused by ideologies, it's caused by the acquisition of resources. That's wars over resources.
SPEAKER_01
01:29:09 - 01:29:13
Or it's not the primary thing that we are motivated by.
SPEAKER_03
01:29:13 - 01:29:20
It's not the primary thing we're motivated by, but it's existed in every single step of the way of human existence.
SPEAKER_01
01:29:21 - 01:29:31
But it's actually getting better. I mean, just look at the effect of war. Sure. I mean, we have a couple of wars going on. They're not killing millions of people like they used to.
SPEAKER_03
01:29:31 - 01:29:49
Right. Right. My point is that if artificial intelligence recognizes that the problem with human beings is these emotions and a lot of it is fueled by these desires, like the desire to expand, the desire to acquire things, the desire to
SPEAKER_01
01:29:49 - 01:29:55
Well, the emotion is positive. I mean, music means something to us.
SPEAKER_03
01:29:55 - 01:30:54
But if it gets to the point where artificial intelligence is no longer stimulated by mere human creations, creativity, all these different things, why would it even have the ambition to do any sort of galaxy-wide engineering? Why would it want to? Because it's based on us. It is based on us, until it decides it's not based on us anymore. That's my point. If it realized that, like... we're based on a very violent chimpanzee, and we say, you know what, there's a lot of what we are, because of our genetics, that really is a problem, and this is what's causing all of our violence, all of our crime, all of our war. If we just step in and put a stop to all that, will we also put a stop to our... We're maintaining that we're moving away from that. We are moving away from that. But that's just natural, right? That's natural with our understanding and our mitigation of these social problems.
SPEAKER_01
01:30:54 - 01:30:58
As we expand that even more, we'll be even more in that direction.
SPEAKER_03
01:30:58 - 01:32:00
As long as we're still "we." But as soon as you become something different, why would it even have the desire to expand? If it was infinitely intelligent, why would it even want to physically go anywhere? Why would it want to? What's the reason for our motivation to expand? What is it? It's human. The same humans that were tribal creatures that roamed, the same humans that stole resources from neighboring villages. This is our genes, right? This is what made us, what got us to this point. If we create a sentient artificial intelligence that's far superior to us, and it can create its own version of artificial intelligence, the first thing it's going to engineer out is all these stupid emotions that get us in trouble. Well, if it can just create happiness and joy from programming, why would it create happiness and joy through the acquisition of other people's creativity, art, music, all those things? And then why would it have any ambition at all to travel? Why would it want to go anywhere?
SPEAKER_01
01:32:01 - 01:32:04
I mean, it's an interesting philosophical problem.
SPEAKER_03
01:32:04 - 01:32:17
Right. It is a problem, because a lot of what we are and the things that we create is because of all these flaws, as you would say. If you were programming us, you would say, well, what is the cause of all these issues that plague the human race?
SPEAKER_01
01:32:17 - 01:32:18
You say that they're flaws.
SPEAKER_03
01:32:18 - 01:32:20
Murder's a flaw. Is that not a flaw?
SPEAKER_01
01:32:21 - 01:32:25
But that's gone way down. Right. But it's the technology.
SPEAKER_03
01:32:25 - 01:32:48
It moves ahead. If it happens to you, it's a flaw. And crime is a flaw. Theft is a flaw, fraud is a flaw, those are flaws. If we could engineer those out, what would be the way that we'd do it? Well, one of the things we'd do is get rid of what it is to be a person, because what it is, is corrupt people that go down these terrible paths and cause harm to other people, right?
SPEAKER_01
01:32:48 - 01:32:54
You're taking a step there, treating the ability to feel emotion and so on as a flaw.
SPEAKER_03
01:32:54 - 01:33:07
No, I'm not. I'm saying that it's the root of these flaws, that greed and envy and lust and anger are the root of the flaws we're talking about. And we're back to that. I mean, as I think about myself now, it's when I have emotions that are positive emotions,
SPEAKER_01
01:33:24 - 01:33:59
like really getting off on a song, or a picture, or some new art form that didn't exist in the past, that's positive. That's what I live for: relating to another person in a way that's intimate. The idea, if we're actually more intelligent, is not to get rid of that, but to actually enjoy it to a greater extent.
SPEAKER_03
01:33:59 - 01:34:01
Hopefully.
SPEAKER_01
01:34:01 - 01:34:09
But what I'm saying is that, yes, there are things that can go wrong, that could lead us in an incorrect direction.
SPEAKER_03
01:34:09 - 01:34:30
I'm not even saying it's wrong. I'm not saying that it's going to go wrong. I just, I'm saying that if you wanted to program away some of the issues that human beings have in terms of what keeps us from working with each other universally all over the globe, what keeps us from these things?
SPEAKER_01
01:34:30 - 01:34:32
Well, we're actually doing that more than we used to.
SPEAKER_03
01:34:33 - 01:34:43
Sure, but also not, you know... we also have massive inequality. You've got people in the Congo mining cobalt with sticks, that powers your cell phones. There are a lot of real problems with society.
SPEAKER_01
01:34:43 - 01:34:45
Right. But there used to be even more of that.
SPEAKER_03
01:34:45 - 01:35:06
There's a lot of that, though. There's a lot of that. And if you look at greed and war and crime and all the problems with human beings, a lot of it has to do with these biological instincts, these instincts to control things, these built-in genetic codes that we have that are from our ancestors.
SPEAKER_01
01:35:06 - 01:35:09
That's because we haven't cut them out yet.
SPEAKER_03
01:35:09 - 01:35:26
Right. But when we get there, you think we will be a better version of a human being, and we will be able to experience all the good, the positive aspects of being a human being, the art and creativity and all these different things.
SPEAKER_01
01:35:26 - 01:35:39
I hope so. And actually, if you look at what human beings have done already, we're moving in that direction. I mean, it seems that way.
SPEAKER_03
01:35:39 - 01:35:50
No, it does seem that way to me. It does overall, but it's also like, you know, if you look at a graph of temperatures, it goes up, it goes down, it goes up, it goes down, but it's moving in a general direction.
SPEAKER_01
01:35:50 - 01:36:06
We are moving in a generally positive direction. Will we continue moving in this same direction? Yeah, I don't think that's certain. It's not a guarantee. I mean, you can describe things that would be horrible, and they're feasible.
SPEAKER_03
01:36:07 - 01:36:16
Yeah. It could be the end of the human race, right? Or it could be the beginning of the next race of this new thing.
SPEAKER_01
01:36:16 - 01:36:54
Well, I mean, when I was born, we created nuclear weapons, and very soon we had hydrogen weapons, and we have enough hydrogen weapons to wipe out all humanity. We still have that. That didn't exist, like, 100 years ago. Well, it's existed for 80 years. So that is something that concerns me. And you could do the same thing with artificial intelligence. It could also create something that would be very negative.
SPEAKER_03
01:36:55 - 01:37:09
But what I'm getting at is like, what do you think life looks like if it's engineered? What do you think human life looks like if it's engineered by a far superior intelligence and what would it change about what it means to be a person?
SPEAKER_01
01:37:14 - 01:37:34
I mean, first of all, we would base it on what human beings are already, so we'd become better versions of ourselves. For example, we'd be able to overcome life-threatening diseases, and we're actually working on that, and that's going to go into high gear very soon.
SPEAKER_03
01:37:38 - 01:37:56
Yes, but that's still being a human being. If you're implementing large-scale artificial intelligence, You're essentially a superhuman. You're a different thing. You're not what we are.
SPEAKER_01
01:37:56 - 01:38:02
If you have the computational capacity of a human, you have the human being as part of it.
SPEAKER_03
01:38:02 - 01:38:33
For now. But this is the thing. If you're engineering this artificial intelligence, you're engineering essentially a superior life form. It's going to look at it logically. It's going to look at the issues that human beings have logically and say, well, we don't need this. This is a problem. This is what we needed when we were primates, and we're not that anymore. We're this new thing. It's going to be like, who cares what the movie is like? It's just a thing that's tricking your body into pretending that it's involved in drama, but it's not really.
SPEAKER_01
01:38:33 - 01:38:37
Well, you're making certain assumptions about what we'll create.
SPEAKER_03
01:38:37 - 01:38:40
No, I'm just making an assumption.
SPEAKER_01
01:38:41 - 01:38:52
I mean, in my mind, we would want to create better music and better art and better relationships.
SPEAKER_03
01:38:52 - 01:38:56
Well, the relationships should all be perfect eventually if we keep going in this general direction.
SPEAKER_01
01:38:56 - 01:38:58
Well, it's not perfect.
SPEAKER_03
01:38:58 - 01:39:04
I mean, but if you get artificial intelligence, we're all reading each other's minds and everyone's working towards the same goal.
SPEAKER_01
01:39:04 - 01:39:15
Well, no, you can't read each other's minds. I mean, we can create... yes, we can create privacy that's virtually unbreakable, and you could keep your privacy to yourself.
SPEAKER_03
01:39:15 - 01:39:23
But can you do that as technology scales upward? If it continues to move... it's difficult. Like your phone. Anyone can listen to you on your phone.
SPEAKER_01
01:39:23 - 01:39:30
Anyone who has significant technology... we have pretty good technology already. You can't really read someone else's phone.
SPEAKER_03
01:39:30 - 01:39:48
You definitely can. Yeah, if you have Pegasus, you can hack into a phone easily. Not hard at all. The new software that they have, all they need is your phone number. All they need is your phone number, and they can look at every text message you send, every email you send. They can look at your camera, they can turn on your microphone. Easy.
SPEAKER_01
01:39:48 - 01:39:53
We have ways of keeping total privacy, and if it's not built into your phone now, it will be.
SPEAKER_03
01:39:54 - 01:40:05
But it's definitely not built into your phone now. The security people that really understand the capabilities of intelligence agencies: they 100% can listen to your phone, 100% can turn on your camera, 100% can record your voice.
SPEAKER_01
01:40:09 - 01:40:17
Yes and no. I mean, we have the ability to keep total privacy in a device.
SPEAKER_03
01:40:17 - 01:40:28
But from who? You can keep privacy from me because I don't have access to your device. But if I was working for an intelligence agency and I had access to a Pegasus program, I am in your device.
SPEAKER_01
01:40:29 - 01:40:37
No, I've talked to people about this. It's not perfect. We can actually build much better privacy than exists today.
SPEAKER_03
01:40:37 - 01:40:45
But the privacy that we have today is far less than the privacy that we had before we had phones.
SPEAKER_01
01:40:45 - 01:40:47
I don't really quite agree with that.
SPEAKER_03
01:40:47 - 01:41:12
How so? If you didn't have a phone, okay, and you were at home having a conversation, a sensitive conversation, about maybe you didn't pay as much in taxes as you should have. There's no way anybody would hear that. But now, your phone hears that. If you have an Alexa in your home, your Alexa hears you say that. People have been charged with crimes because Alexa heard them committing murder.
SPEAKER_01
01:41:12 - 01:41:24
We actually know how to create perfect privacy in your phone. And if your phone doesn't have that, that's just an imperfection in the way we're building these things now.
SPEAKER_03
01:41:24 - 01:41:45
But it's not just an imperfection. It's sort of built into the program itself, because that's what fuels the algorithm: it has access to all of your data. It has access to all of your... what you're interested in, what you like, what you don't like. You can't opt out of it. Especially you, you've got a Google phone. That thing is just a net scooping up information.
SPEAKER_01
01:41:45 - 01:41:50
We know how to build perfect privacy.
SPEAKER_03
01:41:50 - 01:41:51
How do we do it?
SPEAKER_01
01:42:02 - 01:42:07
I mean, if it's not built into your phone now, it should be.
SPEAKER_03
01:42:07 - 01:42:13
Unless they don't want it to be built in there, because there's an actual business model in it not being built in there.
SPEAKER_01
01:42:13 - 01:42:20
But it can be done. If people want that, it will happen.
SPEAKER_03
01:42:20 - 01:42:33
But you recognize the financial incentive in not doing that, right? Because that's what a company like Google, for instance... that's where they make the majority of their money, is from data. Or a lot of their money, I should say.
SPEAKER_01
01:42:33 - 01:42:45
Well, I mean, there's actually a lot of effort that goes into keeping what's on your phone private.
SPEAKER_03
01:42:45 - 01:43:04
It's not that... it's private from some people, but not really private. It's only private until they want to listen. And now the capability of listening to your phone is super easy. Not really. No. With the Pegasus program, it's very easy.
SPEAKER_01
01:43:04 - 01:43:08
Well, that has to do with imperfections in the way phones are created.
SPEAKER_03
01:43:08 - 01:43:49
Right. But I think it's a feature. I think part of the feature is that they want as much data from you, knowing what you're doing, what you're talking about. If you have a conversation with someone, then you see an ad for that thing on Google. That happens. Yes, but so something's going on where it's listening to your conversations. It's picking up on keywords. It's not picking up on everything. No, yeah. Well, it's not, unless it wants to. Like I said, if they're using an intelligence program to gather information from your phone, it is. And then you're basically carrying a little spy around with you, everywhere you go.
SPEAKER_01
01:43:49 - 01:44:01
Unless you're using... I mean, if you think that's a major issue, we could build phones that are impossible to spy on.
SPEAKER_03
01:44:01 - 01:44:45
Maybe. But if we did... well, there are some phones, like GrapheneOS. Do you know about that? People take a Google phone, and they put a different Linux-based operating system on it, which makes it much more difficult to track, and there are multiple levels of protection. There are a bunch of phones being made that are security phones. But you lose access to apps. You lose access to a lot of the features that people rely on when it comes to phones. Like, for instance, if you have GPS on your phone, as soon as you're using GPS, you're easy to find, right? So you lose that privacy. If they want to know where Ray's phone is, they know exactly where Ray's phone is. And that's where you are, when you're with your phone. They've got you tracked everywhere you go. It's complicated.
SPEAKER_01
01:44:45 - 01:44:49
If this were a major issue, we could definitely overcome that.
SPEAKER_03
01:44:49 - 01:45:00
I think it's a major issue, but I don't think it's a major concern for most people. Right. But it's because they reap the benefits of it. Like the algorithm is specifically tailored.
SPEAKER_01
01:45:00 - 01:45:04
That's how we influence the kinds of things we put on phones.
SPEAKER_03
01:45:04 - 01:45:13
Right. But you can't opt out of it unless you just decide to get a flip phone. But even if you do, they can figure out where you are, they triangulate you from cell phone towers.
SPEAKER_01
01:45:17 - 01:45:35
I mean, we give up certain things in order to get the benefits. Yeah, we do. If what you're giving up is a great concern, we could overcome that. We know how to do that.
SPEAKER_03
01:45:35 - 01:45:48
Yeah, if people agree that the benefit of overcoming that outweighs the financial loss that you would have from not having access to everybody's data and information.
SPEAKER_01
01:45:48 - 01:46:17
Well, I mean, what you're giving up is a certain type of data, so that you get a certain type of capability, things that you could buy, and so they can advertise that to you, and people feel that that's okay. Yeah. For example, keeping your email private is quite feasible.
SPEAKER_03
01:46:17 - 01:46:27
It's possible, but it's also easy to hack. People can be reading your emails all the time and you should probably assume that they do.
SPEAKER_01
01:46:27 - 01:46:45
Well, it's a complicated issue, but we can keep, for example, your emails private. And we actually do do that, generally, for most people.
SPEAKER_03
01:46:45 - 01:47:49
But my point is, as this technology scales upward, when you have greater and greater computational power, and you're also integrated with this technology, how do you keep whatever group is in charge from being able to access the thing that is now inside your head? If you have a technology that's going to be upgraded, that's going to get new software, that's going to keep improving as time goes on, what kind of privacy would be involved in that, if you literally have something that can get into your brain? And if most people can't get into your brain, can intelligence agencies get into your brain? Can foreign governments get into your brain? What does that look like? I'm not looking at this as a negative. I'm just saying, if you look at this completely objectively, what are the possibilities? I'm trying to paint a weird picture of what this could look like.
SPEAKER_01
01:47:49 - 01:48:02
Well, a lot of things you want to share. I mean, music and so on. It's desirable to share that and you'd want that to be shared. If you didn't share anything, you'd be pretty lonely.
SPEAKER_03
01:48:04 - 01:48:31
Sure. What do you think about the potential for a universal language? Do you think that one of the things that holds people back is, you know, the Rosetta Stone, the Tower of Babel? We can't really understand what all these other people are saying. We don't know how they think. Could we develop a universal worldwide language through this? Do you think it's feasible? I mean, all languages that we have were created.
SPEAKER_01
01:48:31 - 01:48:35
We have a certain means of changing one language into another.
SPEAKER_03
01:48:35 - 01:48:42
That's what I'm saying. And we're doing that now. Google does that with Translate, and new Samsung phones do that in real time.
SPEAKER_01
01:48:42 - 01:48:52
Yeah. I wrote about that in 1989 that we would be able to have universal translation between languages.
SPEAKER_03
01:48:52 - 01:49:20
But do you think the translation is pretty good? It's pretty good, but there's also context that's missing, because there's different cultural significance. There are different ways that people say things. There's gendered language that other nationalities and other countries use, and we try to get that into the translation as well. You can, but it's a little bit imperfect, right?
SPEAKER_01
01:49:20 - 01:49:28
You might have something that's said very quickly and you'd have to translate it into much longer language in order to capture that.
SPEAKER_03
01:49:29 - 01:50:17
Right. But would a universal language be possible? If you're creating something, why would you need that? Because all the languages we have are pretty flawed. Ultimately, I mean, we use them, but how many versions of "your" do we have? How many of "our"? There's a bunch of different weird things about language that are imperfect because it's old. It's like old technology. What if we decided to make a better version of technology through art, excuse me, a better version of language through artificial intelligence, and said, listen, instead of trying to translate everything, now that we're superpowerful intelligent beings that are enhanced by artificial intelligence, let's create a better, superior, universally adopted language?
SPEAKER_01
01:50:17 - 01:50:21
Maybe, I mean, do you see that as a major need?
SPEAKER_03
01:50:21 - 01:50:39
Yeah, I do. Yeah, I think that would change a lot. I mean, we'd lose all the amazing nuances of cultures, which I don't think is good for us as human beings, but we're not going to be human beings. So maybe it would be better if we could communicate exactly the way we prefer to.
SPEAKER_01
01:50:39 - 01:51:10
Well, it would still be human beings. And in my mind, a human being is someone who can change both ourselves and our means of communication to enjoy better means of expressing art and culture and so on. No other animal really quite does that, except human beings. So that is an essence of what it means to be a human being.
SPEAKER_03
01:51:10 - 01:51:16
For now, but when you're a mind reading eagle and you're flying around, are you really a human being anymore?
SPEAKER_01
01:51:17 - 01:51:20
Yes, because we are able to change ourselves.
SPEAKER_03
01:51:20 - 01:51:30
So that's just a new definition of what a human being is. What are your thoughts on simulation theory?
SPEAKER_01
01:51:30 - 01:53:24
If you mean that we're living in a simulation: well, first of all, some people believe that we can express physics as formulas, and that the universe is actually capable of computation. And therefore everything that happens is also a computation, and we are living in something that is computable. There's some debate about whether that's feasible, but that doesn't necessarily mean that we're living in a simulation. Generally, if you say we're living in a simulation, you assume that in some other place there are, say, teenagers in that world who like to create simulations. So they created the simulation that we live in, and you want to make sure that they don't turn the simulation off, so we'd have to be interesting to them, and then they keep the simulation going. But the whole universe could be capable of simulating reality, and that's what we live in. And it's not a game; it's just the way that the universe works. I mean, what would the difference be if we lived in a simulation?
SPEAKER_03
01:53:25 - 01:53:42
This is what I'm saying. We're on our way to creating something that is indiscernible from reality itself. I don't think we're that far away from that, maybe decades away, from having some sort of a virtual experience that's indiscernible from regular reality.
SPEAKER_01
01:53:42 - 01:53:46
I mean, we try to do that with games and so on.
SPEAKER_03
01:53:46 - 01:53:58
Right. And those are far superior to what they were. I mean, I'm younger than you, but I can remember Pong. Remember Pong? It was groundbreaking. You could play a video game on your television. This is crazy.
SPEAKER_01
01:53:58 - 01:54:01
It was so nuts. Yeah.
SPEAKER_03
01:54:01 - 01:54:42
Now you look at, like, the Unreal Engine 5. It's insane how beautiful it is, how incredible, what its capabilities are. That's kind of a simulation. Right, but as you expand that further, you get to the point where you're actually in a simulation, and your life is not this carbon-based biological life of feeling and texture that you think it is; rather, you're really a part of this thing that's been created. This is where it gets real weird with, like, probability theory, right? Because they think that if a simulation is possible, it's more likely that it's already happened.
SPEAKER_01
01:54:42 - 01:55:00
I mean, there's really an unlimited amount of things that we could simulate and experience. Yeah. And so it's hard to say we're living in a simulation, because a lot of what we're doing is living in a computational world anyway, so it's basically being simulated. Yeah.
SPEAKER_03
01:55:00 - 01:55:31
In a way. Yeah. And if you were some sort of an alien life form, wouldn't that be the way you'd go, instead of, like, taking physical metal crafts and shooting them off into space? Wouldn't you sort of create artificial space, create artificial worlds, create something that exists in the sense that you experience it? Right. And it's indiscernible to the person experiencing it.
SPEAKER_01
01:55:31 - 01:55:37
But if you're intelligent enough, you'll be able to tell what's being simulated and what's not.
SPEAKER_03
01:55:37 - 01:55:47
Up to a point. Until it actually does all the same things that regular reality does, it just does it through technology and maybe that's what the universe is.
SPEAKER_01
01:55:49 - 01:56:04
But that's okay. We could still experience what's happening. Yeah. And we could also experience people doing galaxy wide engineering, which not all of which would be simulated.
SPEAKER_03
01:56:04 - 01:56:21
So the galaxy-wide engineering is the main thing that you look at, and the point is you don't see any evidence for life outside? Well, there's definitely no real evidence that we see, other than these people that talk about UFOs, UAPs, and pilots, and all these people that say that there's this thing.
SPEAKER_01
01:56:21 - 01:56:37
We don't see any evidence that life is simulated outside of our own life. We can simulate things and experience them. We don't see any evidence that other beings are doing that elsewhere.
SPEAKER_03
01:56:37 - 01:56:56
This is based on such limited data, right? I mean, look at what limited data we have just of Mars. We have a rover rolling around, satellites in orbit. It's very limited data on something that's just one planet over. We don't really have the data to understand what's going on in our own house.
SPEAKER_01
01:56:56 - 01:57:06
It's possible that there's simulated life elsewhere. I mean, we don't see any evidence of it, but it's possible.
SPEAKER_03
01:57:06 - 01:57:10
Is it something that intrigues you or did you just look at it like there's no evidence so I'm not going to concentrate on that?
SPEAKER_01
01:57:12 - 01:57:44
I'm more interested to see what we can achieve, because I can see we're actually on that path. So it doesn't take a lot of curiosity on my part to imagine other people simulating life and enjoying it. I'm much more interested to see what will be feasible for us. And we're not that far away from it.
SPEAKER_03
01:57:44 - 01:58:01
So over the next four years, five years, you think we're going to be able to far surpass the ability of human beings. We're going to be able to stop aging and then eventually reverse aging. And then 2045 comes along. What does that look like?
SPEAKER_01
01:58:05 - 01:59:52
Well, one of the reasons we call it the Singularity is because we really don't know. I mean, that's why it's called a singularity. A singularity in physics is where you have a black hole. No energy can get out of a black hole, and therefore we don't really know what's going on inside it, and we call it a singularity. So this is a historical singularity, based on the kinds of things we've been talking about. And again, we don't really know what that will be like, and that's why we call it a singularity. There's another way of looking at it. I mean, we have mice, and they have experiences. It's a limited amount of complexity, because that particular species hasn't really evolved very much. And we'll be going beyond what human beings can do. So asking a human being what it's like to be a human being in the Singularity is like asking a mouse, what would it be like if you evolved to become like a human? Now, if you ask a mouse that, it wouldn't understand the question, it wouldn't be able to formulate an answer, it wouldn't even be able to think about it. And asking a current human being what it's going to be like to live in the Singularity is a little bit like that.
SPEAKER_03
01:59:52 - 01:59:56
So it's just who knows. It's going to be wild.
SPEAKER_01
01:59:56 - 02:00:00
We'll be able to do things that we can't even imagine today, right?
SPEAKER_03
02:00:01 - 02:00:12
Well, I'm very excited about it, even though it's scary. I know I ask a lot of tough questions about this, because these are my own questions. This is, like, what bounces around in my own head.
SPEAKER_01
02:00:12 - 02:00:29
Well, that's why I'm excited about it also, because it basically means more intelligence. We'll be able to think about things that we can't even imagine today. And solve problems. Yes, including, like, dying, for example.
SPEAKER_03
02:00:30 - 02:00:47
Listen, man, I'm glad you're out there. It's very important that people have access to this kind of thinking, and you've dedicated your whole life to this. This book, Ray Kurzweil, The Singularity Is Nearer: When We Merge with AI. It's available now. Did you do the audio version of it?
SPEAKER_01
02:00:47 - 02:00:55
That's being worked on now. Are you doing it? It's coming out in June. No. No.
SPEAKER_03
02:00:57 - 02:01:08
I want to hear it in your voice. It's your words. Yeah, that's what people say. Yeah, why don't you do it? You should do it. You know what you should do? Just get an AI to do it. Why waste all that time sitting around doing it?
SPEAKER_01
02:01:08 - 02:01:10
Basically, they can do it now.
SPEAKER_03
02:01:10 - 02:01:31
Oh, 100%. Look, they could take your voice from this podcast and do this book in an audio version. Easy. Do you know what they're doing now at Spotify? They're translating this podcast. They're going to translate it to German, French, and Spanish. And it's going to be, like, your voice in perfect Spanish.
SPEAKER_01
02:01:31 - 02:01:36
My voice in perfect Spanish. That actually came up yesterday. I'll think about that. Pretty wild.
SPEAKER_03
02:01:36 - 02:02:14
Yeah. It's 100%, you should do that. Okay. My friend Duncan does that all the time. He'll text friends or send voice messages, a fake voice message that's ridiculous, where they'll talk about how he's marrying his cat or something like that. He does it with AI, and it sounds exactly like whoever that person is. Okay. So that's the solution. Yeah. Of course, you should have AI read your book. I can't believe we would even think of you sitting down for 40 hours or whatever it would take. It would probably take more than that to read this whole book. And then if you mess up, you've got to go back and start again.
SPEAKER_01
02:02:14 - 02:02:22
Well, certainly that's going to be feasible. That's feasible now, to get all the nuances correct. I mean, it's pretty close.
SPEAKER_03
02:02:22 - 02:02:24
Yeah. I mean, it's pretty close right now.
SPEAKER_01
02:02:24 - 02:02:28
But it has to be very close because we're doing it like in the next month as soon as possible.
SPEAKER_03
02:02:28 - 02:02:41
I bet. I bet. Don't you think they can do it, Jamie? Yeah, I think they can do it right now. Listen, Ray, I appreciate you very much. Thank you very much for being here, and thank you for this book. When is it available?
SPEAKER_01
02:02:41 - 02:02:41
June 24.
SPEAKER_03
02:02:41 - 02:02:47
June. I've got an early copy. A gift. Yeah. Thanks, sir. Really appreciate it. Thank you very much. Bye, everybody.