Transcript for Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education

SPEAKER_00

00:00 - 03:56

The following is a conversation with Sebastian Thrun. He's one of the greatest roboticists, computer scientists, and educators of our time. He led the development of the autonomous vehicles at Stanford that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge. He then led the Google self-driving car program, which launched the self-driving car revolution. He taught the popular Stanford course on artificial intelligence in 2011, which was one of the first massive open online courses, or MOOCs as they are commonly called. That experience led him to co-found Udacity, an online education platform. If you haven't taken courses on it yet, I highly recommend it. Their self-driving car program, for example, is excellent. He's also the CEO of Kitty Hawk, a company working on building flying cars, or more technically eVTOLs, which stands for electric vertical takeoff and landing aircraft. He has launched several revolutions and inspired millions of people, but also, as many know, he's just a really nice guy. It was an honor and a pleasure to talk with him. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow it on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. If you leave a review on Apple Podcasts or YouTube or Twitter, consider mentioning ideas, people, topics you find interesting. It helps guide the future of this podcast. But in general, I just love comments with kindness and thoughtfulness in them. This podcast is a side project for me, as many people know, but I still put a lot of effort into it, so the positive words of support from an amazing community, from you, really help. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. I provide timestamps for the start of the conversation that you can skip to, but it helps if you listen to the ad and support this podcast by trying out the product or service being advertised. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations, called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code Lex Podcast, you'll get $10, and Cash App will also donate $10 to FIRST, which, again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Sebastian Thrun. You mentioned that The Matrix may be your favorite movie. So let's start with a crazy philosophical question. Do you think we're living in a simulation, and in general, do you find the thought experiment interesting?

SPEAKER_01

03:56 - 04:02

Define simulation. I would say maybe we are, maybe we are not, but it's completely irrelevant to the way we should act.

SPEAKER_00

04:03 - 04:20

Putting aside for a moment the fact that it might not have any impact on how we should act as human beings, for people studying theoretical physics these kinds of questions might be kind of interesting, looking at the universe as an information processing system.

SPEAKER_01

04:20 - 04:44

It's a huge physical, biological, chemical computer. There's no question. But I live here and now, I care about people, okay, about us. What do you think it's trying to compute? I don't know if there's an intention. I think the world evolves the way it evolves, and it's beautiful, it's unpredictable, and I'm really, really grateful to be alive.

SPEAKER_00

04:44 - 05:27

Spoken like a true human, which, last time I checked, I was. Well, in fact, this whole conversation is just a Turing test to see if indeed you are. You've also said that one of the first programs, one of the first few programs you've written, was for a TI-57 calculator. Yeah. Maybe that's early 80s? We don't want to date the calculator... early 80s. Correct. Yeah. So if you were to place yourself back into that time, into the mindset you were in, could you have predicted the evolution of computing, AI, the internet, technology in the decades that followed?

SPEAKER_01

05:27 - 06:02

I was super fascinated by Silicon Valley, which I'd seen on television once and thought, my God, this is so cool. They build like DRAMs there, and CPUs. How cool is that? And as a college student, a few years later, I decided to study intelligence and study human beings, and I found that even back then, in the 80s and 90s, artificial intelligence is what fascinated me the most. What was missing is that back in the day the computers were really small. The brains you could build were not anywhere bigger than a cockroach's, and cockroaches aren't very smart. So we weren't at the scale yet where we are today.

SPEAKER_00

06:02 - 06:08

Did you dream at that time to achieve the kind of scale we have today? Did that seem possible?

SPEAKER_01

06:09 - 06:29

I always wanted to make robots smart, and I felt it would be super cool to build an artificial human. And the best way to build an artificial human is to build a robot, because that's kind of the closest you can do. Unfortunately, we aren't there yet, and the robots today are still very brittle. But it's fascinating to study intelligence from a constructive perspective, where you build something.

SPEAKER_00

06:29 - 06:37

To understand, you build. What do you think it takes to build an intelligent system, an intelligent robot?

SPEAKER_01

06:37 - 07:55

I think the biggest innovation that we've seen is machine learning, and it's the idea that computers can basically teach themselves. Let me give an example. I'd say everybody pretty much knows how to walk, and we learn how to walk in the first year or two of our lives. But no scientist has ever been able to write down the rules of human gait. We don't understand them. We have them in our brains somewhere, we can practice them, we understand them, but we can't pass them on by language. And that, to me, is kind of the deficiency of today's computer programming. When you program a computer, it's so insanely dumb that you have to give it rules for every contingency. Unlike the way people learn, from data and experience, computers are being instructed. And because it's so hard to get this instruction set right, we pay software engineers $200,000 a year. Now, the most recent innovation, which has been in the making for like 30, 40 years, is the idea that computers can find their own rules. So they can learn from falling down and getting up the same way children learn from falling down and getting up. And that revolution has led to a capability that's completely unmatched. Today's computers can watch experts do their jobs, whether you're a doctor or a lawyer, pick up the regularities, learn those rules, and then become as good as the best experts.

SPEAKER_00

07:55 - 08:23

So the dream in the 80s of expert systems, for example, had at its core the idea that humans could boil down their expertise onto a sheet of paper, so to be able to explain to machines explicitly how to do something. So what's the use of human expertise in this whole picture? Do you think most of the intelligence will come from machines learning from experience, without human expertise input?

SPEAKER_01

08:23 - 09:33

So the question for me is much more: how do you express expertise? You can express expertise by showing someone what you're doing. You can express expertise by writing it down, by many different ways. And I think expert systems were our best attempt in AI to capture expertise in rules, where someone sat down and said, here are the rules of human gait: here's when you put your big toe forward and your heel backward, and here's how you stop stumbling. And as we now know, the set of rules, the set of language that we can command, is incredibly limited. The majority of the human brain doesn't deal with language; it deals with subconscious, numerical, perceptual things that we don't even have a verbal way of expressing. Now, when an AI system watches an expert do their job and practice their job, it can pick up things that people can't even put into writing, into books or rules. And that's where the real power is. We now have AI systems that, for example, look over the shoulders of highly paid human doctors, like dermatologists or radiologists, and they can somehow pick up those skills that no one can express in words.

SPEAKER_00

09:35 - 09:59

So you were a key person in launching three revolutions: online education, autonomous vehicles, and flying cars or VTOLs. So at a high level, and I apologize for all the philosophical questions... No apology needed, that's fine. How do you choose what problems to try and solve, and what drives you to make those solutions a reality?

SPEAKER_01

09:59 - 11:41

I have two desires in life. I want to literally make the lives of others better. Or, as we often say, maybe jokingly, make the world a better place, however funny that sounds. And second, I want to learn, I want to gain new skills. I don't want to be in a job I'm good at, because if I'm in a job I'm good at, the chance for me to learn something interesting is actually minimized. So I want to be in a job I'm bad at. That's really important to me. So now I build, for example, what people often call flying cars, which are electric vertical takeoff and landing vehicles. I'm just no expert in any of this, and it's so much fun to learn on the job what it actually means to build something like this. Now, the stuff that I've done lately, after I finished my professorship at Stanford, is really focused on what has the maximum impact on society. Transportation is something that has transformed the 21st, or 20th, century more than any other invention, in my opinion, even more than communication. Cities are different, work is different, women's rights are different because of transportation. And yet we still have a very suboptimal transportation solution, where we kill 1.2 or so million people every year in traffic. It's like the leading cause of death for young people in many countries. Where we are extremely inefficient resource-wise: just go to your average neighborhood city and look at the number of parked cars. That's a travesty, in my opinion. Or where we spend endless hours in traffic jams. And very, very simple innovations, like a self-driving car or what people call a flying car, could completely change this. And it's there. I mean, the technology is basically there. You have to close your eyes not to see it.

SPEAKER_00

11:43 - 12:36

So, lingering on autonomous vehicles, a fascinating space, some incredible work you've done throughout your career there. So let's start with DARPA: the DARPA challenges, the desert and then the urban, to the streets. I think that inspired an entire generation of roboticists and obviously sprung this whole excitement about this particular kind of four-wheeled robots we call autonomous cars, self-driving cars. So you led the development of Stanley, the autonomous car that won the race through the desert, the DARPA Grand Challenge, in 2005, and Junior, the car that finished second in the DARPA Urban Challenge and also did incredibly well, in 2007, I think. What are some painful, inspiring, or enlightening experiences from that time that stand out to you?

SPEAKER_01

12:37 - 14:10

Oh my god. Painful were all these incredibly complicated, stupid bugs that had to be found. We had a phase where Stanley, our car that eventually won the Grand Challenge, would every 30 miles just commit suicide, and we didn't know why. It ended up being that in the syncing of two computer clocks, occasionally a clock went backwards, and that negative elapsed time screwed up the entire internal logic. But it took ages to find this. There were bugs like that. I'd say enlightening is that the Stanford team immediately focused on machine learning and on software, whereas everybody else seemed to focus on building better hardware. Our analysis had been that a human being with an existing rental car can perfectly drive the course, so rather than building a better rental car, I should just replace the human being. And the human being, to me, was a conjunction of three steps. We had sensors, eyes and ears, mostly eyes. We had brains in the middle, and then we had actuators, our hands and our feet. Now, the actuators are easy to build, and the sensors are actually also easy to build. What was missing was the brain. So we had to build a human brain, and nothing was clearer to me than that the human brain is a learning machine, so why not just train our robot? So we built massive machine learning into our machine. With that, we were not just learning from human drivers, the entire speed control of the vehicle was copied from human driving, but we also had the robot learn from experience, where it made a mistake and had to recover from it and learn from it.
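A brief aside on the clock bug Thrun describes: a wall clock synced across machines can step backwards, so timing loops are usually driven by a monotonic clock, with the delta clamped defensively. A minimal illustrative sketch in Python, with hypothetical names rather than anything from the actual Stanley code:

```python
import time

def elapsed_since(last_monotonic: float) -> float:
    """Return non-negative elapsed seconds since last_monotonic.

    time.monotonic() never goes backwards, unlike wall-clock time
    (time.time()), which can jump back after a clock sync and yield
    a negative delta that poisons downstream logic.
    """
    now = time.monotonic()
    dt = now - last_monotonic
    if dt < 0:
        # Should not happen with a monotonic clock; clamp just in case.
        dt = 0.0
    return dt

# Usage: stamp = time.monotonic(); ... later ... dt = elapsed_since(stamp)
```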

SPEAKER_00

14:12 - 14:51

You mentioned the pain point of software and clocks. Synchronization seems to be a problem that continues with robotics; it's a tricky one with drones and so on. What does it take to build a system with so many constraints? You have a deadline, no time, you're unsure about everything, really. It's one of the first times that people are really even exploring it. It's not even certain that anybody can finish. When we're talking about the race through the desert, the year before, nobody finished. What does it take to scramble and finish a product, a system, that actually works?

SPEAKER_01

14:52 - 15:46

I mean, we were lucky. We were a really small team; the core of the team was four people. It was four because five wouldn't comfortably fit inside the car, but four would. And as the team leader, my job was to get pizza for everybody and wash the car and stuff like this, and repair the radiator when it broke, and debug the system. And we were very open-minded. We had like no egos involved. We just wanted to see how far we could get. What we did really, really well was time management. We were done with everything a month before the race, and we froze the entire software a month before the race. And it turned out, looking at other teams, every other team complained that if they had just one more week, they would have won. And we decided we were not going to fall into that mistake. We were going to be early. And we had an entire month to shake out the system, and we actually found two or three minor bugs in that last month that we had to fix. And we were completely prepared when the race occurred.

SPEAKER_00

15:46 - 16:07

Okay, so first of all, that's an incredibly rare achievement, being able to be done on time or ahead of time. How do you do that in your future work? What advice do you have in general? Because it seems to be so rare, especially in highly innovative projects like this; people work to the last second.

SPEAKER_01

16:07 - 17:38

Well, the nice thing about the DARPA Grand Challenge is that the problem was incredibly well defined. We were able for a while to drive the old Grand Challenge course, which had been used the year before. And then, for some reason, we were kicked out of the region, so we had to go to a different desert, the Sonoran Desert, and we were able to drive desert trails of just the same type. So there was never any debate about what the actual problem was. We didn't sit down and say, hey, should we build a car or a plane? We had to build a car. That made it very, very easy. Then I studied my own life and the lives of others and realized that the typical mistake people make is that there's this crazy bug left that they haven't found yet, and they regret it, and the bug would have been trivial to fix, they just haven't fixed it yet. And I didn't want to fall into that trap. So I built a testing team. We had a testing team that built a testing booklet of 160 pages of tests we had to go through, just to make sure we shook out our system appropriately. Wow. And the testing team was with us all the time and dictated to us: today we do railroad crossings, tomorrow we practice the start of the event. And in all of these we thought, oh my God, this has long been solved, it's trivial. And then we'd test it: oh my God, it doesn't do the railroad crossing. Why not? Oh my God, it mistakes the rails for metal barriers. We have to fix this. Yes. So it was really a continuous focus on improving the weakest part of the system, and as long as you focus on improving the weakest part of the system, you eventually build a really great system.

SPEAKER_00

17:39 - 18:04

Let me just pause on that. To me, as an engineer, it's super exciting that you were thinking like that, especially at that stage; it's brilliant that testing was such a core part of it. Maybe to linger on the point of leadership: I think it's one of the first times you were really a leader, and you've led many very successful teams since then. What does it take to be a good leader?

SPEAKER_01

18:05 - 20:50

I would say, most of all, I just take credit for the work of others, which turns out to be very convenient. I can't do all these things myself. I'm an engineer at heart, so I care about engineering. I don't know what the chicken and the egg is, but as a kid I loved computers because you could tell them to do something and they actually did it. It was very cool, and you could, like, in the middle of the night wake up at one in the morning, switch on your computer, and what you told it to do yesterday it would still do. That was really cool. Unfortunately, that didn't quite work with people. You go to people and tell them what to do and they don't do it, and they hate you for it. Or they do it today and then a day later they stop doing it. So the question really became, how can you put yourself into the brain of people, as opposed to computers? Computers are super dumb, so dumb that if people were as dumb as computers, I wouldn't want to work with them. But people are smart, and people are emotional, and people have pride, and people have aspirations. So how can I connect to that? And that's where most leadership just fails, because many, many engineers turned managers believe they can treat their team the same way they treat a computer, and it just doesn't work that way. It's just really bad. So how can I connect to people? And it turns out, as a college professor, the wonderful thing you do all the time is empower other people. Like, your job is to make your students look great; that's all you do. You're the best coach. And it turns out, if you do a fantastic job making your students look great, they actually love you, and their parents love you, and they give you all the credit for stuff you don't deserve. The truth is, all my students were smarter than me. All the great stuff invented at Stanford was their stuff, not my stuff. And they give me credit and say, oh, Sebastian... but I was just making them feel good about themselves. So the question really is, can you take a team of people, and what does it take to get them to connect to what they actually want in life and turn this into productive action? It turns out every human being that I know has incredibly good intentions. I've really never met a person with bad intentions. I believe every person wants to contribute. I think every person I've met wants to help others. It's amazing what kind of an urge we have not just to help ourselves but to help others. So how can we empower people and give them the right framework so they can accomplish this? In the moments when it works, it's magical, because you see the confluence of people being able to make the world a better place and deriving enormous confidence and pride out of it. That's when my environment works best: these are moments where I can disappear for a month and come back and things still work. It's very hard to accomplish, but when it works, it's amazing.

SPEAKER_00

20:51 - 21:06

So, I agree very much, and it's not often heard, that most people in the world have good intentions. At the core, their intentions are good and they're good people. That's a beautiful message; it's not often heard.

SPEAKER_01

21:06 - 22:53

We make this mistake, and this is something a friend of mine pointed out, that we judge ourselves by our intentions and others by their actions. And I think the biggest skill... I mean, here in Silicon Valley we're full of engineers who have very little empathy and are kind of befuddled by why it doesn't work for them. The biggest skill I think people should acquire is to put themselves into the position of the other and listen, and listen to what the other has to say. And they'd be shocked how similar they are to themselves. And they might even be shocked how their own actions don't reflect their intentions. I often have conversations with engineers where I say, look, hey, I love you, you're doing a great job, and by the way, what you just did has the following effect. Are you aware of that? And then people would say, oh my god, no, I wasn't, because my intention was this. And I say, yeah, I trust your intention, you're a good human being, but just to help you in the future: if you keep expressing it that way, people will just hate you. And I've had many instances where people say, oh my God, thank you for telling me this, because it wasn't my intention to look like an idiot, it wasn't my intention to hurt other people, I just didn't know how to do it. It's very simple, by the way. There's this little book, Dale Carnegie, 1936, How to Win Friends and Influence People. It has the entire bible; you just read it and you're done and you apply it every day. And I wish I were good enough to apply it every day. But it's just simple things, right? Like be positive, remember people's names, smile, and eventually have empathy. Like really think that the person that you hate and think is an idiot is actually just like yourself: a person who's struggling, who means well, and who might need help, and guess what, you need help too.

SPEAKER_00

22:53 - 23:37

I've recently spoken with Stephen Schwarzman. I'm not sure if you know who that is. He's on my list, I know. So he says, sort of to expand on what you're saying, that one of the biggest things you can do is hear people when they tell you what their problem is, and then help them with that problem. He says it's surprising how few people actually listen to what troubles others, because it's right there in front of you, and you can benefit the world the most, and in fact yourself and everybody around you, by just hearing the problems and solving them.

SPEAKER_01

23:37 - 23:57

I mean, that's my little history of engineering: when I was engineering with computers, I didn't care what the computer's problems were. I just told it what to do and it did it. And it just doesn't work this way with people. It doesn't work with me. If you come to me and say, do A, I do the opposite.

SPEAKER_00

24:00 - 24:27

But let's return to the comfortable world of engineering. Can you tell me, in broad strokes, how you see it, because you were at the core of starting it, at the core of driving it: the technical evolution of autonomous vehicles, from the first DARPA Grand Challenge to the incredible success we see with the program you started with the Google self-driving car, and Waymo, and the industry that sprung up, all the different kinds of approaches, debates, and so on?

SPEAKER_01

24:27 - 27:08

Well, the idea of self-driving cars goes back to the 80s. There was a team in Germany and a team at Carnegie Mellon that did some very pioneering work. But back in the day, I'd say, the computers were so deficient that even the best professors and engineers in the world basically stood no chance. It then folded into a phase where the US government spent at least half a billion dollars that I could count on research projects. But the way the procurement worked, a stack of paper describing lots of stuff that was almost never going to be built was a successful product of a research project. So we trained our researchers to produce lots of paper. That all changed with the DARPA Grand Challenge, and I really have to credit the ingenious people at DARPA, and the US government and Congress, that adopted a completely new funding model where they said, let's not fund effort, let's fund outcomes. And it sounds very trivial, but there was no tax code that allowed the use of congressional tax money for a prize. It was all effort based. So if you put a hundred hours in, you could charge a hundred hours. If you put in a thousand hours, you could bill a thousand hours. By changing the focus and making it a prize, we don't pay you for development, we pay you for the accomplishment, they automatically drew out all these contractors who were used to the drug of getting money per hour, and they drew in a whole bunch of new people. And these people were mostly crazy people. There were people who had a car and a computer, and they wanted to make a million bucks. The million bucks of official prize money was then doubled. And they felt, if I put my computer in my car and program it, I can be rich. And that was so awesome. Like, half the teams: there was a team that was a surfer dude, and they had like two surfboards on their vehicle and brought along these fashion girls, super cute girls, like twin sisters. And you could tell these guys were not your common government contractors. It felt very different from all these big multi-million, multi-billion dollar contractors of the US government. And there was a great reset. Universities moved in. I was very fortunate at Stanford that I had just received tenure, so I couldn't be fired no matter what I did, otherwise I couldn't have done it. And I had enough money to finance this thing, and I was able to attract a lot of money from third parties. And even car companies moved in. They kind of moved in very quietly, because they were super scared to be embarrassed that their car would flip over. But Ford was there, and Volkswagen was there, and a few others, and GM was there. So it kind of reset the entire landscape of people. And if you look at who's a big name in self-driving cars today, these were mostly people who participated in those challenges.

SPEAKER_00

27:10 - 27:30

Okay, that's incredible. Can you just comment quickly on your sense of the lessons learned from that kind of funding model, and the research that's going on in academia in terms of producing papers? Is there something to be learned and scaled up bigger, having these kinds of grand challenges that could improve outcomes?

SPEAKER_01

27:31 - 29:07

So I'm a big believer in focusing on kind of an end-to-end system. I'm a really big believer in systems building. I've always built systems in my academic career, even though I do a lot of math and abstract stuff. But it's all derived from the idea of: let's solve a real problem. And it's very hard for me as an academic to say it's enough to work on a component of a problem. Like, there are fields like non-monotonic logic or AI planning systems where people believe that a certain style of problem solving is the ultimate end objective. And I would always turn it around and say, hey, what problem would my grandmother care about, someone who doesn't understand computer technology and doesn't want to understand it? How could I make her love what I do? Because only then do I have an impact on the world. I can easily impress my colleagues; that's much easier. But impressing my grandmother is very, very hard. So I've always thought, if I could build a self-driving car that my grandmother can use even after she loses her driving privileges, so she can still get around, or we save maybe a million lives a year, that would be really impressive. And there are so many problems like this, like the problem of curing cancer or living twice as long. Once the problem is defined, of course, I can't solve it in its entirety. It sometimes takes tens of thousands of people to find a solution, and there's no way you can raise an army of 10,000 at Stanford. So you've got to build a prototype, a meaningful prototype. And the DARPA Grand Challenge was beautiful, because it told me what this prototype had to do. I didn't even have to think about what it had to do; I just had to read the rules. And it was really, really beautiful.

SPEAKER_00

29:07 - 29:20

And at its most beautiful, you think what academia could aspire to is to build a prototype at the systems level that gives you an inkling that this problem could be solved with this prototype?

SPEAKER_01

29:20 - 32:11

First of all, I want to emphasize what academia really is, and I think people misunderstand it. First and foremost, academia is a way to educate young people. First and foremost, a professor is an educator, no matter whether you are at a small suburban college or whether you are a Harvard or Stanford professor. That's not the way most people think of themselves in academia, because we have this kind of competition going on for citations and publications. That's a measurable thing, but it is secondary to the primary purpose of educating people to think. Now, in terms of research, most of the great science, the great research, comes out of universities. You can trace almost everything back, including Google, to universities. So there's nothing fundamentally broken here. It's a good system, and I think America has the finest universities on the planet. We can talk about reach and how to reach people outside the system; that's a different topic, but the system itself is a good system. If I had one wish, I would say it would be really great if there was more debate about what the great big problems are in society, and a focus on those, and most of them are interdisciplinary. Unfortunately, it's very easy to fall into a disciplinary viewpoint where your problem is dictated by what your closest colleagues believe the problem is. It's very hard to break out and say, well, there's an entire new field of problems. To give an example: prior to working on self-driving cars, I was a roboticist and a machine learning expert, and I wrote books on robotics, something called probabilistic robotics; it's a very method-driven kind of viewpoint of the world. I built robots that acted in museums as tour guides and led children around, something that at the time was moderately challenging. When I started working on cars, several colleagues told me, Sebastian, you're destroying your career, because in our field of robotics, cars look like a gimmick. They're not expressive enough. They can only push the throttle and the brakes. There's no dexterity, there's no complexity. It's just too simple. And no one came to me and said, wow, if you solve that problem, you could save a million lives. Of all the robotics problems that I've seen in my life, I would say self-driving cars, transportation, is the one that has the most hope for society. So how come the robotics community wasn't all over it? It was because we focused on methods and solutions and not on problems. Like, if you go around today and ask your grandmother, what bugs you, what really makes you upset, I challenge any academic to do this and then realize how far your research is probably away from that.

SPEAKER_00

32:11 - 32:14

At the very least, that's a good thing for academics to deliberate on.

SPEAKER_01

32:15 - 33:23

The other thing that's really nice about Silicon Valley is that Silicon Valley is full of smart people outside academia, right? So there are the Larry Pages and Mark Zuckerbergs of the world, who are every bit as smart as, or smarter than, the best academics I've met in my life. And what they do is at a different level. They build the systems. They build the customer-facing systems. They build things that people can use without technical education, and they are inspired by research and inspired by scientists. They hire the best PhDs from the best universities for a reason. So this kind of vertical integration, between the real product, the real impact, and the real thought, the real ideas, actually works surprisingly well in Silicon Valley. It did not work as well in other places in this nation. When I worked at Carnegie Mellon, we had one of the finest computer science universities, but there weren't those people in Pittsburgh who would be able to take these very fine computer science ideas and turn them into massively impactful products. That symbiosis seems to exist pretty much only in Silicon Valley, and maybe a bit in Boston and Austin.

SPEAKER_00

33:23 - 33:48

And Stanford, that's really interesting. So if we look a little bit further on, from the DARPA Grand Challenge and the launch of the Google self-driving car, what do you see as the state and the challenges of autonomous vehicles as they stand now, in actually achieving that huge scale and having a huge impact on society?

SPEAKER_01

33:48 - 36:26

I'm extremely proud of what has been accomplished, and again, I'm taking a lot of credit for the work of others. And I'm actually very optimistic. People have been kind of worrying, is it too fast, is it too slow, why isn't it here yet, and so on. It is actually quite an interesting, hard problem, in that for a self-driving car, to build one that manages 90% of the problems encountered in everyday driving is easy. We can literally do this over a weekend. To do 99% might take a month. Then there's 1% left. And 1% would mean that you still have a fatal accident every week, which is very unacceptable. So now you work on this 1%, and 99% of that remaining 1% is actually relatively easy, but now you're down to like a hundredth of one percent, and it's still completely unacceptable in terms of safety. So the variety of things you encounter is just enormous. And that gives me enormous respect for human beings, that we're able to deal with the couch on the highway, or the deer in the headlights, or the blown tire that we've never been trained for, and all of a sudden have to handle in an emergency situation, and we often do so very, very successfully. It's amazing, from that perspective, how safe driving actually is, given how many millions of miles we drive every year in this country. We are now at a point where I believe the technology is there, and I've seen it. I've seen it in Waymo, I've seen it in Aptiv, I've seen it in Cruise, in a number of companies, in Voyage, where vehicles are now driving around and basically flawlessly are able to drive people around in limited scenarios. In fact, you can go to Vegas today and summon a Lyft, and if you get the right setting in your app, you'll be picked up by a driverless car. Now, there are still safety drivers in there, but that's a fantastic way to kind of learn what the limits of the technology are today. And there are still some glitches, but the glitches have become very, very rare. I think the next step is going to be to down-cost it and to harden it; the equipment, the sensors, are not quite at automotive-grade standards yet. And then to work on the business models, to really kind of go somewhere and make the business case. And the business case is hard work. It's not just, oh my god, we have this capability, people are just going to buy it. You have to make it affordable. You have to find the social acceptance of people. None of the teams yet has been gutsy enough to drive around without a person inside the car. And that's the next magical hurdle, when we will be able to send these vehicles around completely empty in traffic. And I mean, I wait every day for the news that Waymo has just done this.

SPEAKER_00

36:28 - 37:26

So, you know, you mentioned gutsy. Let me ask some maybe unanswerable, maybe edgy questions, in terms of how much risk is required, how much guts, in terms of leadership style. It would be good to contrast approaches, and I don't think anyone knows what's right. But if we compare Tesla and Waymo, for example, Elon Musk and the Waymo team, there are slight differences in approach. So on the Elon side there's more, I don't know what the right word to use is, but aggression in terms of innovation. And on the Waymo side there's more of a cautious, safety-focused approach to the problem. What do you think it takes? Which leadership style at which moment is right? Which approach is right?

SPEAKER_01

37:28 - 38:58

Look, I don't sit on either of those teams, so I'm unable to even verify whether what someone says is correct. At the end of the day, every innovator in that space will face a fundamental dilemma, and I would say you could put aerospace titans into the same bucket, which is that you have to balance public safety with your drive to innovate. And this country in particular, the United States, has a hundred-plus-year history of doing this very successfully. Air travel is, what, a hundred times as safe per mile as ground travel, as cars. And there's a reason for it: people have found ways to be very methodical about ensuring public safety while still being able to make progress on important aspects, for example on engine noise and fuel consumption. So I think those practices are proven and they actually work. We live in a world safer than ever before. And yes, there will always be the possibility that something goes wrong. There's always the possibility that someone makes a mistake or there's an unexpected failure. We can never guarantee 100% absolute safety, other than by just not doing it. But I think I'm very proud of the history of the United States. I mean, we've dealt with much more dangerous technologies, like nuclear energy, and kept that safe too. We have nuclear weapons and we keep those safe. So we have methods and procedures that really balance these two things very, very successfully.

SPEAKER_00

38:59 - 39:38

You've mentioned a lot of great autonomous vehicle companies that are taking sort of the level four, level five approach: they jump into full autonomy with a safety driver and take that kind of approach, also through simulation and so on. There's also the approach that Tesla Autopilot is taking, which is kind of incrementally taking a level two vehicle and using machine learning, learning from the driving of human beings, and trying to creep up, trying to incrementally improve the system until it's able to achieve level four autonomy, so full autonomy in certain kinds of geographic regions. What are your thoughts on these contrasting approaches?

SPEAKER_01

39:39 - 42:27

Well, first of all, I'm a very proud Tesla owner, and I literally use the Autopilot every day, and it literally has kept me safe. It is a beautiful technology, specifically for highway driving when I'm slightly tired, because then it turns me into a much safer driver, and I'm a hundred percent confident that's the case. In terms of the right approach, I think the biggest change I've seen since I ran the Waymo team is this thing called deep learning. Deep learning was not a hot topic when I started Waymo, or Google's self-driving cars. It was there; in fact, we started Google Brain at the same time in Google X, so I invested in deep learning, but people didn't talk about it. It was not a hot topic. And now there's a shift of emphasis from a more geometric perspective, where you use geometric sensors that give you a full 3D view and you do geometric reasoning, like, this box over here might be a car, towards a more human, machine learning perspective: let's just learn about it, this looks like the thing I've seen ten thousand times before, so maybe it's the same thing. And that has really put all these approaches on steroids. At Udacity, we teach a course in self-driving cars. In fact, I think we've graduated over 20,000 or so people on self-driving car skills, so every self-driving car team in the world now uses our engineers. And in this course, the very first homework assignment is to do lane finding on images. And lane finding on images, for the layman, what this means is: you put a camera into your car, or you open your eyes, and you know where the lane is, right? So you can stay inside the lane with your car. Humans can do this super easily. You just look and you know where the lane is, intuitively. For machines, for a long time this was super hard, because people would write these kinds of crazy rules: if there are white lane markers, and here's what white really means, and this is not quite white enough so it's not white, or maybe the sun is shining, so when the sun shines this is still white, and this is a straight line, or maybe it's not quite a straight line because the road is curved, and do we know there are six feet between lane markings, or 12 feet, whatever it is. And now, what the students are doing is they take machine learning. Instead of writing these crazy rules for the lane markers, they say: let's take an hour of driving and label it, and tell the vehicle by hand, this is actually the lane. These are examples, and have the machine find its own rules for what lane markings are. And within 24 hours, every student who's never done any programming before in this space can write a perfect lane finder, as good as the best commercial lane finders. And that's completely amazing to me. We've seen progress using machine learning that completely dwarfs anything I saw 10 years ago.
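To make the contrast concrete, here is a toy sketch of the learned approach described above: hand-label which pixels are lane markings in a few frames, then let a classifier find its own rule instead of hand-coding what counts as white enough. Real lane finders use convolutional networks and far richer features; the function names and data layout below are hypothetical, not Udacity's actual assignment.

```python
# Toy learned lane finder: per-pixel classifier trained on hand-labeled frames.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pixel_features(rgb_frame: np.ndarray) -> np.ndarray:
    """Flatten an HxWx3 image into one RGB feature row per pixel."""
    return rgb_frame.reshape(-1, 3).astype(np.float32) / 255.0

def train_lane_pixel_classifier(frames, masks):
    """frames: list of HxWx3 images; masks: same-shape boolean arrays,
    True where a human labeled the pixel as lane marking (assumed data)."""
    X = np.vstack([pixel_features(f) for f in frames])
    y = np.concatenate([m.reshape(-1) for m in masks])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y)  # the "rule" for lane pixels is now learned weights
    return clf

def predict_lane_mask(clf, frame):
    """Return an HxW boolean mask of predicted lane-marking pixels."""
    probs = clf.predict_proba(pixel_features(frame))[:, 1]
    return probs.reshape(frame.shape[:2]) > 0.5
```

The point is simply that the threshold logic is learned from the labeled examples rather than written down by hand.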

SPEAKER_00

42:27 - 42:47

Yeah, and just as a side note, the self-driving car nanodegree, the fact that you launched that many years ago now, maybe four years ago, three years ago, is incredible. That's a great example of systems-level thinking, sort of taking an entire course that teaches the entire problem. I definitely recommend it to people.

SPEAKER_01

42:47 - 44:02

It's been kind of super popular, and it's become actually incredibly high quality, with Mercedes and various other companies in that space, and we find that engineers from Tesla and Waymo are taking it today. The insight was two things. One is that existing universities will be very slow to move, because of department lines; there's no department for self-driving cars. So between mechanical engineering and electrical engineering and computer science, getting these folks together into one room is really, really hard, and every professor listening here will probably agree to that. And secondly, even if all the great universities did this, and none so far has developed a curriculum in this field, it is just a few thousand students that could partake, because all the great universities are super selective. So how about people in India, how about people in China, or in the Middle East, or in Indonesia, or Africa? Why should those be excluded from the skill of building self-driving cars? Are they any dumber than we are? Are they any less privileged? And the answer is, we should just give everybody the skill to build a self-driving car, because if we do this, then we have like a thousand self-driving car startups. And if 10% succeed, that's like a hundred, which means a hundred countries will now have self-driving cars and be safer.

SPEAKER_00

44:03 - 44:43

It's kind of interesting to imagine, impossible to quantify, but the number, you know, over a period of several decades, the impact that a single course has, the ripple effect through society. I just recently talked to one of the creators of Cosmos, so it's interesting to think about how many scientists that show launched. Yeah. And so, in terms of impact, I can't imagine a better course than the self-driving car course. There are other, more specific disciplines, like deep learning and so on, that Udacity is also teaching, but self-driving cars, it's a really, really interesting course.

SPEAKER_01

44:43 - 45:31

Yeah, and it came at the right moment. It came at a time when there were a bunch of acqui-hires. An acqui-hire is the acquisition of a company not for its technology or its products or business, but for its people. So an acqui-hire means maybe a company of 70 people, they have no product yet, but they're super smart people, and you pay a certain amount of money per person. So I took acqui-hires like GM-Cruise and Uber and others and did the math: hey, how many people were there, and how much money was paid? And as a lower bound, I estimated the value of a self-driving car engineer in these acquisitions to be at least $10 million. Think about this. You get yourself a skill, you team up and build a company, and your worth is now $10 million. That's kind of cool. What other thing could you do in life to be worth $10 million within a year?

SPEAKER_00

45:32 - 46:08

Yeah, amazing. But to come back for a moment to deep learning and its application in autonomous vehicles, what are your thoughts on Elon Musk's statement, a provocative statement perhaps, that lidar is a crutch? So this geometric way of thinking about the world may be holding us back, and what we should instead be doing, in this robotic space, in this particular space of autonomous vehicles, is using cameras as the primary sensor and using computer vision and machine learning as the primary way to... I have two comments. I think, first of all, we all know that

SPEAKER_01

46:10 - 47:43

people can drive cars without lidars in their heads, because we only have eyes, and we mostly just use eyes for driving. Maybe we use some other perception about our bodies, accelerations, occasionally our ears, certainly not our noses. So the existence proof is there that eyes must be sufficient. In fact, we could even drive a car if someone put a camera out and gave us the camera image with no latency; we would be able to drive a car that way, the same way. So a camera is also sufficient. Secondly, I really love the idea that in the Western world we have many, many different people trying different hypotheses. It's almost like an anthill. If an anthill tries to forage for food, right, you can sit there as two ants and agree what the perfect path is, and then every single ant marches to where the most likely location of food is, or you can just spread out. And I promise you, the spread-out solution will be better, because if the discussing, philosophical, intellectual ants get it wrong, and they're all moving in the wrong direction, they're going to waste a day, and then they're going to discuss again for another week. Whereas if all these ants go in random directions, someone's going to succeed, and they're going to come back and claim victory and get the Nobel Prize, or whatever they want, and then they'll all march in the same direction. And that's what's great about society, about Western society. We're not plan-based, we're not centrally based. We don't have a Soviet Union-style central government that tells us where to forage. We just forage. We start a C corp, we get investor money, we go out and try it out. And who knows who's going to win?

SPEAKER_00

47:45 - 47:58

I like it. When you look at the long term vision of autonomous vehicles, do you see machine learning as fundamentally being able to solve most of the problem? So learning from experience.

SPEAKER_01

47:58 - 51:21

Well, we should be very clear about what machine learning is and is not, and I think there's a lot of confusion. What it is today is a technology that can go through large databases of repetitive patterns and find those patterns. As an example, we did a study at Stanford two years ago where we applied machine learning to detect skin cancer in images. We harvested, or built, a dataset of 128,000 skin photographs that had all been biopsied for what the actual situation was. Those included melanomas and carcinomas, and they also included rashes and other skin conditions, lesions. And then we had a network find those patterns, and it was by and large able to detect skin cancer with an iPhone as accurately as the best board-certified Stanford-level dermatologists. We proved that. Now, this thing was great at this one thing, finding skin cancer, but it couldn't drive a car. So the difference to human intelligence is that we do all these many, many things, and we can often learn from a very small dataset of experiences, whereas machines still need very large datasets, and things that are very repetitive. Now, that's still super impactful, because almost everything we do is repetitive, so that's going to transform human labor. But it's not this almighty general intelligence. We're really far away from a system that will exhibit general intelligence. To that end, I actually commiserate the naming a little bit, because artificial intelligence, if you believe Hollywood, is immediately mixed into the idea of human suppression and machine superiority. I don't think we're going to see this in my lifetime. I don't think human suppression is a good idea. I don't see it coming. I don't see the technology being there. What I see instead is a very pointed, focused pattern recognition technology that's able to extract patterns from large datasets. And in doing so, it can be super impactful. Super impactful. Let's take the impact of artificial intelligence on human work. We all know that it takes something like 10,000 hours to become an expert. If you're going to be a doctor or a lawyer, or even a really good driver, it takes a certain amount of time to become an expert. Machines now are able, and it has been shown, to observe people becoming experts, to observe experts, and then extract those rules from experts in some interesting way; that could go from law to sales to driving cars to diagnosing cancer. And then that capability can be given to people who are completely new in their job. We now can, and it's been done in many, many instances, use machine learning to make people experts on their very first day of their work. Think about the impact. If your doctor is still in their first 10,000 hours, you have a doctor who's not quite an expert yet. Who would not want the doctor who is the world's best expert? And now we can leverage machines to really eradicate the error, the decision-making error and lack of expertise, of human doctors.
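The skin-cancer result described above came from fine-tuning a large convolutional network, pretrained on ordinary images, on the biopsy-labeled photographs. A generic transfer-learning sketch of that recipe, using a stand-in ResNet-18 and a hypothetical image folder rather than the actual Stanford dataset or code:

```python
# Illustrative transfer learning: fine-tune a pretrained CNN on labeled skin images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: skin_images/<diagnosis_class>/<image>.jpg
data = datasets.ImageFolder("skin_images", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new classifier head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, just for the sketch
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```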

SPEAKER_00

51:22 - 51:46

They could save your life. Can you linger on that for a little bit: in which way do you hope machines in the medical field could help assist doctors? You mentioned this sort of accelerating of the learning curve, where people, if they start a job or are in their first 10,000 hours, can be assisted by machines. How do you envision that assistance looking?

SPEAKER_01

51:46 - 54:20

So we built this app for an iPhone that can detect and classify and diagnose skin cancer, and we proved two years ago that it is pretty much as good as or better than the best human doctors. Let me tell you a story. There's a friend of mine, let's call him Ben. Ben is a very famous venture capitalist. He goes to his doctor, and the doctor looks at a mole and says, hey, that mole is probably harmless. And for some very funny reason, the doctor pulls out that phone with our app (he was a collaborator in our study), and the app says, no, no, no, no, this is a melanoma. For background: melanomas are skin cancers, and skin cancer is the most common cancer in this country. Melanomas can go from stage zero to stage four within less than a year. Stage zero means you can basically cut it out yourself with a kitchen knife and be safe, and stage four means your chances of living five more years are less than 20%. So it's a very serious, serious condition. So this doctor, who had taken out the iPhone, looked at the iPhone and was a little bit puzzled, and said, well, just to be safe, let's cut it out and biopsy it. That's the technical term for: let's get in-depth diagnostics that are more than just looking at it. And it came back as cancerous, as a melanoma, and it was then removed. And my friend Ben, I was hiking with him, and we were talking about AI, and I told him I do this work on skin cancer, and he said, oh, funny, my doctor just had an iPhone that found my cancer. Wow. So I was like completely intrigued; I didn't even know about this. So here's a person, I mean, this is a real human life, right? Yes. Now, who doesn't know somebody who has been affected by cancer? Cancer is cause of death number two. Cancer is this kind of disease that is mean in the following way: most cancers can actually be cured relatively easily if we catch them early. And the reason why we don't tend to catch them early is because they have no symptoms. Like, your very first symptom of a gallbladder cancer or a pancreatic cancer might be a headache, and when you finally go to your doctor because of these headaches or your back pain and you're being imaged, it's usually stage four plus, and that's the time when the chances of curing it might have dropped to single-digit percentages. So if we could leverage AI to inspect your body on a regular basis, without even a doctor in the room, maybe when you take a shower or what have you, and I know that sounds creepy, then we might be able to save millions and millions of lives.

SPEAKER_00

54:22 - 54:41

You've mentioned the concern that people have about the near-term impacts of AI in terms of job loss, and you've mentioned being able to assist doctors, being able to assist people in their jobs. Do you have a worry about people losing their jobs, or the economy being affected by the improvements in AI?

SPEAKER_01

54:42 - 55:03

Yeah, anybody concerned about job losses, please come to udacity.com. We teach contemporary tech skills, and we have a kind of implicit job promise. We often measure it; we place way over 50% of our graduates in new jobs, and they're very satisfied about it. And it costs almost nothing, it costs like a thousand, fifteen hundred bucks max, something like that.

SPEAKER_00

55:03 - 55:14

And so there's a cool new program you agreed to with the US government, guaranteeing that you will help give scholarships that educate people in this kind of situation.

SPEAKER_01

55:14 - 57:57

We're working with the US government on an idea of basically rebuilding the American dream. So Udacity has just dedicated 100,000 scholarships for citizens of America, for various levels of courses that eventually will get you a job. Those courses are all somewhat related to the tech sector, because the tech sector is kind of the hottest sector right now, and they range from entry-level digital marketing to very advanced self-driving car engineering. And we're doing this with the White House, because we think it's bipartisan. It's an issue that, if you really want to make America great, being able to be part of the solution and live the American dream requires us to be proactive about our education and our skill set. It's just the way it is today, and it's always been this way. We've always had this American dream to send our kids to college, and now the American dream has to be to send ourselves to college. We can do this very, very efficiently, in the evenings, online, at all ages. Our learners go from age 11 to age 80. I just traveled through Germany, and the guy in the train compartment next to me was one of my students. Wow, that's amazing. How do you think about the impact? We've become the educator of choice for now officially six countries, or five countries, most in the Middle East, like Saudi Arabia, and in Egypt. In Egypt, we just had a cohort graduate where we had 1,100 high school students go through programming skills, proficient at the level of a computer science undergrad. And we had a 95% graduation rate, even though everything is online, which is kind of tough, but we've kind of figured out how to make this effective. The vision is very, very simple. The vision is that education ought to be a basic human right. It cannot be locked up behind ivory tower walls, only for the rich people, for the parents who might even bribe their way into the system, and only for young people, and only for people from the right demographics and the right geography, and possibly even the right race. It has to be opened up to everybody. If we are truthful to the human mission, if we are truthful to our values, we are going to open up education to everybody in the world. So Udacity's pledge of over 100,000 scholarships, I think, is the biggest pledge of scholarships ever in terms of numbers, and we're working with the White House and with very accomplished CEOs like Tim Cook from Apple and others to really bring education to everywhere in the world.

SPEAKER_00

57:57 - 58:18

Not to ask you to pick the favorite of your children... But at this point it's Jasper, I only have one that I know of. Okay, good. In this particular moment, what nanodegree, what set of courses are you most excited about at Udacity? Or is that too impossible to pick?

SPEAKER_01

58:18 - 59:21

I've been super excited about something we haven't launched yet and are still building. When we talk to our partner companies, and we now have a very strong footing in the enterprise world, and to our students, we've always focused on these hard skills, like programming skills or math skills or building skills or design skills. And a very common ask is soft skills. Like, how do you behave in your work? How do you develop empathy? How do you work in a team? What are the very basics of management? How do you do time management? How do you advance your career in the context of a broader community? And that's something that we haven't done very well ourselves, and I would say most universities are doing it very poorly as well, because we are so obsessed with individual test scores and pay so little attention to teamwork in education. So that's something I see us moving into as a company, because I'm excited about this, and I think, look, we can teach people tech skills and they're going to be great, but if we teach people empathy, that's going to have the same impact.

SPEAKER_00

59:21 - 59:25

Maybe harder than self-driving cars? But I don't think so.

SPEAKER_01

59:25 - 01:00:19

I think the rules are really simple. You just have to want to engage. Literally, in school, in K-12, we teach kids to get the highest math score, and if you are a rational human being, you might conclude from this education that having the best math score and the best English score makes me the best leader. And it turns out not to be the case. It's actually really wrong. Because, first of all, in terms of math scores, I think it's perfectly fine to hire someone with great math skills. You don't have to do it yourself. You can't hire someone to have empathy for you, that's much harder, but you can always hire someone with great math skills. And we live in a fluid world where we constantly deal with other people, and it's a beauty. It's not a nuisance, it's a beauty. So if we somehow develop that muscle, so that we can do that well and empower others in the workplace, I think we're going to be super successful.

SPEAKER_00

01:00:19 - 01:00:59

And I know many fellow roboticists and computer scientists whom I will insist take this course. That would be great. That would be great. Many, many years ago, in 1903, the Wright brothers flew in Kitty Hawk for the first time, and you've launched a company of the same name, Kitty Hawk, with the dream of building flying cars, eVTOLs. So, at the big picture level, what are the big challenges of making this thing, which has inspired generations of people about what the future looks like, actually happen? What does it take? What are the biggest challenges?

SPEAKER_01

01:01:00 - 01:04:41

So flying cars has always been a dream. Every boy, every girl wants to fly, let's be honest. Yes. And let's go back in your own history of dreaming of flying. Honestly, my single most remembered childhood dream has been a dream where I was sitting on a pillow and I could fly. I was like five years old. I remember maybe three dreams from my childhood, but that's the one I remember most vividly. And then Peter Thiel famously said, we were promised flying cars and they gave us 140 characters, pointing at Twitter, which at the time limited message size to 140 characters. So we're coming back now to really go for this super impactful stuff like flying cars. And to be precise, they're not really cars. They don't have wheels. They're actually much closer to a helicopter than anything else. They take off vertically and they fly horizontally, but they have important differences. One difference is that they are much quieter. We just released a vehicle called Project Heaviside that can fly over you as low as a helicopter and you basically can't hear it. It's like 38 decibels. If you were inside a library, you might be able to hear it, but anywhere outdoors your ambient noise is higher. Secondly, they're much more affordable, much more affordable than helicopters. And the reason is that helicopters are expensive for many reasons. There are lots of single points of failure in a helicopter. There's a bolt between the blades that's called the Jesus bolt, and the reason why it's called the Jesus bolt is that if this bolt breaks, you will die. There is no second chance in helicopter flight. Whereas we have this distributed mechanism. When you go from gasoline to electric, you can now have many, many small motors as opposed to one big motor. And that means if you lose one of those motors, it's not a big deal. Heaviside has eight of those motors; if it loses one, there are seven left. You can take off just like before and land just like before. We are now also moving into a technology that doesn't require a commercial pilot, because on some level, flight is actually easier than ground transportation. In self-driving cars, the world is full of children and bicycles and other cars and mailboxes and curbs and shrubs and whatever, all these things you have to avoid. When you go above the buildings and the tree lines, there's nothing there. I mean, you can do the test right now: look outside and count the number of things you see flying. I'd be shocked if you could see more than two things. It's probably just zero. In the Bay Area, the most I've ever seen was six, and maybe it's 15 or 20, but not 10,000. So the sky is very ample and very empty and very free. So the vision is, can we build a socially acceptable mass transit solution for daily transportation that is affordable? And we have an existence proof. Heaviside can fly 100 miles in range with still 30 percent electric reserves. It can fly up to like 180 miles an hour. We know that that solution at scale would make your ground transportation 10 times as fast as a car, based on US Census or similar data, which means you would take your 300 hours of yearly commute down to 30 hours, giving you 270 hours back. Who doesn't hate traffic? Like, I hate it. Give me the person who doesn't hate traffic. Every time I'm in traffic, I hate it. And if we could free the world from traffic, we have technology. We can free the world from traffic. We have technology. It's there. We have an existence proof. 
It's not a technological problem anymore.

SPEAKER_00

01:04:42 - 01:04:56

Do you think there is a future where tens of thousands, maybe hundreds of thousands of both delivery drones and flying cars of this kind, eVTOLs, fill the sky?

SPEAKER_01

01:04:56 - 01:06:41

I absolutely believe this. Obviously, societal acceptance is a major question, and of course safety is too. I believe safety will have to exceed ground transportation safety, and that has already happened for aviation, for commercial aviation. In terms of acceptance, I think one of the key things is noise. That's why we are focusing relentlessly on noise, and we built perhaps the quietest electric VTOL vehicle ever built. The nice thing about the sky is that it's three-dimensional. Any mathematician will immediately recognize the difference between the one dimension of a regular highway and the three dimensions of the sky. But to make it clear for the layman, say you want to add a hundred vertical lanes to Highway 101 in San Francisco because you believe building a hundred vertical lanes is the right solution. Imagine how much it would cost to stack a hundred vertical lanes physically onto 101. It would be prohibitive. It would consume the world's GDP for an entire year, just for one highway. It's amazingly expensive. In the sky, it would just be a reconfiguration of a piece of software, because all these lanes are virtual. That means any vehicle that is in conflict with another vehicle would just go to a different altitude, and the conflict is gone. And if you don't believe this, that's exactly how commercial aviation works. When you fly from New York to San Francisco and another plane flies from San Francisco to New York, they are at different altitudes, so they don't hit each other. It's a solved problem for the jet space, and it will be a solved problem for the urban space. There are companies like Google Wing and Amazon working on very innovative solutions for airspace management, and they use exactly the same principles as we use today to route today's jets. There's nothing hard about this.
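As a minimal sketch of the idea that these lanes are just software, here is a short, hypothetical Python example (not Kitty Hawk's or any real airspace system's code) that assigns conflicting routes to different virtual altitude lanes with a greedy graph coloring; the conflict rule, separation distance, and names are illustrative assumptions.

# Hypothetical sketch: conflicting routes get different virtual altitude lanes.
from itertools import combinations

def routes_conflict(route_a, route_b, min_sep_km=0.5):
    # Assumption: a route is a list of (x, y) waypoints in km; two routes
    # conflict if any pair of waypoints comes within min_sep_km of each other.
    return any(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < min_sep_km
        for ax, ay in route_a
        for bx, by in route_b
    )

def assign_altitude_lanes(routes):
    # Build the conflict graph, then greedily give each route the lowest
    # lane index not used by any conflicting route (graph coloring).
    conflicts = {i: set() for i in range(len(routes))}
    for i, j in combinations(range(len(routes)), 2):
        if routes_conflict(routes[i], routes[j]):
            conflicts[i].add(j)
            conflicts[j].add(i)
    lanes = {}
    for i in range(len(routes)):
        taken = {lanes[j] for j in conflicts[i] if j in lanes}
        lanes[i] = next(k for k in range(len(routes) + 1) if k not in taken)
    return lanes  # route index -> altitude lane index

# Example: two overlapping routes land in different lanes, a distant one stays in lane 0.
print(assign_altitude_lanes([
    [(0, 0), (5, 5)],
    [(0, 0.1), (5, 5.1)],
    [(50, 50), (60, 60)],
]))

This is the same spirit as assigning opposing jet traffic to different flight levels: adding a new lane costs a software change rather than concrete.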

SPEAKER_00

01:06:42 - 01:06:53

Do you envision autonomy being a key part of it, so that the flying vehicles are either semi-autonomous or fully autonomous?

SPEAKER_01

01:06:53 - 01:07:05

100% autonomous. You don't want idiots like me flying in the sky, I promise you. And if you have 10,000 of them, watch the movie The Fifth Element to get a sense of what would happen if it's not autonomous.

SPEAKER_00

01:07:06 - 01:07:24

And a centralized, that's a really interesting idea, a centralized sort of management system for lanes and so on. So actually, just having something similar to what we have in current commercial aviation, but scaled up to many more vehicles. That's a really interesting optimization problem.

SPEAKER_01

01:07:24 - 01:08:34

It is mathematically very, very straightforward. The gaps we leave between jets are gigantic, and part of the reason is that there aren't that many jets, so it just feels like a good solution. Today, when you get vectored by air traffic control, someone talks to you. Any ATC controller might have up to maybe 20 planes on the same frequency, and they talk to you, and you have to talk back. And it feels right, because there aren't more than 20 planes around anyhow, so you can talk to everybody. But if there are 20,000 things around, you can't talk to everybody anymore. So we have to go to something that's digital, like text messaging. We do have solutions. We have about four and a half billion smartphones in the world now, and they're all connected, and somehow we've solved the scale problem for smartphones. We know where they all are. They can talk to somebody, and they're very reliable, amazingly reliable. We could use the same system, at the same scale, for air traffic control. So instead of me as a pilot talking to a human being and, in the middle of the conversation, receiving a new frequency, like how ancient is that, we could digitize this stuff and digitally transmit the right flight coordinates. And that solution will automatically scale to 10,000 vehicles.
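To make the "text messaging instead of voice" idea concrete, here is a small, hypothetical Python sketch of a digital clearance message; the message format, field names, and vehicle ID are illustrative assumptions, not an existing air traffic control protocol.

# Hypothetical sketch: a digital "clearance" a coordinator could push to a vehicle
# instead of a voice call, carrying the next set of flight coordinates.
from dataclasses import dataclass
import json
import time

@dataclass
class FlightClearance:
    vehicle_id: str
    waypoints: list        # list of (lat, lon, altitude_ft) tuples
    valid_from: float      # epoch seconds
    valid_until: float     # epoch seconds

    def to_message(self) -> str:
        # Serialize into a compact text message the vehicle can acknowledge.
        return json.dumps({
            "vehicle_id": self.vehicle_id,
            "waypoints": self.waypoints,
            "valid_from": self.valid_from,
            "valid_until": self.valid_until,
        })

# Example: push new flight coordinates to one (made-up) vehicle, no voice loop needed.
now = time.time()
clearance = FlightClearance(
    vehicle_id="HVSD-007",
    waypoints=[(37.45, -122.17, 1500), (37.60, -122.38, 2000)],
    valid_from=now,
    valid_until=now + 600,
)
print(clearance.to_message())

Because each clearance is just a small message, the same infrastructure that routes billions of texts could, in principle, coordinate thousands of vehicles without a human voice channel per aircraft.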

SPEAKER_00

01:08:36 - 01:08:47

We talked about empathy a little bit. Do you think we'll one day build an AI system that a human being can love, and that loves that human back? Like in the movie Her.

SPEAKER_01

01:08:47 - 01:10:57

Look, I'm a pragmatist. For me, AI is a tool. It's like a shovel. And the ethics of using this shovel always rest with us, the people. And it has to be this way. In terms of emotions, I would hate to come into my kitchen and see that my refrigerator spoiled all my food, and then have it explain to me that it fell in love with the dishwasher, and because I wasn't as nice as the dishwasher, it neglected me. That would just be a bad experience, and it would be a bad product. I would probably not recommend this refrigerator to my friends. And that's where I draw the line. To me, technology has to be reliable. It has to be predictable. I want my car to work. I don't want to fall in love with my car. I just want it to work. I want it to complement me, not to replace me. I have very unique human properties, and I want the machines to turn me into a superhuman. Like, I'm already a superhuman today thanks to the machines that surround me. Let me give you examples. I can run across the Atlantic near the speed of sound, at 36,000 feet, today. That's kind of amazing. My voice now carries all the way to Australia using a smartphone today, and not at the speed of sound, which would take hours; my voice travels at the speed of light. How cool is that? That makes me superhuman. I would even argue my flushing toilet makes me superhuman. Just think of the time before flushing toilets, and maybe you have a very old person in your family you can ask about this, or take a trip to rural India to experience it. It makes me superhuman. So to me, what technology does is complement me. It makes me stronger. Therefore, words like love and compassion, I have very little interest in them for machines. I have interest in them for people.

SPEAKER_00

01:10:57 - 01:11:07

You don't think... First of all, beautifully put, beautifully argued. But do you think love has a use in our tools? Compassion?

SPEAKER_01

01:11:07 - 01:13:27

I think love is a beautiful human concept, and if you think about what love really is, love is a means to convey safety, to convey trust. I think trust has a huge need in technology as well, not just among people. We want to trust our technology the same way we trust people. In human interaction, standards have emerged, and feelings, emotions have emerged, maybe genetically, maybe culturally, that are able to convey a sense of trust, a sense of safety, a sense of passion, of love, of dedication, that makes up the human fabric. And I'm a big sucker for love. I want to be loved, I want to be trusted, I want to be admired, all these wonderful things. But just because we humans have this beautiful system, I wouldn't blindly copy it to the machines. Here's why. When you look at, say, transportation, you could have observed that up to the end of the 19th century, almost all transportation used some number of legs, from one leg to two legs to a thousand legs, and you could have concluded that legs are the right way to move about the environment. You could have been watching birds that are just flapping their wings. In fact, there were many people in aviation who strapped wings to their arms and jumped from cliffs; most of them didn't survive. The interesting thing is that the technology solutions are very different. In technology, it's really easy to build a wheel. In biology, it's super hard to build a wheel; there are very few perpetually rotating things in biology. In engineering, we can build wheels, and those wheels gave rise to cars. Similarly, rotating things gave rise to aviation; there's nothing that flies that doesn't have something that rotates, like a jet engine or helicopter blades. So the solutions have used very different physical laws than nature's, and that's great. So for me, being too focused on, oh, this is how nature does it, let's replicate it, is a mistake. If we had really believed that the solution to agriculture was a humanoid robot, we would still be waiting today.

SPEAKER_00

01:13:27 - 01:14:00

Again, beautifully put. You said that you don't take yourself too seriously. Can I say that? Do you want me to say that? Maybe. You don't take yourself too seriously. I'm not. You know, you have a humor and a lightness about life that I think is beautiful and inspiring to a lot of people. Where does that come from? The smile, the humor, the lightness amidst all the chaos of the hard work you're in. Where does that come from?

SPEAKER_01

01:14:00 - 01:17:37

I just love my life. I love the people around me. I'm just so glad to be alive. I'm, what, 52? Hard to believe. People say 52 is the new 51, so now I feel better. But looking around the world, just go back 100, 300 years. Humanity is, what, 300,000 years old? But for the first 300,000 years, minus the last 100, our life expectancy would have been plus or minus 30 years, roughly, give or take. So I would be long dead now. That makes me just enjoy every single day of my life, because I don't deserve this. Like, why am I born today, when so many of my ancestors died horrible deaths, like famines, like the massive wars that ravaged Europe for the last 1,000 years and mysteriously disappeared after World War II, when the Americans and the Allies did something amazing to my country that didn't deserve it, the country of Germany. It's just so amazing. When you're alive and feel this every day, then it's just so amazing what we can accomplish, what we can do. We live in a world that is so incredibly, vastly changing every day. Almost everything that we cherish, from your smartphone to your flushing toilet, to all these basic inventions, the new clothes you're wearing, your watch, your airplane, anesthesia for surgery, penicillin, has been invented in the last 150 years. So in the last 150 years, something magical happened. And I would trace it back to Gutenberg and the printing press, which was able to disseminate information more efficiently than before, so that all of a sudden we were able to invent things like nitrogen fertilization that made agriculture so much more potent that we didn't have to work on the farms anymore, and we could start reading and writing, and we could become all these wonderful things we are today, from airline pilot to massage therapist to software engineer. It's just amazing. Living in this time is such a blessing. We should sometimes really think about this. Steven Pinker, who is a very famous author and philosopher whom I really adore, wrote a great book called Enlightenment Now. That's maybe the one book I would recommend. And he asked the question: if there was only a single article written about the 20th century, only one article, what would it be? What's the most important innovation, the most important thing that happened? And he would say this article would credit a guy named Carl Bosch. And I challenge anybody: have you ever heard of the name Carl Bosch? I hadn't. There's a Bosch Corporation in Germany, but it's not associated with Carl Bosch. So I looked it up. Carl Bosch invented nitrogen fertilization, and in doing so, together with the older invention of irrigation, was able to increase the yield per unit of agricultural land by a factor of 26. That's a 2,500 percent increase in the fertility of land. And that, so Steven Pinker argues, has saved over two billion lives today. Two billion people who would be dead if this man hadn't done what he had done. Think about that impact and what it means to society. That's the way I look at the world. It's just so amazing to be alive and to be part of this. I'm so glad I live after Carl Bosch and not before.

SPEAKER_00

01:17:37 - 01:18:41

I don't think there's a better way to end it. It's been an honor to talk to you, to have had the chance to learn from you. Thank you so much for talking today. Thanks for having me, it was a pleasure. Thank you for listening to this conversation with Sebastian Thrun, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter. And now, let me leave you with some words of wisdom from Sebastian Thrun. It's important to celebrate your failures as much as your successes. If you celebrate your failures really well, if you say, wow, I failed, I tried, I was wrong, but I learned something, then you realize you have no fear. And when your fear goes away, you can move the world. Thank you for listening, and hope to see you next time.