Transcript for Judea Pearl: Causal Reasoning, Counterfactuals, Bayesian Networks, and the Path to AGI

SPEAKER_02

00:00 - 03:48

The following is a conversation with Judea Pearl, a professor at UCLA and a winner of the Turing Award, generally recognized as the Nobel Prize of computing. He's one of the seminal figures in the fields of artificial intelligence, computer science, and statistics. He has developed and championed probabilistic approaches to AI, including Bayesian networks, and profound ideas in causality in general. These ideas are important not just to AI, but to our understanding and practice of science. But in the field of AI, the idea of causality, cause and effect, to many lies at the core of what is currently missing and what must be developed in order to build truly intelligent systems. For this reason, and many others, his work is worth returning to often. I recommend his most recent book, called The Book of Why, that presents key ideas from a lifetime of work in a way that is accessible to the general public. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. If you leave a review on Apple Podcasts especially, but also Castbox, or comment on YouTube, consider mentioning topics, people, ideas, questions, quotes in science, tech, and philosophy you find interesting, and I'll read them on this podcast. I won't call out names, but I love comments with kindness and thoughtfulness in them, so I thought I'd share them with you. Someone on YouTube highlighted a quote from the conversation with Noam Chomsky, where he said that the significance of your life is something you create. I like this line as well. On most days, the existentialist approach to life is one I find liberating and fulfilling. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries, and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Judea Pearl. You mentioned in an interview that science is not a collection of facts, but a constant human struggle with the mysteries of nature. What was the first mystery that you can recall that hooked you, that kept you?

SPEAKER_00

03:48 - 04:16

Oh, the first mystery. That's a good one. Yeah, I remember that. I had a fever for three days when I first learned about Descartes and analytic geometry, and I found out that you can do all the constructions in geometry using algebra. And I couldn't get over it. I simply couldn't get out of bed.

SPEAKER_02

04:16 - 04:20

What kind of world does analytic geometry unlock?

SPEAKER_00

04:20 - 05:03

Well, it connects algebra with geometry. Okay, so Descartes had the idea that geometrical constructions and geometrical theorems and assumptions can be articulated in the language of algebra, which means that all the proofs that we did in high school, trying to prove that the three bisectors meet at one point and so on, all of these can be proven by just shuffling around notation. Yeah, that was the connection.

SPEAKER_02

05:03 - 05:16

It's the connection between the different mathematical disciplines, that they are all just languages. So which mathematical discipline is most beautiful? Is it geometry for you?

SPEAKER_00

05:16 - 05:52

Both are beautiful. They have almost the same power. But there's a visual element, geometry being visual, it's more transparent. But once you get over to algebra, then the linear equation is a straight line, and this translation is easily absorbed. And to pass a tangent to a circle, you have the basic theorems and you can do it with algebra. That transition from one to the other was really... I thought that Descartes was the greatest mathematician of all time.

SPEAKER_02

05:54 - 06:15

So, if you think of engineering and mathematics as a spectrum... Yes. You have walked casually along this spectrum throughout your life. You know, a little bit of engineering, and then you've done a little bit of mathematics here and there.

SPEAKER_00

06:16 - 06:42

A little bit. I mean, we got a very solid background in mathematics, because our teachers were geniuses. Our teachers came from Germany in the 1930s, running away from Hitler. They left their careers in Heidelberg and Berlin and came to teach high school in Israel. And we were the beneficiaries of that experiment. And they taught us math the good way.

SPEAKER_02

06:42 - 06:44

What's a good way to teach math?

SPEAKER_00

06:46 - 07:03

The people. The people behind the axioms, their cousins and their nieces and their faces. And how they jumped from the bathtub when they screamed "Eureka!" and ran naked through town.

SPEAKER_02

07:03 - 07:06

So you're almost educated as a historian of math.

SPEAKER_00

07:06 - 07:17

No, we just got a glimpse of that history together with the theorems. So every exercise in math was connected with a person.

SPEAKER_02

07:17 - 07:33

And the time of the person, the period, also mathematically speaking, yes, not the politics. And then in university, you have gone on to do engineering.

SPEAKER_00

07:34 - 07:56

I got a BS in engineering at the Technion. And then I moved here for graduate work, and I did engineering in addition to physics at Rutgers. And it combined very nicely with my thesis, which I did at RCA Laboratories on superconductivity.

SPEAKER_02

07:57 - 08:25

And then you somehow switched to almost computer science, software, even before that switch was common. You got into software engineering a little bit, into programming, if you can even call it that, in the 70s. There are all these disciplines. If you were to pick a favorite, in terms of engineering and mathematics, which path do you think has more beauty? Which path has more power?

SPEAKER_00

08:26 - 08:38

It's hard to choose. I enjoyed doing physics. I even have a vortex named after me, so I have an investment in immortality.

SPEAKER_02

08:38 - 08:41

So what is a vortex?

SPEAKER_00

08:42 - 09:32

A vortex is in superconductivity. In superconductivity you have permanent currents swirling around, one way or the other, so you can store a one or a zero for a computer. That's what we worked on in the 1960s at RCA. And I discovered a few nice phenomena with the vortices. It's the Pearl vortex, right? You can Google it. I didn't know about it, but the physicists picked up on my thesis, on my PhD thesis, and it became popular when thin-film superconductors became important for high-temperature superconductors. So they called it the Pearl vortex, without my knowledge. I discovered that only about 15 years ago.

SPEAKER_02

09:32 - 09:47

You have footprints in all of the sciences. So let's talk about the universe a little bit. Is the universe at the lowest level deterministic or stochastic, in your amateur philosophy view? Put another way, does God play dice?

SPEAKER_00

09:47 - 09:50

We know it is stochastic, right?

SPEAKER_02

09:50 - 09:52

Today, today we think it is stochastic.

SPEAKER_00

09:52 - 10:03

Yes. We think so because we have the Heisenberg uncertainty principle, and we have some experiments to confirm that.

SPEAKER_02

10:03 - 10:06

All we have is experiments to confirm it. We don't understand why.

SPEAKER_00

10:08 - 10:38

Why? That's a puzzle. Yeah, it's a puzzle that you have a dice-flipping machine, or God, and the result of the flipping propagates with a speed faster than the speed of light. We can't explain it, okay? But it only governs microscopic phenomena.

SPEAKER_02

10:38 - 10:45

So you don't think of quantum mechanics as useful for understanding the nature of reality?

SPEAKER_00

10:45 - 10:47

No, it's an illusion, anyway.

SPEAKER_02

10:47 - 10:53

So in your thinking, the world might as well be deterministic.

SPEAKER_00

10:53 - 11:04

The world is deterministic, and as far as neuron firing is concerned, it is deterministic to a first approximation.

SPEAKER_02

11:04 - 11:06

What about free will?

SPEAKER_00

11:06 - 11:15

Free will is also a nice exercise. Free will is an illusion that we AI people are going to solve.

SPEAKER_02

11:16 - 11:20

So what do you think, once we solve it, that solution will look like?

SPEAKER_00

11:20 - 11:39

Once we put it in place, it will look like a machine. A machine that acts as though it has free will. It communicates with other machines as though they have free will. And you wouldn't be able to tell the difference between a machine that does and a machine that doesn't have free will.

SPEAKER_02

11:42 - 11:46

So the illusion, it propagates the illusion of free will amongst the other machines.

SPEAKER_00

11:46 - 12:02

And faking it is having it. That's what the Turing test is all about. Faking intelligence is intelligence, because it's not easy to fake. It's very hard to fake, and you can only fake it if you have it.

SPEAKER_01

12:05 - 12:17

Yeah, that's such a beautiful statement. You can't fake it if you don't have it. Yeah.

SPEAKER_02

12:17 - 12:31

So let's begin at the beginning with probability, both philosophically, mathematically. What does it mean to say the probability of something happening is 50%. What is probability?

SPEAKER_00

12:34 - 12:38

It's a degree of uncertainty that an agent has about the world.

SPEAKER_02

12:38 - 12:43

You're still expressing some knowledge in that statement. Of course.

SPEAKER_00

12:43 - 12:47

Of course. If it is 90%, it's an absolutely different kind of knowledge than if it is 10%.

SPEAKER_02

12:49 - 12:53

But it's still not solid knowledge.

SPEAKER_00

12:53 - 13:08

It is solid knowledge. If you tell me that with 90% assurance smoking will give you lung cancer in five years, versus 10%, it's a piece of useful knowledge.

SPEAKER_02

13:09 - 13:19

So, this statistical view of the universe, why is it useful? We're swimming in complete uncertainty, most of everything around us...

SPEAKER_00

13:19 - 13:43

Because to predict things with a certain probability, and computing those probabilities, is very useful. That's the whole idea of prediction, and you need prediction to be able to survive. If you cannot predict the future, then just crossing the street would be extremely fearful.

SPEAKER_02

13:43 - 13:48

And so you've done a lot of work in causation and so let's think about correlation.

SPEAKER_00

13:48 - 13:51

I started with the probability.

SPEAKER_02

13:51 - 14:20

You started with probability. You've invented Bayesian networks. Yeah. And so, you know, you've danced back and forth between these levels of uncertainty. But what is correlation? So probability is something happening. But then there's a bunch of things happening, and sometimes they happen together, sometimes not. They're independent or not. So how do you think about correlation of things?

SPEAKER_00

14:21 - 14:53

Correlation occurs when two things vary together over a very long time, that's one way we measure it. Or when you have a bunch of variables that all vary cohesively, then we say we have a correlation here. And usually, when we think about correlation, we really think causally. Things cannot be correlated unless there is a reason for them to vary together. Why should they vary together? If they don't see each other, why should they vary together?

SPEAKER_02

14:53 - 14:55

So underlying it somewhere is causation.

SPEAKER_00

14:55 - 15:06

Yes. Hidden in our intuition is a notion of causation because we cannot grasp any other logic except causation.

SPEAKER_02

15:06 - 15:15

And how does conditional probability differ from causation? So what is conditional probability?

SPEAKER_00

15:15 - 16:25

Conditional probability is how things vary when one of them stays the same. Now, staying the same means that I have chosen to look only at those incidents where the guy has the same value as the previous one. It's my choice as an experimenter. So things that were not correlated before could become correlated. Like, for instance, if I have two coins which are uncorrelated, and I choose only those flipping experiments in which a bell rings, and the bell rings when at least one of them is a tail, then suddenly I see correlation between the two coins, because I only look at the cases where the bell rang. You see, it's my design, it's my ignorance essentially, my audacity to ignore certain incidents. I suddenly create a correlation where it doesn't exist physically.
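The two-coin example can be checked with a short simulation. This is just an illustrative sketch, not something from the conversation: two independent fair coins are flipped many times, a "bell" rings whenever at least one shows tails, and conditioning on the bell induces a negative correlation (around -0.5) between otherwise independent coins.

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Each trial flips two independent fair coins; 1 means tails.
flips = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(100_000)]

# Unconditioned: the two coins are independent, correlation near 0.
all_a, all_b = zip(*flips)

# Condition on the bell: keep only trials where at least one coin is tails.
rung = [(a, b) for a, b in flips if a + b >= 1]
bell_a, bell_b = zip(*rung)

print(round(pearson(all_a, all_b), 3))   # near 0
print(round(pearson(bell_a, bell_b), 3)) # near -0.5, a selection-induced correlation
```

The exact conditioned value is -0.5: among the three equally likely bell-ringing outcomes, knowing one coin is heads forces the other to be tails.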

SPEAKER_02

16:25 - 16:34

Right. So you just outlined one of the flaws of observing the world and trying to infer something about the world from it, looking at the correlations.

SPEAKER_00

16:35 - 16:51

I don't look at it as a flaw. The world works like that. But the flaws come if we try to impose causal logic on correlation. It doesn't work too well.

SPEAKER_02

16:51 - 16:57

I mean, but that's exactly what we do. That's what has been the majority of science.

SPEAKER_00

16:57 - 17:22

It's the majority of naive science. Statisticians know it: if you condition on a third variable, then you can destroy or create correlations among two other variables. They know it. It's in the data. There's nothing surprising. That's why they all dismiss Simpson's paradox: ah, we know it. It's nothing.

SPEAKER_02

17:24 - 17:34

Well, there are disciplines, like psychology, where all the variables are hard to account for. And so oftentimes there's a leap between correlation and causation.

SPEAKER_00

17:34 - 17:38

Who is trying to get causation from correlation?

SPEAKER_02

17:43 - 17:52

Not that you're proving causation, but you're sort of discussing it, implying it, sort of hypothesizing with our abilities.

SPEAKER_00

17:52 - 18:04

Which discipline do you have in mind? I'll tell you if they are obsolete, or if they are outdated, or about to get outdated. Tell me which one you have in mind.

SPEAKER_02

18:04 - 18:05

Oh psychology, you know.

SPEAKER_00

18:05 - 18:08

Psychology, okay. What, is it SEM? Structural equation models?

SPEAKER_02

18:08 - 18:21

No, no, I was thinking of applied psychology, studying, for example... we work with human behavior in semi-autonomous vehicles, how people behave, and you have to conduct these studies of people driving cars.

SPEAKER_00

18:21 - 18:25

Everything starts with the question.

SPEAKER_02

18:25 - 18:34

What is the research question? The research question, do people fall asleep when the car is driving itself?

SPEAKER_00

18:36 - 19:01

Do they fall asleep, or do they tend to fall asleep more frequently? More frequently than with the car not driving itself? That's a good question, okay. And so you measure, you put people in the car... because it's the real world, you can't conduct an experiment where you control everything. Why can't you? You could turn the automatic module on and off.

SPEAKER_02

19:02 - 19:27

Because it's on public roads. I mean, there are aspects to it that are unethical, because it's testing on public roads. So the drivers themselves have to make the choice themselves, and so that regulates it. And so you just observe when they drive it autonomously, and when they don't.

SPEAKER_00

19:27 - 19:30

And then maybe determine how often they fall asleep in each trial.

SPEAKER_02

19:30 - 19:33

Yeah, that kind of thing. But you don't know all the factors there.

SPEAKER_00

19:33 - 22:37

Okay, so you have now an uncontrolled experiment. We call it an observational study. Yeah. And from the correlation we detected, we have to infer a causal relationship: whether it was the automatic piece that caused them to fall asleep. So that is an issue that is about 120 years old, or maybe only 100 years old. No, actually, I should say it's 2,000 years old, because we have this experiment by Daniel, about the Babylonian king who wanted the people from Israel that were taken in exile to Babylon to serve the king. He wanted to serve them the king's food, which was meat, and Daniel, as a good Jew, couldn't eat non-kosher food, so he asked for vegetarian food. But the king's overseer said, I'm sorry, but if the king sees that your performance falls below that of the other kids, he is going to kill me. And Daniel said, let's make an experiment. Let's take four of us from Jerusalem, okay, give us the vegetarian food, let the other guys eat the king's food, and in about a week's time we'll test our performance. And you know the answer. Of course, he did the experiment, and they were so much better than the others, and the king nominated them to superior positions in his court. So it was the first experiment. And it was the same research question: we want to know whether vegetarian food assists or obstructs your mental ability. Okay, so the question is very old. Even Democritus said, I would rather discover one cause of things than be a king of Persia. The task of discovering causes was in the minds of ancient people from many, many years ago. But the mathematics of doing this was only developed in the 1920s. So science has left us orphans. Science has not provided us with the mathematics to capture the idea that X causes Y and Y does not cause X. Because all the equations of physics are symmetric, algebraic: the equality sign goes both ways.

SPEAKER_02

22:39 - 22:52

Okay, let's look at machine learning. Today, if you look at deep neural networks, you can think of them as a kind of conditional probability estimator.

SPEAKER_00

22:52 - 23:02

Beautiful. So where did you dare to say that? Conditional probability estimators? None of the machine learning people clobbered you?

SPEAKER_01

23:02 - 23:07

That's you.

SPEAKER_02

23:07 - 23:56

Most people... this is why today's conversation, I think, is interesting. Most people would agree with you. There are certain aspects that are just effective today, but we're going to hit a wall, and there's a lot of ideas, and I think you're very right that we're going to have to return to causality. Let's try to explore it. Let's even take a step back. You've invented Bayesian networks that look awfully a lot like they express something like causation, but they don't, not necessarily. So how do we turn Bayesian networks into expressing causation? How do we build causal networks? That A causes B, that B causes C. How do we start to infer that kind of thing?

SPEAKER_00

23:56 - 24:09

We start asking ourselves questions: what are the factors that would determine the value of X? X could be blood pressure, death, anger...

SPEAKER_02

24:11 - 24:13

But these are hypotheses that we propose.

SPEAKER_00

24:13 - 24:26

Hypotheses, yes. Everything which has to do with causality comes from a theory. The difference is only in how you interrogate the theory that you have in your mind.

SPEAKER_02

24:28 - 24:31

So it still needs the human expert to propose.

SPEAKER_00

24:31 - 26:12

You need the human expert to specify the initial model. The initial model could be very qualitative: just who listens to whom. By listens to, I mean one variable listens to another. So I say, okay, the tide is listening to the moon, and not to the rooster's crow. And so far, this is our understanding of the world in which we live, the scientific understanding of reality. We have to start there, because if we don't know how to handle cause-and-effect relationships when we do have a model, then we certainly do not know how to handle them when we don't have a model. So let's start first. The AI slogan is: representation first, discovery second. If I give you all the information that you need, can you do anything useful with it? That is the first question, representation. How do you represent it? I give you all the knowledge in the world, how do you represent it? When you represent it, I ask, can you infer X or Y or Z? Can you answer certain queries? Is it complex? Is it polynomial? All the computer science exercises we do, once you give me a representation for my knowledge. Then you can ask me, now that I understand how to represent things, how do I discover them? That's the second thing.

SPEAKER_02

26:12 - 26:46

First of all, I should echo the statement that mathematics, and much of the current machine learning world, has not considered causation, that A causes B, in anything. That seems like a non-obvious thing that you would think we would have really acknowledged, but we haven't. So we have to put that on the table. So, knowledge: how hard is it to create a knowledge base from which to work?

SPEAKER_00

26:46 - 27:48

In certain areas it's easy, because we have only four or five major variables, and an epidemiologist or an economist can put them down: minimum wage, unemployment, policy, X, Y, Z, and start collecting data and quantifying the parameters that were left unquantified with the initial knowledge. Okay, that's the routine work that you find in experimental psychology, in economics, everywhere, in the health sciences. That's the routine thing. But I should emphasize: you should start with the research question. What do you want to estimate? Once you have that, you have to have a language for expressing what you want to estimate. You think it's easy?

SPEAKER_02

27:49 - 28:37

No. So we can talk about two things. I think one is how the science of causation is very useful for answering certain questions, and the other is how we create intelligent systems that need to reason with causation. So if my research question is how to pick up this water bottle from the table, all the knowledge that is required to be able to do that, how do we construct that knowledge base? Do we return back to the problem that we didn't solve in the 80s with expert systems? Do we have to solve that problem of automated construction of knowledge?

SPEAKER_00

28:37 - 28:42

You're talking about the task of eliciting knowledge from an expert.

SPEAKER_02

28:44 - 29:00

The task of eliciting knowledge from an expert, or the self-discovery of more knowledge, more and more knowledge. So automating the building of knowledge as much as possible. It's a different game, because...

SPEAKER_00

29:02 - 29:40

It essentially is the same thing. You have to start with some knowledge, and you're trying to enrich it. But you don't enrich it by asking for more rules. You enrich it by asking for data, by looking at the data and quantifying it, and asking queries that you couldn't answer when you started. You couldn't, because the question is quite complex, and it's not within the capability of ordinary cognition, of an ordinary person, or even an ordinary expert, to answer.

SPEAKER_02

29:40 - 29:44

So what kind of questions do you think we can start to answer?

SPEAKER_00

29:44 - 30:12

Even in simple terms... let's start with an easy one. What's the effect of a drug on recovery? Was it the aspirin that caused my headache to be cured, or was it the television program, or the good news I received? This is already a difficult question, because it's finding causes from effects. The easy one is finding effects from causes.

SPEAKER_02

30:14 - 30:21

That's right. So first you construct a model saying that this is an important research question. This is one question.

SPEAKER_00

30:21 - 30:51

I didn't construct a model. I just said it's an important question. And the first exercise is to express it mathematically. What do you want to do? Like, if I tell you what will be the effect of taking this drug, you have to say that in mathematics. How do you say that? Can you write down the question? Not the answer. I want to find the effect of the drug on my headache. Write it down.

SPEAKER_02

30:51 - 30:54

That's where the do calculus comes in. Yes.

SPEAKER_00

30:54 - 30:55

Do operator. What do you do?

SPEAKER_02

30:55 - 30:57

The do-operator.

SPEAKER_00

30:57 - 30:57

Yeah.

SPEAKER_02

30:57 - 31:03

That's nice. It's the difference between association and intervention. Very beautifully constructed.

SPEAKER_00

31:03 - 31:14

Yeah. So we have a do-operator. The do-calculus, connected to the do-operator itself, connects the operation of doing to something that we can see.

SPEAKER_02

31:16 - 31:23

So as opposed to the purely observing, you're making the choice to change a variable.

SPEAKER_00

31:23 - 32:07

Let's put it this way. The way that we interpret it, the mechanism by which we take your query and translate it into something that we can work with, is by giving it semantics: saying that you have a model of the world, and you cut off all the incoming arrows into X. And looking now at the modified, mutilated model, you ask for the probability of Y. That is the interpretation of doing X, because by doing things, you liberate them from all the influences that acted upon them earlier, and you subject them to the tyranny of your muscles.
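The arrow-cutting semantics can be sketched in a few lines of code. This is an illustrative toy model with made-up probabilities, not something from the conversation: a binary confounder Z influences both a treatment X and an outcome Y, and the "mutilated" model severs the Z → X arrow while forcing X to a value.

```python
# Toy structural model with a confounder: Z -> X, Z -> Y, X -> Y.
# All variables are binary; the probabilities are illustrative assumptions.
p_z = 0.5
p_x_given_z = {0: 0.2, 1: 0.8}             # P(X=1 | Z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.4,  # P(Y=1 | X, Z)
                (1, 0): 0.5, (1, 1): 0.8}

def p_joint(z, x, y):
    # Joint probability under the intact (unmutilated) model.
    pz = p_z if z else 1 - p_z
    px = p_x_given_z[z] if x else 1 - p_x_given_z[z]
    py = p_y_given_xz[(x, z)] if y else 1 - p_y_given_xz[(x, z)]
    return pz * px * py

# Observational: P(Y=1 | X=1) -- conditioning, all arrows intact.
num = sum(p_joint(z, 1, 1) for z in (0, 1))
den = sum(p_joint(z, 1, y) for z in (0, 1) for y in (0, 1))
p_obs = num / den

# Interventional: P(Y=1 | do(X=1)) -- cut the Z -> X arrow,
# i.e. Z keeps its prior while X is forced to 1 "by your muscles".
p_do = sum((p_z if z else 1 - p_z) * p_y_given_xz[(1, z)] for z in (0, 1))

print(round(p_obs, 2), round(p_do, 2))
```

Here conditioning on X = 1 gives 0.74 while doing X = 1 gives 0.65: observing the treatment tells you something about the confounder, whereas forcing it does not.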

SPEAKER_02

32:07 - 32:12

So you remove all the questions about causality by doing them.

SPEAKER_00

32:13 - 32:19

So now there's one level of questions: answering questions about what will happen if you do things.

SPEAKER_02

32:19 - 32:28

If you drink the coffee, if you take the drug... So how do we get the doing data?

SPEAKER_00

32:28 - 32:37

Now the question is, if we cannot run experiments, then we have to rely on observational studies.

SPEAKER_02

32:38 - 32:49

But first we could decide to intervene, we could run an experiment, where we do something, where we drink the coffee, and this do-operator allows you to sort of be systematic about expressing it.

SPEAKER_00

32:49 - 33:32

To imagine how the experiment would look, even though we cannot physically or technologically conduct it. I'll give you an example. What is the effect of blood pressure on mortality? I cannot go down into your vein and change your blood pressure, but I can ask the question, which means I can have a model of your body. I can imagine the effect, how the blood pressure change will affect your mortality. I go into the model, and I conduct this surgery on the blood pressure, even though physically I cannot do it.

SPEAKER_02

33:34 - 33:46

Let me ask the quantum mechanics question. Does the doing change the observation? Meaning, the surgery of changing the blood pressure...

SPEAKER_00

33:46 - 34:23

No, the surgery is very delicate. It's very delicate surgery. So that means that I change only things which depend on X, by virtue of X changing. But I don't change things which do not depend on X. Like, I wouldn't change your sex or your age. I just change your blood pressure.

SPEAKER_02

34:24 - 34:30

So in the case of blood pressure, it may be difficult or impossible to construct such an experiment.

SPEAKER_00

34:30 - 34:51

No... physically, yes; but hypothetically, no. If we have a model, that is what the model is for. So you conduct surgeries on a model, you take it apart, put it back. That's the idea of a model. It's the idea of thinking counterfactually, imagining, and that's the idea of creativity.

SPEAKER_02

34:52 - 35:04

So by constructing that model, you can start to infer whether the blood pressure leads to mortality, whether it increases it or decreases it.

SPEAKER_00

35:04 - 35:20

I construct the model, but I can still not answer it. I have to see if I have enough information in the model that would allow me to find out the effect of intervention from a non-interventional study, from a hands-off study.

SPEAKER_02

35:21 - 35:22

So what's needed?

SPEAKER_00

35:22 - 35:51

We need to have assumptions about who affects whom. If the graph has a certain property, the answer is yes, you can get it from an observational study. If the graph is too bushy, the answer is no, you cannot. Then you need to find either a different kind of observation that you haven't considered, or run an experiment.
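One such graph property is the back-door criterion. The sketch below is an illustrative assumption, not from the conversation: when a measured confounder Z blocks every back-door path from X to Y, the interventional quantity is identifiable from purely observational data via the adjustment formula P(Y | do(X)) = Σ_z P(Y | X, z) P(z).

```python
import random

random.seed(1)

# Generate observational data from an assumed model Z -> X, Z -> Y, X -> Y,
# with illustrative probabilities (all variables binary).
def sample():
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < {(0, 0): 0.1, (0, 1): 0.4,
                           (1, 0): 0.5, (1, 1): 0.8}[(x, z)]
    return int(z), int(x), int(y)

data = [sample() for _ in range(200_000)]

# Z closes the back-door path X <- Z -> Y, so adjust over Z:
# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z), all observational terms.
p_do = 0.0
for zv in (0, 1):
    stratum = [y for z, x, y in data if z == zv and x == 1]
    p_y = sum(stratum) / len(stratum)
    p_z = sum(1 for z, _, _ in data if z == zv) / len(data)
    p_do += p_y * p_z

# Naive estimate: condition on X=1 without adjusting for the confounder.
naive = sum(y for _, x, y in data if x == 1) / sum(1 for _, x, _ in data if x == 1)
print(round(naive, 2), round(p_do, 2))  # naive is biased upward by confounding
```

In this toy model the adjusted estimate recovers the true interventional probability (about 0.65), while the naive conditional overstates it (about 0.74).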

SPEAKER_02

35:52 - 35:59

So, basically, does that put a lot of pressure on you to encode wisdom into that graph?

SPEAKER_00

35:59 - 36:16

Correct. But you don't have to encode more than what you know. God forbid. Unlike the economists, who do that: they call them identifying assumptions. They put in assumptions that even they don't believe prevail in the world. They put in assumptions so they can identify things.

SPEAKER_02

36:16 - 36:25

Yes, but the problem is you don't know.

SPEAKER_00

36:25 - 36:41

Because if you don't know, you say it's possible. It's possible that X affects the traffic tomorrow. If it's possible, you put down an arrow which says it's possible. Every arrow in the graph says it's possible.

SPEAKER_02

36:41 - 36:48

So is there a significant cost to adding arrows? The more arrows you add...

SPEAKER_00

36:48 - 37:12

The less likely you are to identify things from purely observational data. So if the whole world is bushy, and everybody affects everybody else, the answer is: you can know ahead of time that I cannot answer my query from observational data. I have to go to experiments.

SPEAKER_02

37:14 - 37:52

So you talk about machine learning as essentially learning by association, or reasoning by association, and this do-calculus is allowing for intervention, I like that word, action. And you also talk about counterfactuals, and trying to sort of understand the difference between counterfactuals and intervention. First of all, what is a counterfactual, and why is it useful? Why are they especially useful, as opposed to just reasoning about what effect actions have?

SPEAKER_00

37:52 - 38:33

Counterfactuals contain what we normally call explanations. Can you give an example? If I tell you that acting one way affects something else, I didn't explain anything yet. But if I ask you, was it the aspirin that cured my headache? I'm asking for an explanation: what cured my headache? And putting a finger on aspirin provides an explanation: it was the aspirin that was responsible for your headache going away. If you hadn't taken the aspirin, you would still have had a headache.

SPEAKER_02

38:33 - 38:42

So by saying, if I hadn't taken aspirin, I would have a headache, you're thereby saying that aspirin is the thing that removes the headache.

SPEAKER_00

38:43 - 38:57

But you have to have another piece of important information: I took the aspirin, and my headache is gone. That's very important information. Now I'm reasoning backward, and I ask: was it the aspirin?

SPEAKER_02

38:59 - 39:04

By considering what would have happened if everything else had been the same, but I hadn't taken aspirin.

SPEAKER_00

39:04 - 39:50

That's right. So you know that things took place: Joe killed Schmoe, and Schmoe would be alive had Joe not used his gun. Okay, so that is the counterfactual. It has a conflict here, a clash, between the observed fact — he did shoot, okay? — and the hypothetical predicate, which says: had he not shot. You have a clash, a logical clash. They cannot exist together. That's a counterfactual. And that is the source of our explanations, of our ideas of responsibility, regret, and free will.

SPEAKER_02

39:52 - 39:57

Yes, it certainly seems that's the highest level of reasoning, right?

SPEAKER_00

39:57 - 39:59

Yes, and physicists are doing it all the time.

SPEAKER_02

39:59 - 40:00

Who does it all the time?

SPEAKER_00

40:00 - 41:03

Physicists. Physicists. In every equation of physics — let's say you have Hooke's law. You put one kilogram on the spring, and the spring stretches one meter, and you say: had this weight been two kilograms, the spring would have been twice as long. It's no problem for physicists to say that, except that the mathematics is only in the form of an equation — equating the weight, a proportionality constant, and the length of the spring. So you don't have the asymmetry in the equations of physics, although every physicist thinks counterfactually. Ask high school kids: had the weight been three kilograms, what would be the length of the spring? They can answer it immediately, because they do the counterfactual processing in their mind, and then they put it into an algebraic equation and solve it. But the robot cannot do that.
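The spring example can be written out as the standard three-step counterfactual computation (abduction, action, prediction). The numbers follow the conversation; the variable names and the use of g are my own illustrative choices:

```python
# Hooke's law as a structural equation: length = (mass * g) / k
G = 9.8  # gravitational acceleration, m/s^2

# Abduction: from the observed fact "1 kg stretches the spring 1 m",
# recover the unobserved spring constant k.
observed_mass_kg, observed_length_m = 1.0, 1.0
k = observed_mass_kg * G / observed_length_m  # 9.8 N/m

# Action + prediction: surgically set the mass to 2 kg and
# re-evaluate the same equation with the abduced k.
counterfactual_length_m = 2.0 * G / k

print(counterfactual_length_m)  # 2.0 — "the spring would have been twice as long"
```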

SPEAKER_02

41:04 - 41:08

How do you make a robot learn these relationships?

SPEAKER_00

41:08 - 41:30

Why do you jump to learning right away? First ask: what do you want him to do? Before you go to learning, you have to ask yourself: suppose I give him all the information — can the robot perform the task that I ask him to perform? Can he reason and say, no, it wasn't the aspirin; it was the good news you received on the phone?

SPEAKER_02

41:33 - 41:41

Right, because, well, unless the robot had a model, a causal model of the world.

SPEAKER_00

41:41 - 41:41

Right, right.

SPEAKER_02

41:41 - 41:43

I'm sorry, I have to linger on this.

SPEAKER_00

41:43 - 41:47

But now we have to linger, and we have to say: how do we do it? How do we build it?

SPEAKER_02

41:47 - 41:54

Yes. How do we build the causal model without a team of human experts running around?

SPEAKER_00

41:54 - 41:58

Why do you go to learning right away? You're too much involved with learning.

SPEAKER_02

41:58 - 42:02

Because I like babies. Babies learn fast, and I don't know if that's a naive hope.

SPEAKER_00

42:02 - 42:34

Good. So, yeah, that's another question: how do the babies come out with a counterfactual model of the world? And babies do that. They know how to play in the crib. They know which ball hits another one. And what they learn, they learn by playful manipulation of the world. Yes, these are simple worlds, involving only toys and balls and chimes and bears. But it is still a complex world.

SPEAKER_02

42:34 - 42:37

We take for granted. Yes.

SPEAKER_00

42:37 - 42:58

And our kids do it by playful manipulation, plus parents' guidance, peer wisdom, and hearsay. They meet each other, and they say, you shouldn't have taken my toy.

SPEAKER_02

42:58 - 43:24

And these are multiple sources of information they're able to integrate. So the challenge is about how to integrate — how to form these causal relationships from different sources of data. So how much causal information is required to be able to play in the crib with different objects?

SPEAKER_00

43:24 - 43:30

I don't know. I haven't experimented with a crib. Okay — not a crib; picking things up, then.

SPEAKER_02

43:30 - 43:45

It's very interesting: manipulating physical objects, even just opening the pages of a book — all these physical manipulation tasks. Do you have a sense? Because my sense is the world is extremely complicated.

SPEAKER_00

43:45 - 44:13

It is complicated. I agree, and I don't know how to organize it, because I've been spoiled by easy problems such as cancer and death. I mean, first we have to start there. No — they're easy. They're easy in the sense that you have only 20 variables, and they are just variables, not mechanics. It's easy: you just put them on the graph and they speak to you.

SPEAKER_02

44:13 - 44:20

And you're providing a methodology for having them speak.

SPEAKER_00

44:20 - 44:27

I'm working only in the abstract. The abstract is: knowledge in, knowledge out, data in between.

SPEAKER_02

44:29 - 44:53

Now, can we take a leap to trying to learn in this world, when it's not 20 variables but 20 million variables — trying to learn causation in this world? Not learn, but somehow construct models. I mean, it seems like you would have to be able to learn, because constructing it manually would be too difficult. Do you have ideas?

SPEAKER_00

44:55 - 45:09

I think it's a matter of combining simple models from many, many sources, from many, many disciplines, and many metaphors. Metaphors are the basics of human intelligence.

SPEAKER_02

45:09 - 45:13

So how do you think about metaphor in terms of its use in human intelligence?

SPEAKER_00

45:13 - 47:52

Metaphor is an expert system. It's mapping a problem with which you are not familiar to a problem with which you are familiar. I'll give you a good example: the Greeks believed that the sky is an opaque shell. It's not really an infinite space; it's an opaque shell, and the stars are holes poked in the shell through which you see the eternal light. It was a metaphor. Why? Because they understood how you poke holes in shells. They were not familiar with infinite space. And we are walking on the shell of a turtle, and if you get too close to the edge, you're going to fall down to Hades — or whatever. That's a metaphor. It's not true. But this kind of metaphor enabled Eratosthenes to measure the radius of the Earth. Because he said: come on, if we are walking on a turtle's shell, then a ray of light coming at this angle here will come at a different angle there. I know the distance between the two places; I'll measure the two angles, and then I have the radius of the shell of the turtle. And he did. And he found a measurement very close to the measurements we have today — the radius was 6,700 km. That's something that would not occur to a Babylonian astronomer, even though the Babylonians were experts in experiments; they were the machine learning people of the time. They fit curves, and they could predict the eclipse of the moon much more accurately than the Greeks, because they fit curves. That's a different metaphor. Something you're familiar with: a game, a turtle shell. What does it mean to be familiar? Familiar means that answers to certain questions are explicit. You don't have to derive them.
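Eratosthenes' computation is simple enough to reproduce. The numbers below are the classical textbook figures (a roughly 800 km arc between Syene and Alexandria, and a 7.2-degree difference in the sun's angle at noon), not figures from the conversation:

```python
import math

arc_distance_km = 800.0      # distance between the two observation points
angle_difference_deg = 7.2   # difference in the sun's angle at the two places

# On a sphere, arc length = radius * angle (in radians), so:
radius_km = arc_distance_km / math.radians(angle_difference_deg)

print(round(radius_km))  # 6366 — close to the modern mean value of ~6371 km
```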

SPEAKER_02

47:53 - 48:01

and they were made explicit because somewhere in the past, you've constructed a model of that.

SPEAKER_00

48:01 - 48:41

You're familiar with, say, billiard balls. So a child can predict that if you let loose one ball, the other one will bounce off. You obtain that by familiarity. Familiarity is answering questions, and you store the answers explicitly. You don't have to derive them. So this is the idea of metaphor. All our life, all our intelligence, is built around metaphors: mapping from the unfamiliar to the familiar. But the marriage between the two is a tough thing, which we haven't yet been able to algorithmize.

SPEAKER_02

48:42 - 48:53

So you think of that process of using metaphor to leap from one place to another — we can call it reasoning? Is it a kind of reasoning?

SPEAKER_00

48:53 - 48:54

It is reasoning by metaphor.

SPEAKER_02

48:54 - 49:05

Reasoning by metaphor. Do you think of that as learning? Because learning is a popular terminology today, in a narrow sense.

SPEAKER_00

49:05 - 49:07

It is definitely a form of learning.

SPEAKER_02

49:07 - 49:08

So one could argue, right?

SPEAKER_00

49:09 - 50:28

It's one of the most important kinds of learning: taking something which theoretically is derivable and storing it in an accessible format. I'll give you an example: chess, okay? Finding the winning opening move in chess is hard. But there is an answer: either there is a winning move for white, or there isn't, or there is a draw. So the answer is determined by the rules of the game. But we don't know the answer. So what does the chess master have that we don't have? He has stored explicitly an evaluation of certain patterns of the board. We don't have it. Ordinary people like me — I don't know about you, I'm not a chess master — for me, I have to derive things that for him are explicit. He has seen the pattern before, or a similar pattern, and he generalized and said: don't make that move. It's a dangerous move.
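This description of the chess master — storing explicitly the answers that the rest of us must derive — is what programmers call memoization. A sketch on a toy game (Nim, my choice, not Pearl's): the first call derives each position's value; after that, the answers are stored and looked up, not re-derived.

```python
from functools import lru_cache

# Toy game: `stones` remain, a move removes 1-3 stones,
# and whoever takes the last stone wins.
@lru_cache(maxsize=None)  # store every derived answer explicitly
def first_player_wins(stones: int) -> bool:
    if stones == 0:
        return False  # no stones left: the player to move has already lost
    # A position is winning if some move leaves the opponent in a losing one.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

losing = [n for n in range(1, 13) if not first_player_wins(n)]
print(losing)  # [4, 8, 12] — multiples of 4 are losing positions
```

The cache plays the role of the master's "seen it before": repeated queries become lookups instead of derivations.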

SPEAKER_02

50:30 - 50:58

It's just that — not in the game of chess, but in the game of billiard balls — we humans are able to initially derive very effectively, and then reason by metaphor very effectively. And we make it look so easy, and it makes one wonder how hard it is to build it into a machine. So, in your sense, how far away are we from being able to construct that?

SPEAKER_00

50:58 - 51:32

I don't know, I'm not a futurist. I can tell you that we are making tremendous progress in the causal reasoning domain — something that I even dare to call a revolution, the causal revolution, because what we have achieved in the past three decades dwarfs everything that was derived in the entire history.

SPEAKER_02

51:32 - 51:52

So there's an excitement about current machine learning methodologies, and there's the really important, good work you're doing in causal inference. What is the future? Where do these worlds collide, and what does that look like?

SPEAKER_00

51:52 - 53:35

First, they're going to work without collisions — they're going to work in harmony. The human is going to jumpstart the exercise by providing qualitative, non-committing models of how the universe works, how the reality of the domain of discourse works. The machine is going to take over from that point on and derive whatever the calculus says can be derived: namely, quantitative answers to our questions. These are complex questions. I'll give you an example of complex questions that will boggle your mind if you think about them. You take results of studies in diverse populations, under diverse conditions, and you infer the cause-effect relationship in a new population which doesn't even resemble any of the ones studied. And you do that by do-calculus. You do that by generalizing from one study to another: see what's common between them and what is different. Let's ignore the differences and pool the commonality. And you do it over maybe a hundred hospitals around the world. From there you can get real mileage from big data. It's not that you have many samples; you have many sources of data.

SPEAKER_02

53:36 - 53:52

So that's a really powerful thing, I think, especially for medical applications. I mean, cure cancer, right? That's how, from data, you can cure cancer. So we're talking about causation, which is the temporal relationship between things.

SPEAKER_00

53:52 - 54:02

Not only temporal — it is structural and temporal. Temporal precedence by itself cannot replace causation.

SPEAKER_02

54:04 - 54:08

Is temporal precedence important — the arrow of time in physics?

SPEAKER_00

54:08 - 54:17

It's important. Yes — I've never seen cause propagate backward.

SPEAKER_02

54:17 - 54:37

But if we use the word cause — are there relationships that are timeless? I suppose the arrow of time is still always forward. But are there relationships, logical relationships, that fit into the structure?

SPEAKER_00

54:37 - 54:39

The do-calculus is a logical relationship.

SPEAKER_02

54:39 - 54:46

That doesn't require temporal precedence. It just has the condition that you're not traveling back in time.

SPEAKER_00

54:46 - 54:48

Yeah. Correct.

SPEAKER_02

54:48 - 54:59

So it's really a powerful generalization of Boolean logic.

SPEAKER_00

54:59 - 54:59

Yes.

SPEAKER_02

55:01 - 55:11

That is simply put, and allows us to reason about the order of events — the causes.

SPEAKER_00

55:11 - 56:37

Not about — we are not deriving the order of events. We are given causal relationships, and they ought to obey temporal precedence. We are given it. And now we ask questions about other causal relationships that could be derived from the initial ones but were not given to us explicitly. Like the case of the firing squad that I gave you in the first chapter. I ask: what if rifleman A declined to shoot? Would the prisoner still be dead? To decline to shoot means he disobeyed orders, and the rules of the game were that he is obedient and a marksman. That's how you start; those are the initial rules. But now you ask a question about breaking the rules: what if he decided not to pull the trigger? He just became a pacifist. And you and I can answer that: the other rifleman would have killed the prisoner. I want the machine to do that. Is it so hard to ask the machine to do that? It's as simple as that. But you have to have a calculus for it. Yes.
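The firing-squad counterfactual can be sketched as a tiny structural model. The function name and the "surgery" via keyword overrides are my own illustration of the model-surgery idea, not code from Pearl:

```python
def prisoner_dies(court_orders: bool, a_shoots=None, b_shoots=None) -> bool:
    """Structural model: court order -> captain -> riflemen A and B -> death.
    Passing a_shoots or b_shoots overrides that rifleman's equation —
    the 'surgery' that a counterfactual performs."""
    captain_signals = court_orders
    a = captain_signals if a_shoots is None else a_shoots  # A obeys unless overridden
    b = captain_signals if b_shoots is None else b_shoots  # B obeys unless overridden
    return a or b  # the prisoner dies if either rifleman fires

# Observed world: the court ordered the execution, so the prisoner is dead.
print(prisoner_dies(court_orders=True))                  # True

# Counterfactual: same world (the order stands), but rifleman A refrains.
print(prisoner_dies(court_orders=True, a_shoots=False))  # True — B still fires
```

The override breaks only A's equation while keeping the rest of the mechanism intact, which is why the answer comes out "still dead."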

SPEAKER_02

56:38 - 57:00

But the curiosity, the actual curiosity for me, is that — yes, you're absolutely correct, it's important, and it's hard to believe we haven't done this seriously and extensively already, a long time ago. So this is really important work. But I also want to know — maybe you can philosophize about this — how hard is it to learn?

SPEAKER_00

57:00 - 57:50

Suppose we put a learning machine that watches execution trials in many countries and many locations, okay? All the machine can see is: shoot or not shoot, dead or not dead, court issued an order or didn't. Okay, those are the facts. From the facts, you don't know who listens to whom. You don't know that the condemned person listens to the bullets, that the bullets are listening to the captain. All we hear is: one command, two shots, dead. A triple of variables: yes, no, yes, no. From that, can you learn who listens to whom? And can you answer the question?

SPEAKER_02

57:50 - 57:59

No, definitively no, but don't you think you can start proposing ideas for humans to review?

SPEAKER_00

57:59 - 58:24

You want the machine to learn, right? You want a robot. So the robot is watching trials like that — 200 trials — and then he has to answer the question: what if rifleman A refrained from shooting? How does he do that? That's exactly my point: looking at the facts doesn't give you the strings behind the facts.

SPEAKER_02

58:24 - 58:40

Absolutely. But do you think of machine learning as it's currently defined as only something that looks at the facts and tries? Right now they only look at the facts. So is there a way to modify? In your sense.

SPEAKER_00

58:40 - 58:42

Playful manipulation.

SPEAKER_02

58:42 - 58:43

Playful manipulation.

SPEAKER_00

58:43 - 58:44

What do you do?

SPEAKER_02

58:44 - 58:46

Interventionist kind of thing.

SPEAKER_00

58:46 - 59:29

Interventionist. It could be at random — say the rifleman is sick that day, or he just vomits, or whatever. So you can observe an unexpected event which introduces noise. The noise still has to be random to be related to a randomized experiment. And then you have observational studies from which to infer the strings behind the facts. It's doable, to a certain extent. And now that we are experts in what you can do once you have a model, we can reason back and say: what kind of data do you need to build a model?
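The "rifleman is sick that day" idea is essentially what econometricians call an instrumental variable: a random shock that perturbs the treatment but reaches the outcome only through it. The simulation below is an illustrative model of my own (the coefficients and names are made up): a hidden factor U confounds X and Y, so the naive slope is biased, but the random shock Z recovers the true effect.

```python
import random

random.seed(1)
n = 100_000

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

u = [random.gauss(0, 1) for _ in range(n)]            # hidden confounder
z = [float(random.random() < 0.5) for _ in range(n)]  # random shock ("sick that day")
x = [0.8 * zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]  # true effect: 2.0

naive_estimate = cov(x, y) / cov(x, x)  # biased upward by the hidden U
iv_estimate = cov(z, y) / cov(z, x)     # the random shock isolates the causal path

print(round(naive_estimate, 2), round(iv_estimate, 2))  # ~3.4 vs ~2.0
```

The key design assumption is that Z affects Y only through X; if the shock also touched the outcome directly, the ratio would no longer identify the effect.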

SPEAKER_02

59:33 - 59:43

I know you're not a futurist, but are you excited? When you look back at your life, have you longed for the idea of creating a human-level intelligence?

SPEAKER_00

59:43 - 59:55

Yeah, I'm driven by that. All my life, I'm driven just by one thing. But I go slowly, I go from what I know to the next step incrementally.

SPEAKER_02

59:56 - 59:59

So without imagining what the end goal looks like.

SPEAKER_00

59:59 - 01:00:16

I do imagine it. The end goal is going to be a machine that can answer sophisticated questions: counterfactuals, regret, compassion, responsibility, and free will.

SPEAKER_02

01:00:16 - 01:00:22

So what is a good test? Is the Turing test a reasonable test?

SPEAKER_00

01:00:22 - 01:00:23

Free will doesn't exist yet.

SPEAKER_02

01:00:25 - 01:00:27

There's no test? How would you test free will?

SPEAKER_00

01:00:27 - 01:00:51

So far we know only one thing: if robots can communicate by reward and punishment among themselves — hitting each other on the wrist and saying, you shouldn't have done that — they'll play better soccer, because they can do that.

SPEAKER_02

01:00:51 - 01:00:52

What do you mean, "because they can do that"?

SPEAKER_00

01:00:53 - 01:00:55

because they can communicate among themselves.

SPEAKER_02

01:00:55 - 01:00:57

Because of the communication, they can do that?

SPEAKER_00

01:00:57 - 01:02:09

Because they can communicate like us — reward and punishment. Yes, you didn't pass the ball at the right time, and so forth; therefore, you're going to sit on the bench for the next two games. If they start communicating like that, the question is: will they play better soccer, as opposed to what they do now, without this ability to reason about reward and punishment and responsibility? And in fact, I can only think about communication. Communication is not necessarily natural language, but just communication. And what's important is to have a quick and effective means of communicating knowledge. If the coach tells you you should have passed the ball — bing — he conveys so much knowledge to you. As opposed to what? "Go down and change your software"? That's the alternative. But the coach doesn't know your software, so how can the coach tell you you should have passed the ball? Our language is very effective: you should have passed the ball. You know your software, you tweak the right module, okay? And next time you don't do it.

SPEAKER_02

01:02:09 - 01:02:12

Now, that's for playing soccer, where the rules are well defined.

SPEAKER_00

01:02:12 - 01:02:21

No, not well defined. When you should pass the ball is not well defined. It's very soft, very noisy.

SPEAKER_02

01:02:21 - 01:02:48

Yeah, you have to do it under pressure. It's art. But in terms of aligning values between computers and humans, do you think this cause-and-effect type of thinking is important — to align the values, morals, and ethics under which the machines make decisions? Is cause and effect where the two can come together?

SPEAKER_00

01:02:52 - 01:03:25

Yes, because the machine has to empathize — to understand what's good for you, to build a model of you as a recipient. Which is very much what compassion is: you suffer pain as much as me. I already have a model of myself, right? So it's very easy for me to map you to mine. I don't have to rebuild the model. It's much easier to say, oh, you're like me. Okay, therefore I would not hate you.

SPEAKER_02

01:03:27 - 01:03:36

And the machine has to imagine — has to try to fake being human, essentially — so it can imagine that you're like me, right?

SPEAKER_00

01:03:36 - 01:04:23

And moreover, who is "me"? That's consciousness: you have a model of yourself. Where do you get this model? You look at yourself as if you are a part of the environment. If you build a model of yourself versus the environment, then you can say, I need to have a model of myself: I have abilities, I have desires, and so forth. I have a blueprint of myself — not in full detail, because I cannot get past the halting problem — but I have a blueprint. At that level of a blueprint, I can modify things. I can look at myself in the mirror and say, hmm, if I tweak this model, I'm going to perform differently. That is what we mean by free will.

SPEAKER_02

01:04:23 - 01:04:32

And consciousness. What do you think is consciousness? Is it simply self-awareness, including yourself into the model of the world?

SPEAKER_00

01:04:32 - 01:04:47

That's right. Some people tell me, no, this is only part of consciousness. And then they start telling me, what do you mean by God? And I lose them. For me, consciousness is having a blueprint of your software.

SPEAKER_02

01:04:49 - 01:05:01

Do you have concerns about the future of AI — all the different trajectories of all of our research? Yes. Where are your hopes, where is the movement headed, and where are your concerns?

SPEAKER_00

01:05:01 - 01:06:00

I'm concerned, because I know we are building a new species that has the capability of exceeding us — exceeding our abilities — and can breed itself and take over the world, absolutely. It's a new species; it's uncontrolled. We don't know the degree to which we control it; we don't even understand what it means to be able to control this new species. So I'm concerned. I don't have anything to add to that, because it's such a gray area — it has never happened in history. The only time it happened in history was evolution with human beings. Was it very successful? It was a great success.

SPEAKER_02

01:06:00 - 01:06:12

For us it was, but a few creatures along the way would not agree. So, just because it's such a gray area, there's nothing else to say.

SPEAKER_00

01:06:12 - 01:06:15

We have a sample of one. It's us.

SPEAKER_02

01:06:17 - 01:06:30

But people will look at you and say: we were looking to you to help us make sure that sample two works out okay.

SPEAKER_00

01:06:30 - 01:07:16

We have more than a sample of one. We have theories, and that's a good thing. We don't need to be statisticians; a sample of one doesn't mean poverty of knowledge. A sample of one plus theory — conjectural theory of what could happen — that we do have. But I really feel helpless in contributing to this argument, because I know so little, and my imagination is limited, and I know how much I don't know. But I'm concerned. You were born and raised in Israel? Born and raised in Israel, yes.

SPEAKER_02

01:07:16 - 01:07:35

And later you served in the Israel Defense Forces. What did you learn from that experience? There's a kibbutz in there as well.

SPEAKER_00

01:07:35 - 01:08:08

Yes, because I was in the Nahal, which is a combination of agricultural work and military service. I was a real idealist. I wanted to be a member of a kibbutz all my life and to live a communal life, and so I prepared myself for that. Slowly, slowly, I wanted a greater challenge.

SPEAKER_02

01:08:08 - 01:08:12

So that's a far world away.

SPEAKER_00

01:08:12 - 01:10:09

But what I learned from that — it was a miracle. It was a miracle, the 1950s that I served through. I don't know how we survived. The country was under austerity. It tripled its population, from 600,000 to 1.8 million, by the time I finished college. No one went hungry. Austerity, yes: when you wanted to make an omelette in a restaurant, you had to bring your own egg. And they imprisoned people for bringing food from the farming villages to the city. But no one went hungry. And I always add to it: higher education did not suffer any budget cut. They still invested in me, in my wife, in our generation, to get the best education that they could. So I'm really grateful for the opportunity, and I'm trying to pay it back now. It's a miracle that we survived the war of 1948. We were so close to a second genocide; it was all planned. But we survived it by a miracle. And then the second miracle, that not many people talk about — the next phase: how no one went hungry while the country managed to triple its population. You know what it means to triple? Imagine the United States going from, what, 350 million to a billion. That's unbelievable.

SPEAKER_02

01:10:11 - 01:10:25

That's a really tense part of the world. It's a complicated part of the world. Israel and all around. Religion is at the core of that complexity.

SPEAKER_00

01:10:25 - 01:10:33

One of the components. Religion is a strong motivating force for many, many people in the Middle East.

SPEAKER_02

01:10:33 - 01:10:38

In your view, looking back, is religion good for society?

SPEAKER_00

01:10:40 - 01:11:52

That's a good question for robotics, you know. Should robots be programmed with religious beliefs? Suppose we find out, or we agree, that religion is good for you, to keep you in line. Should we give the robot the metaphor of a god? As a matter of fact, the robot will get it without us. Why? The robot will reason by metaphor. And what is the most primitive metaphor a child grows with? Mother's smile, father's teaching — father image and mother image. That's God. So, whether you want it or not, the robot will get it — well, assuming the robot is going to have a mother and a father. It may only have a programmer, which doesn't supply warmth. Discipline it does supply. So it's going to have a model of the trainer, and everything that happens in the world — cosmology and so on — is going to be mapped onto the programmer.

SPEAKER_02

01:11:52 - 01:12:13

The programmer as God — the thing that represents the origin of everything for it. That is the most primitive relationship, so it's going to arrive there by metaphor. And so the question is whether, overall, that metaphor has served us well, as humans.

SPEAKER_00

01:12:13 - 01:12:22

I really don't know. I think it did. But as long as you keep in mind that it's only a metaphor.

SPEAKER_02

01:12:22 - 01:12:29

So if you think we can, can we talk about your son?

SPEAKER_00

01:12:29 - 01:12:30

Yes, yes.

SPEAKER_02

01:12:30 - 01:12:35

Can you tell his story? Daniel.

SPEAKER_00

01:12:35 - 01:13:21

So, Daniel was abducted in Pakistan by an Al-Qaeda-driven sect, under various pretenses. I don't even pay attention to what the pretenses were. Originally they wanted the United States to deliver some promised airplanes. It was all made up; all these demands were bogus. I don't know, really. But eventually he was executed in front of a camera.

SPEAKER_02

01:13:21 - 01:13:24

At the core of that is hate and intolerance.

SPEAKER_00

01:13:24 - 01:14:17

At the core is hate — absolutely, yes. We don't really appreciate the depth of the hate with which billions of people are educated. We don't understand it. I just listened to what they teach in Mogadishu: when the water stopped in the tap, we knew exactly who did it — the Jews. The Jews. We didn't know how, but we knew who did it. We don't appreciate what that means. The depth is unbelievable.

SPEAKER_02

01:14:17 - 01:14:29

Do you think all of us are capable of evil, and that the education, the indoctrination, is really what draws it out of us?

SPEAKER_00

01:14:29 - 01:15:27

If you are indoctrinated sufficiently long and in depth, you are capable of ISIS, you are capable of Nazism. Yes, we are. But the question is whether we — after we have gone through some Western education and learned that everything is really relative, that there is no absolute God, only a belief in God — whether we are capable now of being transformed, under certain circumstances, to become brutal. That is what I'm worried about, because some people say yes: given the right circumstances, given a bad economic crisis, you are capable of doing it too. And that worries me. I want to believe I'm not capable.

SPEAKER_02

01:15:29 - 01:15:45

Seven years after Daniel's death, you wrote an article in the Wall Street Journal titled "Daniel Pearl and the Normalization of Evil." What was your message back then, and how did it change over the years?

SPEAKER_00

01:15:45 - 01:15:48

I lost.

SPEAKER_02

01:15:48 - 01:15:49

What was the message?

SPEAKER_00

01:15:49 - 01:16:54

The message was that we are not treating terrorism as a taboo. We are treating it as a bargaining device that is accepted. People have grievances, and they go and bomb restaurants. It's normal. Look — you're not even surprised when I tell you that. Twenty years ago you'd say: what? For a grievance, you go and blow up a restaurant? Today it's becoming normalized — the banalization of evil. And we have done that to ourselves: by normalizing it, by making it part of political life, a political debate. Every terrorist today becomes a freedom fighter tomorrow, and a terrorist again two days later. It's switchable.

SPEAKER_02

01:16:56 - 01:17:00

And so we should call out evil as evil.

SPEAKER_00

01:17:00 - 01:17:24

If we don't want to be part of it — to become it. Yes, if we want to separate good from evil. That's one of the first things that took place in the Garden of Eden. Remember the first thing God tells him when he wants knowledge? It's the tree of good and evil.

SPEAKER_02

01:17:26 - 01:17:37

So this evil touched your life personally. Does your heart have anger, sadness or is it hope?

SPEAKER_00

01:17:37 - 01:18:05

I see some beautiful people coming from Pakistan. I see beautiful people everywhere. But I also see a horrible propagation of evil in this country, too. It shows you how populistic slogans can catch the minds of the best intellectuals.

SPEAKER_02

01:18:05 - 01:18:07

Today is Father's Day.

SPEAKER_00

01:18:07 - 01:18:08

I didn't know that.

SPEAKER_02

01:18:08 - 01:18:15

Yeah, I heard it. What's a fun memory you have of Daniel?

SPEAKER_00

01:18:15 - 01:19:24

Oh, many good memories. He was my mentor. He had a sense of balance that I didn't have. He saw the beauty in every person. He was not as emotional as I am; he looked at things more in perspective. He really liked every person. He really grew up with the idea that a foreigner is a reason for curiosity, not for fear. One time we were in Berkeley, and a homeless man came out from some dark alley and said, hey man, can you spare a dime? I retreated back, you know, two feet back, and Danny just hugged him and said, here's a dime, enjoy yourself. Maybe you want some money to take a bath, or whatever. Where did he get it? Not from me.

SPEAKER_02

01:19:27 - 01:19:46

Do you have advice for young minds today dreaming about creating as you have dreamt creating intelligent systems? What is the best way to arrive at new breakthrough ideas and carry them through the fire of criticism and past conventional ideas?

SPEAKER_00

01:19:46 - 01:20:33

Ask your questions. Really — your questions are never dumb. And solve them your own way. And don't take no for an answer. If they are really dumb, you will find out quickly: by trying them, you'll see that they are not leading anyplace. But follow them, and try to understand things your way. That is my advice. I don't know if it's going to help anyone. There is a lot of inertia in science, in academia. It is slowing down science.

SPEAKER_02

01:20:36 - 01:20:44

Yeah, those two words, your way, that's a powerful thing. It's against inertia, potentially, against the flow.

SPEAKER_00

01:20:44 - 01:21:11

Against your professor. It is. I wrote the Book of Why in order to democratize common sense — in order to instill a rebellious spirit in students, so they wouldn't wait until the professor gets things right.

SPEAKER_02

01:21:11 - 01:21:26

You wrote the manifesto of the rebellion against the professor. So, looking back at your life of research, what ideas do you hope will ripple through the next many decades? What do you hope your legacy will be?

SPEAKER_00

01:21:26 - 01:21:32

I already have a tombstone.

SPEAKER_01

01:21:32 - 01:21:38

God. Oh boy.

SPEAKER_00

01:21:38 - 01:22:09

And on it is the fundamental law of counterfactuals. That's what it is: a simple equation, defining what a counterfactual is in terms of a model surgery. That's it, because everything follows from that. If you get that, all the rest — I can die in peace, and my students can derive all my knowledge by mathematical means.
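For reference, the tombstone equation — what Pearl calls the fundamental law of counterfactuals in his book Causality — defines a counterfactual in terms of model surgery:

```latex
Y_x(u) \;=\; Y_{M_x}(u)
```

Read: the value Y would have taken had X been x, in situation u, equals the value of Y in the surgically altered submodel M_x, obtained by deleting the equation for X and setting X = x. As he says, everything else follows from this.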

SPEAKER_02

01:22:09 - 01:22:14

The rest follows. Thank you so much for talking to me.

SPEAKER_00

01:22:14 - 01:22:21

Thank you for being so attentive and instigating.

SPEAKER_02

01:22:21 - 01:23:08

We did it. We did it. The coffee helped. Thanks for listening to this conversation with Judea Pearl, and thank you to our presenting sponsor, Cash App. Download it, use code LEXPODCAST. You'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to learn and to dream of engineering our future. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Judea Pearl: you cannot answer a question that you cannot ask, and you cannot ask a question that you have no words for. Thank you for listening, and hope to see you next time.