Transcript for Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI
SPEAKER_00
00:00 - 03:11
The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision making. He is the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is the dichotomy between two modes of thought: what he calls system one is fast, instinctive, and emotional; system two is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each of these two types of thinking. His study of the human mind and its peculiar and fascinating limitations is both instructive and inspiring for those of us seeking to engineer intelligent systems. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say one dollar's worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations called FIRST, best known for their FIRST Robotics and LEGO competitions.
They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LexPodcast, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Daniel Kahneman. You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France in Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish.
SPEAKER_01
03:11 - 03:13
Not maybe, certainly not.
SPEAKER_00
03:13 - 03:27
So I told you I'm from the Soviet Union, which was significantly impacted by the war as well, and I'm Jewish as well. What do you think World War II taught us about human psychology broadly?
SPEAKER_01
03:27 - 04:24
Well, I think the only big surprise is the extermination policy, the genocide, by the German people. That's when you look back on it. And I think that's a major surprise. It's a surprise because it's a surprise that they could do it. It's a surprise that enough people willingly participated in that. This is a surprise. Now it's no longer a surprise, but it changed many people's views, I think, about human beings. Certainly for me, the Eichmann trial teaches you something, because it's very clear that if it could happen in Germany, it could happen anywhere. It's not that the Germans were special. This could happen anywhere.
SPEAKER_00
04:24 - 04:30
So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?
SPEAKER_01
04:31 - 05:26
I don't think in those terms. I think what is certainly possible is that you can dehumanize people so that you treat them not as people anymore but as animals. And in the same way that you can slaughter animals without feeling much of anything, it can be the same. And I think the combination of dehumanizing the other side and having uncontrolled power over other people doesn't bring out the most generous aspects of human nature. So that Nazi soldier, you know, he was a good man. I mean, he was perfectly capable of killing a lot of people, and I'm sure he did.
SPEAKER_00
05:27 - 05:38
But what did the Jewish people mean to the Nazis? So, what was the dismissal of Jewish people as worthy of...
SPEAKER_01
05:38 - 06:22
Again, it is surprising that it was so extreme, but there is one thing in human nature, I don't want to call it evil, but the distinction between the in-group and the out-group, that is very basic. So that's built in: the loyalty and affection towards the in-group, and the willingness to dehumanize the out-group. That is in human nature. And I don't think we needed the Holocaust to teach us that, but the Holocaust is a very sharp lesson of what can happen to people, of what people can do.
SPEAKER_00
06:23 - 06:26
So the effect of the in-group and the out-group.
SPEAKER_01
06:26 - 07:27
You know, it's clear that those were not people to them. You could shoot them; they were not human. There was no empathy, or very, very little empathy left. Occasionally, you know, there might have been some, and very quickly, by the way, the empathy disappeared if there was any initially. And the fact that everybody around you was doing it, the whole group doing it, everybody shooting Jews, I think that makes it permissible. Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so that they could get away with it, that is a question. It's an interesting question.
SPEAKER_00
07:28 - 07:31
Are these artifacts of history or is it human nature?
SPEAKER_01
07:31 - 07:45
I think that's really human nature. You know, you put some people in a position of power relative to other people, and then they become less human, they become different.
SPEAKER_00
07:45 - 07:58
But in general, in war, outside of concentration camps in World War II, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature.
SPEAKER_01
07:58 - 08:35
Well, what it brings out is the loyalty among soldiers, and it brings out the bonding. Male bonding, I think, is a very real thing that happens. And there is a certain thrill to friendship, and there is certainly a certain thrill to friendship under risk, to shared risk. So people have very profound emotions, up to the point where it gets so traumatic that little is left.
SPEAKER_00
08:35 - 09:07
So let's talk about psychology a little bit. In your book Thinking, Fast and Slow, you describe two modes of thought: system one, the fast, instinctive, and emotional one, and system two, the slower, deliberate, logical one. At the risk of asking Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics of the two systems for people who have not read your book?
SPEAKER_01
09:07 - 09:51
Well, I mean, the word system is a bit misleading, but at the same time that it's misleading, it's also very useful. What I call system one, it's easier to think of it as a family of activities. And primarily, the way to describe it is that there are different ways for ideas to come to mind. Some ideas come to mind automatically, and the standard example, my standard example, is two plus two: then something happens to you. And in other cases, you've got to do something, you've got to work in order to produce the idea. And my example, I always give the same pair of numbers, is 27 times 14, I think.
SPEAKER_00
09:54 - 09:57
You have to perform some algorithm in your head.
SPEAKER_01
09:57 - 10:33
And it takes time. It's very different; nothing comes to mind, except something comes to mind, which is the algorithm that you've got to perform. And then it's work, and it engages short-term memory, it engages executive function, and it makes you incapable of doing other things at the same time. So the main characteristic of system two is that there is mental effort involved, and there is a limited capacity for mental effort, whereas system one is effortless, essentially. That's the major distinction.
SPEAKER_00
10:33 - 11:19
So, you know, it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no distinct two systems in the brain, from a neurobiological or even a psychology perspective. But why does there seem to be, from the experiments you've conducted, a kind of emergent two modes of thinking? So at some point, these kinds of systems came into a brain architecture; maybe mammals share it. Or do you not think of it at all in those terms, that it's all a mush and these two things just emerge?
SPEAKER_01
11:20 - 12:53
Evolutionary theorizing about this is cheap and easy. So the way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability to understand the world, at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen, and that's a key form of understanding the world. And my crude idea is that what I call system two grew out of this. And there is language, and there is the capacity of manipulating ideas, the capacity of imagining futures, of imagining counterfactuals, things that haven't happened, and of doing conditional thinking. There are really a lot of abilities that without language, and without the very large brain that we have compared to others, would be impossible. Now, system one is more like what the animals are, but system one can also talk. I mean, it has language, it understands language. Indeed, it speaks for us. I mean, you know, I'm not choosing every word as a deliberate process. I have some idea, and then the words come out. And that's automatic and effortless.
SPEAKER_00
12:54 - 13:07
And many of the experiments you've done show that, listen, system one exists, and it does speak for us, and we should be careful about the voice it provides.
SPEAKER_01
13:07 - 14:03
Well, we have to trust it, because of the speed at which it acts. System two, if we depended on system two for survival, we wouldn't survive very long, because it's very slow. Yeah, crossing the street. Crossing the street. I mean, many things depend on things being automatic. One very important aspect of system one is that it's not instinctive. You used the word instinctive. It contains skills that clearly have been learned, so that skilled behavior, like driving a car or speaking, in fact skilled behavior, has to be learned. You know, you don't come equipped with driving; you have to learn how to drive, and you have to go through a period where driving is not automatic before it becomes automatic.
SPEAKER_00
14:03 - 14:34
So, yeah, you construct, I mean, this is where you talk about heuristics and biases: to make it automatic, you create a pattern, and then system one essentially matches a new experience against the previously seen pattern. And when that match is not a good one, that's when the cognitive biases happen. But most of the time it works, and so most of the time the anticipation of what's going to happen next is correct, and most of the time
SPEAKER_01
14:35 - 15:23
the plan about what you have to do is correct, and so most of the time everything works just fine. What's interesting, actually, is that in some sense system one is much better at what it does than system two is at what it does. That is, there is that quality of effortlessly solving enormously complicated problems, which clearly exists, so that for a very good chess player, all the moves that come to their mind are strong moves. So all the selection of strong moves happens unconsciously and automatically and very, very fast. All that is in system one. System two verifies.
SPEAKER_00
15:23 - 15:56
So along this line of thinking, really what we are is machines that construct a pretty effective system one. You could think of it that way. So we're now talking about humans, but if you think about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? So, both systems are useful for perhaps instilling in robots?
SPEAKER_01
15:56 - 16:52
What is happening these days is that deep learning is more like a system one product than a system two product. I mean, deep learning matches patterns and anticipates what's going to happen, so it's highly predictive. What deep learning doesn't have, and many people think that this is critical, is the ability to reason, so there is no system two there. But I think, very importantly, it doesn't have any causality or any way to represent meaning and to represent real interactions. So until that is solved, what can be accomplished is marvelous and very exciting, but limited.
SPEAKER_00
16:53 - 17:03
That's actually really nice to think of: current advances in machine learning as essentially system one advances. So how far can we get with just system one?
SPEAKER_01
17:03 - 18:01
If we think of deep learning and artificial systems, it's very clear that DeepMind has already gone way beyond what people thought was possible, I think. The thing that has impressed me most about the developments in AI is the speed. Things, at least in the context of deep learning, and maybe this is about to slow down, but things moved a lot faster than anticipated. The transition from solving chess to solving Go, I mean, it's bewildering how quickly it went. The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly there are many problems that you can solve that way, but there are some problems for which you need something else.
SPEAKER_00
18:01 - 18:02
Something like reasoning.
SPEAKER_01
18:03 - 19:01
Well, reasoning, and also, you know, one of the real mysteries. The psychologist Gary Marcus, who is also a critic of AI, what he points out, and I think he has a point, is that humans learn quickly. Children don't need a million examples; they need two or three examples. So clearly there is a fundamental difference. What would enable a machine to learn quickly? What do you have to build into the machine? Because it's clear that you have to build some expectations or something into the machine to make it ready to learn quickly. That at the moment seems to be unsolved. I'm pretty sure that if they have solved it, I haven't heard of it yet.
SPEAKER_00
19:05 - 19:28
They're actually trying. They and OpenAI are trying to start to use neural networks to reason. So simple knowledge, of course: causality, temporal causality, is out of reach for most everybody. You mentioned that the benefit of system one is essentially that it's fast, allows us to function in the world.
SPEAKER_01
19:28 - 19:30
Fast and skilled, yeah.
SPEAKER_00
19:30 - 19:31
It's skill.
SPEAKER_01
19:31 - 20:02
And it has a model of the world. You know, in a sense, there was an earlier phase of AI that attempted to model reasoning, and they were moderately successful. But, you know, reasoning by itself doesn't get you much. Deep learning has been much more successful in terms of what it can do. But now, it's an interesting question whether it's approaching its limits. What do you think?
SPEAKER_00
20:03 - 20:34
I think absolutely, so I just talked to Gianlichoon. He mentioned, you know, in the film. So he thinks that the limits were not going to hit the limits with, you know, networks that ultimately this kind of system on pattern matching will start to start to look like system two without significant transformation of the architecture. So I'm more with the majority of the people who think that yes, you know, that works well, hit a limit in their capability.
SPEAKER_01
20:34 - 20:59
On the one hand, I have heard him tell Demis Hassabis, essentially, that what they have accomplished is not a big deal, that they have just touched, that basically they can't do unsupervised learning in an effective way. But you're telling me that he thinks that, within the current architecture, you can do causality and reasoning?
SPEAKER_00
20:59 - 21:36
So he's very much a pragmatist, in the sense of saying that we're very far away, that there's still, I think the idea he expresses is that we can only see one or two mountain peaks ahead, and there might be either a few more after, or thousands more after. Yeah. So that kind of idea. Right. But nevertheless, he doesn't see the final answer as fundamentally different from the one that we currently have, so neural networks being a huge part of that.
SPEAKER_01
21:36 - 21:43
Yeah, I mean, that's very likely, because pattern matching is so much of what's going on.
SPEAKER_00
21:43 - 21:47
And you can think of neural networks as processing information sequentially.
SPEAKER_01
21:47 - 22:16
Yeah, I mean, you know, there is an important aspect to this. For example, you get systems that translate, and they do a very good job, but they really don't know what they're talking about. And for that, I was really quite surprised. For that, you would need an AI that has sensation, an AI that is in touch with the world.
SPEAKER_00
22:16 - 22:22
And self-awareness, and maybe even something that resembles consciousness, those kinds of ideas.
SPEAKER_01
22:22 - 22:33
It's merely awareness, you know, awareness of what's going on, so that the words have meaning, or can be connected with some perception or some action.
SPEAKER_00
22:33 - 22:41
Yeah, so that's a big thing for Yann, and what he refers to as grounding to the physical space.
SPEAKER_01
22:41 - 22:56
So we're talking about the same thing. Yeah. But how do you ground? I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.
SPEAKER_00
22:57 - 23:19
The open question is what it means to ground. I mean, we're very human-centric in our thinking. But what does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans have? All of these elements, it's a very open question.
SPEAKER_01
23:19 - 23:56
You know, I'm not sure about having a body, but having a perceptual system; having a body would be very helpful too. I mean, if you think about mimicking a human, having perception seems to be essential, so that you can accumulate knowledge about the world. If you can imagine a human completely paralyzed, there's a lot that the human brain could learn with a paralyzed body. So if we got a machine that could do that, it would be a big deal.
SPEAKER_00
23:56 - 24:17
And then the flip side of that, something you see in children, and something the machine learning world calls active learning, maybe, is also being able to play with the world. How important, for developing system one or system two, do you think it is to play with the world?
SPEAKER_01
24:17 - 25:06
A lot. A lot of what you learn is to anticipate the outcomes of your actions. You can see how babies learn it with their hands, how they learn to connect the movements of their hands with something that clearly happens in the brain, and the ability of the brain to learn new patterns. You know, it's the kind of thing that you get with artificial limbs: you connect it, and then people learn to operate the artificial limb really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action.
SPEAKER_00
25:06 - 25:25
At the risk of going into way too mysterious a land, what do you think it takes to build a system like that? Obviously, we're very far from understanding how the brain works, but how difficult is it to build this mind of ours?
SPEAKER_01
25:26 - 25:55
You know, I mean, I think that Yann LeCun's answer, that we don't know how many mountains there are, I think that's a very good answer. I think that, if you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think the people who are actually doing the work, Demis Hassabis and Yann, are much more realistic than that, fairly realistic, I think.
SPEAKER_00
25:57 - 26:28
To maybe phrase it another way, from a perspective not of building it but of understanding it: how complicated are human beings, in the following sense? You know, I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being, their perception of the world, the two systems they operate under, sufficiently to be able to predict whether the pedestrian is going to cross the road or not?
SPEAKER_01
26:28 - 27:36
You know, I'm fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. So anything that any vehicle learns becomes part of what the whole system knows, and with a system multiplier like that, there is a lot that you can do. So human beings are very complicated, and, you know, the system is going to make mistakes, but humans make mistakes. I think they'll be able to, I think they are able to, anticipate pedestrians; otherwise a lot would happen. They're able to get into a roundabout and into traffic, so they must know what to expect, how to anticipate how people will react when they're sneaking in. And there's a lot of learning involved in that.
SPEAKER_00
27:36 - 28:12
Currently, the pedestrians are treated as things that cannot be hit; they're not treated as agents with whom you interact in a game-theoretic way. I mean, it's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think. And nobody has really tried to seriously solve the problem of that dance, because, I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver.
SPEAKER_01
28:13 - 28:38
And there is a dance, as part of the dance, that would be quite complicated. For example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that. And that's a signal that I'm sending. And I would be sending that signal to an autonomous vehicle, and it had better understand it, because it means I'm crossing.
SPEAKER_00
28:39 - 29:05
And there's another thing you do. Actually, I'll tell you what you do, because we've watched, I've watched hundreds of hours of video on this. You do that before you step into the street, and when you step into the street, you actually look away. Yeah. Yeah. Now, what that's saying is, you're trusting that the car, which hasn't slowed down yet, will slow down. Yeah.
SPEAKER_01
29:06 - 29:18
And you're telling him, yeah, I'm committed. I mean, this is like in a game of chicken. So I'm committed, and if I'm committed, I'm looking away. So you just have to stop.
SPEAKER_00
29:18 - 29:23
So the question is whether a machine that observes that needs to understand mortality,
SPEAKER_01
29:24 - 30:13
Here I'm not sure that it's got to understand so much as it's got to anticipate. And here, you know, you're surprising me, because here I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go or in playing chess: there's a lot of anticipation and there is zero understanding. So I thought that you didn't need a model of the human, a model of the human mind, to avoid hitting pedestrians. But you are suggesting that you do. Yeah, you do. And then it's a lot harder.
SPEAKER_00
30:13 - 30:58
And I have a follow-up question, to see where your intuition lies. It seems that almost every robot-human collaboration system is a lot harder than people realize. So do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot, but just in tasks in general. If, as we talked about, current neural networks are kind of system one, do you think those same systems can borrow humans for system two type tasks and collaborate successfully?
SPEAKER_01
30:59 - 31:57
Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough so that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there are problems that for some reason the machine cannot solve but that people could solve; then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human. That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem.
SPEAKER_00
31:57 - 32:09
But that's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems.
SPEAKER_01
32:10 - 32:32
It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. I mean, there was a time at which Kasparov was saying that human-machine combinations will beat everybody. Even Stockfish doesn't need people, and AlphaZero certainly doesn't need people.
SPEAKER_00
32:33 - 32:47
The question is, just like you said, how many problems are like chess, and how many problems are not like chess? Well, every problem probably in the end is like chess. The question is, how long is that transition period?
SPEAKER_01
32:47 - 32:58
I mean, you know, that's a question I would ask you. In terms of, you know, autonomous vehicles, just driving is probably a lot more complicated than Go, to solve that.
SPEAKER_00
32:58 - 33:01
Yes. And that's surprising. Because it's open.
SPEAKER_01
33:02 - 33:28
No, I mean, that's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation and then, within the situation, bringing up the relevant knowledge. And for that hierarchical type of system to work,
SPEAKER_00
33:29 - 34:28
You need a more complicated system than we currently have. A lot of people think driving is pretty simple, because, as human beings, and this is probably one of the cognitive biases, they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, basically on how hard it is for them to do the task, and then they take it for granted. Maybe you can speak to that, because most people tell me driving is trivial, and that humans in fact are terrible at driving, is what people tell me. And I watch humans, and humans are actually incredible at driving, and driving is really terribly difficult. Is that just another element of the effects that you've described in your work on the psychology side?
SPEAKER_01
34:32 - 35:32
I mean, I haven't really, you know, I would say that my research has contributed nothing to understanding the ecology, to understanding the structure of situations and the complexity of problems. What we do know is that it's very clear that Go is endlessly complicated, but it's very constrained, and in the real world there are far fewer constraints and many more potential surprises. So that's obvious, or it's not always obvious to people, right? So when you think about it, well, people thought that reasoning was hard and perceiving was easy, but they quickly learned that actually modeling vision was tremendously complicated, whereas modeling, even proving, theorems was relatively straightforward.
SPEAKER_00
35:33 - 35:53
To push back a little bit on the "quickly" part: it took several decades to learn that, and most people still haven't learned it. I mean, our intuition, of course AI researchers have it, but if you drift a little bit outside the specific AI field, the old intuition still persists.
SPEAKER_01
35:53 - 36:16
Oh, yeah. That's true. The intuitions of the public haven't changed radically. And they are, as you said, evaluating the complexity of problems by how difficult it is for them to solve the problems, and that has very little to do with the complexity of solving them in AI.
SPEAKER_00
36:16 - 36:59
How do you think, from the perspective of an AI researcher, we should deal with the intuitions of the public? So, arguably, the combination of hype, investment, and public intuition is what led to the AI winters. I'm sure the same can be applied to tech in general: the intuition of the public leads to media hype, leads to companies investing in the tech, and then the tech doesn't make the companies money, and then there's a crash. Is there a way to educate people, to fight the, let's call it, system one thinking?
SPEAKER_01
37:01 - 38:12
In general, no. I think that's the simple answer. And it's going to take a long time before the understanding of what those systems can do becomes, you know, part of public knowledge. And then, the expectations, you know, there are several aspects that are going to be very complicated. The fact that you have a device that cannot explain itself is a major, major difficulty. And we are already seeing that. I mean, this is really something that is happening in the judicial system: you have systems that are clearly better at predicting parole violations than judges, but they can't explain their reasoning, and so people don't want to trust them.
SPEAKER_00
38:13 - 38:28
We seem to, in system one even, use cues to make judgments about our environment. So this explainability point: do you think humans can explain stuff?
SPEAKER_01
38:28 - 39:15
No, but... I mean, there is a very interesting aspect of that. Humans think they can explain themselves. So when you say something, and I ask you why you believe that, then reasons will occur to you and you will give them. But actually, my own belief is that in most cases the reasons have very little to do with why you believe what you believe. The reasons are a story that comes to your mind when you need to explain yourself. But people traffic in those explanations; I mean, human interaction depends on those shared fictions and the stories that people tell themselves.
SPEAKER_00
39:15 - 39:48
You just made me actually realize, and we'll talk about stories in a second, that, not to be cynical about it, but perhaps there's a whole movement of people trying to do explainable AI, and really we don't necessarily need to explain AI. It doesn't need to explain itself; it just needs to tell a convincing story. Yeah, absolutely. The story doesn't necessarily need to reflect the truth; it just needs to be convincing.
SPEAKER_01
39:48 - 40:22
There's something to that. You can say exactly the same thing in a way that sounds cynical or doesn't sound cynical. But the objective of having an explanation is to tell a story that will be acceptable to people. And for it to be acceptable, and to be robustly acceptable, it has to have some elements of truth. But the objective is for people to accept it.
SPEAKER_00
40:22 - 40:43
It's quite brilliant, actually. But on the stories that we tell — sorry to ask you the question that most people know the answer to, but you talk about two selves in terms of how life is lived: the experiencing self and the remembering self. Can you describe the distinction between the two?
SPEAKER_01
40:44 - 42:20
Well, sure. I mean, there is an aspect of life that, most of the time, we just live. We have experiences, and they're better and they're worse, and it goes on over time. And mostly we forget everything that happens — we forget most of what happens. Then occasionally, when something ends, or at different points, you evaluate the past and you form a memory. And the memory is schematic. It's not that you can roll a film of an interaction; you construct, in effect, the elements of a story about an episode. So there is the experience, and there is the story that is created about the experience. That's what I call the remembering self. So there is a self that lives, and there is a self that evaluates life. Now, the deep paradox in that is that we have one self that does the living, but the other self, the remembering self, is all we get to keep. And basically, decision making and everything that we do is governed by our memories, not by what actually happened. It's governed by the story that we told ourselves, by the story that we're keeping. So that's the distinction.
SPEAKER_00
42:21 - 42:32
I mean, there are a lot of brilliant ideas about the pursuit of happiness that come out of that. What are the properties of happiness that emerge from these two selves?
SPEAKER_01
42:32 - 43:35
There are properties of how we construct stories that are really important. I've studied a few, but a couple are really very striking. And one is that in stories, time doesn't matter. There's a sequence of events, there are highlights, and how long it took doesn't figure in — "they lived happily ever after," or "three years later, something happened." Time really doesn't matter. In stories, events matter, but time doesn't. That leads to a very interesting set of problems, because time is all we've got to live. I mean, you know, time is the currency of life. And yet time is not represented, basically, in evaluated memories. So that creates a lot of paradoxes that I've thought about.
SPEAKER_00
43:35 - 43:51
Yeah, it's fascinating. But if you were to give advice on how one lives a happy life based on such properties, what's the optimal?
SPEAKER_01
43:52 - 45:04
You know, I give up. I abandoned happiness research because I couldn't solve that problem. In the first place, it's very clear that if you do talk in terms of those two selves, then what makes the remembering self happy and what makes the experiencing self happy are different things. And I asked the question: suppose you're planning a vacation, and you're just told that at the end of the vacation you'll get an amnesic drug, so you'll remember nothing, and they'll also destroy all your photos, so there'll be nothing. Would you still go on the same vacation? It turns out we go on vacations in large part to construct memories — not to have experiences, but to construct memories. And it turns out that the vacation you would want for yourself if you knew you will not remember it is probably not the same vacation that you would want for yourself if you will remember. So I have no solution to these problems, but clearly those are big issues.
SPEAKER_00
45:04 - 45:49
And you've talked about how the number of minutes or hours you spend thinking about the vacation is an interesting way to think about it, because that's how you really experience the vacation outside of being in it. But there's also a modern — I don't know if you think about this or interact with it — there's a modern way to magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people live life for the picture that you take and post somewhere, and now thousands of people can share in it, potentially millions. And then you can relive it even much more than just those minutes. Do you think about that?
SPEAKER_01
45:49 - 46:03
The magnification — you know, I'm too old for social networks. I've never seen Instagram, so I cannot really speak intelligently about those things. I'm just too old.
SPEAKER_00
46:03 - 46:06
But it's interesting to watch the exact effects you described.
SPEAKER_01
46:06 - 47:14
It makes a very big difference. I mean, and it will also make a difference in that — I don't know whether it's clear — in some ways, the devices that serve us supplant functions. So you don't have to remember phone numbers. You really don't have to know facts. I mean, the number of conversations I'm involved in where somebody says, "Well, let's look it up" — it's changed conversations. It means that it's much less important to know things. It used to be very important to know things. This is changing. So the requirements that we have for ourselves and for other people are changing because of all those supports. And I have no idea what Instagram does. Well, I'll tell you.
SPEAKER_00
47:14 - 47:49
I wish I could just have the moment — the remembering self could enjoy this conversation — but I'll get to enjoy it even more by watching it, and then talking to others. It'll be about a hundred thousand people, scary as it is to say, who will listen or watch this, right? It changes things. It changes the experience of the world, in that you seek out experiences which could be shared in that way. And it's the same effects that you described, and I don't think the psychology of that magnification has been described yet.
SPEAKER_01
47:49 - 48:07
Because it's a new world. There was a time when people read books, and you could assume that your friends had read the same books that you read.
SPEAKER_00
48:07 - 48:11
So there was a kind of invisible sharing there.
SPEAKER_01
48:11 - 48:48
There was a lot of sharing going on, and there was a lot of assumed common knowledge, and, you know, that was built in. I mean, it was obvious that you had read the New York Times, it was obvious that you had read the reviews. So a lot was taken for granted that was shared. And, you know, when there were three television channels, it was obvious that you'd seen one of them, probably the same one. So sharing was always there. It was just different.
SPEAKER_00
48:49 - 49:23
At the risk of inviting mockery from you, let me say that I'm also a fan of Star Trek and Camus and the existentialist philosophers. And I'm joking, of course, about the mockery. But from the perspective of the two selves, what do you think of the existentialist philosophy of life — trying to really emphasize the experiencing self as the proper way, or the best way, to live life?
SPEAKER_01
49:23 - 49:49
I don't know enough philosophy to answer that. But, you know, the emphasis on experience is also the emphasis in Buddhism. Right, that's right. You've just got to experience things, and not to evaluate, not to pass judgment, not to keep score.
SPEAKER_00
49:49 - 50:08
So, when you look at the grand picture of experience, do you think there's something to that — that one of the ways to achieve contentment, and maybe even happiness, is letting go of the scorekeeping of the remembering self?
SPEAKER_01
50:09 - 51:11
Well, I mean, I think, you know, one could imagine a life in which people don't score themselves, and it feels as if that would be a better life — as if the self-scoring, the "How am I doing?" kind of question, is not a very happy thing to have. But I got out of that field because I couldn't solve that problem. And that was because my intuition was that the experiencing self — that's reality. But then it turns out that what people want for themselves is not experiences. They want memories, and they want a good story about their life. And you cannot have a theory of happiness that doesn't correspond to what people want for themselves. When I realized that this was where things were going, I really sort of left the field of research.
SPEAKER_00
51:12 - 51:43
Do you think there's something instructive about this emphasis on reliving memories for building AI systems? Currently, artificial intelligence systems are more like the experiencing self, in that they react to the environment. There's some pattern formation, like learning and so on, but they really don't construct memories — except in reinforcement learning, every once in a while, where experiences are replayed over and over.
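The "replay" mentioned here is experience replay in reinforcement learning, where an agent stores past transitions in a buffer and revisits random samples of them during training — loosely, a constructed memory it learns from after the fact. A minimal sketch (the class and the names in it are illustrative, not from any particular library):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory of past transitions, sampled at random for training."""

    def __init__(self, capacity):
        # a deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # random sampling breaks the temporal correlation between consecutive steps
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.add(t, 0, 1.0, t + 1)
# only the 3 most recent transitions remain; training would draw buf.sample(k)
```

Like the remembering self, the buffer keeps a schematic, bounded record rather than the full stream of experience, and what the agent learns is governed by what was kept.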
SPEAKER_01
51:43 - 51:46
But, you know, that, in principle, would not be...
SPEAKER_00
51:46 - 51:54
Do you think that's useful? Do you think it's a feature or a bug of human beings that we look back?
SPEAKER_01
51:54 - 52:09
Oh, I think that's definitely a feature. That's not a bug. I mean, you have to look back in order to look forward. So without looking back, you couldn't really intelligently look forward.
SPEAKER_00
52:10 - 52:42
You're looking for the echoes of the same kind of experience in order to predict what the future holds. Viktor Frankl, in his book Man's Search for Meaning — I'm not sure if you've read it — describes his experience in the concentration camps during World War II as a way to argue that finding and identifying a positive purpose in life can save one from suffering. First of all, do you connect with the philosophy he describes there?
SPEAKER_01
52:45 - 53:57
Not really. I can see that somebody who has that feeling of purpose and meaning and so on — that could sustain you. I, in general, don't have that feeling, and I'm pretty sure that if I were in a concentration camp, I'd give up and die. He is a survivor, and he survived with that. And I'm not sure how essential to survival it is, but I do know, when I think about myself, that I would have given up — "Oh, this isn't going anywhere." And there is a sort of character that manages to survive in conditions like that. And then, because they survived, they tell stories, and it sounds as if they survived because of what they were doing. We have no idea. Maybe they survived because of the kind of people they are, and the kind of people who survive tell themselves stories of a particular kind.
SPEAKER_00
53:57 - 54:05
So you don't think seeking purpose is a significant driver in us?
SPEAKER_01
54:05 - 55:01
It's a very interesting question, because when you ask people whether it's very important to have meaning in their life, they say, "Oh yes, that's the most important thing." But when you ask people what kind of a day they had, and what the experiences they remember were, you don't get much meaning. You get social experiences. And some people say, for example, that in taking care of children, the fact that they are your children and you're taking care of them makes a very big difference. I think that's entirely true, but it's more because of a story that we're telling ourselves, which is a very different story when we're taking care of our children than when we're taking care of others.
SPEAKER_00
55:02 - 56:09
Jumping around a little bit — you've done a lot of experiments, so let me ask a question. Most of the work I do, for example, is in the real world, but most of the clean, good science that you can do is in the lab. Given that distinction, do you think we can understand the fundamentals of human behavior through controlled experiments in the lab? If we talk about pupil diameter, for example, it's much easier to study when you can control the lighting conditions, but when we look at driving, lighting variation destroys your ability to use pupil diameter. And in the lab, for semi-autonomous or autonomous vehicles, in driving simulations, we don't capture true, honest human behavior in that particular domain. So what's your intuition? How much of human behavior can we study in the controlled environment of the lab?
SPEAKER_01
56:11 - 56:50
A lot. But you have to verify it, you know — your conclusions are basically limited to the experimental situation, and then you have to make a big inductive leap to the real world. And that, I think, is the difference between the good psychologists and the others that are mediocre: whether your experiment captures something that's important and something that's real, or whether you're just running experiments.
SPEAKER_00
56:50 - 57:05
So what is that like — the birth of an idea, its development in your mind, to something that leads to an experiment? Is it similar to what, maybe, Einstein or the good physicists do? Is it intuition? You basically use your intuition to build up...
SPEAKER_01
57:06 - 57:49
Yes, but I mean, you know, it's very skilled intuition. I just had that experience, actually: I had an idea that turned out to be a very good idea, a couple of days ago, and you have a sense of that building up. So I'm working with a collaborator, and he was essentially saying, "You know, what are you doing? What's going on?" And I couldn't exactly explain it, but I knew this was going somewhere. You know, I've been around that game for a very long time, and so you develop that anticipation that, yes, this is worth following. That's part of the skill.
SPEAKER_00
57:51 - 58:03
Is that something you can reduce to words, in describing a process, in the form of advice to others? No? Follow your heart, essentially?
SPEAKER_01
58:03 - 58:11
You know, it's like trying to explain what it's like to drive. You've got to break it apart, and it's not...
SPEAKER_00
58:11 - 58:12
And then you lose.
SPEAKER_01
58:12 - 58:13
And then you lose the experience.
SPEAKER_00
58:15 - 59:19
You mentioned collaboration. You've written about your collaboration with Amos Tversky — this is you writing: "The twelve or thirteen years in which most of our work was joint were years of interpersonal and intellectual bliss. Everything was interesting, almost everything was funny, and there was the recurrent joy of seeing an idea take shape. So many times in those years we shared the magical experience of one of us saying something which the other one would understand more deeply than the speaker had done. Contrary to the old laws of information theory, it was common for us to find that more information was received than had been sent. I have almost never had the experience with anyone else. If you have not had it, you don't know how marvelous collaboration can be." So let me ask a perhaps silly question: how does one find and create such a collaboration? That may be like asking how one finds love.
SPEAKER_01
59:19 - 59:49
Yeah, you have to be lucky. And I think you have to have the character for that, because I've had many collaborations, and none were as exciting as with Amos. But I've had, and I'm having, very good ones. So it's a skill. I think I'm good at it. Not everybody is good at it, and then it's the luck of finding people who are also good at it.
SPEAKER_00
59:49 - 59:57
Is there advice for young scientists who also seek to violate this law of information theory?
SPEAKER_01
01:00:05 - 01:00:52
I really think so much luck is involved. And those really serious collaborations, at least in my experience, are a very personal experience, and I have to like the person I'm working with. Otherwise, there is the kind of collaboration which is like an exchange, a commercial exchange: you give me this, I give you that. But the real ones are interpersonal. They're between people who like each other, and who like making each other think, and who like the way the other person responds to your thoughts. You have to be lucky.
SPEAKER_00
01:00:53 - 01:01:13
Yeah. I mean, I already noticed that, even just me showing up here, you quickly started digging into a particular problem I'm working on, and already new information started to emerge. Is that the process — just the process of curiosity, of talking to people about problems and seeing?
SPEAKER_01
01:01:13 - 01:01:22
I'm curious about anything to do with AI and robotics, you know. And I knew you were dealing with that, so I was curious.
SPEAKER_00
01:01:22 - 01:01:46
Just follow your curiosity. Jumping around, on the psychology front: there's the dramatic-sounding terminology of the "replication crisis," but really it's just that, at times, study effects are not fully generalizable. They don't...
SPEAKER_01
01:01:46 - 01:01:50
You are being polite. It's worse than that.
SPEAKER_00
01:01:50 - 01:01:58
So I'm actually not fully familiar with the degree of how bad it is, right? So what do you think is the source? Where do you think it comes from?
SPEAKER_01
01:01:58 - 01:06:05
I think I know what's going on, actually. I mean, I have a theory about what's going on. What's going on is that there is, first of all, a very important distinction between two types of experiments. One type is within-subject: the same person is in two experimental conditions. The other type is between-subject: some people are in this condition, other people are in that condition. They are different worlds, and between-subject experiments are much harder to predict and much harder to anticipate. They're also more expensive, because you need more people. So between-subject experiments are where the problem is — it's not so much within-subject experiments, it's really between. And there is a very good reason why the intuitions of researchers about between-subject experiments are wrong: when you are a researcher, you are in a within-subject situation. That is, you are imagining the two conditions, and you see the causality and you feel it. But in the between-subject condition, people live in one condition, and the other one is just nowhere. So our intuitions are very weak about between-subject experiments, and that, I think, is something people haven't realized. And in addition, because of that, we have no idea about the power of experimental manipulations, because the same manipulation is much more powerful when you are in the two conditions than when you live in only one condition. So experimenters have very poor intuitions about between-subject experiments. And there is something else which is very important, I think: almost all psychological hypotheses are true. That is, in the sense that, directionally, if you have a hypothesis that A causes B, it's not true that A causes the opposite of B. Maybe A just has very little effect, but hypotheses are true, mostly.
Except, mostly, they're very weak — much weaker than you think when you are imagining them. The reason I'm excited about that is that I recently heard about some friends of mine who essentially funded 53 studies of behavioral change, by 20 different teams of people, with a very precise objective of changing the number of times that people go to the gym. And the success rate was zero. Not one of the 53 studies worked. Now, what's interesting about that is that those are the best people in the field, and they had no idea what was going on. So they are not calibrated. They think an effect is going to be powerful because they can imagine it, but actually it's just weak, because you are focusing on your manipulation, and it feels powerful to you. There's a thing I've written about called the focusing illusion: when you think about something, it looks very important — more important than it really is.
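The zero-for-53 result reads as a statement about statistical power: when the true effect is real but tiny, conventional between-subject studies almost never detect it. A quick Monte Carlo sketch (an editor's illustration — the effect size, sample size, and test are assumptions, not details of the studies Kahneman describes):

```python
import math
import random

def simulate_studies(n_studies=53, n_per_group=100, true_d=0.05, seed=0):
    """Count how many two-group studies of a weak true effect reach p < .05."""
    rng = random.Random(seed)
    significant = 0
    for _ in range(n_studies):
        # between-subject design: separate control and treated samples
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
        diff = sum(treated) / n_per_group - sum(control) / n_per_group
        se = math.sqrt(2.0 / n_per_group)  # z-test with known unit variance
        if abs(diff / se) > 1.96:          # two-sided alpha = .05
            significant += 1
    return significant
```

With a true standardized effect of 0.05, only a handful of the 53 simulated studies come out significant at most, even though the hypothesis is directionally true; raising `true_d` to 1.0 flips nearly all of them to significant. The manipulation is real, just far weaker than the researcher imagining both conditions would guess.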
SPEAKER_00
01:06:05 - 01:06:16
More important than it really is. But if you don't see the effect in the 53 studies, doesn't that mean you just report that? So what's the solution to that?
SPEAKER_01
01:06:19 - 01:06:58
The solution is for people to trust their intuitions less, or to try out their intuitions beforehand. I mean, experiments have to be preregistered, and by the time you run an experiment, you have to be committed to it, and you have to run the experiment seriously enough, and in public. And this is happening. The interesting thing is what happens before — how people prepare themselves and how they run pilot experiments. It's going to change the way psychology is done, and it's already happening.
SPEAKER_00
01:06:58 - 01:07:10
This may connect to the study sample size: do you have a hope for the internet?
SPEAKER_01
01:07:10 - 01:07:20
This is really happening. MTurk — everybody's running experiments on MTurk, and it's very cheap and very effective.
SPEAKER_00
01:07:20 - 01:07:26
Do you think that changes psychology, essentially, because you can now scale the number of subjects?
SPEAKER_01
01:07:26 - 01:07:51
Eventually it will. I mean, you know, I can't put my finger on how exactly, but that's been true in psychology: whenever an important new method came in, it changed the field. And MTurk is really a method, because it makes it very much easier to do certain things.
SPEAKER_00
01:07:52 - 01:08:11
An undergrad student asked me last week how big a neural network should be for a particular problem. So let me ask you an equivalent question: how many subjects does a study need for it to have a conclusive result?
SPEAKER_01
01:08:11 - 01:09:10
Well, it depends on the strength of the effect. So if you're studying visual perception, or the perception of color — many of the classic results in visual and color perception were done on three or four people, and I think one of them was partly color-blind. But vision, you know, is remarkably reliable, so you don't need a lot of replications for some types of neurological experiments. When you're studying weaker phenomena, and especially when you're studying them between subjects, then you need a lot more subjects than people have been running. And that is one of the things that are happening in psychology now: the statistical power of experiments is increasing rapidly.
SPEAKER_00
01:09:10 - 01:09:16
And does the between-subject problem go away as the number of subjects approaches infinity?
SPEAKER_01
01:09:16 - 01:09:43
Well, I mean, you know, "goes to infinity" is exaggerated. But the standard number of subjects for an experiment in psychology was 30 or 40, and for a weak effect, that's simply not enough. You may need a couple of hundred. That's the order of magnitude.
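Those orders of magnitude can be checked with a back-of-the-envelope power calculation under the normal approximation (a sketch; the function names, the assumed effect size d, and the within-subject correlation rho are illustrative assumptions):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_between(n_per_group, d):
    # two-sample (between-subject) test, two-sided alpha = .05
    return norm_cdf(d * math.sqrt(n_per_group / 2.0) - 1.96)

def power_within(n, d, rho):
    # paired (within-subject) design: correlated measures shrink the error
    # term, inflating the effective effect size d_z
    d_z = d / math.sqrt(2.0 * (1.0 - rho))
    return norm_cdf(d_z * math.sqrt(n) - 1.96)
```

For a weak effect (d = 0.2), 40 people per group gives well under 20% power between subjects, while a few hundred per group approaches the conventional 80%; the same 40 people in a within-subject design with moderately correlated measures already do noticeably better, which is Kahneman's contrast in miniature.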
SPEAKER_00
01:09:45 - 01:10:03
What are the major disagreements in theories and effects that you've observed throughout your career, that still stand today? You've worked in several fields, but what still stands out as a major disagreement in your mind?
SPEAKER_01
01:10:03 - 01:10:19
I've had one extreme experience of, you know, controversy, with somebody who really doesn't like the work that Amos Tversky and I did, and he's been after us for 30 years. Or more, at least.
SPEAKER_00
01:10:19 - 01:10:21
Do you want to talk about it?
SPEAKER_01
01:10:21 - 01:10:38
Well, I mean, his name is Gerd Gigerenzer. He's a well-known German psychologist. And that's the one controversy which has been unpleasant, and no, I don't particularly want to talk about it.
SPEAKER_00
01:10:38 - 01:11:06
But are there open questions, even in your own mind? Every once in a while — you know, we talked about semi-autonomous vehicles — in my own mind, I see what the data says, but I'm also constantly torn. Do you have things where your studies have found something, but you're also intellectually torn about what it means? Maybe disagreements within your own mind about particular things?
SPEAKER_01
01:11:06 - 01:11:49
One of the things that is interesting is how difficult it is for people to change their minds. Essentially, once they are committed, people just don't change their minds about anything that matters. And that is surprisingly true about scientists. So the controversy that I described has been going on for like 30 years, and it's never going to be resolved. You build a system, and you live within that system, and other systems of ideas look foreign to you. And there is very little contact and very little mutual influence. That happens a fair amount.
SPEAKER_00
01:11:53 - 01:12:05
Is there any advice or message on that — thinking about science, thinking about politics, thinking about things that have an impact on this world — how can we change our minds?
SPEAKER_01
01:12:05 - 01:13:05
I think that, I mean, on things that matter, which are political or religious, people just don't change their minds, by and large, and there is very little that you can do about it. What does happen is that leaders change their minds. So, for example, the American public doesn't really believe in climate change, doesn't take it very seriously. But if some religious leaders decided this is a major threat to humanity, that would have a big effect. So we have the opinions that we have not because we know why we have them, but because we trust some people and we don't trust other people. And so it's much less about evidence than it is about stories.
SPEAKER_00
01:13:06 - 01:13:41
So one way to change your mind isn't at the individual level — it's that the leaders of the communities you look up to change the stories, and therefore your mind changes with them. So there's a guy named Alan Turing who came up with the Turing test. What do you think is a good test of intelligence? Perhaps we're drifting into a topic that we're maybe philosophizing about, but what do you think is a good test of intelligence for an artificial intelligence system?
SPEAKER_01
01:13:41 - 01:14:47
Well, the standard definition of, you know, artificial general intelligence is that it can do anything that people can do, and it can do it better. Yes. What we are seeing is that in many domains you have domain-specific, you know, devices or programs or software, and they beat people easily in a specified way. But we are very far from general-purpose intelligence. In machine learning, people are approaching something more general — I mean, AlphaZero was much more general than AlphaGo — but it's still extraordinarily narrow and specific in what it can do. So we're quite far from something that can, in every domain, think like a human, except better.
SPEAKER_00
01:14:48 - 01:15:12
One aspect of the Turing test that has been criticized is that natural language conversation is too simplistic — it's easy to, quote unquote, pass under constrained specifications. What aspect of conversation would impress you if you heard it? Is it humor? Is it... What would impress the heck out of you if you saw it in conversation?
SPEAKER_01
01:15:12 - 01:15:51
Yeah, I mean, certainly humor would be impressive — more impressive than just factual conversation, which I think is easy. And allusions would be interesting, and metaphors would be interesting — I mean, new metaphors, not practiced metaphors. So there is a lot that, you know, would be sort of impressive, that is completely natural in conversation but that you really wouldn't expect.
SPEAKER_00
01:15:51 - 01:16:01
Does the possibility of creating a human-level intelligence, or a superhuman-level intelligence system, excite you, scare you?
SPEAKER_01
01:16:01 - 01:16:33
Well, I mean, how does it make me feel? I find the whole thing fascinating, absolutely fascinating. And exciting, I think. It's also terrifying, you know. But I'm not going to be around to see it. And so I'm curious about what is happening now, but I also know that predictions about it are silly. We really have no idea what it will look like 30 years from now. No idea.
SPEAKER_00
01:16:35 - 01:17:04
Speaking of the silly bordering on the profound, let me ask the question of: in your view, what is the meaning of it all? The meaning of life, for these descendants of great apes that we are. What drives us, as a civilization, as human beings, as the force behind everything that you've observed and studied? Is there any answer, or is it all just a beautiful mess?
SPEAKER_01
01:17:07 - 01:17:16
There is no answer that I can understand. And I'm not actively looking for one.
SPEAKER_00
01:17:16 - 01:17:19
Do you think an answer exists?
SPEAKER_01
01:17:19 - 01:17:58
No. There is no answer that we can understand. And I'm not qualified to speak about what we cannot understand. But I know that we cannot understand reality. I mean, there are a lot of things that we can do — gravity waves, you know, that's a big moment for humanity. And when you imagine that ape, you know, being able to go back to the Big Bang — that's... But the "why?" Yeah, the "why?" That's bigger than us. The "why" is hopeless, really.
SPEAKER_00
01:17:58 - 01:18:48
Danny, thank you so much. It was an honor. Thank you for speaking today. Thank you. Thanks for listening to this conversation, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get $10, and $10 will go to FIRST, a STEM education nonprofit that inspires hundreds of thousands of young minds to become future leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter. And now, let me leave you with some words of wisdom from Daniel Kahneman: "Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed." Thank you for listening, and hope to see you next time.