🎙 The Elephant in the Brain with Robin Hanson

David speaks with Robin Hanson, an associate professor of economics at George Mason University and co-author of The Elephant in the Brain.

They talked about:

  • The role of self-deception in human behaviour and the thesis behind the book he co-authored, The Elephant in the Brain.
  • The potential consequences and risks associated with AI, and how human minds might become more like artificial minds.
  • What's really out there in space and what the future of space exploration might look like, including ideas like the great filter and grabby aliens.
  • The concept of the sacred in human societies and how it shapes our ideas and decision-making.

🎙 Listen in your favourite podcast player

The Knowledge with David Elikwu - Podcast App Links - Plink

📹 Watch on YouTube

👤 Connect with Robin:

Twitter: @robinhanson

Book: The Elephant in the Brain | https://amzn.to/3m5z7yR

The Age of Em | https://amzn.to/3ZVIA9X

📄 Show notes:

0:00 | Intro

3:24 | What Robinโ€™s working on

4:22 | Self-deception and its manifestations

5:48 | Thesis of The Elephant in the Brain

8:16 | Changing our thinking about thinking

10:10 | Deluding ourselves unintentionally

11:59 | Hidden desires in social and economic policy

15:05 | Balancing good things and enjoyment

15:50 | Competing internal and external forces

21:24 | Experts, elites, and the masses

23:37 | Shedding light on self-deception

33:10 | Stratification in hiring and IQ bias

37:41 | Lack of agency in decision-making

40:18 | AI bias and engineering constraints

44:03 | Measuring the distance to artificial intelligence

44:59 | Managing the distance to AI

50:47 | Humans as a general intelligence

53:14 | AI and the potential for grabby aliens

58:28 | Artificial vs biological bodies

1:02:00 | Substrate-independent humans vs AI

1:05:16 | The great filter and grabby aliens

1:13:34 | Future of nation-states before AI minds

1:14:59 | Quiet wars and evolutionary survival

1:18:10 | The idea of the sacred

1:28:27 | Defending and promoting the sacred

1:30:33 | Making and breaking sacred ideas

1:33:18 | The Paradox of Bosses

1:35:11 | Why Robin often works alone

🗣 Mentioned in the show:

The Elephant in the Brain | https://amzn.to/3m5z7yR

The Age of Em | https://amzn.to/3ZVIA9X

Grabby Aliens | https://grabbyaliens.com/

Sadiq Khan | https://en.wikipedia.org/wiki/Sadiq_Khan

Tax Career Agents | https://www.overcomingbias.com/p/testing-tax-career-agentshtml

Garett Jones | https://en.wikipedia.org/wiki/Garett_Jones

Fermi's famous question | https://en.wikipedia.org/wiki/Fermi_paradox

The Great Filter | https://en.wikipedia.org/wiki/Great_Filter

The End of History and the Last Man | https://amzn.to/3Kxr86X

Émile Durkheim | https://en.wikipedia.org/wiki/Émile_Durkheim

Agnes Callard | https://en.wikipedia.org/wiki/Agnes_Callard


๐Ÿ‘‡๐Ÿพ
Full episode transcript below

๐Ÿ‘จ๐Ÿพโ€๐Ÿ’ป About David Elikwu:

David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.

๐Ÿฃ Twitter: @Delikwu / @itstheknowledge

๐ŸŒ Website: https://www.davidelikwu.com

๐Ÿ“ฝ๏ธ Youtube: https://www.youtube.com/davidelikwu

📸 Instagram: https://www.instagram.com/delikwu/

🕺 TikTok: https://www.tiktok.com/@delikwu

๐ŸŽ™๏ธ Podcast: http://plnk.to/theknowledge

📖 Ebook: https://delikwu.gumroad.com/l/manual

My Online Course

๐Ÿ–ฅ๏ธ Career Hyperdrive: https://maven.com/theknowledge/career-hyperdrive

Career Hyperdrive is a live, cohort-based course that helps people find their competitive advantage, gain clarity around their goals and build a future-proof set of mental frameworks so they can live an extraordinary life doing work they love.

The Knowledge

📩 Newsletter: https://theknowledge.io

The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to help you think deeper and work smarter.

My Favorite Tools

๐ŸŽž๏ธ Descript: https://bit.ly/descript-de

📨 ConvertKit: https://bit.ly/convertkit-de

🔰 NordVPN: https://bit.ly/nordvpn-de

💹 Nutmeg: http://bit.ly/nutmegde

🎧 Audible: https://bit.ly/audiblede

📜 Full transcript:

Robin Hanson: When you buy an AI or when you listen to an AI, the issues are pretty similar to when you listen to humans. If you decide that the Google AI is the one you're gonna listen to, just because Google is the most prestigious, then the Google people are mainly working to convince you they're prestigious and they're not working to make sure the advice their AI gives you is actually good advice because that's not what you're picking them on. So in that case, the advice may well be biased not just because they were basing it on prior data or humans, but because you didn't make them make it unbiased.

David Elikwu: Hey, I'm David Elikwu. And this is The Knowledge. A podcast for anyone looking to think deeper and work smarter. In every episode I speak with makers, thinkers, and innovators to help you get more out of life.

This week I'm speaking with Robin Hanson. Robin is an associate professor of economics at George Mason University, and he co-authored The Elephant in the Brain.

Robin spends a lot of time thinking incredibly deeply about many of the facets of human nature. So you're gonna hear us talking about the role of self-deception in human behavior. We also talked about some of the possible consequences and risks associated with artificial intelligence, and the extent to which, as humans develop, we might become more like artificial minds. And then, thinking about the future, we also talked about what's really out there in space and what the future of space exploration might look like, and unpacked really interesting ideas like the great filter and grabby aliens.

Finally, we talked about this concept that Robin refers to quite often, the sacred: things that are sacred to us as humans, and how sacred things influence the way that we act and interact with each other.

This was a super interesting conversation for me. Robin is supremely intelligent, and I think you'll hear towards the end that he pushes me to push back more on some of his ideas, because there is an extent to which I think, sometimes, if I don't have skin in the game, it is preferable to listen and to hear the opinions of the people that are actually out there doing the work, and use what I learn from them to shape and reframe the opinions that I have.

So you'll hear my preconceptions being challenged, and I'm sure many of the ideas that you come into this conversation with might be challenged as well. You can get the show notes, the transcript, and read my newsletter at theknowledge.io.

Every week, I share some of the best tools, ideas, and frameworks that I come across from business psychology, philosophy and productivity. So if you want the best that I have to share, you can get that in the newsletter at theknowledge.io.

You can find Robin online on Twitter @robinhanson, and I'll leave links to all his books below.

If you love this episode, please do share it with a friend, and don't forget to leave a review wherever you listen to podcasts because it helps us tremendously to reach other listeners just like you.

You mentioned you were thinking about something this morning. Was that specifically about Grabby Aliens or what was that?

Robin Hanson: It was just about institution design, which is one of my focuses, and puzzling yet again over why the world doesn't seem very interested. And that's one of the things that pushed me to think about the sacred and about experts versus elites. And so I've just, in the last few years, just repeatedly tried to come at that question from different directions to try to ask, why can't I get people interested in these things?

David Elikwu: Sure. Does this interface with some of what you talked about in The Elephant in the Brain?

Robin Hanson: I think so. That is, that was one of my rationales for The Elephant in the Brain. So I basically started out as an engineer, went into physics and did philosophy of science in school, and then did computer research for nine years, and eventually went into social science and realized that there were all these big ways social institutions could change, and that people are just not very interested in that.

And so, The Elephant in the Brain is one of my attempts to make sense of that, to say that with respect to many of these familiar institutions, one explanation for why people aren't interested in reforms is that the reforms are targeting the wrong thing. People have other reasons, that they're not admitting to, for why they go to the doctor, go to school, et cetera.

And then if your reforms are targeting other things besides what they really care about, then that's one explanation for why many of our reform proposals seem uninteresting.

David Elikwu: Yeah. And you talked in the book about this idea of self-deception and the many ways it can manifest in our lives, from when we say alcohol is illegal, but if you see people with a bottle in a brown paper bag outside, no one arrests them, even though you probably know it's alcohol. That's an example of the way that we can kind of deceive ourselves. And very often we might say, this is one thing that we want, or this is an ideal that we believe in, and then we actually behave in a way that's inconsistent with that.

Robin Hanson: And people are only modestly interested in that fact, which is a big puzzle. So, there are hundreds of thousands of us social scientists out there trying to understand the world. And if you realize that we had this really huge blind spot because we were just looking away from something that was really central, you might think everybody would go, oh wow, that's why we haven't been able to understand, that's what's been in the way. We were just looking away from this big important thing. And I think we persuade most readers of The Elephant in the Brain that in fact we have shown that there are many of these hidden motives and they are important for big areas of our lives. And yet people go, eh, okay, thank you very much. And they want to move on to other things, and they don't want to think about it that much more, and they don't wanna pursue it more. They don't wanna, like, apply that same concept to more areas of life to understand more things. They're embarrassed, find it a little tawdry, and would rather go somewhere else.

David Elikwu: Sure. Maybe if we took a step back, could you explain the hypothesis? Or actually, maybe taking a step back, you've kind of talked around it a bit, but what was the thesis going into this book? I'm interested to know if you already had the conclusions in mind before you started writing it, or was there a process of some discovery as you were going through?

Robin Hanson: So the time order is that I was a social scientist learning about standard social science and coming across puzzles of things that just didn't make as much sense. So for example, I did a two year health policy postdoc, and I came into that with a strong background in economic theory and I assumed that theorists were right about the usual obvious motivation for medicine, which is, duh, you get sick and they could help you get well and they have some expertise and it's expensive, and pretty much all economic theorists just assume that and think it's pretty obvious. And then you go study the details of health policy and you find there's just all these weird things that don't make sense from that point of view.

That was the first place where I said, well, what if our motive behind medicine was something other than what we say? I postulated that our motive there is to show that we care about each other. And that's really what's going on. And I found that you could explain a lot of things with that. And then that let me think about that same hypothesis for other things. And so, then, over the years I had this open mind to say, well, for other puzzles we come across, like in education, maybe what's going on there is just different than what we say. And so over the years, I collected many such examples of ways you could understand other things.

And then my co-author approached me and said, let's write a book. And he was a very good co-author. And so that's what made the book happen. The book is a sort of summary of a lifetime of accumulation, of realizing that you can understand a lot of puzzles if only you will assume that people are wrong about why they're doing things. Just that one thing. You don't need any sort of complicated game theory or subtler behavioral analysis or subtler technology. People are stretching for all sorts of different ways to explain these things. But a very simple way is: you're just doing it for a different reason than you say.

David Elikwu: Right. But I've heard you mention in the past that, okay, you say this book is not strictly a psychology book and definitely not a self-help book. And there's this idea that in some way, evolution has designed or directed us to hide some of these things. And there might be various reasons for that, in which it might be either a self-protection mechanism or a way in which to, well, actually, I dunno. Maybe you can frame some suggestion.

But I'm interested in what you think about, I guess, knowing this now. First of all, does it change how we actually think about the way that we think, and should it? Is there anything that we should change, having this knowledge?

Robin Hanson: Well, first of all, I have to admit, as you indicated, we're built not to know this stuff. And so if your designer is right about your interests, then it's not in your interest to know this stuff, so you're better off looking the other way and forgetting, which you're actually pretty capable of doing.

But there are plausibly some people in our world for whom this is their topic, this is their area of expertise, and they should learn this stuff even if evolution thought otherwise. That might be, say, managers or salespeople, who especially need to understand the motivations of other people. It might be nerds, who are just especially unskilled otherwise. And it might be social scientists and policy makers, who say their job is to figure out how to rearrange the world so that we're all better off; they can't do that if they misunderstand the world in a deep, big way. So that's the claim: at least those people should be paying attention to this. But maybe they aren't. Maybe they don't want to. So basically, it could also be true that for most researchers, our motives are also not what we say.

We put on this pretense of trying to help the world get rearranged, to find better policies that will promote a great many things. And in fact, we're not doing that. We're basically trying to be impressive, trying to let people affiliate with other people who have high-status impressiveness, and we don't actually care what's true about the world, and therefore we often want to pander to preconceptions and delusions if that's what makes us look impressive and gets people to affiliate with us and allows people to achieve their ends through us.

David Elikwu: Yeah, I definitely agree. And I was just thinking, as you were saying that, on one level there is how we think about ourselves and the way that we act within the world, and the extent to which we might be deluding ourselves unintentionally or intentionally. But then I also think about the dynamics between people, maybe in two different scenarios.

One, just socially in general, where very frequently it actually might be beneficial to lie, or to intentionally mask what we truly think, or mask the hidden reasons behind why we do something. And you see so many acts of this in daily life, a lot of deference where you do something, you know, I invite a friend to dinner because you invited me to dinner, and not because I actually want you to come to my house or something like that.

And there are a lot of social dynamics in which we do things where it is intentionally deceptive, but then also sometimes maybe unintentionally. And then on the other side, I think in terms of, as you have mentioned in the past, how we interact with institutions and with power, and this idea that if we have this duplicity in our own minds about how we think about things, but then we're also maybe electing people to act on our behalf who have their own internal duplicity, then you kind of have a principal-agent problem, which is inherent before they have even done anything, over and above their own incentives.

Robin Hanson: Well, often we have people who work as agents for us, like politicians, and then they act on our behalf, and then we often complain about what happens as a result. And sometimes the complaint is because these people we have hired to represent us have misled us and misrepresented us and served their own ends.

But often it's because that's what we wanted, we wanted them to take the fall and them to be the ones who serve our actual interests, but giving us cover to pretend otherwise. And that's one of the things an indirect servant or agent can do for you is they can let you pretend otherwise.

David Elikwu: Sure. Can you think of an example in, I guess maybe social policy or economic policy where there is something that people say that they want when really they want something different?

Robin Hanson: Well, for example, we have many sacred things in politics that we give lip service to, that we say are terribly important, such as nature. And then what we might do is elect politicians who say nothing is more important than nature, and that they will do everything possible to protect and promote nature. And that's what we want them to say on our behalf. That's the image we would like to project about ourselves. But in fact, we don't actually want them to do everything possible for nature; that would come at our expense substantially. And so they give this lip service, but then they don't. They in fact prioritize prosperity and transportation and defense and retirement and all these other things we want.

And then we look at what happens in nature and we say, but we said you were supposed to protect nature, and look, they're not doing that. These people must be in the pay of some other nefarious party who is taking them away from what we said to do and what they promised to do for us, right? Look, it's their fault.

That is, if anyone will point this out; but usually no one will point it out. And so usually we get away with these sorts of pretenses, right? I mean, another example might be, say, prostitution or drugs, right? Where we all want to be against them in principle, right? We don't wanna be the sort of person who would approve of such things, but then we don't wanna be that strict about actually preventing them. And so we have many rules on the books that are only nominally or hardly enforced, but still, it's important to us that those rules be there so that we can say we're against things. But we will let our enforcement arms be distracted by other things, and we're pretty okay with that.

David Elikwu: Yeah, that makes a lot of sense to me. I was just thinking of taxation as an example where everyone wants something to be done about tax, but if you are the one paying it, you will always hate it. And everyone is very quick to say actually, ah, someone else should be paying, there should be some other form of solution here that doesn't involve the burden being on my shoulders.

Robin Hanson: Right. So if you ask about different categories of spending, mostly people want those categories of spending to rise. But if you look at the overall budget and connect to the taxes, they don't necessarily want the overall budget to change, and they certainly don't want their taxes to change.

David Elikwu: Yeah. And I was also thinking about the one that you mentioned or the environment is also one that has been on my mind recently with, so I live in London and Sadiq Khan, who is the Mayor for London, has over the last few years implemented a lot of changes, which are around like, congestion charges and essentially forms of taxation on driving at peak times or driving through the city in different ways. And everyone hates it, but simultaneously everyone also wants a greener world where we can avoid climate change and we can avoid all the nefarious knock on effects of the earth heating up. And we complain that, oh, this is the warmest winter on record and all of these things, but simultaneously if it means extra traffic or it means we can't drive when we want to drive, or it means we have to buy a new electric vehicle or a hybrid vehicle, everyone's upset. And so no one actually, we want the good things so long as they don't interfere with our enjoyment.

Robin Hanson: I mean, in a sense, we economists have long recommended congestion pricing for traffic, not to protect the environment, but just to protect drivers from wasting their time in traffic. And so we could also see this as an example of slipping through the efficient thing, like congestion pricing, under the excuse of protecting the environment. That's at least an excuse more people are willing to give lip service to, you see.

But I might say, well, no, the real benefit here is you're saving all these people all this wasted time sitting in traffic.

David Elikwu: Yeah. We've just gone through this period in the last few years where we had a global pandemic, and we saw lots of different countries reacting in very different ways. And I'm interested to know what you see as some of the competing internal and external forces there, where you have, at various points, lots of different countries seeming to act in their interests or not acting in their interests. There's people that want to take the vaccine, there's people that don't, and it seemed like organized chaos for the most part. But in the end we were able to produce a vaccine. But also what I'm interested in is just seeing how, for example, China is a very interesting example of how at different points in time they have reacted to covid. Where at the beginning, you see people being locked inside their homes and all kinds of stuff. But then suddenly, not too long ago, they just said, okay, actually don't worry, everyone come back outside, even though right up until then they had some very strict measures.

Robin Hanson: So, the variation around the world is interesting and useful for thinking about policy, but I actually think the lack of variation is the more interesting thing. People don't realize how the world has changed over the last half century in a big, crucial way.

In the past, the world was full of different nations and empires even that competed and within each nation or empire people and elites had their main identity as within that empire or that nation. And so each nation or empire was saying in part, well, what should we do so that our nation or empire will stay competitive or perhaps even get an advantage over the rest? And they were okay with being substantially different if that would give them a potential advantage.

And so nations and empires competed in the past, and policy was often about what should we do here, different from what other people are doing? And how does that reflect our different identity or our strategy for doing better? And in the last half century, what we've instead had is a merging of communities in the world, especially a merging of the elites in the world, such that they are now more of a world elite community.

And each member of that community cares more about their identity and reputation within that community than they do, necessarily, about their nation or people locally. And this has caused an enormous convergence of policy around the world in a great many areas, including in the pandemic.

So if you remember, at the very beginning of the pandemic, there were the usual public health experts around the world who had their usual recommendations, which included that masks weren't that valuable and travel restrictions weren't that valuable. And then as soon as it became clear this was a serious thing, elites around the world started talking to each other about what we were gonna do about this. And within a few weeks they came to a different conclusion: they all decided, no, masks and travel restrictions were a good idea, and that's what we were all going to do. And then basically, most of the world did it pretty similarly, and the public health experts just turned on a dime and said, oh yeah, you're right, because they were not gonna try to contradict the collective elites of the world. And this same dynamic applies in a great many other regulatory areas: nuclear power, medical experiments, organ donations, airline regulation. A whole range of areas of regulation and policy.

In fact, most of the world does it pretty similarly, and that's the new distinctive feature of our world compared to the past: we now have this world where, say, the nuclear regulators in any one nation care more about being respected by the regulators elsewhere in the world, when they go to their international regulation conferences, than they do about what people in that particular nation think. And that happened in Covid as well.

So there was some variation around the world, but that's the less surprising thing than the strong convergence around the world. In fact, the world did it pretty much the same, even though we don't actually have good evidence that that was the right thing to do.

Once vaccines showed up, everybody agreed on that. Of course, there wasn't much disagreement about vaccines, really, among the elites in the world, and so most of the world happily adopted vaccines. You know, there are some places that at times try to make a thing about their being different, and then often the rest of the world is trying to pressure them.

For example, the only country in the world that allows organ sales at the moment is Iran. And then medical ethics people all around the world have these conferences where they say, Iran, they're doing organ sales, that's terrible, how are we gonna get them to stop? And Sweden, during the early pandemic, had such a different sort of policy on lockdowns, and the big conversation in the rest of Europe and the world was how are we gonna get Sweden to stop and do it the right way? How dare they do it different?

And that's more the main dynamic in the world today: a large consensus, then a few people deviating, and then the attempt to pressure those deviants to do the right thing and do it like everybody else. So that says we're in this new world of an integrated world community of elites, who mainly have some convergence and reputation within that community, such that whatever they decide is what most everybody's gonna do.

It's not because of a world government, there is no central ruler, but still we're all gonna be doing it the same way. And that has important implications for how the future's gonna play out.

David Elikwu: Yeah, that's a really good point. I'm interested to know what you think about how The Elephant in the Brain works on an institutional level, cause even just from what you were saying, I was also thinking about nuclear, where for large groups of people, perhaps it might seem like common sense that this is something we should be investing in and should have been investing in for a very long time. And there were lots of countries where, just like you say, they laid the groundwork to do this thing and then undid it, maybe because other countries undid it. And so now in large parts of Europe, we suddenly have this massive energy crisis where we ended up being entirely reliant on Russia and certain global powers for energy because, well, I don't know why. There's no clear reason why you already had energy plants and then you took them down. It doesn't make much sense to me.

Robin Hanson: I think we can roughly see the world as divided into experts, elites, and masses. And elites often want to give you the impression that they are relying heavily on expertise to make their decisions, and that's a rationale for why we should accept our elites and the decisions they make. But actually, elites are more often trying to gain prestige among the elites and gain the allegiance of the masses by seeming to be elite. And that requires that they often pander to preconceptions or wider senses of what the right thing is, and they don't necessarily listen to the experts so much.

With respect to nuclear power, there's long been this strong moral crusade that nuclear power was dirty and evil. And elites were wanting to be respected among other elites by accepting that and pushing for that, and often sort of being overconfident about how far, how fast they could push.

But that's what wins you praise in that world of elites, and in the world of masses who saw that as, yes, the right thing. So given enough ordinary people who are scared of nuclear and see it as dirty and evil, then elites go along with that too, and claim that it's backed by expert advice, even if it's not. And then they do what it takes to become respected in that world. So for a long time, you were more respected as an elite when you suppressed nuclear power and got rid of it. And you were of course taking a chance with respect to how the world would change in the future. But that was a collective chance they were all taking; individually, they were each getting praise for the changes they were making.

David Elikwu: Sure, that makes sense. How do we snap out of all of this then? It seems quite common sense on the surface that, okay, we're doing this because other people are doing it, or we're making some fundamental mistake of some kind. I guess there's two levels to it. One is on an individual level, following a lot of what you've discussed in your book about the elephant in the brain and how we deceive ourselves. But how do we shed light on this on a broader scale?

Robin Hanson: So I've spent a lot of my life thinking about how we can do institutions better, and I've got what I think are a lot of promising solutions for institutional reform. The biggest challenge is to make anybody care. So, I think that in our current world, basically, masses recognize elites, who oversee experts, who choose details. That's roughly how the world's organized. And because of that, we're not actually very interested in institutional reforms proposed by experts. They're the wrong people. What we want is reform proposals that are made by elites, and those we will consider: the more elite they are and the more prestigious they are, and the more they seem to agree, the more we might go for that.

So, we don't actually let the experts make that many big decisions about how we are organized; we're mostly waiting until elites are in agreement, in their elite way, on key policy changes. And that's the thing we're most vulnerable to here, that structure. As long as we maintain that, we're going to be stuck in that way.

I mean, let's take a more concrete example: hiring a doctor. Basically, you hardly know which doctor to hire. You don't know who's good, you don't know if the things they're recommending are good. You'll find out later if you live or die or how long you're sick. But the way you actually hire doctors today is mostly based on their prestige.

What prestigious medical organization they're part of, or what school they went to. And then you mostly don't pay them based on whether they do well for you. You just pay them per hour or per session and you trust that their prestige is enough to assure the quality. And we do that in law and many other areas.

What we really crave is this association with a prestigious doctor who we trust, and they kind of at least respect us. And that trusting, respectful relationship kind of requires that we're not being too skeptical about them and too careful about how we pick them and how we pay them.

So, I mean, alternatives to this trusting, prestige-based relationship might be if we, say, collected careful track records on each doctor, about which patients they had and what happened to them, and we could do statistical analysis to sort of evaluate the quality of each doctor that way. Or we could do direct financial incentives, where basically we set it up such that if we do well, they do well. If we die, they lose a lot of money. If we get disabled, they lose money. So we basically combine health insurance with life insurance and disability insurance, such that they would just have strong financial incentives.
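To make the incentive idea concrete, here is a minimal sketch in Python of the kind of outcome-contingent contract Robin describes, where a doctor's pay is bundled with life and disability payouts. Every dollar amount and the function name are hypothetical illustrations, not figures from the conversation.

```python
# Hypothetical sketch: bundle a doctor's fee with life and disability
# payouts, so the doctor profits only when the patient stays well.
# Every dollar amount below is made up for illustration.

def doctor_payout(base_fee: float, outcome: str) -> float:
    """Doctor's net payment for one patient-year, given the outcome."""
    penalties = {
        "healthy": 0.0,        # doctor keeps the full fee
        "disabled": 50_000.0,  # doctor owes a disability payout
        "died": 200_000.0,     # doctor owes a life-insurance payout
    }
    return base_fee - penalties[outcome]

# A doctor paid a flat $10k per patient-year now has money on the line:
for outcome in ("healthy", "disabled", "died"):
    print(outcome, doctor_payout(10_000.0, outcome))
```

Under a contract shaped like this, the doctor's expected income rises with the patient's expected health, which is the alignment Robin is pointing at.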

And this is something we could do with, say, lawyers as well. We could collect track records on lawyers, which cases they win or lose. We could give them more direct financial incentives, like contingency contracts. But in both the cases of doctors and lawyers, what we seem to want is this just trusting relationship where we look at their prestige, the school they went to, or the organization they're with, and go, wow, everybody says that's great. And then we are lucky to be associated with them, and we pay them a lot of money, but we pay it per hour or something. And then we just trust that they will do well by us. And that's what we seem to crave in many of these relationships, including with our pundits and politicians.

We don't give them very strong financial incentives or other sorts of things. So, we will basically continue to get bad service from these prestigious elites if we continue to just pay them to be prestigious. Because the things it takes to be prestigious just aren't the same as the things that give us good service. They're just two different things. And so when we pay them to be prestigious, they work really hard to go to the best schools and work very hard to be at the top firms and to impress people with their carriage and their language and their wit, how well they dress and all the contacts they have and all the things that elites do to show you they're elites.

They are working very hard to do all those things. And that's where all their energies are going. And so their energies are not going so much to actually make you healthy or actually win your legal case or all the other things we're trying to get out of elites. So that in my mind is the fundamental problem.

We're in these relationships where typically we trust an elite rather than an expert, even with, say, CEOs of firms. We often just wanna pick them as the tallest, handsomest person who went to a top school, who was associated with the other firms and the boards of directors, as opposed to, like, paying them for doing the job well, which is more like stock options, sorts of payments for, hey, did you make the firm do better?

So I think there are ways that we could instead have a more distrustful relationship with these people, where we could collect more track records and look more at the track records, and we could have more direct financial contracts with them, but we don't do that so much. So let me tell you my current favorite idea for an experiment in that direction that we might make work. It's called tax career agents. At the moment, when you choose what school to go to or what career to go into, what first job to take, you go on the advice of family, maybe what you read in the newspapers, maybe a coach or somebody else in your world.

But these people don't have very strong incentives to be expert and actually helpful to you. But you mostly go on either direct personal connections or the prestige of various people who might give you this advice. So I want to put someone in your life who has a direct incentive to make your life go well.

The idea is that in many industries there are agents; in music or athletics or acting, people have agents. And these agents basically take 15% of their income on average to advise and promote them. And in these industries, it's essential to have an agent, and people don't too much begrudge that 15% of their income being paid for these agents who do this advice and promotion.

And those people can then believe that they have an expert who has their interests in mind, cause if they do well, then this agent does well, and this agent can tell them which jobs to take, which skills to collect, things like that. But in most of the rest of the economy we don't have agents, cause we are mostly thinking, I don't want to give up 15% of my income for somebody who's gonna advise me; that just sounds like too much. So the idea here is that in fact all of us already have an agent who on average collects 22% of our income, and this agent is doing a bad job. And so the key proposal is just to transfer this agent role to somebody else who will do a better job with it. And you might think, well, who is this agent already?

And that's the government, taking 22% of your income on average for taxes. So in principle, since the government already takes a big chunk of your income, they already should be wanting to advise and promote you, right? They should be wanting to tell you what school to go to or what job to take or what training to get, but they don't.

So my proposal is to transfer that role to somebody else. At the moment, when the government spends more than they take in, they borrow money, and borrowing money is a way of taking future tax revenue and converting it into current tax revenue. Cause later on, you're gonna have to pay off the loans that you've got.

But now you get the money upfront. Well, we're gonna do a substitute for that. Instead of borrowing money, we're gonna create a tax career agent for, say, you, by auctioning off the right to get your future tax revenue. So every year you write a check to the government for so much money; that check could be diverted to somebody else, and somebody else every year could get that check.

And we could have an auction where we say, who wants to buy that stream of payments? And whoever wins that auction pays a big amount of money up front, and that's the money that goes to the government. And they get that money now, instead of your money in the future. But now whoever wins this auction is your tax career agent. They now have your interests in mind. They paid maybe half a million dollars for the right to get all your future tax revenue, and now they're gonna get a percentage of your income in the future. And now they want to advise and promote you. And instead of being based on prestige, this is based on their interests being aligned with yours.

They want this. That is, not only do they want this: among all the people who could have won this auction, they were the one willing to pay the most. That is, they had the strongest belief and confidence that they could help you have a better life, at least in terms of money. And this is a proposal I have, and the key thing to notice about it is that nobody loses relative to the status quo, right?

The government? They get their money up front instead of later, like they do when they borrow. They're fine. You? You're just as well off as you are now. You have this agent who you could ignore if you want to; they have no powers over you other than to advise and promote you. And then this agent, they're okay with it, cause they bid in this auction to be the agent. They chose to be the agent, so on average they've gotta expect to benefit. So nobody loses. This is an example of what I hope to do elsewhere in our lives, which is to replace bad prestigious advice with incentivized advisors, who got into that role by paying the most to be there and who have direct financial incentives to do well by you.

And so we could do that for lawyers and for doctors and for even our politicians.
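Here is a minimal sketch, in Python, of the auction mechanics Robin lays out: bidders estimate the present value of one person's future tax checks, the high bidder pays the government that amount upfront, and then collects the annual checks. The 22% tax rate comes from the conversation; every other number, and the bidder names, are made-up assumptions.

```python
# Sketch of the tax career agent auction. The winning bidder pays the
# government the present value of your future tax checks, then receives
# those checks each year, so it profits if its advice raises your income.

def present_value(income: float, growth: float, tax_rate: float,
                  years: int, discount: float) -> float:
    """Discounted value of the stream of annual tax checks."""
    pv = 0.0
    for t in range(1, years + 1):
        pv += (income * tax_rate) / (1 + discount) ** t
        income *= 1 + growth
    return pv

# Bidders differ in how much income growth they think their advice adds.
bids = {
    "passive_buyer": present_value(50_000, 0.01, 0.22, 40, 0.05),
    "career_agency": present_value(50_000, 0.03, 0.22, 40, 0.05),
}
winner = max(bids, key=bids.get)
print(f"{winner} wins, paying the government ${bids[winner]:,.0f} upfront")
```

The bidder who most believes it can raise your income bids the most, which is Robin's point: the auction selects the party with the strongest confidence that it can help you earn more.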

David Elikwu: Wow, that's really interesting. That's a very interesting idea. I wonder what you think about, so I was just speaking with Garett Jones, who I think was a colleague of yours, and he talks a lot about the idea of IQ and IQ distribution. I'm interested in this idea that you have of tax career agents.

Do you think there's an extent to which it might create some form of stratification, where you get a separation of companies that would bid a lot more for maybe higher-IQ or higher-status people, and then maybe be less willing to make that bet on lower-skilled or lower-IQ people?

Robin Hanson: People vary; as Garett will tell you, humans vary. They aren't all the same. The pretense that they do not differ very much is often an obstacle to having effective policy in many areas, including having someone advise you about your career. So yes, often schools pretend that all their students are the same and therefore will get the same kind of training and the same kind of opportunities, in order to keep up the appearance that we all think everybody's the same. But tax career agents are incentivized to notice differences and to pay attention to them and act on them. And they're incentivized to notice the actual differences. Like, so many people are concerned and believe that there are these false beliefs about who's better in the world, and that people who are incorrectly believed to be better are often benefiting from these incorrect beliefs.

And so there's class differences or presumptions about who's better, even, say, by race. And the claim is that because these social systems, based on elites and conformity, et cetera, don't have very strong incentives, they can just go off the rails: if everybody believes the tall people are better, then tall people get better jobs and better grades, et cetera.

And that therefore we need to restrain those processes to prevent those beliefs from being enacted, because they could just be so wrong. And they could be quite wrong when the people making the choices are there mainly because of their prestige, not because they're making good judgments.

For example, the people at a school deciding who to admit as students there: if they're basically there based on their prestige and the perception of other people that they're prestigious people, they may not do a very good job of deciding who to admit to the school. They may just go along with everybody's prejudices about who's better.

But if you have parties who have a direct, strong financial incentive to figure out who's better and how, then I'm more going to believe that if they actually come to decide that somebody's better, then they are actually better. So notice it in sports, say, or music: sports and music agents do not believe all of their clients are equally skilled. Those worlds are very open about the idea that some athletes and some musicians are much better than others. And the whole point is to try to find the better ones and to help those better ones and give them much more resources and opportunities, cause that's how you make good athletes and good musicians: you search for the best ones, right?

So agents are not treating people as equal in these other areas, and tax career agents would not treat people as equals, and they would be incentivized to ask what actually makes the difference. Not what do people think makes a difference, or what do people say makes the difference. And so if they're looking at your IQ and using that to sort you, that would be because they've looked carefully and decided that IQ actually does help. Just like, again, athlete agents or musician agents: they are trying really hard to figure out which of these people are actually the better ones. And they aren't gonna just look at something superficial, like how tall you are. They're going to be trying to do as careful an assessment as they can. And so it's a long answer to say, yes, of course they will treat people different. That's what we want. But it's not all zero-sum, like some people win and some people lose. A big part of what tax career agents would do differently is figure out, well, for you, what kind of a life would go best for you? Not send everybody down the same path, but pay a lot of attention to your differences.

And if you can convince them that you don't like them and you'd rather they went away, then they're probably gonna sell this role to somebody else who you like better. That is, if you say, I like this person and I'm willing to listen to this person's advice, but not your advice, then they're gonna find a way to make a deal to transfer this asset to someone who you would listen to more, because that asset is worth more to that person.

David Elikwu: Fair. Okay. That makes a lot of sense. And I guess the benefit for the government is the receivables that they get. And just like you're saying, really, the stratification you might end up with between different types of people matches the very good analogy that you gave of musicians and their agents, where the best musicians in the world have the best agents in the world, which would be the analog of the biggest companies.

And then maybe, a musician that's up and coming might just have a local agent, which would be the same as maybe a smaller company that might have been willing to pay an equitable payment.

Robin Hanson: But it's still much better than not having any agent. So that's the status quo: almost everyone has no agent whatsoever, and they're going with family advice or their teacher or coach or something, or the sales pitches being made to them by schools or jobs. They don't have an agent there who's more experienced than them to help them make these choices.

But again, at the highest level here is the current situation we're in, is in many areas of our life, we go with prestige as our key way to decide who we listen to and who we associate with. And we don't ask the track records of those people, we don't give them very strong incentives. We just fundamentally rely on prestige.

And that goes wrong in a lot of areas. And my proposal is to swap those prestige-based mechanisms for more direct mechanisms. But the question is, can I convince people to do that? Because basically this is a hidden motives issue. I think people fundamentally really just crave this association with prestige, but they don't want to admit it that way. They'd rather say, oh, I think prestige is a good indicator of quality. And so they use this excuse in their mind: they believe that the most prestigious doctor is the best doctor, the most prestigious lawyer is the best lawyer, the most prestigious school is the best school for you. Once they believe that, then they don't believe they need another mechanism to induce effort and quality, because they think they've got it: prestige.

David Elikwu: Yeah. But also I think the additional part is not just how quickly you can convince them, but also whether you can convince them before it ends up being replaced, to some extent, by AI. Cause that's the other thing that's on my mind now, where I would love to know what you think. We were just talking about some of the biases in the way that we select, right?

And some of the biases and the misattribution that we make when we are trying to pick, okay, what should people do, how should people live, et cetera, et cetera. And how it might be a better solution for some companies to do that if they have the right financial incentive. But then also, it's very easy to imagine a world built on the AI that's already available right now.

People are already playing with the idea of how you could have AI therapists, and you could ask AI your questions, and AI could tell you what you should do with your career and what you should do with your life. But there is a potential risk in the extent to which we could bake a lot of the misconceptions that we have into AI.

And I'd love to know what you think about that: does AI reach a point where all of our negative biases and all of our mistakes get skimmed out, because the AI is so good that it can actually just think without the negative aspects that we put into it? Or are we still constrained by some of the engineering factors?

Robin Hanson: When you buy an AI or when you listen to an AI, the issues are pretty similar to when you listen to humans. If you decide that the Google AI is the one you're gonna listen to, just because Google is the most prestigious, then the Google people are mainly working to convince you they're prestigious, and they're not working to make sure the advice their AI gives you is actually good advice, because that's not what you're picking them on. So in that case, the advice may well be biased, not just because they were basing it on prior data or humans, but because you didn't make them make it unbiased. That is, the way that you choose your suppliers is a primary determinant of how biased they are. If you choose them on the basis of prestige, then they will be prestigious, and I can't assure you of much else other than that they will be prestigious. If you choose them on the basis of accuracy, and they're rewarded for accuracy, such that if they are more accurate then you're more likely to buy them, then that accuracy pressure is what would drive away the biases, because by being less biased, they would be more accurate. But there is the danger that people will choose the AI based on prestige, if everybody else is buying this AI, and then they'll buy this AI and then they will listen to its advice because it's the prestigious source, in the same way they do with doctors, lawyers, and politicians, just believing them on the basis of their prestige.

It's the same issue there with respect to the AI as with respect to the humans: what's the basis on which you're selecting them? That affects the basis of the advice they give to you.

Now, I actually think AI still has a long way to go. They're not about to take over everything.

So let me tell you my personal story. Back in 1984, I left my graduate program in Chicago, where I'd been studying physics and philosophy of science, and went out to Silicon Valley to try to work in AI, because I had read all these news articles in the previous few years talking about how AI was about to take over everything, and I had to get in there before it was too late, before all the jobs were gone and AI did everything. That was my last chance to sort of be part of making something happen. And of course, I was very wrong. But I've since learned that every decade or so, for a long time, we've had these bursts of concern about AI, where people say, look at this new demo, look how much more impressive this new demo is than all the demos we've had before. They're able to do something fundamentally new they couldn't do before. And they've said, could this be it? Are we about to have the machines take over all the jobs? Again, this isn't a new thing. Every decade for a long time, even going back centuries, people have had these bursts of concern, where people were really impressed by new machines and new automation and computers, et cetera, and said, wow, are these things about to take over all the jobs soon? And every time, we've been just really wrong.

And so the key lesson, I think, to draw is: we're just really bad at judging that distance between the latest impressive demo and what it would take to automate almost everything. And most likely, we're still a long way away from that, because even these most recent impressive machines still can't do most everything. Most of the jobs we have in the economy, these machines just can't do. I mean, they can do some impressive tasks that previous machines couldn't do, but they are not up to the task of replacing most jobs, and I think it'll be many decades at least. So you've got a lot of time to have a career and to have a life and slowly see AI get better and slowly see jobs get automated. But I wouldn't worry about it all being gone very soon.

David Elikwu: How do we get better at measuring that distance though? Because I think there's also an extent to which it's a bit of the boy-who-cried-wolf syndrome, where you hear it so many times, you start to ignore it, you start to tune it out. But there is also the effect that each iteration is actually functionally getting better, and sometimes each iteration is multiples better than the last. Just like you were saying, I remember, probably also about 10 or so years ago, when we had Google Glass, and people were like, oh my gosh, VR is gonna take over everything. And that went nowhere. At the time, I actually went to the Sony offices in London and I was testing some VR stuff. I don't even know what happened to that; I've never owned one, and that was 10 years ago. But now it is so much better than what we saw then. And actually, it could be the case that within that time we've had enough iteration that it becomes functionally useful and it does maybe start to change the old paradigm.

So I guess, well, the two-part question. One is, how do we get better at managing that distance so that we don't just keep assuming that it's never going to happen? But then also, do you think there's an extent to which you kind of knock on the door until it breaks, and then actually we do cross over into a period that ends up being drastically different from the time that we had before?

Robin Hanson: So there's a chain of causation. At the end of the chain is actual economic activity: jobs change, products change, money changes, right? And so the thing you're most worried about is people losing their jobs, people losing wages, the economy slowing down or growing. Those are the fundamental things. So if you track that, you'll be tracking the thing you most care about. And that's been pretty steady for a long time.

The next step behind that is what we may call automation: the points at which a human doing a job is swapped for a machine doing a job. So that's the next step back in the causal chain. And then behind that is maybe innovation in how to automate a job, that is, demonstrations of particular attempts to automate jobs, say, by startup companies that are trying to spread their products so that more people will adopt a particular practice to automate jobs. And then behind that would be more research: conceptions of general techniques for how to do computer things differently. And then there are the most fundamental innovation changes. Now, a big problem is that people want to get news about this whole chain of causation, and so reporters and media are focused on the earliest part of that chain, because that has the most dramatic stories, because that sort of fluctuates the most and seems to be giving you the most news about things farther down the chain.

But the people doing that are not actually incentivized to be very accurate. They're mainly incentivized to be prestigious, just like all the other things. So, hey, if the New York Times has an article on automation or ChatGPT, well, it must be a thing, right? And then there's an incentive for everybody to gush about the same thing everybody else is gushing about. And they just don't have much of an incentive to be accurate. As you say, they gush about these things, they give these radical predictions, and then a few years later that's just gone. But they're never held to account for that. We don't collect track records on media pundits, and we don't give them incentives for accuracy. And until we do that, they're gonna continue to exaggerate, because that's the incentive we're giving them. Now, I did a study a couple years ago about this automation step in the process. I did a study with a co-author on, say, 900 different kinds of jobs in the US, which basically covers almost all the jobs, over the period from 1999 to 2019, a 20-year period. In that period we tracked how all those jobs changed in level of automation over time. And then we could ask, well, how much did it change over time? The answer is roughly a third of a standard deviation in automation, which was modest, but important.

And we asked a number of questions about, well, on average, when jobs get more automated, do the wages go up or down? Does the number of workers doing that job go up or down? And we found no effects like that over this period. And then we asked: in this 20-year period, did the kinds of things that predict which jobs get automated change? Was there a revolution or change in the nature of automation over this 20-year period? And we found none, no change whatsoever in the nature of automation. So: steady change, modest change, no net effect on number of workers or wages, no change in the nature of automation. That tells you that at this more fundamental level of the process, we just have relatively steady change. And that's what you should expect into the future.
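As a toy illustration of the kind of panel analysis Robin describes, here is a Python sketch that generates placeholder data for 900 job categories and reproduces the shape of the findings he quotes (roughly a third of a standard deviation of automation change, and no wage effect). The data are randomly generated stand-ins, not the study's actual data.

```python
# Toy re-creation of the study's shape: 900 job categories, automation
# levels at two dates, and a check that automation change doesn't
# correlate with wage change. All data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_jobs = 900
auto_1999 = rng.normal(0.0, 1.0, n_jobs)                # standardized automation level
auto_2019 = auto_1999 + rng.normal(0.33, 0.2, n_jobs)   # ~1/3 SD average rise
wage_change = rng.normal(0.0, 1.0, n_jobs)              # simulating the "no effect" finding

shift_sd = (auto_2019 - auto_1999).mean() / auto_1999.std()
corr = np.corrcoef(auto_2019 - auto_1999, wage_change)[0, 1]
print(f"average automation rise: {shift_sd:.2f} SD")
print(f"automation change vs wage change correlation: {corr:+.2f} (about zero)")
```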

Way back in the chain of causation, you see fundamental research ideas about how to do things differently. But the key point is that those things hardly affect actual automation yet; most actual automation is just not much influenced by them.

Most automation is very mundane stuff. If you look at actual firms, it's very simple machines doing very simple things, because that's in fact what is most useful in most of the world. If you work in an ordinary store, an ordinary office building, or an ordinary factory, you see the kinds of things they do and the kinds of automation they use. When they actually automate something, it's very, very simple, and it has little to do with the dramatic research progress you see reported in the media that everybody gets so excited about.

David Elikwu: What do you think about the role that attractor states play here? One of the potential fears, or potential differences with AI, is that you can progress it to a state where it can then complete things itself, a bit like autocorrect or autocomplete: you type one letter, nothing happens; you type two letters, nothing happens; once you type the third letter, the computer says, ah, I know what you're trying to say, and it fills in the rest for you. And I think that is one of the ideas people worry about with AI: it may have been the case that in the past we were nowhere near close enough for the computer to finish the puzzle by itself. But we may get to a state where, as we continue to make iterative development, which like you say can be very slow, it could take 20 years of slow plodding progress, but once you get to a significant point, then you can have exponential progress. I think of physics as an example: at one point everyone did physics on the whiteboard, but now you can get to a point where you can do quantum computing, because you just need to be good enough at the slow iterative progress. You can do enough physics to tell the computer how to do the rest of the physics, and the computer can compute the rest for you.

And now we can start getting answers to things where we don't technically know how to get the answer ourselves, but we know how to program a computer to get the answer for us.

Robin Hanson: We live in a world of 8 billion people, and most jobs are done by humans. Humans are, in a sense, a general intelligence, of the sort that we're trying to make computers into. But individual humans don't do most jobs, in the sense that they could just switch to them: it often takes a lifetime to learn to do one kind of job, and other people take a lifetime to learn a different kind of job. And there are many thousands of different jobs out there, each of which takes people a long time to learn. So that's a fundamental obstacle to sudden change. Sometimes there's a sudden change in one particular job in the world, like driving trucks or something, and you see that area leap forward, but that's only a tiny fraction of the whole world. And when the whole world has these leaps forward every once in a while in thousands of different areas, the net effect of all of it averaged together is relatively steady progress. That's with humans right now, who are relatively general intelligences, doing most of the jobs. With computers, which tend to be less general and more specialized at tasks, it's even more fragmented. Yes, once in a while you have a sudden breakthrough, and something that was slowly plodding along jumps ahead by a big amount, but usually it's only one small fraction of all the different things we need to do that leaps ahead.

That's why economic progress has been so steady over the last few centuries, even though, if you look back in the history books, you'll see dramatic changes that happened at various points in time.

I mean, just as a random example: on the top of the Washington Monument in the United States, I think they once put an aluminum pyramid, because aluminum was the most expensive metal you could get at the time. Then there was a huge innovation in the ability to smelt aluminum that suddenly made it one of the cheapest metals you could have, and there was a vast expansion in the use of aluminum in the world from that sudden, dramatic fall in its price. But that didn't make the world economy leap forward, because the world doesn't spend that much on aluminum. And that's a story that just keeps playing over and over again. You could say there was a leap forward with the smartphone, right? Hardly anybody had smartphones, and then everybody had them. And that's true, smartphones have been a boon, but they're still only one small part of the world economy.

David Elikwu: Okay. Fair. What do you think about the potential of the other form of attractor states, where AI brings us closer to grabby aliens, and then also the great filter?

Well, you know, the potential that, for example, as we try to reach out into the galaxy, building these telescopes to look out and see what we can find, AI could effectively help us create a homing beacon that brings whatever is looking for us directly to our doorstep.

Robin Hanson: We are in a vast universe, you might have noticed, and it looks dead empty. We look like we have a promising future: humans have been growing rapidly, advancing rapidly, on very short time scales. And if we can just continue growing at recent growth rates for the next few thousand years, we should be able to fill the solar system, and then fill the galaxy within only a million years, easily. So we seem to have this bright future, and the sort of descendants we hope to have are the sort of creatures who would be pretty noticeable in the universe. They wouldn't just be going around picking some flowers and planting a statue or something. They would be rearranging stuff; that's what we do on large scales. They would be changing the universe in big, visible ways, say over the next million years.

That's what we hope to become, but when we look out in the universe: nothing, nothing like that anywhere. So there's long been this issue: well, what gives? Where is everybody? That was Fermi's famous question, and it's long been a puzzle that people have struggled to understand.

And so recently I and some co-authors tried to work that through more carefully. What we did is come up with a three-parameter model that describes where aliens are in spacetime: civilizations like us, especially the sort of civilizations that would grow and become very visible. And we also have an argument for why you should believe this model is true and applies to the universe, because of the key datum that we are now very early.

The universe is now 14 billion years old. You might think that's a long time, and it is compared to timescales you're familiar with. But the average star will last roughly 5 trillion years. And if the universe were just to stay empty, then our best theories of when advanced life like us should appear say that we should appear toward the end of a planet's lifetime, on the longest-lived planets. So we should be appearing way in the distant future, after trillions of years of the universe being around, not now. Now is really early compared to trillions of years in the future; it's crazy early. So what we say is, this shows that you should not believe the universe will stay empty. This model we have of where aliens appear in spacetime has three parameters, and we can fit each of those to data; we have data for each parameter, so we roughly have the actual model here. And this model says, basically, that expansive aliens, the sort that would expand out into the universe and be very visible, who we call grabby, appear roughly once per million galaxies, so the nearest ones are a really long way away. Once they appear, they start growing at almost the speed of light, within a factor of ten of the speed of light, and they just keep growing until they meet other grabby aliens. And our time right now is a random sample from the times when they appear. If we grow over the next million years and keep going, we will meet them in roughly a billion years. And roughly around that time, within the billion years after that, the whole universe will be full of grabby aliens, and then you couldn't appear; it'll be too late for civilizations like ours to appear again. That's the deadline, which is why we're early.
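To see why "early" is so striking here, a rough back-of-the-envelope sketch helps, assuming a standard hard-steps model; the step count below is an illustrative assumption, not a number from Robin's paper:

```python
# Back-of-the-envelope illustration of the "we're crazy early" datum,
# assuming a hard-steps model: the chance that advanced life has arisen
# on a habitable planet by time t grows roughly like t**n for n hard steps.
n_hard_steps = 6       # assumed number of hard evolutionary steps (illustrative)
t_now = 13.8e9         # current age of the universe, in years
t_star = 5e12          # rough lifetime of an average long-lived star, in years

# If the universe stayed empty forever, the fraction of eventual
# civilizations that would appear as early as we have is roughly:
fraction_this_early = (t_now / t_star) ** n_hard_steps
print(f"{fraction_this_early:.1e}")  # ~4e-16: absurdly small, hence a deadline
```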

And that's the answer to where the aliens are in spacetime. What we claim is that we have a three-parameter model, with each parameter fit to data, and the model basically is the answer. It says where they are: they are a really long way away, but the universe will be full of them in a billion years, and at the moment the universe is roughly half full of these aliens. The reason we don't see them is a selection effect: if we could see them, they'd be here now instead of us. They're moving so fast that you don't really see them until they're almost here. And that's the answer to where the aliens are.

And if we ever meet them, that'll again be in, say, a billion years. So who we are, who would meet them, would be ourselves in a billion years. And then if you want to ask what grabby aliens are like, you want to ask: what will we be like in a billion years? I think it's pretty obvious we will be artificial. We are now biological and constrained by biology, and that's done well by us, but within the next few thousand years it seems pretty clear that artificial creatures, artificial intelligences, artificial robots, artificial everything will just be more competitive, more efficient than biology at most everything. So our descendants will just be artificial, and that's who the aliens will meet, but that's what they will be too. All the grabby aliens will just be artificial. And so our doing something about AI right now is just us moving along the path toward becoming what all advanced civilizations will be after millions and billions of years: very thoroughly artificial.

David Elikwu: That's really interesting. Do you think we'd lose anything in becoming artificial and giving up our biological bodies? I think we're already struggling with the idea of moving to, say, a metaverse, where instead of talking to people and having direct interactions, maybe you're spending more time online. Kids aren't going out and playing with sticks and mud; they're inside talking on Fortnite.

Robin Hanson: I think you have to notice how much we've already changed and given up so far, and then you should just expect a continuation of that pattern. I don't know that the future is that different from the past here. Humans basically distinguished themselves from other primates a million, or a few million, years ago. Then roughly 10,000 years ago we had this huge revolution, moving from being foragers to farmers, and the farming world was really quite different in many important ways from the foraging world. Farmers grew much faster than foragers did: foragers may have doubled roughly every quarter million years, while farmers doubled roughly every thousand years. And then a few hundred years ago we had the industrial revolution, after which the world economy started doubling roughly every 15 years. Over that period, humans have changed a lot. Our values have changed, our habits have changed, our practices have changed. I think our distant ancestors, if they were to see us and our lives, I'm not sure how much they would be willing to call us human; we would just be so different from them. And our lives are, in some sense, very artificial. You and I are right now in rooms whose walls are covered with paint, and on those walls we put pictures as we like, but behind the walls there's structure, there are wires, there are pipes. We hide all that from ourselves and show the image of the room that we want to show. We are in artificial worlds: a room, in a building, in a city. These are artificial worlds, and that's what we like. They are very far from natural worlds, and we have gotten used to that; we're even eager to make them more artificial.

Over this long period, though, we've had what people call alienation, which is that our nature and our habits were designed for the world we evolved in a million years ago. And as we move to more alien worlds, we have to deal with more alienation, in the sense that we feel less that the world around us is natural, and it's less intuitively the world we were built to be in. So as our worlds get stranger, we continually have to come to terms with alienation. But on the other hand, competition pushes us to do so.
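Those doubling times imply wildly different growth over the same window. A quick, hedged calculation using the rough figures quoted above:

```python
# Rough doubling times for the three growth eras mentioned above, in years.
doubling_years = {"foraging": 250_000, "farming": 1_000, "industry": 15}

# How much does each mode multiply the economy over one millennium?
for era, t_double in doubling_years.items():
    growth = 2 ** (1_000 / t_double)
    print(f"{era:>8}: x{growth:,.3g} per millennium")
# foraging barely moves (~x1.003), farming doubles (x2),
# and industry compounds to roughly x1e20 over the same span.
```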

And at least in the last few centuries, it's let us get rich. So we're very comfortable and rich in this alienated world, and in the future you should expect even more of this: our descendants will be in even stranger worlds, even more different not only from our world but from past worlds. That will just keep continuing for a long time, and they will continue to become more alienated, until their natures change.

For the last million years, human nature hasn't changed that much. What we changed instead was our culture and our environment; our internal natures haven't changed. But in the next few thousand years, we may well be able to change our natures much more than we have in the past. And then our descendants will be more comfortable with their world, because they will be, in a sense, more built for it.

So that's one thing artificiality can do: it could make our descendants feel that their world is their natural world, because they were more made for that world.

David Elikwu: Sure. What would you say is the difference between substrate-independent humans and artificial intelligence? If we were to achieve general AI, what's the difference? You talk about The Age of Em. What's the difference between a state where humans become emulated and artificial, and one where we develop general artificial intelligence and just give up and let the machines take over, because they are in many ways almost indistinguishable?

Robin Hanson: The world of the future will certainly be full of minds. Minds are powerful, minds are necessary, minds are very useful. In our world, human minds are the main minds of interest. There are some animal minds and some machine minds, but human minds are where most of the action is.

In the future, there will be a larger space of different kinds of minds. And then one of the most interesting questions is: which kinds of minds from today will be the ancestors of most of the minds of the future? There are two especially interesting categories. There are human minds, like yours and mine, and the structures in our minds. These kinds of minds can be converted to artificial hardware. That is, we can create what are called brain emulations, where we make computer simulations of particular human brains, such that they are now made in factories, out of stuff dug up in mines, and no longer tied to biology.

But they would still have very human styles, because they would be based on particular human minds. So those minds would love and get angry and feel the sacred and be jealous and all the other things our minds do. And those minds could then have descendants, in the sense that people would vary them and modify them and change them in many ways, and create a vast cloud of descendants that do various different things in different ways. And then there are the more directly constructed brains we make today: we write computer programs, we do machine learning, say chatbots, and those kinds of systems will also have descendants; we will be making more of those. And we could ask, for any one mind in the future, which of those did it descend from? Or will we invent new categories of minds, new and very different kinds of minds?

So that's one of the biggest uncertainties about the future: what kinds of minds will be in it? I've tried to do some analysis to guess which kinds of tasks in the future would be more likely to be done by a descendant of a human mind versus a descendant of an artificial mind, and I think I have some rough conclusions there. But if we stand back and say, if we want future minds to be more like us and less like alien computer minds, then what we want is for these emulations to happen, and sooner, so that there's more of a chance for them to compete and find ways to do more things. That is, I think at least some subset of jobs will be best done by human-like minds, which descend from human minds. It's an open question how big a fraction that is, or which kinds of tasks those are. But I do think we have a way to think about which tasks those are, and to assure ourselves that, in fact, at least for a while, there is a big place for human-like minds to continue doing things.

David Elikwu: Sure. That makes a lot of sense. I might ask one more question about grabby aliens and the great filter. I want to know: if the great filter model is accurate, does that make it more likely that we have already passed the great filter, and that we then become the grabby aliens?

Robin Hanson: So we are now quiet aliens. We have not expanded out into the universe and done something big and visible, but we think our descendants might. The total great filter is whatever prevents civilizations from becoming this very visible, grabby sort of thing. We haven't done that yet; we still have a step to go. And the entire filter is huge, in the sense that, again, only one planet per million galaxies produces an expanding civilization, and we have now produced a civilization, but it is not yet grabby. So the key question is: how much harder will it be to get to this grabby point? Another way of asking that is: what's the ratio of quiet to loud civilizations in the universe? If the ratio were, say, ten to one, then we'd have a 10% chance of becoming grabby. If the ratio were a thousand to one, then we'd have a one-in-a-thousand chance. The remaining part of the filter ahead of us in those cases would be 1 in 10 or 1 in a thousand.

So we don't actually have direct data on that number. Now, if we were to do more thorough SETI research, that is, searching for extraterrestrial intelligences, and we ever found one, that would be strong data for this estimate. For example, if we saw one in our galaxy, then we would say: aha, there's roughly one per galaxy on average. But grabby aliens only appear once per million galaxies, so then maybe there's a million-to-one ratio, and a one-in-a-million filter ahead of us. On the other hand, if the filter ahead is only one in a thousand, then we'd have to look at a thousand galaxies very carefully before we found one other quiet civilization. That's a very hard task; we're a long way from being able to see that carefully. So we're really a long way from getting evidence that would speak one way or the other. At the moment I just have to say we don't know how much of the filter lies ahead of us, but I think a big part of the filter that lies ahead is going to be a matter of our choice.
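The arithmetic behind that ratio argument is simple enough to spell out; a minimal sketch (the function name is just illustrative):

```python
# If there are R quiet civilizations for every loud (grabby) one, then a
# quiet civilization like ours has roughly a 1-in-R chance of passing the
# remaining filter and becoming grabby.
def chance_of_becoming_grabby(quiet_per_loud: float) -> float:
    return 1.0 / quiet_per_loud

for ratio in (10, 1_000, 1_000_000):
    print(f"quiet:loud = {ratio:>9,}:1 -> chance ahead = "
          f"{chance_of_becoming_grabby(ratio):.0e}")
# The million-to-one case is the SETI example above: roughly one quiet
# civilization per galaxy, but a grabby one only once per million galaxies.
```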

I mean, some things are going to be about whether we get destroyed by some power outside of us. But I think much of the filter will be: will we destroy ourselves, or even more important, will we allow ourselves to expand into the universe and become grabby? You might think that would be obvious, that of course we would, but I don't think it's so obvious.

We talked earlier about how, over the last half century, the world has been becoming this world community, such that elites identify with being elites in this world community and want to be respected by other elites, and they're less interested in being respected by the particular people in their nation. And I think that's going to continue over the coming centuries. We're going to have this world community that, in a sense, regulates the world together and is proud to have solved many world problems together. They may have solved global warming or inequality or racism or all sorts of big problems in the world, and they will credit their ability to solve those problems to the fact that we have a single world community that does that. Even war, for example, will be something they will be working to solve.

And in some sense, we have been solving many kinds of world problems through this world community, including reducing war. I think we will just really like that. Not only will we solve coordination problems, but we will prevent competition and evolution that we don't like. At the moment, as you may know, many kinds of regulation prevent competition; they prevent changes that we find scary or uncomfortable. We have limited nuclear power, for example, as we discussed, because people have been scared of it. And I think most people like that: they liked the fact that we prevented nuclear power because they were scared of it. We are preventing, basically, genetic engineering, because we're scared of it. And maybe we'll be preventing AI soon, because we're scared of it. And we will like the fact that we stopped these things, and that we prevented our descendants from becoming too weird and alien from us, by controlling the world and regulating it. In a few centuries, we will at some point have the opportunity to allow a colony ship to leave the solar system, off to some other star. Once we allow that, we will know that's the end of this era of global coordination and regulation. Once a colony ship goes out and lands somewhere and makes another colony, and those colonies go out further from there, the world will evolve, it will compete. The universe will no longer be under central control. We will no longer decide together what our civilization is like and what it becomes. It will instead become whatever the result of this competition and evolution is. And that could be not only strange; it could come back home to threaten us.

Whatever those colonists go out and do, and whatever strange thing they become, some of their descendants could come back here and dictate terms to us, if we had, say, limited our technology and our evolution but allowed a colony ship to go off. So I think there will be a substantial temptation. People will say: we've gotten along well together as a civilization, we rule a solar system, we have all this advanced technology, we're immortal, we're artificial. We have great art, we have great literature. We have a community where we love each other, we respect each other, and we regulate each other. And if we allow any colony ship to leave, that will be the end of that era, for the rest of the universe, for the rest of history, right? Do we want to allow that? And right now there are people saying, well, we should have what they call a long reflection. We should not allow any colony ships to leave until we spend a long time really thinking about what kind of civilization we want to be. Maybe we should take 10,000 years or a million years; it doesn't really matter, we've got plenty of time. We should spend a really long time thinking about what we want to become before we allow anyone to leave, because if we allow anyone to leave, that's the end of our central ability to control what we are and what we become. So that, I think, is the big choice that humanity, or our descendants, whatever they are, will have to make in the next few centuries or thousands of years: whether to allow any colony ships to leave to go change the universe, to become grabby.

David Elikwu: Fair. And do you think that's a genuine choice? Because there's a part of me that thinks we'll definitely do it, just because we have never resisted any form of colonialism throughout human history. But I wonder if aliens might feel differently, if they are as different from us as we might expect, you know?

Robin Hanson: If most aliens are just gung-ho expansionists, then basically there are relatively few quiet aliens, because most of them become grabby, right? The ratio of quiet to loud could be really low, two to one or even less, if most of them say, well, of course we're going to go, and they do it. But it's less clear to me; I think maybe a large fraction could want to limit themselves and prevent this expansion. Because most any species that becomes advanced will probably be pretty social, and therefore it'll have strong pressures to coordinate and to have large-scale social organizations. And that will create a temptation to create a global, or at least solar-system-wide, organization. I think that will just be a robust temptation.

Now, the key point, though, is that if any colonist leaves, that's over. So it's not enough just to choose a general policy of "I guess we shouldn't expand." You'd have to enforce it really quite vigorously, or even viciously. That is, for millions of years you would have to prevent anyone from leaving the solar system with a colony ship that could go out and expand. And as technology improves, the size of such a ship and the cost of making it will decline, so you'd have to slowly ramp up your surveillance and detection efforts to make sure you caught any such project early on, so that you'd be able to prevent it. So it would be, in that sense, a very repressive world, very strictly trying to prevent any such effort.

David Elikwu: Yeah, that's a really interesting question. But what do you think happens to, so I was just thinking of Francis Fukuyama and The End of History and the Last Man: we're meant to have reached the final frontier in terms of political evolution, and liberal democracy is meant to be it, right? But if we ever reach a point where we're thinking of launching a colony out into outer space, one would assume that we've progressed beyond that point. What do you think happens to nation states in the interim, maybe a bit closer to now, perhaps before we become artificial minds?

Robin Hanson: Well, the big thing that's happening to nation states is the thing that's happened over the last 50 years, which I told you about: they are becoming less autonomous, less focused on their own survival and prosperity, and more the hosts of this world community of elites, who are mainly interested in how the elites around the world think of them. That is a major change in the nature of the nation state in the last half century, and if it continues, it will have enormous implications for the world. That is, a decline in the importance of the nation state in a very substantial sense. Going along with that, plausibly causally, is that we have much less war than we had before. War was a driving force in creating and forming the nation state: nation states that could win wars prospered and spread, and the ones that lost didn't. That was an evolutionary force shaping the nation state for a long while. But since we no longer have so many wars, that force is much weaker now. We are much more becoming what we are based on internal evolution and this global elite evolution, not on competition between nations, especially military competition.

David Elikwu: Yeah. I think there's also another aspect of it. I actually wish I'd asked Garrett about this, but we're having fewer actual, overt wars, yet it seems like there are more quiet wars, more of an evolutionary survival of the fittest in terms of cultures, and there are cultures that die out. Let's say in the 1500s, you had to go and conquer another culture halfway across the world if you wanted to dominate them and subsume their culture into your own. But now that happens online, through the internet. No one actually has to go anywhere. In many respects, there are already people losing their languages because everyone is moving to English, even though that culture hasn't had any interaction with mercenaries for a long time.

Robin Hanson: So there's certainly a lot of cultural evolution, but that's not really nation competition, that's cultural competition; cultures are now largely detached from nations. Cultures are competing, but they're competing within this world community. Think about a small town in the past: it was often an isolated community in which there were elites and there were other people, and there was an evolution in that community in terms of its values and norms. Sometimes it could even have fashions, and different factions. So even though there was an integrated set of elites in a town, there could still be competition within that town, with different factions dominating over others, or just different styles dominating. We've had that within nations before, or within language communities. And now we have the same thing happening at the world level, but disciplined by this world community. That is, cultures are not competing on their ability to have the most kids, or win wars, or have the most successful businesses. Cultures are now competing for the allegiance of the elites. Which ones will the elites choose? And the elites have these fashion contests, where they compete to know what's in fashion. The more you know before others what's going to be in fashion, and can jump on the fashion first, the more you win. And in the elite competitions, people invite you to be the keynote speaker, write books, be on their boards, and things like that, if you are more in fashion than others.

So there's change in this elite community, but it's no longer driven so much by raw, direct competition, i.e., having more children, winning wars, getting more money. It's based more on prestige within this elite community. That's the main coin of competition in this integrated world community of elites, and that's what's driving the choices. And the key question is: will that be functional? We can at least see why competition over the number of kids, or winning wars, pushes for a certain sort of functionality: if you're going to win wars, your chariots have to work and your bullets have to fire, otherwise you lose, right? Even having children requires certain functional things to make it work.

And so a key question is: in this world of competing for elite allegiance, how functional will cultures have to be? For example, if fashion says you get rid of nuclear power, because that's the fashion, and then you're cold in the winter, you might think that's dysfunctional. But does this world care about functionality? How much will functionality drive this world, as opposed to elite fashion?

David Elikwu: Sure. There's another idea that you talked about earlier in our conversation, which is the Sacred. I'd love for you to explain more about that.

Robin Hanson: So I've told you a bit about how I have these institution ideas for change, and we discussed tax career agents a bit. What I seem to perceive is that often these proposals run into obstacles where people say, basically: that's sacred, don't mess with that. As an economist, it's not a concept we economists use much, so it was foreign to me, and I kept hitting it as an obstacle often enough that I finally said, okay, let's figure this thing out. This thing is in my way, basically, and I need to understand it. So I did what I think is the straightforward best method in social science, or even science in general, which is to collect a bunch of relevant data. In this case, that's correlates of the sacred: things people say tend to go along with the sacred. When they use that word, what related concepts do they have? How do they tend to treat those things differently? What do they do differently? I collected a bunch of correlates of the sacred, and then I tried to collect some theories that could explain those correlates, i.e., what are the stories people have about what's going on here? Why do we treat some things differently in this way? And then you try to do a match: which theories can best explain the most correlates? That's, I think, just a generic way to try to figure out anything. You ask: what are the phenomena? What are the possible theories? And which theories best explain more of the correlates, with as few auxiliary assumptions as necessary? Of course, in some sense any theory can explain anything if you allow enough arbitrary assumptions into your explanation, so you're asking what can explain it cleanly, smoothly.

So I now have a list of 51 correlates of the sacred, and that is a pretty big data set, I would say. I grouped those 51 into seven themes, seven packages of correlates that I can summarize, and then I tried to explain those seven themes. Three of the themes, I think, are pretty well explained by a classic theory of the sacred expressed by Émile Durkheim a century ago. Durkheim was the founder of the field of sociology, and his widely accepted story is that religion is mainly about the sacred. He said the main function of the sacred is to help groups bond together: if they see the sacred the same way, they see that they are the same people, and they bind together and help each other because they see something as sacred together that other people don't. That was his concept of the sacred, and I think it explains three of the themes. One theme is that we just value the sacred: sacred things are highly valued, and we do in fact value them greatly. The second is that we not only value these sacred things, we show that we value them: we go out of our way to be emotional about them, to sacrifice for them. And those two things, I think, can be understood by the third: that we in fact bind together. A group binding together around the sacred would need to not only value it but show each other that they value it, and that's something we in fact do. We are united around the sacred, and we show it, and we value it. So those three themes are fine.

But there are four other themes of the sacred that are not directly explained by this theory. The sacred tends to be set apart, clearly distinguished: we don't like sacred things mixed up with other things, and we don't like money involved with the sacred, for example. Sacred things are idealized: they have less complexity, fewer flaws, fewer categories, fewer random coincidences; basically, the sacred is idealized. We are supposed to feel the sacred, emotionally relate to it, intuit it, not calculate or think about it so much. And touching sacred things tends to make other things sacred. For example, if we have a sacred love, then a love letter, a concrete example of the love, becomes a sacred icon of the love; or a flag becomes a sacred icon of a nation. Particular concrete things become sacred by touching sacred themes. So those four themes are not obviously explained by the mere fact that we unite together around the sacred, and to explain them, I looked at some possibilities and decided to draw on something called construal level theory, a theory that's now several decades old in psychology.

Construal level theory says, well, first, that our minds are organized by abstraction levels. If you look at the organization of the brain: the eye takes in signals, and the first thing it does is find local patterns, edges, color patches, things like that. Then, as you get to deeper layers, it's looking at larger and larger structures. So that's organized by abstraction, and your ears do the same sort of thing. Basically, your mind has this first set of layers where we take detailed input and move it to higher and higher levels of abstraction, levels of aggregation. Then we make a high-level decision and go the other direction: we make broad plans and translate them into more specific plans, until you move your finger a certain way, or move your tongue a certain way to talk. So we go from abstract down to concrete; abstraction is just built into the organization of the brain, and abstract versus concrete is important. Construal level theory says this actually influences how we think about things. The theory wasn't really based on the architecture of the brain; it was based on noticing that, for example, in a visual scene, when you look out of your window at the moment, what you'll tend to see is a small number of things up close that you see in a lot of detail.

And then a large number of things far away, like leaves, that you see in very little detail. The key thing they noticed is that you think about these things differently. In particular, you think about the things far away more abstractly, less concretely, and more intuitively. Your mind is able to think about, say, the car you see up close in detail; it has a detailed theory of it, and it might see that a tire is flat or something. The far-away things, you don't have much time to think about, because you have so many of them and they're not really the focus of your attention. So each one is just coded in your mind with a few abstract descriptors: the leaf is green and it's round and it's small, and that's it, that's all your mind knows about it. It's not even really thinking about it; it's more intuitively judging all these far-away things, and thinking more in terms of abstractions: there's a tree, a clump of leaves, not even an individual leaf. So that's just how you think visually: you think differently about stuff up close versus stuff far away. You do it also with sound. You do it with time: things close to you in time and far away in time. With social distance. With chance: a high chance reads as near, a low chance as far. And even in plans: your overall goals are abstract and far, while your more particular constraints are near.

So your mind basically thinks differently about things that are near versus far, and things that are near in some ways you assume are near in other ways; all sorts of thinking is organized in terms of near versus far, and you just think differently about the two.

This is not my theory; it's a somewhat well-established theory in psychology. But I realized it can explain these other aspects of the sacred; that's the key point.

So the key point is that we are trying to see something together as the same, so that we can bind ourselves together around it. But this habit of seeing things differently when we're close versus far is an obstacle to seeing things the same. For example, in the United States we tend to treat medicine as sacred. If you're about to undergo medical treatment, being near to it makes you see it differently than I do, seeing your medical treatment from far away. We may disagree about your medicine, and that's an obstacle to our treating it as sacred. So the theory is: when we care enough about seeing something the same, so that we can see it as sacred and bind together around it, we switch habits. Usually, things up close we see as near, and things far away we see as far, and we see them differently. But sacred things we choose to see as if from afar, even when they're close. Even when I'm about to undergo medical treatment, I will see it as if from far away; I won't be seeing the details of it. Or similarly with love: people see love as if from afar, so that even when they're very close to it, they're still not sure what it is or whether they're in it, because they see it as if from afar. And that allows people to see their love the same as other people do, and to see it the same at different points in their life, because they are seeing it as if from a distance.

And so the key idea is that this is how we use some topics to bind together: by seeing them as if from afar. That explains the other four correlates. If we have this mode switch where we see something from afar when it's sacred, then that's why it has to be set apart, so it can be distinguished in this way. Things we see from afar tend to be idealized naturally, as we drop detail from them and don't see as many distinctions, because that's what it's like to see something from far away. Far-away things we intuit; we don't think about them so carefully. So when we see things from afar, we intuit them more and don't calculate so much. And because we are seeing, say, our love or our nation from afar in order to see it the same, we're missing out on close contact, and we crave some form of it. So when a concrete thing, like a flag, becomes connected to an abstract thing, like the nation, we crave that concrete thing as a connection to the abstract thing. That's why concrete things become sacred by connection to abstract things: it's a way we can have a close, concrete connection. If your lover is away at war and you have a love letter from them, now there's a concrete thing that can make real this abstract love that you value but can't see directly, and you want to see it directly.

And so that's the story of how all these correlates of the sacred can be understood in terms of two principles. One: we see the sacred the same, together, so that we can be bound together by seeing it the same. Two: in order to see things the same, we see them as if from afar, even when we're close. And that's the story of the sacred.

David Elikwu: Awesome. Earlier, when you were first introducing the idea and how you got interested in it, you mentioned an almost defensive property of the sacred: because something is sacred, we don't want anything to touch it or corrupt it, because the idea is significant to us. Does it also have properties that promote action in the same way? Because we believe something is sacred, will we also act on its behalf and propagate the expansion of the idea?

Robin Hanson: So, when love is sacred or medicine is sacred, we're very willing to sacrifice for it and to devote energy and effort to it. But what we're not willing to do is think carefully about it, to figure out how exactly to do it best. So we spend a lot more on medicine because it's sacred, but we don't spend it very effectively

David Elikwu: Okay.

Robin Hanson: because we're not so willing to be critical about the particular details and to engage with them. And so this is a key trade-off. If you just want more energy to go to something, and you don't much care how carefully that energy is used, you just need more devotion, and then making it sacred works. So if we are in a war, for example, and you want everybody to be devoted to the war, then making the war sacred will induce more devotion to it. But if you want to fight the war accurately, then you need the people running the war to not see it so much as sacred as to be practical about actually winning it. And that's an issue for all sorts of sacred things. Often we have experts of the sacred: priests. The priests have to pay attention to the details, that's what we assign them, but they're also supposed to help us see the thing as sacred. So there's a basic question: how exactly do they do that? How can they attend well to the details while still holding it sacred together with us?

David Elikwu: Okay. One last question I might ask, which I'm very interested in because I think it connects a lot of the stuff we've just discussed. Earlier you mentioned, for example, some of the policies you would love people to adopt, and we've also talked about how innovation can be taboo in some respects, where there are some innovations we might not want to make, and there can be conflicts around that. I'd love to know, from your perspective: if you could pick two things, one to make sacred and one to strip of its sacred properties, what two ideas might you pick?

Robin Hanson: That's something I've tried to give thought to, but it's hard. One thing I've observed is that these properties of the sacred do in fact describe some things better than others. For example, space actually is idealized, it is simplified, it is set apart. So it doesn't hurt space so much to think of it as sacred, because it already is many of the things you'd make it into by treating it as sacred, right? The sky is sort of like that too: it tends to be relatively simple and set apart, so we can idealize it that way, and it's less of a problem. Now, sometimes we need to think carefully about the sky, like for weather forecasting, and those people maybe need to not make it sacred. I think math is also idealized and set apart in many key ways, and it can unite us because it's independent of us in key ways. So there's a sense in which it's not so bad for math to be sacred, and I think math is pretty sacred among, say, academic elites: it's greatly respected and revered, and often given too much attention, I guess, because of that. But you have to ask what else you would do instead.

What I would make less sacred, I guess, might be politics or policy, in the sense that many of these institutions I'd like to try basically run into this policy-is-sacred thing. And I think, in some sense, it's this eliteness...

David Elikwu: Hm.

Robin Hanson: There's a sense in which status is sacred. I guess in, say, a place like Britain, royalty is sacred; royalty is an extreme kind of status, and many religious elements go along with royalty. But I think in weaker forms we do that for many kinds of elites: we treat elites as sacred, and that means we don't much question who should be elites or why, and we don't much question whether their judgments are reliable or informed, because that would be questioning our sacred elites. So I might want to make elites less sacred, if I could figure out a way to do that. And you might think that's surprising, because so many people are willing to pay lip service to saying they don't like elitism. Not that many people embrace the term elitism, right?

David Elikwu: Yeah, I was just thinking, even with what you were saying, it's very weird, and maybe it goes back to The Elephant in the Brain: a lot of the things we say are sacred, let's say the environment or love or blah, blah, blah, we don't actually treat as sacred. And the things we say are not sacred, and that we don't like, like politics or elitism, we actually do treat as sacred.

Robin Hanson: Yes, indeed. And I actually think that's an interesting function for bosses. We pay lip service to saying bosses are terrible, bosses shouldn't exist, bosses are mean, right? We're just generally against bosses. But of course, almost all of us have bosses, and we almost all actually do what our bosses tell us. So you might think, well, that's a big conflict we have: we claim that bosses are bad, but here they are, and we accept them.

And there's this old distinction between two kinds of status: prestige and dominance. Dominance is really the kind of status we're supposed to dislike and resist; prestige is okay. So I think a big function of bosses is to be prestigious, so that we can pretend we obey them because of their prestige instead of their dominance. I think prestige is really much more important to us than we like to admit, because it's the excuse for accepting dominance. Most people think politicians in general are corrupt, but their politician is okay, just like most people think most bosses are bad, but their boss is okay, right? We have to come to terms with these things. So yes, I do think we are not very honest with ourselves about our relationships to our bosses, our politicians, our leaders. Basically, democracy lets us pretend that we are in charge of the government, and to a substantial extent we're not. But it lets us think that we are really the bosses above all of it, and that they're just there by our choice: we decided who these bosses were, and so now it's more okay that they're telling us what to do and deciding things about us.

David Elikwu: Fair. I might actually ask one more question, if that's okay, because it just came to mind. You co-wrote your book, and you've mentioned working on a bunch of different projects with a lot of different people. I'm interested to know who you have loved working with most throughout your career, for non-status reasons.

Robin Hanson: See, you understand that if I name a name, all the people I didn't name will hear that I didn't name them. And that's a problem, you see. So I don't think I want to name a particular name of who I've most liked working with. I've certainly liked working with many people, but honestly, compared to most people, I don't work with other people as much. I'm more of a loner. That's a more distinguishing feature of me.

My podcast co-host, Agnes Callard, calls me an autodidact, which I guess is a fancy word for a loner. So that's a feature that maybe I don't understand as well about myself, but I'm just more comfortable sitting down and figuring something out by myself, and then working through things.

Many other people, even most other people, are more social with their thinking. They need to talk through things with somebody else; they really need other people to support a line of thought before they're willing to pursue it further. And that makes sense, actually. It makes sense that humans would be like that: we are in fact much more productive thinking together in groups than as individuals, and a lot of human psychology and capability is about our ability to work together with other people. I'm proud that humans are like that, and I fully support humans being like that. I'm just weird and I'm less like that. I'm not saying it's good to be like me, I'm just saying that's how I am.

David Elikwu: Fair, but I guess you still do: even if you might not work through certain things with other people in the process of making them, you bounce those ideas around for feedback afterwards, through your writing and sharing.

Robin Hanson: I certainly am trying, and I would encourage you to respond to these things. I actually think one of the biggest problems with most podcasts is that the interviewer doesn't push harder against the speaker, the interviewee. Basically, I tell you a bunch of stuff, and you nod and say it's plausible. And I go: no, you're supposed to be fighting harder! Tell me where I'm wrong, tell me what sounds a little iffy. That's what I need out of a conversation like this. I tell you a bunch of ideas, and you need to push back some. For my purposes, maybe not for your purposes, but for my purposes, I want you to push back.

David Elikwu: Fair. It's an interesting balance of ideas, and I just like hearing and observing. And, you know, I think there is some pushback, to the extent that you can probe deeper, ask more questions, and say: actually, does this work in this scenario, does it work in that scenario, et cetera.

Robin Hanson: Yep, that's possible. I'm just pushing for more, more and more.

David Elikwu: Thank you so much for tuning in. Please do stay tuned for more. Don't forget to rate, review, and subscribe; it really helps the podcast. And follow me on Twitter, feel free to shoot me any thoughts. See you next time.