AI, Careers, and Risk Management with Taylor Pearson
David speaks with Taylor Pearson, an entrepreneur and author of The End of Jobs.
They talked about:
How AI will impact career choices
The future of work
AI in automating jobs
How AI might affect career choice heuristics
The concept of ergodicity
Tail risks in investing
The Helsinki Bus Station Theory
This is the second part of a longer conversation. You can listen to the first part here:
Part 1: Skill stacks and career evolution with Taylor Pearson (Episode 81)
Listen in your favourite podcast player
Listen on Spotify:
Watch on YouTube:
Connect with Taylor:
Twitter: @TaylorPearsonMe | https://twitter.com/TaylorPearsonMe
Newsletter: The Interesting Times | https://taylorpearson.me/newsletter/
Show notes:
0:00 | Intro
01:23 | How AI will impact career choices
07:30 | The future of work
12:18 | AI in Automating Jobs
16:00 | How AI might affect career choice heuristics
21:57 | The concept of Ergodicity
29:17 | Tail risks in investing
35:22 | The Helsinki Bus Station theory
Mentioned in the show:
The End of Jobs | https://amzn.to/41MVNUU
Naval Ravikant | https://en.wikipedia.org/wiki/Naval_Ravikant
Flatland | https://en.wikipedia.org/wiki/Flatland
Neal Stephenson | https://en.wikipedia.org/wiki/Neal_Stephenson
Snow Crash | https://amzn.to/4936QMt
Douglas Adams | https://en.wikipedia.org/wiki/Douglas_Adams
The Hitchhiker's Guide to the Galaxy | https://en.wikipedia.org/wiki/The_Hitchhiker's_Guide_to_the_Galaxy
Herschel Walker | https://en.wikipedia.org/wiki/Herschel_Walker
Minnesota Vikings | https://en.wikipedia.org/wiki/Minnesota_Vikings
Dallas Cowboys | https://en.wikipedia.org/wiki/Dallas_Cowboys
Charlie Munger | https://en.wikipedia.org/wiki/Charlie_Munger
Luca Dellanna | https://theknowledge.io/lucadellanna/
Elon Musk | https://en.wikipedia.org/wiki/Elon_Musk
The Helsinki Bus Station Theory | https://jamesclear.com/stay-on-the-bus
Kevin Kelly | https://en.wikipedia.org/wiki/Kevin_Kelly_(editor)
Paul Graham | https://en.wikipedia.org/wiki/Paul_Graham_(programmer)
Paul Millerd | https://theknowledge.io/paulmillerd/
The Pathless Path | https://amzn.to/3i4LF7J
Nassim Taleb | https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
David Foster Wallace | https://en.wikipedia.org/wiki/David_Foster_Wallace
Optimize for Interesting | https://taylorpearson.me/interesting/
Eric Jorgenson | https://theknowledge.io/ericjorgenson-4/
Fooled by Randomness | https://amzn.to/3UrUI34
The Black Swan | https://amzn.to/49dS19E
Antifragile | https://amzn.to/492hDGC
About David Elikwu:
David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.
Twitter: @Delikwu / @itstheknowledge
Website: https://www.davidelikwu.com
YouTube: https://www.youtube.com/davidelikwu
Instagram: https://www.instagram.com/delikwu/
TikTok: https://www.tiktok.com/@delikwu
Podcast: http://plnk.to/theknowledge
EBook: https://delikwu.gumroad.com/l/manual
My Online Course
Career Hyperdrive: https://maven.com/theknowledge/career-hyperdrive
Career Hyperdrive is a live, cohort-based course that helps people find their competitive advantage, gain clarity around their goals and build a future-proof set of mental frameworks so they can live an extraordinary life doing work they love.
The Knowledge
Newsletter: https://newsletter.theknowledge.io/
The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to help you think deeper and work smarter.
My Favorite Tools
Descript: https://bit.ly/descript-de
ConvertKit: https://bit.ly/convertkit-de
NordVPN: https://bit.ly/nordvpn-de
Nutmeg: http://bit.ly/nutmegde
Audible: https://bit.ly/audiblede
Full transcript:
[00:00:00] Taylor Pearson: The number one way to be successful is to not die. The precondition for living is not experiencing death. Both like literally and metaphorically. And so eliminating or severely reducing that possibility is very valuable, because it allows you to continue to play the game, right? If you can like mitigate that sort of sequencing thing.
[00:00:17] Taylor Pearson: So I think that's where people get tripped up, or it's not exactly intuitive. They're like, oh, this thing is very unlikely to happen.
[00:00:29] David Elikwu: This week I'm sharing part of my conversation with Taylor Pearson, who is an entrepreneur, an investor, and the author of The End of Jobs.
[00:00:38] David Elikwu: So in this part you're going to hear us talking about how AI can start to impact our career choices. We talk about the future of work, we talk about the concept of ergodicity. We talk about his investing strategy and how he considers tail risks. And then finally, you're going to hear us talking about the Helsinki Bus Station Theory.
[00:00:56] David Elikwu: So this was a really fun episode. You can get the full show notes, the transcript, and read my newsletter at theknowledge.io. You can find Taylor online on Twitter @TaylorPearsonMe and in his newsletter, The Interesting Times.
[00:01:09] David Elikwu: Now, if you loved this episode, please do share it with a friend and don't forget to leave a review wherever you listen to podcasts because it helps us tremendously to find other listeners just like you.
[00:01:23] David Elikwu: Do you think AI changes much of what you were previously thinking about? I think of it in two ways. So one is, going back to the Turkey problem, there's a potential way in which, you know, people give advice, oh, here's what you should do, here's the fields you should go into, et cetera. I think there was definitely a much bigger movement at one point of getting people into tech. I remember people saying on the internet, plumbers are gonna be dead, all these other careers will die out, and everyone needs to start coding. You know, code or die, or whatever it was. And the irony is, it seems like one of the things AI is gonna be best at is completely replacing a bunch of them, especially the junior coders.
[00:02:04] David Elikwu: And actually, I think this ties to a framework I've heard you mention, which is something along the lines of simple work, complicated work, and then complex work. I can't remember the precise words that you used, but complex work is kind of like heuristic work, where it's not so much just getting good at a certain skillset, but more thinking in an abstract way, and that seems to be what will survive. Whereas if you're doing something rote, something that can be taught, something you can essentially just train an AI to do. Hey, coding has a language. If it has a language, it's similar to chess; chess has a fixed number of moves. If you can teach someone to do that, then you can teach an AI to do it, and maybe you don't have it as a job in the same way. And perhaps people will only do some of these things for entertainment purposes.
[00:02:52] Taylor Pearson: Yeah, I think simple, complicated, complex is a good way to talk about it. So the idea is, a simple job or a simple system is something that can be broken down into very clear, discrete steps. Like putting together a Lego set, right? You open the instruction manual, there are 42 steps for where you put the blocks, you follow the steps, and if you do it correctly it looks like the boat or whatever it's going to be.
[00:03:14] Taylor Pearson: A complicated thing is something that tends to require some level of expertise and experience. So, like a mechanic, for example, right? You can try and fix your car just following some instructions on YouTube, but for a number of things, you probably want someone with some expertise who can go, well, if you take that off first, the bolt could fall down here and that causes this thing to explode, right? You need some sort of expertise. But ultimately, there's a number of good enough answers. In a simple system, there's a correct answer, there's one best solution. In a complicated system, there's a discrete set of good answers, right? If your car is broken in a particular way, maybe there are three different ways you could fix it, where one's more expensive but maybe lasts longer, or whatever.
[00:03:59] Taylor Pearson: A complex system is one in which you have emergent properties, and there is no discrete set of good answers; it's constantly evolving. For the distinction between complicated and complex, the example I like is the difference between repairing a car and repairing a horse, right? A car, as we said, is made of discrete parts, so you can take the tires off the car and put a new set of tires on. If you take the liver out of the horse, you cannot put the liver back in the horse, right? It all works together. It's all integrated. Once you take the liver out, you take the heart out, the horse is no longer functional. It all requires a sort of integrated system.
[00:04:37] Taylor Pearson: I think it's an interesting way to think about work, right? You can look at a given role or a given task that someone's doing. There are certain things they do on a daily basis that are simple things, right? It's four steps, whatever. Making my coffee in the morning, it's four discrete steps. There's a correct way to do it and a wrong way to do it. There are also complicated things I do that are slightly more involved. And then there are complex things I do where there's not a clear right answer. It's hard to do.
[00:05:02] Taylor Pearson: And the way I talked about it in the book is, if you think of these things as a pyramid, there are these two kinds of forces. One is globalization; for the other I used the word machines. I debate about this a lot, but it's technology more broadly, and I would lump AI into it. These two forces of globalization and machines are kind of eating their way up the stack, right? And so a lot of the early stages of globalization ate the simple work: things got outsourced, they got moved overseas. For a lot of people in developed countries, manufacturing moved from the US to China, kind of that classical stuff, or it got automated.
[00:05:40] Taylor Pearson: I did a tour of the Toyota production plant in San Antonio, Texas, right? And there are a lot of big machines. There are people there doing stuff as well, and the people are important, but they've automated a lot of how that production works.
[00:05:52] Taylor Pearson: And so my thesis around this, I think I might have used the term AI in the book, but I certainly wasn't knowledgeable enough about AI to make any sort of interesting predictions. To me, it's part of that machine group that is just eating up the stack, right? Software did a lot of the simple things: if you can write a set of steps for a thing, software can do those steps in a deterministic way. And AI, I think, is when it starts to get into that complicated stuff, things that require certain expertise and heuristic decision making. And it seems plausible that the trajectory AI is on will get to where it can do that kind of work. I guess my mental model for AI is that it can let anyone be mediocre at anything.
[00:06:35] Taylor Pearson: You know, if you've seen AI write a history paper or whatever, it writes a pretty good freshman or sophomore level history paper on the impact of Napoleon on Russian culture or whatever, right? It's a pretty decent attempt. But it's not really good. If I have ChatGPT write me an essay on something, it's fine; it would probably get you, I don't know, a B or a C if you turned it in and your teacher didn't know what it was.
[00:07:04] Taylor Pearson: So I guess that's the way I talk about it in the book, becoming more entrepreneurial. I think another way of saying that is getting better at dealing with complex environments. Things are emergent, things are changing fast. How do you do that kind of work? Because that's what's scarce, right? That's what's hard to do for people, that's what's hard to do for machines. And so getting good at that is making yourself more valuable, I think, in the long run.
[00:07:30] David Elikwu: Okay. Yeah, that makes sense. I think the other part that links to what you were saying that I find interesting is, Naval Ravikant has a quote that's something like, there are two ways to make money: bundling and unbundling. And I think part of what made the internet age really great is that it unbundled so many things, right?
[00:07:50] David Elikwu: In order to make money and for a business to be successful, it no longer needed to corner a market. You could make money, just like you were saying, in loads of niche ways. You could have your podcast in one corner of the world, which is just about one particular US football team or college football team. And you could have your business that sells mid-century modern cat furniture. Like you could do all of these very niche things. Because the internet allows you to find all of the people distributed around the world that make money doing that thing.
[00:08:21] David Elikwu: And I think I heard someone else say something similar today, but I think he was just commenting on when people talk on the internet about how, okay, starting an e-commerce business, you can make millions and things like that. A lot of people think it's a scam. And I mean, there's definitely plenty of scams, so I'm not saying there's no scams. But I think the point is, when you hear about all of these e-commerce businesses that are making 7, 8, 9 figures, it seems incredible. But it's because, okay, so I just finished reading a book called Flatland. I'm not sure if you've come across it. It's really good, I'd highly recommend it. It's an old book and it's a bit weird. It's basically about a flat world that is two-dimensional, and someone comes from a three-dimensional world into the two-dimensional world and explains to someone there that there are other dimensions. So they take them up into the three-dimensional world and they're like, whoa, you guys have spheres here. Because in their world, everything is just flat, right? So things are just straight lines, or things have points, but you can't really figure out that they're points. 'Cause if you are actually flat, then you don't know how many sides a shape has. Anyway, it gets a bit complex.
[00:09:27] David Elikwu: But the point is, it just made me think of that, where, when you think of the internet, what that allows you to do is go up another level. 'Cause when you are in Flatland and you can only see the world in one way, you think of things as local. And so when someone explains a business that makes a lot of money, you're thinking, oh yeah, within this local sphere. But actually the internet is up a level, and you can go down into, you know, Japan and China, and you can make money in all places in the world. And so it's actually super easy to have a seven figure business because it's so distributed. It's not in one place. If it was in one place, it would be a very different type of business. But because you are able to come up into 3D land, you can make money from all around the world, and it's very different.
[00:10:08] David Elikwu: So I went off on a bit of a tangent there, but bringing it back to AI, the thought that I had was, there's a sense in which AI kind of rebundles things. Because before, for example, you would go to Google to search for text, you would go to your library, or you'd go to blogs or research papers or wherever. Then you'd go somewhere else to create images. You would be multimodal in how you approached life, how you approach research, how you approach learning, how you approach entertainment. And there's a potential future where a lot of these things just coalesce, and OpenAI, or your tool of choice, begins to aggregate a bunch of different services. And I think you are already starting to see that there are some AI tools that can generate characters, they can generate images, they can generate potential movies.
[00:10:55] David Elikwu: And I wonder what happens when you no longer need to go to different websites, to different places, to see what you need to see. Like if you could just type, and the entire film was generated on your laptop, within the same screen: here's a film that you can watch. What does that do to, I dunno, creativity, entertainment, books? So for the things that you could be doing as an entrepreneur, I wonder how much of that gets eaten up by the AI.
[00:11:20] David Elikwu: So all of that's on one side, and then on the other side is this idea that a lot of this comes at the price of our cognition. I think we've already lost one set of skills, which was, for example, mental maths. You don't need to learn to do mental maths if you have a calculator. And then what you currently use a calculator for, if you just ask ChatGPT, what is this? What's the percentage of this? It's just gonna tell you; you don't even need to learn that. And in a similar way, at least in our generation, you still have to do some research. Maybe you could use the internet for some of that research, but you don't have to go to the library and learn how to look for books. You can do some research on the internet, you can look for webpages. What happens when you no longer need to learn to look for webpages? All you have to do is just type a search.
[00:12:02] David Elikwu: So all of these different functions just coalesce into one function, and just being able to think of what you want is all that you need to be able to produce a vast amount of things. So I just wanted to know what you thought of the coming together of all of those ideas.
[00:12:18] Taylor Pearson: My wife and I watch those, like every year Netflix has five sort of C-grade Christmas rom-com movies where the script feels AI generated. If it's not AI generated, it could totally be, you know what I mean? It's the same cliche plot line where you can predict the whole thing. So whoever's producing those is screwed, because that for sure can be turned over to AI.
[00:12:40] David Elikwu: Yeah, Hallmark.
[00:12:42] Taylor Pearson: Exactly, I've seen that movie five times, every year. Yeah, exactly. Hallmark better have some good AI engineers, because someone is coming for them.
[00:12:52] Taylor Pearson: I think that's interesting. You're talking about the Flatland thing, and I like the bundling and unbundling framing because it captures a sense in which nothing is novel, right? We're just recapitulating certain things. So in a certain sense, it's not that there are no local businesses on the internet, it's just that the way you define local is different. Local might mean a forum or a subreddit, right? That's the thing that happened. In the same way someone would be well regarded in their local community of, you know, 5,000 people in their neighborhood because they were a good plumber, right.
[00:13:24] Taylor Pearson: That same phenomenon exists on the internet. You're a common poster on the personal finance subreddit, and you have helpful things to say about how people should do their budgeting, and you're a trusted and valuable member of that community, and that gives you access to certain opportunities, because people will trust you. And then you have an online course that you sell about how to get your budget right, or you do some consulting, or you write a book or whatever. So I have that same intuition about the AI thing: it's just gonna change the definition of what local means. I think about the cat furniture business. I tried to draw this image at one point, maybe I could see if I could get an AI image of it: instead of New York City, you have Amazon, right? And you live on the outskirts of Amazon, on the border between Amazon and Google, and that's how people find you, through those two channels. That's the local place that you occupy on the internet. And instead of being a plumber, you're selling cat furniture, or whatever it is.
[00:14:22] Taylor Pearson: I guess that's the intuition I have, that it bundles and unbundles in a different way. Yeah, I don't know, right. And I think maybe a good answer to your question of what I would change about the book now is that I probably overestimated the extent to which the internet would remain somewhat decentralized or somewhat distributed.
[00:14:41] Taylor Pearson: And instead, I think, we've ended up with a bit more of this walled garden phenomenon. You know, if you like sci-fi, there's a book by Neal Stephenson called Snow Crash; I think he coined the term Metaverse in that book. It's from the nineties.
[00:14:55] Taylor Pearson: It's like Ready Player One, if you enjoyed that book; it's a conceptually similar book. It imagines this future where nation states have kind of collapsed and you have this kind of reutilization. And I think the protagonist lives in what's called Mr. Lee's Greater Hong Kong, which now encompasses the west coast of the US and Vancouver, Canada, right? There's this new jurisdiction of greater Hong Kong. I don't know, I just thought that was such an interesting concept, right? We're just redrawing the lines here.
[00:15:27] Taylor Pearson: That's my intuition about AI: it just redraws the lines in a different way. And maybe it is something that's a lot more centralized, right? Maybe there are huge economies of scale to it, because of access to certain data sets, because of chip production. Maybe you can't make a good AI model without having access to one of four sort of players, you know, OpenAI, Google, Facebook. And, yeah, you end up in some different sort of jurisdiction.
[00:15:53] Taylor Pearson: So that was a long-winded answer. I'm not sure I really got to the heart of your question, but the unbundling and bundling seems like a directionally correct way to think about it.
[00:16:00] David Elikwu: Okay. Tying this slightly back to what we talked about before, do you think it changes the heuristics that people might have? So, you know, we talked about navigating with compasses or gyroscopes, and the Turkey problem, and people typically may have had heuristics of: this is the path that you go on, here is how you get a career, here's how you get a job. Do you think it changes anything about that process of how people find what kind of things to work on?
[00:16:24] David Elikwu: So for example, just using the example we just gave, maybe you don't think about, you know, going and working at Hallmark right now. Maybe you think of a different company or a different industry to work in. Do you think it changes any of that at all? Or any mental models that people might have for picking a career or picking a field?
[00:16:41] Taylor Pearson: I'm not sure I have a good answer to that question. I think it's certainly worth thinking about. Douglas Adams, the author of The Hitchhiker's Guide to the Galaxy series, which is one of my favorite fiction series, has a great quote that goes something like: everything that exists in the world when you're born is normal and the way the world should be; everything that's invented between the time you're zero and 35 is new and exciting and you can build a career in it; and everything that comes into the world after you're 35 is unnatural and against the way things should be and should be stopped, or whatever.
[00:17:10] Taylor Pearson: And you know, there's a lot of truth to that observation. I feel like it's certainly worth extrapolating the AI thing outwards: what does that look like? I mean, one thing, to our conversation earlier about what is safe and what is not safe: you have certain careers that may look safe to most people now that in fact aren't super safe, because you have this changing landscape of what is valuable and how the economy is structured. But I think about it through the same lens I mentioned earlier. The first wave of machines, technology, globalization ate away at the simple type of work. AI starts to eat away at the more complicated. A junior coder is a good example, right? Or there are probably lots of junior legal professions. Mid-level or junior levels in a lot of fields are not as defensible as they used to be. You know, I moved into an older house and I'm learning about home maintenance and all this kind of stuff, and ChatGPT is awesome for that, right? I can get to like a three out of 10 plumber knowledge pretty fast.
[00:18:14] Taylor Pearson: Does that make me a good plumber? No. I mean, I'm not qualified to do something, but I can get a basic understanding. I can get to mediocre at something a lot faster than I used to be able to. That first half of the learning curve, I think, is a lot faster. But I'm not sure it affects the second half of the learning curve that much. You know what I mean? Being truly exceptional at something feels as valuable as, or more valuable than, it used to.
[00:18:39] David Elikwu: Yeah, that makes sense. And actually I'm very glad that we talked through this part, 'cause it's also helped me to solidify some of my thoughts on certain things. I think that some of the stickiest jobs will also be the ones with the highest risk asymmetry.
[00:18:52] David Elikwu: That literally just came off the back of what you were saying. Okay, so thinking about law specifically, from when I worked in law: people were already talking about this before we were talking about AI, but tech was already starting to be incorporated into law.
[00:19:05] David Elikwu: And there were already these murmurings that, oh my gosh, you know, some lawyers might be replaced by technology. And that seemed like nonsense to me. My reasoning has changed since then, but at the time it was because I worked at a firm where, I mean, we'd had a merger with a US firm, so now it was a much bigger firm, but in the original version of that firm, we had people that I worked with where, so there was one guy, I think he was almost 80 or something, and he had been a trainee, so that's like a first or second year associate, when the first Star Wars film came out. And the person that he worked with, that he trained under, was also still at the firm. And the person that he trained was also still at the firm. So you have three generations of really old hands, each of whom trained the next, all at the same firm.
[00:19:48] David Elikwu: And when they were first and second year associates, they didn't have email. There was no email, there were no computers, there were no typewriters. Secretaries used to write by hand what a partner dictated to them. So first of all, imagine how hard that job was. And then imagine trying to get everyone trained on typewriters first, and then trained on email. And you'd think at some point, oh, you know, you're not gonna need all these secretaries. People were worrying about what was gonna happen to trainees, because, for example, trainees used to have to copy documents; they would just type them out again, and that was how you made extra copies of documents that you needed to take with you. And you would think, oh, we're not gonna need all these trainees. And now, where before you would have two or three, you have dozens. The same in the US: you have summer associates and you have first and second year associates. You have so many more of them, because there's actually way more work for them to do.
[00:20:39] David Elikwu: But the reason my thinking has changed, though I still come to the same or similar conclusion, is just off the back of what you were saying: I think it's just a function of risk. And it's the same reason people are still not accepting self-driving cars right now. If you drive your car and you kill someone, you can say sorry, and it's like, ah, you know, at least there was a human that you can either forgive or hate for the rest of your life. If an AI is driving a car and kills your child, what do you do? The anger that you have, you have nowhere to direct it. And I think that's a big reason why people hate it.
[00:21:09] David Elikwu: And so it's the asymmetry function there: if the risk could involve AI killing someone, people are not necessarily gonna be on board.
[00:21:16] David Elikwu: And similarly with law, if the risk is your company's gonna lose millions because this AI didn't think to check this other thing, good grief. The lawsuit for the first law firm that tries to employ AI on something serious is going to be astronomical. I can't imagine being the partner that decides, oh yeah, we're gonna take this risk, especially on something serious.
[00:21:37] David Elikwu: And so I wonder how long some of these things will stay sticky. Whereas with something like plumbing, hey, what's the worst that's gonna happen? The risk is maybe I can't use the toilet for a little while. The risk is, okay, now I have to call a real plumber to come and do the thing that I just messed up, because I made it even worse.
[00:21:54] David Elikwu: So I think that's how some people might do the math.
[00:21:57] Taylor Pearson: I use ChatGPT for writing stuff, and to your point about the bundling and unbundling, there are just certain things it's really good at, like analogies. Help me, I'm trying to explain this concept, throw out some analogies for me. Or I was trying to do something with a sports metaphor and I was going back and forth with it, and it was great at that.
[00:22:17] Taylor Pearson: But yes, there are certain things I would never use it for, where I need to be like 99% confident. Something where 70% accuracy is sufficient? Great, that's fine, I'm gonna do some sort of brainstorming exercise. But yeah, about law: I do have a ChatGPT bot, I have a little prompt in there for it to be a lawyer or whatever, and I'll talk to it. And it actually is useful. Before I have a call with a lawyer, I'll use it to come up with the questions, to be mediocre, you know, three out of 10 or whatever, so I can participate in that call. But I wouldn't make a major financial decision on the basis of that thing.
[00:23:46] David Elikwu: Yeah, exactly. I think the same. And I don't want this to turn into, you know, just talking about law. Here's one thing maybe to connect to your investing, which is probably one of the main things that you do now, running the Mutiny Fund.
[00:23:57] David Elikwu: There's an obvious tail risk that people talk a lot about with AI, which is that it's gonna come and kill everyone. So maybe not specifically to talk about that, but I would love to know how you think about tail risks in general. 'Cause I've heard you talk a little bit about the way that you do investments, but actually, you know, maybe taking a step back from that, why would you even do investments yourself? There is a potentially safe way to handle money where you just spread the risk out in a non-ergodic way across the entire market, and you just say, hey, I'm just gonna put all my money in ETFs. Why would you choose to do anything more complicated than that?
[00:24:32] Taylor Pearson: Yeah, so you've mentioned ergodicity a couple times, so maybe it's worth talking about that, because I think it plays into what we're talking about with the career stuff and also the investing stuff. So, ergodicity is a term from physics, but the basic idea of ergodicity is that you have two scenarios.
[00:24:47] Taylor Pearson: One is, I'm gonna use my own terms, not necessarily the technical terms, what you'd call an ensemble average. The other is what's called a time average. So you can think about it as: does one person doing a repeated action over time get the same outcome as many people doing one action, right?
[00:25:04] Taylor Pearson: So, in the case, if you flip a coin, you have a hundred people flip one coin, and you count how many heads and tails there are, or you have one person flip a coin a hundred times, statistically it's gonna work out to be the same distribution, right? You're gonna have the same thing.
[00:25:16] Taylor Pearson: But there are scenarios where that's not the case; the fun example is Russian roulette. If you have six people play Russian roulette once, as opposed to one person playing Russian roulette six times, you get a very different outcome, right? If six people play once, one person loses, five people win, and five people are very happy. If one person plays six times in a row, they're guaranteed to lose eventually. So a situation where the ensemble average and the time average, or path, are not the same is said to be non-ergodic.
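A minimal simulation of that distinction, in Python (an editorial illustration, not from the episode; it assumes the cylinder is re-spun before every pull, so the repeat player isn't literally guaranteed to lose, but their odds compound badly):

```python
import random

def survive_one_pull(chambers=6):
    # Spin the cylinder and pull the trigger once: a 5/6 chance of surviving.
    return random.randrange(chambers) != 0

trials = 100_000

# Ensemble average: many different people each play a single round.
ensemble = sum(survive_one_pull() for _ in range(6 * trials)) / (6 * trials)

# Time average: one person plays six rounds in a row, and only
# "wins" if every single round comes up empty.
time_avg = sum(all(survive_one_pull() for _ in range(6)) for _ in range(trials)) / trials

print(f"single-round players surviving: {ensemble:.1%}")  # ~83%
print(f"six-round players surviving:    {time_avg:.1%}")  # ~33%
```

The per-round odds are identical in both cases; only the fact that one player has to live through the whole sequence changes the outcome.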
[00:25:45] Taylor Pearson: And so it's interesting with respect to careers and technology and paths and stuff, right? Because just because something worked for someone else at another point in time doesn't necessarily mean it will work for you now. It depends what path and what direction you're on.
[00:25:58] Taylor Pearson: But then I think it's also very interesting in a finance and investing context, because you as an individual do not get the average returns of the market. You get what you get based on the path that you're on. If you have a child that gets sick, if you need to support a parent, if you get laid off, if you wanna start a business, you have all these inflows and outflows that impact the trajectory.
[00:26:20] Taylor Pearson: One example I give: the Dow Jones Industrial Index from 1966 to 1997 returned on average 8% a year. But it did that in two very different ways. In the first part of that period, and I'm not remembering these dates exactly, but call it 66 to 82, it was basically flat; there were no returns. And in the second part of that period, call it 82 to 97, it had 15% a year returns, right? So it had 8% on average over this 30 year period, but the trajectory really matters. If you are 65 years old and you retire at the start of that time period, you are drawing down that whole period, right? You're spending your money, and so you don't get the average returns, because by the time the strong period of returns comes in, you've already withdrawn a lot of your wealth. Then there's the inverse scenario, where you get the strong returns first. I don't remember these numbers exactly, but off the top of my head, say a couple retires with $3 million and they're planning to spend $180,000 a year in their retirement. If they get the bad period first, they go broke after 12 or 13 years, because their investments aren't appreciating and they're just drawing down on them. If they get the good period first, they get the strong returns and their retirement account grows a ton; they're only withdrawing a portion of it while it's going up 15%. It peaks at 12 million, and their retirement savings last them for 70 years or something. And so central to the way I think about investing is: we don't know the future path of returns. We don't know the trajectory, we don't even know the average. But even if we did know the average, it's not enough to know the average. You also need to know the sequence and the path. I think this is relevant to most people in the sense that, a lot of times when people get financial advice, they hear, oh, the stock market returns 7% on average. Which is a correct statement; that's a roughly true statement from the historical data I've seen.
[00:28:06] Taylor Pearson: But one of those things is that you can drown in a river that's two feet deep on average, right? If it's shallow along most of it and has one very deep channel, you can still drown in the channel. So you can withdraw 4% a year from an investment strategy that earns 8% a year on average and still go broke, because it depends on the trajectory, the path of those investment returns.
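Here's a rough sketch of that sequence-of-returns effect using the round numbers from the conversation (a stylized illustration; the exact year the money runs out depends on the assumptions, and this is not a model of the actual Dow):

```python
def retirement(returns, start=3_000_000, spend=180_000):
    """Walk a starting balance through a sequence of annual returns,
    withdrawing a fixed amount each year; stop if the money runs out."""
    balance = start
    for year, r in enumerate(returns, 1):
        balance = balance * (1 + r) - spend
        if balance <= 0:
            return f"broke in year {year}"
    return f"${balance:,.0f} left after {len(returns)} years"

# Stylized version of 1966-1997: a flat stretch, then a 15%-a-year stretch.
flat_then_strong = [0.00] * 16 + [0.15] * 15
strong_then_flat = [0.15] * 15 + [0.00] * 16  # identical returns, reversed

print(retirement(flat_then_strong))  # ruined before the strong years arrive
print(retirement(strong_then_flat))  # finishes with millions to spare
```

Same set of annual returns, same average; only the order differs, and one ordering goes broke while the other ends with a multiple of what it started with.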
[00:28:26] Taylor Pearson: So that's the central idea I've been interested in, and I think a lot of investing advice and investment education doesn't tend to make that delineation between path and ensemble; averages can be deceiving in that sense.
[00:28:43] Taylor Pearson: And so part of my philosophy, and how I think about it, is that it's not just trying to maximize whatever your long-term expected return is. It's also thinking about the possible path and trajectory by which you get there. If you have a down period, or a 15 year flat period or whatever it is, and you need to withdraw funds at that time, you're not going to get the averages, because you have inflows and outflows over that period.
[00:29:06] Taylor Pearson: So yeah, again, that's been a long-winded thing, but I think that's a really important concept that does have a lot of impact on how most people think about their investments, and it's not broadly understood.
[00:29:17] David Elikwu: Okay, so how do you avoid that then? Because I think this connects to some of the other things we've talked about. We've talked about tail risks to an extent, you know, something that seems wildly unpredictable but can also happen. It's non-ergodic in the same way, I mean, the Russian roulette analogy is a perfect example. There are six chambers; if you get the bullet on the first one, you don't get to live through the five empty ones, right? So the losses are irreversible.
[00:29:45] David Elikwu: And it's a similar thing with careers, and with a lot of different things in our lives.
[00:29:51] David Elikwu: So you could say careers are non-ergodic in a sense where, if you have a company that works their people really hard, grinds them to the bone, you could have a lot of people that potentially burn out. But the thing is, the odds for the individual are not the same as the odds for the company. The company is playing the ensemble: they have a hundred people, and every single one of them is gonna work however many hours, and a few of them burn out, but the company still gets the rewards at the end. Each individual gets the time series average, where they actually have to work through every single hour, day after day.
[00:30:24] David Elikwu: And the thing is, let's say you have X odds of burning out. If you burn out early on, that could irreversibly change your ability to work the rest of those hours, so you don't actually get to live the rest of that timeline. Applying that to investing, just like you did: if you take a big loss early on, even though the average result over time is different, you don't actually get to live the rest of that time period.
[00:30:45] David Elikwu: So I'd love to know how you think about that. Because the thing is, with tail risks, a lot of people don't necessarily want to hedge against them. If I said, oh, once in a hundred years there's going to be a global pandemic, and we just had one a couple years ago, I'm gonna think, hey, you know, that gives me a good 97, 98 years, no need to worry. But that one in a hundred could actually be five years from now, and suddenly it's a very different picture. So how do you balance these two ideas?
[00:31:11] Taylor Pearson: I like the company and employee framing. A similar one I like is between venture capitalists and startup founders: the venture capitalist gets the ensemble return. So oftentimes they're incentivized to say, yeah, go big or go home, see if you can make this really big.
[00:31:24] Taylor Pearson: Whereas the individual does not; they get the return of their particular company's trajectory, right? So they say, well, maybe let's be a little bit more conservative, and we'll take a path that's more likely to work but maybe doesn't have as high a ceiling. So yeah, I think it's an interesting distinction with the company as well, right?
[00:31:37] Taylor Pearson: But in the investing context, and I think it's actually harder to talk about for an individual, in investing it's a little bit simpler: the idea is just thinking more broadly about diversification. That's basically the main way to think about it, or thinking more realistically about what to expect in the future.
[00:31:53] Taylor Pearson: So again, I'll give rough numbers here. Say the average return of stocks is 7%. Well, the 50th percentile is like 5%, and the 25th percentile is like 2%, in average returns per year, or compound annual growth rate. I guess it's a bit unintuitive, but the average is highly weighted by the very good outcomes, right? You have some very, very strong periods and those bring up the average. But you don't know where you're gonna be relative to those periods. Are you gonna be invested in that period or not? Is that period gonna happen in your lifetime? And then, you know, to your point about tail risk: how do you think about the appropriate tail risk?
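Those percentile figures are quoted from memory in the conversation, but the shape of the effect is easy to reproduce. A small simulation (with illustrative parameters, not fitted market data) shows how the arithmetic average gets pulled up by the rare great paths while the typical path does worse:

```python
import random
import statistics

def terminal_wealth(years=30, mu=0.07, sigma=0.18):
    # One possible 30-year path of normally distributed annual returns.
    # mu and sigma are illustrative assumptions, not fitted to any data.
    wealth = 1.0
    for _ in range(years):
        wealth *= 1 + random.gauss(mu, sigma)
    return wealth

paths = [terminal_wealth() for _ in range(100_000)]
mean_w = statistics.mean(paths)      # pulled up by the rare great paths
median_w = statistics.median(paths)  # what the typical path actually got

print(f"annualized mean outcome:   {mean_w ** (1/30) - 1:.1%}")   # ~7%
print(f"annualized median outcome: {median_w ** (1/30) - 1:.1%}")  # ~5-6%
```

The "average" return is real, but most individual paths never experience it.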
[00:32:33] Taylor Pearson: Like the number one way to be successful is to not die. The precondition for living is not experiencing death. Both like literally and metaphorically. And so eliminating or severely reducing that possibility is very valuable, because it allows you to continue to play the game, right? If you can like mitigate that, sort of sequencing thing.
[00:32:52] Taylor Pearson: So I think that's where people get tripped up, or it's not exactly intuitive. They're like, oh, this thing is very unlikely to happen. Which may very well be true, but if the impact is sufficiently negative, it's still extremely problematic for what that long-term growth rate is.
[00:33:08] David Elikwu: Okay, that makes sense. I've heard you mention Herschel Walker Syndrome before. Could you explain that?
[00:33:14] Taylor Pearson: Yeah. So, Herschel Walker was a running back, an American football player, most famous for playing for the Dallas Cowboys; he was like the great running back of his generation. And, I'm trying to remember what year it was, he got traded to the Minnesota Vikings, and the trade is now called the Great Trade Robbery.
[00:33:35] Taylor Pearson: And basically the Minnesota Vikings gave up a ton to get him; they gave up like four first round draft picks, I can't remember exactly. Basically, they were saying, okay, Herschel Walker's the greatest player of his generation, and if we get Herschel Walker, we're gonna win a Super Bowl. That's the number one thing. And Dallas was thinking about it much more from a portfolio perspective. They're like, all right, we're giving up Herschel Walker, but we're getting four number one draft picks and all this other stuff. And that was the start of the peak of the Dallas Cowboys; I think they won three Super Bowls over a five or six year period, and a lot of the players from those Super Bowl teams came from the draft picks they got in the Herschel Walker trade.
[00:34:11] Taylor Pearson: And I think a lot of people tend to think about investing the same way the Minnesota Vikings thought about Herschel Walker: oh, I need to pick the best thing, right? I need to get the number one thing that's gonna do the very best. But what really matters is how the whole team plays together, how all the assets in the portfolio interact. Did the Dallas Cowboys get any players that were as good as Herschel Walker? Probably not. But in aggregate they were better, right? They could put all those players on the field, and that was the trade.
[00:34:46] Taylor Pearson: So that's my observation: a lot of people tend to think of investing as, I need to pick the winner, I need to pick the best thing. And they don't think about the overall portfolio and how all the pieces work together. So, coming back to ergodicity and talking about sequencing risk, sometimes it may be better to add an investment to a portfolio that doesn't have as strong returns but is a diversifier against the rest of the portfolio, right?
[00:35:11] Taylor Pearson: So tail risk stuff would be something like that. Is it gonna have the best long term returns? Not necessarily, but that's not necessarily the point. The point is how it interacts with all the other elements in the portfolio.
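A toy example of that portfolio effect (all return figures invented for illustration, not a description of the Mutiny Fund's strategy): an asset that bleeds slightly in most years but pays off in crashes can raise the portfolio's compound growth rate even though it looks terrible on its own:

```python
import random

def avg_growth_rate(hedge_weight, years=50, trials=20_000):
    # Each year, rebalance between a stock-like asset and a tail-risk
    # hedge that loses a little in normal years but spikes in crashes.
    # The probabilities and returns here are made up for illustration.
    total = 0.0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(years):
            crash = random.random() < 0.10            # one-in-ten crash years
            stocks = -0.30 if crash else 0.12
            hedge = 1.00 if crash else -0.05
            r = (1 - hedge_weight) * stocks + hedge_weight * hedge
            wealth *= 1 + r
        total += wealth ** (1 / years) - 1
    return total / trials

print(f"100% stocks:          {avg_growth_rate(0.0):.2%}")
print(f"90/10 with the hedge: {avg_growth_rate(0.1):.2%}")  # typically higher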
[00:35:22] David Elikwu: Yeah, that makes a lot of sense. How much does it make sense to hedge against these big risks? Because, for example, I can think of a few different models, or a few different people who have said things about this. You know, Charlie Munger said something like, we don't try to be smart, we try to avoid being stupid.
[00:35:38] David Elikwu: I think even in this conversation we talked about the idea that the best way to survive is not to die. So if you want to be able to live long and benefit from all the experiments you could run, you have to avoid being completely wiped out.
[00:35:49] David Elikwu: I had Luca Dellanna on the pod a little while back, and he talks about a similar idea where, again, instead of trying to win, you're trying not to lose. I think he used the example of Elon Musk: Elon Musk has survived nearly going bankrupt so many times that if you imagined all the alternate universes where Elon Musk exists, he's probably broke in quite a few of them. And we are living in the one universe where Elon Musk gets to be the richest person in the world simply because he has taken so much risk. And actually it might be better to optimize for, I think the model Luca uses is: how could I create a situation so that in the maximum number of alternate worlds I am equally happy? So I am kind of cruising along at an equal pace in multiple universes, and I'm not taking on so much risk that I could be wiped out, which would mean I don't get to benefit from future attempts.
[00:36:42] David Elikwu: However, going back to what we discussed earlier, there is such a thing, potentially, as playing it too safe and not taking enough risk. If you don't take any risks, or you take risks too infrequently, you are not used to risk when it arises, and so the big risks that you do feel are so much more painful because of that.
[00:37:01] David Elikwu: So how do you think of the balance there, both in investing and in life in general? Okay, we need to build the muscle of learning to take risks, but you also don't want to take so much risk that you could be wiped out.
[00:37:13] Taylor Pearson: Yeah, I like the exercise example you gave; like jumping off a wall is nice, right? You want more moderate stressors and more moderate risk, and less significant tail risk. So if someone's running a small business or working in a small business, you have a lot more moderate risks, this client doesn't pay, this thing doesn't happen, but you're a little bit more in control of your destiny, right? You're directly interfacing with the market. So you don't want to eliminate risk; you want to eliminate the very, very big risks and take more of the appropriate medium risks.
[00:37:45] Taylor Pearson: One of the concepts I had in The End of Jobs is this idea of stair-stepping in your career: you want to try and do something that's reasonably adjacent to your skillset, as opposed to, oh, I'm gonna leave my corporate law job and go do an AI startup where I have no experience in AI, right? That's a lot harder than, I'm gonna leave my job and do something reasonably adjacent to it, where my existing network and skills are somewhat transferable. It's still a risk, but it's not a start-from-scratch kind of thing.
[00:38:18] Taylor Pearson: So that tends to be the way I think about it career-wise.
[00:38:22] David Elikwu: So from your time running the Mutiny Fund, is there anything that you've learned as an investor that you can apply to the rest of your life, in terms of the level of risk you undertake or the way that you approach starting certain projects? Has that changed the way you think about any other aspects of life?
[00:38:42] Taylor Pearson: That's a good question. I'm not sure. I do think about the ergodicity stuff and the path dependency stuff a lot more. I'm not sure I have any great concrete examples, other than that those things tend to be on my mind a lot more when I'm thinking about particular decisions: what is the trajectory, or the path, here?
[00:39:00] Taylor Pearson: I'm a big fan of Luca's work in that same way: okay, across multiple universes, how do I maximize the number of them in which things work out pretty good? I think that's maybe one shift in my thinking as a result of it.
[00:39:14] Taylor Pearson: Yeah, I don't know. I think that's the main thing that comes to mind.
[00:39:17] David Elikwu: Okay. Fair. Just because it came to my mind from what you said, could you explain the Helsinki Bus Station Theory?
[00:39:24] Taylor Pearson: Oh yeah, I would love to. I think it was a commencement address at a photography school, an arts and design school. So at the Helsinki bus station, the way the bus routes are designed is that there's a central station. I've actually never been to the Helsinki bus station, but this is how it goes in the story, at least.
[00:39:40] Taylor Pearson: All the bus routes start out leaving along the same path; they're leaving on the main road out of the station. And you can think of each of the stops as a year in your career. So everyone finishes high school, graduates from university, and you kind of start off on the same trajectory, right?
[00:39:55] Taylor Pearson: And oftentimes you'll get a year or two in, and you'll look around and say, I'm kind of doing the same thing everyone else is doing. You know, you're an artist, you're looking around at your work, and you're going, oh, this is derivative, right? I'm copying some other artist that I like. And what a lot of people do is they get off the bus, go back to the bus station, and get on another bus. Then they go two or three stops and end up in the same scenario: everyone else is at this bus stop, everything looks the same, and they head back to the bus station again. And so the Helsinki Bus Station Theory is: you stay on the bus, because the further the bus gets from the station, the more the paths start to diverge, right?
[00:40:36] Taylor Pearson: In the context of being an artist doing photography, your work starts to evolve, it starts to get unique, you start to develop your own sense of taste. First, it's maybe a blend of two or three artists that you admire, and then you've taken that, incorporated it, and done something slightly different. It's the same in the context of your career. Most people three to five years into their career, I don't know how much novel stuff they're doing. Probably not that much.
[00:41:01] Taylor Pearson: I mean, there's research on this as well. I think there's something on the average age of Nobel Prize winners, and it's usually like 12 to 15 years into their career, something like that. It takes them a while to get to the edge of whatever they're working on, right?
[00:41:16] Taylor Pearson: The third year PhD student is usually not doing something groundbreaking. They're doing something derivative that's getting them closer to the frontier of their field. So, coming back to our compass and our gyroscope: what does that gyroscope down the bus path look like, and how do you stay on the bus? How can you take what you've already got, the skills, the assets, the relationships, all those sorts of things, and weave them into the next thing?
[00:41:41] David Elikwu: Yeah, that makes a lot of sense. And the last thing you were saying made me think of what you also talk about in your book, which is this idea of apprenticeships. Because, okay, you mentioned the Nobel Prize winners; you might also know that a disproportionate number of Nobel Prize winners worked for or with other Nobel Prize winners.
[00:42:00] David Elikwu: And you can find, you know, almost like factories, unintentional factories, where there are two or three different Nobel Prize winners that all worked at one point within the same university with one particular person. That person themselves may not even have a Nobel Prize, but at least there was a starting point that all of them diverged from.
[00:42:18] David Elikwu: And I think that's another underrated aspect of life building or career building, which is just learning from other people as well.
[00:42:26] Taylor Pearson: Very much so. Kevin Kelly has a term he uses called scenius, which I like. You see this in history a lot, how creativity emerges in certain groups, like Paris in the twenties or the Vienna Circle. Someone was telling me there's an AI version of this: a lot of the foremost AI people came out of the same place. There were like two labs 10 years ago, two people that a lot of them were working for, and that was the promising approach they all came out of.
[00:42:53] David Elikwu: Thank you so much for tuning in. Please do stay tuned for more. Don't forget to rate, review and subscribe. It really helps the podcast, and follow me on Twitter; feel free to shoot me any thoughts. See you next time.