πŸŽ™οΈ Rationality, Startups, and the Danger of Belief with Liron Shapira

David speaks with Liron Shapira, an entrepreneur and angel investor. He is the founder of Relationship Hero, a relationship coaching service with over 100,000 clients. He is also a technologist, rationalist, and serial entrepreneur, known for his sceptical views on crypto and other overhyped startups. Currently, he hosts the Doom Debates podcast, which aims to raise mainstream awareness of AGI's imminent risks and build the social infrastructure for high-quality debate.

They talked about:

🧠 The bridge between rationality and belief

πŸ’Ό How entrepreneurs find value in risk

⚠️ The danger of hype-driven startups

πŸ€ The role of luck in investment success

❌ Why optimism can lead to Web3 failures

πŸ—£οΈ The Balaji effect on conversations

πŸŽ™ Listen on your favourite podcast player:

The Knowledge with David Elikwu - Podcast App Links - Plink

🎧 Listen on Spotify:

πŸ“Ή Watch on YouTube:

πŸ“„ Show notes:

[00:00] Introduction

[02:14] Why computer science and rationality go hand in hand

[04:31] The flaws in human rational thinking

[06:36] Irrationality can be an asset

[09:04] The role of rational thinking in startup success

[11:11] The common pitfalls in startup value creation

[13:09] What Liron’s first startup taught him

[15:49] Why sustainable value matters in startups

[18:04] The humbling reality of Liron’s investment career

[21:10] What makes a startup's offering irresistible?

[24:30] Is Bitcoin just a fad or a future asset?

[29:40] Why optimism can lead to Web3 failures

[32:55] Why do we struggle to understand Balaji's ideas?

πŸ—£ Mentioned in the show:

LessWrong | https://www.lesswrong.com/

Eliezer Yudkowsky | https://www.lesswrong.com/users/eliezer_yudkowsky

Sequences | https://www.lesswrong.com/tag/sequences

Snapple | https://www.snapple.com/

Dan Ariely | https://danariely.com/

Predictably Irrational | https://amzn.to/3Nt7qKd

Paul Graham | https://paulgraham.com/

Relationship Hero | https://relationshiphero.com/

Quixey | https://en.wikipedia.org/wiki/Quixey

Ask.com | https://www.ask.com/

Alibaba.com | https://www.alibaba.com/

The Bloated MVP Test | https://medium.com/bloated-mvp/the-bloated-mvp-test-1f5b0770305f

Robin Hanson | https://theknowledge.io/robinhanson-1/

Fermi Paradox | https://en.wikipedia.org/wiki/Fermi_paradox

DoorDash | https://about.doordash.com/en-us/company

Starbucks | https://www.starbucks.com/about-us/

Cryptocurrency | https://theknowledge.io/what-is-cryptocurrency/

Coinbase | https://theknowledge.io/nfts-explained/

SpaceX | https://www.spacex.com/mission/

Stripe, Inc. | https://stripe.com/

Anduril Industries | https://www.anduril.com/mission/

Axie Infinity | https://axieinfinity.com/

Helium | https://www.helium.com/

LoRaWAN | https://www.thethingsnetwork.org/docs/lorawan/what-is-lorawan/

LongFi Solutions | https://www.longfisolutions.com/

Marc Andreessen | https://en.wikipedia.org/wiki/Marc_Andreessen

Andreessen Horowitz | http://a16z.com/

Chris Dixon | https://a16z.com/author/chris-dixon/

Blackrock | https://www.coindesk.com/tag/blackrock/

Balaji Srinivasan | https://balajis.com/


πŸ‘‡πŸΎ
Full episode transcript below

πŸ‘€ Connect with Liron:

Twitter: https://x.com/liron

Relationship Hero: https://relationshiphero.com/

Doom Debates Podcast: https://www.youtube.com/@DoomDebates

πŸ‘¨πŸΎβ€πŸ’» About David Elikwu:

David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.

🐣 Twitter: @Delikwu / @itstheknowledge

🌐 Website: https://www.davidelikwu.com

πŸ“½οΈ Youtube: https://www.youtube.com/davidelikwu

πŸ“Έ Instagram: https://www.instagram.com/delikwu/

πŸ•Ί TikTok: https://www.tiktok.com/@delikwu

πŸŽ™οΈ Podcast: http://plnk.to/theknowledge

πŸ“– Free Book: https://pro.theknowledge.io/frames

My Online Course

πŸ–₯️ Decision Hacker: http://www.decisionhacker.io/

Decision Hacker will help you hack your default patterns and become an intentional architect of your life. You’ll learn everything you need to transform your decisions, your habits, and your outcomes.

The Knowledge

πŸ“© Newsletter: https://theknowledge.io

The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to make you more productive, creative, and decisive.

My Favorite Tools

🎞️ Descript: https://bit.ly/descript-de

πŸ“¨ Convertkit: https://bit.ly/convertkit-de

πŸ”° NordVPN: https://bit.ly/nordvpn-de

πŸ’Ή Nutmeg: http://bit.ly/nutmegde

🎧 Audible: https://bit.ly/audiblede

πŸ“œ Full transcript:

Liron Shapira: People don't really understand how they're creating value. Like they're spending so much time at the beginning of their startup doing a lot of things that don't connect into making something somebody wants.

It's not just a cliche, it's literally like, most startup founders just don't make anything that anybody wants.

This is such a low bar. So it's like a weird situation where you take 80% of startups and they're not passing the bar, and yet the bar is a really low bar. I'm like, what's going on? Why is nobody passing a low bar of making something people want? Like, it's not supposed to be hard.

If you're gonna fail as a startup, a cool way to fail is like, okay, you couldn't make the unit economics work, right? Like, the marketing cost was just a little too high. But when you fail because you can't even make something that a single person wants? What's going on there?

David Elikwu: This week I'm speaking with Liron Shapira, who is a technologist, rationalist, and serial entrepreneur. He's currently the founder and CEO at Relationship Hero, and we had a really interesting, jam-packed conversation.

So Liron and I talked about his very first startup, which raised $170 million and blew up and failed completely, but also how he pivoted from that, taking all the lessons he learned into building his current startup, Relationship Hero.

We talked about how he got into the rationalist community and how that's helped him on the adoption curve for AI, and all the things he thinks about AI: the pros and the cons, the future prospects for a world enabled with AI, and whether it's going to kill all of us or not. We also talked about the pros and cons of Web3, crypto, and blockchain, and whether those are really promising technologies or not.

So this is a really engaging episode about the frontier of technology in various respects. And not just what we can love and appreciate about how technology is developing, but also how we can think critically about the ideas we're being presented with.

So you can find Liron on Twitter @liron, and you can get the full show notes and transcript, and read my newsletter, at theknowledge.io. If you love this episode, please do share it with a friend, and don't forget to leave a review wherever you listen to podcasts, because it helps us tremendously to reach other people just like you.

David Elikwu: I was looking at, you know, a lot of your writing, and thinking about a lot of your background, and I think two main things that underpin a lot of the things you talk about are computer science, in the sense of what it interacts with, and then rationality.

So I'd love to know, from your perspective, how did you come to those two places? Like, what got you interested in computer science? What got you into this idea, the concept of rationality, and digging deeper into that community? 'Cause there's a whole community around it as well.

Liron Shapira: Well, I'm really into rationality and computer science. I think you really got my number on that, you know, it's just been like a lifelong obsession for me. I was always just very nerdy. I loved being in my head thinking about chains of logic. When I first learned that you could program a computer, it was actually from a library book that had examples in code. I'm like, what's going on here? You're typing stuff in that makes the computer do stuff. What? And so of course I ran home and asked my dad, you know, how does this work? And my first programming language was BASIC, using examples from that book. And I'm like, oh man, I was in heaven, right?

So it was just really a good personality fit for me. And then I studied computer science in college. I studied a lot of math and metamathematics. It's this branch of math where it's like, you know, how do you formally encode a proof? And what does it mean to prove something? You know, how do computers prove things?

So yeah, I combined math and computer science, and then I got into LessWrong, and this corpus written mostly by Eliezer Yudkowsky. It's called The Sequences. I'm not sure if you're familiar with the LessWrong Sequences.

David Elikwu: Vaguely. I've come across LessWrong, but feel free to explain more.

Liron Shapira: Yeah. So it's this giant, multiple-thousands-of-pages corpus of basically, like, how to think. How to operate a human brain to try to approximate the level of an ideal reasoner built from scratch. So it's kind of like taking an AI lens to the whole philosophy field, and it's like, okay guys, it's not just about what feels right here. It's like, we gotta build an AI from scratch here, so we better understand, you know, what it really means to think logically. So I thought that was a very powerful, fresh approach to philosophy, to kind of turn the AI lens on philosophy.

And that's just been highly influential on me. So in addition to my computer science background, understanding the rationality Sequences and, you know, redoing philosophy from the AI mindset has completely rewritten how I think about everything. Like, now I'm like, okay, I'm just a brain, right? I'm like a type of algorithm. I have some flaws that we know about, and I have some ideas on how I would rewrite myself as a better algorithm if I could. That's kind of my underpinning of everything I do.

David Elikwu: In a very basic way, how do people not think rationally? What was the biggest bridge to cross, either for you personally or maybe for the average person, that stops us thinking in the most rational way?

Liron Shapira: You know, we don't always think rationally, but first of all, you gotta give humans credit, right? Like, a lot of the stuff we do is rational. So if you're just going to the store and buying some nice Snapple, you're probably doing a pretty good job with that, right? And your eyes are doing a lot of work, telling you in 3D, right, all the objects around you. And you're planning a route, and you're giving the cashier the right amount of money. So a lot of stuff is going right, and that stuff is rational. And so the question is, where does it break down, right?

So, the same mechanisms you have that are giving you this really accurate picture of how to get a Snapple: at what point do they break down when you're thinking about aliens, or ESP, or free will? Common ways that it starts to break down: one is that humans tend to think that things are inherently mysterious. So there's a mode that humans go into where, okay, they're buying the Snapple, but now they have to think about the beginning of the universe, and then there's a really strong temptation to wave your hands and be like, well, that's a mysterious phenomenon. There's two types of phenomena: there's the ordinary stuff and there's the mysterious stuff. And people tend to have a switch where they go, okay, let's not be logical about this, because it's beyond the realm of logic.

And so people naturally don't realize how far you can go with logic. And the reason for that is, if you look at humanity a million years ago, or if you look at an ancient tribe, it was so hopeless to try to use logic to reason about that stuff. It made a lot of sense to just believe that you can't. It's not that you couldn't, just that it was ridiculously hard. And then today we've made a ton of progress. Like, actually, we know quite a lot about the evolution of the universe. And you can really hold off on the sense of mystery, and you can just use pure logic to understand, you know, mechanistically what's going on with your cells, right? Or with your feelings: you can use evolutionary psychology, you can use logic to talk about your feelings. And so logic is kind of creeping in everywhere. But most people still haven't gotten the memo, right? They're like, oh yeah, I'm logical about my Snapple, but also Mercury is in retrograde, right? And I also believe in astrology, right? So most people kind of shift gears.

David Elikwu: One aspect that I'm interested in, and maybe you can shed some light on this: I think of maybe two wrinkles, or two additional facets, here. One is, I'm thinking of Dan Ariely's book, Predictably Irrational, and some of the ways humans act that are irrational but to good ends. There can be good outcomes simply because the ways we act are intentionally irrational, but that can also skew our judgment. The other aspect is, I think something like entrepreneurship is a good example of something that's often highly irrational, even when the outcomes are positive, and sometimes that means taking risk, perhaps unnecessary risk, from a purely logical perspective. I know you can break down all the maths of the expected returns, etc. But often you have people taking bets that, on the surface, look bad. I think Paul Graham talks about this: ideas that look like bad ideas but are really good ideas. And so at first glance this might seem like something not worth pursuing, but actually, once you do pursue it and you break some kind of intermediary barrier, then it actually becomes a better idea. And I think another analogy that he uses is that ideas look like smooth surfaces, but when you take a deeper look, there are actually lots of little facets, and those are the...

Liron Shapira: Yeah. Yeah. He just published that last night.

David Elikwu: Yeah.

Liron Shapira: Yeah, I saw it. It's great. No, I actually love Paul Graham's analogy of the fractal, right? Like, when you get up close to something. This might be a little tangent, but I've had the same thought in terms of startups. When you're just a startup trying to go to market, a lot of pitches sound like a smooth pitch attacking a smooth space. Like, oh, we just need 1% of the market, and the market is educational tools for toddlers who have scientist parents, or whatever. So just the smooth description. But then when you get really close, it gets hairy. You're like, well, actually, the part that really matters is we have the best music backing track. Or there's some random detail that you really can't predict in advance in your pitch, but once you get into the weeds, you're like, well, it's not even the best music; actually, what we're good at is the exact mechanism for knowing, as the toddler advances through the levels, exactly when the levels should go back. So it's something that'd be really hard and random to pitch, but it turns out that that part of your operations turns out to be important.

And in my own business, Relationship Hero, there are random optimizations that I spend my day doing, like A/B tests that I did on pricing. Turns out that was a high-leverage thing that I do. But if I worked that into the pitch, like, oh, we're gonna have great A/B tests on pricing, it's like, what are you talking about, right? So there's a big difference between looking from far away and really getting into the weeds and seeing what turns out to be important.

David Elikwu: Maybe the better question is, at what point did you get into this thinking about rationality, and how did it influence your decision making? Particularly, what I'm interested in is your first startup, which you say failed massively, while your current startup is going a lot better. So I'd love to dig into the story of, you know, what it was like building that first startup, what went wrong, and how maybe your mindset and approach have changed. Because I know you evaluate lots of different startups, both as an angel investor, but also just as a commentator in general.

Liron Shapira: Yeah, my entrepreneurial journey. So my first startup, right out of college, was a company called Quixey, and we were doing search engine technology for app stores. And yeah, in terms of the product, it worked okay. We had some partnerships; we powered Ask.com's app search feature. We had a partnership with Alibaba for their app store. We got a lot of press for just raising a ton of money. So at the peak we raised over $50 million of strategic capital from Alibaba, $170 million total over the years. And we didn't really deliver that much, so we kind of raised way more than we deserved. And eventually we didn't have much to show for it. The Alibaba partnership didn't work out, and the whole thing just shut down. And yeah, I mean, it was a massive failure, a lot of value destruction; we destroyed way more value than we created. So, you know, at the very least I can salvage some lessons, right? From this expensive education, a lot of people's money got wasted, but I can salvage some lessons.

And one of the lessons is, sometimes people pump way too much money into something that's not working well enough, right? Like, we got more investment than we deserved. And that lesson has been helpful in my career, to realize that even if people have millions of dollars, they may still not know what they're talking about. And that lesson was helpful to me in helping pop the crypto bubble, for instance, right? Like, I don't care if it's a trillion-dollar industry. It's overvalued.

So, yeah, and then, you know, moving on to my next company, Relationship Hero. I took the lessons I learned and I'm like, look, this has to be a profitable business, right? That was kind of my constraint going into my current company, Relationship Hero. And the scale of the fundraising has been much lower. We've raised $4 million and we're currently profitable, but our scale is pretty modest. We're still below $10 million a year in revenue, which is not bad for a business in general, but it's not unicorn level.

Yeah, so, you know, it's, it's hard to get both high scale and profitability. So we're still trying to tweak that dial.

David Elikwu: So you review a lot of startups at Bloated MVP. I'm interested in what you think are the most common mistakes you see startup founders make, aside from perhaps raising too much money at the outset.

Liron Shapira: Yeah, I mean, there's really just one major one that connects almost everything else. I mean, it's insane how common this is. Which is just that people don't really understand how they're creating value. Like, they're spending so much time at the beginning of their startup doing a lot of things that don't connect into making something somebody wants, right? And now I'm, like, stealing YC's catchphrase: make something people want. It's really spot on. You know, it's not just a cliche; it's literally like most startup founders just don't make anything that anybody wants.

My observation is, this is such a low bar. So it's a weird situation where you take 80% of startups and they're not passing the bar, and yet the bar is a really low bar. I'm like, what's going on? Why is nobody passing a low bar of making something people want? Like, it's not supposed to be hard. If you're gonna fail as a startup, a cool way to fail is like, okay, you couldn't make the unit economics work, right? Like, the marketing cost was just a little too high, you couldn't make the unit economics work. But when you fail because you can't even make something that a single person wants? What's going on there?

So I dug into that, and it turns out that what's going on is what we talked about before. It's the fractal thing. The idea that when you're zoomed out, when you haven't really gone to market yet, when you haven't launched and you're working on your product, you have this very smooth idea of, like, oh yeah, people need this. They need better analytics, and I'm just gonna make smarter analytics, and I'm gonna put AI in it, and it's gonna be better. And then when you finally launch it, what happens is just, nobody cares, right? And then you're like, oh, let me email some people, let me get an email list. And it's just like, okay, your email list, maybe they visit the site and then they leave.

You know what I'm saying? You never got a person to come use the product. It's crazy. And then that's it. The startup shuts down, and they never get one passionate user. And that's a typical scenario. And I feel like most people don't know this. With this super common failure mode, everybody's walking, like lemmings, in the same direction, because the news isn't out that this is how startups fail.

David Elikwu: I totally agree, and I think you wrote something about it and referenced it as the Great Leap, or something like that.

Liron Shapira: Right. So there's this concept called the great filter in cosmology, invented by Robin Hanson, which is the idea of, just like, hey, you know the Fermi paradox? There are no aliens. So there's gotta be at least one step that's really, really improbable in the development from the beginning of the universe, the formation of a planet, all the way to an intelligent civilization. There's gotta be at least one step that's extremely hard, because there are so many planets and we only know of one planet with life on it. So where's the filter, right? And it could be multiple steps, but there's at least one really big filter. And we're not sure what the filter is for human life, whether it's behind us or in front of us. But my thing is an analogy I make for startups: what's the great filter for startups? Why isn't every startup a unicorn, right? Why are there only a few unicorns? And it turns out that the biggest filter step is ridiculously far in the beginning. It's just the step I talk about: the step of getting one person to successfully get value from your thing. I believe something like 80% of people who work on something will never get one person to get value out of it. Like, they will fail at the starting line. The starting gun hasn't even gone off and you're already dead.

So it's a really weird place for the great filter to be. And what's even weirder is, if you explicitly make that your objective, if you're like, okay, startups are so hard, I'm actually going to just explicitly focus on making sure that one person wants this. There's a trick to making something that one person wants, which is: you start by picking a person, and then you kind of stalk that person, and you're like, hey, give me 10 bucks. Like, what do I have to do to get 10 bucks? And the person can be like, I don't know, go get me a Starbucks? You get them a Starbucks, you get 10 bucks. You're already past what most startups have done, because you did something that somebody wanted. Now, is that a scalable idea? I mean, maybe? Maybe you have a one-person version of DoorDash if you can get people Starbuckses, right? So it's not the worst idea. I would argue you're further along toward a good idea if you at least made somebody's day and got 10 bucks than if you're working on technology for a year that nobody ever uses.

David Elikwu: So first of all, I completely agree. And taking that a step further, I think one of the mistakes a lot of people make is that there are a lot of traditional paradigms people take to heart in a good sense, but maybe a step too far. So even thinking about what you were explaining as the first filter, I think the next filter is that a lot of people, when they do focus on value, focus too much on the value that they are extracting rather than the value they're delivering. And so it's not based on empathy at all. It's based on trying to find an idea that is monetizable enough to generate money for themselves, as opposed to generating value for users. And so what you miss is the exact interchange where you are providing enough value to get the money, instead of just the broad idea that this is something where money can be extracted.

And so this makes a good connection to a lot of the Web3 space, as an example, where you see a lot of people essentially creating problems that can possibly be solved, and that people can give money for, but there is no actual value. I think maybe the gap is between fulfilling an urge and fulfilling a need.

So I think maybe it's Paul Graham who talks about this: building startups that fulfill one of the seven deadly sins. And so there are a lot of startups that pursue urges. And you can perhaps get over an initial barrier where people can generate hype; you can generate hype around something by fulfilling an urge: the need for hype, the need for curiosity, the need for people wanting to make money, the need for sex, or whatever. But beyond that, you don't get from the urge to the need. So after the initial hype is gone, there's nothing.

And so actually, on a broader scale, I was thinking about this earlier. I think what you get is that if you look at a very small window, as an example, 2018 to 2021, you'll see a lot of startups that look successful within that window. But when you take a broader scope and you say, okay, out of the whole 21st century, give me a list of startups that worked, those don't even exist, because they weren't statistically significant in any way. And so on the broader scale, they might as well not have existed, because they didn't actually create any tangible value; they just managed to extract some value in the meantime, in the brief window that they existed.

Liron Shapira: Yeah, that's true.

David Elikwu: You're also an angel investor. I'd love to know, okay, how has that gone in general? And then we can talk specifically about Coinbase, which I think was probably a big win, perhaps one of your biggest wins. I'm not sure. But yeah, tell me more about that.

Liron Shapira: Yeah. Coinbase was my biggest win, which is ironic because, you know, I'm so anti-crypto, anti-Web3, and I think it's so overblown. And I was just incredibly lucky that I invested in Coinbase in 2012. Because, back in the day, a lot of rationalists were passionate about Bitcoin, because it's like, look, this could be used as a currency. Maybe. Like, probably not. But if it is, the upside's really high, right? And so back in 2012, that was my first insight as an angel investor, when I even started angel investing, when I even had any cash in my bank account to angel invest. You know, I had this observation that many rationalists had, which is, look, there is like a thousand-x outcome, right, for Bitcoin. Which turned out to actually happen, which is, you know, pretty wild; that kind of thing rarely happens, right? But if it does happen, it's a thousand-times return, if not ten thousand. And so, you know, if you do the math, it's like, well, I only need like a 1% chance, right? Maybe there's a 10% chance, but then I'll get a thousand-x return. So 10% of a thousand. So that's like an expected value of a hundred-x return, but I might have to wait a few years for it. But it's pretty good, it's pretty attractive, if you can make a lot of bets like that and you're right that they all have some chance of return.

Now, over the years, I'm like, oh man, Bitcoin, you know, it's not really usable as a currency. Like, it has macroeconomic issues. Maybe it is a little bit. So I've become so much less bullish. And I was just incredibly lucky that it was hard to sell my stake in Coinbase, because I owned Bitcoin, and I sold the Bitcoin and just didn't sell the Coinbase. Although actually, I did end up selling more than half of my stake in Coinbase.
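The expected-value reasoning Liron walks through can be written out as a quick sketch. The 10% probability and 1,000x payoff are his illustrative figures from the conversation, not real market estimates, and the function name is just for illustration:

```python
# Back-of-the-envelope math for an asymmetric bet: a small chance
# of a huge payoff can still carry a large expected value.
# Figures below are Liron's illustrative numbers, not estimates.

def expected_multiple(p_win: float, payoff_multiple: float) -> float:
    """Expected return multiple of an all-or-nothing bet (loss of
    the stake at most 1x, so the downside is ignored here)."""
    return p_win * payoff_multiple

ev = expected_multiple(p_win=0.10, payoff_multiple=1_000)
print(ev)  # 100.0 -> each dollar bet is "worth" $100 in expectation
```

The point of the sketch is just that a portfolio of many such bets only needs a few to pay off; any single bet still loses 90% of the time.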

So it really is just, honestly, sheer luck that I was able to hold enough Coinbase that I did end up pocketing like a thousand-x return, like 10,000 into 6 million, just an insane, unheard-of return. And yeah, the fact that I held onto some was totally luck, right?

It's very sobering, it's humbling, as an angel investor, to be like, okay, so the only reason I can even say that my angel investing career has been a success is because of this one incredibly lucky element. And, you know, I sold my Bitcoin. Do I have other investments that are pretty good? I have some that are pretty good, but it's hard to say how much of a win my investing career is as a whole, because the time horizon is also super long, right? So Coinbase was literally my first investment: beginner's luck. And I just cashed out of that, you know, a year and a half ago. So if I do have another slam-dunk investment, I would only be cashing it out either now or later, and most of my angel investing was done in just the last few years, right? So maybe in five years I'll have the next Coinbase finally come to fruition. And I do have some companies that are doing quite well, that I would say are maybe worth 40 times more than what I invested, right? And 40 is still a long way from 2,000, and they're not liquid yet.

But yeah, I mean, honestly, I have no idea how good I am as an angel investor objectively, and that's probably why I don't angel invest a ton. I only angel invest when I see a company where I just feel like I need to be involved, because the team is so good to be in communication with, or the product is something that needs to exist. So I've kind of given up on this idea that, oh yeah, I know what the expected value of this investment is. It's more like, when I see enough things that I like about an investment, I try to get involved with a small check, because at least I get to be part of the journey. It just seems like a great journey that's interesting to me.

David Elikwu: Okay, so outside of personal interest, what are the criteria that you might look for, either for an angel investment that you are personally making, but also just the criteria of being a good startup?

Liron Shapira: Yeah. So I have some boxes that I check. So, like, when I'm doing a Y Combinator mock interview, or just meeting a founder for the first time, one box is just, you know, the quality of the conversation. Like, is the founder answering questions in a direct way, right? Do they sound intelligent? Are we having high-bandwidth communication right now? And if we don't have that, that's gonna make me a lot less interested, because even if they're running a good company, it's hard to even communicate with them, for whatever reason. Yeah, so that's one signal.

And that does rule out a significant percentage of people. That's gonna filter out, I'd say, at least half. There's also the filter where I can judge software execution, right? So just by having them show me some slides or a demo of the software, I can judge, like, oh, okay, I can kind of tell how they coded this. 'Cause that's kind of my niche, right? Software engineering. So that's another box I can check. Like, oh, these guys are good at software engineering. Which is great, because then, you know, you've got technical founders on the team, so even if stuff goes wrong, you can try to hack different stuff, right? You can keep doing lean prototypes; you just have more shots on goal when you can easily put stuff out there using software engineering skills. So that's another box I could check.

Another box is just the quality of the value prop. Is this clearly something that people want? There are some ideas where you go, oh yeah, of course people want this.

Another box to check is: is this a sweetheart deal? And this is something that, as a rationalist, nerdy, introverted guy, feels a little slimy to me; it isn't a part of the industry that I like. But there is this idea of a sweetheart deal, right? Where it's like, oh, I have this connection. You wouldn't even be talking to me, but you met me through a friend, we're in this network, and so I get to invest. And this is clearly a good investment: you have a good reputation, I'm just gonna give you a check because I trust you, and you're not even talking to that many people.

So that's like a free box that I get to check if I think I'm getting a sweetheart deal. And in some cases, just by being a high-net-worth individual, you can go on AngelList and subscribe to these deals. I mean, you don't even have to be high net worth, you can pass an exam. So if you're on AngelList, you're getting sent deals, right? Like, the other day I bought secondary shares in these great-name companies: SpaceX, Stripe, Anduril. These are great, great companies. Did I pay the right price for them? I don't know, right? Who's to say? If the economy crashes, I will have overpaid. But I still consider it a little bit of a sweetheart deal, because I think the average person who knows the name of these companies, SpaceX for instance, isn't getting that email saying, hey, you can invest in SpaceX. So it's still a sweetheart deal in that sense; it's not a fully open auction. There are still some restrictions on who even gets to see the opportunity to bid. So that's a type of sweetheart deal.

I think there are a couple more flags. One of them is obviously the traction graph. The revenue graph is the gold standard, right? So if you're seeing an exponential revenue graph, investors are gonna throw money at you. And sometimes it's gonna be BS because it's unprofitable, but it's still a major signal. If it's somewhat profitable and it's an exponential revenue graph, I'm gonna be like, now the question is, why shouldn't I invest? It totally flips the conversation. So that's a signal.

And then passionate early users, right? Even with no revenue, if there's just a lot of passionate usage or a lot of retention. I'm a metrics hound like everybody else, but I guess what distinguishes me is that there are certain areas where I feel like I can make a better prediction from signs that are pre-traction. I'm like, okay, you have no traction to show, but I can see a lot of things looking good in the very early stages.

Okay, so that's a rough summary of my approach as an angel investor, which is not super rigorous or consistent or professional, but it's how I operate.

David Elikwu: So one of the first things you mentioned was this idea of specificity, which you've written about, and I think it comes up in a lot of your objections or critiques of Web3 and Bitcoin in general.

So I'd love to know, what's your beef with Bitcoin and cryptocurrency in general? And actually, maybe this is a better question. I don't know if there's a distinction, but let me know if there is, between what you might be critical of as it pertains to blockchain, as distinct from cryptocurrency, as distinct from Bitcoin specifically. They're kind of three different layers which I think are slightly different and might have slightly different uses, but a lot of the time they're all lumped in together. And then there's maybe web3 on top of that as a name, and then NFTs as an additional layer.

Liron Shapira: Yeah. So what is my beef? Let me be precise about my beef with blockchain. My biggest beef is with this whole idea of web3, and I think a good definition of web3 is blockchain applications other than cryptocurrencies. So Bitcoin by itself is not quite web3, Monero is not quite web3, and Ethereum, if you just look at Ethereum itself, is not quite web3. But applications built on Ethereum, like a Twitter clone somehow built on Ethereum, an Uber clone somehow built on Ethereum, suddenly that is web3. So web3 is basically anything built on blockchain other than cryptocurrency itself. Now, if it's Uber, but you can pay with Bitcoin, I would argue that's not web3 yet, right? That's just cryptocurrency used as payment.

Okay, so with that definition of web3, my beef is that I think web3 is literally an incoherent zero. Not slight value, literally zero. And it's a zero on the level of just logical coherence. Anytime somebody even explains how web3 supposedly creates value, within that explanation there's already a logical flaw. The explanation is so bad that you don't even need to go to market, you don't need to build anything. You can just retire the explanation and admit that you didn't think right when you made that pitch.

And if you look at examples of web3 failures, you can trace the failure all the way back to the initial logic. Like Axie Infinity. The reason Axie Infinity failed isn't because they got hacked, although the hack was ridiculous, a $600 million hack. It's not because they got hacked, it's not because they didn't implement it well, it's not because they got unlucky. It's because on paper they just created a Ponzi scheme. Ponzi schemes blow up for a while and then they crash. That is what happens. And you didn't have to run the experiment. You could have just looked at the blueprints for the experiment, realized it was a Ponzi scheme, and realized that blockchain technology had nothing to offer besides implementing a Ponzi. It was just an implementation layer for a Ponzi, and you don't need blockchain technology for that; you could do a Ponzi on web2.
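The dynamic Liron describes, that a Ponzi "blows up for a while and then crashes", can be sketched with a toy simulation. Everything here is illustrative: the function name, the promised return, and the withdrawal rate are all made-up numbers, not anything from Axie Infinity's actual economics. The point is only that payouts funded by new deposits work while inflows grow and fail when they flatten.

```python
# Toy Ponzi dynamic: early participants are paid out of new deposits.
# All parameters are invented for illustration, not real-world figures.
def simulate_ponzi(inflows, promised_return=0.5, withdrawal_rate=0.2):
    """Return the scheme's cash reserve after each round.

    Each round: new deposits arrive, obligations grow by the promised
    return, and a fixed fraction of obligations is withdrawn.
    A negative reserve means the scheme can no longer pay out.
    """
    reserve = 0.0
    owed = 0.0  # running obligations to participants
    history = []
    for deposit in inflows:
        reserve += deposit
        owed = owed * (1 + promised_return) + deposit
        withdrawal = withdrawal_rate * owed
        reserve -= withdrawal
        owed -= withdrawal
        history.append(round(reserve, 1))
    return history

# Exponentially growing inflows keep the reserve positive...
print(simulate_ponzi([100 * 2**i for i in range(8)]))
# ...flat inflows eventually drive it below zero (collapse).
print(simulate_ponzi([100] * 8))
```

Running it shows the first schedule stays solvent every round while the second goes negative, which is the on-paper inevitability Liron is pointing at: no implementation detail changes the outcome.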

So that's Axie Infinity, and if you look at Helium, it's a similar thing. You know, Helium, the wifi, the LoRaWAN, LongFi, they call it, right? These routers. People were installing these routers at home in order to earn cryptocurrency, and the scheme is called Helium. That also made no sense, because if you wanna reward people for sharing wifi, it's a questionable value prop to begin with. But if you wanna do it, just put the accounting ledger in a regular database. You can pay them with cryptocurrency if you really want, but you don't need a decentralized accounting method; you don't need to decentralize the server that tells you how much people owe each other in this network. The pitches are just mind-blowingly incoherent.
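To make the "just put the ledger in a regular database" point concrete, here is a minimal sketch of what such a centralized ledger amounts to. The class and method names (`Ledger`, `credit`, `balance`) are hypothetical, and a dict stands in for the database table; the point is that crediting hosts for shared bandwidth is ordinary bookkeeping.

```python
# Hypothetical centralized ledger for a bandwidth-sharing reward scheme.
# A plain dict stands in for a regular database table; no blockchain needed.
class Ledger:
    def __init__(self):
        self.balances = {}  # account -> amount owed

    def credit(self, account, amount):
        # e.g. reward a router host for bandwidth they shared
        self.balances[account] = self.balances.get(account, 0) + amount

    def balance(self, account):
        return self.balances.get(account, 0)

ledger = Ledger()
ledger.credit("alice_router", 5)
ledger.credit("alice_router", 3)
print(ledger.balance("alice_router"))  # total owed to this host
```

Whether payouts then happen in dollars or in a cryptocurrency is a separate choice; the decentralized part of the pitch does no work in the accounting itself.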

And the reason I personally got kind of obsessed with dunking on web3 is that there was a disconnect between the caliber of the people and the institutions and the capital that were talking about this idea, and how incredibly flawed the idea was at a basic logical level. The idea should not have passed a high school business class. And here you have people like Marc Andreessen, and the people they hire at Andreessen Horowitz, Chris Dixon. People that, if it weren't for the web3 stuff, I'd say these are great people. I really respect them, they have some insights; I see them as mentors, I respect their successes. But they've completely clowned themselves on this whole web3 thing, and it's still going, right? They still have like $2 billion to deploy in their $7 billion fund, and they're lighting it all on fire. And I'm like, what the hell is going on with web3?

And then just to finish my overview of what my beef is, you move on to Bitcoin, which is not exactly web3, right? It's the original value proposition. And with Bitcoin it's not quite as easy to say the logic is fully incoherent. It is more coherent: look, it's just a thing. It's a protocol that runs, it has some protections against the 51% attack; it's an interesting protocol, the proof-of-work blockchain. And it's gonna somehow hold its value and somehow be used for transactions, or be used as a store of value. It's logically consistent. But the problem is, as a matter of macroeconomics, or as a matter of game-theoretic equilibrium, it's not clear that there's any coherent state where Bitcoin can consistently hold a high value, not be ridiculously volatile, and connect into a legal, well-functioning part of the world. It seems like Bitcoin always has to be the sideshow, that it kind of undermines itself when it gets too valuable, because then the network freezes up, it can't transact very much. It seems like Bitcoin can be a number of different things, it can get into a number of different states, but none of the states is really good and really self-consistent. That's my issue with Bitcoin: it's really cool, and everybody wants it to be everything, but in reality it's hard to imagine it being anything successful.

David Elikwu: So going back to this idea, you mentioned Andreessen Horowitz as an example, but some of the smartest minds in the technology space were extremely bullish about web3.

So how did a lot of those people get that so wrong? Axie Infinity was probably one of the most glaringly obvious ones to me. And I'm just wondering, I am probably not as smart as a bunch of these people, so why does something that looks so obviously flawed to me not resonate with them in the same way? Is it just this sense we were talking about, that there's some element of building great startups which requires some irrational optimism, or is there something else I'm missing?

Liron Shapira: Well, I have my own pet theories about why Andreessen Horowitz has gone down this route and, like, turned evil basically, or turned dumb. Dumb and evil, some combination. I have my theories, and I just wanna separate that out. When I make the claim that web3 is logically incoherent, and that in the things they're deploying capital into they're doing a disservice to their LPs and being irresponsible, those conclusions I feel very confident about; I think I'm on very firm ground. Now, separately, I can go into speculation about what I'm guessing they're doing, and you can take that with a grain of salt, because I don't claim to be able to psychoanalyze them. But here's my attempt.

I think Marc Andreessen's strategy is to build basically the BlackRock of venture capital, where he's just trying to optimize assets under management. And the funny thing is, for him, the crypto fund is already a success. I think they should give up on getting a carry, because they're not gonna have a positive return on the fund, so they're not gonna make any 20% carry. But that crypto fund with the $7 billion, the four different crypto funds, those are walled off from the other funds within Andreessen Horowitz. So it's not gonna take a chunk out of the carry from the other funds. They still have carry from the non-crypto funds, and then on the crypto fund, because they pumped assets under management to $7 billion, their share, the 2% per year, turns into 20% over 10 years. And 20% of $7 billion is $1.4 billion. It's insane, the amount of management fees they're gonna get on that $7 billion.
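The fee arithmetic above can be written out as a quick sketch. This assumes the simplest fee structure, a flat 2% charged on the full committed capital every year for ten years; real fund terms often step fees down over time, so treat this as an upper-bound illustration, and the function name is made up.

```python
# Back-of-envelope VC management-fee math from the conversation above.
# Assumes a flat fee on committed capital every year (a simplification;
# real funds often reduce fees after the investment period).
def total_management_fees(aum, annual_fee_rate, years):
    """Total fees collected over the life of the fund."""
    return aum * annual_fee_rate * years

# 2% per year on $7B for 10 years:
fees = total_management_fees(7_000_000_000, 0.02, 10)
print(f"${fees / 1e9:.1f}B")  # prints $1.4B
```

So under this simple model the 2%-per-year fee on $7B compounds (additively) to roughly $1.4B over the fund's life, which is why the fund can be "a win" for the manager regardless of returns.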

So that's a win, right? They're pocketing these management fees while they're running the fund into the ground. They're burning, you know, 40% of the capital is destroyed, and they're pocketing hundreds of millions, maybe even over a billion, in management fees. So from Marc Andreessen's perspective, as long as he kind of stands back and says, look, LPs invested in the thesis that they want exposure to crypto, we gave them exposure to crypto, we gave them the Bored Apes at a $4 billion valuation, we're number one at giving them exposure to crypto, and we deserve this management fee, then everything's fine. So for Andreessen, it's a win. Of course, he's completely destroyed his credibility with somebody like me, or with people who think my arguments make sense. But at the end of the day, if somebody is not following super closely and Andreessen Horowitz wants to invest in their company, are you gonna take the check? You should still consider taking the check, right? Even if the guy is ridiculous, it's still money, if you retain enough board control, if you feel comfortable with it. So look, Andreessen's a smart guy, he's a successful guy, and he probably knows what he's doing to some degree.

His employee Chris Dixon, I think there's more of an issue with. I do think Chris drinks his own Kool-Aid on a lot of the stuff he says about web3. I do think there's an intellectual limitation there. As far as I can tell, he struggles to process his own claims and see their incoherence.

David Elikwu: So incoherence is a strong word, which you've used a few times, but I think there's a flip side to it, not so much as a criticism: a lot of these descriptions are very surface level when you look at them. Balaji comes to mind as a good example, and what I find really interesting is that he can talk about something, and I have listened to him talk about some of his ideas for probably a grand total of 16 hours, because every podcast interview he does is three or four hours. And I can come away from all of that and not be able to relay what he just said. I can't explain to you why this thing works, because he throws in a lot of really interesting stuff, a lot of cultural references. He'll pivot between mathematics and physics and biology, and everyone that speaks to him comes away with this sense of how smart he is. But I still can't explain to you what he's saying, you know. He's got this current idea, The Network State, and I just can't explain it succinctly in a way that still resonates and makes sense, apart from when he's saying it.

Liron Shapira: Yeah. If you search my Twitter, people were wondering why I had a few months where I was kind of obsessed with Balaji. I was on a Balaji kick on Twitter, and people were like, okay, leave Balaji alone. But the reason was that I shared your feelings. I'm like, what's going on with this guy? Why does he go on all these podcasts, ramble for four hours, and then I don't have any coherent takeaway? What is actually going on? What is he saying? Let me carefully listen to what he's saying and try to unpack what's going on here.

I finished my Balaji kick because I think I got to the bottom of it. If you look at my Twitter, it all came to a head when I broke down one of his most recent podcast interviews. He did an interview for a16z's podcast, and I really broke it down. And the pattern you can see in my clips is that the interviewer asks him a very straightforward question and he just completely ignores it, completely ignores the question, and starts rambling. He rambles for like 24 minutes. And within the ramble I found one or two sentences that kind of relate to the question. Then the interviewer tries to bring him back on track, asking a very simple follow-up, and again, a 20-minute ramble in which I could not find an answer to the question. So it's not like he's jumping off and making associations. He's in his own world.

And you know what else is crazy? The interviewer is buying it. The interviewer's like, wow, I'm so lucky to be able to hear this ramble. We're in the presence of greatness here.

So my conclusion was that interviewers need to uphold a higher standard, where when they ask a question, they really do need to make sure that the thing being said, at least after they edit it, is logically connected to the question they asked.

David Elikwu: Thank you so much for tuning in. Please do stay tuned for more. Don't forget to rate, review and subscribe; it really helps the podcast. Follow me on Twitter, and feel free to shoot me any thoughts. See you next time.