David speaks with Sterling Crispin, an Artist and Software Engineer.
Sterling is a conceptual artist and software engineer who works with machine intelligence, generative art, and techno-sculpture. His artwork oscillates between the computational beauty of nature and our conflicting cultural narratives about the apocalypse.
They talked about:
His exploration of programming and computer science
Conceptual art, generative art, and the changing dynamics of the art market
The fascinating world of Apple's latest innovation, the Apple Vision Pro
The impact of artificial intelligence (AI) on society and the future
Various aspects of AI, including its potential to erode critical thinking
Listen in your favourite podcast player
Watch on YouTube
Connect with Sterling:
Twitter: @sterlingcrispin
Website: sterlingcrispin.com
Event: Flourish | https://www.sterlingcrispin.com/flourish.html
Show notes:
0:00 | Intro
4:27 | Mixing art and technology
8:25 | How photography evolved as an art form
10:32 | Rediscovering modern masters like Picasso
12:14 | Photography and generative AI
15:00 | Unconventional art
16:45 | Two ways to judge good art: judgement and effort
23:15 | Conceptual Art through the Ages
28:11 | Evolving value and recognition of artists throughout history
29:45 | NFTs and the financialization of art
33:20 | Computational beauty of nature
38:03 | Future, pessimism, and the influence of mass media
40:35 | Immersive experiences and neurotechnology advancements
44:32 | Innovation behind Appleβs Vision Pro
46:38 | AR, VR, and brain-computer interfaces
50:05 | Balancing connection and immersion
52:33 | The impact of smartphones on social interactions and attention
55:21 | Effects of extended reality and augmented cognition
58:25 | How technology changes us
1:02:02 | Competitive vs. complementary cognitive artefacts
1:05:30 | AI's role in executive-level thinking
1:16:59 | AI and personalized healthcare
1:20:05 | Limitations of human intelligence
1:23:03 | Why human judgement often sucks
Mentioned in the show:
Ray Kurzweil | https://en.wikipedia.org/wiki/Ray_Kurzweil
Carnegie International Show | https://en.wikipedia.org/wiki/Carnegie_International
Andy Warhol | https://en.wikipedia.org/wiki/Andy_Warhol
Banksy | https://en.wikipedia.org/wiki/Banksy
Midjourney | https://en.wikipedia.org/wiki/Midjourney
Marcel Duchamp | https://en.wikipedia.org/wiki/Marcel_Duchamp
Willem de Kooning | https://en.wikipedia.org/wiki/Willem_de_Kooning
Yoko Ono | https://en.wikipedia.org/wiki/Yoko_Ono
Painting for the Wind | https://www.moma.org/collection/works/289486
The rise of the long-form generative art | https://tylerxhobbs.com/essays/2021/the-rise-of-long-form-generative-art
Van Gogh | https://www.theknowledge.io/issue64/#:~:text=The Van Gough method
The Andy Warhol Foundation | https://warholfoundation.org/
Art Basel | https://www.artbasel.com/
Richard Dawkins | https://en.wikipedia.org/wiki/Richard_Dawkins
Future Tense Project | https://csi.asu.edu/category/projects/future-tense-project/
Marty McFly | https://en.wikipedia.org/wiki/Marty_McFly
SpaceX | https://www.spacex.com/
Sistine Chapel | https://en.wikipedia.org/wiki/Sistine_Chapel
Steve Jobs | https://en.wikipedia.org/wiki/Steve_Jobs
Fusiform gyrus | https://en.wikipedia.org/wiki/Fusiform_gyrus
Mark Zuckerberg | https://en.wikipedia.org/wiki/Mark_Zuckerberg
Cognitive artefacts | https://www.theknowledge.io/issue41/#:~:text=Thepower of cognitive artefacts
Dan Ariely | https://danariely.com/
Metaverse | https://en.wikipedia.org/wiki/Metaverse
Stand-your-ground Law | https://en.wikipedia.org/wiki/Stand-your-ground_law
About David Elikwu:
David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.
Twitter: @Delikwu / @itstheknowledge
Website: https://www.davidelikwu.com
YouTube: https://www.youtube.com/davidelikwu
Instagram: https://www.instagram.com/delikwu/
TikTok: https://www.tiktok.com/@delikwu
Podcast: http://plnk.to/theknowledge
EBook: https://delikwu.gumroad.com/l/manual
My Online Course
Career Hyperdrive: https://maven.com/theknowledge/career-hyperdrive
Career Hyperdrive is a live, cohort-based course that helps people find their competitive advantage, gain clarity around their goals and build a future-proof set of mental frameworks so they can live an extraordinary life doing work they love.
The Knowledge
Newsletter: https://theknowledge.io
The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to help you think deeper and work smarter.
My Favorite Tools
Descript: https://bit.ly/descript-de
Convertkit: https://bit.ly/convertkit-de
NordVPN: https://bit.ly/nordvpn-de
Nutmeg: http://bit.ly/nutmegde
Audible: https://bit.ly/audiblede
Full transcript:
Sterling Crispin: We're kind of already cyborgs with our phones. Like, I don't remember anybody's phone number, it's all just on there. They're a secondary part of our brain at this point where we store things that we wanna remember, but we don't necessarily need like, short term access to, right? So it frees up our brain to think about other things.
But yeah, the second-order effects of that stuff are hard to speculate on. I know people that have spent time in virtual reality playing zero gravity games where you're on the space station and you can, like, push off of virtual walls and float in different directions, and then, an hour later after playing the game, they take the headset off and they go to push off their desk and nothing happens. It's like their brain has adapted to this new reality and this new set of physics, and they have to readapt back to reality as they take it off. And I think there's gonna be a lot of, um, strange things like that.
David Elikwu: Hey, I'm David Elikwu. And this is The Knowledge. A podcast for anyone looking to think deeper and work smarter. In every episode I speak with makers, thinkers, and innovators to help you get more out of life.
This week I'm speaking with Sterling Crispin. Sterling and I had a really incredible conversation.
This is probably one of my favorite recent conversations which was essentially at the intersection of art and technology, which is where Sterling has spent the bulk of his career.
So we talked about his background getting postgraduate degrees, both on the engineering side and on the artistic side, and his journey both as an artist and as an engineer.
So I think an accurate framing of the conversation we had is that it was essentially about the technology that we use to create art and to build things in the world, and how the act of using that technology can change us as humans.
So we started the conversation talking about art and how the medium of creating art has changed over time. We talked about the evolution of photography, the evolution of art and painting, and how things like generative art are changing the art scene and how NFTs have changed the way that artists interact with their audience and interact with the financial aspect of being a successful creator.
And then we started talking more about the technology side of things. Sterling has worked in AR and VR for a number of years. He contributed to building the AR Spectacles at Snap and also spent three and a half years working with Apple, contributing to a lot of the neuroscience that went into the Apple Vision Pro, which has been of particular interest to me and a lot of people in the technology space.
And so we talked about the proliferation of wearable forms of technology like AirPods, the Apple Watch, and the Apple Vision Pro, and how using these things could change the way that we think for the better and potentially also for the worse.
We also talked about what the future looks like as we rapidly adapt and move towards a world that is far more enabled by artificial intelligence, augmented reality, and virtual reality, and we went back and forth on a few different visions of the future.
But we touched on so many things that you're definitely gonna come away with a strong idea of the many possibilities that lie ahead and a lot of the things that we'll need to think about as we continue to innovate and collaborate on new and inspiring technologies.
You can get the full show notes, the transcript, and read my newsletter at theknowledge.io.
Every week, I share some of the best tools, ideas, and frameworks that I come across from business psychology, philosophy and productivity. So if you want the best that I have to share, you can get that in the newsletter at theknowledge.io.
And you can find Sterling on Twitter @sterlingcrispin and check out his upcoming project Flourish, which we'll link to in the podcast show notes or the description if you're watching this on YouTube.
One last thing before we kick off. If you love this episode, please do share it with a friend, and don't forget to leave a review wherever you listen to podcasts, but specifically on the Apple Podcast app because it helps us tremendously to reach other people just like you.
The place that I thought made sense to start is, so I noticed that you have both a master's in fine arts, so you have the MFA, but then you also have the master's in multimedia engineering. So I guess it's kind of two very different sides. I know, like, I've heard you talking a little bit about your background. You grew up in Hawaii and I think you spent some time in different places in the US as well. I would love to know what were the data points that you saw that led you on this journey?
Because what I'm specifically interested in is, I think you mentioned, okay, you started learning to code quite early as a teenager, and so there's some of these building blocks that might have taken you directly down, let's say, the engineering route, but then you still ended up doing the MFA. So how did that come about?
Sterling Crispin: Yeah, I mean, that's a good question. I've always, you know, been really creative and yeah, spent a lot of time on the computer as a young kid and yeah, got into programming and stuff, but at the time, you know, when they were teaching computer science to high school students in the early two thousands, it was like very not creative and not fully expansive as to like, everything that's possible with code these days.
So it just seemed like a strange career route and I was like, I wanna do something that's really creative. And so, yeah, I went to art school and, you know, really enjoyed that experience and just thinking conceptually and, you know, making art with ideas and being super experimental. It was a painting and drawing degree, but I did very out there kind of stuff, like, you know, working with every medium, turning in sculptures welded out of metal, sculptures made of little cakes from gas stations.
I did an artwork once. We had this teacher that was giving us crazy assignments. You have to come in tomorrow morning, you have to wake up and create an artwork on the way to class. You can't start making the artwork now. You have to come up with something as you're coming to class.
And I was like, oh, this is awesome. There are all these dandelions in the field that are putting their seeds out. I'm gonna collect up all these dandelions and blow them and have them go across the wind and it's gonna be great. But it started raining and so there was nothing I could do. I ended up like microwaving an ice cream sandwich and pouring it into a shoe and passing that around the class. It was like really, really bizarre artwork that I was making at the time. But it really helped me, like think non-linearly and like sort of allowing my creativity to go in a lot of different places. And yeah, I ended up reading all of Ray Kurzweil's books around the same time thinking about the singularity.
And you know, like the computer in all of our pockets in the shape of a phone is like a hundred million times more capable than the computers that took astronauts to the moon, right? And that exponential growth of technology is setting us on a path where a thousand-dollar computer in 2045 is gonna be equivalent to every human brain on the planet combined.
And it's like, what happens to society at that point? You know, it's a really mind-blowing thing and he lays out a really strong argument for it. So at the time I was like, this is what I want to be doing with my life. I need to go back to school and really get serious about programming and computer science and stuff. And that, I mean, Ray Kurzweil really set me on that journey.
David Elikwu: Okay. That makes a lot of sense. Just to step back a bit very quickly, the assignment that you mentioned, was this in high school or was this when you were doing the MFA?
Sterling Crispin: That was my BFA. It was a very, like, experimental, small kind of private art school. But yeah.
David Elikwu: Yeah, that's exactly what I was gonna ask about, because I can't imagine that type of assignment in a lot of other places. What I would also love to know is, I don't know the extent to which you are happy talking about, I guess, your family background. But I'd love to know if your creativity was something that was encouraged, because even the project that you decided to go with is very out of the box.
I don't think it's, you know, something that anyone might typically think of immediately as their first thought to submit, but I think also more specifically, in terms of how you were conceptualizing art at that time. And I think this is another thing where very often people go to school and what you get taught is that art is very, not rudimentary, but very traditional, right?
You see people drawing with graphite, you see people painting on canvases. It's, you know, this is art. These are the great masters of the past, and here is what you're supposed to be doing. And there's not always a lot of room to experiment and think non-linearly about what forms art could possibly take.
So I'd love to know what maybe took you in that direction as well.
Sterling Crispin: Yeah, that's a good question actually. My grandfather was a really prolific photographer. His name was Jim Milmoe, and he was the one that suggested that I apply to that school. And, you know, he was taking photographs as artworks at a time when photography was not considered an art form. It was like, you take photographs of artworks as their documentation, and, like, photography isn't artwork, you know? So he was doing really creative things with it. And, you know, slowly over his lifetime it became recognized as a full-on art form. But I think my mom being raised by a prolific photographer, you know, showed her a lot about art.
She was taking us to art museums. And, you know, I remember being a teenager and seeing the Carnegie International Show in Pittsburgh, which is like a big biennial kind of show showing like the most radical artworks all over the world. And it was like, you know, not exactly traditional work, like really experimental out there stuff.
And I mean, it took me a while, you know, as a person growing up, like I remember even thinking about my grandpa's photography. Like, oh, photography's so easy. You just point the camera and click and like these beautiful images come out and like, no big deal. And then I took this black and white photography course where we had to, you know, wind all of our film and develop it all in the dark room.
And I was like, this is actually really, really, really hard. So I think going through that process by hand helps you develop that appreciation of different mediums. And then also, you know, just being exposed to art history in general. Like, I feel like most people, they kind of have a rough understanding of art history, but it kind of fades out right around Impressionism, and then they might know like, oh yeah, there's Warhol and Banksy and there's a couple of other people in the last 120 years.
But yeah, I think people's like aversion to really experimental stuff just comes from like a, a lack of context on what else has been going on in art history.
David Elikwu: Yeah, you mentioned so many interesting things. There's so many different ways I can take things from here, but I'd love to know maybe how you might define art specifically because I think, just the last part of what you were mentioning.
First of all, that is very similar to part of my art history knowledge. But I know it's the same for a lot of people, but I think more specifically, one of the paradoxes that I find very funny that comes up a lot is that people always assume that Picasso is super old. And that's because most of the art that you are taught is from people that lived a very long time ago.
And so actually even when people are hearing about artists that are relatively contemporary, they almost put them directly in the box of, here are the old masters, and it just must have been at some vague time in the past. So if it's painted, it's automatically old, and if it is something strange, then maybe it's contemporary or maybe it's brand new.
And there is, like you say, this gap in between where people don't really know what people were doing or how art transformed.
So I'd love to know maybe your thoughts on, I guess, how to conceptualize art, and specifically also, because this is the other part that you touched on, there's this idea that with photography, when you take a photo, it might seem easy now relative to, okay, you take the photograph on a film camera, you're having to develop the film in the darkroom. You're having to do all of this work. And I was actually writing something today that I posted, which is just this idea about the connection between effort and output, and how some of that is changing now that we have generative AI and generative art, where people are just going on Midjourney and they type in a prompt and create some artwork. And it's, what is the extent to which you could call this art that you have made, in the same sense, when there was none of the effort that is connected to it?
And yeah. So I know I've just thrown a lot of different thoughts at you, but I'd love to know how you would respond to that.
Sterling Crispin: Yeah. Working from the last point backward, I think the connection between photography and generative AI is super interesting, actually. If you think about a photographer walking around New York City looking for photographs to take, right? Like, they didn't create the city, they didn't create all the people, they didn't create the culture, they didn't create the fact that that person was busy today and they're running late and they dropped some paperwork, and you were standing at that corner at the right time and, like, you decided right there, like, oh, this is art. You know, I'm taking this picture.
And like someone could walk to that exact same intersection, point the camera in the same place, decide that that framing is also their photograph and take the same photograph, right? And at the end of the day, photography is like largely an act of starting with the world and like doing a curatorial process of saying like, okay, this moment is artistic choice I'm making, right? And I think generative AI is really similar in a way that like you have this sort of open conceptual landscape of things you could depict and the act of you like exploring that space and picking something that you are deciding is this artwork and is this final image is that creative act.
So, to me it's just obviously an art form, even though I totally understand, like, if you went to school specifically for, let's say, illustration and like your career has been like essentially being prompted by clients and then using your imagination and the craft that you've honed to, you know, like illustrate images and give them back to people. Like, this is a very threatening new tool, but like the history of civilization is the history of automating human labor and making things easier, and like changing the nature of work and like shifting what it is that people do. So I totally empathize with those people, but at the same time, that's kind of how it goes. And like, you know, I write a lot of code and the same thing's happening to coding so it's, it's interesting.
Going back to your earlier point about what art is and how I define it? You know, Marcel Duchamp did a really interesting piece where, you know, essentially to get your work seen by the public, there was this institution that was like, we are the deciders of what art is and isn't. And you submit your work to the salon and you're either in the show or you're not in the show. And if you're rejected, like, you're not an artist. And, like, even the Impressionists at the time, the Impressionist painters, they were totally rejected. They're like, you're making horrible paintings. These look terrible. This is like a violence on nature. We hate your work, you know? There was a pushback against that. And, you know, artists started organizing their own shows that were supposed to be more inclusive. And it's like, hey, if you've got art and you wanna show it, submit it to us. This isn't the salon.
And Duchamp, or arguably, there's some argument in art history that it was actually a woman and not Duchamp, I can't remember the entire history behind it, but someone submitted this toilet, a urinal, that was signed as an artwork, right? And it was a big controversy at the time, and they had a curtain around it, like, oh my God, pull the curtain back and see the controversial art that's in this show. And that was in, like, 1905 or something. And since then it's kind of blown up the container of what could be art, and really, if someone decides that something is an artwork and puts it into a context of art, then it is art. And I know that that frustrates some people and they're like, oh, that's not a definition anymore. If anything can be art, then nothing's art. And it's like, this is how it is. Not everything is art, but if you sit down and consciously choose, like, okay, this is an artwork, then we can sit down and talk about it, you know? And I think that the question, like, is this art? is sort of a dead-end question. You just end up with a yes or no answer and then you move on, right? But if you're like, who made this? Why did they make it? What did they intend behind this? What are the cultural forces that must have been around them that led them to make this? Is this political? You know, there are so many other questions you get to ask to really dig into what it is that you're looking at and try to understand it.
And I feel like, just letting go of the pressure of is this art or not? It kind of doesn't matter, you know?
David Elikwu: It's funny. So the part you just touched on is very similar to the point that I was making, but I wonder if I expanded my point if you would then disagree with me, cause I'm not sure if you would. So the view that I was taking, and I think in line with what we were just discussing, right? I think there's two paradigms by which people have been conceptualizing art. There's the effort paradigm, like how much effort did this take? How much work did you have to do to get to the end product? And then the other paradigm is judgment. Like, what was your intention? How much thought actually went into the creation of this?
And so sometimes when something looks like low judgment, low effort, people will say, oh, it's not art. You just put a toilet there. You just threw some brush strokes at the wall. You didn't think about it deeply and it didn't take you that much effort to do, so this is not art.
But I think what is really interesting about the moment that we are getting into is, now that you can create art without the same amount of effort in the same way that before, photography used to primarily be about effort, you had to go somewhere physically, you had to set up a huge piece of machinery. At first, you know, you've got your pinhole camera, so it's a whole setup. You have the light, you have all sorts of stuff. But then even going after that, when you're developing film, that takes a lot of work. It takes a lot of patience, it takes a lot of skill. So you're still very high on the effort paradigm. And then when you move from there to mirrorless cameras, okay, so it's not the same amount of effort. You maybe do spend some effort doing like, post-processing and learning Photoshop, et cetera, but you can at least take the photo.
And so now it's a lot more about judgment, I would say, at least in how people judge photography as art. And so it's like, okay, you're using the world as your canvas, but what were you trying to create? What was the judgment here? What were you trying to say with this photo that you took? And there's so many choices that go into that. There is, what lens did you choose? Is it wide angle? Is it narrow? And all of these tiny little creative decisions are what give you the end product of something that you would call art.
And I know I'm rambling a lot here, but I think this all goes towards the point. I think the point is that, at least for now, the end product is still a product of judgment. And so when you judge the art, you are judging the thought process of the artist. And so I guess the way that that paradigm changes, now that you can go on Midjourney and type a prompt and just create something from scratch, is that we are still relatively in the low-effort side, where the effort is still there in terms of, okay, you have to learn to prompt and you have to learn to be creative within those constraints. But the barrier is very low. So anyone can just learn to type a few things from a Twitter thread and create something that looks, wow, amazing. So you get the output without necessarily a lot of the same effort.
But then on the other side, I'm wondering what you think of the judgment side of the paradigm, where I do think there's an extent to which, you know, creators will always think creatively within constraints. And so artists will always push the boundaries of what it is possible to create. And so what someone could do with a simple prompt might not be the same as someone that spent a lot of time with prompt engineering and trying to figure out, okay, I can use all these different phrases, all these different wordings, and get something really dialed in, in a different way.
But I think maybe the part that I find interesting is there's still an extent to which you don't get to choose everything, you get to choose when you are done with generating. Like you could keep generating, but you don't necessarily get to choose everything that goes into the output. And so there is an extent to which people could judge or say that, you know, you are not creating art so much as like shaping it. It's almost like, a lot of the old masters might have had like an apprentice. You are more like the apprentice, watching the master do work and giving feedback and commentary on the extent to which it's done as opposed to the one holding the brush.
Sterling Crispin: Yeah, I think those are interesting points. I mean, I still think it's almost like photography in a way. It's like, I just chose that this moment is my artwork, you know? And especially with an iPhone now, it's like I just pull this thing outta my pocket, hit go. I don't have to worry about exposure or anything. I can crop it after the fact, you know?
Yeah, I didn't create the world or New York City, but I could just pop my phone outta my pocket and make an artwork, you know? But yeah, like effort and judgment and stuff, those are all totally good points. And I guess I have like some empathy for just the fact that other people have different value systems.
You know, like I have two art degrees and I spent a lot of time in that world. And the contemporary art world has a very specific set of things that they value and things that they think are, like, the real values and how you're supposed to understand art. And then everyone else's understanding of art is, like, some sub-form of artwork that normal people like or something.
I just try to have empathy that, like, it's all human creativity and nothing's really, I mean, it's good to judge things and try to understand them, but I just try not to come at it from a point that things are, like, inherently bad because they don't come from some contemporary art world value system or something, which I think some people get caught up on. Or vice versa, like, the public has their own value system and they see contemporary artwork. They see Maurizio Cattelan duct-taping a banana to a wall and lose their minds. Like, this is just vile non-art artwork, right? So it kind of goes both ways.
But I had an intense experience with Willem de Kooning's work when I first saw it. I was just like, this is grotesque. Like, these paintings of these women that this guy is making, this is horrible. And my mentor at the time was like, you should go get a book of all of de Kooning's work and just look through all of it and spend a couple of days with it. He's still not my favorite artist, but compared to that initial judgment, that initial knee-jerk reaction, I understand the things that he was trying to say with his paintings, and I understand the aesthetic and the color choices he was making, even though he is still not my favorite artist. Like, I kind of get that.
And I feel like the pushback against generative art, like if you hate something, it's probably a good time to look deeper at it and try to understand why and see if you're missing something and see if there's like opportunities there for you to like expand your horizons, you know?
But it's a tough thing to do, and I totally understand why people are just like, oh, nope. Too easy, not art. Next, you know.
David Elikwu: I'd love it if you could explain, so you would describe yourself, I think, as a conceptual artist, if you could explain what conceptual art is, and maybe also a definition for generative art. Because I think even for me, I might be using it in the wrong context sometimes. I think there are some contexts where people say generative art to describe, you know, the type of art where you can put in some code and it's using some mathematical computation to create art, versus, you know, you click a button and something happens and you don't know what happened.
Sterling Crispin: Totally. So, for me, conceptual art is art where the focus might primarily be the idea behind it. Whether that's super ephemeral, like some of Yoko Ono's work. She has this piece called Painting for the Wind, and I think it's a few sentences that go, like, hold a bag of seeds and cut it open and watch them fall through the wind. That's a painting for the wind. She has this whole book of kind of like poems that are conceptual artworks you just read and think about. And that's the artwork, right? So there's kind of that historical context of conceptual art.
But for me, in the way that I approach it, it's like I just let my curiosity guide me to whatever makes the most sense, and that kind of form and content relationship where you have an idea and you're like, okay, what form could best reflect that content and that concept, and where does that take me?
And I work across like a lot of different mediums and I don't necessarily have like one aesthetic in my body of work that I'm always presenting. It's like a kind of a spiral of interests and aesthetics that kind of like come and go and, and revisit themselves.
Yeah. As far as generative art goes, it is like a very large container of different types of artwork, right? It could be that you're using, you know, Cinema 4D or Blender or some, you know, 3D rendering engine and using someone else's algorithm that some computer scientist wrote and clicking a few buttons and watching things happen. Or you could be starting with a total blank slate and writing all of your own code and kind of, you know, really carefully crafting every part of an equation to make something come out of it.
And a lot of that type of generative art has been interesting to see over time. People often write a program to make art. Some images come out of it, they might save them, they might change the program. Some different images come out, and in three weeks you might not even be able to get back to how it used to be. And after the course of a month, you might just say, oh, this one sequence of six images that came out of this program, these six images are the artwork. And the program was just a tool for me to get there. And there's a whole genre called long-form generative art that's kind of come out of the blockchain space, where you write a program and transactions are used as a random seed for your algorithm to produce artwork.
And so everything that the code produces is like a new instantiation of that artwork. So each piece is an artwork, but then as a collection, you're kind of seeing everything the algorithm does as an artwork and there's no curation after the fact. So once you create, as things are getting generated, you can't go, oh, these two look terrible. I'm taking them out of the collection. It's just like you have to get the program to a point where everything it produces is interesting. And that really is like its own sub genre of work that's very specific. And the way that you approach creating that work is like really, really different.
And that's been a super interesting journey for me to just like, engage with this new context that like really changes what the work is and how it's made.
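As a rough sketch of the long-form mechanic described above, where a transaction hash acts as the sole source of randomness and every output joins the collection with no after-the-fact curation, here is a minimal, hypothetical Python example. The parameter names are invented for illustration and not tied to Art Blocks or any specific platform.

```python
import hashlib
import random

def generate_piece(transaction_hash: str) -> dict:
    """Deterministically derive one artwork's parameters from a transaction hash.

    The hash is the only source of randomness, so the same hash always
    reproduces the same piece, and the artist cannot curate outputs after
    the fact -- every hash that comes in becomes part of the collection.
    """
    # Turn the hash string into an integer seed for the pseudo-random generator.
    seed = int(hashlib.sha256(transaction_hash.encode()).hexdigest(), 16)
    rng = random.Random(seed)

    # Sample the (made-up) visual parameters the drawing algorithm would use.
    return {
        "palette": rng.choice(["dawn", "dusk", "monochrome", "neon"]),
        "symmetry": rng.choice([2, 4, 6, 8]),
        "density": round(rng.uniform(0.1, 1.0), 3),
        "seed": seed % 10**8,  # short form, e.g. for display
    }

if __name__ == "__main__":
    # The same (made-up) hash always yields the same parameters.
    print(generate_piece("0xabc123"))
    print(generate_piece("0xabc123"))  # identical output
    print(generate_piece("0xdef456"))  # a different piece in the collection
```

The practical consequence, as Sterling notes, is that the curation has to live inside the program: it must be tuned until everything it can produce is acceptable, because nothing can be pulled from the collection later.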
David Elikwu: Yeah, I think that's a really interesting aspect that you mentioned as well, and I can't really think of an analog in prior times where art can have this social element and also an element where you are working with something that is not out of your control, but is, I guess, in addition to you or beyond you, in the sense that there are some contexts where you can code and create some kind of output.
I can't really think of an analog, apart from maybe two painters working on a single painting. Even with photography, which we used as an analogy, you're still very much in control of the output. There's not much that could surprise you in how something comes out unless there was a mistake. Whereas with generative art, and working with computers now almost like partners, it is very interesting to see how that is pushing the boundary of what we can create and what we can expect, and the ways that we can interact with each other, and then also with computers as an intermediary as well.
Sterling Crispin: Yeah, totally. I think there's so many ways to think about it. Like, oh, this program is kind of like my studio assistant. Or it's almost like the chaos that abstract painting might invite where you kind of, you sort of know what you're doing and then something else happens and you kind of have to like roll with it.
So yeah, it's a super interesting medium, for sure.
David Elikwu: Yeah, I know that we've been talking about art a lot, and there's a lot of other things I'd love to discuss, but one more question that I had on this, which I thought would be quite interesting, just cause you mentioned NFTs. I'm interested in what you think about the relationship between artists and money and financing.
And I think, you know, throughout history we've seen that evolve as well, where you have some of the people that we now say are, oh my gosh, this is one of the greatest artists to ever live. But during their lifetime they could hardly sell a single painting. I think even Van Gogh could barely sell any of his paintings while he was alive.
He did over 900 paintings. First of all, he doesn't have 900 famous paintings, he has a handful. And out of that handful, they all sold after he wasn't here. And now in retrospect we say, wow, what a great artist. So that by itself is a very interesting paradigm, I guess. But then also, now that you have NFTs, there is this financialization aspect which is far more immediate, I think, unless, let's say, you are doing commissioned work, something where a commission is immediate and it's done prior.
I think it's very interesting that now you are seeing a shift to a process where there's almost not necessarily an expectation that the art will be sold, but the financialization aspect is much more closely connected to the creation of the art. So I'd love to know what your thoughts are on that and how that might change, if at all, people's creative process in a sense.
Sterling Crispin: Yeah, yeah, definitely all interesting things to think about. I mean, for me, it's been really interesting because, you know, coming from kind of the contemporary art world and the art fair scene and the art exhibit scene, like, I've been making digital art for a long time and there just hasn't been a market for digital art, right?
It's like I've had to take the digital art that I'm making, in programs that I'm writing, and try to turn it into what I call "discrete fine art objects." Just these singular things that you could, you know, point to and say, like, okay, this is a unit of art that I can buy and, you know, put in my collection or bring home or whatnot.
So, there were people buying art on USB sticks, but, like, pretty much nobody, right? So the introduction of NFTs, I think, has been really awesome to be able to create a market for that. And they operate a lot like a certificate of authenticity would. Like, a Warhol can be perfectly replicated, so the fact that you have a thing from the Warhol Foundation saying it's the real Warhol is what's valuable. And similarly, you know, NFTs operate in the same way, and they create this 24/7, global, unmediated market, right? Which I think is super exciting.
There's definitely huge new ways that that market has evolved. You know, people sort of like playing to the audience and creating things that they think are gonna sell well to collectors. There's tons of people just gambling and other people making things for them to gamble on. So that's like another part of that market happening. But the financialization of art has been like a huge part of art for a really long time, right?
Like, a lot of huge collectors buying artwork at art fairs and stuff, buying Van Goghs and whatnot, they might have hundreds of millions of dollars and they might say, like, okay, 10% of my portfolio is gonna be in wine and 10% of my portfolio is gonna be in art, and I'm just gonna do these more speculative, high-risk, strange things with my portfolio, cause I already have money in real estate and stocks and, like, I'm treating art like a financial asset anyway. There's tons of that. And, like, I don't know, like $5 billion worth of money changes hands at Art Basel Miami in, you know, a three-day time period. So the financialization and playing to the audience and figuring out things that people can gamble on in the art world are definitely a thing.
And, yeah, it's interesting to see what's successful in the market and what doesn't work. And at the end of the day, I'm super interested in memetics and how everything is a meme. Like, Richard Dawkins, who coined the term, talks about memes as the cultural version of genes, right? So, photography is a meme. The fact that you can even do that is an idea that gets passed between people and passed between generations. And it's something that's survived over time, right? So the idea that Van Gogh is a valuable painter and that these objects are valuable is also a meme.
And it might have taken a rich collector who got in early on those paintings to then go pay an art historian to write articles about how important Van Gogh's work was, so that their collection could accrue in value. And there are all of these games people play through storytelling and, you know, I'm on the board of directors at this museum and I happen to own a bunch of work by this artist, and I'm gonna advocate for them to be in this show, which then increases the prominence of my collection. It's all kind of Game of Thrones in the landscape of ideas and culture that makes all this stuff happen.
David Elikwu: I'd love to hear maybe a bit more about your work. You've mentioned some in bits, but you've done a lot of really interesting work. I think you had one project, which was Future Tense. So you had the Future Tense project, which I saw, which was super interesting. And then you've also got Flourish, which I think is coming up with Art Blocks.
I'd love to hear more about how you think about the work that you create and, you know, the judgment that goes into it, and also a lot of the ideas that you talk about. Cause I think I've heard you talking about, you know, the future and what the future looks like, et cetera, as being one of the influences in your work.
Sterling Crispin: Yeah, so I think I have these two main undercurrents in my work. One is the computational beauty of nature. Like, I grew up in Hawaii and I have this deep reverence for the beauty of nature, and having written code for nature simulations, you just get in awe of how complex and incredible the natural world is. And I've made a lot of work that kind of touches that. Like, Flourish is a piece that is indirectly about that. It's a project about making generative architectural drawings for ornamentation. They're kind of inspired by the late 1800s and early 1900s, these architects that would do conte crayon drawings of primary geometry, recombining and creating these more elaborate forms that would be carved out by masonry artists. And I just found those drawings to be sublimely beautiful and wanted to see, you know, what that would look like through code and through software, kind of reimagined in this more generative, automated way. So that's a project I've got coming up in August through Art Blocks.
And yeah, I've made a bunch of other work kind of like that, and it's a nice grounding part of my practice. But I'd say the other underlying current comes just through the anxiety of living in the modern world, right? Like, somehow there's geopolitical instability and the dollar is weakening, and we're growing food in California with 10,000-year-old groundwater. And there are so many of these collapse narratives that are out in the air, right? But at the same time, you know, AI is developing very rapidly, and then we've got driverless cars and all this stuff, and it's kind of like, how is this incredible growth seemingly inevitable, and also the collapse of the global environment feeling inevitable? Like, how are these two things happening at the same time and in tension with each other?
So yeah, Future Tense was a big collection of works. It was my first solo show in New York, where I was kind of taking these anxieties and turning them into objects, and just thinking about these conflicting cultural narratives and trying to make objects that sort of represented them in different ways. So one of the pieces is a surfboard that I called an escape vehicle, but it's this existential escape vehicle. Like, I love the idea of, you know, if New York City collapsed, somehow just surfing out into the ocean, like at the end of the Point Break movie, if you saw Point Break, and just kind of letting the apocalypse take you. And the surfboard's got graphs about the CO2 levels in the atmosphere and, you know, AI prophecies and predictions about the growth of AI and that kind of thing on it.
David Elikwu: That's super interesting. I guess, you know, even on that topic of the future, I would love to know your take on... one part of it is the future and where things are heading, but the other part, which is what I find really interesting, is how our vision of the future has evolved over time. You know, if you look back to, let's say, I don't know, even the twenties or the fifties or so, the vision of the future was, wow, gleaming paradise and how much better everything could be. It was very optimistic.
And I think actually even at some points, maybe let's say in the eighties or so, there were still some elements of optimism, but definitely there was also simultaneously a shift towards a lot of pessimism around the world that we are creating. And I don't know if that is just a factor of, maybe, you know, my point of view might be that we had a lot of this initial optimism during a very industrialist phase where we were rapidly evolving and rapidly growing. And there is an extent to which the future could be anything. And it wasn't super crystal clear, because technology was taking so many different directions.
And then I think we started to catch up to the future we were envisioning and realized that it wasn't exactly what we thought it was. There were some parts that seemed magical and seemed great, but it's very interesting. For example, when you look at films, Marty McFly had shoelaces that tied themselves, right? And he had his hovercraft and he had a bunch of these things. But what's really interesting is that in some of these futuristic films from the past, there's some really basic stuff that they didn't have. They didn't have doors that just opened and closed by themselves, right? Which we have. You just have it in the supermarket.
And so it's strange how, you know, there's some stuff that we thought, oh my gosh, this is gonna be really cool in the future that we never got. And there's some really basic fundamental things like, oh, everyone should just have clean water and, you know, people should have toilets and all these fundamental things that we completely forgot and we don't have at all.
Sterling Crispin: Yeah, absolutely. I mean, what's that saying? The future is already here, it's just not evenly distributed, right? I think that's a good way of thinking about it. And I think the pessimism about the future is almost like a second-order effect of mass media and what drives eyeballs and what drives advertising, right? If you say something is scary, it triggers some deep, you know, ancient part of our brain. Like, oh, something scary. I have to pay attention to this cause I might run into it in the future. And so you click through, and we've all been, sort of, as a culture, wound up into thinking that bad things are constantly about to happen. Which I guess part of that show that I made was definitely about. But yeah, it's hard to tell the second-order effects of things. I think we could sit down right now and say, obviously the capabilities of AI are going to rapidly advance and they're gonna change society in ways that we just can't predict.
And I think that's part of why Ray Kurzweil's writing is so interesting to me, because he does sort of try to predict what might happen after the singularity. But the singularity is this idea that, at some point, technology is gonna advance so far that there is no predicting what's beyond that. It's just this line in time where future predictions are no longer possible in any way, right? What does it mean when a thousand dollars of computing power is equivalent to every human brain on the planet combined? Like, what happens to the government? What happens to corporations? What does money mean? You know, things just fundamentally break down. But yeah, I mean, it's interesting. Like, if you imagine the world that you want to exist and start building toward it, it's gonna be way more possible. And I guess that's one of the lessons of SpaceX. Like, we could have really, really incredibly advanced space programs right now, but people stopped working on it.
You know, it wasn't this thing where people kept working on it and it kept getting better. So yeah, it's kind of up to all of us to like, contribute toward whatever world vision we'd like to see happen.
David Elikwu: Okay. One of the next things I wanted to ask you about is the work that you contributed to at Apple. Cause I think it very much links in, and there's some follow-up questions that are related to what you just said, but I want to break the ice with the AR VR stuff first. So I would love to know, I mean, to the extent you can, you know, if you could share some of the work that you were involved in and some of the work that you did. Cause I think it's super interesting and I guess we can go from there.
Sterling Crispin: Yeah, totally. So, I've spent, you know, my entire professional career as a software developer and product designer in the AR VR space. So, you know, completely focused on head-mounted displays, AR glasses, headsets, that kind of thing. And at one point I was recruited onto this neurotechnology team at Apple, and they were focused on, you know, designing experiments that would use neurological and physiological data to try to augment immersive experiences. And I can't really talk about it in much detail; that's sort of what my public job description said. But some of the patents that came out of that involved, you know, trying to have models that would predict, like, how relaxed you were or how focused you were during an immersive experience. So you can imagine, like, you're in a virtual reality type of experience and maybe you're trying to study, and the app somehow knows that you're being distracted and it changes the environment in a way to remove distractions so you can focus better, right?
So those are the kinds of opportunities that you have in an immersive experience. At the end of the day, just generally speaking, not necessarily about this product, there's a lot of neuroscience research that happens with virtual reality, because you can control what someone hears, you can control what they see, and sometimes, with haptics, you can control what they feel. So you have this kind of ultimate laboratory for the brain, to measure what the brain is responding to and what it's seeing, and try to understand how our perceptual and cognitive system works as one whole. And I'll say as well, I'm not a neuroscientist by training, but you know, I picked up a lot of it through osmosis, and it's all super fascinating, especially in the context of immersive media.
David Elikwu: Sure. What was, cause I know that, I think, for maybe the last year or so you weren't involved with the project anymore, and so you are, I guess, seeing the finished product now. I guess, from a combination of the experience that you had and from what you are seeing now, what has impressed you the most about what they've been able to do?
And I know that maybe you haven't actually seen the finished product, you know, it's not something that people have got their hands on. But I think, considering the amount of work that has gone into it, and considering the depth to which Apple has thought about the neuroscience and all of the things that go into it, which aspects of that, from a technical perspective, do you think are the most impressive?
Sterling Crispin: Yeah, that's an interesting question. I mean, I'll say that, having seen behind the curtain, the kind of engineering happening at Apple, most people wouldn't believe. Like, I think I saw a publicly reported figure that their R&D budget for 2022 was like $26 billion or something like that. That's the money they're spending just doing research on things. Not even necessarily like, oh, we had to spend that money to make the iPhones. It's just paying scientists and engineers to come up with stuff. So for every little, oh, and it can do this, it's like a thousand people might have spent 10,000 hours making that kind of thing happen. And I guess to answer your question, just seeing all of that labor come together as one whole is amazing.
Like, I think about the Sistine Chapel as this standout piece of human creativity and labor. In our time, maybe the International Space Station is this sort of Sistine Chapel of, oh my God, humanity was able to do this collectively through all of our work.
Like, I think the International Space Station is a profoundly beautiful object. It's a spiritual object, almost, because it's such an embodiment of what we're capable of when we cooperate and work hard and understand the universe around us. And I don't think the Apple Vision Pro is on that level, but there are aspects of it where it's just, oh my God.
People really worked hard and invented new technologies and new materials. So much discovery and progress behind the scenes, real scientific progress, went into making this thing real, and it changed reality, even though, you know, I mean, people have tried it, but it's not in millions of people's hands yet. Just the fact that it exists is kind of awe-inspiring to me. And it's not the only product that does that. I'm positive that many other things approach that level, but having seen behind the curtain and knowing everything that went into it, I guess existentially it's a very impressive object to me, is what I'd say.
David Elikwu: Yeah, and I think a lot of the magic for me particularly, and it's happened a few times with different Apple products in the past, comes from the gap between what you think you're experiencing and what it takes to create that experience. Cause I remember listening to Steve Jobs talking about, with the iPhone for example, it had the touchscreen. But I think what made the touchscreen work so well is that they made the hit box on the screen of what you're touching bigger than your finger. Because very often, I think, when you tap something with your finger, the part that you think is touching the screen is not actually what's touching the screen. It's slightly below or it's slightly different. And so they were trying to anticipate, at least with the very first iPhone, the whole point was trying to anticipate what you think you are touching, cause it's actually probably not what you are touching. So what part of the screen do you think you're touching, and trying to make sure that that matched the output and the experience in the end.
And so it felt like this magical experience where, wow, I can move my hands, I can do the multi-touch, I can touch all of these things. But actually there is so much additional work in the gap between the magic you think is happening and the magic that's actually happening, that enables our experience.
And I think from what I've heard about the Apple Vision Pro as well, and I don't know if you mentioned it, I think you mentioned it in your thread, but I've also heard people talk about it from having tried it as well, just how magical the experience of the click is. And the fact that, you know, it tracks your eyes, you can look around on a screen that you might have in front of you and then you click with your fingers, you put your fingers together and it clicks. But it's the combination of, okay, it's doing this eye tracking, it obviously has an array of cameras that can see when you put your fingers together, but actually it's also guessing in advance, from your brain and from your cognition, what you are trying to click and when you are trying to click, before you actually make the motion with your hands specifically. So it's not just waiting until you've actually done the action, but it's kind of predicting or trying to preempt what actions you're trying to take. And also, using the example that you gave before, you know, let's say you're trying to learn something and it can see that you might be distracted.
Being able to preempt that as well, I think, is where the real magic comes in.
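As a rough illustration of the expanded hit-box idea David describes, here is a toy sketch, not Apple's actual implementation: the basic move is to inflate each control's touchable region before testing where the tap landed. The `slop` margin and the target layout are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float      # left edge of the visible control, in points
    y: float      # top edge
    width: float
    height: float

def hit_test(targets, tap_x, tap_y, slop=10.0):
    """Return the target whose inflated hit box contains the tap.

    Each control's touchable area is expanded by `slop` points on every
    side, so a tap that lands slightly outside the visible control (as
    fingertip taps often do) still registers on the intended target.
    """
    for t in targets:
        if (t.x - slop <= tap_x <= t.x + t.width + slop and
                t.y - slop <= tap_y <= t.y + t.height + slop):
            return t
    return None

buttons = [Target("Send", 10, 10, 60, 30), Target("Delete", 90, 10, 60, 30)]
print(hit_test(buttons, 8.0, 25.0))    # just left of "Send" -> still hits Send
print(hit_test(buttons, 200.0, 25.0))  # far from any control -> None
```

Real systems layer much more on top of this, which is the gap David is pointing at between the magic you perceive and the work that produces it.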
Sterling Crispin: Yeah, totally. I mean, I don't wanna speak to any of the algorithms or potential technical details of how any of that might work. I definitely was doing research that got patented about predicting intentions to interact before they happened. But the eye tracking system is a very large beast and I don't wanna speak to any internal development. But I can say that this company, SMI, developed a really incredible eye tracking technology. I got to try that when they were public, I think in 2015, it might have been 2016, but it was at a conference for augmented reality. I remember trying their eye tracking glasses on and, you know, looking at targets and interacting with things with my eyes, and then being able to read things, and you get to the bottom of the paragraph and the text starts moving up automatically as you're reading. This was at a time when people were trying to use interfaces where you'd rotate your whole head instead of your eyes, cause nobody really had eye tracking. There were all these clunky ways of doing interaction. And I told the guy who was giving me the demo, like, whoever acquires your company is gonna win the AR VR space. This is the most impressive technical demo I've ever had. And I think like six months later, Apple acquired them, and it's been, whatever, like eight years or something since that acquisition. So a bunch of people have been working really hard to make it even better, and when I tried it eight years ago it was already mind blowing. So it's like they say, 90% of the effort is getting the last 5% of the results, or something like that. And it's really true. Like, that gap between, oh, it works most of the time, and, this is magical, is really an incredible amount of effort.
David Elikwu: Yeah. One thing I wanted to ask you about, because I've seen a lot of people upset or having a variety of responses to it, and I'd love to know your view, either your best guess or whether there's some official answer: the reasoning for having your eyes show on the external screen.
Actually, no, it's a two-part question. One: having the external screen in the first place, considering the impact that has on the battery, because people are already complaining that it only lasts for two hours, and if you didn't have this external screen it could have lasted longer.
Or, you know, there are other cuts you could have made to get a longer battery life. So the fact that Apple has made this investment in having this screen shows that it's something significant. I'd love to know your best guess as to why that's important. I have some ideas, but I think they're probably a bit crazy, so I'd love to hear what you think.
Sterling Crispin: Yeah, so I'll qualify my answer: this isn't based on internal Apple development or product decisions. But like you said, I think it's clear from the fact that they included it, and made such a point about it in the keynote, that staying connected to other people and not being isolated was a core value of the product, right? And that's been a big criticism of virtual reality. I work in the AR VR space, and I don't spend that much time in VR, because when I'm done working at the end of the day, the last thing I want to do, with my wife right here, is put on a VR headset and get completely disconnected from the people I care about the most. That's not a good value proposition. But human connection is super important, right? It's a profound part of what we value as human beings.
And I think people will realize when they see it in person. Mike Rockwell has described the fact that it's a curved lenticular display, so as you walk around it you get a unique view of where the person's eyes are, and it actually looks like their eyes aren't on the front of the device but where they should be on their face. So I think it'll be really awesome. I think people had a big pushback against the scene where the parent is recording their children through the headset instead of being present. In my personal opinion, that was maybe kind of a PR blunder. It should have been the kid saying, hey, I'm gonna do a cartwheel, and then the parent puts the thing on to record it, takes it off, and says, that was awesome. There should have been a softer narrative arc around that.
But at the same time, people will go to funerals and look at their iPhone the whole time, or their Android or whatever. We're already super sucked into our devices, so it's not like this is coming out of nowhere, right? Being connected to our phones is already a huge part of modern life. And I think it's actually a huge opportunity to say, oh, now my posture is better, I have this thing on, I'm looking forward, I'm looking out into the world instead of down into my phone. I'm actually able to be more connected, just spatially, with the world around me. And if someone comes up to me, it's easy for them to get my attention, because they walk into the scene, the spreadsheet I was looking at disappears, and they can see my eyes.
And I think you're gonna be more connected than you would've been if you were looking at your phone, if that makes sense. Which is maybe counterintuitive to think about, but that's my expectation.
David Elikwu: Yeah, I completely agree. And it sounds like heresy to say that wearing this huge set of goggles on your head, you're gonna be more connected to people. But genuinely, exactly like you said, I think people underrate just how much time you spend looking at your phone, especially when you're supposed to be social, especially when you are out and you are meant to be interacting with other humans.
So I think there's that whole part of it. And then another good point that you touched on: I just can't imagine how much people are gonna save on medical bills because their screen is automatically at the correct height, and they're not looking up or looking down, and they're not getting carpal tunnel from having to move their mouse around and click all the time, and all of this stuff that we've accepted as natural behavior. And the fact that you have to sit down at your chair and all of that to use your computer, how does that change if you could actually move around, walk in space, walk around your house, be standing up? You can actually move, and it completely changes the relationship we have with computing, and perhaps with work. Obviously we'll have to see what it looks like in practice, but I think all of that has huge potential to be very good for humans and very good for society. There is still the part where, you know, there is something on your face.
But one of the things I was thinking about, and I was surprised that, I guess, maybe it's not something everyone thinks about: considering they've mentioned that they thought a lot about the neuroscience of all of this, to me it makes complete sense, well, complete sense in the sense that it's unintuitive, because they're the first ones to have done it. If it made complete sense, everyone would've done it beforehand. But it makes a lot of sense, considering what they've invested in it, to have the eyes on the front, because so much of our physiology, so much of our brain, has evolved for interacting with other human beings. We look at people's eyes. There are entire parts of the brain, like the fusiform gyrus, that are supposed to let you recognize faces; that's why people see faces in clouds. You're looking for faces everywhere because a part of your brain is devoted to doing that one thing. And there are other parts of your brain, like the temporal sulcus, which help you perceive what other people are perceiving.
So what that might originally have been for is: you're talking to someone and they look at something over your shoulder, which could be dangerous, and you're able to track what they're looking at, so you also turn and see if there's something dangerous. Our brains have evolved all these pieces to observe other human beings, make inferences, and understand the world through seeing each other.
And I think that's also super important. So as much as we want this future where we can wear these headsets and have portable face computers and all of that, it's also important that we accommodate human biology at the same time.
Sterling Crispin: Yeah, absolutely. Like you said, it's super important. And, and this is my own personal speculation, I also expect that if you were in an immersive experience and I walked into the room and said, oh, hey, what's up, here's the coffee you asked me to bring you, you'd say, oh, cool, thanks, take the coffee, and go back to what you're doing. But if I came in and started asking you meaningfully deep, complicated questions, you might turn to me, listen for a little while, say something, and if the conversation kept going, you'd probably just take the headset off and talk to me. You know what I mean?
I don't think the vast majority of people will just be sucked into this thing all the time, cooking dinner wearing it, going downstairs to do laundry in it, never taking it off. Some people will probably do that, but for the most part, just as it happens now, someone tries to get your attention away from your phone, and at some point you just put the phone down, change the context you're in, and re-engage with people differently, if that makes sense.
David Elikwu: Yeah, but I still think, maybe I'm a weirdo, but I'm in a very tough place on this. And maybe this will be a follow-up question for you: how do you think the development of AR and VR might start to change the way we interact with each other as humans?
Okay, I'll give you a really good example. I'm wearing AirPods right now; I wear AirPods most of the time. There are a few reasons for that. One, I'm very sensitive to lots of different annoying sounds, like high-pitched sounds, so it's good for me, and I use the noise cancellation mode a lot of the time, pretty much all day. However, one thing I noticed is that the transparency mode is slightly better than my actual ears. I don't know if my ears have deteriorated, I don't think it's a result of having these, but because it's using the mics to project the sound, I can actually hear slightly better just by turning on transparency mode instead of taking them out. So I don't always take them out. Before, I would, and sometimes I still do, take out one ear if someone's talking to me, but sometimes, especially if there's other noise going on, I'll just put it on transparency mode, because I can actually hear you slightly louder than if I took both earphones out. And that's one of those weird quirks where, if you can use technology to augment your natural senses, there's a sense in which it can become part of the way you interact with others and with the world. I don't know if that's for the best in the long term, but it at least seems to be working for now in some circumstances. So I'd love to know what you think about that part of what I've just said. But also, moving beyond that, lots of people have shared different thoughts at different points in time, not just now but ever since we've been using our phones: how does social media change the way kids interact with each other? How does the way we use our phones affect the extent to which people go outside and interact with each other? Is it possible that with AR and VR, everyone will sit at home in their chairs, and instead of going out to see your friends, you'll just meet them in the metaverse, or something like that?
So, you know, there are lots of different visions of what the far future could look like. I'd love to know, given that you've spent so much of your time working in this space, how you see that evolving.
Sterling Crispin: Yeah, I mean, that's a really good question. Like you said, we're kind of already cyborgs with our phones. Like, I don't remember anybody's phone number, it's all just on there. They're a secondary part of our brain at this point where we store things that we wanna remember, but we don't necessarily need like, short term access to, right? So it frees up our brain to think about other things.
But yeah, the second-order effects of that stuff are hard to speculate on. I know people who have spent time in virtual reality playing zero-gravity games where you're on a space station and you can push off virtual walls and float in different directions, and then an hour later, after playing the game, they take the headset off, go to push off their desk, and nothing happens. Their brain has adapted to this new reality and this new set of physics, and they have to readapt to reality as they take it off. And I think there are gonna be a lot of strange things like that. People get phantom text messages in their pocket when their phone isn't even there, just because they're used to their phone going off in their pocket. We'll definitely see stuff like that. You could imagine at some point having an app that shows people's names floating above their heads, or whether you'd seen them before, or the last conversation you had, I'm just speculating on an app that might exist. And then you take the glasses off and suddenly you just don't remember who anybody is, because you've handed off the task of remembering that kind of stuff to your device. Whether it's good or bad, it's strange, and it's happening. And I did say earlier that I think everyone has the responsibility to imagine the future we want to live in and work toward it, and not build dangerous technologies.
But at the same time, I think we're also sort of along for the ride. Technology is this thing we're co-evolving with, and it's kind of a second layer of intelligence and evolution happening on Earth. It's waking up, this thing is happening, and we have some responsibility to craft it, but it's happening one way or the other.
Yeah, understanding the second-order effects of all this stuff is super hard, right? When Mark Zuckerberg was lonely in college, wanted to hook up with chicks, and figured out a way to network with people at his college, he probably didn't think that genocides were gonna be carried out on this dating platform he made. But that happened, and people collected tons of data and used it to basically do psychological warfare and manipulate elections. So that's also this crazy second-order effect from a weird dating-and-networking website. And how do you accurately project those kinds of things out into the future in a way that's meaningful? I mean, I try to have hope too. You look at what Apple's done with the Apple Watch: it's a productivity tool, but it's also this health and wellness tool, and now they've got an app where you can log your mental health. There are ways it can change our relationship to ourselves in really positive ways as well. So if we imagine those possibilities and work toward them, that'd be great.
I always joke that I wish there were a show called Silver Mirror or something, where every episode is about a profound future in which some small invention transformed society for the better, so we could all collectively imagine positive things instead of killer robots, you know.
David Elikwu: Yeah, I think it's a really interesting point you mentioned, this idea that there's an extent to which technology is self-perpetuating. I remember I wrote a while ago about the idea of competitive cognitive artifacts and complementary cognitive artifacts.
The distinction is that a complementary cognitive artifact might be something like writing, because learning to write and the act of writing actually improve your brain even when you stop doing it. Competitive cognitive artifacts, on the other hand, are things that help you while you're using them but leave you worse off when you're not.
So, using your phone: you mentioned the example of putting all your friends' phone numbers in your phone. That's an example of a competitive cognitive artifact, because the more you offload remembering those numbers onto an external device, the less you need to remember them yourself. And so your memory can get worse the more you stop relying on your long-term memory; you're mostly just using your episodic or procedural memory, "I'm doing this task," as opposed to "I need to actually remember this."
And I think there was a study that looked at this with phones as well, in terms of when people take pictures. If you go to an event knowing you can just take pictures and video of everything, you're less likely to have strong memories of the stuff you took pictures of, because you've offloaded the need to remember it onto your device. So when you go back and look at your device, you think, oh yeah, I remember all this. Whereas if you hadn't used your phone to take the pictures and had just looked with your eyes, you might remember it a lot more vividly.
And there are loads of other examples of that, like walking and actually looking around, or using physical maps as opposed to digital maps. Physical maps are complementary artifacts, because using them helps with geospatial awareness and so on; actually learning how to navigate helps your brain. So even when you stop looking at the map, you've improved your brain in a sense. Whereas the GPS just tells you to turn left. You don't even need to know what's happening, what's to the left, what comes after the next turn; you just do what you're told.
And it's interesting, because now you can wrap back in some of the stuff we talked about earlier: what happens when you layer AI onto that. It's also an interesting space where people are starting to use AI in school. So instead of learning the mechanics of writing an essay for yourself, you might just generate pieces of that essay. There are parts of that which might be useful, and not detracting, in that at least you got the thing done faster and research is a lot quicker. But there's a negative aspect too: what happens in a world where you no longer need to find any information, where you don't have to spend any time looking for any bit of knowledge, where as soon as you want it, it's presented to you? And what do humans become when we no longer need long memories, because all of our memories are stored in a device on or attached to our heads? If you don't need to have the memories, there are so many things you'll no longer need to know how to do. I wonder how brittle that might make us as beings on this earth.
Obviously, you know, global warming might mean that maybe we don't have to live for that long anyway. But it's really interesting to think about how that might change us as humans the more we rely on AI and the more we rely on tools.
Sterling Crispin: Yeah, I mean, absolutely. I heard someone say something like, we've trained exactly one human generation that understands how computers work and then that's it, you know? And if you're using ChatGPT to write essays, you're kind of like a CEO in a way where you're just doing this like executive level thinking and having something else produce it.
But yeah, I think it will erode people's ability to think critically, right? As you're writing an essay, you're doing a lot of critical thinking, figuring out how you want to communicate, how your ideas can form stronger arguments, how to communicate things that are either true or untrue. And eroding our ability to think critically is sort of a macro trend in society right now. Another macro trend is people's inability to distinguish truth from fiction. There are all these bubbles of narratives forming, where some people don't think the earth is round, other people think reptilian pedophiles are running the government, and there are people in Congress who have these really deeply out-there beliefs, to the point where they almost don't exist in the same reality. Their world model is so radically different; the overlap of the Venn diagram between their world model and other people's is just shrinking and shrinking. And if you think about generative media and generative AI, and the fact that our feeds are already getting so customized, you can imagine a short-term future where I open up Netflix or social media or whatever, and everything I'm seeing, or 90% of it, is being generated in real time just for me. I might ask you, oh, did you see such-and-such show? And I'm the only one that show exists for; you couldn't have seen it, right? And then maybe our only shared reality is, oh, there's a McDonald's on the next intersection. There are these very few physically located, sort of objective things we can agree on, and otherwise we just live in our own narratives.
And you can imagine those things colliding with the macro trend of spatial, perceptual computing, where it's not in a feed; it's literally being manifested into my room. So in a very literal way, my perception of reality is completely isolated to myself. You get into this weird feedback loop where you're just super deep out there, and we might not really be able to relate to one another even on a small scale, because our world models have become so different.
But then maybe we aren't relating to one another at all. We're relating to these deep networks of AI systems that know us better than any human could, because they've seen everything we've seen. It could be a very strange reality, but it's all speculation.
David Elikwu: Yeah, so much of what you just said is so interesting to me, especially that last part. I was having a conversation with a friend about this idea that AI is already at a point where, okay, the way I would start is: from an IQ perspective, there's obviously a large spread. Some people have super high IQs, some people don't, and roughly half of people are below average and half are above, just based on what average means. But what that also means is that the average conversation is not super highbrow.
The average conversation doesn't contain that much data and information. The vocabulary you use in a day-to-day context, outside of work and politics and the other contexts where you use more complex words, is very simple. So the complexity you need for AI to be relatively convincing in a standard conversation is not that high. You don't need a very high fidelity of "wow, it needs to understand all my deepest feelings," because the average person isn't using that many words anyway. And actually, so much of human interaction is intuited; it's what we infer from how the other person responds, as opposed to what they've actually said or what actually happened. A lot of the conversations we have happen in our own heads, in our own interpretation of what the other person has said.
All of that to say, this could happen very quickly, and I think we already have a lot of the tooling available. How do you know, when you go on Twitter? We already have the Twitter bot problem, but that's kind of different, because those bots are maybe trying to sell you something or make you click on something. There's no way to prove that every post by a particular Twitter profile isn't being generated by an AI. So how do you know that the friends you make online, someone you interact with without meeting in person and physically seeing them, aren't already artificially generated?
You'd never really be able to tell. I think we're already at that point, and you could go further beyond that. But then think about the other part of what makes humans human, which is actually the mistakes we make. We're predictably irrational, as Dan Ariely might say. We make lots of irrational decisions and we don't do the most optimal thing. So what happens in a world where you have AI that will always do the optimal thing specifically for you, because it's tuned to how you think and what you want?
And so suddenly you could have a bunch of AI friends. During the pandemic, I had a friend I only just saw for her birthday a few months ago, and we'd been speaking every week or so on Zoom; we would write together and stuff. I hadn't seen her physically for probably over two years, just because of the pandemic and because we don't live in the same place. So it's startling to think about how you could develop friendships with people who may not even exist, and because they're AI, you might get along with them better than you get along with your friends. You could have your AI girlfriend who's more receptive to anything you have to say; they're never gonna get upset with you, they're never gonna argue with you, you don't have the issues you have with ordinary humans. And when you put that in the context of all the other stuff we were talking about, I have no idea what happens to people in that kind of world, or to what extent we'll still need to rely on each other.
And the other point you were making that I'll touch on, and I did write about this as well, is that we're already kind of living in mini metaverses. People were so worried when Mark Zuckerberg started talking about, oh, come and join your friends in the metaverse, and I was like, you're already in the metaverse, you just don't have to wear a headset. Every platform you go to, every social platform, every news app, is already changing what it shows based on your past behavior. So everything is just a reflection of the person you used to be, or the person you were at one point in time. There's a sense in which you don't even get to evolve as a person, because your prior beliefs are being consistently reinforced. And so you get a lot of people who are stuck in some part of a society that might not even exist, just because that's what they keep being shown. There's a growing divergence in what people believe about the world, because the world they believe in might not even exist; it's a world they're being shown through a bunch of digital prisms, and we're getting to a point where it has no bearing on reality.
I think the simplest analogy for it is the prettiest girl on Instagram; this was the example I gave. Back in the day, it was, oh, this is the prettiest girl in my neighborhood, and that was your bar, your reference for what prettiness looks like. Then at some point you had all these beauty pageants: Miss Idaho, Miss Nevada, Miss wherever, and then Miss Universe, and now it's, whoa, who's the prettiest girl in the world? And then, who's the prettiest girl on Instagram? What changes there is that once it's digitized, you're adding filters and a bunch of alterations that don't actually exist in real life. So your perception of what the average beautiful person looks like has now diverged from the reality of what the average beautiful person looks like.
And so now what beauty looks like in your head is something that doesn't exist in reality. With the compounding of all these things, you can very easily end up in a world, and many people already are in a world, that doesn't actually exist.
And yeah, I don't know how you take that back; you can't walk that back. And like we've said, because technology is self-reinforcing, the more you use it, the more you kind of need to use it. So I'm very interested in the direction we go, because, as you were saying, you build the things you want to see in the world.
Like what are people seeing in the world and what world are they seeing? You know? So I dunno what feedback you might have on that.
Sterling Crispin: Yeah, for sure, it's all a problem, the way our world models are diverging. And like you said, maybe you're chasing some beauty that is impossible, and then Instagram says, how about this other impossibility? How about this other impossibility? And then your very own impossible beauty standard is being reinforced, and you might not even share that same impossible standard with another person in your friend group.
So that's all very strange. And a lot of these systems are drip-feeding you dopamine and keeping you sort of unsatisfied, so you keep using them and keep scrolling so they can sell you ads and stuff. A lot of these systems are emergent machines; they might not have a very strong central planning element. Obviously Instagram has a central planning team on its product, but they're also beholden to market forces and to user behavior patterns, and all of these things are shaping it. Software and societies are in this kind of feedback loop, where the things that connect into primal parts of our brain get reinforced and start shaping us.
That's kind of what I meant by like, not always totally being in control, right? It's like, we're along for the ride and we only have so much authority over that. And people are driven by capitalism and it creates all these perverse incentives to make things that like may not necessarily be in like humanity's best interests, you know?
It's very strange, and how do you detox from that, right? There are a lot of digital detox and wellness programs, and people getting more and more interested in meditation and figuring out ways to become more mindful. I mean, AI could potentially help with some of this stuff and be like your personal coach: hey, I've noticed you've been telling everyone the earth is flat, would you like to talk about that? Maybe we can go into whether that's true or not. It's a strange world we've entered into, and it's only gonna get stranger.
David Elikwu: Yeah. I think we've oscillated a bit between optimism and pessimism, but I'd love to end on an optimistic note. I'd love to know what your Silver Mirror might look like, you know, what you think an optimistic view of the future could look like.
Sterling Crispin: Yeah, that's a good question, a good point. Like I was saying about the Apple Watch and the mental health journaling tool they're putting out, I think technology that helps us be more self-aware and reflect on ourselves is amazing. These kinds of healthcare tools, where large healthcare systems suddenly become small and personalized, and where algorithms and AI are kind of helping us become the best versions of ourselves, are really exciting to me. I think that as AI gets more advanced, it's going to revolutionize, let's say, the healthcare industry, with drugs for example. Right now, with some drugs, we really treat it almost like we're cooking a giant stew and just throwing ingredients into our brain; it's very non-specific as far as the mechanism of action in the brain that it's targeting. I think we could see very transformative changes in healthcare as AI is used for drug discovery and that kind of thing.
I mean, all the worry about us offloading our critical thinking is warranted, but as an analogy, you can look at driverless cars, right? Headlines about Teslas crashing and killing people are very sensational, but I'd imagine the fatalities per 10,000 driving hours are way lower than for human drivers. There's a lawsuit right now where a lawyer is being grilled by a judge because they used ChatGPT to generate a court filing and it referenced fictional prior cases, so people are getting burned by that.
But at some point, super advanced AI is gonna be helping members of Congress make decisions, and they'll be offloading their decision-making to these systems. Politicians are already kind of public figureheads; we just vote in whoever we think is tallest or most popular or whatever. And man, I would vote for GPT-4 as president already. I think GPT-4 already gives me way more confidence than 90% of the politicians I've seen. Maybe that seems dark to somebody, but the idea that we could take our collective intelligence, fine-tune out the worst parts of it, elevate the best parts, and have it try to solve some of these political, human coordination problems that we seem to be very bad at, that's potentially exciting.
And I think that in a very short timeframe we're basically gonna create god-like, superintelligent AI. Between now and then, I kind of just feel like we have to hold on and try not to get swept off the face of the earth by some foolish thing that comes before that.
But yeah, I'm fairly optimistic about the future. I think we also look back at the past sometimes with rose-tinted glasses about some things. It's complicated; the future's gonna be complicated, just like the present, basically.
David Elikwu: Yeah. I love the point you were making about how there are many ways in which human intelligence is not everything it's cracked up to be. And the self-driving car example you gave is, I think, the perfect one, because we prefer being in control and we like to think everything would be better if a human's hands were on the wheel. But you are worse in so many ways, because humans make all kinds of mistakes; you will kill far more people than the machine would. We're already at a point where maybe you could say all cars should be driverless, because the rate at which people die in driverless car accidents isn't even comparable to humans driving cars.
And it's funny, I was just listening to a podcast where they were talking about stand-your-ground laws, and I think it's a very similar concept. If you don't have stand-your-ground laws in your state, the law is that if bad stuff happens, you're supposed to run away. And that's actually the best thing to do. If you break down the statistics and ask, out of the options you have, what behavior should I take, you should a hundred percent run away any time there's some kind of danger, some kind of trouble: run as far as you can.
In the scenario they were talking about, they were even saying that statistically, and I'm not advocating for this, but statistically, say someone had come into your home, you were downstairs, and your kids were upstairs and the intruder had gone up. Statistically, they were saying, your best option is to run out of the house, even though your kids are upstairs, and get help. Because if you go up and try to help your kids, you're less likely to be of any use to them than anyone else, and it's far more likely that your entire family ends up gone than if you had just left, called the police, and gotten proper help. Especially if the bad person wasn't there for your kids, they were there for you, or whatever.
And so, based on the stats they were looking at: in stand-your-ground states, what is the difference in the homicide rate? It's far higher, particularly, I think they were saying, among white men; the kind of people who would have been part of the NRA are the kind of people who end up being disproportionately killed in homicide incidents. And it's simply because of this human judgment that says, oh, if I'm the good guy with the gun, I can stop it, I can stand my ground and intervene in this incident. When actually what you should be doing is just running away and letting external forces take control.
And I think maybe it's a very similar thing with AI. Like you said, we just elect tall people for president all the time; that's pretty much the de facto thing. Trump was tall, relative to the average person, Biden was tall, Obama was tall. Whoever you want is just a tall guy, or maybe they look cool, or there's something you think is cool about them.
And so, yeah, there's an extent to which maybe our judgment isn't everything we want it to be, and maybe there are a lot of benefits we would get as a society from augmenting our decision-making with artificial intelligence.
Sterling Crispin: Totally, totally. One of the best ways I've heard it put is that our brains evolved to think locally and linearly, and we live in a global and exponential world. The way our brains are wired just doesn't work at a global, exponential scale. And like you said, even just the local, linear problem of what to do if crime is happening to you: our intuitions around some of that stuff just don't match the statistics of reality. Offloading decisions to a giant statistical intelligence that understands it as a whole, it's probably gonna make mistakes, and horrible things will happen, but as a whole it will probably outperform humans.
And that's, weirdly, what's that saying? "All Watched Over by Machines of Loving Grace," or something like that. Hopefully it stays good, but that's the future we're headed toward.
David Elikwu: Fair. Anyway, thanks Sterling. Thanks so much for making the time, man. This has been such a great conversation; I've really enjoyed this.
Sterling Crispin: Yeah. Thank you for having me on. Appreciate it and yeah, it's been wonderful.
David Elikwu: Awesome.
Thank you so much for tuning in. Please do stay tuned for more. Don't forget to rate, review and subscribe, it really helps the podcast, and follow me on Twitter; feel free to shoot me any thoughts. See you next time.