Poets & Thinkers

AI as Normal Technology: On superintelligence delusion, bogus claims and a humanistic AI future with Prof. Arvind Narayanan

Benedikt Lehnert Season 1 Episode 14

What if the race toward “superintelligence” is misguided, and what does a more humanistic vision for AI adoption actually look like? In this episode of Poets & Thinkers, we dive deep into the intersection of artificial intelligence, culture, and human agency with Prof. Arvind Narayanan, a computer science professor at Princeton University whose work has fundamentally challenged how we think about AI’s role in society. Named on TIME’s inaugural list of the 100 most influential people in AI, Arvind brings decades of research experience studying the gap between tech industry promises and real-world impacts.

Arvind takes us beyond the hype and fear that dominate AI discourse as we dive into his book “AI Snake Oil” (co-authored with Sayash Kapoor) and their latest essay, “AI as Normal Technology,” which draws powerful parallels to past general-purpose technologies like electricity and automobiles.

He reveals why the term “artificial intelligence” itself creates dangerous confusion, masking critical differences between predictive AI systems that are already affecting the lives of millions of people – determining who gets bail, healthcare coverage, and job opportunities – and generative AI tools like ChatGPT that capture public attention. 

Through rigorous analysis of adoption patterns, organizational barriers, and historical societal precedent, Arvind demonstrates why superintelligence predictions fundamentally misunderstand both the nature of human intelligence and the complex realities of technological diffusion.

In our conversation, Arvind challenges leaders to move beyond automation fantasies toward human-AI augmentation, explains why current AI benchmarks fail catastrophically at predicting real-world performance, and makes the case for why flexible, bottom-up innovation will determine which organizations thrive in the AI era. His perspective bridges computer science rigor with deep humanistic values, showing how thoughtful design and governance frameworks can help us navigate this transformation while keeping human agency at the center.

This episode is a provocation to think more precisely about AI’s actual impacts, move beyond techno-optimism and techno-pessimism toward nuanced understanding, and focus on the practical frameworks needed to ensure this technology serves human flourishing.

Resources Mentioned

“AI Snake Oil” book by Prof. Arvind Narayanan and Sayash Kapoor

“AI as Normal Technology” essay by Prof. Arvind Narayanan and Sayash Kapoor

Air Canada chatbot legal case as reported by The Guardian

Everett Rogers’ work on technology adoption


Get in touch: ben@poetsandthinkers.co

Follow us on Instagram: https://www.instagram.com/poetsandthinkerspodcast/

Subscribe to Poets & Thinkers on Apple Podcasts: https://podcasts.apple.com/us/podcast/poets-thinkers/id1799627484

Subscribe to Poets & Thinkers on Spotify: https://open.spotify.com/show/4N4jnNEJraemvlHIyUZdww?si=2195345fa6d249fd

Speaker 1:

Welcome to Poets & Thinkers, the podcast where we explore the future of humanistic business leadership. I'm your host, Ben, and today I'm speaking with Arvind Narayanan. Named on TIME's inaugural list of the 100 most influential people in AI, Arvind is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He co-authored the highly praised book AI Snake Oil and continues to publish a newsletter of the same name, which is read by over 60,000 researchers, policymakers, journalists and AI enthusiasts. He also led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes.

Speaker 1:

Arvind's latest essay, titled AI as Normal Technology and co-authored with Sayash Kapoor, applies the same rigor and sharp thinking as their previous book. The essay lays out a counter-narrative to the hype messaging pushed by the big tech companies, and it's been making waves across tech and media. I found Arvind's work and writing both refreshing and necessary, as it adds desperately needed nuance to the AI discussion. So I'm thrilled we got to speak about the many bogus claims made by AI companies, why you can't listen to the tech CEOs when it comes to superintelligence, and what a more humanistic vision for the application of AI looks like.

Speaker 1:

In our conversation, Arvind challenges leaders to move beyond automation fantasies toward human-AI augmentation and explains why current AI benchmarks fail catastrophically at predicting real-world performance. He makes the case for why flexible bottom-up innovation, new management approaches and labor arrangements will determine which organizations thrive in the AI era. His perspective bridges computer science rigor with deep humanistic values, showing how thoughtful, humanistic design and governance frameworks can help us navigate the transformation while keeping human agency at the center. All right, let's dive in. If you like the show, make sure you like, subscribe and share this podcast. Arvind, where does the podcast find you?

Speaker 2:

Hi Benedikt, it's good to be here. I'm joining you from Princeton, New Jersey.

Speaker 1:

To kick us off, why don't you tell us a little bit about yourself, what you do, what drives you, before we get into the questions I have for you?

Speaker 2:

Certainly. I'm a computer science professor, and the topic that I've studied primarily for the last many, many years is AI. But, more broadly, what I've done in the course of my career is take a computer science perspective to the question of the tech industry's promises and the claims that they make to the public: how are those in keeping with, or where do they diverge from, the ways in which tech is actually impacting society? So if you take that lens broadly, it's about bringing accountability to the tech industry. You know, 10, 15 years ago, the most important area in which I felt I could have impact when it comes to this broad set of topics was privacy. So my grad students and I did a lot of work to build tools to find all the hidden ways in which the apps and websites that we use are tracking us.

Speaker 2:

Where is that data going? What is it being used for beyond just targeted advertising, which is, of course, the reason why this data is being collected? How could it potentially be misused? What are the risks to society? What should we do about it? What technical defenses should we build? How should we perhaps regulate this? So that's where my head was, you know, 10 or 15 years ago. But since then, gradually, it's been on the topic of AI, where, while we have throughout acknowledged the benefits to society, there's also a clear, I think, divergence between the interests of the industry and the interests of the public, and my research tries to fill that gap a little bit.

Speaker 1:

And that's one of the many reasons I'm so excited that we get to speak, because your work has been, for me as a former tech executive and designer, incredibly inspiring and thought-provoking, and it's probably a good segue into my first question. Your book, AI Snake Oil, was one of those pieces of work that I thought was so necessary. Before we dive into the specifics: the book was published in late 2024, and you illustrate with great clarity at the very beginning how imprecise and messy the use of the term artificial intelligence is and all the problems that come with that. It means everything and nothing to everyone, and that creates confusion, uncertainty and anxieties around the topic, societally but also on the business side. That's in itself a huge problem. So, to give listeners a better understanding around the term, maybe let's talk briefly about the biggest myths, risks and opportunities when it comes to AI from your perspective.

Speaker 2:

Yeah, for sure. So let's tease apart the different kinds of technologies and applications that go under the umbrella term AI, and that's what the book starts by doing. On the one hand, there's generative AI, of course, which we're all familiar with. But when we look at the kind of AI that's perhaps having the biggest impact on people's lives, or at least traditionally has, that's not generative AI, it's predictive AI.

Speaker 2:

And predictive AI, you know, is broadly about using AI to predict the future in order to make some decision. Sometimes that can be innocuous. You have to have analytics: whether you're a website or a store, you want to know how many customers you're going to have the following day or the following week, you have to manage inventory.

Speaker 2:

There are so many applications of these predictive technologies in normal business operations, but there are much more morally dubious ways in which predictive AI is used everywhere, and that's a big part of what the book gets into. It's talking about the use of AI in criminal risk prediction, for instance. Today in the United States, in the majority of jurisdictions, when someone is arrested, they are assessed by an automated system to analyze the risk of, if they were to go free until their trial, which can be months or years away, how risky are they? Are they going to come back for their court hearing, or are they going to commit another crime? So these kinds of risk prediction technologies are used to deny people their freedom and, to be clear, this is jailing someone not based on a determination of guilt, but based on a prediction that they might be risky in the future, and this is a very new kind of logic that has permeated the justice system. It has permeated education, and it's permeated hiring when we apply for a job.

Speaker 1:

Yeah, yeah.

Speaker 2:

Yeah, I mean insurance, though, has been about risk kind of since the beginning, right? So there it's traditional, and there's a social understanding of what insurance involves. But when we talk about justice, it's supposed to be about what you've done in the past, not what you might do in the future, and that's where there's a disconnect. Or take the health care system. There are so many horror stories, some of which we discuss in the book, and this is where insurance starts to take on new forms, where it becomes unpredictable. As opposed to, let's say, car insurance, where your premium doesn't necessarily change your life opportunities, health insurance does. And so there are algorithms, for instance, that predict that when someone is hospitalized, they will need like 16.6 days to recover, right, and it's an oddly precise number. These statistical algorithms will always spit out a number, but what is the uncertainty around that? That might not be clear to the decision makers.

Speaker 2:

So you have cases where this patient can't even walk on the 17th day, but nonetheless her insurance payments have stopped, right, and so these can have really terrifying consequences. So that's predictive AI. We talk about other kinds of AI, like robotics and self-driving cars, and our point is that if we talk about all these kinds of AI in the same breath, then we're missing the fact that, A, these are very different technologies. In one case, it's plain old statistics rebranded as AI; in another case, these are genuinely new things, foundation models that have new capabilities. And we're also missing that the consequences in people's lives are very, very different.

Speaker 1:

Yeah, and I think that's why I would recommend, and have recommended, reading the book, even as a primer, to get a sense for what we mean when we use this term, or at least to have an understanding and some level of literacy to ask some probing questions. It's a really good starting point.

Speaker 1:

So then, in April this year, you published an essay, together with your co-author, titled AI as Normal Technology. I read it the first time a few days after it came out, I've read it two or three more times since, and I've shared it with many people, because it's one of those pieces that offers a counter-narrative, even to the public perception. I think you describe it as a worldview, and it adds what I can only describe as desperately needed nuance to a discussion that's predominantly driven by a handful of companies who have a big capitalistic and potentially ideological interest in perpetuating the hype. I also found that your worldview is a deeply humanistic one. I think it gives humans a lot more agency in this AI world, rather than the very often post-human narrative of superintelligence and all of that. For those folks who have not read it, although I really suggest everyone read it and we'll link to it in the show notes, can you give a summary of the essay's key thesis and the key points that you're making?

Speaker 2:

Yeah, for sure. So the points you made, I think, are really central to our way of thinking. So let's start with that: the role of human agency. AI Snake Oil was about AI here and now, what works now and what is overhyped.

Speaker 2:

AI as Normal Technology, with the same co-author, Sayash Kapoor, picks up where the book left off and is a framework for thinking about the next few decades of AI. And it's not a work of AI skepticism. Literally the second sentence of the paper is that even technologies such as electricity and the internet, general-purpose technologies of that nature, are normal in our conception. So we're not trying to be AI skeptics, but it is, as you say, a counter-narrative to superintelligence, the view that says this technology is going to undergo recursive self-improvement, then superintelligence will leave humans in the dust, and if that happens there's no telling what might happen, it might be utopia or dystopia, so the best thing we can do is put the brakes on this technology. What we're saying is: look, we have a lot of experience from really powerful past general-purpose technologies, and what we can see, to use electricity as one example, is that the logic behind the development of the technology itself and the timelines on which the technology develops are different from the logic behind how that technology diffuses into society, gets adopted by businesses, governments and others, and the timelines on which that happens. These are two very different processes, and I'll say specific things about superintelligence in a second.

Speaker 2:

But to stick with the electricity example, we distinguish between four different stages: invention, innovation, early user adoption, and adaptation by businesses, societies and so forth. So let me very briefly tell you what those four stages are about. That four-stage framework is the central intellectual, conceptual contribution of the paper for thinking especially about AI. But really, this framework has long existed for past technologies, and we take that, adapt it to the case of AI and elaborate upon it. It goes back to classic work by Everett Rogers and many others more than half a century ago. So in the case of electricity, things like the electromagnetic laws, direct and alternating currents, those are the invention stage.

Speaker 2:

But people don't use electricity directly, they use electrical appliances. So that's the innovation stage, sometimes called complementary innovation. You have to figure out the grid, the infrastructure. You have to figure out appliances. So that's product development that gradually happens. People have to learn how to use this safely, and then the final and slowest stage is organizations productively adopting it. And here there is powerful work, by the economist Paul A. David, I think, is the name that comes to mind, looking at the adoption of electricity.

Speaker 2:

It took factories many decades to figure out how to productively use this. At first they tried replacing their giant steam boilers with a big electric generator, but it turned out that didn't bring productivity improvements. What they had to do was rearrange the whole layout of factories, because electricity is much more portable, it can be generated and moved wherever you need it, and so rearrange factories around the logic of the assembly line. And that's not just rearranging factories; it also changes the specialization of labor, the way that workers are trained, and so the whole relationship between the firm and the worker changes. So we talk about how these are all the kinds of adaptations that are going to be necessary to reap the economic benefits of AI, and these things do not happen on the same timescale on which capabilities might improve from one model to another. When models improve, that has to be translated into products, and we talk about how, in the case of software engineering, people are figuring out how to do this with software like Claude Code and Cursor and other ways to use LLMs for software engineering. Then there's a user learning curve. But then the final step, which hasn't even really started happening yet: how does the nature of software firms itself have to change? What are the deep structural changes that need to happen to best take advantage of all of this? So that's our four-stage framework.

Speaker 2:

Just a few quick words on superintelligence.

Speaker 2:

You know, some people who have a different view will say: yes, AI today is normal technology, but once recursive self-improvement hits, all of these bottlenecks are going to go out the window, because it's going to be incomprehensibly faster and superior to the human intellect, and so whatever bottlenecks are holding us back are not going to hold AI back.

Speaker 2:

So there's a lot we say about this. We have a whole section on superintelligence, but the fundamental reason we're skeptical about all this is that I think these arguments misunderstand the nature of human intelligence. Human intelligence is not static and fixed. Human intelligence in the modern economy primarily comes from our tools and our mastery over those tools, and AI is actually one of those tools. So, in our view, improvements in AI capabilities are improving what we should consider to be part of human intelligence. It's not a human-versus-AI comparison. We have a lot more to say about it, but I'll stop there, and I do think that these long-established patterns are going to continue, even if there are technical breakthroughs that allow AI systems to build future AI systems and allow some form of recursive self-improvement.

Speaker 1:

Yeah, thank you for laying that out, and we'll dive deeper into some of these parts, as I have some more specific questions. But I really like this last part that you said about changing the lens and the perspective on this. If we use this tool, this general-purpose technology, then it can really unlock human ingenuity to levels that we haven't seen before, and it can all be in the service of how we want to live in societies and how we want to live on this planet. It is not "we essentially live in a world that is dominated by the superintelligence and we can't do anything about it," which I think is not only hopeful, but also, the way you ground it in the history of humankind and technology evolution, I think makes a really strong and solid case for that.

Speaker 1:

So, as you just described when you spoke about superintelligence, you seem to be one of the very few people saying that AI adoption is actually moving much slower than what's being propagated. I know you have some examples in the paper, and I think you've also been sharing some on LinkedIn, from the dynamo to PC usage compared to GPT usage. Sam Altman doesn't seem to get tired of telling people that AGI will be here by 2027, though he's now also calling it a bubble, and neither does Elon Musk, both being idolized as innovators of their generation by shareholders and markets. You make a really credible case and argument that that's not the case, that AGI is not something we need to worry about anytime soon, and that in fact there are other parts we should actually be thinking about, both in terms of opportunities but also in terms of being very smart about regulations where safety is actually important. So can you speak a little bit more with me about that?

Speaker 2:

Yeah for sure. So there are many aspects to that. Let's start with one small thing, which is the predictions made by tech CEOs. You know, how much stock do we have to put into that? I mean, look, I think at a broad level, people understand that these CEOs, you know they have a reason for hyping up their products. Their predictions are not necessarily reliable, but, you know, is it the case that they know something that the rest of us don't? I mean, it's certainly true that they know more about their own product roadmap than they are publicly revealing, and so you know, they might know what's coming a few months down the road.

Speaker 2:

But our whole perspective is that there are these barriers that exist that are not about the speed of development of the technology but about the way that it integrates into society, and when it comes to that aspect, these CEOs are not by any means the authorities we should be listening to. From everything I've seen, their mental models of the interaction between technology and society are less sophisticated than the rest of us. And you know, I've spent some time doing tech startups, and I think you need a certain kind of naivete to try the same things that many other people have tried before, so it's not a criticism of them. It's good that people are trying these really bold things, but they're not the people to listen to when it comes to the diffusion barriers, as we call them, that we'll have to overcome in order to productively make use of these technologies. And specifically when it comes to AGI, I think there's something intellectually incoherent about the way it's defined and the predictions that are made on the basis of it.

Speaker 2:

There are, broadly, I think, two ways to define it. I mean, there are like 20, but we can cluster them into two things. One is something about the internals of the system: is it truly thinking like a human, is it able to be flexible and general, and all that.

Speaker 2:

But another aspect you can focus on, and this is the aspect that the companies like to focus on a lot, is: is it going to automate the majority of tasks in the economy? Is it capable of doing so? And that's the definition that's actually more relevant to us, because we're less interested in the cognitive science aspect of what is happening inside the AI systems. We're more interested in what their economic effects are going to be and on what timescales. So that's fine, we adopt for the most part those companies' definition. But where we differ is: okay, if the concept you're really interested in is whether this thing is going to be able to do all those messy things in the real world that people have to do, how can you make any claims about whether AGI has been achieved or not based on some silly benchmarks, right?

Speaker 2:

And a lot of our technical work has been, just you know, relentlessly pounding on the serious, serious limitations of these benchmarks, how they completely fail to capture what is actually hard about doing those jobs in the real world.

Speaker 1:

Yeah, real world application.

Speaker 2:

Yeah, exactly. So when ChatGPT came out, you know there was so much hype around the fact that it can pass the bar exam, and that might be true, but a lawyer's job is not to answer bar exam questions all day, and we've addressed repeatedly how attempts to draw conclusions about real world applicability or, you know, job displacement based on these benchmark numbers are not very illuminating. They don't really tell you what the downstream bottlenecks are, and to do that you have to do it on a sector by sector basis. You have to do it with the involvement of those legal experts or medical experts or whoever it is in those specific sectors who know what those bottlenecks are, and so, basically, the AI CEOs are not the ones I would be listening to on this front.

Speaker 1:

I had a follow-up question, which is a good tie-in, and then we go from there. Silicon Valley loves to dismiss critics, and while you are not an AI critic or skeptic, your view of the world as you're describing it could be understood as that. So Silicon Valley doesn't love that and loves to dismiss it. But you are not just a person pointing out a new worldview, you are one of the world's leading experts on the topic. So when industry leaders, when you speak with them, tell you you don't get it, what's your response? Maybe in a more broad sense, because, on the one hand, there are these benchmarks that are failing to resemble the real world, but on the other hand, there are real-world negative consequences that we're already seeing at scale because of this technology, and those are also being dismissed, often as part of the general dismissal. So talk to me a little bit about that.

Speaker 2:

So we have an interesting, I would say, kind of relationship with the tech companies. They're not necessarily dismissive of us, which I have been pleasantly surprised to see, and this might be because we're computer scientists, so it's not easy for them to say, oh, you don't understand the technology. Well, that's clearly not true. We understand the technology, we teach the technology, we build AI for a lot of things, and we also have lots of empirical efforts aimed at improving the state of benchmarking, and I think the companies realize that that's also in their own interests. The poor state of benchmarks today, while it might enable some amount of AI hype, is ultimately in nobody's interest, and so some of the tech companies go so far as to give us API credits in order to test their models with our improved benchmarking and evaluation systems. So I think all of that is nice. It would be harder to do this work if CEOs were out there dismissing us as idiots. But at the same time, we do think they have not been nearly as responsible as they should be in acknowledging the limitations of these tools, and that's not something that's necessarily going to change by hoping for AI companies to do the right thing. I think they have to be essentially incentivized or forced into acknowledging and mitigating the limitations of these tools through public pressure, investigative journalism, research efforts such as ours, and regulation when necessary. So, to the extent possible, we try to contribute to those efforts as well.

Speaker 2:

I do think there are some areas that are pretty good in terms of regulation. When you're talking about the use of AI in medicine or any other regulated area, you don't specifically need AI regulation; a lot of it is already regulated, right? Any medical device, for instance, has to be approved by the FDA, and so, gradually, the FDA has built up expertise in testing AI systems as part of those medical devices, and so there's a healthier relationship, I think, between tech and medical applications. But in some other areas, we're seeing that there is clearly a crisis: these AI chatbots that people are trying to rely on as companions or are asking for mental health advice. So that's an area where existing regulatory frameworks are not sufficient, and so there's an urgent need for action.

Speaker 1:

Yeah, that's a really good point, which I wanted to get to, and I know you also point it out very clearly in the essay, together with these other parts around regulation and how, as societies, we've got to hold these things in check. Before we get there, because you mentioned responsibility: one thing that you write in the essay, which is a quote I highlighted on the first reading, is "the normal technology frame is about the relationship between technology and society." As a designer, this reminded me very strongly of the way Laszlo Moholy-Nagy, the founder of the New Bauhaus, articulated their mission as they started in the 1930s in Chicago, where they said, "We accept the challenge of technical progress with its recognition of social responsibility." So that is, I don't want to say word for word, but very aligned. So I couldn't read the essay without seeing design's role implied throughout your framework as you walked earlier through the stages, because underlying all of that is the understanding of the technology, what it can do and what it cannot do, and the understanding of human needs, and then bringing those things together in a way that design translates need into application, in a way that actually resonates on an emotional level with people.

Speaker 1:

So my question was whether we need a new form of industrial design that comes with this general-purpose technology, in the sense of the Bauhaus, that could become the driver of this diffusion engine that you're talking about. As I've been going through history to research some of this, what would this humanistic design look like? It's really interesting because there are parallels. You mentioned the industrial revolution, electrification, and I've read deep into streamlined locomotive design: this beautiful streamlined design had zero aerodynamic benefit. It was purely to sell people a dream of "this is the future of transportation." Or if you go to the Macintosh, which was packaging technology that existed at Xerox PARC, but in a way that actually created some sort of attachment and made it useful and approachable and accessible for humans. So does that make sense in the context of what you're seeing? And is that part of the way we need to rethink the workplace and these companies, how we're dealing with this new technology?

Speaker 2:

Yeah, that's a great point, and I think we've already repeatedly seen with AI that the extent to which people will take advantage of this technology, the way in which they will take advantage of it, and the extent to which they will be able to understand and avoid the pitfalls are all deeply affected, not only by the capabilities themselves, but to an at least equal degree, perhaps a greater degree, by design elements. And there are so many aspects of this. Yes, creating attachment in some sense is important, but with AI perhaps especially, there's the new risk of what it means to create too much attachment. We've seen controversies recently with these bots having kind of sycophantic behavior, you know, flattering users, creating emotional dependency, and users coming to over-rely on those things. We've seen even simple things like: will the user see this as a tool? Will they see it as a coworker? Will they see it as, you know, an intern or another kind of person? All of that is strongly affected by the design elements of these tools. And in particular, I think the user interface of these AI tools themselves is the best way to communicate to users the limitations of these tools, and that's one area where companies can do a lot better; again, I think right now they're not doing a great job.

Speaker 2:

So I'm going to give you one example of something that's been on my mind, based on a controversy that was very recent as we're recording this in late August, and it relates to Google's upcoming Pixel 10 phones. They have a very interesting feature where you can dramatically zoom into an image and then AI fills in the details, generative AI, I guess, these are upscaling models, and they work, to put it metaphorically, by imagining what might have been in that little part of the image. I mean, it's a cool feature. I can imagine it certainly has a lot of entertainment value. It might also be useful to make certain images much more aesthetically pleasing. But you can also very easily imagine the risks and downsides of this, because AI is making up details in images, and people are probably going to be using this to zoom into, let's say, people's faces, and then it's completing that based on what is most statistically probable. So it might put in an image of a celebrity there, or it might generate a person based on what is statistically most likely given racial or cultural stereotypes.

Speaker 2:

So these are all things that might happen, and I think companies are missing an opportunity to use these user interfaces to educate people about what can go wrong. When I've seen screenshots, and the phones are not out yet, but when I see screenshots of how the feature looks, none of that opportunity to educate the user is being taken advantage of. And perhaps most depressingly, Google informs users that this feature is best used for things like landscapes, you know, nature and stuff where, yeah, it makes the image prettier and the details are maybe not that important.

Speaker 2:

But what they actually demoed it on was zooming in on a car that was like two pixels in the original image, and then the zoomed-in image has this very beautiful-looking, specific make and model of a car which, as far as anyone knows, was filled with details made up by AI. So clearly the PR people, or whichever division within Google came up with that example to advertise the feature, is doing it in a very irresponsible way, which I'm not sure the engineers would support, based on what they have said about what you want to use the feature for. So with these kinds of things, right, there's a long way to go in getting the design right to really communicate to users what are appropriate and inappropriate ways to use these tools.

Speaker 1:

Yeah, those are great examples, and it brings me back to conversations I've already had on this podcast, the most recent one with a writer on the AI team at Microsoft who studied poetry. We had a long conversation about how there was a time when it was very clear that you were talking to a computer system, and it was weird, and the weirdness was good at communicating to the user that you're talking to a computer system and there's some strange stuff going on.

Speaker 1:

So there was a built-in emotional barrier, and I think in a lot of ways, over the last 24 months at least, the marketing messaging has been the exact opposite, right? And there's a need for an ethical code, because, as you mentioned earlier, we're working so close now to the human organism that, unfortunately, and we'll get to that in a second, it has already led to loss of life.

Speaker 1:

We can impact people's physiology, biochemical reactions in the body, with these systems, and I think your call for responsibility, and for these interfaces being clear about that and having a really humanistic approach here, is going to be essential. Aside from overall helping lower the barriers to entry and increasing literacy and regulatory frameworks, I think from the product-maker side, on the product design and engineering side and then the marketing side, it's essential that there be a code of conduct, which I think is also kind of built into your worldview. It's much more in service of humanity in a positive sense, rather than whatever capabilities seem cool at the onset while no one really looks at the unintended consequences down the line.

Speaker 1:

Yeah, which is probably a good segue into the next question. We are in this massive hype cycle, and there's no arguing really over that. But we talked about responsibility, and I can't help but look back at the mass adoption of social media. We got billions of people connected, and made them more connected, with social media and social media algorithms, and we didn't call them AI back then, but there was a very similar kind of, let's call it, optimism about democratizing information and connecting people. And that didn't work out so great, because we are very clear on social media's impact on teen mental health, and now on Character AI and Meta AI leading to tragic loss of life, as I said before. So those are real negative consequences. You talk in the paper about governance, and you've mentioned regulation and responsibility several times now. What makes you think we'll get AI governance right, or more right than we did with prior IT-related general-purpose technologies, or general-purpose technologies as a whole?

Speaker 2:

Yeah, I mean, I'm certainly not confident of that, and especially when it comes to policymaking, I think it's really hard to be confident that we'll get anything right. There are so many reasons why things can go wrong. I do think there are some reasons for cautious optimism. In many cases when it comes to AI, like I was saying earlier, many of the risks don't necessarily require new regulation. They require appropriate enforcement of long-existing regulations, regulations that long predate AI, when it comes to those specific domains: financial, medical, legal, etc. And that's an easier challenge in many ways than with social media, where this whole attention economy was a new set of questions that regulators weren't used to thinking about. We do, of course, have that same set of questions with AI as well when it comes to attention, and yes, these are things we very much need to be concerned about. And just to clarify, when we say AI is normal technology, the point is not "nothing to see here, move along."

Speaker 2:

General-purpose technologies in the past have, over and over, had unpredictable social effects. That is part of what it means to be a normal technology. With cars, for instance, my colleague Zeynep Tufekci likes to point out that one of the things people were worried about was accidents; they were regarding them as faster horses, essentially, and that was a risk. But things like restructuring our whole geography and the rise of suburbanization, these were not things people were thinking about, or could have thought about, in the early days of automobiles. So, similarly, the ways in which AI will deeply restructure our society, those are things that I'm not claiming I have any way to anticipate sitting here, right? That's not at all the approach.

Speaker 2:

Instead, when it comes to policymaking, what we can offer, I think, is some cautious optimism, based on the fact that, with social media, there seems to be a recognition among US policymakers that they got it wrong, and there is at least a desire not to repeat the mistakes of the previous generation of technology. That is something I think we can take advantage of, we as in civil society, everybody who's interested in a healthier relationship between technology and society. With social media, it took until the technology was very, very entrenched in our lives and kids' lives before the research even really got going. With AI, at least there is a recognition, because ChatGPT was such a surprise to so many people, and it does seem to have kicked into high gear a lot of research on trying to understand how people are using it. So there are some reasons to be somewhat optimistic compared to social media, but it is going to take a lot of work. Nothing can be taken for granted.

Speaker 1:

Yeah, and I think I share your cautious optimism because we do seem, in fact, to learn some lessons from some of those mistakes.

Speaker 1:

And the other thing that I thought was really interesting in this context is that there is the "do no harm" on the individual level, right, asking ourselves continuously whether this is something that can really lead to immediate harm very quickly.

Speaker 1:

I think that's the thing: the cycles seem very quick in terms of seeing some of those really grave consequences. The other part is interesting: Kissinger and Schmidt, in their latest book, Genesis, talk about how we need to figure out how to handle these technologies that are so powerful and keep societies intact, bring them into the future intact. And that "intact" stuck with me, because it's such a powerful word, there's so much packed into it. And I think what you're saying is we don't even know exactly what that entails, but we need to do a lot more research. And I see a lot of the responsibility also on the makers of the technology, not just the rest of society, to see what that might look like.

Speaker 2:

Yeah, definitely. I mean, I don't know if intact is going to be something we're going to be able to achieve. Again, when we look at past general purpose technologies, I mean we survived, but we didn't survive intact. We survived in a very different form as a society, stronger in many ways but actually weaker in some ways. And I think that might happen with AI and we can try to mitigate those negative effects. But I think there will be negative effects and I don't think there's any way to sugarcoat that.

Speaker 1:

So switching gears.

Speaker 1:

You talked at the beginning about how, when we're looking at a general-purpose technology, not only is the direct application of the capabilities important, but it often requires reworking pretty much everything around it: structures of companies, how companies and organizations are being led, skill sets, and so on and so forth.

Speaker 1:

And what I loved about both the book and the essay is that you apply that critical thinking, but also this moral, humanistic worldview, in everything that you write. So, in the context of that, what do you think are some of the key changes that need to be made by leaders, around the way they lead themselves and then how their organizations might function? I remember a quote I took out of the book, where you write "broken AI is appealing to broken institutions." I also took that a little bit further: you can only apply AI to the extent to which you allow your organization to change. So talk to me a little bit about what those leadership skills are. How do leaders need to look at AI beyond just bolting the next chat interface onto whatever exists already?

Speaker 2:

Yeah, definitely. So we can make a distinction between uses of AI, especially generative AI, but any kind of AI, to replace workers versus to augment their skills, right? I mean, it seems clear that there is such a distinction. I think a lot of leaders tend to be very focused on the former, for obvious reasons, cost-cutting being one of them. But I think even from a purely profit- and revenue-focused perspective, it's not clear that that's the best approach.

Speaker 2:

Generative AI in particular has many serious limitations as an automation technology, because it's unreliable. It has a much higher failure rate than traditional digital automation, which tends to do the same thing every time. You're not necessarily realizing that much of a cost saving through automation, because you still need layers of human oversight for when things go wrong, and they will, and many companies have found this out to their regret. A simple example is replacing customer service representatives with chatbots. Many people, including me, thought this was the first thing that was going to happen. I mean, chatbot, it's right there in the name. But pretty soon we started hearing many comical stories, like that one with Air Canada, where the chatbot made up a policy that didn't exist, and the case went to a Canadian tribunal, which forced the company to abide by this fake policy that its chatbot had made up, right?

Speaker 2:

So there are many financial consequences of rushing headlong into automation. But this role of augmenting human workers, I think, has a much greater potential. In many ways, the capabilities of generative AI tools and humans are complementary, and the thing that I always advise business leaders about this is that it's not something you can figure out from the top down and impose upon your workers. Workers are excited to try these types of things and to recognize these complementarities, and the thing that leaders can do is create the conditions under which that kind of bottom-up innovation can flourish and then kind of get out of the way. And that's a very different approach from automation, where you have to figure out where you can realize the cost savings, then implement that and then do the hard work of firing those workers, right? So this is a very, very different kind of approach, and it is something I think is super important to consider. So that's one big thing that comes to mind.

Speaker 1:

Yeah, I was going to ask you: is this the moment where the parallels to the industrial revolution actually break? The industrial revolution was all about consistency, predictability, automation, and that required a certain type of leadership and management style, and setting up organizations in a certain way, incentive structures and so on and so forth. But now we're actually at a point where almost the opposite needs to become the default. We've always had very innovative organizations that were led differently and fostered ingenuity. But when we're talking about augmenting human qualities, then it seems like we really need to rethink the leadership and management playbook in a way that, for most leaders, is extremely foreign, because everything that's still being taught and propagated is the industrial revolution management playbook 101, rather than what you're describing, which really is: how do you cultivate an entire organization that is based on embracing failure, experimentation and creativity to really maximize its own human ingenuity, augmented and accelerated by these systems?

Speaker 2:

Yeah, completely agree. That's a really nice way to put it.

Speaker 1:

That's very exciting, and seemingly a lot still needs to be figured out, because what's happening out in the world is not that yet, but that's exciting. So then, to finally wrap us up: you teach, and your students are fortunate to get to tap into your wisdom on a regular basis. Our students collectively, and young people in the world, are facing real, at least short-term, challenges. You spoke about jobs being eliminated; a lot of entry-level junior jobs are being eliminated, at least for now, and we'll see how that goes. But there's also massive uncertainty about the state of the world, and that comes from the misinformation and attacks on institutions and democracies. What advice do you have for the young people that we teach, that we get to work with? How can they set themselves up to thrive in this world?

Speaker 2:

Yeah, definitely. And I would say, if I were to boil this down to a word, I would say flexibility. I mean, I don't have all the answers, I don't know that anyone has all the answers, but it's clear that things are not only in a period of flux now, but that they probably will continue to be for the foreseeable future. So I think you know, what's going to be most successful is the underlying ability to adapt to those changing conditions right, rather than trying to predict what those conditions are going to be a few years from now, because I think no one knows the answer to that.

Speaker 2:

I think, just extrapolating a little bit from the experience of software engineering, what a typical software engineer does in their day-to-day is changing quite rapidly, because a lot of the mundane stuff can be left to AI. But that's far from implying that you can outsource your entire job to AI. In fact, deep understanding of software systems has become more essential, not less, because AI can do some of the things that are more of the rote work involved in software engineering. So it's reasonable, I think, to extrapolate that something similar will play out in many domains, where deep human understanding very much remains essential, but paired with the ability to quickly change your workflows to take advantage of the advancing frontier of AI capabilities. And, similarly, flexibility not just in terms of learning new tasks in your day-to-day workflows but also in being willing to explore new labor arrangements.

Speaker 2:

So it's possible, again in the example of software, that AI is going to advantage startups compared to big tech companies, because if it becomes possible to sit down with one particular customer, deeply understand their software requirements and custom-build software for them within, let's say, a few weeks, it's not quite possible yet, but imagine a future where advancing AI capabilities make this possible, that implies a sea change in the entire way the industry is structured. It's not a matter of a big company creating one giant piece of software that everybody is forced to use and forced to adapt their workflows to, whatever Jira or Slack or whatever your enterprise software does for you, but instead being nimble and actually creating software for a thousand different customers. So what kinds of labor arrangements will be best suited to take advantage of that kind of business model and revenue model? I don't know, but it requires a certain degree of flexibility.

Speaker 1:

That's a fantastic way to wrap up the conversation, arvind. We covered a lot of ground. I appreciate your time, your insight, the work that you continuously put out there is, as I mentioned at the beginning, incredibly inspiring, and so I'm really looking forward to the next essays coming out, which I believe, from some LinkedIn posts, might lead into a new book, which is also exciting. So we'll link to all of the material we talked about in the show notes. So thank you so much again for your time and we'll talk soon.

Speaker 2:

Thank you so much. This has been really fun, and yes, AI as Normal Technology is going to be our next book.

Speaker 1:

And that's a wrap for this week's show. Thank you for listening to Poets & Thinkers. If you liked this episode, make sure you hit follow and subscribe to get the latest episodes wherever you listen to your podcasts.
