
Poets & Thinkers
Poets & Thinkers explores the humanistic future of business leadership through deep, unscripted conversations with visionary minds – from best-selling authors and inspiring artists to leading academic experts and seasoned executives.
Hosted by tech executive, advisor, and Princeton entrepreneurship & design fellow Ben Lehnert, this podcast challenges conventional MBA wisdom, blending creative leadership, liberal arts, and innovation to reimagine what it means to lead in the AI era.
If you believe leadership is both an art and a responsibility, this is your space to listen, reflect, and evolve.
AI Sovereignty & the Literacy Gap: Policy lessons from the frontlines with Jaxson Khan
What if the biggest regret we’ll have in 10 years isn’t over-regulating AI, but failing to educate people about it? In this episode of Poets & Thinkers, we explore the intersection of AI policy, national sovereignty, and digital literacy with Jaxson Khan, a unique cross-sector leader who transitioned from startup founder to senior policy advisor for Canada’s Minister of Innovation, Science and Industry. From his home in Toronto, Jaxson shares hard-won insights from the frontlines of AI policy development, where he helped craft Canada’s approach to artificial intelligence across multiple critical areas.
Jaxson takes us behind the scenes of government AI strategy, revealing why less than 25% of Canadians have any formal AI education despite the country being home to some of the technology’s foundational researchers. He explains Canada’s Sovereign AI Compute Strategy – a response to the brain drain that sees Canadian talent and capital flow south to Silicon Valley – and makes the case for treating AI infrastructure like a public utility. Through his current work helping nonprofits and corporations adopt AI, Jaxson demonstrates how the same technology reshaping global geopolitics can be leveraged for social good.
Throughout our conversation, Jaxson challenges the notion that we need to choose between innovation and regulation, instead advocating for what he calls “meaningful consent” in privacy frameworks and emphasizing the critical importance of cultural sovereignty in AI development. His perspective bridges the technical, political, and deeply human aspects of our AI-powered future, showing how policy decisions made today will determine whether societies remain intact through this transformation.
In this discussion, we explore:
- Why AI literacy should be treated as urgently as national defense in the modern era
- How Canada is building sovereign AI infrastructure without trying to replace Big Tech
- The three pillars of AI sovereignty: technology IP, data and compute, and cultural preservation
- Why privacy laws that predate iPhones are a “travesty” in the AI age
- How the imagination gap is holding back traditional companies from AI adoption
- Why NGOs and government agencies must accelerate AI adoption to stay relevant
This episode is an invitation to think beyond the hype and fear surrounding AI, focusing instead on the practical policy frameworks and educational foundations needed to ensure this powerful technology serves humanity’s highest aspirations.
Resources Mentioned
Canada’s Sovereign AI Compute Strategy
“Bridging the Imagination Gap” Royal Bank of Canada white paper
OECD data on international AI adoption patterns
“AI as Normal Technology” by Prof. Arvind Narayanan and Sayash Kapoor
“Genesis” by Henry Kissinger, Eric Schmidt, and Craig Mundie
Get in touch: ben@poetsandthinkers.co
Follow us on Instagram: https://www.instagram.com/poetsandthinkerspodcast/
Subscribe to Poets & Thinkers on Apple Podcasts: https://podcasts.apple.com/us/podcast/poets-thinkers/id1799627484
Subscribe to Poets & Thinkers on Spotify: https://open.spotify.com/show/4N4jnNEJraemvlHIyUZdww?si=2195345fa6d249fd
Welcome to Poets & Thinkers, the podcast where we explore the future of humanistic business leadership. I'm your host, Ben, and today I'm speaking with Jaxson Khan. Jaxson is the CEO of Aperture AI, a consulting firm helping institutions in the private, public and social sectors with AI adoption and policy. Prior to this, Jaxson served as Senior Policy Advisor to Canada's Minister of Innovation, Science and Industry, where he played a pivotal role in developing a $2.4 billion investment into AI compute, adoption and safety. Jaxson is a Senior Fellow at the University of Toronto's Munk School of Global Affairs and Public Policy and a board director of the Human Feedback Foundation.
Speaker 1: I first connected with Jaxson when he was at the accessibility startup Fable, and I've long admired his unique cross-sector perspective on AI governance, having worked at the intersection of technology, policy and social impact. So I was excited to speak with Jaxson about the behind-the-scenes of government AI strategy, and what it takes to create meaningful governmental oversight and develop AI adoption frameworks for long-term societal benefit. Not dissimilar to the EU, less than 25% of Canadians have any formal AI education, despite the country being home to some of the technology's foundational researchers. Jaxson explains Canada's Sovereign AI Compute Strategy, a measured response to the brain drain that sees Canadian talent and capital flow south to Silicon Valley. His perspective bridges the technological, political and deeply human aspects of our AI-powered future, showing how policy decisions made today will determine whether societies remain intact through this transformation for decades to come. Jaxson challenges the notion that we need to choose between innovation and regulation, instead advocating for what he calls meaningful consent in privacy frameworks and emphasizing the critical importance of cultural sovereignty in AI development.
Speaker 1: So let's dive in. If you like the show, make sure you like, subscribe and share this podcast. Jaxson, hi, where does this podcast find you?
Speaker 2: I'm currently at home here in Toronto, Ben. It's nice to see you.
Speaker 1: Awesome. Well, let's get started. Maybe you kick us off by telling us a little bit about yourself, who you are, what you do, what you've done in the past, and then we'll get into the questions I have for you.
Speaker 2: Sounds good. It's a good existential identity question. I basically spent a bunch of time in the startup sector, so I worked at a number of different companies. That's when you and I first met, when I was working at a company called Fable in the accessibility technology sector. I had a great time building a few different companies at different stages: there were Series B companies with 100-plus employees, and there was me and another guy, with me being employee number one.
Speaker 2: I've also run my own business in the past, and I do once again now. And then I had an opportunity a few years ago from an old friend of mine. He said, listen, I hear that the federal government in Canada, one of the government ministers, is looking for basically a tech person who could own a few policy files. So I did take the jump across sectors a few years back and joined the federal government as a Senior Policy Advisor to the Minister of Innovation, Science and Industry in Canada. That was super cool. I basically got thrown into AI policy, quantum, semiconductors and innovation writ large, and that was extremely exciting. It was a huge crash course in public policy, and I got to do a lot of work with the ecosystem, especially in Canada, but then also interface with a lot of different groups and organizations around the world, sometimes in Europe, sometimes in the US. So that was a lot of fun.
Speaker 2: And then the past year or so I've spent on kind of a mix of traveling and consulting, and now I've just moved back here to Toronto with my family and am mostly focused on AI adoption. I feel like that is the phrase of the day, but really, I think what it means is: how do we help organizations embrace the new tools they're being given right now? And I feel like, you know, Ben, as we've talked about before, we're facing more challenges in the world than ever before, and I'm feeling a mix of optimism and some realism, like, yeah, this is really a moment that we have to meet as a society. So that's where I'm at today.
Speaker 1: Awesome. And I've been really excited to have this conversation with you, especially because of your background and knowing what you've been working on. When you walked through your career just now and described how the government was looking for a tech person, it really pointed out how you've covered these fundamental new technologies that are going to shape the future. Specifically for this conversation, when we're talking about AI and the humanistic future of leadership, we do have to talk about policy, regulation and geopolitics, which is why I love that I get to have this conversation with you. I've been reading a paper called AI as Normal Technology, published by two Princeton researchers, which argues that AI adoption, as you said, is the word of the day, but probably also of the decade, if not the following decades, and that it will follow the same cycles and time horizons as other general purpose technologies in human history.
Speaker 1: At the same time, we're already experiencing both the positive and negative impacts of deployed AI technologies across a ton of sectors and layers of society. So you've spent the last years at exactly that intersection, working for and with the Canadian government. I admire the nation for its culture and its values, and, as the ninth largest economy, it's also a clear driver of a lot of the philosophies and frameworks that will hopefully set us on the right track into the future. So I want to kick us off with a question that I've been pondering: what do you see currently and into the future, and what decisions being made today will we regret in 10, 25, 50 years if we don't get them right now?
Speaker 2: Oh gosh, okay, some big openers here. Well, first off, in terms of the essay you referenced, AI as Normal Technology: listen, I think much of AI certainly is normal technology.
Speaker 2: When this ChatGPT boom first happened, I think there was a huge amount of concern, and I've been lucky enough to speak to some of the top researchers in the world, even some of the godfathers of artificial intelligence. And if you've read into superintelligence, or that AI 2027 prediction, I think any public policymaker should take seriously even the remote possibility of a superintelligence, or of emergent ways of using AI that we can't even predict, that could cause mass harm. I think that could be a real thing. At the same time, the vast majority of AI use cases that most people are going to encounter in their lives, I mean, this is novel, it's innovative, it's interesting, it can have profound impacts on the way we work, but it ultimately isn't necessarily going to cause harm. That being said, like any dual-use technology, where it can be used for potentially major harm or major good, we should think about how we regulate it and how we use it appropriately. These are all considerations, right? And so I think about: what are we ultimately going to regret in 10 years? I don't think we're going to regret not immediately imposing severe regulations. I don't necessarily think that's the right approach; I think that can be measured in gradients of how public policymakers think about it. But the real thing we're going to regret is not making more upfront, major investments into AI literacy.
Speaker 2: So there was a report that I was involved in that just came out in Canada, actually last week, and one of the things we found was that, at least in our country, less than 25% of Canadians have any formal or informal training in artificial intelligence. That's wild, right? We've known about this now for years, let alone, if you think about LLMs, the last three years. We've all known that this is here. And we could have really launched and said: no, we are one of the inventors of this technology in Canada, neural networks, deep learning, a lot of the fundamental tech, even through that Google Brain paper about transformers, I think in 2017. We could have been one of the first in the world with a very, very strong AI literacy curriculum, but I think we're behind.
Speaker 2: And more globally, if you look at trust in AI, including in the United States, where they're talking about preempting state legislation: most AI is not going to be harmful, that's great, but you've got to help make the case for it through mass education and literacy, really bringing people along, because there is a lot of fear and a lot of mistrust, and tackling that trust gap is essential for positive adoption. And then, at the far end, for the potentially more harmful use cases that we see in society, like deepfakes and misinformation, literacy can also help on that front. It can really help people understand what's right, what's wrong, and how you can verify. So that verification stage, empowering people to have higher agency and the ability to verify and use AI, I think that's a critical gap right now.
Speaker 1: I agree. I think what this brings up for me, and it also leads into some of the other questions I have: any insight on your end as to why that's been such a miss, such a gap? Why have there been no efforts to really educate the general public on artificial intelligence, its potential impact, just an overall understanding of what this big term means? It doesn't mean just LLMs, it doesn't mean just foundation models, it doesn't mean just some machine learning optimization. These gaps seem to exist in almost all nations around the world. Is this a typical path that we've seen with general purpose technologies, or is this something that is AI-specific?
Speaker 2: I think we've seen waves of general purpose technology throughout the years that have cycles of fear and also promises of complete and total abundance, right? I should probably be better historically informed, but I think about the internet and computers themselves, I think about nuclear technology, the Industrial Revolution itself. People have always been scared. There have always been worries about job loss and automation, and not that those fears are unfounded. But traditionally, if you think about ATMs and banks, everybody was worried that the bank tellers were going to go, that all those jobs were going to go away. The truth is there have just been shifts, right? People have moved on to sometimes higher-touch customer service roles, or they may have moved into other positions in banks. So there's the automation angle. The second is, on the fear side: why have there not been commensurate investments in education and safety? I think a lot of people were blown away when the research preview of ChatGPT came out, blown away like, oh my gosh, all of our jobs are going to get replaced right away, AI is going to take over super fast. Well, we're three years out, and certainly there are some transformations happening, you can build lighter-weight companies in a number of areas now, but it's not like our society has wholesale changed overnight, right? And maybe it won't even have wholesale changed five or ten years from now. We don't know yet. But if we can plan for a society where those investments could be made and everybody can come along for the ride, that's probably a good thing, and it will help people adapt to new jobs and new ways of working. I think people jumped first to the industrial investments: the hundreds of billions, maybe even trillions, going into data centers powering this revolution.
Speaker 2: And the second jump was on the policy side. I feel like there was a real jump towards AI safety, and I think some of that was merited, right? I think having government-funded AI research and safety institutes is probably helpful: have pre-deployment testing on the most powerful models at the frontier and make sure they're as safe as possible. And a lot of the top companies have been willing, and even quite useful, in engaging in those processes. You saw that with the UK hosting the first-ever safety summit. Where society has probably fallen short is saying: okay, well, if we think about our prior line of work, Benedict, like accessibility, and data sets that are less divisive and more inclusive, I think those are clear gaps. And then training to make sure that people in underserved communities, but even just people in the general middle of the population, are also better trained. All of these are really valuable things to do. I will also say quickly, it's not that nothing has happened. The United States recently had that executive order saying let's do K-12 AI literacy.
Speaker 2: I actually met one of the people who's going to be in the Department of Labor leading the charge on that, and I'm very excited to see what they do there. In Europe there have been programs; even in Canada there have been programs and content developed. I think it's just about funding and dissemination from the best sources in the world, making sure that there's good dissemination and diffusion across society.
Speaker 1: Yeah, that last part, I think, is encouraging, and it seems to be happening faster than it did with general, let's say, computer education: making it part of the overall curriculum and really getting especially kids to learn how to use this powerful technology in all the various ways they could implement it into their daily lives, and, through that exploration, understand what's possible and how to use it specifically for good.
Speaker 1: That is one thing I can certainly say from working with my students: the AI technology that's already available is pretty much part of their day-to-day life. So building more foundational education around that will, I think, be hugely positive, especially for the younger generation. You already mentioned some of the work that you've done with the government in Canada. More specifically, I would love to talk with you about a program that you were involved in, which is called Canada's Sovereign AI Compute Strategy. Talk to me a little bit about that program, and maybe also more broadly: from your learnings through this program, what do countries need to do to approach their respective AI strategies? What have you learned through this work? Maybe start by just giving a quick outline of what the program is.
Speaker 2: The Sovereign AI Compute Strategy and the programs under it were developed due to a strong unmet demand from, effectively, the Canadian economy. So, first off, if you look at global rankings, Canada ranks really well in talent for AI; in fact, we have probably some of the most talented AI researchers in the world, some of the foundational researchers. Where we've been falling short is capital, which has historically been a problem for the Canadian innovation ecosystem. There's a lot of talent, but capital is often pulled into the Valley, so talent often gets drained away too.
Speaker 1: Yeah, similar to Europe.
Speaker 2: Yeah, exactly. And the challenge, of course, too, is that AI has required massive infrastructure investment, whether it's earlier on on the training side or on an ongoing inference basis. And so when we looked at the challenges, we were hearing over and over again from startups, especially the larger scale-ups in Canada, that compute is a high cost. So what can we do to cross that gap? For one, make sure that there's enough infrastructure over the medium and long term that Canadian companies can rely on, and ideally that infrastructure will be sovereign, so that maybe they can even pay below-market rates, especially startups or researchers. And the second is, in the short term, can we make sure that we help fund and subsidize access to compute? So when we were in the government and looked at this, it honestly seemed like potentially a home run to invest in. This is a very clear need.
Speaker 2: Other countries are stepping up. The United States is investing billions of dollars in national research resources. Obviously there are big partnerships and, effectively, massive corporate subsidization, from, let's say, Microsoft to OpenAI, or Google and Amazon to Anthropic, let alone the US Defense Department. Even France is investing big into Mistral; you have President Macron being very vocal about that. And lots of countries are making big strides on this, Japan, Denmark, all investing in compute. So Canada is actually probably behind the curve on investing in compute, but I'm hopeful that this program can close the gap quickly, and then maybe even build enough sovereign infrastructure in the future to come out ahead. And again, the goal of this is not to try and replace or supplant the hyperscalers like Amazon or Microsoft or Google. The government is never going to do that or have enough capital to do it; it doesn't even make sense from a cost-to-benefit ratio. But it is probably a good idea for there to be a base amount of sovereign compute available for more sensitive, let's say, government use cases, for startups, for researchers. There's always going to be value.
Speaker 1: It's kind of treating it as a utility on a country or nation level, which is not a bad idea. And there needs to be, as you said, some level of sovereignty built into the way a country runs its AI strategy, but not with the goal to really replace any of the public or private companies that are running large-scale compute.
Speaker 1: So that makes a ton of sense. You already mentioned some of the obvious big players in the market that are driving a lot of this deployment, which brings me to a follow-on question. The current deployment of these AI technologies, really up and down the stack, from the silicon all the way to the models and the distribution of the compute and the technology itself, is controlled by maybe two handfuls of companies at most, most in the US, some in China, and here and there some across the globe in other countries. So how do you think about sovereignty, or how do we need to think about sovereignty, if most of the technology is controlled by a very small number of companies? What's your take on that?
Speaker 2: Yeah, I think about sovereignty in the context of, number one, critical technologies and intellectual property. The second area I think about is data and compute, and the third is cultural sovereignty. On the first one: is Canada going to be able to own all of the IP for artificial intelligence? Definitely not, right. But could we, in the process of being a part of a global AI economy, own the intellectual property for some key aspects of AI? For example, I think about chips. You've got a company in the Netherlands, ASML, which makes 100% of the most advanced semiconductor lithography machines, and they own the IP for that. That's instrumental. They don't own the supply chain, there's no way, but they own this critical piece of machinery and its IP. So I think about, in Canada, can we own some critical tech around AI inference chips, maybe a critical component that goes into those, some sort of interconnect, something with silicon photonics or optical connectors? I don't know for sure; you'd have to talk to someone super technical to really understand what that is. But maybe it's a particular type of chip, right? I've met people who are literally embedding the models themselves onto the chip and designing highly specialized ones. I think that could be very interesting, and so, could we do that? I think it's a very good idea.
Speaker 2: I think we should figure out what those key swim lanes are, those key areas, in particular within the US supply chain, where we make ourselves integral, so that, also in the context of the trade war, we have a stronger position and can say: listen, we are an essential part of the United States economy. There's this old phrase in Canada, I almost used the word parable: the hewers of wood and drawers of water. We're providing all these raw materials to America; our crude oil goes down to be refined in Texas. But we want to be value-chain providers, right? And we can do that, often through owning the intellectual property behind it or, at the very least, some critical hard pieces, factories, manufacturing. That does happen here, and that is highly valuable.
Speaker 2: The second part is data and compute. I don't think we need complete data sovereignty as a country; that would be highly expensive. But having some servers co-located here, including some that are sovereign, owned by Canadian-headquartered institutions and companies, I think is a good idea, especially for sensitive government applications. It's a security thing as well. That doesn't mean there aren't going to be massive server build-outs from the tech giants, and I think we should welcome those as a country.
Speaker 2: The other piece on data is that we should make sure we have a very, very strong data law and ecosystem. There's a privacy law that I helped work on in the government; it's taken, I think, over a decade now to try and get modern laws, and our current law is over 22 years old. That's a travesty, right? I don't want my family, or whoever is living in Canada, to grow up and just have their data be a free-for-all. The fact that we have privacy laws that predate AI, that predate iPhones, let alone social media, that's just a travesty. It's like we missed three rounds of technological innovation to even have modern federal privacy laws. So I think that should be order of business number one on data sovereignty: not let's go and build a bunch of data centers just to have data in Canada, but let's have modern laws so that we have good rule of law around data in our country.
Speaker 2: And then third is cultural sovereignty. I think about all these English-language models, a lot of them being trained on American data. Is Canadian culture just going to get diluted? High chance, right? But I would want to make sure that we preserve our Anglo-Canadian content, that there are unique values in our histories and our stories, the way we talk about things, the content we have. The second is Franco-Canadian content, Quebecois content: super valuable. We want to make sure that there are data sets that can be used to train LLMs. And the third is Indigenous and Inuit communities and Northern remote communities: making sure that their data is not lost and that they actually have ownership and control over it. You don't want them to just get basically taken over by foreign-trained artificial intelligence systems; I think that's just not a good thing. So all of that matters to protecting our cultural sovereignty, and we should be really, really attentive to it.
Speaker 1: One word that brings up for me, which has come up in other conversations I've had for this podcast, is dignity. It's one of the reasons why I wanted to speak to you specifically: because, as I mentioned earlier, I have really enormous respect for Canadian culture and the value set that is embedded in it. And I think underlying what you mentioned toward the end here, data sovereignty, making sure that cultural representation is part of the data set, is a sense of dignity, something much deeper than all the technological discussions: to say we are here to use this technology for human good, for the individual, for society at large and, in fact, for every culture and every society in their own way.
Speaker 1: So does that extend further? Because you mentioned the privacy laws: we've seen, at least in Europe and in Canada, some real innovation and modernization of regulation, both around actual data privacy and around things like accessibility. But are there ways to think about this moment also as a chance to innovate in legislation itself? Is real innovation maybe needed here? Because we can't miss this: this is a technology that's evolving so fast and is so vast in its impact. Does it also require innovation in the regulation itself?
Speaker 2: Definitely could. I mean, I think Canada sought to take a bit of a balanced approach and make sure that there is a principles-based framework for privacy that can also, for example, have exemptions for socially beneficial use cases quite easily. I think that's essential for health data, and it's important for financial technology and open banking. That's exactly what we should be trying to do with legislation: make it easier to do the good thing, make it easier to comply, clear rules of the road, and if you do something that's really bad, there should be actual consequences and penalties. There have been some major privacy breaches in Canada with no consequence, whereas for comparable issues, sometimes even by the same company, in the UK or Europe there have been some real consequences. I do think there are clearly some gaps and challenges and probably some failures of GDPR; I don't recall if it's the Draghi report or others that have looked into how to make it more effective. One thing I worry about with Europe and GDPR is that, because of the notice-and-consent model, you're just hit with cookie banners and these annoying things, and you're not really providing informed consent. So what I care about is meaningful consent: designing a system where, if your data is going to be used in a really meaningful, significant way that's going to impact your life and livelihood, we focus on those use cases. I don't want Canadians to be surprised. I want them to know: I'm interacting with an AI system that's going to use all of my financial data and make a decision about a loan application, or a health system that's going to run analytics that might impact the type of care I receive. Those are really important wellbeing decisions, right? Not just, oh no, I've been signed up for another newsletter; I think most people tune out a lot of these. It's about how you design a privacy system that is, I wouldn't say future-proof, but attuned to the future: something that is prepared, that actually ensures you move towards more meaningful informed consent. Some people have talked about, instead of a notice-and-consent model, focusing more on things like differential privacy. I think I spoke to Gillian Hadfield about this at the Schwartz Reisman Institute in Canada.
Speaker 2: But these are all things that should be looked into, right? Bottom line, we should make sure we have something that is new and better than the status quo. I think perfection is the enemy of the good on this front, but ideally we design something that can be iterated on over time. We've just been stuck in this situation where we try to get monolithic legislation that encompasses everything. What happened in the past iteration of the Canadian government was that we tried to do privacy and AI legislation at the same time as a three-part bill, and I think that was maybe too big, too unwieldy for parliamentarians to really get their heads around and go through in a reasonable amount of time. So, better not to slice it up into a million different parts, but also don't try and do everything at once; focus on getting something big done. I think that would be more successful and, honestly, better for the legislative process as well.
Speaker 1: Makes complete sense, which is a fantastic segue into my next question. I'm going to read a quote from the book Genesis by Kissinger, Schmidt and Mundie. They write: in order to bring societies along intact, and I thought intact was a really important word here, the benefits that may be reaped from AI would need to be incorporated into human institutions incrementally. So how do we reconcile this notion, which I think is right from everything we've talked about so far, with the immense hype-driven speed of deployment we're currently seeing, as well as the clear weaponization both by state actors and by businesses at large scale? How do we navigate the reality you just described, where we have systems in place that are fairly slow, while at the same time we have super fast deployment and rollout of these technologies? How do we make sure that we actually carry our societies intact into this AI future, which I'm generally hopeful about, but, I think, only if we're realistic about the potential downsides?
Speaker 2: Yeah, and I was lucky enough to see Mr. Schmidt speak recently, and I have a copy of this book as well that I finally need to read, though I've been reading some summaries of it. Listen, if our AI engineers and researchers are now using artificial intelligence to accelerate the rate of research and development on the next version of AI, it is incumbent on our societal institutions, civil society, government and nonprofits to use AI as well. There has to be a way that we accelerate the velocity and the capabilities and the insights that we bring together. And I feel connected to this because I'm playing a bit of a cross-sector role right now. I've come from government, and I'm still sometimes speaking to and getting outreach from folks who want to learn what's on the cutting edge of artificial intelligence. I'm engaging with some of the largest corporations in our country and seeing where they're trying to take AI, where they want to go, what they need to know. And I'm also helping to run an accelerator for the five largest nonprofits in Canada so that they can adopt artificial intelligence for higher productivity. If they want to keep pace with their populations, especially younger populations that might interact with these organizations in entirely different ways, they have to use artificial intelligence. It would be like the entire corporate sector starting to use computers while your NGO stayed stuck using paper files for another 20 years. You would fall behind; you wouldn't be as effective.
Speaker 2: And right now, it can't be overstated how significant it was for the global civil society institutional framework that USAID basically had the rug pulled out from under it in a matter of weeks, months. It was something like, I think, 70-plus billion dollars, and, not speaking to the politics of it for a moment, that is a massive funding vacuum that's been created, right? So the question is: how does that get filled? And if aid organizations and NGOs are fighting for fewer dollars, how can they stand out? I would argue, for smaller, more agile NGOs: this is your chance. Show that you're doing something, or a set of programs, differently; stand out, and actually jump fast-forward on this. I think it could be a real opportunity to develop a new way of doing things.
Speaker 2: And sometimes a crisis can precipitate significant change. I think this is one that can be seized, and so that's one of my focuses right now: how do we help the best NGOs succeed and take advantage of this moment? I think that's how our society stays intact, because if that change doesn't happen, then, well, it is more corporatization and more corporate control over this technology, versus maybe a more balanced system.
Speaker 1: Yeah, and it's really a good reminder for everyone listening that, first and foremost, AI is a technology, a set of technologies, that we direct to do a certain thing, and by directing it a certain way, we also embed our values and goals into whatever use case we're applying the technology to. I think that's a great way to think about using the technology for good, using it to advance the systems in which we work together as a society. So it's a great reminder, and one that is hopeful, a lot less doom and gloom than the narrative that is also gaining steam, because, as you said at the very beginning, people are afraid, especially of things they don't understand, and this is a technology where it is hard for us to comprehend the long-term impacts, the vastness of the impacts. So I like the hopefulness in that. Now I want to switch gears, because ultimately I focus a lot in these conversations on business leadership and entrepreneurship, and I want to make sure I get your take on that as well.
Speaker 1: I know you've contributed to a Royal Bank of Canada white paper, I believe it is, that focuses on helping leading companies bridge what you call in the paper the imagination gap, which I thought was a great phrase, and one that captures very well where a lot of traditional companies are currently stuck: meeting the moment and changing the way they're doing things, where the best idea people have is taking people out of jobs and looking for efficiencies, not really thinking about how they can meet the moment and create net new value. So what do you see currently as the three biggest challenges for business leaders navigating AI today? And I know there's probably a different answer for startups versus traditional companies, so if you want to make that bifurcation, that would be great.
Speaker 2: Yeah, of course. One last quick thing I was going to say on the previous point: I'm also a big believer in public AI and investments in, effectively, resources and technology that we can use, that is sometimes open source and that can be used by civil society. I think that could be a huge add, and the government can play a role there too. Now, in terms of business leadership, there's a lot here. I'll say that, at least from our research in Canada, for the companies that are slow to adopt AI, the costs associated with AI adoption are upfront, while the benefits don't always seem clear; they can even seem distant. There's also maybe a higher scrutiny of AI and its use; people almost expect it to be perfect, so there are potentially greater reputational costs. But there's also this risk of late adoption, right? If you are the last one to go ahead, then maybe you move too late and you risk having a permanent, or at least a longer-term, competitive disadvantage. So here's what we've seen people do that has sometimes been successful. If you're an AI champion, especially in the executive of your organization, use a cost-of-delay analysis. For example, if you've identified an AI use case or two that are going to be quite useful, maybe you've already run a pilot, just quantify for your organization how much a delay could actually cost in terms of lost productivity, lost value, lost opportunities, and then use that to present to your board or to the decision makers; maybe that can help you get through. The second is AI literacy, which I mentioned before. Literacy, both among an executive team and in the broader workforce, can certainly be helpful, given how many people are concerned or uncertain about AI's outcomes or negative outcomes. We have seen people empower their employees to have much higher agency, sometimes even without coding or technical experience. You can use gen AI or some of these new platforms to develop real applications, like running web apps. They can be quite useful: they can ingest new data, they can automate certain tasks. AI agents can be quite useful. There are just so many examples of where this can happen.
Speaker 2: I think sometimes people also have this trouble of abundance, this paralysis. We had a fancy phrase for it, I just pulled up the report: the paralysis of plenty. There are too many potential use cases to look at, and you get stuck on, oh, which one should I really invest in? I wouldn't be too concerned; if you can find a low-hanging fruit, great. I would just say, don't always pick something that's low risk. What you really want to focus on over time is: can you actually pick something that is in the main wheelhouse of what you do day to day, core to your business or organization, that is actually high impact, even if there might be some risk involved? Because if you never choose one of those use cases or applications, you're just going to be stuck in pilot purgatory forever, running pilots on low-impact use cases. So that's what I would try and focus on.
Speaker 2: I would also say there are some concerns around regulation and things like that, but mostly I think any serious regulation is probably going to be a long time coming; that's my perspective. Because the US and China, the two biggest markets, have pulled back on this, I think whatever regulation comes is probably going to be lighter, more vertical-focused, more specific to real challenges that we see, like non-consensual deepfakes, for example. And maybe that's a good thing. I think people should keep pushing on the regulatory side on these very clear, emerging issues. Creators, for example, are seeing their voices being used without their consent. I think that's a huge problem; we should definitely stop that.
Speaker 2: But mostly I think people should try not to let fear get the best of them, and really try to move to an opportunity mindset. Delaying too long is going to be worse, right? It's better to make some mistakes up front right now, because everybody is trying to do this, everybody is making mistakes right now. Maybe in five years not as many people will be making mistakes, so if you wait too long, you'll be caught out.
Speaker 1: And the question I wrote down for myself here was: isn't the imagination gap you're pointing out also, or maybe even in fact, a leadership skill gap? We've trained managers and leaders, from the C-suite even to politicians, to work within a system that's basically been intact since the Industrial Revolution, more or less, and it's a lot about optimization, a very sequential way of thinking. With the rollout of AI at large scale, that is changing. So we might actually have the added challenge of leaders who are not only risk-averse but also don't necessarily have the creative skill set to put their organizations in that opportunity mindset, in that mindset to experiment. I wonder what your take is on that. Are we actually witnessing a rethink of what it means to lead in this really exciting and dramatically changing time?
Speaker 2: When we looked around the world, and even at some of the latest OECD adoption data, we found that Canadian companies are not doing horribly, but they're usually using AI for a narrower set of use cases, and then not going as deep when they do choose those use cases. So I think Canadians in particular just need to be more open to thinking big on this stuff, whereas typically American companies are open to taking bigger risks, taking a bigger plunge. How is that connected with a sort of leadership unlock, a new phase of leadership? I think it's complementary. I do think part of leadership is certainly aiming for ambitious opportunities. But on the other side, there is a bit of an empathy piece here. This type of transformation is tough; there is some anxiety involved for a lot of folks, and so really being sensitive to that is important.
Speaker 2: I also think, you know, Sam Altman, the CEO and co-founder of OpenAI, had a quote about this.
Speaker 2:It was like sometimes ideas people have kind of sat in the back corner and specialists have been running the room, but I think sometimes if you're an activator, someone can just get ideas going.
Speaker 2:Sometimes, well, you may be able to create not just a muck but like a real full running app, and maybe it's not perfectly optimized and wonderful, but if you can get out there fast like, there is a bit of a first mover advantage here much lower cost to get rolling. So, yeah, I think, getting your idea generation machine and your brain running faster and like almost you know, I think good writers like make practice of writing every day, similar to how you know you as a wonderful designer, I'm sure you design like quite frequently part of developing quality is doing it at quantity, right. Michael Jordan probably took 5 billion basketball shots, right. You know what I mean Like. I just think like there's a practice element here and I think practicing constantly generating ideas and throwing them at AI is it's a good thing for more people to be doing, especially if you know people are typically a bit more timid in doing so.
Speaker 1: Yeah, I think that's why I'm asking.
Speaker 1: I think the ability now, with these new technologies, to actually take an idea to a fairly viable level of execution is absolutely amazing, and that requires a set of skills where you just go and take a stab at it, as you said. And building organizations that are more geared toward that, which is what I've heard in other interviews, could be hugely beneficial. I would even go as far as to say that, as a society, that could actually be an immense opportunity, if we embed that skill set and that way of thinking at large.
Speaker 1: And just to your point, I don't think it is solely an imagination gap in Canada. I can certainly speak for Germany, where I'm from, and probably more broadly Central Europe; there is certainly a similar kind of situation, and I think the comments apply there as well. Which brings me to the last few questions I had for you before we wrap. As you mentioned earlier, you're now in a situation where you're leveraging all your past expertise and experience and running your own consulting company, Aperture AI. I wanted to ask you more broadly: in this role, working with a variety of different clients, what are you seeing in the market, what are you most excited about, and where do you see the biggest potential for leveraging AI for good?
Speaker 2: Oh boy. I will just say I am partial to nonprofits really getting rolling with this technology. One challenge is, I feel like with nonprofits, and I don't want to be critical, it's always about "I need more funding," and I think this is one area where you maybe don't need that. I don't actually think you necessarily need a ton to get started. You don't necessarily need engineers or top-line people in-house; there's a ton you can do without having that, even off the side of your desk. So I'm excited about that. I think it's a double-edged sword, right? It will require some investment, some executive sponsorship and some risk-taking, and nonprofits can sometimes be stingy and worried about donors or beneficiaries or participants. Right now I'm helping to work with a community of people on a playbook for responsible AI deployment and adoption for NGOs and social impact orgs. I think that'll be useful, a good starting point, and I want to help promulgate that and share it with different people. I also think AI for good, like government AI adoption, should really just move forward much more quickly. There were so many examples in government where I could have seen AI helping faster. But here's the challenge government is going to have: even when I was inside the machine itself and asked for data, can you pull together these different data sources, these dashboards, it was so difficult. I sometimes had to break the chain of command and go around the departmental organizational hierarchy, and that was super frustrating. So I think government needs to really unshackle people at the mid-level to have more agency.
Speaker 2: I think people are expecting a lot more of government right now, both on the public service side and on the political side, to do more and be more attentive and responsive to the needs of citizens. And part of that is going to have to be: hey, not every proposal can involve hiring a bunch of new people. Sometimes it may have to be: we're going to invest 10% of our time to build 10 AI agents in these areas and try that. And, on the part of citizens and the public, being okay with providing forgiveness to government if and when they get it wrong. Right now, I think forgiveness is dramatically missing. We're expecting everybody to be perfect with this stuff, but if we don't give people more leniency and more room to grow, we're going to get stuck. I don't want to see that happen.
Speaker 2: And then, in terms of companies trying to do good, I'd actually be curious about your answer, Benedict. I would love to see more of this in the accessibility space, where both of us have worked; I think there's a huge opportunity there. But yeah, with AI for good, there's just so much good that can be had, and people have just got to get moving.
Speaker 1: Yeah, yeah. Well, two things. Yes, I've been very excited by the potential of AI technology to really make a massive leap forward in accessibility. Ever since my time at Microsoft, I've been a big believer in multimodality as a real breakthrough in human-computer interaction, and I think the technologies we now have available can actually make it happen. Similarly here, too, a lot of the operating systems we rely on are really stuck, you know, 20 years in the past. So there's that, but ultimately I'm hopeful.
Speaker 1: And then the point you made about NGOs and the advancements in government processes is really interesting. Do you think that requires upskilling, or infusing new talent into governmental organizations, to really bring some of that expertise in? Because I've found, when I'm working with folks either at the university level or on the consulting side with organizations that are, let's say, not necessarily creative tech organizations: everyone is creative and has ideas, but what's often lacking is the skill set to just start making something happen, like you said earlier too. Is that something that would be required to get to the point where NGOs and governmental organizations could actually start really making use of the technology?
Speaker 2: Yeah, I mean, look, the UK announced that program; I think they're going to pay £200,000 for top digital talent to come on in. I tweeted about it yesterday or the day before, and I just think that's exactly what should happen. Listen, I'm sure lots of tech engineers can make more than £200,000 a year; that's, I don't know, maybe $275K US or almost $400K Canadian, a good chunk of money, right? Sure, if you're a top engineer at Uber you can make millions of dollars, or maybe if you go work for Mark Zuckerberg you make a hundred million dollars. But I think most people could live a very, very good life on that amount of money, and it's certainly more than the average government employee gets paid. So I'm hopeful that more people from the tech sector, if they're so inclined, go and do even a tour of duty for two to three years.
Speaker 2: I think we should expand that kind of program in the US, and I think we should expand it in Canada. I hope the UK makes it an essential part of the public service, and I think calling people in for tours of duty should be normalized, so that you have this back and forth, this interchange, this knowledge exchange. If you normalize it, I'm also hoping that some people choose to stay permanently. And I'm hopeful you can extend it the other way, too: allow the public service to go and do tours of duty in a major corporation or in NGOs. Make that easier and faster. I think that would all be great, and it could help a lot.
Speaker 1: Yeah, I've heard that concept; I think Scott Galloway also talked about it. I like the idea of a tour of duty, although the term is probably not exactly right. But the sense of really contributing to your country, or the country you choose to live in, in a way that is meaningful, and working on something that outlasts you, is, I think, especially in the times we're currently living in, a really useful concept to spend some cycles on, trying to figure out how to make it happen. It's great to see that some countries, like the UK, are making advancements there. My hope is the same as yours: I would hope that this becomes more possible, and I certainly hope Europe also takes it as an example to implement.
Speaker 1: So, last question for you before we close. I know you're also still somewhat involved in academic work as a senior fellow and advisor at the University of Toronto, and I teach my class at Princeton. I always love to hear from the folks I speak with what advice they would give the students we interact with and teach, some of whom will certainly be the future leaders we'll rely on. What would you advise them, what skills should they build, and what do you think will be relevant for them going into this AI-everything future?
Speaker 2: Oh goodness. Well, first off, pay attention to the emerging studies about AI possibly causing brain rot if you're using it to do your thinking for you. But, at the same time, use it. It's going to be part of everything; no matter what field you're going to work in, it's probably going to be an important tool. But I would also say to students, and I say this with empathy: I think a lot of young people are having trouble getting entry-level jobs right now. That was not the situation I, or I think you, were facing. I was graduating and got a job really fast, and even the people who were struggling to get a job, close members of my family or friends, everybody eventually got a job.
Speaker 2:But I think a lot of people may not be really struggling for a long time to get a role. I don't know. I think there may need to be entrepreneurship, like at maybe an unprecedented level than ever before. But I also think that a lot of young people need to agitate to say listen, like you know, governments, nonprofits there need to be some sort of gap to cover here. Like I think it's possible, there will be a big transition. There's going to be some new training programs developed, new industries. There's going to be some new training programs developed, new industries. There needs to be some support at the intrams that people still have decent lives and can afford the basic necessities.
Speaker 2:I don't know. I think people should, if, in absence of getting a job, should also try to be politically active and aware and constantly educating themselves on the latest tools and also try not to be too hard on themselves. I think this is a big, crazy moment of systemic change global geopolitical conflict, trade renegotiations probably the biggest upheaval we've seen in a number of years right, and so I hope people are kind to themselves. I'm going to try and encourage the students that I teach to embrace that as well. And yeah, I mean I, you know I feel very fortunate to you know, have gotten to this point in my career, but at the same time, like I want to make sure everybody can live a life of meaning.
Speaker 1:So that's how I feel. I hope that helps. That's a wonderful way to wrap it up, Jackson. Thank you so much for your time. All right, that's a wrap for this week's show. Thank you for listening to Poets and Thinkers. If you liked this episode, make sure you hit follow and subscribe to get the latest episodes wherever you listen to your podcasts.