
Peaceful Life Radio
Welcome to Peaceful Life Radio—your trusted companion for navigating the second half of life with wisdom, intention, and grace. As the fast-paced seasons of career-building and family-raising transition into a new chapter, it’s time to focus on you—your purpose, your well-being, and your legacy. Join hosts Don Drew and David Lowry as they share inspiring stories, expert insights, and practical strategies to help you embrace this transformative phase with confidence and joy. From cultivating deeper relationships to fostering emotional resilience and discovering renewed purpose, Peaceful Life Radio empowers you to make these years your most fulfilling yet. Tune in, and let’s embark on this journey together—because the best chapters of life are still ahead.
The Future of AI with Michael Hanegan
In this episode of Peaceful Life Radio, hosts David Lowry and Don Drew engage in a profound discussion with futurist Michael Hanegan about artificial intelligence (AI) and its transformative impact on society. The conversation explores the rapid advancements in AI, including generative AI and machine learning, and their implications for various fields such as medicine, education, and intellectual property. Hanegan highlights the exponential growth in computational power and information, showcasing examples like Google's AlphaFold in protein modeling. The hosts and Hanegan discuss the potential benefits and risks of AI, the future integration of learning and work, and the democratization of education through modular and decentralized programs. The episode provides valuable insights into how professionals can adapt to AI advancements and leverage these technologies to solve complex problems and improve human well-being.
00:00 Introduction and Welcome
00:15 Introducing Michael Hanegan
00:51 The Evolution of AI and Moore's Law
02:15 Understanding Generative AI
06:10 Machine Learning Explained
07:48 AI in Education and Work
15:42 AI's Impact on Medicine
19:49 Future of Education
24:14 Navigating the AI Revolution
27:00 Conclusion and Final Thoughts
Visit the Peaceful Life Radio website for more information. Peaceful Life Productions LLP produces this podcast, which helps nonprofits and small businesses share their stories and expertise through accessible and cost-effective podcasts and websites. For more information, please contact us at info@peacefullifeproductions.com.
David Lowry: Hello, everyone. Welcome to Peaceful Life Radio. This is David Lowry, and with me today is my good friend, Don Drew.
Don Drew: Yes, I am here, and it is quite a cold and snowy day in Oklahoma City.
David Lowry: That's reason enough to stay indoors for a couple of days.
Don Drew: There you go.
David Lowry: Don, I'm really excited about our guest today: Michael Hanegan. And I'm telling you, Michael is a person you want to know. I describe him as a futurist. He's an intellectual property architect, the founder of a movement for moral care, and a TEDx speaker, and he has an organization called Working for a World Worth Living For. But I think of Michael as a wonderful person who's studying AI, the ins and outs of it, as he helps people write and do intellectual property work. So we're gonna be talking to Michael today.
Don Drew: Okay. So in 1965, a fellow by the name of Gordon Moore, one of the co-founders of the Intel Corporation, made a prediction that the number of transistors that could fit on a microchip would double about every 18 to 24 months. Today we refer to Moore's Law as saying technology doubles about every two years. Interestingly, up through around 2020 that held fairly true, but along comes this new thing we're all hearing about called artificial intelligence, or AI. Just last week there was an AI summit in Europe where 61 countries signed a statement about how they would support open, inclusive, transparent, ethical, safe, and so on AI activities in their countries. And today we're hearing all kinds of things about what AI is, everything from allowing students to cheat with impunity to killer robots. Michael, tell us a little bit about yourself. And what is AI?
Michael Hanegan: Yeah, thanks for having me. I run a company called Intersections, which is a learning and human formation company. I teach AI and the future of work at Rose State College and the University of Central Oklahoma, and I advise K-12 districts and universities trying to figure out that new line between the future of learning and the future of work. AI is not new; we've been working on AI for a long time. What's new in the general sense is what we call generative AI, a particular subset of artificial intelligence. What we're seeing is a technology that outpaces the speed at which we're used to things advancing. You talked about Moore's Law just a few minutes ago, about how computing doubles every 18 to 24 months. We're now essentially at Moore's Law squared, which means we're doubling every four to nine months. One way I like to talk about this is to say that Alan Turing's first machine that broke the Nazi Enigma code could do 26 calculations a second. And the most advanced chip from NVIDIA, one of the most valuable companies in the world because of AI, can do 420 quadrillion calculations a second. To give a spatial metaphor for that: in one second, Alan Turing's machine could go the length of my arm. The current Blackwell B200 from NVIDIA can go from here to Pluto and back three and a half times in one second. So we've reached a new scale of capacity and computational power.
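To make the doubling arithmetic above concrete, here is a minimal Python sketch comparing the classic Moore's Law pace with the faster pace Hanegan describes. The 6-month doubling period is an assumed midpoint of the four-to-nine-month range mentioned in the episode, not a measured figure.

```python
# Minimal sketch (illustration only): compare classic Moore's Law doubling
# (~24 months) with the ~4-to-9-month doubling described in the episode.
# The 6-month period below is an assumed midpoint, not a measured value.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Return how many times capability has multiplied after months_elapsed."""
    return 2 ** (months_elapsed / doubling_period_months)

decade = 120  # months in ten years
print(f"24-month doubling over a decade: {growth_factor(decade, 24):,.0f}x")  # ~32x
print(f" 6-month doubling over a decade: {growth_factor(decade, 6):,.0f}x")   # ~1,048,576x
```

Over a single decade, the difference is not a constant factor but a qualitatively different regime: roughly 32x versus roughly a million-fold.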
David Lowry: And along with that, Michael, is the doubling and quadrupling of the information available to us as we conduct scientific research and continue to write, as more and more people earn graduate degrees around the world, including in places we had overlooked in the past. So we have an information explosion along with this hardware explosion.
Michael Hanegan: Yeah. And so what we really have is the birth of an age of intelligence, right? We now have technology that is capable of helping us use, at scale and at speed, the full wisdom, insight, and technical prowess of the whole of humanity. Our medical literature doubles every 72 days. The amount of learning that humans produce is impossible to keep up with as a human being, but these technologies enable us to work and learn at a speed and scale previously impossible. One example of this I love is from Google DeepMind: their award-winning tool AlphaFold, which is for protein modeling and drug discovery. We've been working on protein modeling since the 1970s. You used to be able to leave your computer on the internet overnight and donate cycles to do protein modeling, right? AlphaFold has now modeled all known proteins, which is approximately a billion years of PhD-level work, in about a year.
David Lowry: Oh my goodness.
Michael Hanegan: And now that tool is open source and available to the entire research community. More than a million scientists use it every day, and the people who created it won the Nobel Prize in Chemistry this year. So we're living in this space where change is at an unprecedented scale. It is unlocking things at a speed and capability that we had not anticipated. And there are a couple of differences between generative AI and other technologies. Our experience with software, for our entire lives, has been that we made software to do something, and then that's what it did. This is the first kind of technology where we discover its capabilities. We say, oh, we didn't know that could happen. And this opens up a different kind of way of engaging with technology. I'm optimistic; I'm not doom and gloom, thinking it's all over for us. I think that comes more from our pop culture movies. We've watched maybe a little too much Terminator and I, Robot. I do think there are real risks, but I think there are tremendous upsides to explore.
Don Drew: Michael, there's a term in AI called machine learning, which is some of what you've been talking about. But when somebody uses the phrase machine learning, what do they mean?
Michael Hanegan: Yeah. I mean, part of what they mean by machine learning is the ability of algorithms and other forms of computation to do work where getting a correct answer or an incorrect answer enables them to improve. This is some of the more basic machine learning. For example, when we created tools that could beat human beings at chess, they would learn from wins or losses about the better way to move. So machine learning is a technology that enables us not to write something static that either works or doesn't, but something that can continue to develop.
Don Drew: A human side of that might be that I learn a lesson. I know something about an action that I want to take or not take. I want to avoid making that mistake again, and yet I manage to fall into the same trap and make the same mistake twice. The machine would not do that. It would learn the first time and not make the same mistake the second time. It might make another error, but it's going to learn from that error as well. So with each iteration, it would get smarter. Is that fair?
Michael Hanegan: It can. It doesn't always learn the first time; similar to humans, sometimes it takes more than once to make the pivot. But the difference between machine learning and other forms of software-based technology is that it can improve. If you want to improve other kinds of software, you have to change it. There's no mechanism internal to itself to get better.
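As a concrete illustration of learning from wins and losses, here is a minimal Python sketch of an epsilon-greedy bandit, one of the simplest versions of this feedback loop. The three "moves" and their hidden win rates are invented for the example; this is a toy, not the method behind any particular chess engine.

```python
import random

# Minimal sketch (illustration only) of learning from wins and losses:
# an epsilon-greedy learner improves its choice of "move" from feedback,
# loosely in the spirit of the chess example above.

random.seed(0)
true_win_rates = [0.3, 0.5, 0.8]  # hidden quality of three possible moves
estimates = [0.0, 0.0, 0.0]       # the learner's current value for each move
counts = [0, 0, 0]

for _ in range(5000):
    if random.random() < 0.1:     # occasionally explore a random move
        move = random.randrange(3)
    else:                         # otherwise exploit the best move learned so far
        move = max(range(3), key=lambda m: estimates[m])
    won = random.random() < true_win_rates[move]  # play and observe win or loss
    counts[move] += 1
    # incremental average: each outcome nudges the estimate toward the truth
    estimates[move] += (won - estimates[move]) / counts[move]

print([round(e, 2) for e in estimates])  # approaches [0.3, 0.5, 0.8]
```

Nothing in the program is told which move is best; repeated wins and losses alone pull its estimates toward the truth, which is the core idea Hanegan describes.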
David Lowry: Michael, how are you using AI in your work at Intersections and as an intellectual property architect?
Michael Hanegan: I use it all the time in the coursework that I teach. I'm teaching students not only about the societal and ethical questions raised by generative AI, but also how to use these tools for research, ideation, planning, and data analysis. When I was in grad school, I was preparing to apply for a National Endowment for the Humanities grant for a digital humanities project that was going to take me about a year and a half and cost about a quarter of a million dollars. I think I can do that entire project now for probably a couple hundred bucks in a couple of weeks. So the capacity for us to leverage learning is remarkable. One practical example: one of my favorite tools right now is NotebookLM, from Google. You can upload documents, create podcasts, and ask questions. I have a notebook with probably 3,500 pages of text in it from material that I work with. And every day I go in and essentially say, show me something I haven't seen before, and I explore at a scale that I couldn't otherwise. I could spend the next six months reading all of that if I wanted to, but then I couldn't actually hold it in my mind. With this technology, it is accessible at my fingertips at any time. And I think that's where it gets really, really interesting.
David Lowry: One of the things that seems concerning at the same time is whether or not it can do creative work. I believe we're beginning to see the first stages of actual creativity. Not only does it learn from how we've written things in the past, but it can improvise. You can say, I'd like you to write in this kind of romantic style, but fuse it with this postmodern look at something. And it seems like it does a pretty good job of creating a new, hybrid product.
Michael Hanegan: Yeah. I think one of the challenges we have here is that this is so unlike any other technology we've encountered that we inevitably use language that makes sense for how humans do things, not necessarily because that's technically what's taking place, but because it's the only kind of metaphorical language we have. I am regularly surprised, taken aback sometimes, by the progress or the insight or the connections that are made. Recently, as a joke for a demo, I used NotebookLM to create a podcast, and I said, I need you to communicate this very serious technical information with as many dad jokes as possible. It was some literature about creativity. In the podcast, these two people who don't exist, they're AI-generated, were talking about how humor is uniquely human and how AI would never be able to make a dad joke, immediately after having made what I thought, as a dad, was a pretty good dad joke. So we have these moments where what used to feel like the lines of the human domain get fuzzy sometimes. And I think that's exciting. What we're also finding out is that these tools are more like us than we expected. One example I love: a year ago, there was a lot of concern about using what's called synthetic data, where the machine creates data and then you train the machine on that data. People were worried it would be like a snake eating its own tail, right? That it would fall apart if you did this. And what we learned in the middle of all that, from completely unrelated scientific research, is that human beings run on synthetic data. When humans walk and run and play and fight in their dreams, they are training their actual motor system for moving in the world. They never actually moved, but they did. And we're finding in so many ways that the way we think, plan, and create is in some ways mirrored in these technologies, which I think is interesting. I don't know how to feel about it, but it is curious for sure.
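For readers curious what that "snake eating its own tail" loop looks like mechanically, here is a minimal Python sketch: a toy "model" (just a fitted Gaussian) repeatedly retrained on samples it generated itself. The numbers are invented, and this is not how language models are actually trained; it only illustrates the feedback loop researchers worried about.

```python
import random
import statistics

# Minimal sketch (illustration only) of training a model on its own output:
# fit a toy Gaussian "model" to data, sample synthetic data from the fit,
# and refit on those samples, generation after generation.

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # generation 0: "real" data

for generation in range(6):
    mu = statistics.fmean(data)    # "train": estimate mean and spread
    sigma = statistics.stdev(data)
    print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # "generate": the next generation sees only this model's own samples
    data = [random.gauss(mu, sigma) for _ in range(500)]

# With finite samples, the estimates drift a little each round; run long
# enough and the fit can wander badly, which is the collapse people feared.
```

Each generation inherits only the previous generation's imperfect snapshot of the data, so small estimation errors compound, the worry Hanegan refers to before noting the surprising parallel to human dreaming.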
Don Drew: That's amazing, David. I'm afraid we may be out of business here. Sounds like new hosts that do dad jokes.
David Lowry: And they'll have really wonderful vocal qualities as well. I don't know, what can we do? Michael, all three of us teach university students, and the year before last our university was absolute: we are not going to allow AI to be used; that's cheating. The line was drawn. A year later, it was, we must learn how to work with AI, but there was still this holdout that you would not use it for certain things. I opened up my latest version of Word, and the first thing that comes up is an AI prompt: what are you writing about, and how can I help you? If we think that people are not going to use that in our classes, I mean, let's all get real here. This tool is embedded in everything. We were using it to correct our grammar for a while. Now we're using it to help us find resources, and then it's, how do we organize these resources? How do we summarize them? Where do you see this going, from these very simple tools we're using now to where it may go?
Michael Hanegan: So, I think there are two possibilities, and this will depend largely on how people think about their learning. There are some people who will absolutely outsource their learning to these tools. They will no longer put in the effort to acquire the skills. They will ask for what they need, then copy, paste, and ship. I think those people will be at a profound disadvantage very, very quickly. But the other option is to ask, what does it look like to use these tools as an integral and enhancing capacity in our work? To do that, you have to understand at a deep level the work you're trying to accomplish. That's going to be the key differentiator. I don't necessarily need to have a PhD in whatever to use these tools in a meaningful way, but I do need to understand very clearly the problem I'm trying to solve, the result I'm trying to produce, or the question that I have. It doesn't do a lot of good to bring something that's more powerful than the scenario enables, right? It doesn't make sense to bring Michael Jordan to the recreational basketball league. It doesn't make sense to bring a Ferrari to a go-kart track. It doesn't make sense to bring something that is infinitely more capable than what you're actually able to use it for. It doesn't help you; you don't benefit from it. So I think those who can really take advantage of this are those who can think deeply about their respective areas and know how this can help them get to what they want. In fact, one of the things I love here: there was a mathematician named Richard Hamming who was a mentor to an obscene number of Nobel Prize winners. There was always this pivotal moment in these Nobel Prize winners' lives when they would go to lunch with Richard Hamming. He would ask them two questions after hearing about their work: What is the most important question in your field? And why are you not working on it? That would ultimately change the trajectory of so many of these people's careers. I think in a lot of our work we're about to experience tools that are capable of dealing with the boring, the mundane, and the routine, and all that's going to be left for us are the more difficult and more interesting questions in our work. I think that's true for academics; I think that's true for a lot of fields. So if I'm thinking about the second half of my career, I'm asking, what are the problems we've always wanted to solve but either didn't know how or didn't have the resources? And my hunch is that now, or very soon, we have what we need to tackle some of those questions.
Don Drew: Michael, let's zero in on one of those kinds of questions that a lot of our listeners will be interested in. You were mentioning proteins earlier; let's talk about medicine. What are some of the practical applications of AI to medicine that are changing the way medicine is done now?
Michael Hanegan: There's a wonderful essay from Dario Amodei, the CEO of Anthropic, called Machines of Loving Grace, in which he talks about what he calls the compressed 21st century. He suggests that in the next 10 years we will do a hundred years of medicine. That has two implications. Obviously, we'll be tackling more problems and coming up with more solutions. One concrete version of this is that we believe eventually, and in my mind sooner rather than later, we will be able to do the early stages of clinical trials essentially in simulation. We'll already know how it's going to go before we get to the stages where we involve people, which can take the development and release of drugs from decades to years. And if we change the way we do this because of these technologies, maybe even less, right? The other piece I think you're going to see is an increasing ability for the rapid personalization of medicine. You see this in experimental places already, especially in some experimental cancer treatments. You don't have a treatment for this kind of cancer; you have a treatment for this particular gene sequencing of this person's cancer. This medicine is made for this person and this person only. And this technology has the capacity to increase the scale of that work and to decrease its time and cost. Part of what we're seeing is huge advancements in diagnostics. We already have AI tools that are better at diagnostics than doctors without tools, and so we're hoping that the synergy of physicians and technology can really level up. I saw a study recently where, when they paired doctor and technology, they were predicting breast cancer up to 18 months earlier than a human could detect by themselves, and so they were having much better outcomes in treatment, in prevention, and in life expectancy. So I'm very optimistic about where this is going in medicine. I think it's one of the most exciting places, and the one that will be most obvious to everybody in the next five to ten years.
David Lowry: It seems to me like there'll be a sorting out of what's important to know, what's vital to know, and what's something we can always look up and depend on. But I'm hopeful that we'll work harder at teaching people how to ask good questions. I have another concern, Michael, and I'm wondering what you might think about this. Our machinery will only be as good as the information we train it with. I'm very concerned about governments saying we're not going to train it with certain kinds of information, or certain kinds of information will be forbidden. So there wouldn't be free and open intellectual discussion anymore; it would be highly formatted in a way that's palatable to people in power.
Michael Hanegan: I think it's always a concern how technology gets used in society; I don't think that's a new problem. What I do think is different with AI is that the gap between open source and closed source is not as big as it is in other forms of technology. Open-source tools are oftentimes nearly approaching the full capacity of these closed-source tools, which essentially means that even if you're in a country where some of these models are unavailable, you have the capacity through open-source tools to have your own. You can actually have them for yourself. While the closed models and companies are still at the frontier, that gap is not that big in this space. And so I think there will always be some capacity to access things that are less restrictive in that way.
Don Drew: Michael, all three of us are educators. We have grown up through an age where there was a body of knowledge that we were all expected to know. We all took history classes, English classes, math classes. I'm asking you to play futurist for just a second. How is education going to evolve? What is education likely to look like 10 years from now? Give us your best thoughts on that.
Michael Hanegan: Yeah, I think in a decade education looks much more decentralized. It's not that you have to go to this place to get this or that place to get that; you'll be able to get a particular piece of learning in a whole bunch of places, as opposed to it being buried in a degree or behind additional hoops you have to jump through to get what you need. I think you'll see an acceleration of things like micro-credentials and modular programs, where you stack together what you need as you go. I also think you're going to see a re-blending, for the first time, of learning and work. We didn't do this in the Industrial Revolution; there, we separated learning and work. You went to school for however long you were going to go. My grandfathers went until sixth grade. Some people go through high school, some go to college, some go to graduate school. But you go to school, and when you're done with that, then you go to work. That's how the Industrial Revolution has worked for us. The future of learning and work is integrated. It's not going to be, I did my learning and now I do my work; learning is going to be an essential part of my work. And how we navigate that in the future is going to be really, really interesting, because industry does not have the infrastructure for the amount of learning we're going to have to do as this technology changes the way that we work. So I think education will decentralize. I think it'll become much more modular and small. And I also think there will be less of an emphasis on expertise and more of an emphasis on proficiency. Expertise has been the gold standard of higher ed forever: you have to be an absolute beast or monster at whatever you're learning to have any kind of credibility. And most of our work in the real world does not use expertise; it actually uses competency and proficiency. I like to talk about basketball this way: in game six of the NBA Finals, when Michael Jordan is shooting free throws, he's not using his expertise. He's using a proficiency he's built since he was a young man. Most of our work doesn't require our expertise. We don't have to turn that part of ourselves on often.
David Lowry: Some people wonder whether, with AI and the knowledge explosion, we shouldn't just create a robot to do all these things for us. We probably are looking for some kind of machinery to do the work that we need done.
Michael Hanegan: Yeah. As robotics increasingly accelerates, we'll have a gap between what we can do, what we must do, and what we don't want to do. And that will have a direct impact on what things cost and how accessible they are. In the U.S., we often have a massive shortage of affordable housing, and part of that is that it's expensive to build housing. With technology, both robotics and other kinds of automation, the cost of creating housing can drop significantly. But the challenge is that any of these innovations introduce changes that not everybody's happy about. If we have robotics that can build a home in three or four days, because they work 24/7 with perfect efficiency and safety, the cost of a home becomes 10 grand. The people who can't afford a home are going to be really excited. And the people who bought a house in the 1970s for a nickel and a smile, and it's exponentially more valuable than when they bought it, are going to be really upset when their property value declines because a house of equal quality can now be built for a fraction of the price. How we navigate those tensions is going to be really difficult. So yeah, it opens up lots of options for us, and part of the challenge is that we're advancing faster than we can plan for what this might mean. We are used to technology that advances incrementally, right? A lot of changes happened in our lifetimes technologically, but it didn't feel fast in the moment. Right now, it feels exhaustingly fast. And the reality is that if you could look behind the curtain, it's faster than it feels.
Don Drew: What kind of advice might you give our listeners about how to process what they're hearing and reading? They're inundated now with information about AI, and you've given us a lot of knowledge, a lot of material to think about. But say it's your grandfather: what would you advise him to do, or to think, about AI so that he doesn't freak out about it? Because it seems like it's pretty crazy.
Michael Hanegan: If you're still in your career, I would encourage you to get very clear on where your expertise lies, then try to understand as quickly as you can where these technologies will impact your particular field. If you've been an accountant forever, a whole lot of that is going to disappear in the next decade because it can be automated; we can describe how that work happens well enough that we can eventually teach a tool to do what you do. What we can't do right now is move everybody over to that new world. So if you already have expertise in accounting and you understand how these technologies work, maybe the second leg of your career is helping people navigate whatever that transition is going to look like over the next 10 years. The intersection of AI and your area is the place where people can really build some momentum and stability in whatever's coming, so that your expertise is not useless but is applied differently as these technologies emerge. My colleagues who work in medicine: their expertise is infinitely valuable, but the timetable on which they work is going to change dramatically, and the kinds of questions they can explore are going to alter over time. If you're not in a place where you're going to work in the near future, I would encourage you to find ways to use these technologies to serve your fellow human beings. How can you make sure, with what power and influence you have, or with the way that you vote, that these tools are used to take care of those in your family and those you have influence over who are still working or still in school, so they can really plug in and make a way for themselves? This can be a really remarkable time in human history, where we solve a lot of problems that we haven't had the time, the energy, or the intelligence to solve. I love this quote from David Graeber, and maybe this is a great place for us to think about this. He says the ultimate hidden truth of the world is that it is something that we make, and that we could just as easily make differently. I think that's where we find ourselves. There are some things in our world that are as they should be, and there are a whole lot of things that could be better. I hope we'll take advantage of what's here and what's coming to think meaningfully about the kind of world we want to cultivate and sustain going forward.
Don Drew: Michael Hanegan, thank you very much for being with us today, and for giving us such wonderful insight into artificial intelligence.