How Both/And Offers A New Insight on AI and Tech
“Nothing in life is to be feared, it is only to be understood.
Now is the time to understand more, so that we may fear less.”
- Marie Curie
My high school senior twins wrote lots of essays this Fall as they applied to college. Here’s an intro from one of my son’s essays:
The Terminator, The Matrix, and RoboCop all shared a similar message: Fear robots. They may kill us all.
I’ll give it to this kid! That intro draws you in - especially coming from someone applying to robotics programs (more on how he finished the essay below).
That said, he’s not far off. Technological innovation triggers lots of fear. Doomsday concerns about how technology will take over the world are as old as time (or our ability to tell time). We’ve been narrating that fear for years.
When Gutenberg unveiled the printing press, the religious community revolted, fearing that it would spread heresy. Umberto Eco’s The Name of the Rose retells pieces of that story.
Ray Bradbury’s classic Fahrenheit 451 extended that fear, offering a dystopian story of how the advent of the television was not only spreading heresy but numbing our brains.
Maybe most classic, yet most poignant today, is Mary Shelley’s Frankenstein. Written in the early 1800s, Shelley wonders what happens when man constructs machines that outsmart man.
Robots and AI are poised to change our world as we know it (I recommend going back to Ezra Klein’s podcast with Ethan Mollick as a great discussion of this… and welcome your suggestions for more as I’m deeply curious to learn!).
Yet….(here it comes)…. It’s not an either/or! Technology is not either good or bad. We do not either embrace new innovation or fear it. And certainly, it’s not EITHER technology or humans. Here’s one big space for both/and. The key question, though, is not if we embrace the both/and but how! How can we both embrace robots and AI AND create limits? How can we engage with technology and still explore what is uniquely human?
To learn more, I reached out to an expert - Kate O’Neill, author of books like What Matters Next and Tech Humanist. In this newsletter, I share our interview, with some surprising insights from Kate.
I hope you enjoy and learn as much from the interview as I did. And let me know what questions this triggers for you as we all learn more together.
PS…. My son’s college essay ended on a much more positive note. For him, the advent of robotics and AI is a playground full of opportunity that he is enthusiastically learning to play in. Yet as he notes in his essay, understanding the ethics and morality of technology will be just as important and integral to furthering the coding and engineering….. Again, I’ll give it to this kid, he is definitely shaping up to be a both/and thinker!
Hi Kate - Thanks for doing this interview with me. In your new book, What Matters Next, you make a strong case about the both/and of human AND tech, and unpack that. I’m excited to dive in so that we can all learn more from you.
There's so much uncertainty about the future of tech - especially as AI ushers in a massive tech revolution that involves both extensive opportunity and excessive fear. Your book helps us put that revolution in some perspective. There is so much detail in the book... what's the big picture? As people read your book, what's the core message that you hope they will walk away with?
What I hope readers take away from What Matters Next is that we have the power to shape technology's impact on our lives and society.
After decades of working at the intersection of technology and humanity, I’ve realized: we're not passive observers in this story of progress. We're the authors. The core message of What Matters Next is deceptively simple — technology isn't some unstoppable force sweeping us along. It's a collection of human choices, stacked on top of other human choices.
The big picture is about being intentional. About matching our technological capability with human wisdom. About understanding that every line of code, every algorithm, every digital innovation is ultimately in service of human meaning. And that's not just optimistic thinking — it's a practical framework for progress.
Your core focus is on being a 'tech humanist' - making human decisions in a technological world. What is a tech humanist? How would a tech humanist operate and make decisions in our world differently from someone who puts tech in the decision-making driver's seat?
Being a “tech humanist” sounds like a lot of philosophical abstraction, but it's actually quite a practical approach to shaping our technological future. I've spent years studying how technology impacts human experiences, and here's what I've learned: the most powerful innovations aren't just technically impressive, they're meaningfully human. Think about the last time a piece of technology made you smile, not because it was clever, but because it solved a real human problem. That's tech humanism in action. We start with human needs and aspirations, then work backward to find the right technological solutions. It's the difference between asking "What can we build?" and "What should we build?"
When I work with organizations, I often see the lightbulb moment when leaders realize that putting humans first doesn't mean putting technology last — it means making technology work harder for actual human outcomes. So this isn't just feel-good philosophy; it's good business, good ethics, and ultimately, good technology.
Unsurprisingly, I love the extensive way you talk about the both/and. Your book is a case study on integrating opposites - tech and human, long term and short term, now and next, etc. How does the both/and inform your thinking? How do you see people drawing on both/and in the space of technology and future thinking?
I've spent my career navigating what others see as contradictions. Tech versus human. Progress versus profit. Innovation versus quality. Corporate versus idealism. None of these are actually opposites at all. They're complementary forces that, when thoughtfully combined, create something more powerful than either could achieve alone.
Look at how we're handling AI right now. The headlines want us to choose: Will AI enhance humanity or replace it? But that's missing the point entirely. The real opportunity lies in the "and" - in building AI systems that amplify uniquely human capabilities while handling the tasks that machines do better.
I saw this firsthand when working with a healthcare organization that used AI to handle data processing, freeing up doctors to spend more time actually connecting with patients. This both/and mindset isn't just philosophical — it's practical, profitable, and profoundly necessary for our future. It helps us build smart cities that don't just collect data, but use it to foster stronger communities. It guides us toward technological solutions that don't just work well, but work well for humans. And most importantly, it reminds us that progress isn't a zero-sum game.
A main focus of your book is on strategic leadership. What is one thing that a CXO can do to make better decisions in our increasingly complex world?
One of the most powerful things a CXO can do is pause. Yes, pause. In our rush to keep up with technological change, we often forget to look up and ask "Where are we really headed?"
Which is to say that the biggest thing a CXO can do to make better decisions in our complex world is to cultivate a long-term, holistic perspective. It's not just about predicting trends or managing disruption. It's about understanding how each decision ripples through your organization, your industry, and ultimately, human experience itself.
This means looking beyond immediate profits or quarterly results to consider the broader implications of decisions on stakeholders, society, and the environment. It involves regularly stepping back from day-to-day operations to consider emerging trends, potential disruptions, and long-term consequences of current actions. By balancing short-term necessities with long-term vision, leaders can guide their organizations towards sustainable success and meaningful impact.
The best leaders don't just plan for the future — they actively shape it. They're constantly asking: "How will this decision look not just next quarter, but next decade? What kind of world are we building?" That's not mere idealistic thinking — it's strategic wisdom.
What other question should I be asking you given the breadth of perspective and insight that you bring?
You know what I wish someone would ask me? "In a world swimming in tech solutions, how do we actually put humans first — not just in theory, but in practice?" I see this gap constantly in my work with organizations. They nod along with the principles of human-centered technology, but then struggle to translate that into Monday morning decisions. The real work isn't in understanding why we should be more intentional with technology — it's in figuring out how to do it when quarterly targets are breathing down our necks and the latest AI breakthrough is making headlines. That's the question that keeps me up at night, and it's the one I'd love to dig into. And really, that's at the heart of what all of these other questions and answers allude to. Because being a Tech Humanist isn't just about having the right philosophy — it's about making the most meaningful decisions in the moments that matter.
Here’s a list of Kate’s awesome books:
What Matters Next
Tech Humanist
I also recommend Brian Evergreen’s Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence. Brian’s book offers another insight into what it means to have tech work for humans and not humans work for tech.
Both/And Thinking: What’s Next
Here’s what I have coming up for me. Please reach out if there’s any way to connect with you during any of my travels!
February 8 - FAMILY TIME!
I’m excited for the bar mitzvah of my third child (the one who is NOT applying to colleges this year), and I look forward to celebrating him and being with family and friends. We could all use a bit of celebration these days.
February 12-27 - UNSW Sydney, Australia
I look forward to returning to UNSW as a visiting research fellow and learning from and with this community of amazing scholars, including joining the UNSW Organizations and Society Research Day on Feb 19.
March 3-4 - Keynote
I’m looking forward to a keynote on Both/And Thinking near Berkeley, CA.
March 5 - Keynote: Dupont
I’ll be back in my ‘backyard’ to explore Both/And Thinking with chemicals giant DuPont.
March 17-21 - LinkedIn Learning Taping
I’ll be heading back to California (Southern California) to tape a LinkedIn Learning course. I’m excited to create content that will make the tools for both/and thinking available more broadly. If you have questions about paradox or both/and thinking that you’d love to see in this course, now’s the time to reach out and let me know.
Stay tuned for next month….. I might preview the LinkedIn Learning course if I’m inspired.