"Generally positive with subtle notes of terror".
That's what I wrote on my badge last night when asked by the hosts of an AI discussion, "How would you describe your outlook on AI?".
The Boulder Salon event hosted by Glider (I think soon to be known as TEDxBoulder) featured three panelists who had both consistent and divergent viewpoints (as did the energized audience) on this complex and often emotional topic.
Favorite Speaker
Brian Decamp, a software architect-turned-Zen Buddhist monk who, in his talk titled "AI and Self", married rich research and expertise-backed facts (e.g., singleton vs. multipolar) with his lived experience of embodied consciousness ( 🤔 ). He posited that AI will help us determine what is uniquely human from what is not. When considering this musing in the context of organizations today, I'm excited to see how it will translate into an elevated workforce experience. For example, we can consider how we might use a skills-based approach to reimagine jobs so they focus on the "HI" in SHRM's recently published article, "AI + HI = ROI", or as they call it, "human ingenuity". THAT is the stuff of fulfillment, retention, elevated productivity and increased profits.
Favorite Audience Member
The real thought-provoker, though, wasn't a panelist but a gentleman in the audience who stated,
"Someone must fix it! Calling it 'Artificial' Intelligence makes it seem fake, not real, and scary. I'll call my friend Elon Musk to see what can be done about this."
I have to say that while he seemed a little too heavily steeped in the glass-half-full perspective on it all, I somewhat agree with him.
When driving change in an organization, we know the words we use to communicate wield great power. I pictured myself as an employee and a leader in a company somewhere, sharing my "subtle notes of terror" sentiments with others, and it made me think:
How are we talking about AI at our organizations today? Both formally and informally?
What is the energetic nature of those words?
Are we opening doors or closing them?
How might we use language with our leaders and employees that generates curiosity about AI in the presence of fear, while also creating safe spaces where that fear is welcome?
In order to answer the questions above, there must be an overarching strategy with supporting values an organization can root into, one that can serve as a north star. There is so much uncertainty about what the future holds around AI. To individuals, it can look eerily grim one minute and positively exhilarating the next. Therefore, organizations must support their workforce through this uncertainty and emotional upheaval by providing an (inevitably evolving) roadmap so there is something steady to hold onto even as the ground continues to move beneath our feet.