Keeping it human in the age of AI
Key lessons from my AI talk at UNC Charlotte exploring discernment, ethics, and the ongoing evolution of digital experiences.
Last week I spoke about keeping it human in the age of AI with Kimberly Nevala (host of the Pondering AI podcast and a leader at SAS) in front of an auditorium at UNC Charlotte.
We covered a lot of ground in the session, including themes that keep showing up across client work at StealthX: how important discernment and trust are, how to connect work across altitudes, how to roll out AI by creating a pull (not a push), why speed needs precision, how to design for the shift to conversational experiences, and what to do about ethics and privacy.
I thought I’d unpack these topics a bit more in this week’s newsletter and share some practical applications for your team.

“Keeping it Human in the Age of AI,” a conversation with Kimberly Nevala at UNC Charlotte on 10/29/25.
The human advantage in a sea of sameness
AI has simultaneously lowered the floor and raised the ceiling. The floor is speed and access, while the ceiling is judgment, taste, and trust. Because it’s so much easier to build and create with AI, the barrier to entry is lower than ever.
Let’s say two companies are using the same AI tools, have the same capabilities, and hire the same great talent. Both deliver quickly and efficiently. One focuses on making something that solves their customer’s problem AND is unique/differentiated/memorable and cultivates trust. The other delivers something that solves the customer’s problem, but is generic and forgettable. Who do you think wins?
The difference isn’t the tools or the capabilities; it’s the decisions and investments that create a great experience.
It’s how you innovate within constraints, where you apply taste/judgment/discernment, and how you make people feel. People don’t remember what you did; they remember how you made them feel while you did it.
A pattern I see in client work is that the fastest gains don’t come from just adding more stuff (e.g., more features, more products, more services, more capabilities…). They come from discerning what to add (and remove) to make the experience better, investing in the moments that matter to build trust with customers, and ultimately solving their problems.
Those choices turn generic touchpoints into great experiences that people talk about. Being human, creating a differentiated experience, and building trust are a HUGE differentiator in the AI sea of sameness.
Think in altitudes
There are four altitudes of experience: product, service, brand, and ecosystem. When a team considers all four levels and how a choice in one place ripples everywhere else, something amazing happens. I’ve explored this in previous editions of this newsletter.
Most teams get stuck at one or two altitudes and miss the overarching experience for the customer. The companies that orchestrate across all altitudes and connect the dots best for their customers will win in this next era.
A great exercise for any team is to map your customer experience across all four altitudes and look for disconnects and places where you’re optimizing for your org chart instead of the customer.
When all functions (e.g., sales, support, marketing, finance, operations) come together in the same room and talk through the end-to-end experience, it’s powerful. It gets people moving in the same direction by helping them understand how all the pieces fit together (or don’t).
Create a pull, not a push
Earlier this year an MIT study was published saying that a majority of AI pilots are failing. The media ran with this and started posting a bunch of bold headlines suggesting AI isn’t ready for primetime.
Fast forward a few months and MIT has since retracted the findings.
AI pilots don’t fail because AI isn’t capable of helping. They fail because of the people stuff (i.e., poor change management, training, and adoption).
Getting value from AI is a people problem, not a tech problem.
People don’t like changing how they work, and they’re often afraid of what they don’t know or understand. Our brains are designed so that when we feel threatened, our amygdala is triggered and fight-or-flight kicks in. This shuts down the part of our brains where curiosity, creativity, and innovation happen.
I often use the phrase “create a pull, not a push.” The key to adoption is creating a magnetic pull where people want to change without being forced into it. Nobody wants to be shoved off a cliff. You have to consider basic human psychology when approaching AI pilots and rollouts.
A critical leadership skill is guiding people through fear so they don’t get defensive, angry, or bury their heads in the sand. This has always been true, but it will become even more important in the years ahead. Leaders must improve their communication and coaching skills and meet people where they are.
I often suggest starting with one boring problem that annoys everyone. Something that’s visible and perceived as “safe.” Solve it well with AI, showcase the before and after, and let the team ask for more.
Momentum beats mandates every time.
Focus on speed + precision
I talked about this in last week’s newsletter, and it came up in the discussion with Kimberly as well. There’s a growing complaint about “work slop,” where teammates give AI generic prompts without context, accept whatever output they get, and send it over to someone else to deal with. It’s fast but sloppy; it doesn’t help anyone, and it erodes trust. People stop believing each other and the work suffers.
Speed is the name of the game right now, but the trick is to combine it with precision.
You still have to think critically, check your sources, give proper context, and define what good looks like before you start. The goal isn’t perfection, but you still need to keep your brain on to avoid errors.
Put the right checks and balances in place and build the habit in your team of asking critical questions before they accept what AI gives them. The work will get faster and sharper at the same time.
Design for the next interface shift
Software is shifting from a place you go, to a partner that meets you where you are. Screens are becoming conversations. Your agent will talk to my agent. My agent will talk to other agents I have.
This shift changes the UX playbook. We need etiquette and rules for how agents interact. We need clear consent patterns, confidence signals, and a graceful path back to a human. We need better error handling.
The old model hid failure behind loading spinners and error messages. The new model must say what it knows, what it doesn’t, and what will happen next.
Start designing for these patterns now, not later.
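To make these patterns a little more concrete, here’s a rough sketch in TypeScript of what an agent reply could carry. The names and shape here are entirely hypothetical (not an existing framework or a prescribed approach); the point is that confidence, unknowns, consent, and the path back to a human become first-class parts of the experience instead of something hidden behind a spinner.
```typescript
// A hypothetical shape for an agent reply that makes its state visible to the user.
type Confidence = "high" | "medium" | "low";

interface AgentReply {
  answer: string;               // what the agent believes it knows
  confidence: Confidence;       // surfaced to the user, not hidden
  unknowns: string[];           // what it doesn't know or couldn't verify
  nextStep: string;             // what will happen next, stated plainly
  requiresConsent: boolean;     // pause and ask before acting on the user's behalf
  humanHandoff?: { reason: string; action: string }; // graceful path back to a person
}

// Example: instead of failing silently, the agent says what it found, what it's unsure
// about, what it will do next, and how to reach a human.
const reply: AgentReply = {
  answer: "I found two flights that match your dates.",
  confidence: "medium",
  unknowns: ["Baggage fees for the second carrier"],
  nextStep: "I'll hold both fares for 20 minutes unless you tell me to stop.",
  requiresConsent: true,
  humanHandoff: { reason: "Fare rules are unclear", action: "Talk to a travel agent" },
};
```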
Software as we know it is fundamentally changing. Right now, you go to a company’s app to use their software. Experiences will move inside AI platforms like Claude, GPT, and Gemini. We’re moving from designing software to designing experiences that live within conversational interfaces. My team is actively solving this on multiple fronts. What are the new patterns and heuristics? How will mental models change? How do we design digital experiences that still connect emotionally while potentially not having an interface?
The experience implications are massive.
We’re a quarter-step away from a world where you pull out your phone, give it access to your camera, and it figures out where you are and what you’re doing, then starts taking actions based on that context without you needing to ask.
That’s a different experience problem.
As it gets easier to give AI access to act on our behalf across many sources of data, how do we connect those pieces to solve the right “job to be done”? How do we connect customers to the right tools at the right time to achieve their goals, while making it clear how their data is being used and shared?
Ethics, privacy, & opting out
We’re in the wild west right now.
At the moment, many company leaders are unaware of (or don’t care about) the ethical implications of AI. At StealthX we’re continually working to raise awareness where we can and encourage others to do the same while regulators and policies catch up.
It’s becoming increasingly difficult to know whether you’re talking to AI or whether content has been created or augmented by AI. There’s a massive opportunity for companies to provide clear disclosures so you know when you’re talking to a person and when you’re not.
I have a hot take that I shared during last week’s talk. In the near future, I think large AI model providers will charge a subscription to keep your data out of model training. Right now we’re all simply opted in by being in these environments, with or without our consent. For example, let’s say your friend is wearing a pair of Meta glasses and you don’t want to be recorded. Maybe someone has an Alexa device in their house, you’re a guest, and you don’t want your conversation recorded. Maybe you don’t want your likeness or artwork to be used in image generation. I think these companies will soon offer something like, “Want to opt out of AI training on your data? Pay a monthly subscription fee and we’ll remove/hide your data from our platform.” It sounds dystopian, but honestly I don’t know if there’s another way because the incentives aren’t aligned in a capitalistic society. (Side note: this opens a huge can of worms around what happens to people who can’t afford to pay to protect their data.)
As we figure out the new governance structures, policies, and processes to protect people, I challenge you to consider the potential ethical impacts of your AI decisions.
Wherever possible, speak plainly about the data you collect and why. Put explanations and disclosures where people can easily find them. Provide basic ways to opt out that don’t require a lawyer. Make it so folks can choose to interact with another human if they want (i.e., offer them an eject button).
Don’t slap AI on it for the sake of having AI
As AI’s capabilities get better, a lot of teams are rushing to slap AI and automation onto their existing workflows and processes. There’s a ton of time savings that can come from this, BUT if you’re not careful it can also lead to unforeseen consequences.
Be strategic about where and how you apply AI and automation. Automate the right things and have the right review processes.
Identify where you can create the most value and where humans still need to drive. Use judgment, discernment, and taste. People still need to intimately understand and own the outcomes. Not every process should be AI first. Some work needs human ownership. The most important skill is discerning when to keep it human and when to roll with AI. Asking yourself these simple questions is super helpful:
How much judgment is needed?
How much customer or business impact is at stake?
High volume and low judgment is a great place for automation. High judgment and high impact is best for humans or AI augmentation. At least for now.
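As a rough illustration of that heuristic (the function and lanes below are purely hypothetical, not a formal StealthX framework), the triage really can be as simple as those two questions:
```typescript
// Hypothetical triage based on the two questions above: judgment needed and impact at stake.
type Level = "low" | "high";
type Lane = "automate" | "augment with AI" | "keep it human";

function triage(judgmentNeeded: Level, impact: Level): Lane {
  if (judgmentNeeded === "low" && impact === "low") return "automate";        // high volume, low judgment
  if (judgmentNeeded === "high" && impact === "high") return "keep it human"; // or human-owned AI augmentation
  return "augment with AI"; // a human stays in the loop and owns the outcome
}

console.log(triage("low", "low"));   // e.g., routine invoice matching -> "automate"
console.log(triage("high", "high")); // e.g., pricing for a key account -> "keep it human"
```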
A few ways to put this into action
Choose a signature moment along your customer journey and come up with a signal of care that’s hard to copy. Look for ways to make your experience unmistakably and undeniably memorable.
Map a current decision across all four experience altitudes (product, service, brand, ecosystem). Are there unintended consequences or ripple effects?
Record a short video of the old way and the new way. Share it with the team to create a pull.
Add checkpoints to avoid work slop. Build the habit into your team of asking these 3 questions before using work from AI: What could be wrong? What’s missing? Where will this fail?
If you’re using AI in customer interactions, be sure to disclose what you collect, why, for how long, and how someone can opt out. Put it where people will see it and use plain language.
Wrapping up
I’m extremely optimistic about the future of technology. The upside is real, but the gap is human. Leaders who combine clarity, ethics, and simple systems to ensure quality and trust will build companies that customers love. The rest will drown in the sea of sameness.
 Onward & upward,
Drew 
