Why AI Is Exposing Our Humanity Gap
What chatbots, email drafts, and missed context reveal about emotional intelligence
This is Empathy Elevated - your weekly guide and shortcut to mastering emotional intelligence through the power of empathy. Last week, I wrote Prompting for Humanity: Teaching AI to Ask Better Questions Than It Answers.
Emotional Intelligence • Stoicism • Human–AI Partnership
Practical frameworks for clearer communication, better judgment, and less friction in modern work.
“I’m sorry about that.”
The automated voice replies. Your order was messed up, and you really needed your item yesterday.
Do those words reassure you? Would it be any better if a living, breathing customer service agent uttered them in a monotone?
You know the voice doesn’t feel anything. It’s just trained to sound like it does, hoping to calm you down. Is that worse than the customer service rep who wouldn’t bother at all and would meet your agitation with a mirror of aggravation?
AI doesn’t feel. But can it help us be more human?
I think so, if we use it as a partner. The question is, how?
From a Stoic perspective, the real work isn’t controlling the system’s response; it’s choosing how we regulate our own reaction to it.
The Humanity Stack: Building Your Irreplaceable Human Capabilities
“Run it through.”
I’ve heard this a lot on calls lately. Before sending an email, it’s almost routine now to run the draft through one of our LLMs. It’s like we don’t trust ourselves to know if our words sound right for the person reading them.
A lot of the emails I get now sound the same. Many are full of em dashes. Nothing wrong with them, but too many is a telltale sign the draft came from AI.
But what do we lose in these drafts? Our own voice. Our context.
Sure, a polish has its benefits. I do it too. But I check before sending.
That pause is emotional intelligence in action: the ability to recognise the human on the other side, regulate your impulse to optimise for speed, and respond with intention instead.
I know Billy does not like bullet points and prefers numbers, so why would I send the bullet point draft? I know Sarah likes emails that are a bit softer and less direct. I can use AI to help with that, but I still keep my own tone.
AI is your partner, not your replacement, in writing emails and countless other endeavours.

When AI Gets It Wrong: The Beautiful Mistakes Only Humans Can Fix
“I didn’t quite catch that.”
I responded, “Oh, never mind, you are a bot. No request from me.”
Then the messages kept coming.
“I missed that. Can you repeat?”
Ten seconds later, “I’m sorry, can you say that again?”
At the fifth message, the emotionless apologies and the constant buzzing of my phone made me even more irritated, so I typed STOP to end the barrage.
This wasn’t some super-smart AI. Just a basic chatbot. Still, the lesson is there. We can use chatbots to free up time for human work.
A couple of weeks ago, I scheduled a brief 15-minute call with a potential advisor to discuss investing. They never called. I figured something got misconfigured with the scheduling.
I reached out, and rather than assume they stood me up, I let them know something was haywire with their AI chatbot and offered to still meet with them if they were interested.
Why do I call this beautiful? Because it showed me we can work with AI, but we still need to step in.
AI isn’t perfect, and I doubt it ever will be. It can’t replace real human connection.
They responded the next day but did not acknowledge the faulty system or ask for any details.
An opportunity missed.

The Binary Fallacy: Why AI vs. Human Is the Wrong Framework
I’ve been working on a team charter with my team this past quarter. Our human input established the framework this new team needs, but AI has been our partner throughout.
Let me explain.
It has not been one or the other. It has been a tag-teaming effort.
AI spits out the suggestion “Lead training,” and we change it to “Support training,” given our department's context and bandwidth. AI still provides the general direction we need to go in.
We keep hearing vague terms like “AI will replace XYZ.”
Sure, in some cases it will, entirely. But can it manage relationships at a level that feels and recalls the pit in your stomach from months ago, when your gut told you a change in direction was coming? Probably not.
It is a fallacy to think in terms of one against the other.
We complement each other.
The team charter was finalised in early November: not alone, but in partnership with the objective advice of AI and our own gut, our inner radar, guiding us toward what felt right.

The Code Within: Debugging with Our Human Advantage
The screen blinks. Error code: humanity.exe is still running, even with all the warnings.
This isn’t a bug. It’s the feature we’ve been missing in our empathy stack.
Let me be clear: AI can’t replace your memory of how Sarah’s voice changed when you talked about last quarter’s numbers, or how Billy gets excited when you call something a puzzle instead of a problem. That’s where our real value lies.
I’ve seen teammates rush their replies through AI checkers, as if they’re testing whether they’re still human.
But here’s what I’ve learned. The best tool is still your own intuition, your experience, and that gut feeling when something feels off.
Your job isn’t to compete with AI. It’s to work with it. Let AI handle the details, while you focus on the human side.
Machines learn from patterns. We learn from pain, joy, and all the messy in-between moments that no prompt can capture.
In the end, we’re not making AI more human. We’re letting it remind us what it really means to be human.
EMPATHY ELEVATED IN ACTION
Emotional Intelligence → Before outsourcing a response to AI, pause. Ask who’s on the other side, what they care about, and how your words might land. EQ isn’t politeness—it’s situational awareness applied before you hit send.
Stoicism → You can’t control how a system responds, fails, or loops. You can control your interpretation and your next move. Stoic practice shows up in restraint: responding with judgment instead of reflex when technology misfires.
Human–AI Thought → Use AI to surface options, structure, and blind spots—but don’t surrender context or memory. Machines offer patterns; humans supply meaning. The advantage isn’t replacement, it’s sequencing the two wisely.
✅ What I’ve been analysing this week (reading, watching, listening, etc.)
📖 I’m reading Somatic Exercises For Nervous System Regulation, because why not relieve stress and get some stretching in? Win-win!
🔵 I read a post by Code Like A Girl about getting started on Substack, its impact, and how to extend your reach in 2026. Find some excellent writers featured there!
