When AI decides, what’s left for humans?

Why HR needs to be the moral compass in the age of AI


“We need to do something with AI.” You’ve probably heard that sentence a few too many times this past year.

Tools are rolled out, pilots launched, dashboards connected. But one question often stays under the radar:
What is AI actually doing to people, work, and dignity?

In a conversation with Gunter Loos – consultant at Beanmachine with one foot in technology and the other in anthropology and ethics – one thing becomes clear very quickly: the real risk isn’t that AI becomes too smart.
The real risk is that we become too compliant.

From expert to rubber stamp

Gunter uses a phrase that sticks: rubber stamping.

Think of all the professionals with real expertise and decision-making power: recruiters, case handlers, risk officers, doctors, planners, HR business partners…
Today, they make decisions based on knowledge, experience, and intuition.

With AI, we often add a layer in between:

  1. A model is trained on historical data.
  2. The system produces a recommendation or decision.
  3. The human gets to “validate”.


And that’s where it goes wrong.
The check becomes a formality. “It’s probably correct, the system has always worked well.”
Stamp on it, move on.

What happens?

  • Errors in the system get approved and passed through.
  • The role of the human shrinks: from thinking professional to conveyor belt.
  • Critical thinking fades. Why keep asking questions if “the machine” supposedly knows better?


For organizations that claim people are their greatest asset, that’s a strange kind of self-sabotage.

Human in the loop… but for real

The standard answer is reassuring: “Don’t worry, there’s always a human in the loop.”
But the real question isn’t whether there’s still a human somewhere in the process.
The real question is: does that human still have a real role?

A human in the loop only makes sense when that person:

  • adds context the system doesn’t know
  • sees patterns that fall outside the data
  • has the mandate to say no
  • and carries responsibility for the final judgment

That requires more than an extra checkbox in the workflow.
It requires deliberate design and training.

Critical thinking as a safety mechanism

In many organizations, critical thinking is still treated as a soft skill. “Nice to have if someone happens to be good at it.”

In an AI reality, it becomes a safety mechanism.

Gunter names three simple but sharp questions that every professional should internalize when AI has a say in decisions:

  1. Where does this come from?
    What data, assumptions, biases, and context sit behind this output? Is this based on our reality or on some generic world model?
  2. What am I missing if I follow this blindly?
    Who or what falls out of view? Which minority voices disappear? Which risks am I no longer seeing?
  3. What human judgment still belongs here?
    Does this improve efficiency or effectiveness? And what does it do to dignity and trust? Which part of this decision must be made by a human, and who owns that responsibility?

These are not philosophical extras.
They’re the checks that prevent AI from quietly becoming the norm and humans the footnote.

HR’s role: more than an L&D catalog

That brings us to a question that looks HR straight in the eye:
What is your role in all of this?

According to Gunter, at least this:

1. Be at the design table

When AI is embedded in core processes (recruitment, promotion, performance, risk, planning), HR needs to be at the table early.

Not only with the question “How can this be faster and more efficient?”, but also:

  • What role remains for humans here?
  • Are roles silently being reduced to rubber stamping?
  • What does this do to motivation, mastery, and autonomy?


2. Roll out skills, not just tools

Explaining how an AI tool works is one thing.
Teaching people how to relate to it is something else entirely.

HR has a natural role here:

  • Structurally training critical thinking.
  • Helping teams and leaders question AI output instead of swallowing it.
  • Creating room for doubts, risks, and moral questions, without anyone being labeled “anti-innovation”.


3. Understand where AI lives in the organization

HR doesn’t have to become a data science team.
But it does need to know:

  • where AI is already running
  • who is affected by it
  • where decision-making power is shifting (from human to system)
  • and where dignity and meaning are quietly being eroded


Without that insight, it’s hard to speak credibly about “people-centric organizations”.

The deeper question: what does AI do to the soul of work?

AI is not just about efficiency.
It also touches a much bigger question: why do we work in the first place?

If more and more tasks shift toward technology, the following questions inevitably surface:

  • What’s left for people to do?
  • Do we want jobs where people mainly “push buttons while AI does the interesting parts”?
  • Or do we want work where judgment, imagination, emotion, and values remain central?

Many organizations reverse that logic: first the business case, then (maybe) the moral question.
Gunter argues for the opposite:

Make the mission human and ethical. The business case comes later.

AI belongs in that second part.
But it should never be allowed to overwrite the first.

From hype to real choices

Very concretely: where do you start as an organization?

One possible roadmap:

  1. Clarify your compass
    Decide what you will use AI for – and where you explicitly won’t.
    For example: is it acceptable for generic tools to write messages to your customers if “personal attention” is a core brand promise?
  2. Prepare people for permanent acceleration
    Don’t treat this as a hype that will blow over, but as a lasting shift in how we work and learn.
  3. Build in critical checks
    Not just in processes, but in culture: is it okay for someone to say, “I don’t fully trust this,” without being labeled as difficult?


What does this mean for leadership?

Leadership in the age of AI goes far beyond change management and tool adoption.

Leadership is about:

  • having honest conversations about identity (“What is our role when machines do part of the work?”)
  • naming the tension between speed and care
  • and, together with HR, defining which decisions will never be fully automated


That takes courage.
And it requires a very different conversation than: “Do we have budget for this tool?”

So, what now?

At Beanmachine, we work with clients on exactly these questions: in leadership programs, culture work, and in our AI & Critical Thinking training.

In that workshop we help teams and leaders:

  • see AI not as an oracle, but as a tool that needs human judgment and boundaries
  • sharpen their thinking and challenge the output instead of rubber-stamping it
  • look beyond the hype, question their own assumptions, and spot where bias sneaks in
  • and translate those insights into concrete, daily habits


What’s inside?

  • A hands-on exploration of how AI really works, and where it can be brilliantly helpful or confidently wrong
  • Critical thinking skills that go beyond prompt engineering and “10 best prompts” lists
  • A simple model to evaluate and adjust input and output with clarity
  • Five smart habits to use AI with confidence, curiosity and care, without switching off your own brain


Want to dive deeper into this theme?
In our Beanzine, we publish the full conversation with Gunter.
And if you want to explore what this means for your organization, we’re happy to think it through with you.
