I am responsible, the AI isn't
Undoubtedly, AI/LLMs are impressively powerful new tools that we as professionals, whether software engineers or other knowledge workers, are happily adopting. But in the end, they are tools we wield. And no matter which tool we use, as professionals this means:

I am responsible for what I deliver as a piece of work. The AI does not take responsibility for accuracy, trustworthiness, safety, or any other kind of social guarantee.

This thought keeps coming up in discussions, online and offline, about the obvious high potential of AI. I love the enablement these tools provide; they sometimes feel like a little bit of magic. But in the end we must swing the hammer of AI power so that we hit the nail on the head rather than tear through whatever gets in the way (it's a metaphor, sure, but you can imagine plenty of AI-based work scenarios where it fits).

Let's shift the metaphor into the physical world: We are surrounded by electrical devices that are plugged into the power grid. When we pick up a toaster, we trust that it won't electrocute us. So when I repair a defective toaster, replacing its plug for example, I make sure it is done with the necessary diligence and knowledge. Everyone in my household rightfully expects me to ensure it is still safe to use. That responsibility doesn't vanish just because I'm not a professional electrician, or because I delegated the repair job to an AI.

The same responsibility applies to anything we produce that involves AI in its making.

  • Am I generating a report? I am responsible for its accuracy, its completeness, and the trustworthiness of its sources.
  • Am I delivering software? I am responsible for making it safe, secure, and correct, and for ensuring it does not impose an undue burden on whoever operates it.
  • Am I publishing documentation of any kind? I am responsible for keeping it accurate, helpful, concise, and up to date.
  • Am I writing a blog post? I am responsible for the message, the reasoning, and for making it valuable to my audience.
  • Am I drafting an email? I am responsible for clear, honest language and for being mindful of the emotional context it may be received in.

All of this holds true whether or not AI is involved. But AI makes these failures harder to spot. The results often look so promising, yet your AI agent sometimes boldly lies to your face (a.k.a. hallucinating), misinterprets sources, and draws wrong conclusions. These are all mistakes that humans are prone to as well. But as professionals whom others rely on, we need to actively guard against these pitfalls.

Sure, this thesis is as obvious as it sounds, but vibe-coded websites exposing unprotected databases to the public, and global consultancies failing to verify their AI-generated reports, suggest otherwise.

I get the feeling that we really need to emphasize this collectively in the discussions happening right now. Let's repeat it to ourselves every time we swing the hammer of AI, feel proud of the polished work our prompts produced, and deliver it to the world:

I am responsible for this. The AI is not. I give this to the world, my colleagues, or my family in the hope of providing value. You can trust me that it is safe to use and will bring you more benefit than harm.

If you cannot take responsibility, please disclose the lack of verification - or don't ship at all.

I trust you.

I trust you to not harm or mislead me.