About the only idea from the world of Game Theory that I'm sure I know is a thought experiment called The Prisoner’s Dilemma. It goes something like this:

You’ve been arrested. Your accomplice has been arrested. You are both told that if you snitch on the other you’ll be let free. If you are the only person who is snitched on you will get three years gaol time. If you both snitch on each other, you’ll both get two years. If neither says a word, you’ll both receive a year in prison.

The best collective outcome is for both prisoners to stay quiet. But with the lure of freedom, there’s a high likelihood both will snitch. The individually rational choice (chasing the chance of no gaol time) leads to the worst collective outcome (a total of four years gaol time).
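For anyone who prefers to see the arithmetic laid out, here’s a minimal Python sketch of that payoff structure. The numbers come straight from the description above; the helper names are purely illustrative. It shows why snitching is the individually rational move whichever way your accomplice jumps, and why that logic produces the worst collective total.

```python
# Sentences in years of gaol time, so lower is better for each prisoner.
SENTENCES = {
    # (my choice, their choice): (my years, their years)
    ("snitch", "quiet"):  (0, 3),
    ("quiet",  "snitch"): (3, 0),
    ("snitch", "snitch"): (2, 2),
    ("quiet",  "quiet"):  (1, 1),
}

def best_individual_choice(their_choice):
    """Pick the choice that minimises *my* sentence, given what they do."""
    return min(("snitch", "quiet"),
               key=lambda mine: SENTENCES[(mine, their_choice)][0])

# Snitching is better for me whether they stay quiet or snitch...
assert best_individual_choice("quiet") == "snitch"   # 0 years beats 1 year
assert best_individual_choice("snitch") == "snitch"  # 2 years beats 3 years

# ...yet if we both follow that logic, we land on the worst collective total.
total = lambda pair: sum(SENTENCES[pair])
assert total(("snitch", "snitch")) == 4  # worst collective outcome
assert total(("quiet", "quiet")) == 2    # best collective outcome
```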

In a conversation on LinkedIn this morning I was reminded, once again, about the dilemma. And how decisions about deploying AI tools may well fit this pattern.

For example, I have a tedious task to do that serves me little value and serves my organisation little value. Let’s call it “Performance Review” for want of a better name.

The best thing to do would be to work with my organisation to get rid of this pointless and often divisive exercise. But that would be quite a lot of effort. If everyone took a stand it might be easier, but with these shiny new Generative AI tools I can get ahead by just getting ChatGPT to fill all the forms in for me.

The people owning the processes, who have been tasked with getting Performance Review processes completed (and financially incentivised to hit such targets), notice that something strange is going on. Rather than call the entire enterprise into question, they get sold a clever AI tool that can spot when people are using AI tools to complete the process and flag up such miscreants. We definitely don’t want people using technology to improve their productivity in this organisation, thank you very much!

The tools that are used to complete the forms therefore evolve to evade detection. The detection tools evolve to catch the evading form-filling tools. And thus an AI arms race escalates, all because people were being incentivised to do something whose purpose the organisation had long since lost sight of.

(As an aside, “We don’t do performance reviews because they are daft” was one of the strongest pulls for me to join my current employer Equal Experts.)

It’s really easy to say we should automate things. It’s quite easy to say we shouldn’t automate things that are pointless. But sadly we often find in large organisations that individual incentives drive behaviours that are counterproductive for the collective whole, and when it comes to some of the tools becoming available today, this could become extremely problematic.

For those of us old enough, never forget what happened with Excel Macros…
