Can AI prevent humans from repeating their mistakes?

Despite the common adage that those who forget history are doomed to repeat it, humans often fail to learn from their past mistakes. We repeat the same errors, from fighting fruitless land wars to making the same dating missteps.

One reason for this phenomenon is our forgetfulness and myopia – we don’t see how past events are relevant to current ones.

Additionally, when things go wrong, we don’t always determine why a decision was wrong, and how to avoid it ever happening again. Can technology, particularly AI, put an end to this cycle of mistakes?

The challenge of information

One issue with learning from mistakes is that we struggle with information processing. We fail to remember personal or historical information, and we often fail to encode information when it is available.

Moreover, we make mistakes when we cannot efficiently work out what is going to happen – because the situation is too complex, the analysis too time-consuming, or our biases lead us to misinterpret what is going on.

Can AI help?

AI can help store information outside of our brains and retrieve it. However, remembering is not the same thing as retrieving a file from a known location or date. Remembering involves spotting similarities and bringing things to mind.

An AI needs to be able to spontaneously bring similarities to our mind, often unwelcome similarities. But if it is good at noticing possible similarities, it will also often notice false ones.

That means it will warn us about things we do not care about, possibly in an annoying way. Tuning its sensitivity down means increasing the risk of not getting a warning when it is needed.
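This trade-off can be made concrete with a minimal sketch. The scenario, names, and similarity measure below are hypothetical illustrations, not any real system: a reminder tool compares a new situation against recorded past mistakes, and a single threshold decides how eager it is to warn. Set it low and it nags about spurious matches; set it high and it stays silent when a warning was needed.

```python
from difflib import SequenceMatcher

# Hypothetical log of past mistakes (illustrative data only).
past_mistakes = [
    "lent money to a friend without a written agreement",
    "deployed on a Friday afternoon without a rollback plan",
]

def similar_mistakes(situation: str, threshold: float) -> list[str]:
    """Return past mistakes whose text similarity to `situation`
    exceeds `threshold` (0.0 = warn about everything, 1.0 = never warn)."""
    return [
        m for m in past_mistakes
        if SequenceMatcher(None, situation.lower(), m.lower()).ratio() > threshold
    ]

# A permissive threshold surfaces the relevant warning (and, on a larger
# log, plenty of false alarms); a strict one misses it entirely.
print(similar_mistakes("deploying on friday afternoon, no rollback plan", 0.5))
print(similar_mistakes("deploying on friday afternoon, no rollback plan", 0.95))
```

There is no threshold that eliminates both failure modes at once; tuning only moves the balance between annoyance and missed warnings.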

Where technology stops mistakes

Idiot-proofing works: cutting machines that require two buttons to be held down, keeping both hands away from the blade, or dead man's switches that stop a machine if the operator becomes incapacitated.

When technology works well, we often trust it too much, and this can be dangerous when the technology fails. Much of our technology is amazingly reliable, and we do not notice how lost packets of data on the internet are constantly being found behind the scenes, how error-correcting codes remove noise or how fuses and redundancy make appliances safe.
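The error-correcting codes mentioned above can be illustrated with the simplest one there is, a triple-repetition code: each bit is sent three times and decoded by majority vote, so any single flipped bit per triple is silently repaired. This is a toy sketch of the principle, not the codes real systems use.

```python
def encode(bits: list[int]) -> list[int]:
    # Repeat each bit three times: [1, 0] -> [1, 1, 1, 0, 0, 0].
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received: list[int]) -> list[int]:
    # Majority vote over each group of three survives one flipped bit.
    return [
        1 if sum(received[i:i + 3]) >= 2 else 0
        for i in range(0, len(received), 3)
    ]

sent = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[1] ^= 1               # noise flips one bit in transit
print(decode(sent))        # majority vote still recovers [1, 0, 1]
```

Real systems use far more efficient codes, but the effect is the same: errors happen constantly and are corrected before we ever notice them.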

However, when we pile on level after level of complexity, it becomes less reliable than it could be.

The double-edged sword of AI

AI is a double-edged sword for avoiding mistakes. Automation often makes things safer and more efficient when it works, but when it fails, the trouble is far bigger. Autonomy means that smart software can complement our thinking and take work off our hands, but when it does not reason the way we expect, it can misbehave.

Anybody who has dealt with highly intelligent scholars knows how well they can mess things up with great ingenuity when their common sense fails them, and AI has very little human common sense.

Training AI systems

AI systems are programmed and trained by humans, and there are lots of examples of such systems becoming biased and even bigoted.

They mimic the biases and repeat the mistakes from the human world, even when the people involved explicitly try to avoid them.

Reducing the consequences of mistakes

In the end, mistakes will keep happening, but we can work to reduce the consequences of mistakes. The undo button and autosave have saved countless documents on our computers.
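The undo idea deserves emphasis because it embodies the article's thesis: rather than preventing mistakes, make them reversible. A minimal sketch (hypothetical class and names, not any real editor's implementation) keeps a history of snapshots so any edit can be rolled back.

```python
class Document:
    """Toy document that snapshots its state before every edit,
    so mistakes can be undone instead of prevented."""

    def __init__(self, text: str = ""):
        self.text = text
        self._history: list[str] = []

    def edit(self, new_text: str) -> None:
        self._history.append(self.text)  # snapshot before changing
        self.text = new_text

    def undo(self) -> None:
        if self._history:                # nothing to undo on a fresh document
            self.text = self._history.pop()

doc = Document("draft one")
doc.edit("draft one, ruined by a bad find-and-replace")
doc.undo()
print(doc.text)  # back to "draft one"
```

The design choice is the point: the system assumes mistakes will happen and budgets for recovery, rather than trying to make them impossible.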

The Monument in London, tsunami stones in Japan, and other monuments remind us about certain risks. Good design practices make our lives safer. Ultimately, it is possible to learn something from history. Our aim should be to survive and learn from our mistakes, not prevent them from ever happening.

Technology can help us with this, but we need to think carefully about what we actually want from it – and design accordingly.
