What do we do when everything is automated?

The unknown spurs creativity. That’s why the inception of the Sci-Fi genre was marked by so many original concepts. Nascent Sci-Fi writers sat on the cusp of a technology revolution. The future was a veldt of fascination and terror. Nothing was known of our path, but anything was possible.

We no longer have that freedom. We’re here – living the reality, unable to execute the same commentary from deep within the mire of technological fanaticism. Some try – the Channel 4 show Black Mirror is a great example – but for most of us, anything that’s new is unreservedly yearned for. Now with AI and automation, terrifying Sci-Fi scenarios are breathing down our necks, swapping every ‘if’ for a ‘when’.

Automation isn’t evil. As a driver of performance and efficiency, it’s the stuff of dreams. Pursuing automation blindly, however, leads us towards a cliff edge, with nothing but man-made systems to prevent our fall. We lose control when the ‘human element’ becomes secondary – less error, but at a cost to potential.

There’s a common Sci-Fi scenario that depicts humanity’s role as caretaker usurped by its creations. In it, everything is perfected: machines repair machines, systems are infallible and our every need is catered for without prompts. But over time we forget how to repair these systems; we forget how to be independent, growing absolutely reliant on the structures our ancestors built. We forget what it is to strive for new technology, for brighter futures, because the need to survive diminishes against the backdrop of perfect health, resource abundance and a world purged of danger.

This is an extreme scenario, but we’re already seeing where its tracks would lie. Automation removes us from the machines we depend upon. As we interact less, we understand less. Safer, yes, but at the price of control.

This is how we detach ourselves from natural selection. As we remove the possibility of human error, so too do we diminish our tolerance for mishaps. But our world remains fallible, even if our structures do not. With the unrestricted pursuit of automation, we’d become less capable, even dumber. Our survival would be guaranteed only by degradable systems, to the detriment of our species’ health.

If we don’t expose ourselves to danger, we don’t expose ourselves to risk. If we don’t expose ourselves to risk, we don’t expose ourselves to failure. If we don’t fail, we don’t learn.

The famous final lines of T.S. Eliot’s The Hollow Men, which prophesy humanity’s apocalyptic end, go: “This is the way the world ends / Not with a bang but a whimper.”

It’s not a bomb or a tidal wave. It’s a slow surrendering – a self-imposed subjugation to our technological children – and a relentless pursuit of unmitigated convenience and ease. We cannot fail if we’re not present. To remove human error completely is to deny ourselves, both our nature and our role as caretaker. Automation explored with earnest, eager intent seems glorious in the moment, but it’ll take decades for the consequences to unravel. Sometimes, the only way to impede an illness’s spread is to recognise its first sign – otherwise, we’ll sit back and watch the grass grow high above us, then slowly turn brown.