Spare Capacity

AI reasoning models are now so good, especially with the introduction of ‘deep research’ models that combine advanced reasoning with thorough content search, that they are generally my first port of call when tackling a complex, technical, open-ended question like “Should we pivot from product development to consulting services?” or “What business models could be built around such and such a website?”.  This is not to say that I blindly trust the output; it’s just that rather than making even a cursory effort to sit down and process the big questions myself, perhaps with a notebook and pen in hand, I’ll ask AI first.

Given the novelty of the technology, this is, of course, a novel behavioural effect.  But it didn’t just materialise overnight.  It crept up on us (on me, at least), slowly, silently, insidiously.  It started a few years ago with the first releases of ChatGPT, when we’d ask it to write us a joke or a limerick.  That was cute.  Then we realised it could correct material we’d already written, or make suggestions for improvements.  As time passed it got better at basic reasoning, and we soon found we were turning to ChatGPT, especially once voice interaction was added, for first-line triage of everyday issues like “How do I fix this drip in my toilet?” or (with video camera turned on) “My dog is acting weird - what do you think is wrong with him?”.  No plumber, no vet.  Straight to AI.

Now I find myself going straight to the models for even the most advanced questions, the ones I hoped and prayed would remain our exclusive remit, at least for as long as I’m alive.  I could resist, of course, but to be honest, a 15-page ‘deep research project’ conducted by AI on a topic or idea that has just sprung to mind is a gem too valuable to pass up.  Ask first, think later.  I certainly don’t bother to puzzle out simple, everyday mathematical or geometric questions that come up, like “Will my new caravan fit in my car port?” or “How do you convert from a margin to a markup?”.
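For the record, the margin-to-markup question is exactly the kind of arithmetic we used to work out on the back of an envelope.  A minimal sketch (function names here are just illustrative):

```python
def margin_to_markup(margin: float) -> float:
    # Margin is profit as a fraction of selling price;
    # markup is the same profit as a fraction of cost.
    return margin / (1.0 - margin)

def markup_to_margin(markup: float) -> float:
    # The inverse conversion.
    return markup / (1.0 + markup)

# A 20% margin corresponds to a 25% markup: 0.20 / 0.80 = 0.25
print(margin_to_markup(0.20))
```

The asymmetry is the bit people trip over: the two percentages share the same absolute profit but divide it by a different base.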

With hindsight it seems quite obvious that this was the inevitable outcome.  Human beings always tend to laziness, both physical and cognitive—it’s a natural consequence of the harshness of evolutionary pressures.  The self-preservation reflex pushes us to seek out shortcuts wherever we can, so we can preserve some ‘spare capacity’ for lean times and emergencies.  We gorge when able, so we can better deal with periods of scarcity.  We rest when able, so we can sprint when needed.  The last 100 years have seen the almost wholesale replacement of physical labour, both in the workplace and domestically, with an arsenal of tools and machines that give many of us a huge amount of ‘spare capacity’ in both body and mind.  In the developed world, we no longer need to hunt, wash clothes in a river, transport ourselves by foot, program computers with punch cards or low-level languages, or sew, cut or weld by hand.

What have we done with our spare capacity? In the physical realm, the outcome is clear to see.  We have vegetated—becoming unfit, sick, and overweight.  Those of us who try to resist do so by attempting to offset 16 hours a day of sitting with 45 minutes of HIIT every now and again.  In other words, we pissed our ‘spare capacity’ up the wall.

In the brain domain, things are a little murkier, perhaps because the progress has been slower and is still trickling through.  The standard argument is that as ever-increasing technological capability has freed us from the more mundane tasks - data entry, bit manipulation, manned surveillance and control - we can now focus on the more abstract, more ‘human’ tasks like planning, analysing, strategising - thinking, essentially.  This is the story currently told in the world of software development, at least.  Those who are bullish on AI claim that programmers will be freed from the necessity of writing actual code, allowing us to focus exclusively on the higher-level abstractions of business logic, architecture and user experience.  I’m not sure the bulls have asked o1 for an architectural plan for a complex web app.  Look, I’m not saying a non-technical user could create a web platform simply by prompting an AI model - I’m saying that I, a senior dev specialising in web architecture, would tend to ask ChatGPT for a complete report before putting neuron to synapse.  Ask first, think later (if at all).

First-line thinking is dead and buried.  How long will it be before all thinking goes the same way?

If the necessity to think is going the same way as the necessity to hunt, what will become of our undernourished grey matter? What will we do when we reach the ceiling of abstraction and there’s no higher we can go? When any vexing question can be solved by a stroke of the keyboard?  When any piece of software can be created with a simple description?  Will we feast on this externally available intelligence, only to seek penance in 45 minutes of HIBW (High Intensity Brainwork), 3 times per week?  Will the rich have ‘brain gyms’ in their houses full of shiny brain-training machines they never use? What will be the mental equivalent of a bicep curl machine?

Perhaps art, culture, music, literature, even love, are the higher abstractions we will seek out with our newly found spare capacity.  Freed of the need to think, we’ll just feel.  Or perhaps we’ll just watch Netflix.