Let’s start today’s proceedings with a recap of the concept of a ‘derivative’. If this one isn’t already in your toolbox of mental models, it’s a good one to pick up.
As an abstract concept, or set of concepts, derivatives can be gnarly, so let’s start with something concrete: speed. Speed, or velocity, is the rate of change of position over time - in technical jargon, a ‘time-derivative of position’. If I’m driving at 60 km/h, then for every minute that passes, my position, P, changes by 1 km - simple. Velocity is the first derivative of position. Can you guess what the second is? Which familiar concept describes the rate of change of speed? The answer is ‘acceleration’. If my speed changes from 60 km/h to 100 km/h, I’m accelerating. If that change happens over 5 seconds, I am accelerating hard. If it happens over a minute, I am accelerating gently. Acceleration is the second derivative of position.
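For the notation-curious, that’s all a derivative is here - just a sketch, with P(t) as position at time t:

```latex
v(t) = \frac{dP}{dt}, \qquad a(t) = \frac{dv}{dt} = \frac{d^2 P}{dt^2}
```

Sixty km/h is 1 km per minute; acceleration just measures how quickly that figure is itself changing.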
Let’s take another example from a different field. In economics, inflation, or the rate of generalised price increases across an economy (assuming prices are actually increasing, which is a pretty safe assumption these days), is the first derivative of price. You might hear that inflation has increased over a certain time period, from, say, 1% to 3%. If inflation is increasing, we’re talking about the second derivative of price, akin to acceleration in the previous example.
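Same shape, different field. One hedge for the pedants: inflation is a percentage rate, so strictly speaking it’s the derivative of the logarithm of the price level p(t) rather than of p itself:

```latex
\pi(t) = \frac{1}{p(t)}\frac{dp}{dt} = \frac{d}{dt}\ln p(t),
\qquad \text{``inflation increasing''} \iff \frac{d\pi}{dt} > 0
```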
First, and even second, derivatives are easy enough to comprehend, but once we move into third, fourth, fifth and beyond, it becomes near impossible to conceptualise their meaning. What do the fourth (snap), fifth (crackle) and sixth (pop) derivatives of position represent in real life? Nothing I, at least, can get my head around.
Away from the purely mathematical-physical definition of derivatives, we find looser, but related uses. In finance, a derivative is a financial instrument that derives its value from an underlying asset. Here we’re not talking about ‘rate of change’, but something like ‘degrees of removal’. ‘Mortgage-Backed Securities’, the infamous villains of the 2008 global meltdown, derived their value (in the loosest use of the word) from what turned out to be a shredded-up mess of near-worthless loans that had been securitised (again, playing fast and loose with semantics here). When it became clear that the loans were junk, the securities tanked. Now imagine if the MBSs themselves had been used to create some other financial instrument - a second derivative of the original loans. The world of finance is full of such examples. As assets and instruments get divided up, securitised, rehashed and re-securitised, we move further up the derivative chain. A house of cards, no doubt, but at least a bit easier to get one’s head around than ‘jerk’ (the third derivative of position).
That’s a wrap on ‘derivatives’; we’ll come back to them shortly.
My (British) mum spent 1978 in California. I’ve often said she doesn’t know how lucky she was to be a twenty-something at that place, at that time. If I had the chance, I wouldn’t think twice about travelling back in time and living my twenties and thirties on a weed-and-sun-soaked beach somewhere near San Diego - no mobile phones, no internet, not even a bloody fax machine. But let’s try to be a little objective here - some, maybe even a lot, of the digital tech that’s been invented since 1978 has been pretty useful. When did it go to the bad place though? When did we reach the summit of Mt Digitalis and start descending into the cold river of bullshit that sweeps us away while we try to keep afloat?
Compare an acoustic to an electric guitar for a second. I think few of us would have any serious complaints about the technological ‘leap’ represented by guitar electrification. It seems like a net positive: a broader range of sounds, easy amplification, the possibility of practising quietly, etc. Any downsides would seem purely ideological in nature. Purists might argue that an electric guitar is not really a ‘guitar’ at all, for example, but it feels like a push to suggest that an electric guitar takes something from you, undermines you, spies on you, or seeks to manipulate you.
I’ve thought a lot about this, and have always wondered (presuming it is actually in the past) when we reached the point where digital innovation stopped serving us and improving our lives and started slowly shitting all over us. Recently, I made a list of all the ways our gadgets take from us; not only can it be used to evaluate what to buy and use, but it can also serve as a yardstick for trying to work out when ‘peak digital’ was (the day before the iPhone was announced, in my opinion).
Try asking yourself the following questions with reference to an electronic gadget, a piece of software or even a digital service:
- Does it encroach on your privacy, steal your data, spy on you?
- Does it require you to agree to terms and conditions you’ll never read and could never understand just to use it?
- Are you actually renting it even though you think you’ve bought it?
- Does it require an on-going paid subscription?
- Does it tie you into a proprietary ecosystem?
- Does it show you ads?
- Does it try to sell you other products or services?
- Does it regularly require your attention? Does it send you notifications?
- Is it purposely designed to be addictive? Does it press your dopamine buttons?
- Does it have a significant learning curve?
- Does it require software updates?
- Does it have planned obsolescence built in? Does it need to be replaced every two or three years?
- Does it need to be regularly charged?
- Does it require internet connectivity?
- Does it have a long startup time when you turn it on or need to use it?
- Does it make you stupid? Does it cause you to end up losing simple human skills?
- Does it socially isolate you? Does it make you envious of or aggressive towards others?
- Can it incriminate you in the future if you make a mistake today?
- Does it try to manipulate your behaviour?
- Does it transform what should be an object with a specific task into a general tool that does everything?
Let’s imagine we can assign a score on each of the above metrics, say 0-10, with 0 for no effect and 10 for the worst offenders. A simple analogue wristwatch scores zero. A Casio from the 90s might rack up low single digits, mainly because setting the time with those little buttons was finicky as hell. But a smartwatch - what would that score? Hundreds, maybe? They tick just about every box: needs daily charging, tracks your every move, tells you what to do and when, needs to be connected to your phone or the internet, locks you into an ecosystem - the list goes on. Some of them even have screens that only turn on when you lift your wrist up to look at the time, which introduces an annoying lag into the simplest of quotidian gestures. Tech innovators have given us watches that make telling the time painful, yet we buy them by the truckload. I’ve got one. I bought it so I can get precise satellite positioning and with it know how slow I’m running down to the millisecond, and have precise biometric data that tells me my heart rate is too high - as if there has ever been a forty-something whose heart rate wasn’t too high whilst jogging. Astounding.
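If you want to play along at home, a toy tally might look something like this (the metric names and scores here are invented purely for illustration - plug in your own):

```python
# Toy scoring against the questions above: 0 = no effect, 10 = worst offender.
# All scores are made up for the sake of the example.
gadgets = {
    "analogue wristwatch": {},  # zero across the board
    "90s Casio": {"learning_curve": 3},  # those finicky little buttons
    "smartwatch": {
        "privacy": 9, "subscription": 6, "ecosystem": 9, "notifications": 8,
        "addictive": 7, "charging": 10, "connectivity": 8, "obsolescence": 8,
    },
}

for name, scores in gadgets.items():
    print(f"{name}: {sum(scores.values())}")
# analogue wristwatch: 0
# 90s Casio: 3
# smartwatch: 65
# ...and that's the smartwatch before the other dozen questions are counted.
```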
Let’s drill down on how bonkers this whole thing really is for just a second. I have a solar-powered G-Shock that never needs charging and can withstand a lorry running over it. It cost about 80 Euros. Instead of wearing that, I went out and bought a watch which commands control over almost every aspect of my life, tells me when to stand up and sit down, needs daily plugging in and helps me gradually lose any natural self-awareness of my own body’s signs and signals, since I can just glance down and know immediately if I’m hot, thirsty or tired without even a second of introspection. Plus, of course, it cost around ten times as much as my G-Shock, is crap for telling the time and needs further accessories to stop it getting damaged or destroyed. I truly hate myself.
I’m not the only one to buy a shiny new octopus-gadget then almost immediately regret it as it wraps its nasty tentacles around my entire life. Nor, by a long shot, is the Apple Watch the first time this has happened. I had a Kindle until I realised that every page turn was being beamed up to Mr Bezos in real time and the 200 or so eBooks I thought I owned were actually just leased under a pretty shaky contract which Amazon could seemingly rescind at will. Goodbye Kindle it was then. Not long ago I relapsed and bought a Kobo. Same shit, different technofeudal book retailer.
At times in the past I’ve tried to de-{insertDigitalServiceName} my life. For example, about 5 years ago, after reading Shoshana Zuboff’s masterwork ‘The Age of Surveillance Capitalism’, I set about de-Googling (it’s a thing, look it up). This is not a feat for the fainthearted. I rooted the Android phone I had at the time and installed a de-Googled operating system instead of stock Android, I moved off Gmail, I stopped using Chrome, I changed my default search engine to DuckDuckGo, I wiped my very extensive Google history and I toggled all the options to ask them not to track me any more. Was it painful? Yes. Did it work? Kind of. I still use (and pay for) YouTube, because there’s no realistic alternative and having the richness of all the world’s knowledge at my fingertips is a poison too sweet to resist. I still have to use some of the Google services for work. They’ve probably still got a tonne of data about me. But largely, I’m pretty free of Google services; I guess I semi-de-Googled. I’m more relaxed about it these days. Please, Shoshana Zuboff, no more books.
Let’s be honest though: rather than having to surgically remove the tumours that are modern digital products from my body and mind, wouldn’t it be better to just avoid them in the first place? I’m pleased to say I have some experience with that approach too. For whatever reason (actually it’s because I’m a mild sociopath), I never got into social media. I’ve dabbled a few times, opened accounts for different businesses and projects, eventually closed them and generally been absolutely smug in my total denial that this quite sizeable social clusterfuck actually exists. I’m glad I don’t have to put my phone in a Faraday cage to protect me from myself and I’m glad I can’t accidentally say something whatever-phobic on Twitter and get deported for it. I exist in a blissfully TikTok-free world, I don’t read Twitter and I don’t endlessly scroll Instagram, and the best part is I don’t have to expend a single, solitary drop of willpower to stop myself doing so. It just isn’t a thing for me.
I think I’m going to have to follow this cold-turkey approach for AI as well.
I got to thinking more deeply about this because I’m about to exchange my acoustic piano for a digital one. Although this is blasphemy to ‘proper’ piano players, I’ve come to the realisation over the 7 or so years I’ve been playing that I actually prefer digital pianos. Real-string feel and resonance are less important to me than being able to control the volume, having lots of different sounds and never needing to have it tuned. But ‘redigitalising’ my musical life didn’t sit quite right with me given my current attitude to all things electronic, so I sat down and came up with the previous list of questions as a way to evaluate whether a digital piano was a step backwards in my ongoing march towards the analogue utopia I have in my head. I came to the conclusion that it was probably more like an electric guitar than a smartwatch, so I was alright. Some of them have slow start-up times and a learning curve attached to fiddling with the menus, but the more minimalist ones really are like digital versions of their acoustic counterparts, just with bell-and-whistle sounds - so I would go for one of those. In the piano shop, the assistant showed me beautiful Yamaha AvantGrand models, which are almost identical to my existing piano, with the difference that the sound comes out of an array of strategically placed speakers rather than from a hammer striking a string. They looked and sounded incredible. He also started showing me a new model with an 18-inch LED screen that connects to the internet so you can download… Stop. Just stop. It’s a slippery slope from happily jamming on ‘Nights on Broadway’ with a funky bass in the left hand and a smooth Rhodes in the right to a 13.99-a-month subscription which gives you access to 128 oboe sounds.
On the walk back to the car, which I had left on the other side of a big public park, I ran into a group of Sri Lankans playing cricket next to a water tank. This was quite an unexpected sight in a Madrid park, not least because I could swear the water tank had not been there on the way in. I realised I was lost. No problem - I whipped out my phone, opened Maps and followed the directions to my ‘parked car’, which turned out to be less than 200 metres away - I could actually see it from where I was!
I’d like to say I was surprised by this, but I already know that my ability to navigate went by the wayside a long time ago. I’ve lived in Madrid for 10 years and have been to the airport probably 200 times, but if I attempt the journey without GPS I always end up outside some army barracks on the shady side of town. About 5 years ago, perhaps at the same time I was de-Googling, I tried de-GPSing, which was even less successful. Armed with printouts, I’d attempt to take on Madrid’s epic spaghetti junctions and invariably end up on the R5 to Toledo and €3.50 out of pocket. I quickly accepted that whatever gland or cerebral appendage was once responsible for my geospatial awareness had by this point puckered to a sultana and become totally useless, and went back to my GPS. I’ve never looked back. Literally.
As I started the car I had a moment of clarity. This is what the AI-fuelled future has in store for my entire brain. This scared the absolute living shit out of me.
Back in the heady days when I thought AI might turn out to be something positive (3 months ago), I dabbled with a concept for an app or website that would publish articles where the user could tune the output to their own liking. The original author would be responsible for supplying the basic content or argument of the post using a sort of pseudocode, but the textual output could be dynamically tweaked by the reader according to their preferences. Take this article for example. What if wordy and sarcastic is not your thing? Maybe you’d prefer bullet points in Japanese. Or a sonnet. That seemed like a reasonable approach. It felt like we were moving towards a world where ‘creative output’ was taking a backseat to something else, something that we were all struggling to put our finger on. Clearly the output of generative AI models, be it in the form of text, image or even video, is no more creative than a fart. It is a bland average of everything that has gone before it. But there is normally creativity buried deep within the dense AI slop - the spark of human ingenuity that led to the prompt or base instruction. Clayton Ramsey, in ‘I’d rather read the prompt’, captures this feeling neatly:
I have never seen any form of create generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother? If it’s not worth writing, it’s not worth reading.
As James Walsh notes, everyone is cheating their way through college by turning in work done by ChatGPT. How far are we, then, from the day when students are too lazy to even write a prompt? I’d be astounded if we weren’t already there. If your college professor says “look, don’t waste my time with 8 pages of output from ChatGPT, just show me the prompt”, what do you, as a time-pressed college student, do? The answer should be clear: ChatGPT is great at creating prompts.
Prepare to descend into the meta.
Perhaps you’ve already seen where this is going, but let’s try to formalise it a bit. We’ll refer to the original, intangible, elusive creative spark as S. S is an extremely abstract concept - it’s a synaptic crackle, a neurological blip in an N-dimensional representation of mental space. Think back to our discussion of derivatives and position, P, which can be thought of as a set of coordinates in a 2D or 3D space: S is the psychological equivalent of that. Whatever that is. Starting simple, we might think of a prompt as the first derivative of S and the output of the LLM as the second. More technically, we could say that the output of an LLM is the second generative-derivative of S, in the same way that ‘acceleration’ is the second time-derivative of position. Less technically, the model output is twice removed from the original spark.
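To abuse the earlier notation (a sketch of the metaphor, not real mathematics), write D_g for one step of generative removal:

```latex
\text{prompt} = D_g S, \qquad
\text{output} = D_g(\text{prompt}) = D_g^{2} S
```

Each application of D_g takes us one degree further from the spark, just as each time-derivative takes us from position to velocity to acceleration.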
Once professors catch on that students are using AI to create the prompt in the first place, do they start asking them to just hand in the ‘prompt of the prompt’, and then the ‘prompt of the prompt of the prompt’? How far does this go? Once professors are seeking the 4th derivative of S, will we need specialist machines to just extract S directly? Is this what Elon Musk is after with his brain interfaces? What does S look or sound like? Is it just ‘snap, crackle and pop’? Have our brains been reduced to Rice Krispies?
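For the literal-minded, the recursion is easy enough to spell out - a toy sketch, using the same layer phrasing that turns up in the postscript below:

```python
def derivative_prompt(n: int) -> str:
    """The n-th generative derivative of S, rendered as a request."""
    if n == 0:
        return "the article itself"
    if n == 1:
        return "a prompt to write the article"
    # Each further derivative wraps the previous layer in one more request.
    return "a prompt to create " + derivative_prompt(n - 1)

for n in range(4):
    print(f"{n}: {derivative_prompt(n)}")
# 3: a prompt to create a prompt to create a prompt to write the article
```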
I’m jumping off this train now before it builds up any more steam. I don’t want to lose any more of my basic human abilities. My ancestors (well, maybe not mine - I have it on record that mine were more, let’s say, artistically inclined) roamed the savannah for hundreds of kilometres hunting gnus without so much as a map, yet I can’t find a parked car at 200 metres without satellite assistance. Modern digital tech has stolen our physical fitness, our social connections, our attention spans, our ability to create democratic societies, and now it’s after our most fundamental human ability. How long will it be before you’re reaching for AI to write a 2-line email? To add up some simple numbers? To code a simple function? You’re not, surely, already doing that, right?
Personally, I’m not falling for this trap. I don’t even want the temptation. I was successful in totally excluding social media from my life and I’m going to use the same approach with AI.
The most common argument against this philosophy is “why bother if the machine can do it better?”. Why write if writing has no economic value any more? Why create art if models can spit out any sort of personalised creation in seconds? Why write code if AI can do it faster and better? Why play music when AI can generate any kind of tune on the fly?
I believe the key is in differentiating ‘process’ from ‘output’. As highly trained consumers, we’ve come to see output as the holy grail - something we can purchase, eat, watch, listen to. But behind output there is process. At least when humans are involved. There is creativity. There is S. No matter how badly AI obscures S, no matter how many derivatives removed we get from it, it still exists. S is what makes us human. S is what differentiates a text written by me from a text written by AI, even when they are word-for-word identical.
So this is the crux of it. I’m going to stop using AI because I’m worried that my S will go the same way as my geospatial awareness and seemingly everyone else’s attention span, social skills and ability to communicate. I still believe in process. I want to write my own essays, thank you notes and code. I want to conduct my own research by hand. I want to seek and understand other human beings’ opinions.
Oh, and I’m getting rid of my Apple watch too.
Postscript
Before I hit ‘Delete’ on my ChatGPT account, just for shits and giggles, I gave it one last spin. I wanted to see if it understood the prompt derivative concept outlined here. I fed it this article and asked it to create the third-derivative prompt for creating this same article. After a bit of prodding and correcting, here’s what it came up with:
First, a quick review of the layers for clarity:
- 0th derivative: The article itself
- 1st derivative: A prompt to write the article
- 2nd derivative: A prompt to create a prompt to write the article
- 3rd derivative: A prompt to create a prompt to create a prompt to write the article
Now, assuming the original article (0th derivative) is about:
How digital tools are reshaping human attention and the implications for creativity
Here is a true third-derivative prompt — deeply meta and clearly layered:
“Write a prompt that instructs someone to generate a second-order prompt — that is, a prompt whose purpose is to guide someone else in crafting a prompt to help a writer produce an insightful article analyzing how digital technologies (like smartphones, social media platforms, and digital workspaces) are transforming the structure and quality of human attention, and what downstream effects this has on creative thinking and expression. Your prompt should emphasize meta-prompting literacy: it should guide the writer not just on the content of the ultimate article but also on how to frame prompt-writing in a way that elicits depth, perspective, and cognitive nuance in the final output.”
It didn’t quite follow my derivative logic, but honestly, I think it might have conceptualised it even better than I did. If you can get your head around the output I’ll be deeply, deeply impressed.