Why you should heed Yahaya's Law, the democratisation of coding, how Google wants to change your mind, and ghosts in the machine

Written by Fola Yahaya

Thought of the week: AI is fantastic when you’re rubbish

Arthur C. Clarke’s assertion that “any sufficiently advanced technology is indistinguishable from magic” perfectly encapsulates the rapid progress of AI. Seemingly overnight, it has given anyone, anywhere the ability to write perfectly mediocre English, take perfectly mediocre photographs and create perfectly mediocre music. In short, if you were formerly rubbish at something, AI has now levelled you up to being well… perfectly mediocre. In fact, the best way to tell if you’re actually not very good at something is to get an AI to do that something and see how amazed you are by your own creation. For example, while I was blown away by the brilliance of my 30-second, AI-generated Raging ‘Putin’ Bull video epic (see above), my Head of Film and Video was distinctly underwhelmed.

I’ve modestly dubbed this phenomenon ‘Yahaya’s Law’, which states that the true quality of your AI output is inversely proportional to your ability to create it without AI. So before you crow with self-satisfaction at how quickly and effortlessly ChatGPT penned your new report, run it by someone who actually knows what they’re doing, because “if you think it’s great it probably ain’t”.

But on a more serious note, taking everyone from knowing Jack to being a Jack of all trades has started to change our expectations in a number of ways. Firstly, it discourages the development of deep expertise. While successive improvements in information accessibility have made us more informed, AI is now coming for our thinking, which ultimately disincentivises learning. After all, why bother going to university to study photography when a modern phone’s AI image processing can fake reality better than any camera? Why bother learning to decipher an X-ray when an AI can do it better and faster?

If future generations are discouraged from becoming deep experts, we will be wholly dependent on AI systems for our thinking and will eventually be unable to tell the difference between mediocre and great. This is why we have to resist the urge to lazily resort to AI when we write our emails and ask important questions. With algorithms already determining much of what we see and create, we need to guard against letting AI do our thinking as well.


The democratisation of coding

The Internet (or at least a corner of it) is awash with the output of people playing with bolt.new, a shiny new text-to-app AI tool. Yep that’s right, a text-to-app tool. Want to ‘build’ a Spotify clone? Just type in “build me a Spotify clone” and Bolt starts building something that looks like Spotify (see my video below). As if by magic, the AI starts spitting out line after line of complex code and (just in case you care) explains what it’s building. Bolt’s genius move is to have a window on the side that gives you a live preview of your AI-generated software doppelganger.

I’ve spent the last few weeks experimenting with the capabilities of the new ‘thinking’ models and have already managed to build simple apps with Claude and ChatGPT. But though these only took a few days of my time, in this era of instant gratification and empowerment I slightly resented the trial-and-error nature of the process. Bolt does away with all this faff by offering users who don’t know their HTML from their elbow the nirvana of creating a working prototype of an idea within minutes.

In fact, what Bolt churns out appears so good that it made me reconsider expanding my IT development team. Mindful of Yahaya’s Law, however, I asked an expert coder to rate my AI app. His response, even controlling for the conflict of interest inherent in asking a coder to rate an AI coder, was that Bolt’s code was mediocre at best and would be a nightmare to maintain if ever released into the wild.

Bolt will surely get better, and with companies like Google and Salesforce already using AI to generate a significant share of their code (see story below), the future of easy, fast and cheap coding is already here. I’m ecstatic at suddenly being a mediocre coder. I no longer need to struggle to articulate what I want – I can just grunt and an AI magically does a pretty good job of building me something quickly and, critically, without having to use a coder.

It’s because of this last part, however, that I’m considering adding an addendum to Yahaya’s Law: every advancement in AI should be accompanied by an acknowledgement of the career that it has just killed.

And so the great acceleration continues.


How Google wants to change your mind

The only module that I regretted studying during my BA in Economics was the philosophy of science. A key reason for this was my struggle to understand the ideas of Jürgen Habermas, probably Germany’s most famous living philosopher. Habermas explored how people with opposing perspectives could reach genuine understanding or what he called ‘communicative rationality’. Decades later, Google DeepMind is attempting to automate a version of this with its new Habermas Machine, an AI conflict mediator designed to help people find common ground.

The Habermas Machine works by taking input from people with opposing views, then crafting statements that both parties can agree on: the classic common-ground strategy. An AI model drives the process, iterating on draft statements in response to participants’ feedback until it produces something both sides can endorse. Reaching common ground, the theory goes, shrinks the distance each party perceives between them (a rough sketch of how such a loop might work follows the three points below). But this concept, getting AIs to manipulate us into doing the right thing, is wrong on so many levels:

Firstly, it fails to tackle the root cause of why we are so divided – social media algorithms that weaponise our differences for profit. While Habermas envisaged a space for genuine exchange, algorithm-powered platforms create echo chambers that prioritise engagement through outrage and confirmation bias rather than empathy. Yuval Noah Harari sums this up nicely in an article in the Financial Times:

In pursuit of user engagement, the algorithms made a dangerous discovery. By experimenting on millions of human guinea pigs, social media algorithms learnt that greed, hate and fear increase user engagement. If you press the greed, hate or fear button in a human’s mind, you grab the attention of that human and keep them glued to the screen. The algorithms therefore began to deliberately spread greed, hate and fear. This has been a major reason for the current epidemic of conspiracy theories, fake news and social disturbances that undermines societies all over the world.

Secondly, the irony runs even deeper when you consider the foundation of DeepMind’s technology: the Transformer model, introduced in the paper “Attention Is All You Need”. Attention, in one sense or another, powers both the social media feed and this AI, but the two put it to very different uses. Social algorithms drive us deeper into ideological silos, while an AI optimised for consensus could easily soften differences without truly addressing them. The Habermas Machine might help people find common ground, but it could also water down divisive issues to keep everyone comfortable, glossing over the real grit of disagreement.

Finally, we must always consider the negative uses of the same tech – deployed in the same vein as social media algorithms, it could foment dissent just as easily as it mediates it, and there is a very thin line between facilitating dialogue and steering sentiment. If AI mediators become commonplace, how will we know whether they are letting us think freely or quietly shaping our opinions?
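To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of mediation loop described above: gather opinions, have a model draft a group statement, collect ratings and critiques, and iterate. The function names, the 0–10 rating scale and the acceptance threshold are illustrative assumptions, not DeepMind’s actual implementation.

```python
# Hypothetical sketch of an LLM-mediated "common ground" loop, based only on the
# description above. Function names, rating scale and threshold are assumptions.

def draft_group_statement(opinions, critiques):
    """Stand-in for an LLM call that drafts a candidate consensus statement."""
    return "Draft blending: " + "; ".join(opinions)

def rate_statement(statement, opinion):
    """Stand-in for a participant scoring how well the draft reflects their view (0-10)."""
    return 5  # placeholder; in the real system, people rate and critique the draft

def mediate(opinions, rounds=3, accept_threshold=7):
    critiques, best = [], None
    for _ in range(rounds):
        candidate = draft_group_statement(opinions, critiques)
        scores = [rate_statement(candidate, o) for o in opinions]
        if min(scores) >= accept_threshold:  # everyone can live with it
            return candidate
        critiques.append(f"lowest score was {min(scores)}; revise the draft")
        best = candidate
    return best  # best effort if no draft clears the bar

print(mediate(["Fund public transit from general taxation", "Taxes are already too high"]))
```

The real system presumably swaps those stubs for an LLM and human raters; the point is simply that the whole design hinges on optimising a statement until objections stop registering.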


Ghosts in the machine: Whisper’s strange hallucinations

Much was made this week of researchers’ finding that OpenAI’s Whisper model, now being used for transcription by thousands of healthcare professionals, has some unnerving traits, including inventing entire sentences in the face of silence. Whisper has been deployed in sensitive environments such as hospitals, where it’s trusted to accurately transcribe patient conversations.

Yet, when it encounters pauses it has, ironically, the all-too-human habit of just making sh*t up. Researchers from Cornell University and the University of Washington found that Whisper “hallucinates” in about 1% of cases, fabricating sentences when faced with long pauses. I only mention this total non-story because it’s a great example of how to dress up great results as bad news.
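If you want to see, or guard against, this failure mode yourself, here is a minimal sketch assuming the open-source openai-whisper package; the audio filename is a placeholder and the 0.6 threshold is an arbitrary illustration. Whisper reports a no_speech_prob for each segment, so a deployment can flag text generated over near-silence instead of trusting it blindly, and switching off conditioning on previous text is an often-suggested way to reduce runaway fabrication.

```python
# Minimal sketch using the open-source `openai-whisper` package (pip install -U openai-whisper).
# Idea: flag transcript segments produced over near-silence, where fabricated text is most
# likely, rather than trusting every line. Filename and threshold are illustrative only.

import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.wav", condition_on_previous_text=False)

for seg in result["segments"]:
    # `no_speech_prob` is Whisper's own estimate that the segment contains no speech.
    suspect = seg["no_speech_prob"] > 0.6  # arbitrary threshold for illustration
    label = "CHECK MANUALLY" if suspect else "ok"
    print(f"[{seg['start']:6.1f}s] {label}: {seg['text'].strip()}")
```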

The real point here is that we expect AI systems, like nuclear reactors, to be 100% infallible. But when we’re dealing with messy human interactions, the really amazing news is that the AI gets it right 99% of the time, which I guarantee is way better than most human transcribers.


AI video of the week

Couldn’t resist this AI-generated spoof of The Office with an all-star cast.


Robot of the week

Boston Dynamics continue to be at the forefront of creating robots that are actually useful. This latest iteration of Atlas is fully autonomous, making decisions in real time based on its environment. It will be interesting to see where this gets to within a year. Love this comment:

Imagine this thing is walking away from you and you yell “Stupid robot!” Then it stops in its tracks, its head and torso swings around 180 degrees. “What did you say?”



What we’re reading this week



Tools we’re playing with

  • bolt.new – yet another addictive tool that will let you prototype any app in minutes.
  • Tad.ai – an AI text-to-music generator, competitor to Suno and Udio.



That’s all for this week. Subscribe for the latest innovations and developments in AI.

So you don’t miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.

