Why isn't AI better? My wish list for GPT-5, OpenAI gets closer to AGI, five things NOT to use ChatGPT for, and will agents end service businesses?

Written by Fola Yahaya

Thought of the week: AI satisficing becomes the new normal

There is a fundamental problem with criticism of the AI-generated content that is quickly taking over the Internet: it actually sounds and reads OK.

Let’s be honest, within only 12 months, AI-generated content has already become fit-for-(most)-purpose(s), sadly better than most people can muster and largely free (if you ignore the environmental, privacy and copyright issues). In fact, rather than being annoyed at being on the receiving end of ChatGPT content, there is a growing sense of resignation and even acceptance of the collective lowering of our quality expectations.

This content satisficing has a number of profound effects on those whose livelihoods revolve around producing content:

  1. People are increasingly comfortable with ‘fast food’ AI content.
  2. ‘Gourmet’ human-made content needs to be really good to justify waiting for it and, critically, paying for it.
  3. It raises the question: do we really need all this content if an AI can write it better, and instantly?

In the very near future – and we’re talking months here, not years – I foresee us questioning why we need to fill in yet another form with our personal details and getting frustrated at those who don’t enable our digital twins to autocomplete this stuff.

Companies and organisations will also be questioning the wisdom and cost implications of continuing to spew out tired and dull documents such as annual reports that no one really reads.

Satisficing is an awful vision of our content future, but it might offer a pathway back to creating content that is profoundly human rather than going through the motions.


Takeaways from the AI for Good conference

I attended the UN’s AI for Good Global Summit in Geneva last week. As I mentioned in last week’s newsletter, after the same conference five years ago, I was left rueing my decision to hop on a plane and amble around a largely empty, and certainly uninspiring, conference that was more student science fair than flagship event.

To paraphrase a famous jazz song, what a difference ChatGPT makes. The place was buzzing and it took us almost two hours just to get in. The International Telecommunication Union (ITU), the UN agency hosting the event, was clearly unprepared for the demand for (free) tickets and had to resort to discouraging pre-registered visitors from attending in person. Even worse, by 1:15pm even the paying canteens had run out of food!

Ok, ok, but how was the conference? Well… like the proverbial curate’s egg, it was good in parts and not so good in others.

The sun is out in London so let’s start with the positives:

1. Some really powerful presentations about useful AI (finally). Elon Musk has hogged the limelight with Neuralink, his brain-to-computer interface start-up, but other, quieter companies are making good headway in helping paralysed people control devices with their minds or simply put one foot in front of the other.

2. Mass interest. Two-hour queues, huge outdoor screens and light rave music made it feel more like a summer festival than a conference.

3. Great presentations on AI safety. There was a great talk on how easy it is to use AI to ‘cancel’ someone you don’t like on social media from Tristan “the closest thing Silicon Valley has to a conscience” Harris at the Center for Humane Technology (a rehash of his presentation below).

The not so good

4. No Big Tech! OpenAI was nowhere to be seen, despite (or maybe because of) their current struggles with negative press and their avowed intent to develop AI that is aligned with human interests. Harris had a really compelling stat: the ratio of AI capability research to AI safety research is 1,000:1. So for every thousand people beavering away to put creatives out of work, only one is focused on making AI safe!

5. Robots everywhere. In general, the practical tech demos seemed like they had been dusted off from the school science fair, with unconvincing robots hampered by the curse of all live demos: terrible Wi-Fi that meant five-second lags between responses. In summary, massive interest in AI but (aside from neural AI apps) no real sign of physical killer apps and Big Tech couldn’t give a flying…

For the visually inclined, here’s a short video with my key takeaways.

Coal is back… thanks to AI

AI’s dirty secret is not so secret any more. Our collective outsourcing of our boring emails and holiday planning to ChatGPT is now having a massive impact on our power grids.
 
It’s getting to the point where, without a serious innovation in how AI models are trained, AI may become the single biggest threat to tackling climate change. Think this is hyperbole? Think again. The demand from power-hungry AI developers is so large that the US, the world’s second-largest polluter, is now backtracking on plans to retire coal-fired power plants as power demand from AI surges. In a class action suit brought by 25 US governors to block government efforts to reduce the country’s reliance on coal, the plaintiffs justify their case with suitably jingoistic language such as:

“We absolutely as Americans can’t afford to lose the AI war.”

Add in China’s race for AI – China itself produces almost three times the carbon emissions of the US – and, Houston, we have a massive problem.


Is ‘empowering’ creators with AI tools Big Tech’s thinly veiled attempt to get free content?

 
There’s been much handwringing and wailing this week at the launch of yet another streaming platform that threatens to undermine the creative industries. Fable Studio has launched Showrunner, a new streaming platform that allows users to create their own animated content using AI.
 
Through prompts, users can write, voice and animate episodes, with the platform offering extensive control over dialogue, characters and shot types. Users can join a waitlist for the free testing version, which will last until the end of the year.
 
The platform currently features 10 (pretty poor) AI-generated animated shows of various genres, including Exit Valley, Ikiru Shinu, and Sim Francisco. Showrunner’s tech, which currently supports only animation, enables users to create scenes that can be stitched into full-length episodes, with the best user-created episodes potentially included in the series catalogue and earning revenue.
 
Founded by the son of fabled marketing guru Maurice Saatchi, Fable’s vision is to become the ‘Netflix of AI’. The company attracted both criticism and intrigue when it released an AI-generated episode of South Park, showcasing both the potential and the comedic limitations of AI-generated content. Despite mixed reactions, the effort underscored the evolving role of AI in content creation and its potential impact on traditional production methods.
 
While personally I think this start-up is doomed to fail – there’s a very good reason why it takes a massive army of creatives to create even seconds of brilliant content – the undeniable direction of travel is to use AI not only to reduce production costs but also to fundamentally shift how content is produced and consumed.

Another one bites the dust: ElevenLabs’ new sound effects target Foley artists

I once shared an office complex with a fascinating guy called Nigel Holland who made sound effects for huge Hollywood blockbusters like Braveheart. My work would be frequently interrupted by what sounded like someone being punched or a drive-by shooting.

So I thought of Nigel when I read about ElevenLabs launching a new tool that creates realistic sound effects in seconds. ElevenLabs are the go-to people for voice cloning and are responsible for voicing many of the deepfake politicians that have been doing the electoral rounds.

In their breezy press release, they say that the current ‘problem’ they solve is that when creators want to add ambient noises to their content — such as social media videos, games, movies and TV shows — they must either manually record them or (God forbid) buy/license audio files from different repositories on the internet.

ElevenLabs’ new Sound Effects tool changes that, giving creators and production teams a way to get exactly what they want by simply typing it in plain, conversational English.

When a user enters a text prompt detailing the sound effect they are looking for, the AI model powering Sound Effects processes it and generates six unique audio samples. The user can then choose the one that works best for their project and download it or store it directly on ElevenLabs’ platform.
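For the technically curious, the prompt-in, audio-out workflow ElevenLabs describes can be sketched in a few lines of Python. The endpoint path (`/v1/sound-generation`), the `text` field and the `xi-api-key` header here are assumptions based on ElevenLabs’ public API documentation, so treat this as a sketch rather than a drop-in implementation:

```python
# Sketch of calling a text-to-sound-effects API like ElevenLabs'.
# The endpoint path, request field and auth header are assumptions
# taken from ElevenLabs' public docs; verify against the current docs.
import json
import urllib.request

API_URL = "https://api.elevenlabs.io/v1/sound-generation"  # assumed endpoint


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a sound-effect generation request."""
    payload = json.dumps({"text": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def generate_sound_effect(prompt: str, api_key: str, out_path: str) -> None:
    """Send the request and save the returned audio bytes to disk."""
    req = build_request(prompt, api_key)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```

In use, `generate_sound_effect("glass shattering in a stairwell", api_key, "crash.mp3")` would write one generated sample to disk; the real service returns several variants for the user to pick from.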

Let’s unpick this. ElevenLabs is effectively saying that people shouldn’t have to pay for this kind of work, and that Foley artists, some of the most creative people I know, will soon be out of a job. Once again, an AI solution looking for a problem.


What we’re reading this week



Tools we’re playing with this week

Invideo, a short-form video app that automates a lot of the heavy lifting in video editing. I’m testing it out to understand how easy it is to produce (and thereby contribute to the torrent of) short-form video content.


That’s all for this week. Subscribe for the latest innovations and developments in AI.

So you don’t miss a thing, follow our Instagram and X pages for more creative content and insights into our work and what we do.

