
Written by Fola Yahaya
It’s the school holidays here in the UK, and I’ve been enjoying time in the Kent countryside with friends and their children. The weather has been glorious, with consistent blue skies – the only cloud on the horizon being the constant policing of our teenagers’ screen time and the AI-anxiety of a therapist friend.
A colleague recently sent her an article on the highly positive results of the first-ever clinical trial involving a generative AI-powered therapy chatbot. The trial involved 106 participants from across the United States diagnosed with major depressive disorder, generalised anxiety disorder, or at risk of an eating disorder. Participants engaged with Therabot through a smartphone app, typing responses to prompts about their feelings or initiating conversations as needed.
Those diagnosed with depression experienced an average 51% reduction in symptoms, achieving clinically significant improvements in mood and well-being. Generalised anxiety participants reported an average symptom reduction of 31%, often moving from moderate to mild anxiety or even below clinical thresholds.
“Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers,” notes Nicholas Jacobson, Associate Professor of Biomedical Data Science and Psychiatry at the Geisel School of Medicine. “We’re talking about potentially giving people the equivalent of the best treatment you can get in the care system over shorter periods of time.”
What’s intriguing here is that significant results were achieved through typed interactions rather than real-time conversations. I suspect outcomes would be even more impressive with the current conversational AI mode. Add to this the imminent arrival of what I call ‘3C AI’ – conversational, context-aware and customised – and many professions previously considered safe from AI disruption will soon face transformation.
Like a good therapist, I listened carefully and posed questions to help my friend ‘self-reflect’. The first question: who currently pays for her services? Ironically, it’s predominantly tech bros and financiers who spend their days glued to screens and crave genuine human interaction. Next, I asked about supply and demand: is society becoming mentally healthier (no!) and are there enough therapists available (also no)? I reassured her that her role remains secure, but AI will fundamentally alter how mental healthcare and many professional services are delivered.
Pricing models will shift to freemium structures – free or low-cost AI-driven services complemented by premium, human-led services. This trend, already emerging in fields such as translation, will soon dominate everything from accounting to healthcare.
If your business or career relies on flat fees for service delivery, consider how AI could enable a freemium model. Perhaps you could offer clients a digital twin – a virtual representation capable of handling routine interactions – while reserving your personal services as a premium offering.
Linda Evangelista’s much-quoted line in the Nineties, that she wouldn’t “get out of bed for less than $10,000”, seems increasingly out of touch with a new era in which every aspect of advertising is being impacted by AI. H&M announced last week that it would create AI ‘twins’ of 30 models, with the intention of using them in social media posts and marketing imagery if the model gives their permission. According to The Guardian, Jörgen Andersson, the chief creative officer at H&M, described the idea as “something that will enhance our creative process and how we work with marketing but fundamentally not change our human-centric approach in any way”.
H&M has worked with models including Vilma Sjöberg and Mathilda Gvarliani, who also model for Vogue and Chanel. Under the agreement, each model would be able to book her twin on shoots for other brands – meaning they could, in a sense, be in two places at once. Gvarliani told The Business of Fashion her replica is “like me, without the jet-lag”.
Perhaps in the future, even Evangelista might find it worthwhile to ‘get out of bed’ virtually – leaving her twin to handle everything below her premium rate.
The general-purpose AI agent landscape is suddenly much more crowded and ambitious. This week, Palo Alto-based start-up Genspark released what it calls Super Agent, a fast-moving autonomous system designed to handle real-world tasks across a wide range of domains – including some that frankly freak me out – like making phone calls on your behalf to restaurants using a realistic synthetic voice.
Both of the most-hyped tools at the moment, Manus and Genspark, work by knitting together a bunch of different large language models (LLMs) to achieve a task. In geekspeak, this is known as a ‘mixture-of-agents’ system. In Genspark’s demo task – finding and calling a restaurant to book a table – the system uses one LLM to do the research, another to verify the results, a third to initiate the call, a fourth to create the script and so on.
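To make the idea concrete, here is a minimal sketch of how such a mixture-of-agents pipeline chains specialised steps together. The function names and stubbed return values are entirely illustrative – in a real system each function would be a prompt to a different LLM (with web-search or telephony tools attached), not hard-coded logic:

```python
# A toy 'mixture-of-agents' pipeline: separate agents (here, stubbed
# functions standing in for LLM calls) each handle one step – research,
# verification, script-writing – and a coordinator chains them together.
# All names and data are hypothetical.

def research_agent(task: str) -> dict:
    # In a real system: an LLM with web-search tools finds a candidate.
    return {"restaurant": "Example Bistro", "phone": "+44 20 0000 0000"}

def verification_agent(result: dict) -> bool:
    # A second model cross-checks the first one's output before acting on it.
    return bool(result.get("restaurant") and result.get("phone"))

def script_agent(task: str, result: dict) -> str:
    # A third model drafts what the synthetic voice will say on the call.
    return f"Hello, I'd like to book a table at {result['restaurant']}."

def super_agent(task: str) -> str:
    result = research_agent(task)
    if not verification_agent(result):
        raise ValueError("Research step failed verification")
    return script_agent(task, result)

print(super_agent("Book a table for two on Friday"))
```

The design point is the division of labour: because no single model is trusted end-to-end, each step can be checked (or retried) before the next one runs.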
The launch adds fuel to what’s shaping up to be an important new race in the AI competition: who will build the first reliable, flexible and truly useful general-purpose agent? Manus AI operates as a fully autonomous agent with its own computing environment. It performs tasks asynchronously, meaning users can assign tasks, close their laptops and receive notifications when the work is complete.
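The asynchronous, fire-and-forget pattern described above – assign a task, walk away, get notified on completion – can be sketched with nothing more than Python’s standard thread pool. This is purely illustrative of the interaction model, not of any real agent framework:

```python
# Fire-and-forget task execution with a completion callback: the user
# submits a job, carries on with other work, and a notification fires
# when the agent finishes. Names and the task itself are hypothetical.
from concurrent.futures import ThreadPoolExecutor
import time

def run_agent_task(description: str) -> str:
    time.sleep(0.1)  # stand-in for minutes of autonomous browsing and tool use
    return f"Done: {description}"

def notify(future):
    # In a real product this would push a phone or email notification.
    print(future.result())

executor = ThreadPoolExecutor()
future = executor.submit(run_agent_task, "compile competitor pricing report")
future.add_done_callback(notify)
executor.shutdown(wait=True)  # the user can close the laptop; work completes in the background
```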
Every organisation should now be preparing for an agentic future in which a potential customer’s agent is the first interaction with your brand, online or offline. It will also mean getting your own agent ready to deal with theirs!
Most Nigerians of a certain age grew up on a diet of Kung Fu movies. With limited cinematic choice, it was always a case of “never mind the terrible dubbing, just give us the action!” It looks like this will be a thing of the past, as the first AI-dubbed foreign-language sci-fi movie is shortly to be released in U.S. movie theatres. For the first time, an international feature film will look and sound as if it were made in English, thanks to AI.
Though the supernatural Swedish adventure Watch the Skies was made in Swedish, Variety reports that AI company Flawless has digitally altered the film’s images and sound to perfectly sync characters’ mouth movements and speech for an English-speaking audience. The tech uses the original cast’s voices to create dubs, and is compliant with SAG-AFTRA.
Moreover, AI has already been used for ‘performance-enhancement’. In the Oscar-nominated film The Brutalist, AI voice technology was used to make Adrien Brody’s accent more Hungarian. Similarly, Emilia Pérez utilised AI to blend the lead actress’ voice with that of a professional singer, enhancing the musical performances. So we need to add voice coaches to the growing list of AI victims.
Personally, I always found bad dubbing an annoyance, so AI tweaks like this are a good thing – but what will AI replace next?
That’s all for now. Subscribe for the latest innovations and developments in AI.
So you don’t miss a thing, follow our Instagram and YouTube accounts for more creative content and insights into our work and what we do.
Network Hub, 300 Kensal Road, London, W10 5BE, UK
We deliver comprehensive communications strategies that achieve your organisation’s objectives. Sign up to our newsletter to see the highlights once a quarter.