Google's back in the game, the World AI Festival, junior lawyers are toast, ChatGPT is getting memory and why you shouldn't tell AI anything personal

Written by Fola Yahaya

Thought of the week: the end of romance?

My wife came back seriously worse for wear after spending two weeks filming an amazing renewable energy project in Mahasolo, Madagascar. For Valentine’s Day, I gave her a heart-shaped water bottle, as she’d been sleeping in a corrugated iron shack for six days and surviving without running water for a little longer. So I found it fundamentally depressing that, according to a survey, 45% of men were using ChatGPT to craft their heartfelt love messages.

Guys, I know Valentine’s Day is a commercial invention, but it’s the thought that counts. You can and should do better!


Google’s back in the game

Last week, Google ditched its stupidly named ChatGPT clone Bard and consolidated everything under the Gemini brand. You can try it now by visiting gemini.google.com. I found it as good as, and sometimes better than, ChatGPT for my research questions. It also has the advantage of providing links to the pages underpinning its response. But Google’s still really bad at making clear what model is called what, and which model is available in which territory.

For example, the paid tier of Gemini (Gemini Advanced) is powered by something called Gemini Ultra 1.0. Gemini Ultra is only available a) in limited countries and b) if you pay for something called Google One (which I had never heard of). I tried to sign up to Google One ($19/month) and got this:

So far… so annoying. Then I found out that Gemini comes in three different flavours: Ultra, Pro and Nano. Google! I’m starting to lose the will to live. Sort out your branding and comms already!!!!! Just do what you normally do: crush your competition and release the most powerful version to everyone for free!

This mess seems to be indicative of where Google is at these days. Friends of mine who work for Google have been telling me that the ChatGPT-triggered existential crisis has left everyone reeling. Staff morale has hit rock bottom and many perks (as well as staff) have been removed. I have little sympathy for the Big G – especially given that it has grown fat on getting everyone to create rubbish content to appease its godlike algorithm – but it’s good for you and me that Google has finally stepped up its AI game. It was also clear that it was only a matter of time before Goliath came for David.


Key takeaways from the World AI Festival in Cannes

Joe Thorpe, our social media hotshot, and I attended the World AI Festival held in Cannes, France last week.

Though it was held in the same building as the Cannes Film Festival, any comparison with that glitzy shindig ended at the entrance. Once we got inside, it was very much a sober, corporate affair, with a free-to-access exhibition area downstairs and lecture theatres upstairs.

Though they managed to bag Yann LeCun, Meta’s Chief AI Scientist, for the opening keynote speech, he delivered an overly technical lecture on using AI for depth perception that I’m sure went over everyone’s heads!

Where was the AI sizzle? Where was his take on Meta vs OpenAI vs Google?! What did he think about us all wearing clear-framed Ray-Ban glasses with built-in displays? On these burning issues… nada. Had I not been as sick as a dog, I would have asked these questions myself, but nature, er… kept on calling during the speech.

It struck me that herein lies the difference between the two-speed, US vs Europe AI race. In the US, companies like OpenAI are just moving at hyperspeed and breaking things (such as jobs and industries). In Europe – and by Europe I really mean France, as it’s the ONLY European country competing with the US (and you can forget about the UK) – the focus is on regulation. This makes total sense, given the European focus on ensuring decent living standards for citizens and a generally slower, more thoughtful approach to massive technological change.

However, AI is such a transformational technology that all countries have to get with the programme ASAP.

My key takeaways are here:


By the way, the picture at the beginning of this section is of a panel session on the AI opportunity for Africa. See if you can spot the glaring problem this picture represents.


Are junior lawyers (and accountants, analysts, etc.) toast?

I resolved a few weeks ago to be really upbeat about AI’s impact, but then I read a new paper on arXiv. arXiv is a world-renowned preprint server where researchers, from big tech to academia, post their latest work in science, technology and maths. The paper is called "Better Call GPT, Comparing Large Language Models (LLMs) Against Lawyers".

The researchers found that LLMs (e.g. ChatGPT) “match or exceed human accuracy in determining legal issues”. Further, they found that: “In speed, LLMs complete reviews in mere seconds, eclipsing the hours required by their human counterparts”. The paper goes on to say:

“Cost-wise, LLMs operate at a fraction of the price, offering a staggering 99.97 percent reduction in cost over traditional methods.”
“These results are not just statistics—they signal a seismic shift in legal practice. LLMs stand poised to disrupt the legal industry, enhancing accessibility and efficiency of legal services. Our research asserts that the era of LLM dominance in legal contract review is upon us, challenging the status quo and calling for a reimagined future of legal workflows.”

Ok, so there’s a little bit of hype at play here, but I don’t think the “seismic shift” will be restricted to new graduates. Yes, junior staff are cheap and happier to slave away at work whilst their managers are ‘networking’ on the golf course, so graduate-level jobs are clearly at risk. But so is any layer of management that doesn’t add value.

The direction of travel is clear: if what you do can be done more quickly and cheaply by a robot, then your company (especially if you’re in an aggressively corporate sector like finance, law or accounting) will soon see you as surplus to requirements. Yikes.
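For the curious, here’s roughly what “LLM contract review” looks like in practice. This is a minimal sketch using the OpenAI Python library, with an invented clause, prompt and model choice of my own; it illustrates the general technique, not the exact setup the researchers used.

# Minimal sketch of LLM contract review using the OpenAI Python library.
# The clause, prompt and model choice are illustrative assumptions, not the
# paper's setup. Requires: pip install openai, plus an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

clause = (
    "The Supplier may terminate this Agreement at any time, for any reason, "
    "without notice and without any liability to the Customer."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a contract reviewer. Flag any clause that is "
                    "unusually one-sided and explain the risk in plain English."},
        {"role": "user", "content": f"Review this clause:\n\n{clause}"},
    ],
)

print(response.choices[0].message.content)

Run against a whole contract, clause by clause, this takes seconds, which is exactly the point the paper is making.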


Why you shouldn’t tell your AI anything personal

Alongside the release of the updated version of Gemini, Google posted an advisory that users should not share anything personal with its chatbot. Google recently settled a lawsuit over retaining and using data from Incognito browser sessions, which many assumed were private, or at least wiped when closed.

Its advisory on the use of its GenAI products is very clear: any “chat” you have with its app can be retained on Google’s servers for up to three years. One of the last statements in the warning is:

“Don’t enter anything you wouldn’t want a human reviewer to see or Google to use.”


ChatGPT will now remember what you asked for in the past

ChatGPT can now remember stuff. This upgrade aims to make interactions more personalised and effective: users can instruct ChatGPT to remember or forget specific details so that its advice becomes more tailored over time.

Users can toggle memory, review and delete data, or erase all memories to address privacy concerns. Additionally, a temporary chat mode is available for those who want more privacy, allowing interactions without accessing past data, though OpenAI may keep these chats for up to 30 days for safety purposes.


How AI is affecting our (translation) business

I began this newsletter in 2023 because AI was already massively impacting the amount of work we were receiving from our clients. My communications and consulting agency, Strategic Agenda, began life as a translation company (officially we went by the grander-sounding “language service provider [LSP]”) back in 2002. Our main client is the United Nations (UN) and I’m very proud that my team of in-house and external translators continues to translate some of the UN’s most important reports into its five other official languages (Arabic, Chinese, French, Russian and Spanish).

Translation has always been at the forefront of technological change, with most translators using various computer-aided translation tools to translate words more quickly and consistently. Fifteen years or so ago, LSPs began experimenting with statistical machine translation, which uses hardcore statistics and maths to work out the probability of a particular translation being appropriate. It was hit-and-miss, and probably about as accurate as a politician telling you about their manifesto. The big change came in the mid-2010s with something called neural machine translation (NMT).

Pioneered by Google, it ushered in an era of half-decent, ultra-cheap and almost instant machine translation. During the last decade, we’ve experimented heavily with NMT and come to the conclusion that:

  1. It was great for popular language combinations such as French or German to English.
  2. It wasn’t so good for lower-resource languages (i.e. those with less training data, say when translating from English to Swahili).
  3. It still ‘smelt’ like it had been authored by a robot and was not good if you cared about your audience.
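To give a flavour of what raw NMT involves, here is a minimal sketch using the open-source Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-fr English-to-French model. It’s an illustration only, and has nothing to do with the tools we actually use in production.

# Minimal NMT sketch: translate English to French with an open-source model.
# Assumes: pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["The report was published in all five official languages."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))

Fast and cheap, yes; but whether the output resonates with a local audience is another question entirely.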

What’s changed, though, in spite of these issues (and what has been so devastating to my business), is that clients now seem comfortable with fast, cheap and just-OK quality.

The acceptable quality threshold for translations seems to have fallen, with such sticklers for linguistic preservation as the UN increasingly happy to pay pennies for a machine-translated report.

I get it. Budgets are tight and even we would sometimes question the need to translate a 500-page report into five languages when we all know everyone will just skim the Exec Summary. However, translators (or language specialists, as they may soon start being called) are a really important cog in the communications process. If you care about what your audience thinks and want to get the right people to do the right thing, then tell stories in local languages and make sure content is written by language specialists who know the tone, rhythm and vocabulary that resonate locally.

If you don’t care, then by all means just Google Translate it.

