It’s one year since the launch of ChatGPT.
Before it was unleashed upon the world on 30 November 2022, AI chatbots were broadly considered a joke.
It’s certainly hard to imagine anyone trusting Microsoft’s infamous Tay bot to help with homework or trip planning.
ChatGPT has had much more staying power, becoming part of daily life for hundreds of millions of people.
Catching the world’s attention
OpenAI had been pretty unknown outside tech circles before last November, so few were taking much notice when ChatGPT was quietly released to the public.
It took its name from GPT-3.5, the company’s then-flagship large language model: an AI trained on enormous amounts of text and data that can then respond to questions and prompts.
ChatGPT was all about giving these complicated concepts a friendly public face, packaging them up into a product that would look familiar to anyone who has used a messaging app.
When early adopters started sharing how it could do anything from writing poetry to debugging computer code, its accessibility meant even casual observers (and recently hired reporters) were tempted to try it for themselves.
The AI arms race begins
With 100 million users by January 2023, ChatGPT quickly became the fastest-growing app in history – a record broken just a few months later by Meta’s Threads.
Its success was a surprise even to its creators, who saw it as little more than a research project.
We saw examples of how it could help people apply for jobs, give children a leg-up on homework, and write politicians’ speeches for them (it was not exactly “I have a dream”, but it was definitely safer than what Tay might have come up with).
Who is OpenAI boss Sam Altman?
While February saw Google launch Bard, rival Microsoft went all-in on OpenAI instead – investing $10bn and implementing its GPT tech into products like Bing (yes, it was still going) and Teams.
ChatGPT itself also got an upgrade, and OpenAI charged for the privilege, as the once non-profit company looked to commercialise the internet’s new favourite toy.
The company’s former board member Elon Musk also unveiled plans for his own chatbot, despite warning the technology could pose an existential threat to humanity.
Chinese tech firms like Alibaba and Baidu were similarly quick to respond, keen not to fall behind in a field some believe will prove as transformative as the Industrial Revolution.
The rate of development – and a timely film about Oppenheimer’s atomic bomb – had some looking nervously to history for cases of how scientific breakthroughs can lead to destructive arms races.
The beginning of the end?
For every innocent example of ChatGPT being used to write a dinner recipe or draft an email to avoid a meeting, sceptics had a warning for us not to get too comfortable.
Its tendency to occasionally hallucinate (that’s AI speak for making things up) has raised fears it could supercharge misinformation many already struggle to identify, especially ahead of 2024 general elections in the UK and US.
And despite its name, critics have accused OpenAI of not being transparent about GPT’s training data. Some say it could perpetuate existing fake news and biases.
Artists are among the most concerned about what AI is trained on, fearing their work could be ripped off.
At the most serious end of the doom-saying spectrum, there have been warnings of an unprecedented wave of cyberattacks and child abuse crimes; job losses on a scale hitherto undreamt of; and AI powerful enough to threaten humanity altogether.
Nobody’s suggesting ChatGPT will kill us all, but its success has brought the issue of AI safety to the front of public consciousness like never before.
To regulate or not to regulate?
It’s a debate that’s rippled through every office in the land: from those of the managers wondering whether AI could save them some money on staff, to the Oval Office of the US president.
World leaders have found themselves caught between a desire to stay on top of the tech, and an unwillingness to curtail innovation.
The UK’s AI Safety Summit earlier this month – which OpenAI’s Mr Altman attended – acknowledged the threats AI poses, but approaches to regulation remain scattershot.
ChatGPT has helped make Mr Altman a key voice, and he’s spent much of 2023 meeting world leaders.
“My worst fears are that we, the industry, cause significant harm to the world,” he told the US Senate back in May, advising government regulation would be “critical to mitigate the risks”.
Not that he’s appeared keen to put the brakes on.
What happens next?
ChatGPT’s popularity has only fuelled Mr Altman’s ambition.
Ahead of the anniversary, he appeared at an OpenAI developer conference dedicated to empowering third parties to leverage GPT in their products – even building their own digital assistants.
In September, the Financial Times reported the company wanted ex-Apple designer Jony Ive to help build the “iPhone of AI”.
ChatGPT is just one step on the path to Mr Altman’s ultimate goal: creating artificial general intelligence.
This refers to super-powerful AI capable of outperforming humans across a wide range of tasks, which sceptics warn could reach a level beyond our control.
Reuters reported that ahead of Mr Altman’s ousting on 17 November, some staff researchers wrote to the board warning of a powerful AI discovery they said could threaten humanity.
The board’s choice to be his very short-lived replacement as CEO, Emmett Shear, had previously said he supports “slowing down” development of AI.
But with Mr Altman brought back just five days later, after hundreds of employees threatened to quit, he returns to lead OpenAI from a stronger position than before.
It will likely make for an even more interesting year ahead.