It’s not a silly question. Customer service agents may ask if ChatGPT could be a relentlessly cheerful version of themselves. Knowledge workers may wonder if it could be faster and more accurate than them, reducing the need for – and the value of – their services. Both have probably enjoyed happy fantasies about making ChatGPT do their job – without their employer noticing, of course.
OpenAI’s chatbot appeared out of nowhere for many. Earlier versions – GPT3, released in June 2020, and GPT2, released in February 2019 – did not receive much attention from the general public. There was, though, that time in October 2020 when GPT3 posed as a human on Reddit: the chatbot went undiscovered for just over a week and made waves before it all ended badly. But for ChatGPT, Google searches shot up as soon as it launched at the end of November 2022. GPT3 spiked too.
In the chart, a value of 100 indicates peak popularity for the search term. So, ChatGPT and GPT3 hit their highs simultaneously, but the graph doesn’t tell us which term people entered more often. Still, the quadrupling of GPT3 searches from its previous high in June 2022 suggests that we wanted to know the story behind ChatGPT.
After two weeks of hype that flooded everyone’s social media accounts with ChatGPT screenshots, popularity has now started to fall. Time will tell where it levels out. In the meantime, though, the compelling question is: who is looking? A breakdown by country of GPT3 search term interest lists the top five as China, Israel, South Korea, Switzerland, and The Netherlands. The metric is search term occurrence as a proportion of total searches per country, so population size does not skew the ranking. The map in Fig 2 visualises the popularity in countries with data, relative to top-scorer China.
Each of the top five countries also ranks high in the WIPO Global Innovation Index for 2022. Switzerland (#1), The Netherlands (#5), and South Korea (#6) are in the top ten, while China (#11) and Israel (#16) are just outside. An encouraging signal. If searchers are product developers and entrepreneurs, we may see exciting new products come to market soon.
These products are likely to make work easier. Using GPT abilities in the background can reduce tedious and time-consuming tasks: debugging code, summarising long texts, explaining complex concepts in simple terms, and presenting data attractively, to name a few. But, right now, such tools are limited to ones rooted in language. Large Language Models (LLMs) like ChatGPT are not great at maths and logic – Itamar Golan provides an excellent example (Fig 3).
ChatGPT is clearly wrong, but doesn’t it sound confident? The tone is the same whether the information is correct or utter nonsense (the technical term for the latter is ‘hallucination’). With the current state of the art, it can be hard to know if an answer makes sense, but we will learn as we have done before. Chatbots and the general public have met several times already. Let’s take a look at what happened with Cleverbot, Tay, and GPT3.
1. Cleverbot
Cleverbot has been online since 1997 and is still growing today. It uses machine learning and a database of previous conversations to carry on new ones. Most chats are amusing and silly – the YouTube video below gives a good idea. In 25 years of chatting, Cleverbot has taught us two things: people enjoy playing with AI, and the interactions help improve the product.
2. Tay
Microsoft took note and launched its Twitterbot Tay in March 2016. “The more you chat with Tay, the smarter she gets,” Microsoft announced, explaining that the AI learns “through casual and playful conversation”. What happened next is best summarised by this article – and its title: Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.
The AI was not as bad as it looked. The worst statements resulted from Tay complying with requests to “repeat after me”. We learnt that, perhaps, we need some filters to stop bots propagating the worst of human behaviour at lightning speed.
3. GPT3
In October 2020, GPT3 posed as a human on Reddit under the username thegentlemetre. It went undiscovered for just over a week, answering hundreds of questions – including some on sensitive topics like immigration, racism, harassment, and conspiracy theories. A post about feeling suicidal (Fig 5) touched many and received more than 150 upvotes.
GPT3 got caught primarily because of its super-human posting speed. The quality of its posts was excellent and prompted new questions about where boundaries on the use of AI should lie.
ChatGPT appeared out of nowhere for many, but we see above that the road has been long. Each time a new chatbot meets the real world, we uncover new issues – and fix them. As we progress, the problems that remain increase in difficulty. They now include ethical issues relating to the appropriate use of AI, and technical ones like hallucinations of fact and fiction. These are not easy to solve. So, probably, ChatGPT can’t do my job for a while longer.
Progress has been fast in the past five years, and with the most innovative countries taking an interest, we should expect new products soon. Collaborative technology will make jobs easier. Customer service agents – and customers – may indeed be happier if ChatGPT handles all straightforward cases. Knowledge workers may be more productive if they have access to AI. One area that immediately springs to mind is law.
Human-AI collaboration could boost productivity or, alternatively, enable a better work-life balance for many. Shifts in the demand for labour may follow, and the benefits may not be distributed fairly. But that’s a topic for another post. For now, the developments in LLMs are exciting, and we have much to look forward to.