THE WORLD has changed. We are now, for the first time in human history, living among sophisticated intelligences that are not human. The most prominent of these artificial-intelligence applications are large language models, such as OpenAI’s GPT (and its chatbot, ChatGPT) and Google’s LaMDA (and its chatbot, Bard), which are capable of holding natural conversations on a nearly limitless range of subjects. The capabilities of these LLMs, scarcely imaginable a decade ago, allow them to talk to us as a friend, or a trusted advisor, might.
In January this year, merely two months after its launch, ChatGPT had over 100 million unique users—unprecedented growth for any application. (The social-media sites TikTok and Instagram took nine and thirty months, respectively, to achieve that figure.) Bard, which was released to the general public in May, logged 142.6 million visits that month. The immense popularity of these chatbots, which have been integrated into a number of existing applications, has sparked a cottage industry of lectures, articles, books and YouTube videos explaining how they work, as well as think pieces debating what they mean for the future of work.
A question you might be asking yourself is why you should care about understanding AI or LLMs at all. Many people, after all, drive their cars or use their phones without knowing how they work. There are two kinds of answers to this question.