As generative AI seeps into just about everything, Oscar Quine (who freelances for both The Drum and AI companies) looks into the hidden human element in honing its output.
When OpenAI first devised ChatGPT, success was by no means assured. The pursuit of artificial intelligence (AI) had been underway for decades. The Silicon Valley unicorn (at the time a non-profit with early commitments from the likes of Elon Musk and LinkedIn’s Reid Hoffman) had arrived at a model that seemed to work on a small scale. The company then pumped more data into it, increasing the parameters underlying its outputs, and the approach paid off: it had created a coherent, knowledgeable (or ‘knoweldgeable’) chatbot.
Many question marks still hang over the inner workings of large language models (LLMs), even within programmer communities – the model OpenAI arrived at is something of a black box, as are its competitors. And at this stage …