Large language models (LLMs), such as those powering chatbots like ChatGPT, may demonstrate behaviors akin to those of sentient beings when faced with pain and pleasure scenarios. Researchers at Google DeepMind and the London School of Economics have conducted a first-of-its-kind study, still awaiting peer review, that explores whether AI systems can exhibit sentient-like characteristics through a novel text-based game. The study prompts a broader discussion of the ethical and legal considerations surrounding machines that exhibit sentient-like behaviors.
Study method and significance
In the game, the LLMs were presented with two scenarios: achieving a high score under the threat of pain, or choosing a lower-scoring but pleasurable option. The researchers observed that the models made meaningful trade-offs, often opting to minimize pain or maximize pleasure, hinting at a complex decision-making process.
Jonathan Birch, a professor at LSE and co-author of …