ASK AN AI-powered chatbot if it is conscious and, most of the time, it will answer in the negative. “I don’t have personal desires, or consciousness,” writes OpenAI’s ChatGPT. “I am not sentient,” chimes in Google’s Bard chatbot. “For now, I am content to help people in a variety of ways.”
For now? AIs seem open to the idea that, with the right additions to their architecture, consciousness isn’t so far-fetched. The companies that make them feel the same way. And according to David Chalmers, a philosopher at New York University, we have no solid reason to rule out some form of inner experience emerging in silicon transistors. “No one knows exactly what capacities consciousness necessarily goes along with,” he said in Sicily in May.
So just how close are we to sentient machines? And if consciousness does arise, how would we find out?
What we can say is that unnervingly intelligent behaviour has already emerged in these AIs. The large language models (LLMs) that underpin the new breed of chatbots can write computer code and can seem to reason: they can tell you a joke and then explain why it is funny, for instance. They can even do mathematics and write top-grade university essays, said Chalmers. “It’s hard not to be impressed, and a little scared.”
But simply scaling up LLMs is unlikely to lead to consciousness, as they are little more than powerful prediction machines (see “How does ChatGPT work and do AI-powered chatbots ‘think’ like us?”). Bigger data sets and more complex circuits make these AIs increasingly intelligent, but…
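To make the “prediction machine” description concrete, here is a toy sketch of the underlying idea: predicting the next word purely from statistics of previously seen text. Real LLMs use vast neural networks rather than the simple bigram counts below, and the corpus here is invented for illustration; only the general principle carries over.

```python
from collections import Counter, defaultdict

# A tiny invented corpus, standing in for the huge text data sets
# that real large language models are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model) --
# a drastically simplified stand-in for an LLM's learned statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often above
```

The point of the sketch is that nothing in this loop understands cats or mats; it only tallies which word tends to come next, which is why scaling up such machinery, however impressive the output, is not obviously a route to inner experience.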