Experts want to give AI human ‘souls’ so they don’t kill us all – Cointelegraph Magazine



Until now, it’s been assumed that giving artificial intelligence emotions — allowing it to get angry or make mistakes — is a terrible idea. But what if the solution to keeping robots aligned with human values is to make them more human, with all our flaws and compassion?

Robot Souls book cover. (Amazon)

That’s the premise of a forthcoming book called Robot Souls: Programming in Humanity, by Eve Poole, an academic at the Hult International Business School. She argues that in our bid to make artificial intelligence perfect, we have stripped out all the “junk code” that makes us human: emotions, free will, the ability to make mistakes, to see meaning in the world and to cope with uncertainty.

“It is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving,” Poole writes.

“If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them, to all intents and purposes, a ‘soul.’”

Of course, the concept of the “soul” is religious and not scientific, so for the purpose of this article, let’s just take it as a metaphor for endowing AI with more human-like properties.

The AI alignment problem

“Souls are 100% the solution to the alignment problem,” says Open Souls founder Kevin Fischer, referring to the thorny problem of ensuring AI works for the benefit of humanity instead of going rogue and destroying us all. 

Open Souls is creating AI bots with personalities, building on the success of his empathic bot, “Samantha AGI.” Fischer’s dream is to imbue an artificial general intelligence (AGI) with the same agency and ego as a person. On the SocialAGI GitHub, he defines “digital souls” as different from traditional chatbots in that “digital souls have personality, drive, ego and will.”

A screenshot of a chat between a Replika user named Effy and her AI partner Liam. (ABC)

Critics would no doubt argue that making AIs more human is a terrible idea, given that humans have a known propensity to commit genocide, destroy ecosystems, and maim and murder each other.

The debate may seem academic right now, given that we have yet to create a sentient AI or solve the mystery of AGI. But some believe it could be just a few years off. In March, Microsoft researchers published a 155-page report titled “Sparks of Artificial General Intelligence,” suggesting humanity is already on the cusp of an AGI breakthrough.

And in early July, OpenAI put out a call for researchers to join their crack “Superalignment team,” writing: “While superintelligence seems far off now, we believe it could arrive this decade.”

The approach will presumably be to build a human-level AI that OpenAI can control and that will, in turn, research and evaluate techniques for controlling a superintelligent AGI. The company is dedicating 20% of its compute to the problem.

Singularity.net founder Ben Goertzel also believes AGI could be five to 20 years off. When Magazine spoke with him on this topic — and he’s been thinking about these issues since the early 1970s — he said there’s simply no way for humans to control an intelligence 100 times smarter than us, just as we can’t be controlled by a chimp.

“Then I would say the question isn’t one of us controlling it; the question is: Is it well disposed to us?” he asked.

For Goertzel, teaching and incentivizing the superintelligence to care for humans is the smart play. “If you build the first AGI to do elder care, creative arts and education, as it gets smarter, it will be oriented toward helping people and creating cool stuff. If you build the first AGI to kill the bad guys, perhaps it will keep doing those things.”

Still, that’s a few years away yet.



For now, the most obvious near-term benefit of making AI more human-like is that it will help us create less annoying chatbots. For all of ChatGPT’s helpful functions, its “personality” comes across, at best, as an insincere mansplainer and, at worst, as an inveterate liar.

Fischer is experimenting with creating AI with personalities that interact with people in a more empathetic and genuine manner. He has a Ph.D. in theoretical quantum physics from Stanford and worked on machine learning for the radiology scan interpretation firm Nines. He runs the Social AGI Discord and is working on commercializing AI with personalities for use by businesses.

“Over the course of the last year, exploring the boundaries of what was possible, I came to understand that the technology is there — or will soon be there — to create intelligent entities, something that feels like a soul. In the sense that most people will interact with them and say, ‘This is alive, if you turn this off, this is morally…’”

He’s about to say it would be morally wrong to kill the AI…
