
Oxford scientists suggest humans could merge with artificial life to create a hybrid consciousness.

Updated: Jul 24

By Chethana Janith, Jadetimes News

 
Image Source: LEONELLO CALVETTI/SCIENCE PHOTO LIBRARY

The concept makes us rethink what it means to be human.


At the turn of the 8th century B.C.E., the ancient Greek poet Hesiod wrote a curious tale about a strange robot named Talos. Forged by the godly blacksmith Hephaestus and infused with ichor, the same mysterious life force that flowed through the veins of the gods themselves, Talos marked the first-ever description of an artificial lifeform, and the beginning of humanity’s long obsession with artificial consciousness.


Fast forward 2,700 years, and like a modern-day Hephaestus, companies like Microsoft, OpenAI, and Anthropic are imbuing their own artificial creations with the ichor of human creativity, reasoning, and data, lots and lots of data. Although Hesiod’s 100-foot-tall, brass-clad creation plays only a minor role in the supernatural soap opera of Greek mythology, AI researchers are beginning to use words like “synthesis,” “merger,” or even “evolution” to describe humanity’s future relationship with artificial life.


Suddenly, a hybrid consciousness that felt so decidedly sci-fi (or ancient Greek, depending on your point of view) now seems inevitable. While the future holds many permutations of this merger of consciousness, humanity appears to have reached a momentous fork in the road: Will the rise of a hybrid consciousness bring about a technological utopia or a Terminator-style apocalypse?


Oxford philosopher Nick Bostrom, Ph.D., has spent more than a decade pondering both of these possibilities, and while the future is uncertain, Bostrom says that some sort of human-machine hybrid consciousness is likely inevitable.


“It would be sad to me if like, in a million years, we still have the current version of humanity … at some point you may want to upgrade, then you can imagine uploading or biologically enhancing yourself or all kinds of things,” he says.


But such an “upgrade” can come with some serious, potentially world-ending consequences, a series of apocalyptic futures Bostrom previously explored in his 2014 book Superintelligence. Yet his latest book, Deep Utopia, argues the other side, contending that a hybrid existence could create a “solved world,” one free of the everyday drudgery that fills our lives today.


Whether humanity is fighting against Skynet or exploring the galaxy in a kind of Star Trek-ian paradise still boils down to one question: Will AI ever become conscious?


THE DEFINITION OF CONSCIOUSNESS is a notoriously slippery concept that philosophers, scientists, and now AI engineers have debated for centuries. Alan Turing’s famous test measured the intelligence of a system, or at least its ability to trick humans into thinking it was intelligent. However, experts have since argued that such a test examines only a very small piece of the consciousness puzzle.


More complicated hypotheses, such as Global Workspace Theory, Integrated Information Theory, Higher-Order Representation Theory, and Attention Schema Theory, all point to certain ways someone or something could be regarded as conscious. Bostrom argues that consciousness isn’t a black-and-white matter, like flipping a switch. Instead, the process of gaining consciousness is a gradual and oftentimes murky journey whose progress is difficult to ascertain.


“We don’t have very clear criteria for what makes a computational system conscious or not,” Bostrom says. “If you just take off the shelf the best theories we have on consciousness … it’s not a ridiculous idea by any means that there could be some forms of consciousness now or in the very near term emerging from these [AI] systems.”


Until recently, most AI programmers didn’t really wrestle with these deep philosophical theories; they just wanted to make sure their large language models (LLMs) weren’t accidentally racist. But in the past couple of years, engineers from both Google and OpenAI have controversially questioned whether these programs are actually conscious.


Bostrom argues that the advent of AI that convincingly speaks like a human, including platforms like Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini, is a big component fueling current consciousness claims. After all, when you speak to another person, you assume they’re conscious—and convincing digital minds appear to trigger a similar reaction.


This kind of makeshift consciousness has been the object of some obsession for Oxford’s Marcus du Sautoy, Ph.D. As the university’s Professor for the Public Understanding of Science, du Sautoy has given talks and even written a book exploring the idea that AI could possibly be creative. However, it’s du Sautoy’s background in mathematics that grounds his understanding of hybrid consciousness today.


“AI is just code, code is just algorithms, and algorithms are just math, so we are creating something mathematical in nature,” du Sautoy says. “People get very mystical about what the brain does … but [the brain] also goes through some sort of algorithmic process.” The brain and artificial algorithms (that is, sets of rules and calculations) are similar in many ways; both create types of synaptic connections, for example. But the human brain remains superior at creating new synaptic connections. In other words, it’s better at “learning.”


If digital minds were to gain some degree of consciousness, even in a rudimentary, rat-in-a-science-experiment sort of way, we would likely owe the machines some sort of legal protection and moral courtesy. Bostrom says, for example, that engineers could direct an artificial consciousness’ overall personality, including taking into account its well-being. This “state of mind” programming would ensure that the digital mind feels happy and eager to take on the day’s tasks.


WHILE POPULAR BOOKS AND FILMS have explored the myriad ways the rise of a digital consciousness could end us all (an outcome Bostrom says is still very much on the table), it’s also possible that a human-digital hybrid leads the way to a utopian future. In 2017, OpenAI CEO Sam Altman described this idea as “The Merge,” a future in which humanity peacefully co-exists with its digital creation. Compared to other disastrous outcomes, such as a future Bostrom once posited in which AI accidentally transforms the world into paper clips, the idea of a Merge is “probably our best case scenario,” Altman wrote.


Billionaires like Elon Musk have taken this idea of The Merge quite literally, investing billions to form companies like Neuralink, whose aim is to physically connect biological components with mechanical ones. Neuralink’s first human clinical trial, known as the Precise Robotically Implanted Brain-Computer Interface (PRIME) study, aims to interpret neural activity so that patients living with ALS, a neurological disease that destroys motor nerve function, can experience “the joy of connecting with loved ones, browsing the web, or even playing games using only your thoughts,” according to a Neuralink promotional video.


However, Altman’s vision is more of a “soft” synthesis, one that began with the invention of the internet. It gained steam with the arrival of the smartphone, really took off during the social media era, and has finally brought us to this perplexing technological moment.


WITH THE IDEA OF CONSCIOUS DIGITAL MINDS on the horizon, it could be that humans are the proverbial frog in a boiling pot of water, and it’s only been a few years since things have started to feel a bit steamy. But as Bostrom argues in his book, this might be a boiling pot we don’t want to jump out of.


“We need to rethink what it means to be human in such a world where AI has taken care of all the practical tasks and we have a kind of a solved world,” Bostrom says. “You might have a much more radical form of automation, where maybe working for money at all becomes completely unnecessary because AI and robots can do everything better than we can do.”


But Bostrom says this is really only one layer of the philosophical onion.


“In the deeper layers, you realize that not just our economic efforts become obsolete but our instrumental efforts also,” Bostrom says. “You could still do these activities but they would be pointless in this condition, so what do we believe has value for its own sake and not as a means of achieving various other things?”


du Sautoy also mentions that such a merger could be complicated by the fact that however digital consciousness arises, it will almost certainly be different from our own. That could lead to scenarios where our creations have no interest in having a biological buddy in the first place.


“The speed that AI will operate is not limited by embodiment,” du Sautoy says. “Its pace of life will be so different to ours that maybe AI will look at us the same way we look at a mountain.” While human lives rarely stretch beyond a century, mountains exist over millions of years. Similarly, the fast-paced, data-crunching “life” of AI could be equally unfathomable to our comparatively slow-paced lives.


Whether humans end up wired like cyborgs or simply erase the boundary between their physical and digital lives, it’s hard to divine how we will ultimately merge with our artificial creations, but du Sautoy believes the risk is worth taking.


“I think that we are headed toward a hybrid future,” du Sautoy says. “We still believe that we are the only beings with a high level of consciousness. … This is part of the whole Copernican journey that we are not unique. We’re not at the center.”


And for Bostrom, the journey will question the very definition of what it means to be human.


“Yes, we want humanity to survive and prosper, but what exactly do we mean by humanity?” Bostrom says. “Does it have to have two legs, two arms, and two eyes, and die after 70 years? Or maybe that’s not the real essence of humanity. You could imagine all of those things changing quite a lot … perhaps humankind will grow into something much bigger.”

