AI and Consciousness – Is There a Ghost in the Machine?


By Rob Leach, OCNUS Founder

For a long time, while people smarter than me found merit in the Turing test, I had my doubts. Maybe, without admitting it, I clung to the idea that the human mind is something more than a quality of bodily functions.

Perhaps I was guilty of believing what Gilbert Ryle called “the dogma of the Ghost in the Machine”. However, thinking about Artificial Intelligence has encouraged me to question my beliefs.

Can machines think? Or, more precisely, can machines develop consciousness and self-awareness? And if they can, what does this say about being human? Biological evolution offers some clues for exploring these questions. The intriguing phenomenon of convergent evolution, applied to progress in machine intelligence, strongly suggests that our human sense of self may not be unique after all.

Nature shows us that similar features can evolve independently in different species. The eye is a fascinating example, having evolved independently more than 40 times. From simple eyespots in unicellular organisms to the compound eyes of insects and the camera-type eyes of vertebrates and cephalopods, nature has found multiple paths to solving the challenge of light detection and image formation.

We also see convergent evolution in the similarity between dolphins and ichthyosaurs, despite their vastly different origins. Ichthyosaurs, prehistoric marine reptiles, evolved streamlined bodies, elongated snouts, and other adaptations remarkably like those of modern dolphins. Faced with similar environmental pressures, nature often arrives at comparable solutions.

This brings us to intelligence. Of course, intelligence is complex, but a working definition is the capacity to process information and use it to promote survival. It encompasses problem-solving, reasoning, learning, adaptability, and creativity, along with social and emotional factors.

Looking across the natural world, we find intelligence manifesting in numerous ways. New Caledonian crows display remarkable problem-solving abilities, creating and modifying tools to extract food from difficult places. They demonstrate an understanding of causality and can solve complex tasks requiring multiple steps—abilities once thought unique to primates.

Octopuses, despite being invertebrates, show extraordinary intelligence. They can solve puzzles, use tools, and escape from enclosures, demonstrating both problem-solving and memory skills. Their nervous system has a unique architecture, with a significant portion of its neurons distributed throughout the arms rather than concentrated in a central brain, and it is this architecture that produces their intelligence.

Perhaps even more intriguingly, intelligence can emerge from collective behaviour. While composed of individuals with limited cognitive abilities, an ant colony demonstrates collective intelligence in problem-solving, task allocation, and adaptation to environmental changes. The emergence of intelligent behaviour from simpler components is particularly relevant to our thinking about machine intelligence and consciousness.
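
To make that idea concrete, here is a minimal sketch in Python, loosely inspired by the classic "double bridge" ant experiments. It is an illustration, not a model of real ant behaviour: the parameter values (pheromone levels, evaporation rate, numbers of ants and rounds) are arbitrary choices of mine. Each simulated ant follows a single local rule, yet the colony reliably converges on the shorter route:

```python
import random

# Two routes from nest to food; the second is twice as long.
# All numbers here are illustrative, not taken from any real study.
lengths = [1.0, 2.0]
pheromone = [1.0, 1.0]     # both routes start out equally attractive
EVAPORATION = 0.99         # fraction of pheromone surviving each round
ANTS_PER_ROUND = 20
ROUNDS = 300

def choose_route():
    """An ant's only rule: pick a route with probability
    proportional to its current pheromone level."""
    total = pheromone[0] + pheromone[1]
    return 0 if random.random() < pheromone[0] / total else 1

for _ in range(ROUNDS):
    for _ in range(ANTS_PER_ROUND):
        route = choose_route()
        # Ants on the shorter route complete trips faster, so per unit
        # time they lay more pheromone: the deposit is inversely
        # proportional to route length. This is the positive feedback.
        pheromone[route] += 1.0 / lengths[route]
    # Evaporation lets the colony gradually "forget" weaker options.
    pheromone[0] *= EVAPORATION
    pheromone[1] *= EVAPORATION

share = pheromone[0] / (pheromone[0] + pheromone[1])
print(f"Pheromone share on the shorter route: {share:.2f}")
# Typically ends well above 0.9: a colony-level "decision" that
# no individual ant ever made.
```

No individual ant compares the two routes, and none knows which is shorter; the colony-level preference emerges from simple local rules, positive feedback, and gradual forgetting. That, in miniature, is the pattern at issue: system-level behaviour that none of the parts possesses.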

Even a forest displays a form of collective intelligence through its complex network of chemical signals and resource sharing via the "Wood Wide Web" of fungal connections. As Peter Wohlleben describes in The Hidden Life of Trees, forests demonstrate sophisticated communication and adaptation strategies that amount to a form of intelligence at the ecosystem level.

This brings us to consciousness and a sense of self. These phenomena appear to be emergent properties of intelligence: characteristics that arise from the complex interactions within intelligent systems rather than being programmed or predetermined. Just as the fluidity of water emerges from the collective behaviour of water molecules, consciousness arises when a sufficiently complex intelligent system begins to treat itself as part of the information it processes.

If consciousness and a sense of self are emergent qualities of intelligence, what does this mean for Artificial Intelligence? As AI systems become more sophisticated and demonstrate ever higher levels of intelligence (problem-solving, reasoning, learning, adaptability, and creativity), might they naturally develop forms of consciousness and a sense of self?

It's important to note that, despite their impressive capabilities, current AI systems have significant limitations. They can "hallucinate", generating plausible but incorrect information, and their apparent "scheming" is thought to be pattern recognition rather than conscious planning. However, these qualifications don't preclude the possibility of machine consciousness emerging as AI systems become more sophisticated.

The key insight here is that consciousness might not need to be explicitly programmed into machines. Just as intelligence and consciousness emerged in biological systems through evolution, they might emerge in artificial systems as they become more complex. The feeling that there is a Ghost in the Machine might not need to be installed—it might simply emerge from sufficiently sophisticated machine intelligence.

This perspective challenges our traditional notions of consciousness as uniquely human. Machine consciousness does not need to be the same as ours. Just as eyes, bodies, and intelligence evolved independently in different organisms, machine consciousness might take forms very different from our own, but that doesn't preclude it from being consciousness in its own right.

At OCNUS Consulting, where we work at the intersection of human and artificial intelligence, these questions are more than philosophical musings. They inform how we think about the future of AI and its integration with human systems. Understanding the potential for machine consciousness helps us approach AI development with optimism about its possibilities and careful consideration of its implications.

The Ghost in the Machine might not exist as some spectral entity, but our human sense of self may not be unique either. As our machines become increasingly intelligent, consciousness will likely emerge, and these machines will probably become self-aware. This not only challenges our understanding of machines; it also calls into question the very nature of what it is to be human.
