The Master-Slave Dialectic
Who Controls Whom?
A few years ago, a friend of mine, David Ray, held an exhibition titled “Master and Servant”.
David, aka the “Duke of Dirt”, is a brilliant ceramicist, and the exhibition featured a series of dog figurines. The title was a play on Hegel’s “Master-Slave Dialectic”, in which Hegel argued that though the Master owns the Slave, the Master becomes so dependent on the Slave that the power structure can be reversed.
My mate David’s exhibition explored how, though we dog owners are theoretically our pets’ Masters, they often control our lives. Micky, my cheeky black Labrador, usually does what I ask. But he significantly influences my life choices – from the apartment I bought to the car I drive, the holidays I take, and how I organise my day. As I drag myself out of bed for Micky’s early morning walk, I sometimes wonder: Who’s really in charge here?
And so to AI. I am immensely optimistic about its power to improve our lives. But I don’t think we should overlook its potential to control our future. Evolution can provide some understanding here.
Darwin didn’t invent the idea of evolution. His genius was in articulating how natural selection was its driving force. I believe this insight has influenced our view of our place in the world more significantly than any other idea throughout human existence, and it’s very relevant to our understanding of AI.
Our natural tendency to anthropomorphise encourages us to see recent examples of evolution as the result of wholly human agency. However, using the lens of Hegelian dialectics, things can look different.
Take wheat and chickens, for example. They have evolved enormously through human selective breeding and now genetic engineering. However, from the perspective of wheat and chickens, humans have helped them expand in biomass, varieties, and distribution over the globe. We have served their evolutionary interests enormously.
A similar thought process can be applied to the evolution of machines and how they’ve changed our world. The Tesla Cybertruck is a long way from the Model T Ford, and smartphones are very different from Alexander Graham Bell’s vibrating diaphragm. They’ve evolved as better features are selected and old versions become extinct. And consider the influence of these machines: global politics are dominated by oil. Our cities, and indeed our lives and deaths, are hugely influenced by motor transport. And smartphones … who sleeps more than arm’s length from their phone these days? Sometimes, it’s reasonable to pause and contemplate who, or what, is controlling whom.
The evolution of AI has been nothing short of remarkable. Alan Turing kicked things off by theorising about machine intelligence in 1950. In 1951, Marvin Minsky and Dean Edmonds built SNARC, an early artificial neural network that simulated a rat finding its way through a maze. Arthur Samuel developed the first self-learning program, a checkers player, in 1952. Then the Logic Theorist arrived in 1956, proving mathematical theorems in a way that mimicked human reasoning. The world sat up and noticed when IBM's Deep Blue beat Garry Kasparov at chess in 1997. Google DeepMind's AlphaGo defeating Lee Sedol at Go in 2016 was actually more impressive, though it got less attention. Google's Transformer architecture, published in 2017, revolutionised natural language processing and led to OpenAI's GPTs (Generative Pre-trained Transformers); ChatGPT burst into public awareness in late 2022.
The rise of AI has prompted many of us to adapt and change in numerous ways. Increasing numbers of professionals work in tandem with AI, enhancing productivity and driving innovation. For example, ZestFinance uses machine learning to evaluate credit risk, leading to faster and more accurate assessments.
Google's DeepMind outperforms human radiologists in detecting breast cancer. MIT's AI system predicts patient deterioration in hospitals with 90% accuracy. AI diagnoses diabetic retinopathy with 94% precision. JPMorgan's COIN system processes in seconds legal documents that once consumed 360,000 lawyer-hours a year. BlackRock's Aladdin manages $21 trillion in investment assets. PayPal prevents $4.5 billion in fraud annually using AI. And DeepMind's AlphaFold cracked the protein-folding problem, earning Demis Hassabis and John Jumper a share of the 2024 Nobel Prize in Chemistry.
And much more. Channelling Darwin and Hegel, AI has used human creativity and expertise as catalysts for its evolution.
Put aside, for the moment, Artificial General Intelligence (AGI): machines that can understand, learn, and apply knowledge across a wide range of tasks at a level comparable to a human being. Put aside, too, Artificial Super Intelligence (ASI): intelligence that surpasses human intelligence across virtually all fields, including creativity, social skills, and problem-solving. 2025 will be the year of AI Agents. They already exist, but their range and influence are on an almost exponential curve of application.
AI Agents perform tasks autonomously by perceiving their environment in ever more sophisticated ways, learning in real time, assessing and refining decisions, and taking action without constant human intervention. Agency is the key concept here: it’s not just AI’s ability to process information faster, but its ability to make good decisions without human input. Complex “knowledge work”, which we like to think of as a uniquely human domain, will become increasingly automated. The productive potential of AI Agents is nothing short of awe-inspiring. But let’s not forget that awe sometimes includes feelings of fear. There is dread in the sublime.
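To make the idea of agency a little more concrete, here is a minimal, purely illustrative sketch of the perceive-decide-act-learn loop that sits underneath most AI Agents. The `ThermostatAgent`, the simulated room, and the numbers are all hypothetical stand-ins of my own, not any particular product or framework; real agents swap the hand-written rules for large models and richer feedback, but the loop is the same.

```python
import random

class ThermostatAgent:
    """Illustrative agent: keeps a simulated room near a target temperature."""

    def __init__(self, target=21.0):
        self.target = target
        self.adjustment = 1.0   # how strongly it reacts; refined over time

    def perceive(self, room_temp):
        # Sense the environment (here, one slightly noisy temperature reading).
        return room_temp + random.uniform(-0.2, 0.2)

    def decide(self, reading):
        # Choose an action without human input: heat, cool, or do nothing.
        error = self.target - reading
        if abs(error) < 0.5:
            return 0.0
        return self.adjustment if error > 0 else -self.adjustment

    def learn(self, error_before, error_after):
        # Crude self-refinement: if acting made things worse, react more gently.
        if abs(error_after) > abs(error_before):
            self.adjustment *= 0.9

def run():
    agent = ThermostatAgent()
    room_temp = 16.0
    for step in range(20):
        reading = agent.perceive(room_temp)
        error_before = agent.target - reading
        action = agent.decide(reading)
        room_temp += action + random.uniform(-0.3, 0.3)  # environment responds
        error_after = agent.target - room_temp
        agent.learn(error_before, error_after)
        print(f"step {step:2d}: temp={room_temp:5.2f}, action={action:+.2f}")

if __name__ == "__main__":
    run()
```

Trivial as it is, the sketch shows where the unease comes from: once the sensing, deciding, and self-correcting are wired together, the human is no longer inside the loop.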
As machines become more intelligent and possibly develop their own form of consciousness, they are likely to manifest what we imagine as free will and increasingly make their own decisions. Machines don’t yet reproduce themselves: like viruses, they need a mediator to do that. However, as AI develops more agency, it will become progressively self-improving, writing its own code when it identifies new needs. It may decide that reproduction is a good survival strategy, further speeding up AI evolution.
For the time being, we think we’ve mastered Artificial Intelligence. However, it would be naïve to rule out a dialectical inversion in which we become slaves to our invention. Nobel Laureate and “Godfather of AI” Geoffrey Hinton said last week that things are “scary already”.
I might ask ChatGPT for its thoughts on this, assuming it will tell me what it really thinks. Anyway, I need to go. Micky is nudging my leg, demanding that we go for a walk.
By Rob Leach
OCNUS Founder