The Allure of Control | Why Mao Zedong Might Admire AI
Mao Zedong, the architect of the Chinese Cultural Revolution, sought total ideological control over the individual. The movement he spearheaded aimed to eradicate dissent, promote conformity, and centralise authority.
Today, artificial intelligence, when misused, offers tools to extend such authoritarian aims with unprecedented precision: algorithmic surveillance, social credit systems, and automated censorship.
This is not merely a speculative link, but an alarming pattern where contemporary technological capabilities echo certain ideological ambitions.
The deeper concern is not limited to government misuse. Companies and brands, too, are seduced by AI’s promise of control over markets, consumers, and even employees. Corporate strategies now include algorithmic decision-making in hiring, performance tracking, and consumer targeting.
This convergence of state and corporate technocracy reflects the kind of dehumanised logic that thinkers like Hannah Arendt warned against in her studies of totalitarianism. Arendt emphasised the danger of people becoming mere functions of a system, which is precisely the threat AI poses when it becomes the primary agent of decision-making.
From Steam Engines to Algorithms | Lessons from the Industrial Revolution
During the Industrial Revolution, machines began replacing skilled craftsmen, displacing centuries-old traditions of apprenticeship and human ingenuity.
Lewis Mumford, in “Technics and Civilization,” described three technological epochs: the eotechnic (organic and decentralised), the paleotechnic (centralised and extractive), and the neotechnic (diverse and human-centred).
The current AI boom risks becoming a hyper-paleotechnic phase, where algorithms centralise power, extract behavioural data, and alienate users from their intellectual labour and sense of autonomy.
At the turn of the 20th century, amidst the optimism of the Belle Époque, society embraced progress uncritically. The Eiffel Tower, electric light, and early cinema were marvels of an age intoxicated with innovation. However, this era also bore the seeds of dehumanisation: mechanised warfare and regimented factory labour.
Progress without ethical reflection often leads to new forms of domination and inequality.
AI may repeat history: a golden age of innovation masking an undercurrent of intellectual disempowerment. Historical hindsight reminds us that uncritical faith in machines often leads to the marginalisation of human dignity. The promise of liberation becomes the practice of domination.
Contemporary parallels are striking. The increasing use of AI to automate customer interactions, medical diagnostics, and even legal judgments suggests that the locus of control is shifting from human discretion to algorithmic prediction.
Machines vs. Minds | The Unquantifiable Value of Human Creativity
Walter Benjamin’s seminal essay, “The Work of Art in the Age of Mechanical Reproduction,” argued that mass production erodes the “aura”, the unique presence, of a work of art. In the same way, AI-generated content mimics human creativity but lacks context, emotion, and intent. The result is not art but artefact, technically proficient yet spiritually hollow.
Today, AI tools can compose symphonies, generate photorealistic images, and write articles indistinguishable from human output. But what they lack is consciousness, the existential struggle, intuition, and symbolic thought embedded in genuine human creation.
Jacques Ellul warned in “The Technological Society” that technique, once introduced, becomes autonomous and displaces other values. The value of creative thought, under this logic, is measured by output, not insight. It is an impoverishment of meaning.
Theodor Adorno, reflecting on the culture industry, feared a world where consumers are pacified by sameness, where creativity is formatted, predictable, and market-driven.
AI risks accelerating that scenario by turning artistic expression into predictable algorithms based on prior preferences. In delegating creativity to machines, we risk impoverishing the human spirit and narrowing the range of expression to what is already familiar.
True creativity involves risk, contradiction, and transformation.
Machines cannot innovate beyond their programming; they cannot dream or despair. This makes human creativity irreplaceable and infinitely valuable. Brands and cultural institutions must be vigilant not to let efficiency override authenticity, and not to trade the imaginative potential of the human mind for the artificial mimicry of the machine.
Delegation or Abdication? | The Danger of Intellectual Laziness
It is tempting to outsource tasks to AI: emails, planning, even decision-making. But what begins as convenience risks becoming cognitive abdication. Erik Brynjolfsson distinguishes between automation (replacing humans) and augmentation (enhancing human ability).
The danger is when automation becomes default, and human judgment erodes, replaced by mechanical reasoning without critical scrutiny.
Mumford’s notion of the “megamachine,” a society organised around technological systems in which individuals are reduced to roles, comes to life in today’s algorithm-driven organisations. Delegating critical thought to AI erodes our intellectual capabilities.
Over time, it fosters dependency and laziness, lowering cognitive effort and critical engagement. This is particularly dangerous in education, journalism, governance, and healthcare, sectors that rely on nuanced, ethical reasoning.
The real threat is not that machines become more intelligent than us, but that we voluntarily make ourselves less intelligent.
Delegating ethical, creative, and analytical tasks to machines can reduce our intellectual resilience. If unchecked, this may result in a self-imposed infantilisation, where we forget how to reason independently and abdicate responsibility for our decisions.
The trend is evident: students relying on AI for essays, managers using algorithms for hiring without oversight, and institutions trusting predictive tools over human context. This is not augmentation but attrition, the steady erosion of human complexity and critical engagement.
Only through disciplined thought and active resistance can we reverse this trend.
Ethics, Impact and Responsibility | AI in the Hands of Brands and Leaders
Businesses and brands are rapidly integrating AI: optimising logistics, personalising ads, and automating customer service.
Yet leaders rarely ask themselves: just because we can, should we? There is an ethical imperative to consider the long-term consequences of dehumanising processes. Embracing ethical innovation requires foresight, humility, and social dialogue, not just speed and scale.
Lewis Mumford advocated for “democratic technics,” where technology serves human needs rather than corporate profit or state control. This requires that leaders embed ethical reflection into innovation cycles. Human-centred design must be more than UX jargon; it must inform the purpose of AI integration.
Technology should be accountable to human values, not the other way around.
Brands like The Body Shop [https://www.thebodyshop.com] demonstrate how values-led innovation can coexist with technological adoption. Their emphasis on ethical sourcing, sustainability, and activism shows that technology should not replace human judgment but reinforce shared values. They remind us that brands are not only economic actors but cultural ones, with a duty to uphold human dignity.
Leadership today demands tech-literacy married with philosophical depth. Companies need ethicists, humanists, and social scientists guiding AI deployment, not just engineers and marketers. It also requires a broader public conversation about where to draw the line.
Should AI be used to make parole decisions? Should it be used to analyse children’s emotions in schools? These are not technical questions; they are ethical ones, and only humans can answer them responsibly.
Reclaiming Common Sense | Drawing the Line Between Aid and Abdication
Common sense must guide our engagement with AI. Machines excel at pattern recognition, not ethical reasoning. They can detect a tumour but cannot comfort a patient. They can mimic style but not understand story. Knowing where machines end and humans begin is vital, and this boundary must be defended with cultural clarity, not regulation alone.
Jacques Ellul insisted that society must evaluate technology not just by its utility but by its impact on our way of life. This calls for vigilance, critical thought, and ethical clarity. AI should augment, not replace, our intelligence. We must resist the comfort of technological determinism, the idea that progress is inevitable and beyond questioning.
As Adorno warned, when we surrender to systems without questioning them, we risk becoming passive participants in our own dehumanisation. The responsibility lies not in the machine, but in us.
Common sense is not an obsolete virtue; it is the last defence against surrendering our agency.
We must foster a culture that celebrates wisdom, not convenience; that prioritises complexity over simplicity; and that sees technology as a servant of the human spirit, not its replacement. The choices we make today will shape not just our tools but our humanity.
Sources
- Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction” – https://www.marxists.org/reference/subject/philosophy/works/ge/benjamin.htm
- Lewis Mumford, “Technics and Civilization” – https://monoskop.org/images/f/fa/Mumford_Lewis_Technics_and_Civilization.pdf
- Jacques Ellul, “The Technological Society” – https://www.goodreads.com/book/show/1057324.The_Technological_Society
- Erik Brynjolfsson and Andrew McAfee, “The Second Machine Age” – https://wwnorton.com/books/the-second-machine-age/
- Theodor Adorno, “The Culture Industry Reconsidered” – https://www.marxists.org/reference/archive/adorno/1944/culture-industry.htm
- Hannah Arendt, “The Origins of Totalitarianism” – https://archive.org/details/originsoftotalit0000aren