AI AIN’T OK
“What I like is we’re smart enough to invent AI, dumb enough to need it, and still so stupid we can’t figure out if we did the right thing.” – Jerry Seinfeld
In February 2026, the artificial intelligence company OpenAI announced it had agreed to make changes to the “opportunistic and sloppy” deal it had previously struck with the US government concerning the use of its technology in classified military operations. Chief Executive Sam Altman said the company would clarify the contents of its agreement and explicitly prohibit the use of its systems to spy on Americans.
So, who is in charge – man or machine? AI or the people behind AI? When you hear pundits fantasising about AI taking over from human beings, you can be sure that you are being manipulated by the powerful human interests that develop and control the AI phenomenon. AI provides those interests with a useful scapegoat for mistakes or immoral decisions.
Science has immense potential for good. Sadly, it also has immense potential for evil. It can be a blessing or a curse, depending on the use to which it is put. Science gave us MRI scans and nuclear weapons, cortisone and Covid. And science was invented by human beings once they realised that the cosmos is rational and therefore comprehensible by rational minds.
Science, like all human activities, can be perverted. Scientists are human, subject to the same flaws and foibles as the rest of us. You find as much greed, envy, lust, dishonesty, laziness, rage, and big egos in lab-coats as you do in business suits. Science, ideally, is rigorously rational, objective, and neutral; scientists are often emotional, subjective, and biased.
Science deals with material reality, things which can be weighed and measured. But reality is not confined to the world of matter. As a quote attributed to Albert Einstein tells us, “the things that count most cannot be counted.” Science can tell us how to make a nuclear weapon, but cannot tell us whether or not we should use it.
The unwarranted materialist assumptions of many people today, including many scientists, arise from a basic error that has plagued modern philosophy for 300 years and more. That is the common predisposition to conflate percepts with concepts, and ultimately, brain with mind. Percepts are mental images of objects perceived through the senses; concepts are products of the mind or intellect – immaterial, universal, and determinate.
This basic philosophical error underlies the misconception that the brain is just a computer, with the mind as its software. The facile reduction of thinking to mere mechanical computation is now commonplace, reinforced by the materialist mindset of many academics, and of a great number of people in society at large who are influenced by the science-fiction content of popular entertainment.
AI does not have intelligence, but simply does what computer architecture is designed to do, that is, simulate intelligence. It is designed to produce effects that mimic intelligence without requiring any actual intelligence from the machine. This applies to machine learning and all the latest developments of AI research, just as it did to the Turing test of yesteryear.
The philosopher, John Searle, pointed out that AI algorithms respond to the syntax of the representations they process, but not the semantics; that is, they process only physical properties, and not the meanings that those physical properties represent.
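Searle’s point can be made concrete with a minimal sketch (the rule table and function names below are hypothetical, chosen purely for illustration): a program that matches only the form of its input and emits canned replies can appear conversational while having no access to meaning at all.

```python
# Searle's argument in miniature: the program responds to the *syntax*
# of its input (character sequences) and never to its *semantics*.
# The rule table is a made-up example, not any real system.
RULES = {
    "how are you?": "I am fine, thank you.",
    "do you understand chinese?": "Yes, perfectly.",
}

def chinese_room(message: str) -> str:
    # Normalisation and lookup operate only on physical/formal
    # properties of the input string; no meaning is involved.
    return RULES.get(message.strip().lower(), "Please rephrase that.")

print(chinese_room("Do you understand Chinese?"))  # "Yes, perfectly."
```

The reply is fluent and apparently self-aware, yet it is produced by nothing more than string matching, which is precisely the gap between processing symbols and understanding them.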
People say AI will debilitate thinking, writing, and reading, but AI is in fact the technical embodiment of the decline of all three. It simply produces, far more efficiently, the same flawed thinking and mechanical clichés instilled in most people for generations by other human beings, who are usefully designated manipulators.
It is incoherent to argue that putting an algorithm into a computer somehow endows it with intelligence, or that human intellect is a kind of algorithm. Emotion recognition software is a case in point: its roots have been traced back to dubious research funded by the US Department of Defence in the 1960s, and a recent survey of more than 1,000 research papers found no evidence that a person’s emotions can be reliably inferred from facial expressions.
Emotion detection software is developed on the misguided belief that technology can answer questions about human nature that are not technical at all. After 150 years, the soft science of psychology still wrestles with controversial issues relating to belief systems, personality, emotional dynamics, human motivation, and more. Correlating images to simple, predefined, emotional states without reference to culture and context is never going to achieve a proper understanding of complex individuals whose feelings change hundreds of times each day.
Research into the harmful effects of AI is conducted largely by people, and with funding, from the tech industry, which is heavily invested in future profits from AI. Disagreements about the ethical development and application of AI, together with western society’s general moral confusion, obviously have a huge impact on AI research, and many questions are simply not allowed to be asked.
Urgent attention is needed to the narrow, obsessive focus on technical development, which excludes the essential consideration of how AI systems integrate with complex and critical social institutions like justice, healthcare, and education. These systems are controlled by powerful financial, political, and commercial interests representing the establishment elites.
Technology has been unleashed that is already causing widespread damage to society as we know it, and it is completely unregulated. Our troubled contemporary world is controlled by some of the biggest frauds, liars, and criminals in the history of humankind, yet people still believe that we should leave government and big business free to build and deploy ‘infallible’ truth-telling machines.
Few people are aware of the costs of AI and their inevitable impact on the lives of billions of people all around the world. Nobel Prize-winning economist Daron Acemoglu, a professor at the Massachusetts Institute of Technology, maintains that AI is most unlikely ever to find a way to profitability, given the mind-boggling capital required to build and maintain the necessary infrastructure. Data centres can cost tens of billions of dollars each. For example, IBM CEO Arvind Krishna acknowledged that building a data centre that uses only 1 gigawatt will cost an estimated $80 billion.
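The scale of that figure is easier to grasp with a quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Sanity check on the figure cited in the text:
# an estimated $80 billion for a 1-gigawatt data centre.
cost_usd = 80e9        # $80 billion (Krishna's estimate, as quoted)
power_watts = 1e9      # 1 gigawatt of capacity

cost_per_watt = cost_usd / power_watts
print(f"${cost_per_watt:.0f} per watt of capacity")  # $80 per watt
```

Eighty dollars of capital expenditure for every single watt of capacity, before a single query is served, gives some sense of why profitability is in doubt.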
Factor in the astonishing amounts of fresh water required for cooling in the data centres, and the massive drain on electricity supplies, and one gets the nagging suspicion that the lives of many human beings are now being deemed expendable by the establishment elites. And that brings us back to the ethical ramifications of AI development. As I wrote in an article, Artificial Leadership, a few years ago:
“What worldview defines the way the computer simulates thinking? Obviously, it is the one the programmer has put into it, whether it be his own, or that of his boss, or his boss’s boss. Does the computer’s output reflect the thinking of capitalists, socialists, secular humanists, or the establishment elites? Or an irreconcilable mishmash of all of the above? What moral universe does AI represent – natural law, virtue ethics, the Kantian categorical imperative, utilitarianism, emotivism, or the incoherent ethical hodgepodge all too common in the postmodern West?”
Complex robotic systems are typically designed with many layers of processing between detection and response. A large number of layers might conceivably make it impossible to know just how the system will respond to different developments, but that is no indication that the machine is making decisions. The robotic system will only execute the sequence of programmed steps provided by the algorithms, though the speed with which it functions will entice many to believe it is thinking. Robotic systems that cause death and destruction do so because of human agency, negligence, or error.
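As a hypothetical illustration (the layers and thresholds below are invented for the example), a layered pipeline of this kind can be sketched in a few lines; however many layers are stacked between detection and response, the output remains a fixed function of the input and the programmed rules:

```python
# A toy detection-to-response pipeline. Each layer is a programmed
# step; composing them yields complex behaviour, not decision-making.
def detect(raw: float) -> dict:
    return {"reading": raw}                    # sensor layer

def filter_noise(signal: dict) -> dict:
    return {"reading": round(signal["reading"], 1)}  # smoothing layer

def classify(signal: dict) -> str:
    return "obstacle" if signal["reading"] > 0.5 else "clear"

def respond(label: str) -> str:
    return {"obstacle": "brake", "clear": "proceed"}[label]

LAYERS = [detect, filter_noise, classify, respond]

def run(raw: float) -> str:
    value = raw
    for layer in LAYERS:          # execute the programmed sequence
        value = layer(value)
    return value

print(run(0.74))  # "brake"
print(run(0.12))  # "proceed"
```

Tracing any output back through the layers always terminates in rules a human wrote, which is why responsibility for what such systems do remains with human agency.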
Consider the inescapable tsunami of propaganda to which we are all exposed every day: influencer operations, coordinated messaging, precision advertising, linked bot activity, algorithmically amplified indignation, contrived consensus, and swathes of online accounts promoting echo-chamber opinions. AI facilitates unprecedented levels of psychological warfare and propaganda, all designed to bury the truth.
And that is the quickest way to kill leadership.