The skeuomorphic fallacy


What something is isn’t the same as what it uses to manifest that being. Also, robot CEOs are nonsense.


Chris von Csefalvay


6 November 2023

It appears that in what is clearly a wonderful little PR stunt, a Polish rum company managed to do a Sophia and appoint an ‘AI-driven’ ‘robot’ as its ‘CEO’.

The other guilty party to this pile of steaming bovine excrement is Hanson Robotics, famous for giving us Sophia, the “world’s first robot citizen”. Most of what I’m saying here goes just as well for Sophia. It blows my mind that Hanson managed to get away with this nonsense not once, but twice.

Dictador has just announced hiring the first world ever AI robot as a CEO of a global company. The new CEO is a human-like robot, incorporating AI. The robot is a woman, named Mika. She will be the official face of Dictador, the world’s most forward-looking luxury rum producer. This bold move consolidates the company’s position as one of the most advanced and thought-leading organizations globally. It underlines the brand’s passion for new technology and offers a positive disruption by bringing the future to what can still be a very traditional world.

Dictador, via PR Newswire

Oh good grief.

Now, let’s for a moment try to get over the obvious bullshit here, such as the fact that corporate officers generally have to be natural persons in most places, and may at worst be legal persons, but a ‘robot’ or an ‘AI’ is neither. You can no more appoint a ‘robot’ or an ‘AI’ as your CEO than you can appoint a pet rock to be your corporate secretary. I am, to the surprise of many, a recovering corporate lawyer, so I have at one point convinced examiners that I know what I’m talking about when it comes to this stuff. There are complex aspects of law involved in the question of who can or cannot, legally, be a corporate officer. But let’s get over that.

“Mika”, the “CEO” of Dictador. Artificial, yes. Intelligence, not quite. Uncanny Valley, totally.

Let’s also get over what an absolutely disastrous piece of nonsense this seems to be on its face. Here’s a quote from the ‘CEO’ itself:

In a Dictador company video, Mika said that “with advanced artificial intelligence and machine learning algorithms, I can swiftly and accurately make data driven.”

Kayla Bailey, ‘Mika’ becomes world’s first AI human-like robot CEO, Fox Business

It would definitely be useful for Mika to be able to finish a sentence. In the industry, crap like this is called a ‘sell signal’, and in a slightly more sane economic environment, Dictador would have the market value of, uh, dunno, an expired carton of eggs after this nonsense. But let’s get over that, too.

The commentary on this nonsense is, if anything, worse than the nonsense itself. Forbes contributor Cindy Gordon titled her article on the subject “How Should CEO’s Embrace AI Or Will AI Assume CEO Roles?”. Clearly, an understanding of when to use apostrophes is now beyond Forbes contributors’ ken. The world is in safe hands: people who cannot construct a sentence that would pass muster in a third-grade English class are now writing in Forbes about the future of corporate governance in the age of AI.

Let’s instead talk about skeuomorphy.


A skeuomorph is a design element retained from an earlier form of an object even though it is no longer necessary for the object’s function – kept purely for aesthetic or familiarity reasons. For example, your iPhone’s Notes app retained visual elements from physical note pads (such as the ‘legal pad yellow’ background) for a pretty long time, even though they were not in any sense necessary. Skeuomorphs are a way to acclimate our somewhat sluggish human brains to a changing world.

Skeuomorphs also illustrate an important point: what something appears as is not the same as what it is. The interface is not the object. The interface is, to say the blindingly obvious, the interface: the tool through which an object communicates. Confuse the two, and you get blinding bullshit like that ‘robot CEO’ nonsense.

On the other hand, we’re not in any way immune to what I shall call the skeuomorphic fallacy: the confusion of the visual, linguistic or other forms through which something communicates with the essence of that thing. Where something as sensitive as language is involved, which I have written about before, this hits particularly hard. For better or worse, we connect language so closely to the concept of being human that anything capable of producing decent linguistic output gets a level of human-like credence accorded to it prima facie.

And this is not entirely wrong, either. As I have stated before, language is a uniquely human trait that is so intricately intertwined with what it means to be human in a human society that it is hard to unlink it from personhood. It is not a necessary characteristic of personhood, but it has certainly been often seen as a sufficient one. To speak is to be able to speak up as well, and to be able to speak up is to be able to advocate for rights. It is a lot easier to accord human rights to someone who can speak up for those rights.

Harris makes this point, and my old law school professor Tim Endicott gives a wonderful refutation in The Infant in the Snow, one of the best papers on the philosophy of personhood and the origin of rights (and the qualifications needed to have them) that I have ever read.

Where skeuomorphy becomes a fallacy

The problem is, of course, that as noted above, the interface is not the substance. The reality is that Mika is no more human than my bedside lamp. The only reason nobody has appointed my bedside lamp CEO is, of course, that it does not pretend to look and speak like a human. But if we look at what it means to be a legal subject – be it a citizen (viz. Sophia) or a corporate officer (viz. Mika) – it becomes abundantly clear that there’s much more to it than being able to pretend to look and speak like a human. It is to actually be a human. And that is something that Mika, Sophia and all the other ‘AI’ and ‘robot’ nonsense out there are not.

I am really being incredibly generous by dignifying whatever these Hanson products do with the term ‘AI’. They are Disney animatronics with a bottom-tier language model and an average speech synthesiser. Artificial that may be, but intelligence it is not.

The skeuomorphic fallacy is the fallacy of confusing the interface with the substance – the appearance of being human with the essence of being human. Things go wrong when we conflate something that inhabits a vaguely humanoid body (being generous here) and wields a quintessentially human skill we are somewhat extremely attached to (i.e. language) with something that actually is human.

Which is a problem, because appointing someone CEO isn’t a publicity stunt. The ‘O’ in CEO stands for officer. A corporate officer is, even given limited liability, the person with whom the buck stops. Incorporation does create a separate legal (corporate) person, which insulates corporate officers from some aspects of liability, but not all of them. Lord Thurlow pointed out that a company couldn’t be expected to have a conscience if it had no body to kick and no soul to damn – the requirement that there eventually be some nominate humans in the process serves to a great degree to provide that conscience and that body. Even notions like corporate manslaughter, which became a thing in England & Wales just when I took corporate law (and a lovely mess that made of finals papers!), are predicated on the notion that there are human beings who, collectively, operate a system that is ultimately responsible for the actions of the company. You don’t necessarily have to have a body to kick, nor really a soul to damn, but you definitely do have to have a mens rea, and that is something neither a robot nor an AI is capable of.

This isn’t just a fine point about corporate law by a recovering legal philosopher. This is what happens when we lapse in our understanding of what it means to be human, and begin to accord human rights and responsibilities to non-human things. We’re cognitively geared to associate language with personhood, so this is not a particularly surprising lapse of reason. We’re humans, and that’s why our response to a speaking robot becoming CEO is “oh, that’s innovative” as opposed to “let’s consider if some people might need to spend some time away from their favourite hallucinogens and/or the general population”. If a company appointed my bedside lamp as a corporate officer, we’d be much more likely to consider it a sign that the latest office party was a little too generous with the acid. But because we’re cognitively geared to consider language a uniquely human trait, we’re much more likely to be tempted to consider this some Valley quirkiness as opposed to frank insanity.

Why so serious?

It’s a legitimate question: why are you so bothered by this nonsense? It’s just a PR stunt, after all. And you’re right: it is just a PR stunt. But it is a PR stunt that is part of a larger pattern of nonsense that is being peddled by the likes of Hanson Robotics and others. And it’s damaging.

Freddy Fazbear, titular antagonist of the Five Nights at Freddy’s franchise.

We don’t consider Freddy Fazbear, an animatronic bear, to be an authority on what it means to be an ursine individual. Why is that any more patently ridiculous than considering an animatronic human to be capable of bearing the indicia of human personhood, such as citizenship or corporate office?

It’s damaging to humanity, because it leads to a gradual misunderstanding of what the essence of human existence, of being human, is: it muddles the picture, confusing what things are with how things behave. It is Searle’s Chinese Room all over again: the idea that a system that behaves like a human is a human. It is not. It is a system that behaves like a human, no more, no less. And that is a very important distinction.

It’s damaging to the field of AI, because it leads to a misunderstanding of what AI is and what it can do. The field is already rife with hype and nonsense, and stunts like this just add to the pile. It is already struggling to be taken seriously, and this kind of nonsense only makes that struggle harder.

There are powerful, sophisticated AI algorithms that identify disease in cytopathological specimens, find cracks in compressor blades and detect anomalies as part of intrusion detection systems. These are deeply intricate solutions. They are also deeply unsexy. They are not the kind of thing that gets you on the cover of Forbes – just the kind of thing that makes a difference in real lives day in, day out, the kind of thing that is being drowned out by the nonsense being peddled here. Many of these systems have outputs that wouldn’t look out of place in the 1990s: terminal outputs, text files, CSVs.

They make this world a better, safer, healthier place. They don’t need animatronic bodies. They just do something important, and do that well. And that is what AI is about: it is not about making a PR stunt out of a glorified animatronic. It is about making a difference, and that is what we should be focusing on.

Which makes Mika an unwelcome distraction, and one that should be a concern for all of us. It’s the symptom of a disease – a fallacy of confusing the interface with the substance – that is spreading. And it’s a disease that we should be fighting, lest it make us worse at being humans, and worse at being AI researchers.

The world is looking to us to call out this kind of bullshit. So, please, as part of our professional involvement with the world, let’s start doing that with extreme prejudice.


BibTeX citation:
  @misc{voncsefalvay2023skeuomorphic,
    author = {Chris von Csefalvay},
    title = {The Skeuomorphic Fallacy},
    date = {2023-11-06},
    url = {},
    doi = {10.59350/k9kka-90x56},
    langid = {en-GB}
  }
For attribution, please cite this work as:
Chris von Csefalvay. 2023. “The Skeuomorphic Fallacy.”