The Fear Factor

Update: Adam has a great response, much of it from his own (rather different yet no less fascinating and important) perspective, which should be a valuable read for anyone who found this post even mildly interesting.

As I was perusing Twitter, I bumped into this sponsored tweet by Shell, promoting Sensabot, a ruggedised remote ops robot designed for operations in dangerous environments by Carnegie Mellon’s NREC and now apparently adapted/adopted by Shell:

That sounds great… unfortunately, it strikes me, it misunderstands a couple of things – namely,

  1. fear, and the function it has in the human mind (not just the psyche – fear is a primarily neural response, secondarily perhaps cognitive and far behind it is any psychological aspect thereof), and
  2. what a robot ought to do/know.

Now, this might just be a marketing puff, though it probably is in its own way true – I have yet to see robots with a specific system catering for fear. It is also hardly just Shell and NREC I’m singling out here – the points are much more generic and have more to do with what robots do and what they therefore ought to understand about being human.

The gift of fear

When I was a child, I once managed to piss off my grandfather enough to have him lock me in the shed. The shed smelled of chicken feed, something that turns my guts upside down to this day. Now, the shed was pretty ok, all things considered, albeit dark and damp. However, I shared it with what by my youthful estimation must have been several hundred small insects, spiders, centipedes and other creepy-crawlies.

I love nature. I just hate the things it sometimes produces. Safe to say if it does not have a brain and does creepy-crawly stuff, I will in all likelihood be creeped out by it.

So there I was, locked in with what I now know were more along the lines of a few thousand of these critters. I screamed like a banshee, until my grandmother took pity on me and released me.

Years and years later, memories of this crawled their way back into my mind at the most inopportune times. It was an uncharacteristically cool midsummer day, in the ageless and timeless beauty that you only get at NSC Bisley, the capital of UK target rifle shooting. Looking through the high-resolution scope of my rifle at Stickledown, the long distance (1,000+ yd) match rifle range, I was immersed in doing calculations of wind and gust patterns and adjustments and projectile drop in my head, trying to look out for any sign of crosswind at what is rightly known as one of the most treacherous ranges in the world of long distance target rifle shooting. That was when I suddenly felt that familiar crawl of a spider up my right leg. Had it not been for the fact that freaking out like a lunatic at a firing range while holding a stupidly overpowered sniper rifle in the middle of a few dozen other shooters on edge at what is to many of them a career match is generally frowned upon, I would probably have screamed. Instead, I unloaded, kicked the spider off me and tried to settle down, but by that time, it was all lost. My heart, pickled in adrenaline, pounded and pounded and I got some stupidly bad shots away before I conceded. And that’s how great my first Imperial Meeting went, back in 2006.

I was so bothered by this episode, I applied my usual method to it – reading every single book ever written on the subject of fear and anxiety responses (years later, when I would once again have to face scary memories from my past, much of that reading would prove helpful in retaining a modicum of sanity). I read tomes of evolutionary biology, a field so unfamiliar to me I had to ask a fellow student to give me the Cliff’s Notes of it. I read a fantastic book, The Gift of Fear by Gavin de Becker, in which he explains the importance of fear signals in avoiding violence. I read copiously on the physiology of fear responses, of the need for the inotropy and chronotropy[1] of adrenaline, and the reason why battle cries and battle shouts are a universal feature of human civilisations.

And eventually, I came to realise that while fear probably lost me any stab I had at the Halford, the much coveted 1,100yd/1,200yd trophy that year (if we’re being honest here, any such chances were… fairly slim at best), it was a crucial part of getting my species, and quite probably the individual self, to that point. Of course, because this is not a Hollywood blockbuster about facing one’s fears and winning, I never went back to Bisley after this event and I would in fact never again shoot in a public competition. I would, however, spend plenty of time on the more fortunate end of a rifle, and never have another problem with it – even in inhospitable climates with various nasty creepy-crawlies.

Now, there’s a reason I’m giving a little personal vignette here – and that is to show that we’re more familiar with the adverse effects of fear than we are with its ‘gift’, to use de Becker’s term. We, as a society, are in the mindset I was in as I walked off the firing point and tried to figure out how the hell I was going to explain all this to my coach without becoming the club joke. Fear is bad. Fear is so bad that, if you get too much of it, many opt for taking medication or seeking professional help (and rightly so!). “Freedom from fear”, one of FDR’s often-quoted four freedoms, has put fear on the level of starvation, religious persecution and oppression of free expression. Fear is a big deal.

And justly so. Fear is, well, not nice. It puts people ‘on edge’, which is the point: fear is a prioritisation mechanism – it puts efficiency and survival ahead of communication and courtesy, and leads you to be perceived as unpleasant. Fear, especially long-term heightened stress response (sometimes known as the ‘biological embedding of anxiety’), can have utterly deleterious effects on long-term health,[2] and there now seems ample evidence that the damage is on a genetic level,[3] i.e. capable of being passed on. The harm of fear thus becomes intergenerational.

At the same time, fear is necessary for humans, and it does not take much to think of a scenario in which your entire ancestral line could have been wiped out had it not been for your slightly anxious great-great-great^n-ancestor, so pathologically afraid of floods that he insisted on dwelling on a hill, or so fearful of sabre-tooth tigers that he always carried a sharp object that might just be credited for his survival. In fact, Marks and Nesse (1994) argue that eliminating fear would be by no means exclusively positive – they even posit a pathological state of low fear, which they term ‘hypophobic disorder’.[4] Certainly the lack of fear response, whether induced experimentally by ablation of metabotropic glutamate receptor subtype 7[5] or by interneuronal ablation through inhibition of the Dlx1 gene,[6] or witnessed in the context of pervasive neurodevelopmental disorders both in models[7] and in vivo,[8] has significant evolutionary drawbacks – it is not hard to see how behaviour without fear in a world of danger can quickly lead to reduced life expectancies.

And so we get to the Sensabot, our ‘fearless’ robot. What benefits does his fearlessness yield us? None, I submit, for the reasons below.

Fear is a diverse phenomenon. Cutting it out is neurosurgery with a hatchet.

Fear is a single word (albeit one replete with synonyms) for a number of states of mind that connect somewhere in the human reactions they evoke via the amygdala and thence the limbic system, quite prominently the hypothalamic fight/flight response.[9] It bundles together your fear of spiders (a genuine phobia), your fear of nuclear war (a longer-term anxiety), your fear of wasting your life (an even less acute, more existential perception) and so on. Cutting it all out is doing neurosurgery with a hatchet. A not very sharp one, either.

To put it in the context of the robot: of course, a robot who is not worried about spiders, doesn’t hyperventilate and sweat when handling dangerous substances and doesn’t freeze at the sight of a few zombies emerging from the neighbouring compartment (a risk I doubt Shell needs to envisage at this point, of course!) lacks maladaptive manifestations of fear. These intersect somewhere in the shared fact that they’re either disproportionate to the risk (spiders) or unproductive in the situation (handling dangerous substances and freezing at the sight of a zombie).

But what about other aspects of fear? Fear is a natural way to warn us of existential danger. The Sensabot relies on the human operator to have at least some degree of that, but more autonomous bots will not have an operator to lean on. Nor do operators act the same when they’re in the cockpit as when they’re piloting an unmanned vehicle.[10] Of course, Sensabots can be replaced far more easily than humans, but that’s irrelevant here, both axiomatically and practically (the collateral damage of e.g. an explosion caused by an incorrect action might well be an actual human in the area). Nor does the bot have the neurophysiological advantages that fear confers – specifically, the cardiac effects that enable faster movement, better cognitive capabilities and so on. Fear is a reserve, and machines don’t have that reserve.

Fear prioritises.

Fear is best represented as a vector, having both magnitude and direction (example: “I’m very afraid (magnitude) of spiders (direction)”). Different magnitudes help prioritise for immediacy and apprehended risk (likelihood times expected loss). Of course, it is not possible to simply bestow this upon a computer, and there are other methods of prioritising risk, but the great benefit of fear is that it distils signals down into a simple and fast calculation that is remarkably rarely wrong. It does so by considering a current signal in the context of all signals, the signal space of all possible signals, as well as learned patterns and the wider context in which the entire process is taking place. The decision whether to be afraid of something is, actually, quite complex.
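The vector picture above can be made concrete in a toy sketch. All threat names, probabilities and weights below are hypothetical, invented purely for illustration: magnitude is likelihood times expected loss, modulated by a learned context weight, and ‘fear’ simply attends to the largest magnitude first.

```python
# Toy sketch of fear-as-vector prioritisation. All numbers are invented:
# magnitude = likelihood x expected loss, scaled by a learned context weight.

def fear_magnitude(likelihood, expected_loss, context_weight=1.0):
    """Apprehended risk: likelihood times expected loss, modulated by context."""
    return likelihood * expected_loss * context_weight

# Each entry is a 'vector': a direction (what is feared) and a magnitude.
threats = {
    "spider on leg": fear_magnitude(0.9, 2.0, context_weight=5.0),  # phobia inflates the weight
    "gas leak": fear_magnitude(0.1, 100.0),
    "missing a shot": fear_magnitude(0.5, 1.0),
}

# Fear prioritises: attend to the largest magnitude first.
for threat, magnitude in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(f"{threat}: {magnitude:.1f}")
```

Real fear, as the paragraph above stresses, folds in far more – memory, arousal, the whole signal space – than a single multiplicative weight can capture; the sketch only shows why having both magnitude and direction turns prioritisation into a fast one-pass sort.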

This prioritisation is often lampooned when it appears to go wrong, typically when people are more afraid of certain rare risks than of more frequent ones. Such caricatures of the way human fears work tend to get several things wrong – they reduce the situation to mere probability when in reality the likelihood of loss, the manner of loss, its impact on others, its impact on your wider community at large and so on are all taken into account. That’s why more people fear terrorist attacks than car accidents, even though the latter are much more frequent. Context is everything, and if we learn only one thing from fear, let it be that the evaluation of risks takes place in a very wide context. With its holistic nature – involving the limbic system, the medial prefrontal cortex (mPFC) for memory, somewhat affected by the person’s state of arousal and HPA axis function, mediated by sensory perceptions mixed with our interpretations thereof – fear is a highly multifactorial response that can be promoted, mediated or inhibited by a number of factors along the way. There is now a degree of awareness in the literature that learned and innate fears are differentiated in their propagation pathways (specifically, in the involvement of the prelimbic mPFC),[11] indicating again that fear is a single response produced by different processes reacting to different stimuli. This underlines how fear, rather than simply getting one’s brain pickled in adrenaline, is a complex phenomenon. From an evolutionary perspective, this complexity calls for an explanation – the holistic nature of fear comes, of course, at the expense of the time it takes to trigger the release of adrenaline and the fight/flight response.
That explanation is, of course, that fear has to accomplish more objectives than merely recoiling, reliably and every time, from a trigger: it has to weigh whether a response is going to put us in more danger, it has to weigh our resources against plans of escape, it has to consider the entire context of a situation, including the past (via the memory activity of the prelimbic mPFC).

Humans are afraid. Their robot co-workers need to understand this.

My learned friend Adam Elkus has made a great point:

He’s entirely right – if robots and humans work together, robots need to have what is sometimes referred to as a ‘theory of mind’ – an internal concept of an external mind – in order to anticipate and understand their workmates. And humans, well, humans experience fear. There’s no getting around that. An interesting consideration here would be a fear-related adaptation of Baron-Cohen’s Sally-Anne test,[13] which I will call the Sally-Zombie test. Anne is, in this case, replaced by a rotting carcass reanimated by dark forces. A robot ought to be able to reason about Sally’s reaction to this. Merely predicting a response is unlikely to be enough – even in the absence of in-depth knowledge of Sally’s past decisions, it will be hard to predict whether she will freeze, run or reach for the nearest object to decapitate her once-friend-turned-zombie with. The robot, thus, ought to understand multiple things here:

  • The human emotional context: humans and their fear of their body being taken over, a fear rooted in the human appreciation for autonomy and capacity.
  • The human cultural context: zombie movies are a staple of Western culture and, at least to Western observers, zombies are unequivocally scary (more advanced models would need to consider cultural differences, e.g. the response Sally would have had she grown up in a culture, such as Haitian Voodoo or Palo Mayombe, where zombies occupy a more complex albeit still fear-inducing position).
  • The human personal context: what are Sally’s experiences with zombies? With similar stressors? With the concept of losing a friend to an abomination?
  • The human physiological context: given Sally’s state, what is she most likely to experience when her body goes through the motions of fear/panic and the corresponding neural level reactions?

The fact is that the qualia of fear is a uniquely human perception.[14] No machine, however intricate, will ever have the qualia of fear. Whether it needs the qualia in order to do its job is, however, debatable. At the same time, fear’s sheer richness and multifactorial nature, as well as the extent of its manifestations, make it unsuitable to a merely scripted level of understanding. While some basic features can easily be responded to without an understanding of fear (“If Joe is around tons of highly explosive material, Joe will have a heart rate exceeding his basal heart rate by approximately 10-25% and experience palmar hyperhidrosis”), most cannot. And that means that robots, to be effective, have to develop something in between the unattainable qualia and the insufficient scripted understanding.
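The scripted level of understanding can be shown as a minimal sketch. The 10-25% band comes from the quoted rule; Joe’s basal heart rate and everything else are hypothetical. The point is that a flat stimulus-to-sign rule carries none of fear’s context, memory or culture:

```python
# Minimal sketch of a scripted (rule-based) model of fear, per the quoted rule:
# near explosives -> heart rate 10-25% above basal, plus palmar sweating.
# It knows nothing of context, memory or culture - which is exactly the point.

def scripted_fear_model(near_explosives, basal_heart_rate):
    """Predict the (low, high) heart-rate band from a single binary stimulus."""
    if near_explosives:
        return (basal_heart_rate * 1.10, basal_heart_rate * 1.25)
    return (basal_heart_rate, basal_heart_rate)

low, high = scripted_fear_model(True, basal_heart_rate=60)
print(f"predicted heart rate: {low:.0f}-{high:.0f} bpm")  # 66-75 bpm
```

The sketch covers exactly one scenario; the Sally-Zombie test above is precisely the kind of situation in which such a flat rule yields no useful prediction at all.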

It’s important to understand that this is not merely about robots understanding how their human companions will think, but also about allowing the latter to understand how their robot coworker would perceive a situation. A coworker who lacks, say, an ordinary human level of fear might not only endanger his companions; his actions are also likely to be unintelligible to them: why is Steve running towards the madly out-of-control spinning saw blades without any protective equipment?! Fear is so fundamental to being human that it is part of the unwritten set of shared presumptions that help us understand and anticipate each other. This is a system into which robots will, someday soon, integrate themselves.

Conclusion

Do we want robots to be afraid? Some applications, such as the recent proliferation of humanoid, emotionally expressive robots for therapeutic, educational[15] or general usage purposes,[16] certainly rely on at least an understanding of where the display of the physiological-communicative responses to stimuli is appropriate. But that’s not where the story ought to end. In fact, robots that need a more than trivial ability to reason on their own, especially in a human context, will need some grasp of fear. There is much that robots can do better than humans – they don’t feel pain, remorse, regret, doubt, boredom or fatigue. To inject into what one might perceive as almost perfect creatures the maladaptive aspects, too, of human responses to perceived dangers sounds counterintuitive. At the same time, on the large (evolutionary) scale as well as the individual long-term scale, fear is a gift. It is a gift we as humans must consider passing on to our creations.

References

1. That’s ‘making your heart beat stronger’ and ‘making your heart beat faster’ in human language, respectively.
2. Miller, Gregory E., Edith Chen, and Karen J. Parker. “Psychological stress in childhood and susceptibility to the chronic diseases of aging: moving toward a model of behavioral and biological mechanisms.” Psychological Bulletin 137.6 (2011): 959.
3. Sasaki, Aya, Wilfred C. de Vega, and Patrick O. McGowan. “Biological embedding in mental health: An epigenomic perspective.” Biochemistry and Cell Biology 91.1 (2013): 14-21.
4. Marks, Isaac M., and Randolph M. Nesse. “Fear and fitness: An evolutionary analysis of anxiety disorders.” Ethology and Sociobiology 15.5 (1994): 247-261.
5. Masugi, Miwako, et al. “Metabotropic glutamate receptor subtype 7 ablation causes deficit in fear response and conditioned taste aversion.” The Journal of Neuroscience 19.3 (1999): 955-963.
6. Mao, Rong, et al. “Reduced conditioned fear response in mice that lack Dlx1 and show subtype-specific loss of interneurons.” Journal of Neurodevelopmental Disorders 1.3 (2009): 224.
7. Markram, Kamila, et al. “Abnormal fear conditioning and amygdala processing in an animal model of autism.” Neuropsychopharmacology 33.4 (2008): 901-912.
8. Consider DSM-IV 299.0, at Associated Features and Disorders, para.1
9. Said at risk of massive oversimplification. It is WAY more complex than that, of course, and the mPFC as well as other parts of the brain play a significant role. There is an increasing understanding that some of our most fundamental emotions like fear are as close as one gets to global in the brain!
10. As this song attests.
11. Corcoran, Kevin A., and Gregory J. Quirk. “Activity in prelimbic cortex is necessary for the expression of learned, but not innate, fears.” The Journal of Neuroscience 27.4 (2007): 840-844.
12. Adam Elkus.
13. Baron-Cohen, Simon, Alan M. Leslie, and Uta Frith. “Does the autistic child have a “theory of mind”?.” Cognition 21.1 (1985): 37-46.
14. Buck, Ross. “What is this thing called subjective experience? Reflections on the neuropsychology of qualia.” Neuropsychology 7.4 (1993): 490.
15. Movellan, Javier, et al. “Sociable robot improves toddler vocabulary skills.” Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. ACM, 2009.
16. Consider in this field Hashimoto, Takuya, et al. “Development of the face robot SAYA for rich facial expressions.” 2006 SICE-ICASE International Joint Conference. IEEE, 2006. and Itoh, Kazuko, et al. “Various emotional expressions with emotion expression humanoid robot WE-4RII.” Robotics and Automation, 2004. TExCRA’04. First IEEE Technical Exhibition Based Conference on. IEEE, 2004.

Pain and the Saint

Hardly had the news of Mother Theresa of Calcutta’s canonisation reached around the world when the age-old criticisms were wheeled out once again and dusted off, as befits the tired tropes they are. Many of them are somewhere on the road between Ridiculousville and Obscenetown, and of course some are just plain exaggerations. There is one, however, that is a little more complex, and that hits a little closer to home.

This one goes something like this: “Mother Theresa, because of her beliefs, glorified pain, and therefore let poor people die without adequate pain relief”. The first part is pure speculation, based on the second part, which derives from a letter by the surprisingly undistinguished Dr Robin Fox, editor of the Lancet from 1990 to 1995, that examined the state of medical care in Mother Theresa’s Calcutta institution.[1] Four things before all else:

  • One, it’s what is called a ‘letter to’, and as such not subject to peer review before publishing. Especially not if it’s by the editor. It is not a peer reviewed study. It is not scientific evidence. It is not science. It is a travel report, presented without corroboration.
  • Two, Dr Fox is not an anaesthesiologist, a specialist in palliative care or an expert in terminal care analgesia. He did not bother to take one along, domestic or foreign.
  • Three, and most damningly: Dr Fox did not compare what he saw in Calcutta with what he would have seen at any other Indian hospital. Had he done so, he might have learned a little about the difficulties of obtaining adequate pain medication in India.
  • Four, saints are not perfect, and neither is the Lancet. In recent years, the Lancet has committed a number of fairly egregious sins against good research, viz. the Burnham Iraq mortality paper,[2] the Bristol Cancer Centre study[3] and more than a few other blunders. These are just the biggies. And these were actual papers, i.e. peer reviewed. Imagine the non-peer-reviewed stuff. Non-scientists tend to have an elevated image of journals, especially those well known and with a high Impact Factor, like Science, The Lancet or the NEJM, ignoring that they, too, are fairly flawed products of fairly flawed human institutions.

The religious angle

Let’s dispose of one of the more pernicious arguments right here. It is sometimes argued that Mother Theresa intentionally let people die in pain because suffering is great and Catholics hate adequate analgesia. That’s false on both counts.

There’s a difference between saying ‘suffering is meaningful’ and ‘suffering is great’. There’s nothing positive about avoidable suffering. Consider §2279 of the Catechism of the Catholic Church:

Even if death is thought imminent, the ordinary care owed to a sick person cannot be legitimately interrupted. The use of painkillers to alleviate the sufferings of the dying, even at the risk of shortening their days, can be morally in conformity with human dignity if death is not willed as either an end or a means, but only foreseen and tolerated as inevitable. Palliative care is a special form of disinterested charity. As such it should be encouraged.

The above is completely in line with modern medical ethics, which permits ‘terminal sedation’ or ‘terminal analgesia’ – administering adequate pain medication (which often can mean rather high doses) even if this will almost inevitably hasten death (due to the respiratory-suppressive effects of opiates/opioids) – but not the intentional use of pain medication to kill. There is a whole realm of ethics, medical and otherwise, on the doctrine of double effect that is involved here (as a good starter, interested readers should consider Gillon (1986)), but what is clear without a doubt is that this position of the Church, binding on Mother Theresa and, insofar as one can tell, flawlessly applied, does not oppose proper analgesia at any point. As such, she was not getting kicks out of people dying in pain. In the first place, she started her work to do what she could to keep people from dying an undignified and horrible death on the streets, and instead let them spend their last days or hours in dignity. The cost of this was, as her own writings reveal, extreme emotional distress, nightmares and what could without doubt be diagnosed as a severe anxiety disorder. If anything about dispensing medicine at her house is strange, it is that she did not start dipping into the Xanax jar.

The medical angle

Perhaps it deserves mention that Mother Theresa’s order did not run a hospital, or a hospice in the modern sense of the word. In fact, hospices in the Western sense, which Dr Fox seems to compare Mother Theresa’s institutions with, did not exist in India at the time and remain fairly scarce. She ran an institution with very modest means, staffed by volunteers, that aimed at giving dying people some dignity. None of them were forcibly picked up on the street by jack-booted nuns and told they’re going to go to Mother T’s, or else. It was up to each of them to decide whether they wanted to come or not.

As such, the criticism that the institution did not distinguish between the terminal and non-terminal is rather strange, because 1) they were not a hospital or hospice in the modern, Western sense of the word, 2) their care was not specific to the dying – the sick can derive significant help from being in a clean, safe environment, 3) they lacked the medical resources.

In an alternate universe, Mother Theresa would have had at her disposal the funds needed to run a properly staffed medical institution, with doctors and referrals and all the drugs in the world. In that perfect universe, perhaps the abject poverty that meant the alternative was dying on the streets would not have existed, either. Of course the care she administered was, when judged from the perspective of a hospital, inadequate. But she was at no point running a hospital. Much as you don’t expect your hairdresser to have an M.D., her institution was what it was. Equally, canonisation is what it is – it is not a medical doctorate, nor the Church sending an ‘atta girl’ for a hospital well run.

The personal angle

There’s a reason why this story hits home for me. I tend not to speak about this publicly, but I have been living a long, long time now with extremely severe, often interminable, pain. The international politics of pain medication and its availability, closely linked with narcopolitics, is one of my pet topics, one I can bore people with into a stone-cold stupor. What it boils down to is this: proper pain relief is an integral part of human dignity. Being in unmanaged or inadequately managed pain means the patient is inadequately treated, and ignoring pain relief is medical malpractice of the worst kind.

With that said, I’ve also been at the forefront of research into pain, both as a subject and as a participant. I have had the pleasure of trying quite a few modern approaches to pain management, and I’m hoping to be able to find some better ones. Throughout this, I have been aware of the risks and complexity of analgesia. With a frail, terminal patient, it gets even more complicated.

Strong pain medication is not like Tylenol that you can simply pop a few of and things get better. In general, patients are started on a low dose PRN (‘as needed’) oral opioid in conjunction with an NSAID, then eventually the PRN oral opioid is increased (a process called ‘titration’) until it manages their needs. Then, the PRN opioid is converted into a long-term opioid, such as a matrix patch, which releases the drug into your fatty tissues over time (usually three to five days) or a long-lasting time-release opioid formulation, together with low doses of the PRN opioid for ‘breakthrough pains’, pain spikes that are no longer treated adequately by the long-term pain medication. Alternatively, severely ill/bedbound patients may be offered a solution like PCA, which injects a constant stream of an opioid with the option of the patient to add a given number of ‘bolus’ doses for pain spikes – these are used e.g. in the post-surgical context, for the first day or so after a surgery. More complex pain management issues exist for chronic complex pain, such as spinal catheters, neurosurgery, implantable pain pumps, implantable spinal cord stimulators and so on. This is the state of the art, today, in the West, in 2016. In Mother Theresa’s days, pain patches were barely existent and certainly not available. The only thing they could have had was oral morphine sulphate or IV morphine.

Pain medicine is one of the most expensive branches of medical care, despite the fact that most pain medications are cheap as chips. The reason for that is the incredible attention required and the risk involved in administering pain medication. Especially before antagonists like naloxone became widely available and financially feasible, it would have taken a host of highly qualified doctors to appropriately dispense pain medication in Mother Theresa’s institutions. Dr Fox admonishes her for not stocking strong opioids, but really, should she not instead be praised for not stocking potentially fatal pain medicine that takes specialist care to administer – specialist care that her people lacked and that legally required doctors her houses did not have and could not afford to have on staff?

Perhaps the more appropriate angle to this is immense personal gratitude that in our world, we have ways of managing pain that the poorest of Calcutta have no access to. Truly, we are blessed. As were they, when Mother Theresa gave them the small mercy of giving them a modicum of care and respect for their dignity in their last hours.

One should never stop hoping and demanding for the world to improve. But nor shall one confuse a desire for a better world with a blanket condemnation of those who made do with the world they were handed. In a demanding and dire situation, so dire that it drove Mother Theresa herself to anxiety and insomnia in the beginning, she made the best she could with what she got. The cost of aspiring to a better world should not be the denigration of those who had to live in this one.


Immortal questions

When asked for a title for his 1979 collection of philosophical papers, my all-time favourite philosopher,1 Thomas Nagel, chose the title Mortal Questions – an apt title, for most of our philosophical preoccupations (and especially those pertaining to the broad realm of moral philosophy) stem from the simple fact that we’re all mortal, and human life is as such an irreplaceable good. By extension, most things that can be created by humans are capable of being destroyed by humans.

That time is ending, and we need a new ethics for that.

Consider the internet. We all know it’s vulnerable, but is it existentially vulnerable?2 The answer is probably no. Neither would any significantly distributed self-provisioning pseudo-AI be. And by pseudo-AI, I don’t even mean a particularly clever or futuristic or independently reasoning system, but rather a system that can provision resources for itself in response to threat factors, just as certain load balancers and computational systems we write and use on a day-to-day basis can commission themselves new cloud resources to carry out their mandate. Based on their mandate, such systems are potentially existentially immortal/existentially indestructible.3
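
The graph-theory point about severed transmission lines can be sketched in a few lines of Python: cutting a link may split the network into isolated components, but short of destroying every node, some part of the graph always survives. The topology below is entirely invented for illustration.

```python
from collections import deque

def components(nodes, edges):
    """Return the connected components of an undirected graph."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

nodes = ["London", "NewYork", "Tokyo", "Sydney"]
edges = [("London", "NewYork"), ("NewYork", "Tokyo"), ("Tokyo", "Sydney")]
print(len(components(nodes, edges)))                       # one connected graph
severed = [e for e in edges if e != ("NewYork", "Tokyo")]  # cut a backbone link
print(len(components(nodes, severed)))                     # two isolated islands
```

Cutting the transatlantic–transpacific link leaves two components, each still internally functional – isolation, not destruction.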

The human factor in this is that such a system will be constrained by the mandates we give it. Ergo,4 those mandates are as fit a subject for human moral reasoning as any other human action.

Which means we’re going to need that new ethics pretty darn’ fast, for there isn’t a lot of time left. Distributed systems, smart contracts, trustless M2M protocols, the plethora of algorithms that have arisen that each bring us a bit closer to a machine capable of drawing subtle conclusions from source data (hidden Markov models, 21st century incarnations of fuzzy logic, certain sorts of programmatic higher order logic and a few other factors are all moving towards an expansion of what we as humans can create and the freedom we can give our applications. Who, even ten years ago, would have thought that one day I will be able to give a computing cluster my credit card and if it ran out of juice, it could commission additional resources until it bled me dry and I had to field angry questions from my wife? And that was a simple dumb computing cluster. Can you teach a computing cluster to defend itself? Why the heck not, right?

Geeks who grew up on Asimov’s laws of robotics, myself included, think of this sort of problem as largely being one of giving the ‘right’ mandates to the system: overriding mandates to keep itself safe, not to harm humans,5 or the like. But any sufficiently well-written system will eventually grow to the level of the annoying six-year-old, who lives for the sole purpose of trying to twist and redefine his parents’ words to mean the opposite of what they intended.6 In the human world, a mandate takes place in a context. A writ is executed within a legal system. An order by a superior officer is executed according to the applicable rules of military justice, including the circumstances in which the order ought not to be carried out. Passing these complex human contexts, which most of us ignore as we do all the things we grew up with and take for granted, into a more complicated model may not be feasible. Rules cannot be formulated exhaustively,7 as such a formulation would by definition have to encompass all past, present and future – everything that can potentially happen. Thus, the issue soon moves on from merely providing mandates to what in the human world is known as ‘statutory construction’, the interpretation of legislative works. How are computers to reason about symbolic propositions according to rules that we humans can predict? In other words, how can we teach rules of reasoning about rules in a way that does not simply recurse this question (i.e. is not itself based on a simple conditional rule-based framework)?

Which means that the best that can be provided in such a situation is a framework based on values and target optimisation algorithms (i.e. what is the best way to reach the overriding objective with the least damage to the other objectives, and so on). Which in turn will require a good bit of rethinking of our ethical norms.
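
Such value-based target optimisation can be caricatured in a few lines: score each candidate action by its progress towards the overriding objective, minus weighted penalties for damage to the other objectives. Everything below – the actions, values and weights – is invented for illustration, not a serious design:

```python
def score(action, weights):
    """Progress on the overriding objective, minus weighted damage to the rest."""
    progress = action["primary"]
    damage = sum(weights[k] * action[k] for k in weights)
    return progress - damage

weights = {"harm_to_humans": 10.0, "self_damage": 2.0}  # harm dominates the trade-off
actions = [
    {"name": "reckless", "primary": 9.0, "harm_to_humans": 0.8, "self_damage": 0.0},
    {"name": "careful",  "primary": 7.0, "harm_to_humans": 0.0, "self_damage": 1.0},
]
best = max(actions, key=lambda a: score(a, weights))
print(best["name"])  # 'careful': less primary progress, far less damage
```

The ethical rethinking lives entirely in the weights: deciding what counts as damage, and how much, is precisely the value judgement no optimiser can make for us.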

But the bottom line is quite simple: we’re about to start creating immortals. Right now, you can put data on distributed file infrastructures like IPFS that is effectively impossible to destroy using a reasonable amount of resources. Equally, distributed applications on survivable infrastructures such as the blockchain, as well as smart contract platforms, are relatively immortal. Creating them is within the power of just about anyone with a modicum of computing skills. The rise of powerful distributed execution engines for smart contracts, like Maverick Labs’ Aletheia Platform,8 will give a burst of impetus to systems’ ability to self-provision, enter into contracts, procure services and thus even effect their own protection (or destruction). They are incarnate, and they are immortal. For what it’s worth, man is steps away from creating his own brand of deities.9

What are the ethics of creating a god? What is right and wrong in this odd, novel context? What is good and evil to a device?

The time to figure out these questions is running out with merciless rapidity.

Title image: God the Architect of the Universe, Codex Vindobonensis 2554, f1.v

References

1. That does not mean I agree with even half of what he’s saying. But I do undoubtedly acknowledge his talent, agility of mind, style of writing, his knowledge and his ability to write good and engaging papers that have not yet fallen victim to the neo-sophistry dominating universities.
2. I define existential vulnerability as the capability of being destroyed by an adversary without that adversary having to accept an immense loss or undertake a nonsensically arduous task. For example, it is possible to kill the internet by nuking the whole planet, but that would be rather disproportionate. Equally, destruction of major lines of transmission may at best isolate bits of the internet (think of it in graph theory terms as severing a connected graph into several disjoint connected components), but it takes rather more to kill off everything. On the other hand, your home network is existentially vulnerable. I kill router, game over, good night and good luck.
3. As in, lack existential vulnerability.
4. According to my professors at Oxford, my impatience towards others who don’t see the connections I do has led me to try to make up for it by the rather annoying verbal tic of overusing ‘thus’ at the start of every other sentence. I wrote a TeX macro that automatically replaced it with a neatly italicised ‘Ergo’. Sometimes, I wonder why they never decided to drown me in the Cherwell.
5. …or at least not to harm a given list of humans or a given type of humans.
6. Many of these, myself included, are at risk of becoming lawyers. Parents, talk to your kids. If you don’t talk to them about the evils of law school, who will?
7. H.L.A. Hart makes some good points regarding this.
8. Mandatory disclosure: I’m one of the creators of Aletheia, and a shareholder and CTO of its parent corporation.
9. For the avoidance of doubt: as a Christian, a scientist and a developer of some pretty darn complex things, I do not believe that these constructs, even if omnipotent, omniscient and omnipresent as they someday will be by leveraging IoT and surveillance networks, are anything like my capital-G God. For lack of space, there’s no way to go into an exhaustive level of detail here, but my God is not defined by its omniscience and omnipotence, it’s defined by his grace, mercy and love for us. I’d like to see an AI become incarnate and then suffer and die for the salvation of all of humanity and the forgiveness of sins. The true power of God, which no machine will ever come close to, was never as strongly demonstrated as when the child Jesus lay in the manger, among animals, ready to give Himself up to save a fallen, broken humanity. And I don’t see any machine ever coming close to that.

Actually, yes, you should sometimes share your talent for free.

Adam Hess is a ‘comedian’. I don’t know what that means these days, so I’ll give him the benefit of the doubt here and assume that he’s someone paid to be funny rather than someone living with their parents and occasionally embarrassing themselves at Saturday Night Open Mic. I came across his tweet from yesterday, in which he attempted some sarcasm aimed at an advertisement in which Sainsbury’s was looking for an artist who would, free of charge, refurbish their canteen in Camden.

Now, I’m married to an artist. I have dabbled in art myself, though with the acute awareness that I’ll never make a darn penny anytime soon given my utter lack of a) skills, b) talent. As such, I have a good deal of compassion for artists who are upset when clients, especially fairly wealthy ones, ask young artists and designers at the beginning of their career to create something for free. You wouldn’t tell a junior solicitor or a freshly qualified accountant to do your legal matters or your accounts for free to ‘gain experience’, ‘get some exposure’ and ‘perhaps get some future business’. It invalidates the fact that artists are, like any other profession, working for a living and have got bills to pay.

Then there’s the reverse of the medal. I spend my life in a profession that has a whole culture of giving our knowledge, skills and time away for free. The result is an immense body of code and knowledge that is, I repeat, publicly available for free. Perhaps, if you’re not in the tech industry, you might want to stop and think about this for five minutes. The multi-trillion industry that is the internet and its associated revenue streams, from e-commerce through Netflix to, uh, porn (regrettably, a major source of internet-based revenue), rely for its very operation on software that people have built for no recompense at all, and/or which was open-sourced by large companies. Over half of all web servers globally run Apache or nginx, both having open-source licences.1Apache and a BSD variant licence, respectively. To put it in other words – over half the servers on the internet use software for which the creators are not paid a single penny.

The most widespread blog engine, WordPress, is open source. Most servers running SaaS products use an open-source OS, usually something *nix-based. Virtually all major programming languages have open-source implementations – freely available and provided for no recompense. Closer to the base layer of the internet, the entire TCP/IP stack is open, as is BIND, the de facto gold standard among DNS servers.2 And whatever your field, chances are there is a significant open-source community in it.

Over the last decade and a bit, I have open-sourced quite a bit of code myself. That’s, to use Mr Hess’s snark, free stuff I produced to, among other things, ‘impress’ employers. A few years ago, I attended an interview for the data department of a food retailer. As a ‘show and tell’ piece, I brought them a client for their API that I had built and open-sourced over the days preceding the interview.3 They were ready to offer me the job right there and then. But it takes patience and faith – patience to understand that rewards for this sort of work are not immediate, and faith in one’s own skills to know that they will someday be recognised. That is, of course, not the sole reason – or even the main reason – why I open-source software, but I would be lying if I pretended it was never at the back of my mind.

At which point it’s somewhat ironic to see Mr Hess complain about an artist being asked to do something for free (and he wasn’t even approached – this is a public advertisement in a local fishwrap!) while using a software pipeline worth millions that people have built, and simply given away, for free, for the betterment of our species and our shared humanity.

Worse, it’s quite clear that this seems to be an initiative not by Sainsbury’s but rather by a few workers who want slightly nicer surroundings but cannot afford to pay for it. Note that it’s the staff canteen, rather than customer areas, that are to be decorated. At this point, Mr Hess sounds greedier than Sainsbury’s. Who, really, is ‘exploiting’ whom here?

In my business life, I would estimate the long-term return I get from work done free of charge at 200–300%. That includes, for the avoidance of doubt, people for whom I’ve done work and who ended up never paying me anything at all. I’m not sure how it works in comedy, but in the real world, occasionally doing something for someone else without demanding recompense is not only lucrative, it’s also beneficial in other ways:

  • It builds connections because it personalises a business relationship.
  • It builds character because it teaches the value of selflessness.
  • And it’s fun. Frankly, the best times I’ve had during my working career usually involved unpaid engagements, free-of-charge investments of time, open-source contributions or volunteer work.

The sad fact is that many, like Mr Hess, confuse righteous indignation about those who seek to profit off ‘young artists’ by exploiting them with the terrific, horrific, scary prospect of doing something for free just once in a blue moon.

Fortunately, there are plenty of young artists eager to show their skills who have either more business acumen than Mr Hess or more common sense than to publicly turn their noses up at the fearsome prospect of actually doing something they are [supposed to be] enjoying for free. As such, I doubt that the Camden Sainsbury’s canteen will go undecorated.

Of the 800 or so retweets, I see few who would heed a word of wisdom – the replies are awash with remarks in varying degrees confused, irate or just full of creative smuggity smugness. But for the rest, I’d venture the following word of wisdom:4

If you want to make a million dollars, you’ve got to first make a million people happy.

The much-envied wealth of Silicon Valley did not happen because they greedily demanded an hourly rate for every line of code they ever produced. It happened because of the realisation that we are all but dwarfs on the shoulders of giants, and that ultimately our lives will be made not by what we squirrel away but by what others share to lift us up, and what we share to lift up others.

You are the light of the world. A city seated on a mountain cannot be hid. Neither do men light a candle and put it under a bushel, but upon a candlestick, that it may shine to all that are in the house. So let your light shine before men, that they may see your good works, and glorify your Father who is in heaven.

Matthew 5:14-16

Title image: The blind Orion carries Cedalion on his shoulders, from Nicolas Poussin’s The Blind Orion Searching for the Rising Sun, 1658. Oil on canvas; 46 7/8 x 72 in. (119.1 x 182.9 cm), Metropolitan Museum of Art.

References

1. Apache and a BSD variant licence, respectively.
2. DNS servers translate verbose and easy-to-remember domain names to IP addresses, which are not that easy to remember.
3. An API is the way third-party software can communicate with a service. API wrappers or API clients are applications written for a particular language that translate the API to objects native to that language.
4. Credited to Dale Carnegie, but reportedly in use much earlier.