# How to write error messages that don’t suck

I spend most of my life interacting with software I wrote, or know intimately. Most of this code is what you would consider ‘safety critical’, and for that reason, it’s written in a particularly stringent, defensive style. It also has to adhere to incredibly strict API contracts, so its output has to be predictable. Error messages must be meticulous and detailed. But elsewhere, in the real world, with tough deadlines to keep, the best one can do is to write error messages that don’t suck. By that, I mean error messages that may not be works of art, but which do the job. Just how important this is, I learned thanks to a recent incident.

A few days ago, I logged into my bank account and saw that it looked like it had been raided by the Visigoth hordes. It turned out there was an interminably long series of transactions – not particularly big ones, but they do add up – for the same sum, to the same American airline, at around the same time. I will name and shame them, because I want them to fix this issue: Delta, we’re talking about you.

You see, my wife has been spending Christmastide in the US, visiting friends and family (and giving me some much-needed alone time to finish the first half of my upcoming book), and for the transatlantic long-haul segment of her homebound flight, we decided it would be prudent to upgrade her to a comfier seat. Except every time we tried to do so, the website returned a cryptic error message – “Upgrade failed” – and a code: 4477. So we tried again. Even when my wife used Delta’s phone booking service, the booking clerk couldn’t give her any better information as to where the problem might lie. Finally, using someone else’s card, it worked.

The problem was that each time, we were charged. The error message failed to convey one of the most important things an error message should tell you: what was changed, and what was not? Our assumption was that payment and seat upgrade formed an atomic transaction: either you were charged and the seat upgrade succeeded, or the seat upgrade failed and you were not charged. The fact that this was not so, and was not disclosed, could have led to some quite vexing consequences (such as our account being blocked for presumed fraudulent use).

And to think all of this could have been avoided by error messages that don’t suck.

## The five rules of writing error messages that don’t suck

There are error messages that are pure works of art. They’re something to aspire to, but in the real world, you often won’t have time to create such works of creative beauty, and have to settle for something acceptable: an error message that does not suck. Such an error message really only needs to explain four things.

• Why am I seeing this crap?
• What caused this crap?
• What, of all the things I wanted to happen, has happened before this crap hit?
• What, owing to this crap, has not happened, despite my wanting it to?

Bonus points are awarded for “how do I stop this crap from ever happening again?”, but that’s more the domain of the documentation than of error messages. Let’s take these one by one.

### Why am I seeing this crap?

Programming languages written for adults, such as Python, allow you to directly instantiate even base-level exception classes, such as IndexError or KeyError. The drawback is that these bare exceptions often don’t explain what actually happened. I encountered this recently in some code that, it turned out, I myself had committed (sometimes git blame really lives up to its name!).

Consider the following function:

import math

def mean_gb(image, x, y):
    """Returns the mean of the green and blue values of the pixel at (x, y)."""
    mean_gb = (image[x][y][0] + image[x][y][1]) / 2
    return math.floor(mean_gb)

To explain in brief: in Python, it is a convention to represent images as a tensor of rank 3 (a cube of numbers) using the n-dimensional array type (ndarray) from numpy, where two dimensions represent the x and y coordinates of each pixel and the third represents the channel’s sequential identifier.1 The function above retrieves the average of the green and blue channel values of an image represented in this way. To access a value $m_{i, j, k}$ of a tensor $m$ of rank 3, we can use the chained accessor syntax m[i][j][k]. And as long as we provide a tensor of rank 3, this function works just fine.

What, however, if we simply provide a tensor of rank 2 (a two-dimensional matrix)? We get… an IndexError:

In  [5]: mean_gb(a, 1, 1)
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-4-743d4fa2a9dd> in <module>
----> 1 mean_gb(a, 1, 1)

<ipython-input-3-fe96408de555> in mean_gb(image, x, y)
      2     """Returns the mean of the green and blue values of the pixel at (x, y)."""
      3
----> 4     mean_gb = (image[x][y][0] + image[x][y][1])/2

IndexError: invalid index to scalar variable.

From the perspective of numpy, this is completely true: we did request a value that didn’t exist. In fact, we tried to index a single scalar (the first two accessors already yielded a single scalar, and a scalar cannot normally be indexed), which we are indeed being told, albeit in a slightly strange language (it’s not that the index to the scalar is invalid, it’s that the idea of indexing a scalar is invalid).

However, for the user, this makes no sense. It does not point out to the user that they submitted a tensor of rank 2 for a function that only makes sense with tensors of rank 3. The proper way to deal with this, of course, is to write a custom error message:

def mean_gb(image, x, y):
    """Returns the mean of the green and blue values of the pixel at (x, y)."""

    if len(image.shape) == 3:
        mean_gb = (image[x][y][0] + image[x][y][1]) / 2
        return math.floor(mean_gb)
    else:
        raise IndexError(f"The image argument has incorrect dimensionality. The image you submitted has {len(image.shape)} dimensions, whereas this function requires the image to have 3 dimensions to work.")

Best practice: smart error messages
A good error message is as long as it needs to be and as short as it can be while not omitting any critical details.

The .shape property of a numpy.ndarray object provides a tuple containing the shape of the array. For instance, a 4-element vector has the shape (4,), a 3-by-5 matrix has shape (3, 5) and a tensor of rank 3 representing a 100×200 pixel image with three channels has shape (100, 200, 3). len(image.shape) is therefore a convenient way to determine the tensor dimensionality of an n-dimensional numpy array (see the short demonstration after the list below). In the above instance, if the dimensionality is anything other than three, an error is raised. Note that the error message explained

• what’s wrong (incorrect dimensionality),
• why that dimensionality is wrong (it should be 3, and it’s something else), and
• what the value is and what it should be instead.
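To see the .shape logic in action, here is a quick interactive sanity check using throwaway arrays (the sizes are arbitrary):

import numpy as np

image = np.zeros((100, 200, 3))   # a 100×200 pixel, three-channel image
image.shape                       # (100, 200, 3)
len(image.shape)                  # 3 – rank 3, so mean_gb() is happy

matrix = np.zeros((100, 200))     # a greyscale image is only rank 2
len(matrix.shape)                 # 2 – this is what triggers our error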

An equally correct, and shorter, error message would have been

IndexError("The image parameter must be a three-dimensional ndarray.")

Short but sweet. You could also subclass IndexError:

class Expected3DError(IndexError):
    def __init__(self, *args, **kwargs):
        super().__init__("Expected ndarray to have rank 3.", *args, **kwargs)

Best practice: exception/error classes
If a particular kind of error is prone to occur, it makes sense to give it its own class. The name itself can already hint at the more specific problem (e.g. that it’s the number of dimensions that’s at issue, not just some random index, in the case outlined above).

This could then be simply raised with no arguments (or, of course, you can build in additional fanciness, such as displaying the actual dimensions in the error message if the image or its shape is passed into the error):

def mean_gb(image, x, y):
    """Returns the mean of the green and blue values of the pixel at (x, y)."""

    if len(image.shape) == 3:
        mean_gb = (image[x][y][0] + image[x][y][1]) / 2
        return math.floor(mean_gb)
    else:
        raise Expected3DError()

This would yield the following output:

---------------------------------------------------------------------------
Expected3DError                           Traceback (most recent call last)
<ipython-input-49-743d4fa2a9dd> in <module>
----> 1 mean_gb(a, 1, 1)

<ipython-input-48-11cc8210b9d7> in mean_gb(image, x, y)
5         mean_gb = (image[x][y][0] + image[x][y][1])/2
6     else:
----> 7         raise Expected3DError()

Expected3DError: Expected ndarray to have rank 3.

As you can see, there are many ways to write error messages that tell you why the error has occurred. They do not need to replace a traceback, but they need to explain the initial (root) cause, not the proximate cause, of the error.

### What caused this crap?

Best practice: tracebacks/stack traces
Always ensure your stack traces are meaningful and root-cause-first (the first function to be mentioned is the innermost function).

Closely related to the previous point, your error message should provide some explanation of root causes. For instance, in the above example, explaining that the function expected a 3-dimensional ndarray described not only the reason the error message appeared but also why the value provided was erroneous. How much detail this requires depends strongly on the overall context of the API. However, users will want to know what actually happened. There is a way to overdo this, of course, as anyone who has seen the horror that is a multi-page Java traceback can attest. But in general, error messages that explain what is expected, and possibly also what was provided instead, spare long hours of debugging.

In user-facing interfaces, this is often the only way for the user to understand what the issue is and try to fix it. For instance, a form checker that rejects an e-mail address should at least try to explain which of the relatively few categories (throwaway domain, barred e-mail address, missing @, domain name missing at least one ., etc.) the issue falls under. Believe it or not, users do not like to play ‘let’s figure out what the issue is’! For instance, I often use e-mail subaddresses that help me filter stuff, so many of my subscriptions are under [email protected]. Not all e-mail field checkers allow the + symbol, even though it is part of a valid e-mail address2 – and so are some far more exotic things, such as escaped @-signs! To cut a long story short: tell your users where the issue is.
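As a minimal sketch of that form-checker idea – nowhere near full RFC 822 compliance (which, as noted, permits + and even escaped @-signs), and with the checks pared down to the bare minimum – the principle is to return the category of the failure rather than a generic ‘invalid address’:

def email_problem(address):
    """Returns a human-readable reason to reject the address, or None if it passes.

    A deliberately naive sketch – real RFC-compliant validation is far hairier.
    """
    if "@" not in address:
        return "The address is missing an @ sign."
    local, _, domain = address.rpartition("@")
    if not local:
        return "There is nothing before the @ sign."
    if "." not in domain:
        return "The domain after the @ sign must contain at least one dot."
    return None  # as far as this naive check is concerned, all is well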

Best practice: error numbers
Error numbers are fine in larger projects; Django, for instance, has a great system for them. However, if you do use error numbers, make sure you have 1) a consistent error number nomenclature, 2) a widely available explanation of what each error code means, and 3) if possible, a link to the online error code resolver in the error body.
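Here is a minimal sketch of what that can look like in practice – the class names, the code 4477 (borrowed from the Delta story above) and the resolver URL are all invented for illustration:

class UpgradeError(Exception):
    """Base class for upgrade failures, carrying a stable, documented error code."""
    code = 0
    resolver = "https://example.com/errors/"  # hypothetical online error code resolver

    def __init__(self, message):
        super().__init__(f"{message} (error {self.code}, see {self.resolver}{self.code})")

class PaymentNotRolledBackError(UpgradeError):
    """The card was charged, but the upgrade itself did not go through."""
    code = 4477

raise PaymentNotRolledBackError("Your card WAS charged but your seat was NOT upgraded.")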

There are multiple ways to accomplish this. Consider my failing channel averager function from the previous subsection. You could phrase the IndexError's message string in multiple ways. Here are two examples I considered:

# Version 1
IndexError("Dimension mismatch.")

# Version 2
IndexError("This function requires a 3-dimensional ndarray.")

If you have seen the source code, or spent some time thinking, both make sense. However, Version 2 is far superior: it explains what is expected, and while Dimension Mismatch sounds like a great name for a synthpop band, it isn’t exactly the most enlightening error message.

Keep in mind that while users of your code may insert a breakpoint and inspect a traceback, user interfaces do not afford them this option. Therefore, errors on user-facing interfaces should make it very clear what was expected and what was provided. While a technical user of your code, faced with the better Version 2 error message from above, could simply examine the dimensionality of the ndarray provided to the function, this is not an option for someone interacting with your code via, say, a web-based environment. Therefore, these environments should be very clear about why they rejected particular values, and try to identify what the user input should have looked like versus what it did look like.

### What, of all the things I wanted to happen, has happened before this crap hit?

Most people assume that transactions are atomic: either the entire transaction goes through, or the entire transaction is rolled back. Therefore, if a transaction is not atomic, and permanently changes state, it should make this clear. The general human reaction to an error is to try again, so it is helpful (Delta web devs, please listen carefully) to point out which parts of the transaction have already taken effect.

Best practice: non-atomic transactions
Non-atomic transactions can fake atomicity through built-in rollback provisions. Always perform the steps that are furthest from your system (e.g. payment processing) last, because those will be harder to roll back than any steps or changes in your own system, over which you have more complete and more immediate control.
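A minimal sketch of such a rollback provision – reserve_seat(), charge_card(), cancel_reservation() and the exception classes are all hypothetical stand-ins:

def upgrade_seat(booking, card):
    """Fakes atomicity: the step that is hardest to roll back happens last."""
    # reserve_seat etc. are hypothetical helpers for the sake of the example
    reservation = reserve_seat(booking)        # our own system – easy to roll back
    try:
        charge_card(card, reservation.price)   # external processor – hard to roll back
    except PaymentError:
        cancel_reservation(reservation)        # compensate: restore the original state
        raise UpgradeError("Your card was NOT charged and your seat was NOT upgraded.")
    return reservation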

This is especially important for iterator processors. Iterator processors go through a list of things – numbers, files, tables, whatever – and perform a task on each of them. Often, these lists can be quite long, and it is helpful if the user knows which, if any, of the items have been successfully processed and which have not. One of the worst culprits in this field that I have ever come across was a piece of software that took in BLAST queries, each in a separate file, and rendered visualisations. It would then proudly proclaim at the end that it had successfully rendered 995 of 1,000 files – without, of course, telling you which five files did not render, why they did not render, and least of all why the whole application did not have a verbose mode or a logging facility. Don’t be the guy who develops eldritch horrors like this.
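Accounting for every item costs next to nothing. A sketch of what the BLAST renderer should have done – render_visualisation() is a hypothetical stand-in for whatever the per-item task is:

succeeded, failed = [], {}
for query_file in query_files:            # query_files: the list of input files
    try:
        render_visualisation(query_file)  # hypothetical per-item task
        succeeded.append(query_file)
    except Exception as exc:
        failed[query_file] = str(exc)     # record which file failed, and why

print(f"Rendered {len(succeeded)} of {len(query_files)} files.")
for query_file, reason in failed.items():
    print(f"FAILED: {query_file} – {reason}")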

### What, owing to this crap, has not happened, despite my wanting it to?

The reverse of the above is that it’s useful for users to know what has not been changed. This is crucial, since users need to be aware of what still needs to be done for a re-run. This occurs not only in the context of error messages. One of my persistent bugbears is the Unix adduser command, which, after you have provided it with some details about the new user, asks you if that’s all ok, and notes that the default answer is ‘yes’. However, you never get any confirmation as to whether the user has been created successfully. This is particularly annoying when batch-creating users, as you cannot get a log on stdout of which users were created successfully unless you write your own little hack that echoes the corresponding message if the exit code of adduser is 0.

Best practice: tracking sub-transaction status
Non-atomic transactions that consist of multiple steps should track the status of each of these, and be able to account for what steps have, and what steps have not, been performed.

Where a particular transaction is not atomic, it is not sufficient to simply point out its failure. Unless the transaction can ‘fake atomicity’ with rollbacks (see best practice note above), it must detail what has, and what has not, occurred. It is important for the code itself to support this, by keeping track of which steps have been successfully carried out, especially if the function consists of asynchronous calls.
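A sketch of such a ledger – the step functions and TransactionError are hypothetical, but the pattern carries over to any multi-step transaction:

def run_transaction(order):
    """Tracks sub-transaction status so a failure can report what did and did not happen."""
    # reserve_stock, schedule_delivery, charge_customer, TransactionError: hypothetical
    steps = [("reserve stock", reserve_stock),
             ("schedule delivery", schedule_delivery),
             ("charge customer", charge_customer)]   # hardest to roll back, so last
    completed = []
    try:
        for name, step in steps:
            step(order)
            completed.append(name)
    except Exception as exc:
        pending = [name for name, _ in steps if name not in completed]
        raise TransactionError(f"Completed: {completed}. Not performed: {pending}.") from exc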

## Conclusion

Writing great error messages is an art in and of itself, and it takes time to master. Unfortunately, it is a vanishing art, usually under the pressure of developing software as fast as possible, with various agile methodologies ending up pushing error messages into a relatively unobtrusive backlog. Error messages that don’t make a smidgen of sense will still pass tests – but they are the things that can turn users off your product for good.

The good news is that your error messages don’t have to be perfect. They don’t even have to be good. It’s enough if they don’t suck. And my hope is that with this short guide, I have pointed out some ways in which you, too, can write error messages that don’t suck. With that in mind, you will be able to build user interfaces and APIs that users will love to use, because when they get something wrong (and they will get things wrong, because that’s what we humans do), they will have a less frustrating experience. We have been letting ourselves off the hook for horrid error messages and worst practices like error codes that are never adequately resolved. It is time for all of us to step up and write code that doesn’t add insult to injury with frustrating error messages after failure.

References

1. Typically ordered in reverse (blue, green, red) due to a quirk of OpenCV.
2. See RFC 822, Section 6.1.

# The Zimmermann Affair: a worthless victory

Note to press and other media: this blog entry has been up for barely an hour, but has already gotten a LOT of press inquiries, here as comments or by e-mail. Please use the Contact page for that. It gets to me faster, and allows me a more streamlined way to respond. I can also be reached on my Facebook page. I’m very happy to help you with anything, so let me know what you’re interested in.

Sometimes, I tend to say that every time a quack loses their license, a hungry kitten gets a bowl of milk. So I could say I’m doing it all for the kittens, but in reality, I like seeing quacks struck off for far more prosaic reasons. Deep down, I do believe that there is a public law duty incumbent on professional bodies – even where the ‘profession’ they regulate is of a professional standing as dubious as naturopathic ‘medicine’ – to maintain certain key standards among their regulated practitioners. But this is also a story about the insufficiency of self-regulation, and what can go wrong with it – the pitfalls inherent in the idea that a profession will regulate itself and keep dangerous actors out of the sphere of public practice. It is the story of a case where the system worked well, yet proved insufficient to protect the public, raising serious questions as to whether the regulation of health professions should indeed be delegated to secondary bodies, whose ultima ratio is to expel a member, but which are powerless once the quack in question perpetrates their quackery outside their aegis. I present you… the Zimmermann affair.

A few months ago, you might have heard that a Canadian naturopath, Ms Anke Zimmermann,1 published a rather interesting little article on her blog, recounting the story of a 4-year-old with behavioural problems whom she claimed to have cured with… behavioural therapy? Paroxetine? Psychotherapy? DBT? Benzos? Nope. Those are all bad, evil Big Pharma products. Instead, she went for a state-of-the-art treatment (about 250 years old) called lyssinum. Which is exactly what you think it is: the saliva of a rabid dog.

When the inimitable 6E23, everyone’s best resource on just how delusional homeopaths are, reposted it, I was genuinely convinced it must be a troll.

At this point, I saw two scenarios. Either Ms Zimmermann was administering a placebo that did not contain what it claimed to have contained (that is, she was bullshitting her marks2), or, worse, she was administering an extremely harmful pathogen to a child, before even giving the proper medical subspeciality (child behavioural psychology) a chance.3 If it was the latter, something very bad was going on.

You see, rabies is one of the worst diseases in existence. Personally, I’d rather have Ebola haemorrhagic fever than symptomatic rabies, having seen both. There are all of six known cases of survival of symptomatic rabies, using the controversial and largely hit-and-miss Milwaukee Protocol. It’s not to be trifled with. It’s also a controlled biological agent. I do suppose I don’t need to point out that administering it to anyone, least of all a child, is not so much bad medicine as pure, blank insanity.

Giving her the benefit of the doubt, I asked Ms Zimmermann some straightforward questions – namely, is she lying or is she insane? Obviously, something better than the ‘memory of water’ was expected here. Alas, it was not forthcoming – insults, on the other hand, were.

## The law

In British Columbia, the locus delicti (where the act has taken place), ‘naturopathic physicians’ (“naturopaths“, because I’m loath to dignify them with the title ‘physician’), practice under the statutory instrument B.C. Reg. 282/2008 (the “Regulation“), made under the authority delegated by the Health Professions Act 1996, R.S.B.C. 1996 c. 183 (the “Act”). Part II of the Act defines regulated health practitioners, including ‘naturopaths’, their discipline and registration being subject to the Regulation. The Regulation further appoints the College of Naturopathic Physicians of B.C. (the “College“) as the responsible regulatory authority for naturopaths, although you might be forgiven for finding ‘responsible’, ‘regulatory’, ‘authority’ and ‘naturopaths’ in the same sentence more than a little oxymoronic. Even without the oxy-.

## The facts

On 17 April 2018, I lodged a formal complaint with the Registrar of the College. In it, I recited the pertinent facts. These bear repeating here.

Facts: Chronology of events
3. The Practitioner owns, among others, the domain drzimmerman.org (the “Website”) and the Twitter account @drzimmermann as well as the Facebook profile at @doctorzimmermann.
4. On 08 February 2018, the Practitioner posted on the Website, under the section “Successful Cases”, a case report titled “A Child with Aggression and Behavioural Problems” (the “Case Report”). The Case Report was available, at the time of this Statement, under the URL https://www.drzimmermann.org/successful-clinical-cases/a-child-with-aggression-and-behavioural-problems. It is also exhibited as Exhibit 1 to this e-mail.
5. In the Case Report, the Practitioner describes, in her words,
“A 4-year-old boy with sleep and behavioural problems, including aggression and violence towards school mates, hiding under tables and growling, improves nicely with a homeopathic remedy made from the saliva of a rabid dog, Lyssinum.”
6. The Case Report concerns a “nearly 5-year-old” boy named Jonah (the “Patient”). Jonah was, according to the Case Report, brought into the Practitioner’s care on 06 October 2018 with complaints of “sleep and behavioural problems”. Patient exhibited, according to the Case Report, “an inability to fall asleep at night, a fear of the dark, of wolves, werewolves, ghosts and zombies … [t]here is a history of a dog bit [sic] which drew blood.”
7. The Practitioner states that she “decided to give a homeopathic remedy made from rabies”. This is based on the theory that the “The dog who bit him may have recently been vaccinated with the rabies vaccine or the dog bite in and of itself may have affected the boy with the rabies miasm”.
8. The Practitioner describes the administration of Lyssinum 200CH, 2 pellets, and at a follow-up meeting, a “dose” (unspecified quantity) of Lyssinum 10M. The Case Report does not discuss a referral to an appropriate secondary referral service, such as a child psychotherapist, developmental psychotherapist or a psychiatrist experienced in early life affective and social integration disorders.
Facts: Rabies, “rabies miasm” and Lyssinum
9. Rabies is an infection of the central nervous system caused by the rabies virus (RABV) and, in rare cases, the Australian Bat Lyssavirus (ABLV). Rabies is a typical zoonotic virus that can exist in some animals without symptom, but is characteristically symptomatic – and almost always fatal – in canids and primates, including humans. RABV and ABLV are genetically related, and constitute part of the genus Lyssavirus, a genus of the family Rhabdoviridae, of which RABV is the type species. They belong to Group V (negative-sense single-strand RNA) of the Baltimore classification of viruses.
10. Rabies in the human is transmitted typically by bite, specifically by transfer of the virus through the saliva of an infected animal, with dogs being frequent carriers throughout the world, except in the Americas, where widespread vaccination of pets and strays has led to a vastly reduced prevalence of RABV in canines, with most cases caused by chiropteran (bat) bites. Rabies causes over 17,000 deaths per annum.
11. In humans, rabies has a widely varying incubation period, ranging from several days to up to six years, but typically around two to three months. During this period, rabies cannot be detected, as the virus is safely ensconced in nerve tissue. Post-exposure prophylaxis (rabies immunoglobulin and/or the rabies vaccine) is usually highly successful and not connected to severe side effects.
12. Symptomatic rabies initially manifests as non-specific viral symptoms, including headaches and fever. It eventually evolves rapidly to a meningoencephalitis or encephalitis, with the clinical course mirroring this as focal neurological deficits, anxiety, agitation, altered mental state and hallucinations as well as the characteristic aversion to water (hydrophobia). This is inevitably followed by delirium, coma and demise of the infected person, within 2-10 days after symptoms have first presented. A low number of patients (six, to be specific) have survived owing to a treatment developed by Rodney Willoughby, Jr. M.D. (the Wisconsin Protocol), in which a coma is induced using midazolam and ketamine and high-dose antivirals – ribavirin and amantadine – are administered. The overall survival rate of this treatment is, according to an ITT analysis, approximately 8%, and many patients survive with moderate to severe neurological sequelae. The above ought to recapitulate the seriousness of what is a fortunately only rarely seen infectious disease in the Western Hemisphere.
13. Rabies is not particularly infectious, but it is categorized as a BSL-2 agent. It is not a Select Agent but subject to export restrictions. Most health authorities recommend BSL-3 treatment.
14. The concept of rabies miasm is not known to any branch of science. It is not an acknowledged medical phenomenon. It is not an acknowledged biological phenomenon. It has no acceptable biological explanation that is in line with the current state of science.
15. The rabies prophylactic vaccine is an inactivated (killed) viral vaccine. Upon injection subcutaneously, it disperses in the subject’s bloodstream, stimulating the creation of antibodies. Neither antibodies nor the inactivated RABV antigen are present in any appreciable number in an animal’s saliva after vaccination. Nor are either RABV antigen or anti-RABV immunoglobulin associated with behavioural-cognitive symptoms that are in line with those reported in the Case Report for the Patient. The Practitioner’s theory that the history of dog bite may have caused his behavioural issues is unsupported by any clinical evidence or feasible scientific mechanism. It is also not an acknowledged medical phenomenon. It is not an acknowledged biological phenomenon.
16. Lyssinum, from the Greek lyssa ‘rage’ referring to rabies, also known as Hydrophobinum, referring to the characteristic symptom of hydrophobia in rabies, is described as a ‘nosode of rabies’ and is described as a trituration (solution of a water-insoluble substance in lactose in homeopathy) of the saliva of a rabid dog, according to the National Center for Homeopathy.
17. It is unclear whether Lyssinum contains any active viral particles. The dilutions described are 200CH and 10M. CH refers to centesimal dilutions, in which the original substance, known as the “mother tincture” in homeopathy, is diluted to 1:100 and vigorously shaken, the process being repeated for each successive C. 200CH therefore amounts to a dilution of 1:100^200, a vast number: there are approximately 10^80 atoms in the entire observable universe, so for even a single molecule of the mother tincture to survive a mere 40C dilution, the solvent would have to contain roughly as many molecules as there are atoms in the observable universe. In other words, it is unlikely that Lyssinum contains anything other than its excipients, and no active viral particles were present.
18. At the same time, the Practitioner claims that the vaccine was made “from the saliva of a rabid dog”. The probability may be infinitesimally small that any of the RABV in the original substance or “mother tincture”, if it ever existed, would remain in the substance, if we assume a uniform distribution of particles. But it cannot be conclusively excluded that this would not, in fact, be the case. For instance, it is improbable but not impossible for all the viral particles from the “mother tincture” to end up in a single, extremely infectious, granule. In that sense, the Practitioner is administering a high chance of an inert placebo or an extremely low but nonzero chance of a potentially fatal illness, to a 4-year-old with behavioural problems.
19. Practitioner’s responses to questions about her practice and her use of a substance that may potentially infect a patient with an incurable and virtually always fatal illness consisted of insults, the threat to use the scientific position – that her practice of using Lyssinum is either falsely stating that it is anything but lactose or otherwise it may carry a small but nonzero risk of an incurable and virtually always fatal illness – as proof of “ignorance” of homeopathy at a conference, as well as threats to report the Complainant to “the authorities” for “harrassment [sic]”.
20. It remains open what the Practitioner’s beliefs were as to the contents of the granules. If the practitioner believed that they were indeed made by sequential dilutions of a highly infectious substance, then she could not exclude with 100% certainty that she was not administering a pathogen causing an incurable and virtually always fatal illness. On the other hand, if she did not believe that the granules were created by sequential dilutions of a highly infectious substance, then she had no reason to make therapeutic or other claims, including the very claim that they were in fact made “from the saliva of a rabid dog”. These possibilities are mutually exclusive but jointly exhaustive.

On the same day, the College accepted jurisdiction in all five questions posed in my submission, namely:

21. Complainant hereby requests your consideration of the above outlined facts in order to determine whether
a. the use of Lyssinum was in compliance with the Practitioner’s duty towards the Patient,
b. the Practitioner has discharged her duty of care towards the Patient by treating him with a questionable remedy that has no scientific validity or evidence-based track record in treating aggression or childhood affective or social disorders instead of referring him to a more appropriate professional assessment,
c. the Practitioner has discharged her duty of care towards the Patient by the use of a homeopathic medicine ahead of excluding a possible psychiatric, psychological or neurological aetiology,
d. the Practitioner has endangered the Patient’s health by using a treatment that may potentially cause a lethal and untreatable infectious disease, or
e. the Practitioner has made claims about the origin of the treatment that are intended to present Lyssinum as having a therapeutic potential that is not warranted by suitable, clinically validated scientific evidence.

## Truth and consequences

On 27 November, I received a communication from the College, notifying me of a consent order between Ms Zimmermann and the College:

The Committee determined pursuant to section 33(6)(c) of the Act that this would be an appropriate case in which to seek a consent order in view of the Committee’s concerns of the Registrant’s professional conduct. The Registrant consented to the immediate cancellation of her registration with the College. She agreed not to apply for reinstatement or otherwise seek registration with the College for a minimum period of five (5) years from the date of the Consent Order. As such, the Registrant is not permitted to practice naturopathic medicine. The Registrant is further not permitted to use the reserved titles: “naturopath”, “naturopathic physician”, “naturopathic doctor”, “physician” or “doctor”.

As the Registrant’s Consent Order pertains to a “serious matter” as defined in the Act, it requires public notification. Accordingly, a notice regarding the cancellation of the Registrant’s registration has been placed on the College’s website, at www.cnpbc.bc.ca. Please note that the public notification does not contain any identifiable information about you. The Registrant’s Consent Order will be disclosed to the Inquiry Committee, the Discipline Committee and/or the Registration Committee in the event of any future proceedings following consideration by those committees of the merits of any future complaint or application.

The public notice is available here and the College’s Notice of Disposition is available to press and interested parties upon request. And ideally, the story would be over here. But it’s not.

## Self-regulation and its insufficiency

What, then, is the outcome of l’affaire Zimmermann? It’s not merely that a potentially dangerous and/or deluded homeopath stood trial and was judged by a jury of her peers. Rather, the whole case is a sad and disappointing verdict on the Regulation and the self-regulatory regime of the SCAM (supplements, complementary and alternative medicine) industry. In this case, self-regulation ultimately failed due to its own in-built boundaries: the College may strip Ms Zimmermann of her fancy postnominals and the right to call herself a naturopathic physician, but she can continue, without a shred of a license, as a homeopath, selling the same snake oil to the same clientele of the desperate, the crunchy and the credulous. What could have been a chance to get off the street an immensely harmful practitioner of pseudoscience and make-believe medicine – one who herself cannot honestly answer whether she is deluding her customers or gambling with a child’s life – has been traded for a cheap consent order, under which she may not practice under the College’s aegis, and with that, that’s all done. Clearly, this proves that the naturopathic ‘profession’ cannot be trusted with self-regulation.

In the end, this is another sad story about the regulation of alternative “medicine”, and the state it is in. It is another story of missed opportunities, of could-have-beens and should-have-beens, and of a government that disclaimed all responsibility for the lives of the citizens who put it into power, delegating it to colleges of make-believe medicine who are fine with all sorts of pseudomedicine as long as it does not happen under their purview and with their name on the leaflet – and to hell with patient welfare, right?

Watch this space. I doubt this is the last we’ve heard of Ms Zimmermann. We can only hope that her next foray will not end in consequences harder to repair than a child whose anger issues never got adequate treatment but were instead ineffectually soothed by some balls of lactose.

References

1. Naturopathic doctorates are not doctorates in any sense of the word, and I am not going to degrade my doctorate, or yours, by treating them as such.
2. Homeopaths and naturopaths do not have patients. Doctors and nurses have patients. Homeopaths and naturopaths, like all fraudsters, have marks.
3. Funny that. Isn’t it the same crowd who complain about ‘allopathic medicine’ or ‘Schulmedizin’ trying to treat everything with drugs? Or is that not the case if globuli are involved?

# Blanche: a poem by Sophie Howard

It’s 11/11 or Remembrance Day, as it’s known throughout the Commonwealth. On this day, we commemorate those who fell in service to the Crown. No holiday or day of commemoration has spawned as much poetry as Remembrance Day. There’s the classic, For the Fallen by Laurence Binyon, Alan Seeger’s Rendezvous with Death, and, of course, In Flanders Fields by John McCrae, which established the poppy as the symbol of Remembrance Day. I’m glad there’s so much poetry around. Remembrance Day is a different subject for me to write about, so I get to resort to the wit and words of others.

This one is a little different. There’s plenty of poetry that deals with us, the protagonists of this struggle, very much at the expense of those who suffer most from war: the civilian population, especially children. A poetic reflection on a story about a young girl in World War II, this poem was written by my niece, Sophie Howard. It is her debut poem, written at the tender age of twelve, no less. It’s rare for someone so young to understand something she has been so fortunate not to have to experience first-hand with such profundity, and to treat it with such tenderness and grace. The poem is pure amazingness, and it is an utter privilege to be let into this incredible young lady’s thinking about war from a child’s perspective.

### Blanche

#### by Sophie Howard

Men go off,
Down the street.
To the beat.
They hope to win,
So they need food.
No one argues
Though there’s long queues
I go to school,
I go back home.
This cycle goes
For weeks unknown.
Trucks go through town
On cobblestone.

One got wedged
In the mud.
A boy jumped out
Lands with a thud.
Runs away
That he tries
Straight into
The mayor’s open arms.
He gets dragged back
to the truck,
Which is now unstuck.
Now I wonder
Why he’s there.
Out of the truck
More faces stare.

I shall follow,
I run along.
I run fast
And strong.
Past signs
That say
RESTRICTED.

Through the trees
And bars I go.
The sticks and leaves
Scrape my face.
Like angry cats
I say.
When I feel
I cannot go on.
I reach a clearing
Before too long
I look around,
This is wrong.
Why are the men
Doing this?

Silent children from behind a fence,
Hardly took a breath.
Hungry eyes looking on,
Never taking rest.
Shouts and cries,
For food they long.
They are hungry,
But food I have none.
The cries go on,
Long and loud.
The noise reduced,
To distant sound.
Slowly I leave,
But will come back.
With the food
That they lack.

I go back,
Into the mist.
To the town,
Where I have lived.
Those children,
That is wrong.
They should live
A life that is long.
They should go
To school as well.
I cannot tell.
I reach my home
In not too long.
Behind the fence,
They are strong.
They live without
The things they need.
There are things,
they can achieve.

From my plate,
I take food.
More than I need
From the school.
I walk along,
With heavy bag.
To those that trapped,
I learn their names,
Like Jess and John.
We are friends,
before too long.
Why they are there,
I do not know.
Conditions get worse.
It starts to snow.

It’s now cold,
Snow and ice.
Men come back,
Lost the fight.
I still go,
to the fence.
Every day
None of rest.
Once I come,
There is none,
Nothing there,
And no more sun.
I smell smoke.
It’s a scare.
Hard to see,
Misty air.

Crunch of sticks,
Behind me now.
There’s nothing there,
No more sound.
I see a man,
A silhouette.
With a gun,
I now fret
I hear a shot,
I hit the ground.
Now I feel nothing,
There is no sound.
Nothing more
No more snow.
Spring has come,
For all to know.

(C) Sophie Howard, 2018.

Miss Howard (*2006) was born and raised in Queensland, Australia.

# Structuring R projects

There are some things that I call Smith goods:1 things I want, nay, require, but hate doing. A clean room is one of these – I have a visceral need to have some semblance of tidiness around me, yet I absolutely hate tidying, especially in the summer.2 Starting and structuring packages and projects is another of these things, which is why I’m so happy that things like cookiecutter exist to do it for you in Python. I am famously laid back about structuring R projects – my chill attitude is only occasionally compared to the Holy Inquisition, the other Holy Inquisition and Gunny R. Lee Ermey’s portrayal of Drill Sgt. Hartman, and it’s been months since I last gutted an intern for messing up namespaces.3 So while I don’t like structuring R projects, I keep doing it, because I know it matters. That’s a pearl of wisdom that came, occasionally, at a great price, some of which I am hoping to save you with this post.

## Five principles of structuring R projects

Every R project is different. Therefore, when structuring R projects, there has to be a lot more adaptability than there normally is. When structuring R projects, I try to follow five overarching principles.

1. The project determines the structure. In a small exploratory data analysis (EDA) project, you might have some leeway as to structural features that you might not have when writing safety-critical or autonomously running code. This variability in R – reflective of the diversity of its use – means that it’s hard to devise a boilerplate that’s universally applicable to all kinds of projects.
2. Structure is a means to an end, not an end in itself. The reason why gutting interns, scalping them or yelling at them Gunny-style is inadvisable is not just the additional paperwork it creates for HR. Rather, the point of the whole exercise is to create people who understand why the rules exist and adopt them organically, understanding how they help.
3. Rules are good, tools are better. When tools are provided that take the burden of adherence – linters, structure generators like cookiecutter, IDE plugins, &c. – off the developer, adherence is both more likely and simpler.
4. Structures should be interpretable to a wide range of collaborators. Even if you have no collaborators, think from the perspective of an analyst, a data scientist, a modeller, a data engineer and, most importantly, the client who will at the very end receive the overall product.
5. Structures should be capable of evolution. Your project may change objectives, it may evolve, it may grow. What was a pet project might become a client product. What was designed to be a massive, error-resilient superstructure might have to scale down. And most importantly, your single-player adventure may end up turning into an MMORPG. Your structure has to be able to roll with the punches.

## A good starting structure

Pretty much every R project can be imagined as a sort of process: data gets ingested, magic happens, then the results – analyses, processed data, and so on – get spit out. The absolute minimum structure reflects this:

.
└── my_awesome_project
├── src
├── output
├── data
│   ├── raw
│   └── processed
├── run_analyses.R
└── .gitignore

In this structure, we see this reflected by having a data/ folder (a source), a folder for the code that performs the operations (src/) and a place to put the results (output/). The root analysis file (the sole R file on the top level) is responsible for launching and orchestrating the functions defined in the src/ folder’s contents.

## The data folder

The data folder is, unsurprisingly, where your data goes. In many cases, you may not have any file-formatted raw data (e.g. where the raw data is accessed via a *DBC connection to a database), and you might even keep all intermediate files in the database, although that’s pretty uncommon on the whole, and might not make you the local DBA’s favourite (not to mention data protection issues). So while the raw/ subfolder might be dispensed with, you’ll most definitely need a data/ folder.

When it comes to data, it is crucial to make a distinction between source data and generated data. Rich Fitzjohn puts it best when he says to treat

• source data as read-only, and
• generated data as disposable.

The preferred implementation I have adopted is to have

• a data/raw/ folder, which is usually symlinked to a folder that is write-only to clients but read-only to the R user,4
• a data/temp/ folder, which contains temp data, and
• a data/output/ folder, if warranted.

### The src folder

Some call this folder R/ – I find this a misleading practice, as you might have C++, bash and other non-R code in it, but it is unfortunately enforced by R if you want to structure your project as a valid R package, which I advocate in some cases. I am a fan of structuring the src/ folder, usually by logical function. There are two systems of nomenclature that have worked really well for me and the people I work with:

• The library model: in this case, the root folder of src/ holds individual .R scripts that when executed will carry out an analysis. There may be one or more such scripts, e.g. for different analyses or different depths of insight. Subfolders of src/ are named after the kind of scripts they contain, e.g. ETL, transformation, plotting. The risk with this structure is that sometimes it’s tricky to remember what’s where, so descriptive file names are particularly important.
• The pipeline model: in this case, there is a main runner script, or potentially a small number of them. These go through the scripts in sequence. In such a case, it is sensible to establish sequential subfolders or sequentially numbered scripts that are executed in order. Typically, this model performs better if there are at most a handful of distinct pipelines.

Whichever approach you adopt, a crucial point is to keep function definition and application separate. This means that only the pipeline or runner scripts are allowed to execute (apply) functions, while other files merely define them. Typically, folder-level segregation works best for this – see the sketch after this list:

• keep all function definitions in subfolders of src/, e.g. src/data_engineering, and have the directly-executable scripts directly under src/ (this works better for larger projects), or
• keep function definitions in src/, and keep the directly executable scripts in the root folder (this is more convenient for smaller projects, where perhaps the entire data engineering part is not much more than a single script).
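As a minimal illustration of the first arrangement – every file and function name here is invented for the example – a runner script directly under src/ might look like this:

# src/run_analyses.R – the only file that *applies* functions;
# the files it source()s merely define them
source("src/data_engineering/clean.R")   # defines clean_data() (hypothetical)
source("src/plotting/epicurve.R")        # defines plot_epicurve() (hypothetical)

raw       <- read.csv("data/raw/cases.csv")
processed <- clean_data(raw)
write.csv(processed, "data/processed/cases.csv", row.names = FALSE)
plot_epicurve(processed, file = "output/epicurve.png")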

## output and other output folders

Output may mean a range of things, depending on the nature of your project. It can be anything from a whole D.Phil thesis written in a LaTeX-compliant form to a brief report to a client. There are a couple of conventions with regard to output folders that are useful to keep in mind.

### Separating plot output

It is common to have a separate folder for plots (usually called figs/ or plots/), usually so that they can be used for various purposes. My personal preference is that plot output folders should be subfolders of output/, rather than top-tier folders, unless the plots themselves are the objective of the project. That is the case, for instance, where the project is intended to create a particular plot on a regular basis. This was the case with the CBRD project, whose purpose was to regularly generate daily epicurves for the DRC Zaire ebolavirus outbreak.

With regard to maps, in general, the principle that has worked best for teams I ran was to treat static maps as plots. However, dynamic maps (e.g. LeafletJS apps), tilesets, layers or generated files (e.g. GeoJSON files) tend to deserve their own folder.

### Reports and reporting

Not every project needs a reporting folder, but for business users, having a nice, pre-written reporting script that can be run automatically and produces a beautiful PDF report every day can be priceless. A large organisation I worked for in the past used this very well to monitor their Amazon AWS expenditure.5 A team of over fifty data scientists worked on a range of EC2 instances, and runaway spending – from provisioning instances that were too big, leaving instances on, and data transfer charges resulting from misconfigured instances6 – was rampant. So the client wanted daily, weekly, monthly and 10-day rolling usage nicely plotted in a report, by user, highlighting people who would go on the naughty list. This was very well accomplished by an RMarkdown template that was ‘knit’ every day at 0600 and uploaded as an HTML file onto an internal server, so that every user could see who’d been naughty and who’d been nice. EC2 usage costs went down by almost 30% in a few weeks, and that was without having to dismember anyone!7

Probably the only structural rule to keep in mind is to keep reports and reporting code separate. Reports are client products, reporting code is a work product and therefore should reside in src/.

## Requirements and general settings

I am, in general, not a huge fan of outright loading whole packages to begin with. Too many users of R don’t realise that

• you do not need to attach (library(package)) a package in order to use a function from it – as long as the package is available to R, you can simply call the function as package::function(arg1, arg2, ...), and
• importing a package using library(package) attaches every exported function from that package to the search path, by default masking any earlier entries of the same name. This means that in order to know deterministically what any given symbol refers to, you would have to keep track of the order of package imports at all times. Needless to say, there is enough to keep in one’s mind when coding in R without worrying about that.

However, some packages might be useful to import, and sometimes it’s useful to have an initialisation script. This may be the case in two particular scenarios:

• You need a particular locale setting, or a particularly crucial environment setting.
• You are not using packrat or some other package management solution, and definitely need to ensure some packages are installed, but prefer not to put the clunky install-if-not-present code in every single script.

In these cases, it’s sensible to have a file you source before every top-level script – in an act of shameless thievery from Python, I tend to call this requirements.R – and it includes some fundamental settings I like to rely on, such as setting the locale appropriately. It also includes a CRAN install check script, although I would very much advise the use of Packrat over it, since the check is not version-sensitive.
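A minimal requirements.R might look like this – the locale and the package list are, of course, placeholders for your own:

# requirements.R – source() this at the top of every top-level script
Sys.setlocale("LC_ALL", "en_GB.UTF-8")   # illustrative: the locale the project relies on

# naive CRAN install check: present-or-absent only, not version-sensitive
required <- c("dplyr", "ggplot2", "jsonlite")   # illustrative package list
missing  <- setdiff(required, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)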

### Themes, house style and other settings

It is common, in addition to all this, to keep some general settings. If your institution has a ‘house style’ for ggplot2 (as, for instance, a ggthemr file), this could be part of your project’s config. But where does this best go?

It would normally be perfectly fine to keep your settings in a config.R file at root level, but a config/ folder is much preferred, as it prevents clutter if you derive any of your configurations from a git submodule. I’m a big fan of keeping house styles and other things intended to give a shared appearance to code and outputs (e.g. linting rules, text editor settings, map themes) in separate – and very, very well managed! – repos, as this ensures consistency across the board over time. As a result, most of my projects have a config folder instead of a single configuration file.

It is paramount to separate project configuration and runtime configuration:

• Project configuration pertains to the project itself, its outputs, schemes, the whole nine yards. For instance, the paper size to use for generated LaTeX documents would normally be a project configuration item. Your project configuration belongs in your config/ folder.
• Runtime configuration pertains to parameters that relate to individual runs. In general, you should aspire to have as few of these as possible – and if you do have them, you should keep them as environment variables. But if you do decide to keep them in a file, it’s generally a good idea to keep it at the top level, and to store it not as an R file but as e.g. a JSON file. There is a range of tools that can programmatically edit and change these file formats, while changing R files programmatically is fraught with difficulties.

### Keeping runtime configuration editable

A few years ago, I worked on a viral forecasting tool where a range of model parameters to build the forecast from were hardcoded as R variables in a runtime configuration file. It was eventually decided to create a Python-based web interface on top of it, which would allow users to see the results as a dashboard (reading from a database where forecast results would be written) and make adjustments to some of the model parameters. The problem was, that’s really not easy to do with variables in an R file.

On the other hand, Python can easily read a JSON file into memory, change values as requested and export them onto the file system. So instead of that, the web interface would store the parameters in a JSON file, from which R would then read them and execute accordingly. Worked like a charm. Bottom line – configurations are data, and using code to store data is bad form.
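The pattern is trivially simple. Here is a sketch – the file name, parameter names and run_forecast() are invented for the example; jsonlite does the actual JSON reading:

# params.json – equally editable by the Python web interface and by hand:
# {"r0": 1.8, "forecast_days": 28}

params   <- jsonlite::fromJSON("params.json")
forecast <- run_forecast(r0 = params$r0, horizon = params$forecast_days)  # hypothetical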

## Dirty little secrets

Everybody has secrets. In all likelihood, your project is no different: passwords, API keys, database credentials, the works. The first rule, of course, is to never hardcode credentials in code. But you will need to work out how to make your project work, including via version control, while also not divulging credentials to the world at large. My preferred solutions, in order of preference, are:

1. the keyring package, which interacts with OS X’s keychain, Windows’s Credential Store and the Secret Service API on Linux (where supported),
2. using environment variables,
3. using a secrets file that is .gitignored,
4. using a config file that’s .gitignored,
5. prompting the user.

Let’s take these – except the last one, which you should consider only as a measure of desperation, as it relies on RStudio and your code should aspire to run without it – in turn.

### Using keyring

keyring is an R package that interfaces with the operating system’s keychain management solution, and works without any additional software on OS X and Windows.8 Using keyring is delightfully simple: it conceives of an individual key as belonging to a keyring and identified by a service. By reference to the service, it can then be retrieved easily once the user has authenticated to access the keychain. It has two drawbacks to be aware of:

• It’s an interactive solution (it has to get access permission for the keychain), so if what you’re after is R code that runs quietly without any intervention, this is not your best bet.
• A key can only contain a username and a password, so it cannot store more complex credentials, such as four-part secrets (e.g. in OAuth, where you may have a consumer and a publisher key and secret each). In that case, you could split them into separate keyring keys.

However, for most interactive purposes, keyring works fine. This includes single-item secrets, e.g. API keys, where you can use some junk as your username and hold on only to the password. By default, the operating system’s ‘main’ keyring is used, but you’re welcome to create a new one for your project. Note that users may be prompted for a keychain password at call time, and it’s helpful if they know what’s going on, so be sure to document your keyring calls well.

To set a key, simply call keyring::key_set(service = "my_awesome_service", username = "my_awesome_user"). This will launch a dialogue using the host OS’s keychain handler to request authentication to access the relevant keychain (in this case, the system keychain, as no keychain is specified), and you can then retrieve

• the username: using keyring::key_list("my_awesome_service")[1,2], and
• the password: using keyring::key_get("my_awesome_service").

### Using environment variables

Using environment variables to hold certain secrets has become extremely popular, especially for Dockerised implementations of R code, as envvars can be very easily set using Docker. The thing to remember about environment variables is that they’re ‘relatively private’: they’re not part of the codebase, so they will definitely not accidentally get committed to the VCS, but everyone who has access to the particular user session will be able to read them. This may be an issue when e.g. multiple people are sharing the ec2-user account on an EC2 instance. The other drawback of envvars is that if there’s a large number of them, setting them can be a pain. R has a little workaround for that: if you create an envfile called .Renviron in the working directory, R will read it at the start of the session and set those values in the environment. So for instance the following .Renviron file will bind an API key and a username:

api_username = "my_awesome_user"
api_key = "e19bb9e938e85e49037518a102860147"

So when you then call Sys.getenv("api_username"), you get the correct result. It’s worth keeping in mind that the .Renviron file is sourced once, and once only: at the start of the R session. Thus, obviously, changes made after that will not propagate into the session until it ends and a new session is started. It’s also rather clumsy to edit, although most parsers accustomed to INI files will, with the occasional grumble, digest an .Renviron.
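To read these values back, a sketch along these lines works; passing an explicit unset value to Sys.getenv() makes a missing key fail conspicuously rather than silently yielding an empty string (the variable names are, of course, the ones from the example above):

api_username <- Sys.getenv("api_username")
api_key <- Sys.getenv("api_key", unset = NA)

if (is.na(api_key)) stop("api_key is not set - check your .Renviron!")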

Needless to say, committing the .Renviron file to the VCS is what is sometimes referred to as making a chocolate fireman in the business, and is generally a bad idea.

### Using a .gitignored config or secrets file

config is a package that allows you to keep a range of configuration settings outside your code, in a YAML file, then retrieve them. For instance, you can create a default configuration for an API:

default:
  my_awesome_api:
    url: 'https://awesome_api.internal'
    api_key: 'e19bb9e938e85e49037518a102860147'

From R, you could then access this using the config::get() function:

my_awesome_api_configuration <- config::get("my_awesome_api")

This would then allow you to e.g. refer to the URL as my_awesome_api_configuration$url, and the API key as my_awesome_api_configuration$api_key. As long as the configuration YAML file is kept out of the VCS, all is well. The problem is that not everything in such a configuration file is supposed to be secret. For instance, in the case of database access credentials, it makes sense to keep the other connection details DBI::dbConnect() needs available to other users, but to keep the password private. So wholesale .gitignoring a config file is not a good idea.

[su_pullquote align=”right”]A dedicated secrets file is a better place for credentials than a config file, as this file can then be wholesale .gitignored.[/su_pullquote]A somewhat better idea is a secrets file. This file can be safely .gitignored, because it definitely only contains secrets. As previously noted, definitely create it using a format that can be widely read and written (JSON, YAML).9
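For illustration, here is a minimal sketch of such a file and its retrieval – the file name secrets.yaml is a hypothetical choice, and the yaml package is assumed – wrapped in a function so that no secret lingers in the global scope:

# secrets.yaml (.gitignored!) might contain, for instance:
#   db_password: 'hunter2'
#   api_key: 'e19bb9e938e85e49037518a102860147'

get_secret <- function(key, file = "secrets.yaml") {
  secrets <- yaml::read_yaml(file)
  if (is.null(secrets[[key]])) stop("No secret named ", key, " in ", file)
  secrets[[key]]
}

For reasons noted in the next subsection, the thing you should definitely not do is to create a secrets file that consists of R variable assignments, however convenient an idea that may appear at first. Because…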

### Whatever you do…

One of the best ways to mess up is creating a fabulous way of keeping your secret credentials truly secret… then loading them into the global scope. Never, ever assign credentials. Ever.

You might have seen code like this:

dbuser <- Sys.getenv("dbuser")
dbpass <- Sys.getenv("dbpass")

conn <- DBI::dbConnect(odbc::odbc(), UID = dbuser, PWD = dbpass)

[su_pullquote]Never, ever put credentials into any environment if possible – especially not into the global scope.[/su_pullquote]This will work perfectly, except once it’s done, it will leave the password and the user name, in unencrypted plaintext (!), in the global scope, accessible to any code. That’s not just extremely embarrassing if, say, your wife of ten years discovers that your database password is your World of Warcraft character’s first name, but also a potential security risk. Never put credentials into any environment if possible, and if it has to happen, at least make it happen within a function so that they don’t end up in the global scope. The correct way to do the above would be more akin to this:

create_db_connection <- function() {
  DBI::dbConnect(odbc::odbc(),
                 UID = Sys.getenv("dbuser"),
                 PWD = Sys.getenv("dbpass"))
}
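Called as conn <- create_db_connection(), this hands back the connection object, while the credentials live only for the duration of the call – nothing is left behind in the global scope.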

## Concluding remarks

Structuring R projects is an art, not just a science. Many best practices are highly domain-specific, and learning these generally happens by trial and error (with the occasional pratfall). In many ways, it’s the bellwether of an R developer’s skill trajectory, because it shows whether they possess the tenacity and endurance it takes to do meticulous, fine and often rather boring work in pursuance of future success – or at the very least, an easier time debugging things in the future. Studies show that one of the greatest predictors of success in life is being able to tolerate deferred gratification, and structuring R projects is a pure exercise in that discipline.

[su_pullquote align=”right”]Structuring R projects is an art, not just a science. Many best practices are highly domain-specific, and learning these generally happens by trial and error.[/su_pullquote]At the same time, a well-executed structure can save valuable developer time, prevent errors and allow data scientists to focus on the data rather than debugging, trying to find where that damn snippet of code is, or scratching their head trying to figure out what a particularly obscurely named function does. What might feel like an utter waste of time has enormous potential to create value for the individual, the team and the organisation alike.

[su_pullquote]As long as you keep in mind why structure matters and what its ultimate aims are, you will arrive at a form of order out of chaos that will be productive, collaborative and useful.[/su_pullquote]I’m sure there are many aspects of structuring R projects that I have omitted or ignored – in many ways, it is my own experiences that inform and motivate these commentaries on R. Some of these observations are echoed by many authors, others diverge greatly from commonly held wisdom. As with all concepts in development, I encourage you to read widely, get to know as many different ideas about structuring R projects as possible, and synthesise your own style. As long as you keep in mind why structure matters and what its ultimate aims are, you will arrive at a form of order out of chaos that will be productive, collaborative and useful – not just for your own development but for others’ work as well.

My last commentary on defensive programming in R spawned a lively and exciting debate on Reddit, and many have made extremely insightful comments there. I’m deeply grateful to all who have contributed there. I hope you will also consider posting your observations in the comment section below. That way, comments will remain together with the original content.

References

1. As in, Adam Smith.
2. It took me years to figure out why. It turns out that I have ZF alpha-1 antitrypsin deficiency. As a consequence, even minimal exposure to small particulates and dust can set off violent coughing attacks and impair breathing for days. Symptoms tend to be worse in hot weather due to impaired connective tissue something-or-other.
3. That’s a joke. I don’t gut interns – they’re valuable resources, HR shuns dismembering your coworkers, it creates paperwork and I liked every intern I’ve ever worked with – but most importantly, once gutted like a fish, they are not going to learn anything new. I prefer gentle, structured discussions on the benefits of good package structure. Please respect your interns – they are the next generation, and you are probably one of their first examples of what software development/data science leadership looks like. The waves you set into motion will ripple through generations, well after you’re gone. You better set a good example.
4. Such a folder is often referred to as a ‘dropbox’, and the typical corresponding octal setting, 0422, guarantees that the R user will not accidentally overwrite data.
5. The organisation consented to me telling this story but requested anonymity, a request I honour whenever legally possible.
6. In case you’re unfamiliar with AWS: it’s a cloud service where elastic computing instances (EC2 instances) reside in ‘regions’, e.g. us-west-1a. There are (small but nonzero) charges for data transfer between regions. If you’re in one region but you configure the yum repo server of another region as your default, there will be costs, and, eventually, tears – provision ten instances with a few GBs worth of downloads, and there’ll be yelling. This is now more or less impossible to do except on purpose, but one must never underestimate what users are capable of from time to time!
7. Or so I’m told.
8. Linux users will need libsecret 0.16 or above, and sodium.
9. XML is acceptable if you’re threatened with waterboarding.

# Assignment in R: slings and arrows

Having recently shared my post about defensive programming in R on the r/rstats subreddit, I was blown away by the sheer number of comments as much as by the insight many of them displayed. One particular comment by u/guepier caught my attention. In my previous post, I came out quite vehemently against using the = operator to effect assignment in R. u/guepier made a great point, however:

But point 9 is where you’re quite simply wrong, sorry:

never, ever, ever use = to assign. Every time you do it, a kitten dies of sadness.

This is FUD, please don’t spread it. There’s nothing wrong with =. It’s purely a question of personal preference. In fact, if anything <- is more error-prone (granted, this is a very slight chance but it’s still higher than the chance of making an error when using =).

Now, assignment is no doubt a hot topic – a related issue, assignment expressions, recently led to Python’s BDFL being forced to resign –, so I’ll have to tread carefully. A surprising number of people have surprisingly strong feelings about assignment and assignment expressions. In R, this is complicated by its unusual assignment structure, involving two assignment operators that are just different enough to be trouble.

## A brief history of <-

There are many ways in which <- in R is anomalous. For starters, it is rare to find a binary operator that consists of two characters – which is an interesting window on the R <- operator’s history.

The <- operator, apparently, stems from a day long gone by, when dedicated keyboards existed for APL, that eldritch horror of a programming language. When R’s precursor, S, was conceived, APL keyboards and printing heads were still around, and these could print a single ← character. It was only after most standard keyboard layouts ended up eschewing this all-important symbol that R and S accepted the digraph <- as a substitute.

## OK, but what does it do?

<- is one of the first operators anyone encounters when familiarising themselves with the R language. The general idea is quite simple: it is a directionally unambiguous assignment, i.e. it indicates quite clearly that the right-hand side value (rhs, in the following) will replace the left-hand side variable (lhs), or be assigned to the newly created lhs if it has not yet been initialised. Or that, at the very least, is the short story.

Because, quite peculiarly, there is another way to accomplish a simple assignment in R: the equality sign (=). And because at the top level, a <- b and a = b are equivalent, people have sometimes treated the two as being quintessentially identical. Which is not the case. Or maybe it is. It’s all very confusing. Let’s see if we can unconfuse it.

### The Holy Writ

The Holy Writ, known to the uninitiated as the R Manual, has this to say about assignment operators and their differences:

The operators <- and = assign into the environment in which they are evaluated. The operator <- can be used anywhere, whereas the operator = is only allowed at the top level (e.g., in the complete expression typed at the command prompt) or as one of the subexpressions in a braced list of expressions.

If this sounds like absolute gibberish, or you cannot think of what would qualify as not being on the top level or a subexpression in a braced list of expressions, welcome to the squad – I’ve had R experts scratch their heads about this for an embarrassingly long time until they realised what the R documentation, in its neutron-star-like denseness, actually meant.

### If it’s in (parentheses) rather than {braces}, = and <- are going to behave weird

To translate the scriptural words quoted above into human speak: this means = cannot be used in the conditional part (the part enclosed by (parentheses) as opposed to {curly braces}) of control structures, among others. This is less an issue between <- and = than between = and ==. Consider the following example:

x = 3

if(x = 3) 1 else 0
# Error: unexpected '=' in "if(x ="


So far so good: you should not use a single equality sign as an equality test operator. The right way to do it is:

if(x == 3) 1 else 0
# [1] 1

if(x <- 3) 1 else 0
# [1] 1


Oh, look, it works! Or does it?

if(x <- 4) 1 else 0
# [1] 1

The problem is that an assignment expression returns the value that was assigned, and a non-zero number is truthy. So instead of comparing x to 4, if(x <- 4) assigned 4 to x, then happily evaluated that 4 as TRUE and informed us that it is indeed true.

The bottom line is not to use = as a comparison operator, nor <- as anything at all in the conditional part of a control flow expression. Or as John Chambers notes,

Disallowing the new assignment form in control expressions avoids programming errors (such as the example above) that are more likely with the equal operator than with other S assignments.

### Chain assignments

One example of where <- and = behave differently (or rather, one behaves and the other throws an error) is a chain assignment. In a chain assignment, we exploit the fact that R assigns from right to left. The sole criterion is that all except the rightmost member of the chain must be capable of being assigned to.

# Chain assignment using <-
a <- b <- c <- 3

# Chain assignment using =
a = b = c = 3

# Chain assignment that will, unsurprisingly, fail
a = b = 3 = 4
# Error in 3 = 4 : invalid (do_set) left-hand side to assignment


So we’ve seen that as long as the chain assignment is logically valid, it’ll work fine, whether it’s using <- or =. But what if we mix them up?

a = b = c <- 1
# Works fine...

a = b <- c <- 1
# We're still great...

a <- b = c = 1
# Error in a <- b = c = 1 : could not find function "<-<-"
# Oh.

The bottom line from the example above is that where <- and = are mixed, the leftmost assignment has to be carried out using =, and cannot be by <-. In that one particular context, = and <- are not interchangeable. The reason is operator precedence: = binds less tightly than <-, so any = in the chain must be the outermost – that is, leftmost – assignment.

A small note on chain assignments: many people dislike chain assignments because they’re ‘invisible’ – the assigned value is returned invisibly, so nothing at all is printed. If that is an issue, you can surround your chain assignment with parentheses – regardless of whether it uses <-, = or a (valid) mixture thereof:

a = b = c <- 3
# ...
# ... still nothing...
# ... ... more silence...

(a = b = c <- 3)
# [1] 3

### Assignment and initialisation in functions

This is the big whammy – one of the most important differences between <- and =, and a great way to break your code. If you have paid attention until now, you’ll be rewarded by, hopefully, some interesting knowledge.

= is a pure assignment operator. It does not necessarily initialise a variable in the global namespace: in a function call, it merely binds an argument to a parameter. <-, on the other hand, is evaluated as an expression wherever it appears, and thus always creates a variable, named after the lhs, in the enclosing scope – the global namespace, when used at the top level or inside a top-level function call. This becomes quite prominent when using it in functions.

Traditionally, when invoking a function, we are supposed to bind its arguments in the format parameter = argument.1 And as we know about functions, a parameter bound this way is scoped to the function block. To demonstrate this:

add_up_numbers <- function(a, b) {
  return(a + b)
}

add_up_numbers(a = 3, b = 5)
# [1] 8

a + b
# Error: object 'a' not found

This is expected: a (as well as b, but that didn’t even make it far enough to get checked!) doesn’t exist in the global scope, it exists only in the local scope of the function add_up_numbers. But what happens if we use <- assignment?

add_up_numbers(a <- 3, b <- 5)
# [1] 8

a + b
# [1] 8

Now, a and b still only exist in the local scope of the function add_up_numbers. However, using the <- assignment operator, we have also created new variables called a and b in the global scope. It’s important not to confuse this with accessing the local scope, as the following example demonstrates:

add_up_numbers(c <- 5, d <- 6)
# [1] 11

a + b
# [1] 8

c + d
# [1] 11

In other words, a + b gave us the sum of the values a and b had in the global scope. When we invoked add_up_numbers(c <- 5, d <- 6), the following happened, in order:

1. A variable called c was initialised in the global scope. The value 5 was assigned to it.
2. A variable called d was initialised in the global scope. The value 6 was assigned to it.
3. The function add_up_numbers() was called with the values of c and d as positional arguments.
4. c was assigned to the variable a in the function’s local scope.
5. d was assigned to the variable b in the function’s local scope.
6. The function returned the sum of the variables a and b in the local scope.

It may sound more than a little tedious to think about this function in this way, but it highlights three important things about <- assignment:

1. In a function call, <- assignment to a keyword name is not the same as using =, which simply binds a value to the keyword.
2. <- assignment in a function call affects the global scope, using = to provide an argument does not.
3. Outside this context, <- and = have the same effect, i.e. they assign, or initialise and assign, in the current scope.

Phew. If that sounds like absolute confusing gibberish, give it another read and try playing around with it a little. I promise, it makes some sense. Eventually.

## So… should you or shouldn’t you?

Which raises the question that launched this whole post: should you use = for assignment at all? Quite a few style guides, such as Google’s R style guide, have outright banned the use of = as assignment operator, while others have encouraged the use of ->. Personally, I’m inclined to agree with them, for three reasons.

1. Because of the existence of ->, assignment by definition is best when it’s structured in a way that shows what is assigned to which side. a -> b and b <- a have a formal clarity that a = b does not have.
2. Good code is unambiguous even if the language isn’t. This way, -> and <- always mean assignment, = always means argument binding and == always means comparison.
3. Many argue that <- is ambiguous, as x<-3 may be mistyped as x<3 or x-3, or alternatively may be (visually) parsed as x < -3, i.e. compare x to -3. In reality, this is a non-issue. RStudio has a built-in shortcut (Alt/⎇ + -) for <-, and automatically inserts a space before and after it. And if one adheres to sound coding principles and surrounds operators with whitespace, this is not an issue that arises.

As with all coding standards, consistency is key. Consistently used suboptimal solutions are superior, from a coding perspective, to an inconsistent mixture of right and wrong solutions.

References

1. A parameter is an abstract ‘slot’ where you can put in values that configure a function’s execution. Arguments are the actual values you put in. So add_up_numbers(a, b) has the parameters a and b, and add_up_numbers(a = 3, b = 5) has the arguments 3 and 5.

# The Ten Rules of Defensive Programming in R

[su_pullquote align=”right”]Where R code is integrated into a pipeline, runs autonomously or is embedded into a larger analytical solution, writing code that fails well is going to be crucial.[/su_pullquote]The topic of defensive programming in R is, admittedly, a little unusual. R, while fun and powerful, is not going to run defibrillators, nuclear power plants or spacecraft. In fact, much – if not most! – R code is actually executed interactively, where small glitches don’t really matter. But where R code is integrated into a pipeline, runs autonomously or is embedded into a larger analytical solution, writing code that fails well is going to be crucial. So below, I have collected my top ten principles of defensive programming in R. I have done so with an eye to users who do not come from the life critical systems community and might not have encountered defensive programming before, so some of these rules apply to all languages.

## What is defensive programming?

The idea of defensive programming is not to write code that never fails. That’s an impossible aspiration. Rather, the fundamental idea is to write code that fails well. To me, ‘failing well’ means five things:

1. Fail fast: your code should ensure all criteria are met before it embarks upon operations, especially if those are computationally expensive or might irreversibly affect data.
2. Fail safe: where there is a failure, your code should ensure that it relinquishes all locks, does not acquire any new ones, does not write files, and so on.
3. Fail conspicuously: when something is broken, it should return a very clear error message, and give as much information as possible to help unbreak it.
4. Fail appropriately: failure should have appropriate effects. For every developer, it’s a judgment call to decide whether a particular issue should be a debug/info item, a warning or an error (which by definition means halting execution). Failures should be handled appropriately.
5. Fail creatively: not everything needs to be a failure. It is perfectly legitimate to handle problems. One example is repeating an HTTP request that has timed out: there’s no need to immediately error out, because quite frankly, that sort of stuff happens. Equally, it’s legitimate to look for a parameter, then check for a configuration file if none was provided, and finally try checking the arguments with which the code was invoked before raising an error.1

And so, without further ado, here are the ten ways I implement these in my day-to-day R coding practice – and I encourage you to do so yourself. You will thank yourself for it.

### The Ten Commandments of Defensive Programming in R

## ONE: Document your code.

It’s a little surprising to even see this – I mean, shouldn’t you do this stuff anyway? Yes, you should, except some people think that because so much of R is done in the interpreter anyway, rules do not apply to them. Wrong! They very much do.

A few months ago, I saw some code written by a mentee of mine. It was infinitely long – over 250 standard lines! –, had half a dozen required arguments and did everything under the sun. This is, of course, as we’ll discuss later, a bad idea, but let’s put that aside. The problem is, I had no idea what the function was doing! After about half an hour of diligent row-by-row analysis, I figured it out, but that could have been half an hour spent doing something more enjoyable, such as a root canal without anaesthetic while listening to Nickelback. My mentee could have saved me quite some hair-tearing by simply documenting his code. In R, the standard for documenting code is called roxygen2: it’s got a great parser that outputs the beautiful LaTeX docs you probably (hopefully!) have encountered when looking up a package’s documentation, and it’s described in quite a bit of detail by Hadley Wickham. What more could you wish for?

Oh. An example. Yeah, that’d be useful. We’ll be documenting a fairly simple function, which calculates the Hamming distance between two strings of equal length, and throws something unpleasant in our face if they are not. Quick recap: the Hamming distance is the number of characters that do not match between two strings. Or mathematically put,

$H(s, t) = \sum_{k=1}^{\ell(s)} D(s_k, t_k) \mid \ell(s) = \ell(t)$

where $\ell(\cdot)$ is the length function and $D(p, q)$ is the dissimilarity function, which returns 1 if two letters are not identical and 0 otherwise.

So, our function would look like this:

hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]

  return(sum(s1 != s2))
}

Not bad, and pretty evident to a seasoned R user, but it would still be a good idea to point out a thing or two. One of these would be that the result of this code will be inaccurate (technically) if the two strings are of different lengths (we could, and will, test for that, but that’s for a later date). The Hamming distance is defined only for equal-length strings, and so it would be good if the user knew what they have to do – and what they’re going to get. Upon pressing Ctrl/Cmd+Shift+Alt+R, RStudio helpfully whips us up a nice little roxygen2 skeleton:

#' Title
#'
#' @param s1
#' @param s2
#'
#' @return
#' @export
#'
#' @examples
hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]

  return(sum(s1 != s2))
}

So, let’s populate it! Most of the fields are fairly self-explanatory. roxygen2, unlike JavaDoc or RST-based Python documentation, does not require formal specification of types – it’s all free text. Also, since it will be parsed into LaTeX someday, you can go wild. A few things deserve mention.

• You can document multiple parameters. Since s1 and s2 are both going to be strings, you can simply write @param s1,s2 The strings to be compared.
• Use \code{...} to typeset something as fixed-width.
• To create links in the documentation, you can use \url{https://chrisvoncsefalvay.com} to link to a URL, \code{\link{someotherfunction}} to refer to the function someotherfunction in the same package, and \code{\link[adifferentpackage]{someotherfunction}} to refer to the function someotherfunction in the adifferentpackage package. Where your function has necessary dependencies outside the current script or the base packages, it is prudent to note them here.
• You can use \seealso{} to refer to other links or other functions, in this package or another, worth checking out.
• Anything you put under the examples will be executed as part of testing and building the documentation. If your intention is to give an idea of what the code looks like in practice, and you don’t want the result or even the side effects, you can surround your example with a \dontrun{...} environment.
• You can draw examples from a file. In this case, you use @example instead of @examples, and specify the path, relative to the file in which the documentation is, to the script: @example docs/examples/hamming.R would be such a directive.
• What’s that @export thing at the end? Quite simply, it tells roxygen2 to export the function to the NAMESPACE file, making it accessible for reference by other documentation files (that’s how, when you use \code{\link[somepackage]{thingamabob}}, roxygen2 knows which package to link to).

With that in mind, here’s what a decent documentation to our Hamming distance function would look like that would pass muster from a defensive programming perspective:

#' Hamming distance
#'
#' Calculates the Hamming distance between two strings of equal length.
#'
#' @param s1,s2 The strings to be compared.
#'
#' @return The Hamming distance between the two strings \code{s1} and \code{s2}, provided as an integer.
#'
#' @section Warning:
#'
#' For a Hamming distance calculation, the input strings must be of equal length. This code does NOT reject input strings of different lengths.
#'
#' @examples
#' hamming("AAGAGTGTCGGCATACGTGTA", "AAGAGCGTCGGCATACGTGTA")
#'
#' @export
hamming <- function(s1, s2) {
  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]

  return(sum(s1 != s2))
}

[Figure: the .Rd file generated from the hamming() function’s roxygen2 docstring – an intermediary format, resembling LaTeX, from which R can build a multitude of documentation outputs.]

This little example shows all that a good documentation does: it provides what to supply the function with and in what type, it provides what it will spit out and in what format, and adequately warns of what is not being checked. It’s always better to check input types, but warnings go a long way.2 From this source, R generates an .Rd file – basically a LaTeX file that it can parse into various forms of documentation – which in the end yields the rendered documentation, with the adequate warning: a win for defensive programming!

## TWO: In God we trust, everyone else we verify.

In the above example, we have taken the user at face value: we assumed that his inputs will be of equal length, and we assumed they will be strings. But because this is a post on defensive programming, we are going to be suspicious, and not trust our user. So let’s make sure we fail early and check what the user supplies us with. In many programming languages, you would be using various assertions (e.g. the assert keyword in Python), but all R has, as far as built-ins are concerned, is stopifnot(). stopifnot() does as the name suggests: if the condition is not met, execution stops with an error message. However, on the whole, it’s fairly clunky, and it most definitely should not be used to check for user inputs. For that, there are three tactics worth considering.

1. assertthat is a package by Hadley Wickham (who else!), which implements a range of assert clauses. Most importantly, unlike stopifnot(), assertthat::assert_that() does a decent job at trying to interpret the error message. Consider our previous Hamming distance example: instead of gracelessly falling on its face, a test using assert_that(length(s1) == length(s2)) would politely inform us that length(s1) not equal to length(s2). That’s worth it for the borderline Canadian politeness alone. (A sketch of the function with such a check follows this list.)
2. Consider the severity of the user input failure. Can it be worked around? For instance, a function requiring an integer may, if it is given a float, try to coerce it to an integer. If you opt for this solution, make sure that you 1) always issue a warning, and 2) always allow the user to run the function in ‘strict’ mode (typically by setting the strict parameter to TRUE), which will raise a fatal error rather than try to logic its way out of this pickle.
3. Finally, make sure that it’s your code that fails, not the system code. Users should have relatively little insight into the internals of the system. And so, if at some point there’ll be a division by an argument foo, you should test whether foo == 0 at the outset and inform the user that foo cannot be zero. By the time the division operation is performed, the variable might have been renamed baz, and the user will not get much actionable intelligence out of the fact that ‘division by zero’ has occurred at some point, and baz was the culprit. Just test early for known incompatibilities, and stop further execution. The same goes, of course, for potentially malicious code.
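As a minimal sketch of the first tactic, here is the earlier Hamming function again, hypothetically hardened with assertthat (the coercion and ‘strict’ mode logic of the second tactic is omitted for brevity):

hamming <- function(s1, s2) {
  # Fail fast: reject non-string inputs before doing any work
  assertthat::assert_that(is.character(s1), is.character(s2))

  s1 <- strsplit(s1, "")[[1]]
  s2 <- strsplit(s2, "")[[1]]

  # The Hamming distance is only defined for equal-length strings
  assertthat::assert_that(length(s1) == length(s2))

  sum(s1 != s2)
}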

In general, your code should be strict as to what it accepts, and you should not be afraid to reject anything that doesn’t look like what you’re looking for. Consider for this not only types but also values, e.g. if the value provided for a timeout in minutes is somewhere north of the lifetime of the universe, you should politely reject such an argument – with a good explanation, of course.

Update: After posting this on Reddit, u/BillWeld pointed out a great idiom for checking user inputs that’s most definitely worth reposting here:

f <- function(a, b, c)
{
  if (getOption("warn") > 0) {
    stopifnot(
      is.numeric(a),
      is.vector(a),
      length(a) == 1,
      is.finite(a),
      a > 0,
      is.character(b)
      # Other requirements go here
    )
  }

  # The main body of the function goes here
}

I find this a great and elegant idiom, although it is your call, as the programmer, to decide which deviations and what degree of incompatibility should cause the function to fail as opposed to merely emit a warning.

## THREE: Keep functions short and sweet.

No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function.

Rationale: Each function should be a logical unit in the code that is understandable and verifiable as a unit. It is much harder to understand a logical unit that spans multiple screens on a computer display or multiple pages when printed. Excessively long functions are often a sign of poorly structured code.3

In practice, with larger screen sizes and higher resolutions, much more than a measly hundred lines fit on a single screen. However, since many users view R code in a quarter-screen window in RStudio, an appropriate figure would be about 60-80 lines. Note that this does not include comments and whitespaces, nor does it penalise indentation styles (functions, conditionals, etc.).

Functions should represent a logical unity. Therefore, if a function needs to be split for compliance with this rule, you should do so in a manner that creates logical units. Typically, one good way is to split functions by the object they act on.

## FOUR: Refer to external functions explicitly.

In R, there are two ways to invoke a function, yet most people don’t tend to be aware of this. Even in many textbooks, the library(package) function is treated as quintessentially analogous to, say, import in Python. This is a fundamental misunderstanding.

In R, packages do not need to be imported in order to be able to invoke their functions, and that’s not what the library() function does anyway. library() attaches a package to the current namespace.

What does this mean? Consider the following example. The foreign package is one of my favourite packages. In my day-to-day practice, I get data from all sorts of environments, and foreign helps me import them. One of its functions, read.epiinfo(), is particularly useful as it imports data from CDC’s free EpiInfo toolkit. Assuming that foreign is in a library accessible to my instance of R,4 I can invoke the read.epiinfo() function in two ways:

• I can directly invoke the function using its canonical name, of the form package::function() – in this case, foreign::read.epiinfo().
• Alternatively, I can attach the entire foreign package to the namespace of the current session using library(foreign). This has three effects, of which the first tends to be well-known, the second less so and the third altogether ignored.
1. Functions in foreign will be directly available. Regardless of the fact that they came from a different package, you will be able to invoke them the same way you invoke, say, a function defined in the same script – by simply calling read.epiinfo().
2. If a function of identical name to any function in foreign was already in the namespace, it will be ‘shadowed’ (masked). The namespace will always refer to the most recently attached function, and the older function will only be available by explicit invocation.
3. When you invoke a function from the namespace, it will not be perfectly clear from a mere reading of the code what the function actually is, or where it comes from. Rather, the user or maintainer will have to guess what a given name represents in the namespace at the time the code runs that particular line.

Controversially, my suggestion is

• to eschew the use of library() altogether, and
• to always explicitly invoke functions that are not in the namespace at startup.

This is not common advice, and many will disagree. That’s fine. Not all code needs to be safety-critical, and importing ggplot2 with library() for a simple plotting script is fine. But where you want code that’s easy to read and can be reliably analysed, you want explicit invocations. Explicit invocations give you three main benefits:

1. You will always know what code will be executed. filter may mean dplyr::filter, stats::filter, and so on, whereas specifically invoking dplyr::filter is unambiguous. You know what the code will be (simply invoking dplyr::filter without parentheses or arguments returns the source), and you know what that code is going to do (see the sketch after this list).
2. Your code will be more predictable. When someone – or something – analyses your code, they will not have to spend so much time trying to guess what a particular identifier refers to within the namespace at any given time.
3. There is no risk that as a ‘side effect’ various other functions you seek to rely on will be removed from the namespace. In interactive coding, R usually warns you and lists all shadowed functions upon importing functions with the same name into the namespace using library(), but for code intended to be bulk executed, this issue has caused a lot of headache.
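To see the ambiguity for yourself, consider this little sketch (assuming dplyr is installed):

library(dplyr)   # masks stats::filter and stats::lag, with a warning

filter           # which filter? Whatever was attached most recently.
dplyr::filter    # unambiguous, regardless of the state of the namespace
stats::filter    # likewise unambiguous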

Obviously, all of this applies to require() as well, although require() should generally not be used at all, as the next rule explains.

## FIVE: Don’t use require() to import a package into the namespace.

Even seasoned R users sometimes forget the difference between library() and require(). The difference is quite simple: while both functions attempt to attach the package argument to the namespace,

• require() returns FALSE (with a warning) if the attach failed, while
• library() simply attaches the package, and raises an error if it cannot.

Just about the only legitimate use for require() is writing an attach-or-install function. In any other case, as Yihui Xie points out, require() is almost definitely the wrong function to use.
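Such an attach-or-install function is about the only place where require()’s FALSE return value earns its keep – a minimal sketch:

if (!require(foreign)) {
  install.packages("foreign")
  library(foreign)
}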

## SIX: Aggressively manage package and version dependencies.

Packrat is one of those packages that have changed what R is like – for the better. Packrat gives every project a private package library, akin to a private /lib/ folder. This is not the place to document the sheer awesomeness of Packrat – you can see that for yourself by doing the Packrat walkthrough. But seriously, use it. Your coworkers will love you for it.
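Getting started is a matter of a few calls – a minimal sketch:

packrat::init()      # give the project its own private package library
packrat::snapshot()  # record the exact package versions currently in use
packrat::restore()   # recreate that library from the snapshot elsewhere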

Equally important is to make sure that you specify the version of R that your code is written against. This is best accomplished on a higher level of configuration, however.

## SEVEN: Use a consistent style and automated code quality tools.

This should be obvious – we’re programmers, which means we’re constitutionally lazy. If it can be solved by code faster than manually, then code it is! Two tools that help you with this are lintr and styler.

• lintr is an amazingly widely supported (from RStudio through vim to Sublime Text 3, I hear a version for microwave ovens is in the works!) linter for R code. Linters improve code quality primarily by enforcing good coding practices rather than good style. One big perk of lintr is that it can be injected rather easily into the Travis CI workflow, which is a big deal for those who maintain multi-contributor projects and use Travis to keep the cats appropriately herded.
• styler was initially designed to help code adhere to the Tidyverse Style Guide, which in my humble opinion is one of the best style guides that have ever existed for R. It can now take any custom style files and reformat your code, either as a function or straight from an RStudio add-in.

So use them.
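For instance, assuming a script called analysis.R in the project root, a minimal check-and-format pass might look like this:

lintr::lint("analysis.R")         # lists style and correctness complaints
styler::style_file("analysis.R")  # reformats the file in place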

## EIGHT: Everything is a package.

Whether you’re writing R code for fun, profit, research or the creation of shareholder value, your coworkers and your clients – rightly! – expect a coherent piece of work product that has everything in one neat package, preferably version controlled. Sending around single R scripts might have been appropriate at some point in the mid-1990s, but it isn’t anymore. And so, your work product should always be structured like a package. As a minimum, this should include:

1. A DESCRIPTION and NAMESPACE file (a minimal example of the former is sketched after this list).
2. The source code, including comments.
3. Where appropriate, data mappings and other ancillary data needed by the code. These normally go into the data/ folder. Where these are large, such as massive shape files, you might consider using Git LFS.
4. Dependencies, preferably in a packrat repo.
5. The documentation, helping users to understand the code and in particular, if the code is to be part of a pipeline, explaining how to interact with the API it exposes.
6. Where the work product is an analysis rather than a bit of code intended to carry out a task, the analysis as vignettes.
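For illustration, a minimal DESCRIPTION file might look like the sketch below – every name in it is, of course, a placeholder:

Package: myanalysis
Title: One-Line Summary of What the Package Does
Version: 0.1.0
Authors@R: person("Jane", "Doe", email = "jane@example.com", role = c("aut", "cre"))
Description: A paragraph-length description of what the analysis or tool does.
License: MIT + file LICENSE
Imports:
    DBI,
    data.table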

To understand the notion of analyses as packages, two outstanding posts by Robert M. Flight are worth reading: part 1 explains the ‘why’ and part 2 explains the ‘how’. Robert’s work is getting a little long in the tooth, and packages like knitr have taken the place of vignettes as analytical outputs, but the principles remain the same. Inasmuch as it is possible, an analysis in R should be a self-contained package, with all the dependencies and data either linked or included. From the perspective of the user, all that should be left for them to do is to execute the analysis.

## NINE: Power in names.

There are only two hard things in Computer Science: cache invalidation and naming things.

Phil Karlton

In general, R has a fair few idiosyncrasies in naming things. For starters, dots/periods (.) are perfectly permitted in variable names (and thus function names), whereas in most languages the dot is a binary operator that retrieves the method named by its second operand from the object that is its first operand:

$a.b(args) = dot(a, b, args) = a_{Method: b}(args)$

For instance, in Python, wallet.pay(arg1, arg2) means ‘invoke the method pay of the object wallet with the arguments arg1 and arg2‘. In R, on the other hand, the dot is a character like any other, and therefore there is no special meaning attached to it – you can even have a variable containing multiple dots, or in fact a variable whose name consists entirely of dots5 – in R, .......... is a perfectly valid variable name. It is also a typical example of the fact that just because you can do something doesn’t mean you also should do so.

A few straightforward rules for variable names in R are worth abiding by:

1. Above all, be consistent. That’s more important than whatever you choose.
2. Some style guides, including Google’s R style guide, treat variables, functions and constants as different entities in respect of naming. This is, in my not-so-humble opinion, a blatant misunderstanding of the fact that functions are variables of the type function, and not some distinct breed of animal. Therefore, I recommend using a unitary schema for all variables, callable or not.
3. In order of my preference, the following are legitimate options for naming:
• Underscore separated: average_speed
• Dot separated: average.speed
• JavaScript style lower-case CamelCase: averageSpeed
4. Things that don’t belong in identifiers: hyphens, non-alphanumeric characters, emojis (🤦🏼‍♂️) and other horrors.
5. Where identifiers are hierarchical, it is better to start representing them as hierarchical objects rather than assigning them to different variables. For example, instead of monthly_forecast_january, monthly_forecast_february and so on, it is better to have an associative array – in R, a named list – called forecasts, in which the forecasts are keyed by month name and can then be retrieved by key, using the $ or [[ ]] accessors. If your naming has half a dozen components, maybe it’s time to think about structuring your data better. Finally, in some cases, the same data may be represented in multiple formats – for instance, data about productivity is first imported as a text file, and then converted into a data frame. In such cases, Hungarian notation may be legitimate, e.g. txt_productivity or productivity.txt vs df_productivity or productivity.df. This is more or less the only case in which Hungarian notation is appropriate.6

And while we’re at variables: never, ever, ever use = to assign. Every time you do it, a kitten dies of sadness.

For file naming, some sensible rules have served me well, and will hopefully be equally useful for you:

1. File names should be descriptive, but no longer than 63 characters.
2. File names should be all lower case, separated by underscores, and end in .R. That’s a capital R. Not a lower-case r. EveR.
3. Where there is a ‘head’ script that sequentially invokes (sources) a number of subsidiary scripts, it is common for the head script to be called 00-.R, and the rest given a sequential number corresponding to the order in which they are sourced and a descriptive name, e.g. 01-load_data_from_db.R, 02-transform_data_and_anonymise_records.R and so on.
4. Where there is a core script, but it does not invoke other files sequentially, it is common for the core script to be called 00-base.R or main.R. As long as it’s somewhere made clear to the user which file to execute, all is fair.
5. The injunction against emojis and other nonsense holds for file names, too.

## TEN: Know the rules and their rationale, so that you know when to break them.

It’s important to understand why style rules and defensive programming principles exist. How else would we know which rules we can break, and when? The reality is that there are no rules, in any field, that do not ever permit of exceptions. And defensive programming rules are no exception. Rules are tools that help us get our work done better and more reliably, not some abstract holy scripture. With that in mind, when can you ignore these rules?

• You can, of course, always ignore these rules if you’re working on your own, and most of your work is interactive. You’re going to screw yourself over, but that’s your right and privilege.
• Adhering to common standards is more important than doing what some dude on the internet (i.e. my good self) thinks is good R coding practice. Coherence and consistency are crucial, and you’ll have to stick to your team’s style over your own ideas. You can propose to change those rules, you can suggest that they be altogether redrafted, and link them to this page. But don’t go out and start following a style of your own just because you think it’s better (even if you’re right).
• It’s always a good idea to appoint a suitable individual – with lots of experience and little ego, ideally! – as code quality standards coordinator (CQSC). They will then centrally coordinate adherence to standards, train on defensive coding practices, review operational adherence, manage tools and configurations and onboard new people.
• Equally, having an ‘editor settings repo’ is pretty useful. This should support, at the very least, RStudio, lintr and styler.
• Some prefer to have the style guide as a GitHub or Confluence wiki – I generally advise against that, as that cannot be tracked and versioned as well as, say, a bunch of RST files that are collated together using Sphinx, or some RMarkdown files that are rendered automatically upon update using a GitHub webhook.

## Conclusion

[su_pullquote align=”right”]Always code as if your life, or that of your loved ones, depended on the code you write – because it very well may someday.[/su_pullquote]In the end, defensive programming may not be critical for you at all. You may never need to use it, and even if you do, the chances that as an R programmer your code will have to live up to defensive programming rules and requirements are much lower than, say, for an embedded programmer. But algorithms, including those written in R, create an increasing amount of data that is used to support major decisions. What your code spits out may decide someone’s career, whether someone can get a loan, whether they can be insured or whether their health insurance will be dropped. It may even decide a whole company’s fate. This is an inescapable consequence of the algorithmic world we now live in.

A few years ago, at a seminar on coding to FDA CDRH standards, the instructor finished by giving us his overriding rule: in the end, always code as if your life, or that of your loved ones, depended on the code you write (because it very well may someday!). This may sound dramatic in the context of R, which is primarily a statistical programming language, but the point stands: code has real-life consequences, and we owe it to those whose lives, careers or livelihoods depend on our code to give them the kind of code that we wish to rely on: well-tested, reliable and stable.

References

1. One crucial element here: keep in mind the order of parameters – because the explicit should override the implied, your code should look at the arguments first, then the environment variables, then the configuration file!
2. Eagle-eyed readers might spot the mutation in the example. This is indeed the T310C mutation on the TNFRSF13B gene (17p11.2), which causes autosomal dominant type 2 Common Variable Immune Deficiency, a condition I have, but manage through regular IVIG, prophylactic antibiotics and aggressive monitoring.
3. Holzmann, G.J. (2006). The Power of 10: Rules for developing safety-critical code. IEEE Computer 39(6):95-99.
4. R stores packages in libraries, which are basically folders where the packages reside. There are global packages as well as user-specific packages. For a useful overview of packages and package installation strategies, read this summary by Nathan Stephens.
5. The exception is ..., which is not a valid variable name as it collides with the splat operator.
6. In fact, the necessity of even this case is quite arguable. A much better practice would be simply importing the text as productivity, then transforming it into a data frame, if there’s no need for both versions to continue coexisting.

# Is Senomyx putting baby parts into your fruit juice? The answer may (not) surprise you.
I know a great number of people who oppose abortion, and who are therefore opposed to the biomedical use of tissue or cells that have been derived from foetuses aborted for that very purpose, although most of them do not oppose the use of foetal tissue from foetuses aborted for some other reason.1 Over the last week or so, many have sent me a story involving a company named Senomyx, wondering if it’s true. In particular, the story claimed that Senomyx uses foetal cells, or substances derived from foetal cell lines, to create artificial flavourings. One article I have been referred to is straight from the immediately credible-sounding Natural News:2 3

Every time you purchase mass-produced processed “food” from the likes of Kraft, PepsiCo, or Nestle, you’re choosing, whether you realize it or not, to feed your family not only genetically engineered poisons and chemical additives, but also various flavoring agents manufactured using the tissue of aborted human babies.

It’s true: A company based out of California, known as Senomyx, is in the business of using aborted embryonic cells to test fake flavoring chemicals, both savory and sweet, which are then added to things like soft drinks, candy and cookies. And Senomyx has admittedly partnered with a number of major food manufacturers to lace its cannibalistic additives into all sorts of factory foods scarfed down by millions of American consumers every single day.

Needless to say, this is total bullcrap, and in the following, we’re going on a ride in the Magic Schoolbus through this fetid swamp of lies, half-truths and quarter-wits.

## In the beginning was the cell culture…

About ten years before I was born, a healthy foetus was (legally) aborted in Leiden, the Netherlands, and samples were taken from the foetus’s kidney. They were much like the cells in your kidney. Like the cells in your kidney (hopefully!), they had a beginning and an end. That end is known as the Hayflick limit or ‘replicative senescence limit’. Cells contain small ‘caps’ at the end of their chromosomes, known as telomeres.4 At every mitosis, these shorten a little, like a countdown clock, showing how many divisions the cell has left. And once the telomeres are all gone, the cell enters a stage called cellular senescence, and is permanently stuck in the G1 phase of the cell cycle, unable to move on to phase S. This is a self-preservative mechanism: with every mitosis, the cell line ages, and becomes more likely to start suffering from errors it passes on to its descendants. Experimentally, we know that the Hayflick limit is around 40-60 divisions.

But every rule has an exception. Most obviously, there are the cells that just won’t die – this is the case in cancers. And so, some cancer cells have this senescence mechanism disabled, and can divide an unlimited number of times. They have become practically immortal. This is, of course, a nuisance if they are inside a human, because proliferation and division of those cells causes unpleasant things like tumours. For researchers, however, they offer something precious: a cell line that will, as long as it’s fed and watered, live and divide indefinitely. This is called an immortal cell line. The most famous of them, HeLa, began life as cervical epithelial cells of a woman named Henrietta Lacks. Then something dreadful happened, and the cells’ cell cycle regulation was disrupted – Ms Lacks developed aggressive cervical cancer, and died on October 4, 1951.
Before that, without her permission or consent, samples were taken from her cervix, and passed on to George Otto Gey, a cell biologist who realised what would be Ms Lacks’s most lasting heritage: her cells would divide and divide, well beyond the Hayflick limit, with no end in sight. Finally, cell biologists had the Holy Grail: a single cell line that produced near-exact copies of itself, all descendants of a single cell from the cervix of a destitute African-American woman, who died without a penny to her name, but whose cells would for decades continue to save lives – among others, Salk’s polio vaccine was cultured in HeLa cells.5

HeLa cells were immortal ‘by nature’ – they underwent changes that have rendered them cancerous/immortal, depending on perspective. But it’s also possible to take regular mortal cells and interfere with their cell cycle to render them immortal. And this brings us back to the kidney cells of the foetus aborted in the 1970s, a life that was never born yet will never die, but live on as HEK 293. A cell biologist, Frank Graham, working in Alex van der Eb’s lab at the University of Leiden,6 used the E1 gene from adenovirus 5 to ‘immortalise’ the cell line, effectively rewriting the cell’s internal regulation mechanism to allow it to continue to divide indefinitely rather than enter cellular senescence.7 This became the cell culture HEK 293,8 one of my favourite cell cultures altogether, with an almost transcendental beauty in symmetry.

HEK 293 is today a stock cell line, used for a range of experimental and production processes. You can buy it from Sigma-Aldrich for a little north of £350 a vial. The cells you’re getting will be exact genetic copies of the initial cell sample, and its direct genetic descendants. That’s the point of an immortalised cell line: you can alter a single cell to effectively divide indefinitely as long as the requisite nutrients, space and temperature are present. They are immensely useful for two reasons:

• You don’t need to take new cell samples every time you need a few million cells to tinker with.
• All cells in a cell line are perfectly identical.9

It goes without saying just how important for reproducible science it is to have widely available, standardised, reference cell lines. Admittedly, this was a whistle-stop tour through cell cultures and immortal(ised) cell lines, but the basics should be clear. Right? I mean… right?

## Unless you’re Sayer Ji.

Until about 1900 today, I had absolutely no idea of who Sayer Ji is, and I would lie if I asserted my life was drastically impoverished by that particular ignorance. Sayer Ji runs a website (which I am not going to link to, but I am going to link to RationalWiki’s entry on it, as well as Orac’s collected works on the man, the myth and the bullshit), and he describes himself as a ‘thought leader’. That alone sets off big alarms. Mr Ji, to the best of my knowledge, has no medical credentials whatsoever, nor any accredited credentials that would allow him to make the grandiose statements that he seems to indulge in with the moderation of Tony Montana when it comes to cocaine:

There is not a single disease ever identified cause by a lack of a drug, but there are diseases caused by a lack of vitamins, minerals and nutrients. Why, then, do we consider the former – chemical medicine – the standard of care and food-as-medicine as quackery? Money and power is the obvious answer.
You may question why I even engage with someone whose cognition is operating on this level, and you might be justified in doing so. Please ascribe it to my masochistic tendencies, or consider it a sacrifice for the common good. Either way, I pushed through a particular article of his which is cited quite extensively in the context of Senomyx, titled Biotech’s Dark Promise: Involuntary Cannibalism for All.10 In short, Mr Ji’s ‘article’ rests on the fundamental assertion that ‘abortion tainted vaccines’, among which he ranks anything derived from HEK 293 (which he just refers to as 293),11 constitute cannibalism. He is, of course, entirely misguided, and does not understand how cell line derived vaccines work:

Whereas cannibalism is considered by most modern societies to be the ultimate expression of uncivilized or barbaric behavior, it is intrinsic to many of the Western world’s most prized biotechnological and medical innovations. Probably the most ‘taken for granted’ example of this is the use of live, aborted fetus cell lines from induced abortions to produce vaccines. Known as diploid cell vaccines (diploid cells have two (di-) sets of chromosomes inherited from human mother and father), they are non-continuous (unlike cancer cells), and therefore must be continually replaced, i.e. new aborted, live fetal tissue must be harvested periodically.

For the time being, the VRBPAC has mooted but not approved tumour cell line derived vaccines, and is unlikely to do so anytime soon. However, the idea that diploid cell vaccines need a constant influx of cells is completely idiotic, and reveals Mr Ji’s profound ignorance.12 WI-38, for instance, is a diploid human cell line, and perfectly ‘continuous’ (by which he means immortal). There is no new “aborted, live fetal tissue” that “must be harvested periodically”. Equally, Mr Ji is unaware of the idea that vaccines do not typically contain cells from the culture, but only the proteins, VLPs or virions expressed by the cells. There’s no cannibalism if no cells are consumed, and the denaturation process (the attenuation part of attenuated vaccines) is already breaking whatever cellular parts there are to hell and back. On the infinitesimal chance that a whole cell has made it through, it will be blasted into a million little pieces by the body’s immune system – being a foreign cell around a bunch of adjuvant is like breaking into a bank vault right across from the local police station while expressly alerting the police you’re about to crack the safe and, just to be sure, providing them with a handy link for a live stream.13

## From cell lines to Diet Coke: Senomyx and high throughput receptor ligand screening

If you have soldiered on until now, good job. This is where the fun part comes – debunking the fear propaganda against Senomyx by a collection of staggering ignorami.

Senomyx is a company with a pretty clever business model. Instead of using human probands to develop taste enhancers (aka ‘flavourings’) and scents, Senomyx uses a foetal cell line, specifically HEK 293, to express certain receptors that mimic the taste receptors in the human body. Then, it tests a vast number of candidate compounds on them, and sees which elicit a particular reaction. That product (which typically is a complex organic chemical but has nothing to do with the foetal cell!) is then patented and goes into your food. To reiterate the obvious: no foetal cells ever get anywhere near your food.
The kerfuffle around this is remarkably stupid, because this is basically the same as High Throughput Screening (HTS), a core component of drug development today.14 Let's go through how a drug is developed, with a fairly simple and entirely unrealistic example.15 We know that low postsynaptic levels of monoamines, especially of serotonin (but also dopamine and norepinephrine), correlate with low mood. One way to try to increase postsynaptic levels is to inhibit the breakdown of monoamines, which is carried out by an enzyme called monoamine oxidase (MAO). But how do we find out which of the several thousand promising MAO inhibitors our computer model spat out will actually work best? We can't run a clinical trial for each. Not even an animal test. But we can run a high throughput screen. Here's a much simplified example of how that could be done (it's not how it's actually done anymore, but it gets the idea across).

1. We take a microtitre plate (a flat plate with up to hundreds of little holes called 'wells', each taking about half a millilitre of fluid) and fill it with our favourite monoamine neurotransmitters. Mmm, yummy serotonin! But because we're tricky, we label them with a fluorescent tag or fluorophore, a substance that gives off light when excited by light of a particular wavelength, but only as long as it has not been oxidised by MAO.
2. We add a tiny amount of each of our putative drugs to a different well of the microtitre plate.
3. Then, we add some monoamine oxidase to each well.
4. When illuminated at the wavelength that excites the fluorophores, some wells will light up brightly, others will remain fairly dark. A bright well indicates that the candidate drug in that well has largely inhibited monoamine oxidase, and thus the monoamine neurotransmitter remained intact. A dark well indicates that most or all of the monoamines were oxidised and as such no longer give off light.

This helps us whittle down thousands of candidate molecules to hundreds or even dozens. What Senomyx does is largely similar. While their process is proprietary, it is evidently analogous to high throughput screening. The human cell lines are modified to express receptors analogous to those in taste buds. These are exposed to potential flavourings, and the degree to which they stimulate the receptor can be quantified and compared, so that you can gauge, for example, what quantity of a new flavouring would offset one gram of sugar. The candidate substances are then tested on real people, too. Some of the substances are not sweeteners themselves but taste intensifiers, which interact with sugar to intensify its sweet taste sensation. It's a fascinating technology and a great idea, with huge potential to reduce the amount of sugars, salts and other harmful dietary flavourings in many meals. The end product is a flavouring – a molecule that has nothing to do with aborted cells (an example, the potent cooling sensate S2227, is depicted to the left – I dare anyone to show me where the aborted foetal cells are!). Soylent green, it turns out, is not people babies after all.

## Conclusion

Now, two matters are outstanding. One is the safety of these substances. That, however, is irrelevant to how they were isolated: the product of a high-throughput screen is no more or less likely to be toxic than something derived from nature. The second point is a little more subtle.
A number of critics of Senomyx point out that this is, in a sense, deceiving the customer, and – with pearl-clutching that would have won them awards in the 1940s – point out that companies want one thing, and it's absolutely filthy:16

> Companies like PepsiCo and Nestle S.A. seek to gain over competitors. To do so, they boast products like "reduced sugar", "reduced sodium", "no msg", etc. Sales profits and stocks increase when consumers believe they eat or drink a healthier product. The Weston A Price Foundation carries skepticism. They believe that the bottom line is what's important. Companies only want to decrease the cost of goods for increased profits. Shareholders only care about stock prices and investment potential.

Err… and you expected what? There is absolutely no doubt that something that gives your body the taste of salt without the adverse effects of a high-sodium diet is A Good Thing – consumers do not merely think they're getting a healthier product, they are getting a product with the same taste they prefer, but without the adverse dietary consequences (in other words: a healthier product). To people who have to adhere to a strict low-sodium diet due to kidney disease, heart disease, hypertension or other health issues, this could well give back a craved-for flavour and improve quality of life. To people struggling with obesity, losing weight without having to say no to their favourite drink can mean better health outcomes. People with peanut allergies could enjoy a Reese's Cup made with a synthetic, chemically different protein that creates the same taste sensation, without risking anaphylaxis. In the end, these are the things that matter, and they should matter more than the fact that – shock horror! – someone is making money out of this.

All Senomyx does is what drug companies have been doing for decades, and what an increasing number of companies will do. But yet again, the cynics who see lizardoid conspiracies and corporate deceit behind every wall – who know the price of everything but haven't thought about its value for a second – have seized upon another talking point. They did so by exploiting a genuine pro-life sentiment many hold dear, intentionally misrepresenting or recklessly misunderstanding the fact that no aborted tissue will ever make its way into your Coke, nor will there be a need for a stream of abortions to feed a burgeoning industry of artificial taste bud cell lines. If anybody here is exploitative, it's not Senomyx – it's those who seize upon the universal human injunction against cannibalism and infanticide to push a scientifically incorrect, debunked agenda against something they themselves barely understand.

References

1. This is not a post about the politics of abortion, Roe v Wade, pro-life vs pro-choice, religion vs science or any of the rest. It is about laying a pernicious lie to rest. My position regarding abortion is quite irrelevant to this, as is theirs, but it deserves mention that all of the people who got in touch hold very genuine and consistent views about the sanctity of life. Please let's not make it about something it isn't about.
2. Needless to say, this is not my friends' usual fare; they too have seen it on social media and were quite dubious.
3. In line with our linking policy, we do not link to pages that endorse violence, hate or discrimination.
4. From Greek τέλος, 'end'.
5. Finally, Rebecca Skloot's amazing book, The Immortal Life of Henrietta Lacks, paid a long overdue tribute to the mother of modern medicine in 2010.
Her book is a must-read for anyone who wants to understand the ethical complexities of immortal cell lines, as well as the touching story of a woman whom we for so long knew by initials only, yet owe such a debt to.
6. Disclosure: the University of Leiden is my alma mater; I spent a wonderful year there and received great treatment at LUMC, the university hospital. I am not, and have never been, in receipt of funds from the university.
7. As such, these cells represent an immortalised cell line, as opposed to an immortal cell line like HeLa, where the change to cell cycle regulation had already occurred.
8. Human, Embryonic, Kidney, from experiment #293.
9. Sort of. Like all human processes, cell culturing is not perfect. One risk is a so-called 'contaminated cell line', the classic case study being the Chang Liver cell line, which turned out to be all HeLa, all the way. This is not only a significant problem, it is also responsible for huge monetary losses and wasted research effort. You can read more about this, and what scientists are doing to combat the problem, here.
10. For ethical reasons, this blog refuses to link to, and thus create revenue for, quacks, extremists and pseudoscientists. However, where the source material is indispensable, an archival service is used to obtain a snapshot of the website, so that you, too, can safely peruse Mr Ji's nonsense without making him any money. GreenMedInfo has a whole tedious page on how to cite their nonsense, which I am going to roundly ignore, because a) it looks and reads like it was written by someone who flunked out of pre-law in his sophomore year, and b) 17 U.S.C. §107 explicitly guarantees fair use rights for scholarship, research and criticism.
11. Curiously, he does not mention WI-38 and MRC-5, both cell lines derived from the lung epithelial cells of aborted foetuses, which are widely used in vaccine production…
12. The word 'profound' is especially meet in this context.
13. Which sounds like something someone MUST have done already. Come on. It's 2018.
14. This article is a fantastic illustration of just how powerful this technique is!
15. Unrealistic, because we know all there is to know about monoamine oxidase inhibitors, and there's no point in researching them much more – but it illustrates the point well!
16. Yes, I brought THAT meme in here!

# On dealing with failed trials

> To survive this moment, you need to be able to look everyone in the eye and remind them that the trial was designed to rigorously answer an important question. If this is true, then you have likely just learned something useful, regardless of whether the intervention worked or not. Further, if the study was well designed, other people will be more likely to trust your result and act on it. In other words, there is no such thing as a "negative" trial result – the answer given by a well-designed trial is always useful, whether the intervention worked or not. So you simply remind everyone of this. People will still be disappointed of course – we'd be fools to test treatments if we didn't think they worked, and it's natural to hope. But at least we know we ran a good trial and added knowledge to the world – that's not nothing.

I rarely post quotes, but this is important enough to have it here. Go read the whole thing, now. I tend to say that for scientists, there are no failed trials. 'Failed trials' is investor-speak – and that's not to say anything bad about investors or their perspective.
They’re in the business to make money, and to them, a trial that means the drug candidate they just blew a billion dollars to develop will never make it to the market is, in a profound sense, a failure. But to us as scientists, it’s just another step on the journey of understanding more and more about this world (and as any scientist knows, failure is largely the bread and butter of experimental work). We’ve found another way that doesn’t work. Maybe we’ve saved lives, even – failed experiments have often stemmed the tide of predatory therapies or found toxicities that explain certain reactions with other drugs as well. Maybe we’ve understood something that may be useful somewhere else. And at worst, we’ve learned which road not to go down. And to a scientist, that should not ever feel like a failure. # Bayesian reasoning in clinical diagnostics: a primer. We know, from the source of eternal wisdom that is Saturday Morning Breakfast Cereal, that insufficient math education is the basis of the entire Western economy.1 This makes Bayesian logic and reasoning about probabilities almost like a dark art, a well-kept secret that only a few seem to know (and it shouldn’t be… but that’s a different story). This weird-wonderful argument, reflecting a much-reiterated meme about vaccines and vaccine efficacy, is a good example: The argument, here, in case you are not familiar with the latest in anti-vaccination fallacies, is that vaccines don’t work, and they have not reduced the incidence of vaccine-preventable diseases. Rather, if a person is vaccinated for, say, measles, then despite displaying clinical signs of measles, he will be considered to have a different disease, and therefore all disease statistics proving the efficacy of vaccines are wrong. Now, that’s clearly nonsense, but it highlights one interesting point, one that has a massive bearing on computational systems drawing conclusions from evidence: namely, the internal Bayesian logic of the diagnostic process. Which, incidentally, is the most important thing that they didn’t teach you in school. Bayesian logic, that is. Shockingly, they don’t even teach much of it in medical school unless you do research, and even there it’s seen as a predictive method, not a tool to make sense of analytical process. Which is a pity. The reason why idiotic arguments like the above by @Cattlechildren proliferate is that physicians have been taught how to diagnose well, but never how to explain and reason about the diagnostic process. This was true for the generations before me, and is more or less true for those still in med school today. What is often covered up with nebulous concepts like ‘clinical experience’ is in fact solid Bayesian reasoning. Knowing the mathematical fundamentals of the thought process you are using day to day, and which help you make the right decisions every day in the clinic, helps you reason about it, find weak points, answer challenges and respond to them. For this reason, my highest hope is that as many MDs, epidemiologists, med students, RNs, NPs and other clinical decision-makers will engage with this topic, even if it’s a little long. I promise, it’s worth it. 
## Some basic ideas about probability

In probability, an event, usually denoted with a capital letter and customarily starting at $A$ (I have no idea why, as it makes things only more confusing!), is any outcome or incidence we're interested in – as long as it is binary, that is, it either happens or doesn't happen, and discrete, that is, there's a clear enough definition for it that we can decide whether it has happened – no half-way events for now.2 In other words, an event can't happen and not happen at the same time. Or, to get used to the notation of conditionality, $p(A \mid \neg A) = 0$.3 A thing cannot be both true and false.

Now, we may be interested in how likely it is for an event to happen if another event happens: how likely is $A$ if $B$ holds true? This is denoted as $p(A \mid B)$, and for now, the most important thing to keep in mind about it is that it is not necessarily the same as $p(B \mid A)$!4

Bayesian logic deals with the notion of conditional probabilities – in other words, the probability of one event given another.5 It is one of the most widely misunderstood parts of probability, yet it is crucial to understanding the way we reason about things. Just to see how important this is, let us consider a classic example.

## Case study 1: speed cameras

Your local authority is broke. And so, it does what local authorities do when they're broke: ~~play poker with the borough credit card~~ set up a bunch of speed cameras and fine drivers. Over this particular stretch of road, the speed limit is 60mph. According to the manufacturer, the speed cameras are very sensitive, but not very specific. In other words, they never falsely indicate that a driver was below the speed limit, but they may falsely indicate that a driver was above it, in about 3% of cases (the false positive rate).

One morning, you're greeted by a letter in your postbox notifying you that you've driven too fast and fining you a rather respectable amount of cash. What is the probability that you have indeed driven too fast? You may feel inclined to blurt out 97%. That, in fact, is wrong.

### Explanation

It's rather counter-intuitive at first, until we consider the problem in formal terms. We know the probability $p(A \mid \neg B)$, that is, the probability of being snapped ($A$) even though you were not speeding ($\neg B$). But what the question asks for is the likelihood that you were, in fact, speeding ($B$) given that you were snapped ($A$). And as we have learned, the conditional probability operator is not commutative: $p(A \mid B)$ is not necessarily the same as $p(B \mid A)$.

Why is that the case? Because base rates matter. In other words, the probabilities of $A$ and $B$, in and of themselves, are material. Consider, for a moment, the unlikely scenario of living in that mythical wonderland of law-abiding citizens where nobody speeds. Then it does not matter how many drivers are snapped – all of them are false positives, and thus $p(B \mid A)$, the probability of speeding ($B$) given that one got snapped ($A$), is actually zero. In other words, if we want to reverse the conditional operator, we need to make allowances for the 'base frequency', the ordinary frequency with which each event occurs on its own.

To overcome base frequency neglect,6 we have a mathematical tool, courtesy of the good Revd. Thomas Bayes, who sayeth that, verily,

$p(B \mid A) = \frac{p(A \mid B)\,p(B)}{p(A)}$

Or, in words: if you want to reverse the probabilities, you will have to take the base rates of each event into account. What we know is the likelihood of being snapped if you were not speeding; what we want is the likelihood that someone who got snapped was indeed speeding. To get from the former to the latter, we'll need to know a few more things.
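Spelling out the denominator with the law of total probability shows exactly what those 'few more things' are:

$p(B \mid A) = \frac{p(A \mid B)\,p(B)}{p(A \mid B)\,p(B) + p(A \mid \neg B)\,p(\neg B)}$

That is: the probability of being snapped while speeding, the base rate of speeding, and the false positive rate. The next section supplies each of these.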

### Case study 1: Speed cameras – continued

• We know that the speed cameras have a Type II (false negative) error rate of zero – in other words, if you are speeding ($B$), you are guaranteed to get snapped ($A$) – thus, $p(A \mid B)$ is 1.
• We also know from the Highway Authority, who were using a different and more accurate measurement system, that approximately one in 1,000 drivers is speeding ($p(B) = 0.001$).
• Finally, we know that of 1,000 drivers, about 31 will be snapped: the one speeder, plus roughly 3% of the 999 non-speeders falsely flagged. Formally, $p(A) = p(A \mid B)\,p(B) + p(A \mid \neg B)\,p(\neg B) = 1 \cdot 0.001 + 0.03 \cdot 0.999 \approx 0.031$.

Putting that into our equation,

$p(B \mid A) = \frac{p(A \mid B)\,p(B)}{p(A)} = \frac{1 \cdot 0.001}{0.031} \approx 0.032$

In other words, the likelihood that we indeed did exceed the speed limit is just barely north of 3%. That’s a far cry from the ‘intuitive’ answer of 97% (quite accidentally, it’s almost the inverse).
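If you would rather let R do the arithmetic, here is a minimal sketch of the same calculation – my own illustration, using the numbers from the worked example above:

# Bayes' theorem: p(B|A) = p(A|B) * p(B) / p(A)
bayes_posterior <- function(p_a_given_b, p_b, p_a) {
  p_a_given_b * p_b / p_a
}

p_b <- 0.001          # p(B): 1 in 1,000 drivers speeds
p_a_given_b <- 1      # p(A|B): speeders always get snapped

# p(A) via the law of total probability: true positives plus false positives
p_a <- p_a_given_b * p_b + 0.03 * (1 - p_b)

bayes_posterior(p_a_given_b, p_b, p_a)
# ~0.032, i.e. just barely north of 3%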

## Diagnostics, probabilities and Bayesian logic

The procedure of medical diagnostics is ultimately a relatively simple algorithm:

1. create a list of possibilities, however remote (the process of differential diagnostics),
2. rank them in order of likelihood,
3. update priors as you run tests.7

From a statistical perspective, this is implemented as follows.

1. We begin by running a number of tests, specifically $m$ of them. It is assumed that the tests are independent from each other, i.e. the value of one does not affect the value of another. Let $R_j$ denote the results of test $j \leq m$.
   1. For each test, we need to iterate over all our differentials $D_{i \ldots n}$, and determine the probability of each in light of the new evidence, i.e. $p(D_i \mid R_j)$.
   2. So, let's take the results of test $j$, which yielded $R_j$, and the putative diagnosis $D_i$. What we're interested in is $p(D_i \mid R_j)$, that is, the probability of the putative diagnosis given the new evidence. Or, to use Bayesian lingo, we are updating our prior: we had a previous probability assigned to $D_i$, which may have been a uniform probability or some other probability, and we are now updating it – seeing how likely it is given the new evidence – to obtain what is referred to as a posterior.8
   3. To calculate the posterior $p(D_i \mid R_j)$, we need to know three things: the sensitivity and specificity of test $j$ (I'll call these $S^+_j$ and $S^-_j$, respectively), the overall incidence of $D_i$,9 and the overall incidence of the particular result $R_j$.
   4. Plugging these variables into our beloved Bayesian formula, we get $p(D_i \mid R_j) = \frac{p(R_j \mid D_i)\,p(D_i)}{p(R_j)}$.
   5. We know that $p(R_j \mid D_i)$, that is, the probability that someone will test a particular way if they do have the condition $D_i$, is connected to sensitivity and specificity: if $R_j$ is supposed to be positive when the patient has $D_i$, then $p(R_j \mid D_i) = S^+_j$ (the sensitivity), whereas if the test is supposed to be negative when the patient has $D_i$, then $p(R_j \mid D_i) = S^-_j$ (the specificity).
   6. We also know, or are supposed to know, the overall incidence of $D_i$ and the probability of a particular outcome, $R_j$. With that, we can update our prior for $D_i \mid R_j$.
2. We iterate over each of the tests, updating the priors every time new evidence comes in.

This may sound daunting and highly mathematical, but in fact most physicians have it down to an innate skill – so much so that when I explained this to a group of FY2 doctors, they couldn't believe it, until they thought about how they thought. And that's a key issue here: thinking about the way we arrive at results is important, because it is the bedrock of making those results intelligible to others.

## Case study 2: ATA testing for coeliac disease

For a worked example of this in the diagnosis of coeliac disease, check Notebook 1: ATA case study. It puts things in the context of sensitivity and specificity in medical testing, and is in many ways quite similar to the example above, except that here we're working with a real-world test and real-world uncertainties.

There are several ways of testing for coeliac disease, an autoimmune disorder in which the body responds to gluten proteins (gliadins and glutenins) in wheats, wheat hybrids, barley, oats and rye. One diagnostic approach looks at genetic markers in the HLA-DQ (Human Leukocyte Antigen type DQ) region, part of the MHC (Major Histocompatibility Complex) Class II receptor system. Genetic testing for a particular haplotype of the HLA-DQ2 gene, called DQ2.5, can lead to a diagnosis in most patients, but unfortunately it's slow and expensive. Another test, a colonoscopic biopsy of the intestines, examines the intestinal villi – short protrusions (about 1mm long) into the intestine – for tell-tale damage, but this test is unpleasant, possibly painful and costly. So a more frequent approach is to look for evidence of an autoantibody called anti-tissue transglutaminase antibody (ATA) – unrelated to the HLA-DQ genes, sadly.
ATA testing is cheap and cheerful, and relatively good, with a sensitivity ($S^+_{ATA}$) of 85% and a specificity ($S^-_{ATA}$) of 97%.10 We also know the rough probability of a sample being from someone who actually has coeliac disease – for a referral lab, it's about 1%. Let's consider the following case study: a patient gets tested for coeliac disease using the ATA test described above. Depending on whether the test is positive or negative, what are the chances she has coeliac disease?

If you've read the notebook, you know by now that the probability of having coeliac disease if testing positive is around 22%, or a little better than one in five. And from the accompanying visualisation, you could see that small incremental improvements in specificity would yield far more gain in accuracy (marginal accuracy gain) than equal increases in sensitivity.

While quite simple, this is a good case study because it emphasises a few essential things about Bayesian reasoning:

• Always know your baselines. In this case, we took a baseline of 1%, even though the prevalence of coeliac disease in the general population is closer to a quarter of that. Why? Because we don't spot-test people for coeliac disease. People who get tested do so because they exhibit symptoms that may or may not be coeliac disease, and by definition they have a higher prevalence11 of coeliac disease. The factor of four is, of course, entirely imaginary – you would normally need to know, or have a way to figure out, the true baseline values.

• Use independent baselines. It is absolutely crucial to make sure that your baselines do not come from your own measurement process. In this case, for instance, the incidence of coeliac disease should not be calculated as your own lab's number of positive tests divided by total tests: that merely allows false positives and negatives to propagate, however minuscule their effect. A good way is to do follow-up studies – checking how many of the patients who tested positive or negative for ATA were further tested using other, often more reliable methodologies – and to calculate the proportion of actual cases coming through your door by reference to that.

## Case study 3: Vaccines in differential diagnosis

This case is slightly different, as we are going to compare two scenarios. Both concern $D_{VPD}$, a somewhat contrived vaccine-preventable illness. $D_{VPD}$ produces a very particular symptom or symptom set, $S$, and produces it in every case, without fail.12 The question is: how does vaccination status affect the differential diagnosis of two otherwise identical patients,13 presenting with the same symptoms $S$, one of whom is unvaccinated?

It has been a regrettably enduring trope of the anti-vaccination movement that because doctors believe vaccines work, they will not diagnose a vaccinated patient with a vaccine-preventable disease (VPD), simply striking it off the differential diagnosis or substituting a different diagnosis for it.14 The reality is explored in this notebook, which compares two scenarios involving the same condition and two persons whose sole difference is vaccination status. That difference alone makes a massive – roughly 7,800-fold – difference between the likelihood of the vaccinated and the unvaccinated person having the disease. The result is that an outcome 7,800 times less likely slides down the differential. As NZ paediatrician Dr Greenhouse (@greenhousemd) noted, "it's good medical care".
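As the notebooks themselves live elsewhere, here is a small self-contained R sketch – my own illustration, using the figures quoted above, and reusing the ATA test characteristics for case study 3 purely to show the effect of the prior – that reproduces the roughly 22% figure and demonstrates how shrinking the prior drags the posterior down with it:

# Posterior probability of disease given a positive test result.
posterior_positive <- function(prior, sens, spec) {
  p_pos <- sens * prior + (1 - spec) * (1 - prior)  # total probability of a positive
  sens * prior / p_pos
}

# Case study 2: ATA testing with sensitivity 85%, specificity 97%, referral lab prior 1%
posterior_positive(prior = 0.01, sens = 0.85, spec = 0.97)
# ~0.22: a positive test takes us from 1% to roughly 22%

# Case study 3 logic: make the prior 7,800 times smaller (the vaccinated patient)
# and the posterior collapses, letting other differentials overtake it.
posterior_positive(prior = 0.01 / 7800, sens = 0.85, spec = 0.97)
# ~0.00004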
In the words of British economist John Maynard Keynes,15 "when the facts change, I change my mind". And so do diagnosticians. Quite simply put: it's not an exclusion, it's not fudging data, and it's not in any sensible way proof that "no vaccine in history has ever worked". It is simply a reflection of the reality that if a condition is almost 8,000 times less likely in a given population, then, yes, other, more frequent conditions push ahead of it.

## Lessons learned

Bayesian analysis of the diagnostic procedure not only brings increased clarity about what one is doing as a clinician; it also makes the full panoply of mathematical and logical tools available for investigating claims, objections and contentions – and, as in the case of the alleged non-diagnosis of vaccine-preventable diseases, for discarding them. The most powerful tool for anyone who uses a process of structured reasoning – be it clinical diagnostics, algorithmic analysis, detective work or intelligence analysis – is the ability to reason formally about one's own toolkit of structured processes. It is my hope that if you've never thought about your clinical diagnostic process in these terms, you will now be able to see a new facet of it.

References

1. The basis of non-Western economies tends to be worse. That's about as much as Western economies have going for them. See: Venezuela and the DPRK.
2. There's a whole branch of probability that deals with continuous probabilities, but discrete probabilities are crazy enough for the time being.
3. Read: the probability of A given not-A is zero, A being any arbitrary event: the stock market crashing, the temperature tomorrow exceeding 30ºC, &c.
4. In other words, it may be the same, but that's pure accident. Mathematically, they're almost always different.
5. It's tempting to assume that this implies causation, or that the second event must temporally succeed the first, but neither is implied, and such assumptions only serve to confuse things more.
6. You will also hear this referred to as 'base rate neglect' or 'base rate fallacy'. To epidemiologists, 'rate' has a specific meaning: it generally denotes events over a span of time, and it's not a rate unless it's over time. I know, we're pedantic like that.
7. This presupposes that these tests are independent of each other, like observations of a random variable. They generally aren't – for instance, we run the acute phase protein CRP, W/ESR (another acute phase marker) and a WBC count, but these are typically not independent of each other. In such cases, it's legitimate to use $B = B_1 \cap B_2 \cap \ldots \cap B_n$ or, as my preferred notation goes, $B = \bigcap^n_{k=1} B_k$. I know 'updating' is the core mantra of Bayesianism, but knowing when to update and when to simply calculate the conjoint probability is what experts in Bayesian reasoning rake in the big bucks for.
8. Note that a posterior from this step can, upon new evidence, become the prior in the next round: the prior for $j$ may be the inferred probability $p(D_i)$, but the prior for $j + 1$ is $p(D_i \mid R_j)$, and so on. More about multiple observations later.
9. It's important to note that this is not necessarily the population incidence.
For instance, the overall incidence – and thus the relevant $p(D_{EBOV})$ – is going to be different for a haemorrhagic fever referral lab in Kinshasa and a county hospital microbiology lab in Michigan.
10. Lock, R.J. et al. (1999). IgA anti-tissue transglutaminase as a diagnostic marker of gluten sensitive enteropathy. J Clin Pathol 52(4):274-7.
11. More epidemiopedantry: 'incidence' refers to new cases over time; 'prevalence' refers to cases at a moment in time.
12. This is, of course, unrealistic. I will do a walkthrough of an example with multiple symptoms, each with its own association with the illness, in a later post.
13. It's assumed gender is irrelevant to this disease.
14. Presumably hoping that refusing to diagnose a patient with diphtheria and instead diagnosing them with a throat staph infection will somehow get the patient okay enough that nobody will notice the insanely prominent pseudomembrane…
15. Or not…

# Automagic epi curve plotting: part I

As of 24 May 2018, the underlying data schema of the Github repo from which the epi curve plotter draws its data has changed, and a lot of the code had to be adjusted accordingly. The current code can be found here on Github. It also plots a classical epi curve.

One of the best resources during the 2013-16 West African Ebola outbreak was Caitlin Rivers' Github repo, which was probably one of the best ways to stay up to date on the numbers. For the current outbreak, she has also set up a Github repo, with really frequent updates straight from the WHO's DON data and the information from the DRC Ministry of Public Health (MdlS) mailing list.1 Using R, I have set up a simple script that I only have to run every time I want a pre-defined visualisation of the current situation. I usually do this on a remote RStudio server, which makes it quite easy to generate an up-to-date plot on the fly.

### Obtaining the most recent data

Using the following little script, I grab the latest from the ebola-drc Github repo:

# Fetch most recent DRC data.
library(magrittr)
library(curl)
library(readr)
library(dplyr)

current_drc_data <- Sys.time() %>%
  format("%d%H%M%S%b%Y") %>%
  paste("raw_data/drc/", "drc-", ., ".csv", sep = "") %T>%
  curl_fetch_disk("https://raw.githubusercontent.com/cmrivers/ebola_drc/master/drc/data.csv", .) %>%
  read_csv()

This uses curl (the R implementation) to fetch the most recent data and save it as a timestamped2 file in the data folder I set up just for that purpose.3 Simply sourcing this script (source("fetch_drc_data.R")) should then load the current DRC dataset into the environment.4

### Data munging

We need to do a little data munging. First, we melt down the data frame using reshape2's melt function. Melting takes 'wide' data and converts it into 'long' data: in our case, the original data had a row for each daily report for each health zone, and a column for each combination of confirmed/probable/suspected status and cases/deaths. Melting the data frame down creates a variable column indicating the type (say, confirmed_deaths) and a value column (giving the value, e.g. 3). Using lubridate,5 the dates are parsed, and the values are stored in a numeric format.
library(magrittr)
library(reshape2)
library(lubridate)

# Melt the wide-format case counts into long format.
current_drc_data %<>% melt(value.name = "value",
                           measure.vars = c("confirmed_cases", "confirmed_deaths",
                                            "probable_cases", "probable_deaths",
                                            "suspect_cases", "suspect_deaths",
                                            "ruled_out"))

current_drc_data$event_date <- lubridate::ymd(current_drc_data$event_date)
current_drc_data$report_date <- lubridate::ymd(current_drc_data$report_date)
current_drc_data$value <- as.numeric(current_drc_data$value)

Next, we drop ruled_out cases, as they play no significant role in the current visualisation.

current_drc_data <- current_drc_data[current_drc_data$variable != "ruled_out",]

We also need to split the type labels into two different columns, so as to allow plotting them as a matrix. Currently, data type labels (the variable column) has both the certainty status (confirmed, suspected or probable) and the type of indicator (cases vs deaths) in a single variable, separated by an underscore. We’ll use stringr‘s str_split_fixed to split the variable names by underscore, and join them into two separate columns, suspicion and mm, the latter denoting mortality/morbidity status.

current_drc_data %<>% cbind(., str_split_fixed(use_series(., variable), "_", 2)) %>%
subset(select = -c(variable)) %>%
set_colnames(c("event_date", "report_date", "health_zone", "value", "suspicion", "mm"))

Let’s filter out the health zones that are being observed but have no relevant data for us yet:

relevant_health_zones <- current_drc_data %>%
subset(select = c("health_zone", "value")) %>%
group_by(health_zone) %>%
summarise(totals = sum(value, na.rm=TRUE)) %>%
dplyr::filter(totals > 0) %>%
use_series(health_zone)

This gives us a vector of all health zones that are currently reporting cases. We can filter our DRC data for that:

current_drc_data %<>% dplyr::filter(health_zone %in% relevant_health_zones)

This whittles down our table by a few rows. Next, we might want to create a fake health zone that summarises all other health zones' respective data:

totals <- current_drc_data %>% group_by(event_date, report_date, suspicion, mm)
%>% summarise(value = sum(value), health_zone=as.factor("DRC total"))

# Reorder totals to match the core dataset
totals <- totals[,c(1,2,6,5,3,4)]

Finally, we bind these together to a single data frame:

current_drc_data %<>% rbind.data.frame(totals)

### Visualising it!

Of course, all this was in pursuance of cranking out a nice visualisation. For this, we need to do a couple of things, including first ensuring that “DRC total” is treated separately and comes last:

regions <- current_drc_data %>% use_series(health_zone) %>% unique()
regions[!regions == "DRC total"]
regions %<>% c("DRC total")

current_drc_data$health_zone_f <- factor(current_drc_data$health_zone, levels = regions)


I normally start out by declaring the colour scheme I will be using. In general, I tend to use the same few colour schemes, which I keep in a few gists. For simple plots, I prefer to use no more than five colours:

colour_scheme <- c(white = rgb(238, 238, 238, maxColorValue = 255),
light_primary = rgb(236, 231, 216, maxColorValue = 255),
dark_primary = rgb(127, 112, 114, maxColorValue = 255),
accent_red = rgb(240, 97, 114, maxColorValue = 255),
accent_blue = rgb(69, 82, 98, maxColorValue = 255))

With that sorted, I can invoke the ggplot method, storing the plot in an object, p, so as to make it easy to save later via ggsave. Note that ggplot2 (and grid, for the unit() calls in the theme) needs to be loaded first:

p <- ggplot(current_drc_data, aes(x=event_date, y=value)) +

# Title and subtitle
ggtitle(paste("Daily EBOV status", "DRC", Sys.Date(), sep=" - ")) +
labs(subtitle = "(c) Chris von Csefalvay/CBRD (cbrd.co) - @chrisvcsefalvay") +

# This facets the plot based on the factor vector we created earlier
facet_grid(health_zone_f ~ suspicion) +
geom_path(aes(group = mm, colour = mm, alpha = mm), na.rm = TRUE) +
geom_point(aes(colour = mm, alpha = mm)) +

# Axis labels
ylab("Cases") +
xlab("Date") +

# The x-axis runs from the first notified case to the present day; the
# limits live on scale_x_date so they don't clash with the date scale.
scale_x_date(date_breaks = "7 days", date_labels = "%m/%d",
             limits = c(as.Date("2018-05-08"), Sys.Date())) +

# Because often there's an overlap and cases that die on the day of registration
# tend to count here as well, some opacity is useful.
scale_alpha_manual(values = c("cases" = 0.5, "deaths" = 0.8)) +
scale_colour_manual(values = c("cases" = colour_scheme[["accent_blue"]], "deaths" = colour_scheme[["accent_red"]])) +

# Ordinarily, I have these derive from a theme package, but they're very good
# defaults and starting points
theme(panel.spacing.y = unit(0.6, "lines"),
panel.spacing.x = unit(1, "lines"),
plot.title = element_text(colour = colour_scheme[["accent_blue"]]),
plot.subtitle = element_text(colour = colour_scheme[["accent_blue"]]),
axis.line = element_line(colour = colour_scheme[["dark_primary"]]),
panel.background = element_rect(fill = colour_scheme[["white"]]),
panel.grid.major = element_line(colour = colour_scheme[["light_primary"]]),
panel.grid.minor = element_line(colour = colour_scheme[["light_primary"]]),
strip.background = element_rect(fill = colour_scheme[["accent_blue"]]),
strip.text = element_text(colour = colour_scheme[["light_primary"]])
)


The end result is a fairly appealing plot, although if the epidemic goes on, one might want to consider getting rid of the point markers. All that remains is to insert an automatic call to the ggsave function to save the image:

Sys.time() %>%
format("%d%H%M%S%b%Y") %>%
paste("DRC-EBOV-", ., ".png", sep="") %>%
ggsave(plot = p, device="png", path="visualisations/drc/", width = 8, height = 6)

### Automation

Of course, being a lazy epidemiologist, I believe this is exactly the kind of stuff that just has to be automated! Since I run my entire RStudio instance on a remote machine, it makes perfect sense to run this script regularly. The cronR package comes with a nice widget that allows you to schedule any task; old-school command line users can, of course, always resort to ye olde command line based scheduling and execution. One important caveat: the context of cron execution will not necessarily be the same as that of your R project, or indeed of your usual R user. Therefore, when you source a file or refer to paths, you may want to use fully qualified paths, i.e. /home/my_user/my_R_stuff/script.R rather than merely script.R. cronR is very good at logging when things go awry, so if the plots do not start to magically accumulate at the requisite rate, do give the log a check.
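For instance, scheduling a daily run with cronR might look roughly like the sketch below – the script path (reusing the fully qualified example above), the time, the id and the description are all illustrative, so adjust them to your own setup:

library(cronR)

# Wrap the plotting script in a cron-executable command, then schedule it daily.
cmd <- cron_rscript("/home/my_user/my_R_stuff/script.R")
cron_add(cmd,
         frequency = "daily",
         at = "07:00",
         id = "drc_epi_curve",
         description = "Daily DRC EBOV epi curve plot")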

The next step is, of course, to automate uploads to Twitter. But that’s for another day.

References

1. Disclaimer: Dr Rivers is a friend, former collaborator and someone I hold in very high esteem. I'm also from time to time trying to contribute to these repos.
2. My convention for timestamps is the military DTG convention of DDHHMMSSMONYEAR, so e.g. 7.15AM on 21 May 2018 would be 21071500MAY2018.
3. It is, technically, bad form to put the path in the file name for the curl::curl_fetch_disk() function, given that curl_fetch_disk() offers the path parameter just for that. However, due to the intricacies of piping data forward, this is arguably the best way to allow the filename to be reused by the read_csv() function, too.
4. If for whatever reason you would prefer to keep only one copy of the data around and overwrite it every time, simply provide a fixed filename to curl_fetch_disk().
5. As an informal convention of mine, when using the simple conversion functions of lubridate, such as ymd, I tend to note the source package, hence lubridate::ymd over simply ymd.