Bayesian reasoning in clinical diagnostics: a primer.

We know, from the source of eternal wisdom that is Saturday Morning Breakfast Cereal, that insufficient math education is the basis of the entire Western economy.1 This makes Bayesian logic and reasoning about probabilities almost like a dark art, a well-kept secret that only a few seem to know (and it shouldn’t be… but that’s a different story). This weird-wonderful argument, reflecting a much-reiterated meme about vaccines and vaccine efficacy, is a good example:

The argument, here, in case you are not familiar with the latest in anti-vaccination fallacies, is that vaccines don’t work, and they have not reduced the incidence of vaccine-preventable diseases. Rather, if a person is vaccinated for, say, measles, then despite displaying clinical signs of measles, he will be considered to have a different disease, and therefore all disease statistics proving the efficacy of vaccines are wrong. Now, that’s clearly nonsense, but it highlights one interesting point, one that has a massive bearing on computational systems drawing conclusions from evidence: namely, the internal Bayesian logic of the diagnostic process.

Which, incidentally, is the most important thing that they didn’t teach you in school. Bayesian logic, that is. Shockingly, they don’t even teach much of it in medical school unless you do research, and even there it’s seen as a predictive method, not a tool to make sense of the analytical process. Which is a pity. The reason why idiotic arguments like the above by @Cattlechildren proliferate is that physicians have been taught how to diagnose well, but never how to explain and reason about the diagnostic process. This was true for the generations before me, and is more or less true for those still in med school today. What is often covered up with nebulous concepts like ‘clinical experience’ is in fact solid Bayesian reasoning. Knowing the mathematical fundamentals of the thought process you use day to day in the clinic helps you reason about it, find its weak points and respond to challenges. For this reason, my highest hope is that as many MDs, epidemiologists, med students, RNs, NPs and other clinical decision-makers as possible will engage with this topic, even if it’s a little long. I promise, it’s worth it.

Some basic ideas about probability

In probability, an event, usually denoted with a capital and customarily starting at A (I have no idea why, as it makes things only more confusing!), is any outcome or incidence that we’re interested in – as long as it’s binary, that is, it either happens or doesn’t happen, and discrete, that is, there’s a clear definition for it, so that we can decide whether it’s happened or not – no half-way events for now.2 In other words, an event can’t happen and not happen at the same time. Or, to get used to the notation of conditionality, p(A \mid \neg A) = 0.3 A thing cannot be both true and false.

Now, we may be interested in how likely it is for an event to happen if another event happens: how likely is A if B holds true? This is denoted as p(A|B), and for now, the most important thing to keep in mind about it is that it is not necessarily the same as p(B|A)!4

Bayesian logic deals with the notion of conditional probabilities – in other words, the probability of one event, given another.5 It is one of the most widely misunderstood parts of probability, yet it is crucial to understanding the way we reason about things.

Just to understand how important this is, let us consider a classic example.


Case study 1: speed cameras

Your local authority is broke. And so, it does what local authorities do when they’re broke: play poker with the borough credit card – sorry, set up a bunch of speed cameras and fine drivers. Over this particular stretch of road, the speed limit is 60mph.

According to the manufacturer, the speed cameras are very sensitive, but not very specific. In other words, they never falsely indicate that a driver was below the speed limit, but they may falsely indicate that the driver was above it, in about 3% of the cases (the false positive rate).

One morning, you’re greeted by a message in your postbox, notifying you that you’ve driven too fast and fining you a rather respectable amount of cash. What is the probability that you have indeed driven too fast?

You may feel inclined to blurt out 97%. That, in fact, is wrong.


Explanation

It’s rather counter-intuitive at first to understand why, until we consider the problem in formal terms. We know the probability p(A \mid \neg B), that is, the probability of being snapped (A) even though you were not speeding (\neg B). But what the question asks is the likelihood that you were, in fact, speeding (B) given the fact that you were snapped (A). And as we have learned, the conditional probability operator is not commutative, that is, p(A|B) is not necessarily the same as p(B|A).

Why is that the case? Because base rates matter. In other words, the probabilities of A and B, in and of themselves, are material. Consider, for a moment, the unlikely scenario of living in that mythical wonderland of law-abiding citizens where nobody speeds. Then, it does not matter how many drivers are snapped – all of them are false positives, and thus p(B|A), the probability of speeding (B) given that one got snapped by a speed camera (A), is actually zero.

In other words, if we want to reverse the conditional operator, we need to make allowances for the ‘base frequency’, the ordinary frequency with which each event occurs on its own. To overcome base frequency neglect,6 we have a mathematical tool, courtesy of the good Revd. Thomas Bayes, who sayeth that, verily,

p(B \mid A) = \frac{p(A \mid B) \, p(B)}{p(A)}

Or, in words: if you want to reverse the probabilities, you will have to take the base rates of each event into account. If what we know is the likelihood of getting snapped if you were not speeding, and what we’re interested in is the likelihood that someone getting snapped is indeed speeding, we’ll need to know a few more things.


Case study 1: Speed cameras – continued

  • We know that the speed cameras have a Type II (false negative) error rate of zero – in other words, if you are speeding (B), you are guaranteed to get snapped (A) – thus, p(A \mid B) is 1.
  • We also know from the Highway Authority, who were using a different and more accurate measurement system, that approximately one in 1,000 drivers is speeding (p(B) = 0.001).
  • Finally, we know that of 1,000 drivers, about 31 will be snapped – the one speeder plus roughly 30 false positives (3% of the remaining 999) – yielding p(A) = 0.031.

Putting that into our equation,

p(B \mid A) = \frac{p(A \mid B) \, p(B)}{p(A)} = \frac{1 \cdot 0.001}{0.031} \approx 0.032

In other words, the likelihood that we indeed did exceed the speed limit is just barely north of 3%. That’s a far cry from the ‘intuitive’ answer of 97% (quite accidentally, it’s almost exactly its complement).
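To make the arithmetic concrete, here is a minimal R sketch of the same calculation – the figures (a 1-in-1,000 rate of speeding, a 3% false positive rate, a zero false negative rate) are the ones from the case study above:

# Speed camera example: probability of actually speeding, given a ticket.
p_speeding             <- 0.001  # p(B): 1 in 1,000 drivers speeds
p_snap_if_speeding     <- 1      # p(A|B): zero false negative rate
p_snap_if_not_speeding <- 0.03   # p(A|not B): 3% false positive rate

# Total probability of getting snapped, p(A), by the law of total probability
p_snap <- p_snap_if_speeding * p_speeding + p_snap_if_not_speeding * (1 - p_speeding)

# Bayes' theorem: p(B|A) = p(A|B) p(B) / p(A)
p_snap_if_speeding * p_speeding / p_snap   # ~0.032, i.e. a little over 3%

Note that the sketch computes p(A) exactly (0.03097) rather than rounding it to 0.031 as above; either way, the answer comes out at about 3.2%.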


Diagnostics, probabilities and Bayesian logic

The procedure of medical diagnostics is ultimately a relatively simple algorithm:

  1. create a list of possibilities, however remote (the process of differential diagnostics),
  2. rank them in order of likelihood,
  3. update priors as you run tests.7

From a statistical perspective, this is implemented as follows.

  1. We begin by running a number of tests, specifically m of them. It is assumed that the tests are independent from each other, i.e. the value of one does not affect the value of another. Let R_j denote the results of test j \leq m.
    1. For each test, we need to iterate over all our differentials D_{i \ldots n}, and determine the probability of each in light of the new evidence, i.e. p(D_i \mid R_j).
    2. So, let’s take the results of test j that yielded the results R_j, and the putative diagnosis D_i. What we’re interested in is p(D_i \mid R_j), that is, the probability of the putative diagnosis given the new evidence. Or, to use Bayesian lingo, we are updating our prior: we had a previous probability assigned to D_i, which may have been a uniform probability or some other probability, and we are now updating it – seeing how likely it is given the new evidence, getting what is referred to as a posterior.8
    3. To calculate the posterior p(D_i \mid R_j), we need to know three things – the sensitivity and specificity of the test j (I’ll call these S^+_j and S^-_j, respectively), the overall incidence of D_i,9 and the overall incidence of the particular result R_j.
    4. Plugging these variables into our beloved Bayesian formula, we get p(D_i \mid R_j) = \frac{p(R_j \mid D_i) p(D_i)}{p(R_j)}.
    5. We know that p(R_j \mid D_i), that is, the probability that someone will test a particular way if they do have the condition D_i, is connected to sensitivity: if the observed result R_j is positive, then p(R_j \mid D_i) = S^+_j, the sensitivity, whereas if the observed result is negative, then p(R_j \mid D_i) = 1 - S^+_j, the false negative rate. The specificity S^-_j enters through the denominator, since for a positive result p(R_j \mid \neg D_i) = 1 - S^-_j.
    6. We also know, or are supposed to know, the overall incidence of D_i and the probability of a particular outcome, R_j. With that, we can update our prior on D_i, obtaining the posterior p(D_i \mid R_j).
  2. We iterate over each of the tests, updating the priors every time new evidence comes in – a minimal sketch of this loop follows below.
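Here is that loop as a minimal R sketch. All the numbers – priors, sensitivities, the test results – are entirely made up for illustration, and the differential list is treated as exhaustive so that the posteriors can be normalised across it (in practice, p(R_j) would come from the overall incidence of the result, as described above):

# Sequential Bayesian updating over (assumed independent) test results.
# All numbers below are illustrative, not clinical values.
prior <- c(D1 = 0.6, D2 = 0.3, D3 = 0.1)      # p(D_i) before any testing

# p(positive result | D_i) for two tests across the three differentials
p_pos_given_d <- rbind(
  test1 = c(D1 = 0.90, D2 = 0.40, D3 = 0.05),
  test2 = c(D1 = 0.20, D2 = 0.85, D3 = 0.10)
)

observed_positive <- c(test1 = TRUE, test2 = TRUE)   # both tests came back positive

posterior <- prior
for (j in rownames(p_pos_given_d)) {
  # Likelihood of the observed result under each differential
  likelihood <- if (observed_positive[j]) p_pos_given_d[j, ] else 1 - p_pos_given_d[j, ]
  # Bayes: the posterior is proportional to likelihood times prior;
  # normalising across the differentials stands in for dividing by p(R_j)
  posterior <- likelihood * posterior
  posterior <- posterior / sum(posterior)
}

round(posterior, 3)   # the updated ranking of the differentials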

This may sound daunting and highly mathematical, but in fact most physicians have this down to an innate skill, so much so that when I explained this to a group of FY2 doctors, they couldn’t believe it – until they thought about how they thought. And that’s a key issue here: thinking about the way we arrive at results is important, because it is the bedrock of making those results intelligible to others.


Case study 2: ATA testing for coeliac disease

For a worked example of this in the diagnosis of coeliac disease, check Notebook 1: ATA case study. It puts things in the context of sensitivity and specificity in medical testing, and is in many ways quite similar to the above example, except here, we’re working with a real-world test with real-world uncertainties.

There are several ways of testing for coeliac disease, an autoimmune disorder in which the body responds to gluten proteins (gliadins and glutenins) in wheats, wheat hybrids, barley, oats and rye. One diagnostic approach looks at genetic markers in the HLA-DQ (Human Leukocyte Antigen type DQ), part of the MHC (Major Histocompatibility Complex) Class II receptor system. Genetic testing for a particular haplotype of the HLA-DQ2 gene, called DQ2.5, can lead to a diagnosis in most patients. Unfortunately, it’s slow and expensive. Another test, an endoscopic biopsy of the small intestine, looks at the intestinal villi, short protrusions (about 1mm long) into the intestine, for tell-tale damage – but this test is unpleasant, possibly painful and costly.

So, a more frequent way is by looking for evidence of an autoantibody called anti-tissue transglutaminase antibody (ATA) – unrelated to this gene, sadly. ATA testing is cheap and cheerful, and relatively good, with a sensitivity (S^+_{ATA}) of 85% and specificity (S^-_{ATA}) of 97%.10 We also know the rough probability of a sample being from someone who actually has coeliac disease – for a referral lab, it’s about 1%.

Let’s consider the following case study. A patient gets tested for coeliac disease using the ATA test described above. Depending on whether the test is positive or negative, what are the chances she has coeliac disease?

Sensitivity and specificity trade-off for an ATA test given various values of true coeliac disease prevalence in the population.

If you’ve read the notebook, you know by now that the probability of having coeliac disease if testing positive is around 22%, or a little better than one-fifth. And from the visualisation to the left, you could see that small incremental improvements in specificity would yield a lot more increase in accuracy (marginal accuracy gain) than increases in sensitivity.
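For what it’s worth, the arithmetic behind that 22% figure fits in a few lines of R, using the sensitivity (85%), specificity (97%) and 1% pre-test probability quoted above:

# ATA testing for coeliac disease: probability of disease given the test result.
sens  <- 0.85   # S+: p(positive test | coeliac disease)
spec  <- 0.97   # S-: p(negative test | no coeliac disease)
prior <- 0.01   # pre-test probability in a referral lab population

# p(positive test), by the law of total probability
p_pos <- sens * prior + (1 - spec) * (1 - prior)

p_coeliac_if_positive <- sens * prior / p_pos               # ~0.22
p_coeliac_if_negative <- (1 - sens) * prior / (1 - p_pos)   # ~0.0016

c(positive = p_coeliac_if_positive, negative = p_coeliac_if_negative)

A negative result, by contrast, pushes the probability of coeliac disease down from the 1% pre-test figure to well under 0.2%.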

While quite simple, this is a good case study because it emphasises a few essential things about Bayesian reasoning:

  • Always know your baselines. In this case, we took a baseline of 1%, even though the average incidence of coeliac disease in the population is closer to about 0.25%. Why? Because we don’t spot-test people for coeliac disease. People who do get tested get tested because they exhibit symptoms that may or may not be coeliac disease, and by definition they have a higher prevalence11 of coeliac disease. The enrichment factor is, of course, entirely imaginary – you would, normally, need to know or have a way to figure out the true baseline values.
  • Use independent baselines. It is absolutely crucial to make sure that you do not get the baselines from your own measurement process. In this case, for instance, the incidence of coeliac disease should not be calculated by reference to your own lab’s number of positive tests divided by total tests. This merely allows for further proliferation of false positives and negatives, however minuscule their effect. A good way is to do follow-up studies, checking how many of the patients tested positive or negative for ATA were further tested using other methodologies, many of which may be more reliable, and calculate the proportion of actual cases coming through your door by reference to that.

Case study 3: Vaccines in differential diagnosis

This case is slightly different, as we are going to compare two different scenarios. Both concern D_{VPD}, a somewhat contrived vaccine-preventable illness. D_{VPD} produces a very particular symptom or symptom set, S, and produces this symptom or symptom set in every case, without fail.12 The question is – how does the vaccination status affect the differential diagnosis of two identical patients,13 presenting with the same symptoms S, one of whom is unvaccinated?

No. That’s not how this works. That’s not how ANY of this works. Nnnnope.

It has been a regrettably enduring trope of the anti-vaccination movement that because doctors believe vaccines work, they will not diagnose a patient with a vaccine-preventable disease (VPD), simply striking it off the differential diagnosis or substituting a different diagnosis for it.14 The reality is explored in this notebook, which compares two scenarios of the same condition, with two persons whose sole difference is vaccination status. That difference alone opens a massive gap – roughly 7,800-fold – between the likelihood of the vaccinated and the unvaccinated person having the disease. The result is that a 7,800 times less likely outcome slides down the differential. As NZ paediatrician Dr Greenhouse (@greenhousemd) noted in the tweet, “it’s good medical care”. In the words of British economist John Maynard Keynes,15 “when the facts change, I change my mind”. And so do diagnosticians.

Quite simply put: it’s not an exclusion, or fudging data, or in any sensible way proof that “no vaccine in history has ever worked”. It is merely a reflection of the reality that if a condition is almost 8,000 times less likely in a given population, then, yes, other, more frequent conditions push ahead.
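The mechanics are easy to see in a couple of lines of R. Because D_{VPD} produces S in every case, p(S \mid D_{VPD}) = 1 for both patients, and the ratio of their (unnormalised) posteriors collapses to the ratio of their priors – that is, the relative incidence of the disease among the vaccinated versus the unvaccinated. The incidence figure below is a placeholder, and the roughly 7,800-fold reduction is simply taken from the notebook’s result rather than derived here:

# Posterior comparison for two otherwise identical patients presenting with S,
# differing only in vaccination status. Illustrative numbers only.
p_s_given_dvpd     <- 1        # D_VPD produces S in every case
prior_unvaccinated <- 1e-4     # placeholder incidence among the unvaccinated
prior_vaccinated   <- prior_unvaccinated / 7800   # ~7,800x less likely if vaccinated

# Unnormalised posteriors: p(D_VPD | S) is proportional to p(S | D_VPD) * p(D_VPD)
post_unvaccinated <- p_s_given_dvpd * prior_unvaccinated
post_vaccinated   <- p_s_given_dvpd * prior_vaccinated

post_unvaccinated / post_vaccinated   # the prior ratio carries straight through: 7800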


Lessons learned

Bayesian analysis of the diagnostic procedure not only brings increased clarity about what one is doing as a clinician. It also brings the full panoply of mathematical and logical reasoning to bear on investigating claims, objections and contentions – and, as in the case of the alleged non-diagnosis of vaccine-preventable diseases, on discarding them.

The most powerful tool available to anyone who utilises a process of structured reasoning – be it clinical diagnostics, algorithmic analysis, detective work or intelligence analysis – is the ability to formally reason about one’s own toolkit of structured processes. It is my hope that if you’ve never thought about your clinical diagnostic process in these terms, you will now be able to see a new facet of it.

References

1. The basis of non-Western economies tends to be worse. That’s about as much as Western economies have going for them. See: Venezuela and the DPRK.
2. There’s a whole branch of probability that deals with continuous probabilities, but discrete probabilities are crazy enough for the time being.
3. Read: The probability of A given not-A is zero. A being any arbitrary event: the stock market crashing, the temperature tomorrow exceeding 30ºC, &.
4. In other words, it may be the same, but that’s pure accident. Mathematically, they’re almost always different.
5. It’s tempting to assume that this implies causation, or that the second event must temporally succeed the first, but none of those are implied, and in fact only serve to confuse things more.
6. You will also hear this referred to as ‘base rate neglect’ or ‘base rate fallacy’. To an epidemiologist, though, ‘rate’ has a specific meaning – it generally means events over a span of time, and it’s not a rate unless it’s over time. I know, we’re pedantic like that.
7. This presupposes that these tests are independent of each other, like observations of a random variable. They generally aren’t – for instance, we run the acute phase protein CRP, W/ESR (another acute phase marker) and a WBC count, but these are typically not independent from each other. In such cases, it’s legitimate to use B = B_1 \cap B_2 \cap \ \ldots \cap B_n or, as my preferred notation goes, B = \bigcap^n_{k=1} B_k. I know ‘updating’ is the core mantra of Bayesianism, but knowing what to update and knowing where to simply calculate the conjoint probability is what experts in Bayesian reasoning rake in the big bucks for.
8. Note that a posterior from this step can, upon more new evidence, become the prior in the next round – the prior for j may be the inferred probability p(D_i), but the prior for j + 1 is p(D_i \mid R_j), and so on. More about multiple observations later.
9. It’s important to note that this is not necessarily the population incidence. For instance, the overall incidence – and thus the relevant p(D_{EBOV}) – is going to be different for a haemorrhagic fever referral lab in Kinshasa and a county hospital microbiology lab in Michigan.
10. Lock, R.J. et al. (1999). IgA anti-tissue transglutaminase as a diagnostic marker of gluten sensitive enteropathy. J Clin Pathol 52(4):274-7.
11. More epidemiopedantry: ‘incidence’ refers to new cases over time, ‘prevalence’ refers to cases at a moment in time.
12. This is, of course, unrealistic. I will do a walkthrough of an example of multiple symptoms that each have an association with the illness in a later post.
13. It’s assumed gender is irrelevant to this disease.
14. Presumably hoping that refusing to diagnose a patient with diphtheria and instead diagnosing them with a throat staph infection will somehow get the patient okay enough that nobody will notice the insanely prominent pseudomembrane…
15. Or not…

Automagic epi curve plotting: part I

As of 24 May 2018, the underlying data schema of the Github repo from which the epi curve plotter draws its data has changed. Therefore, a lot of the code had to be adjusted. The current code can be found here on Github. This also plots a classical epi curve.

One of the best resources during the 2013-16 West African Ebola outbreak was Caitlin Rivers’ Github repo, which was probably the single best way to stay up to date on the numbers. For the current outbreak, she has also set up a Github repo, with really frequent updates straight from the WHO’s DON data and the DRC Ministry of Public Health (MdlS) mailing list.1 Using R, I have set up a simple script that I only have to run every time I want a pre-defined visualisation of the current situation. I usually do this on a remote RStudio server, which makes it quite easy to quickly generate the visualisations on the fly.

Obtaining the most recent data

Using the following little script, I grab the latest from the ebola-drc Github repo:

# Fetch most recent DRC data.
library(magrittr)
library(curl)
library(readr)
library(dplyr)

current_drc_data <- Sys.time() %>%
  format("%d%H%M%S%b%Y") %>%
  paste("raw_data/drc/", "drc-", ., ".csv", sep = "") %T>%
  curl_fetch_disk("https://raw.githubusercontent.com/cmrivers/ebola_drc/master/drc/data.csv", .) %>%
  read_csv()

This uses curl (the R implementation) to fetch the most recent data and save it as a timestamped2 file in the data folder I set up just for that purpose.3 Simply sourcing this script (source("fetch_drc_data.R")) should then load the current DRC dataset into the environment.4

Data munging

We need to do a little data munging. First, we melt down the data frame using reshape2‘s melt function. Melting takes ‘wide’ data and converts it into ‘long’ data – for example, in our case, the original data had a row for each daily report for each health zone, and a column for each combination of confirmed/probable/suspected and cases/deaths. Melting the data frame down creates a variable column holding the type (say, confirmed_deaths) and a value column (giving the value, e.g. 3). Using lubridate,5 the dates are parsed, and the values are stored in a numeric format.

library(magrittr)
library(reshape2)
library(lubridate)

current_drc_data %<>% melt(value.name = "value", measure.vars = c("confirmed_cases", "confirmed_deaths", "probable_cases", "probable_deaths", "suspect_cases", "suspect_deaths", "ruled_out"))
current_drc_data$event_date <- lubridate::ymd(current_drc_data$event_date)
current_drc_data$report_date <- lubridate::ymd(current_drc_data$report_date)
current_drc_data$value <- as.numeric(current_drc_data$value)

Next, we drop ruled_out cases, as they play no significant role for the current visualisation.

current_drc_data <- current_drc_data[current_drc_data$variable != "ruled_out",]

We also need to split the type labels into two different columns, so as to allow plotting them as a matrix. Currently, the data type labels (the variable column) hold both the certainty status (confirmed, suspected or probable) and the type of indicator (cases vs deaths) in a single variable, separated by an underscore. We’ll use stringr‘s str_split_fixed to split the variable names at the underscore and store the parts in two separate columns, suspicion and mm, the latter denoting mortality/morbidity status.

library(stringr)

current_drc_data %<>% cbind(., str_split_fixed(use_series(., variable), "_", 2)) %>% 
                 subset(select = -c(variable)) %>% 
                 set_colnames(c("event_date", "report_date", "health_zone", "value", "suspicion", "mm"))

Let’s filter out the health zones that are being observed but have no relevant data for us yet:

relevant_health_zones <- current_drc_data %>% 
                         subset(select = c("health_zone", "value")) %>% 
                         group_by(health_zone) %>% 
                         summarise(totals = sum(value, na.rm=TRUE)) %>% 
                         dplyr::filter(totals > 0) %>% 
                         use_series(health_zone)

This gives us a vector of all health zones that are currently reporting cases. We can filter our DRC data for that:

current_drc_data %<>% dplyr::filter(health_zone %in% relevant_health_zones)

This whittles down our table by a few rows. Finally, we might want to create a fake health zone that summarises all other health zones’ respective data:

totals <- current_drc_data %>% group_by(event_date, report_date, suspicion, mm) %>%
                               summarise(value = sum(value), health_zone = as.factor("DRC total"))

# Reorder totals to match the core dataset
totals <- totals[,c(1,2,6,5,3,4)]

Finally, we bind these together to a single data frame:

current_drc_data %<>% rbind.data.frame(totals)

Visualising it!

Of course, all this was in pursuance of cranking out a nice visualisation. For this, we need to do a couple of things, including first ensuring that “DRC total” is treated separately and comes last:

regions <- current_drc_data %>% use_series(health_zone) %>% unique()
regions <- regions[regions != "DRC total"]
regions %<>% c("DRC total")

current_drc_data$health_zone_f <- factor(current_drc_data$health_zone, levels = regions)

I normally start out by declaring the colour scheme I will be using. In general, I tend to use the same few colour schemes, which I keep in a few gists. For simple plots, I prefer to use no more than five colours:

colour_scheme <- c(white = rgb(238, 238, 238, maxColorValue = 255),
                   light_primary = rgb(236, 231, 216, maxColorValue = 255),
                   dark_primary = rgb(127, 112, 114, maxColorValue = 255),
                   accent_red = rgb(240, 97, 114, maxColorValue = 255),
                   accent_blue = rgb(69, 82, 98, maxColorValue = 255))

With that sorted, I can invoke the ggplot method, storing the plot in an object, p. This is so as to facilitate later retrieval by the ggsave method.

library(ggplot2)

p <- ggplot(current_drc_data, aes(x=event_date, y=value)) +

  # Title and subtitle
  ggtitle(paste("Daily EBOV status", "DRC", Sys.Date(), sep=" - ")) +
  labs(subtitle = "(c) Chris von Csefalvay/CBRD (cbrd.co) - @chrisvcsefalvay") +
  
  # This facets the plot based on the factor vector we created earlier
  facet_grid(health_zone_f ~ suspicion) +
  geom_path(aes(group = mm, colour = mm, alpha = mm), na.rm = TRUE) +
  geom_point(aes(colour = mm, alpha = mm)) +

  # Axis labels
  ylab("Cases") +
  xlab("Date") +

  # The x-axis runs from the first notified case to the present day
  scale_x_date(limits = c(as.Date("2018-05-08"), Sys.Date()),
               date_breaks = "7 days", date_labels = "%m/%d") +

  # Because often there's an overlap and cases that die on the day of registration
  # tend to count here as well, some opacity is useful.
  scale_alpha_manual(values = c("cases" = 0.5, "deaths" = 0.8)) +
  scale_colour_manual(values = c("cases" = colour_scheme[["accent_blue"]], "deaths" = colour_scheme[["accent_red"]])) +

  # Ordinarily, I have these derive from a theme package, but they're very good
  # defaults and starting points
  theme(panel.spacing.y = unit(0.6, "lines"), 
        panel.spacing.x = unit(1, "lines"),
        plot.title = element_text(colour = colour_scheme[["accent_blue"]]),
        plot.subtitle = element_text(colour = colour_scheme[["accent_blue"]]),
        axis.line = element_line(colour = colour_scheme[["dark_primary"]]),
        panel.background = element_rect(fill = colour_scheme[["white"]]),
        panel.grid.major = element_line(colour = colour_scheme[["light_primary"]]),
        panel.grid.minor = element_line(colour = colour_scheme[["light_primary"]]),
        strip.background = element_rect(fill = colour_scheme[["accent_blue"]]),
        strip.text = element_text(colour = colour_scheme[["light_primary"]])
  )

DRC EBOV outbreak, 22 May 2018. The data has some significant gaps, owing to monitoring and recording issues, but some clear trends already emerge from this simple illustration.

The end result is a fairly appealing plot, although if the epidemic goes on, one might want to consider getting rid of the point markers. All that remains is to insert an automatic call to the ggsave function to save the image:

Sys.time() %>%
  format("%d%H%M%S%b%Y") %>%
  paste("DRC-EBOV-", ., ".png", sep="") %>%
  ggsave(plot = p, device="png", path="visualisations/drc/", width = 8, height = 6)

Automation

The cronR package has a built-in cron scheduler add-in for RStudio, allowing you to manage all your code scheduling needs.

Of course, being a lazy epidemiologist, this is the kind of stuff that just has to be automated! Since I run my entire RStudio instance on a remote machine, it makes perfect sense to run this script regularly. The cronR package comes with a nice widget, which will allow you to simply schedule any task. Old-school command line users can, of course, always resort to ye olde command line based scheduling and execution. One important caveat: the context of cron execution will not necessarily be the same as that of your R project or indeed of the R user. Therefore, when you source a file or refer to paths, you may want to use fully qualified paths, i.e. /home/my_user/my_R_stuff/script.R rather than merely script.R. cronR is very good at logging when things go awry, so if the plots do not start to magically accumulate at the requisite rate, do give the log a check.
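For the sake of completeness, here is roughly what the widget sets up under the hood, assuming cronR’s cron_rscript() and cron_add() helpers and an entirely hypothetical path, id and schedule:

# Schedule the epi curve script to run daily via cronR.
# The path, time and id below are illustrative.
library(cronR)

cmd <- cron_rscript("/home/my_user/my_R_stuff/script.R")
cron_add(command = cmd,
         frequency = "daily",
         at = "7AM",
         id = "drc_epi_curve",
         description = "Daily DRC EBOV epi curve plot")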

The next step is, of course, to automate uploads to Twitter. But that’s for another day.

References

1. Disclaimer: Dr Rivers is a friend, former collaborator and someone I hold in very high esteem. I’m also from time to time trying to contribute to these repos.
2. My convention for timestamps is the military DTG convention of DDHHMMSSMONYEAR, so e.g. 7.15AM on 21 May 2018 would be 21071500MAY2018.
3. It is, technically, bad form to put the path in the file name for the curl::curl_fetch_disk() function, given that curl::curl_fetch_disk() offers the path parameter just for that. However, due to the intricacies of piping data forward, this is arguably the best way to do it so as to allow the filename to be reused for the read_csv() function, too.
4. If for whatever reason you would prefer to keep only one copy of the data around and overwrite it every time, quite simply provide a fixed filename into curl_fetch_disk().
5. As an informal convention of mine, when using the simple conversion functions of lubridate such as ymd, I tend to note the source package, hence lubridate::ymd over simply ymd.

Get your Jupyterhub box set up on Linode with a single go!

At CBRD, a lot of the research work we do is done on remote machines. For various reasons, we like being able to spin up and wind down these boxes at will, and auto-configure them at short notice according to a few standard variables. Depending on the installation, then, we end up with a perfectly set up box with all the features we want, focused around R and Python, with RStudio and Jupyterhub as frontends.

Where StackScripts fit in

There are a wide range of ways to configure boxes – Puppet, Chef cookbooks, Terraform, Dockerfiles, and all that –, but for ease of use, we rely on simple shell files that can be run as Linode StackScripts.

StackScripts are an extremely convenient way to configure a single Linode with the software you need. Unlike more complex systems like Terraform, a StackScript is just a shell script that the Linode runs once when it is first deployed – no additional tooling required.

Basic usage

The Ares research node generator StackFile does a handful of things:

  • Updates the system and adds the CRAN repo as a source
  • Installs R and the RStudio version of choice
  • Installs Python and the JupyterHub version of choice
  • Installs an opinionated set of system-level packages (i.e. available to all users)
  • Configures ports and some other configuration items for the instance
  • Creates the root user
  • Daemonises the RStudio server and JupyterHub so that they restart on failure and start automatically at reboot

When deployed using Linode from its StackFile, it allows for a wide range of configuration options, including ports for both Jupyter and RStudio, and a completely configured first user set up both on JupyterHub and RStudio. In addition, you can configure some install settings. A ‘barebones’ install option exists that allows for a minimum set of packages to be installed – this is useful for testing or if the desired configuration diverges from the ordinary structure. In addition, OpenCV, deep learning tools and cartography tools can be selectively disabled or enabled, as these are not always required.

User administration for Jupyterhub and RStudio

In general, user administration is attached by default to PAM, i.e. the built-in Linux authentication structure. JupyterHub has its own administration features, described here. RStudio, on the other hand, authenticates by user group membership. The two share the same usergroup, specified in the configuration (by default and convention, this is jupyter, but you can change it), and because users created by JupyterHub fall into that user group, creating users in JupyterHub automatically grants them access to RStudio. This is overall acceptable as we tend to use both, but there might be a security concern there. If so, you can point the auth-required-user-group=$USERGROUPNAME setting to a different usergroup in /etc/rstudio/rstudio.conf.

Issues

There are some glitches that we’re trying to iron out:

  • Cartography and GIS tools glitch a little due to issues with PROJ.4.
  • GPU/CUDA support is not implemented, as this is not customarily used or provided on Linodes.
  • certbot and Let's Encrypt are not really supported yet, as our boxes are never directly public-facing, but you should most definitely find a way to put your server behind some form of SSL/TLS.
  • Currently, only Ubuntu 16.04 LTS is supported, as it’s the most widely used version. CRAN does not yet support more recent releases, but these will be added once CRAN support arrives.
New to Linodes? Linode is a great service offering a simple, uncomplicated way to set up a Linux server, just how you want it! 👍🏻 In fact, I’ve been using Linode servers for over four years, and have only good experiences with them. I keep recommending them to friends, and they’re all happy! So if this post was helpful, and you’re thinking of signing up for a Linode, would you consider doing so using my affiliate link? I run this blog as a free service and pay for hosting out of pocket, and I don’t do advertisements (and never will!) or paid reviews of things (like what, paid Tyvek reviews? “Best Ebola PPE”? Nope.) – so really, it would make me so happy if you could help out this way.

As always, do let me know how things work out for you! Feel free to leave comments and ideas below. If you’re getting value out of using this script, just let me know.

MedDRA + VAERS: A marriage made in hell?

This post is a Golden DDoS Award winner

So far, this blog was DDoS’d only three times within 24 hours of this post’s publication. That deserves a prize.

Quick: what do a broken femur, Henoch-Schönlein purpura, fainting, an expired vaccine and a healthy childbirth have in common? If your answer was “they’re all valid MedDRA codes”, you’re doing pretty well. If you, from that, deduced that they all can be logged on VAERS as adverse effects of vaccination, you’re exceeding expectations. And if you also realise that the idea that Jane got an expired HPV vaccine, and as a consequence broke her femur, developed Henoch-Schönlein purpura, and suddenly gave birth to a healthy baby boy is completely idiotic and yet can be logged on VAERS, you’re getting where I’m going.

MedDRA is a medical nomenclature specifically developed for the purposes of pharmacovigilance. The idea is, actually, not dreadful – there are some things in a usual medical nomenclature like ICD-10 that are not appropriate for a nomenclature used for pharmacovigilance reporting (V97.33: sucked into jet engine comes to my mind), and then there are things that are specific to pharmacovigilance, such as “oh shoot, that was not supposed to go up his bum!” (MedDRA 10013659: vaccine administered at inappropriate site), “we overdosed little Johnny on the flu vaccine!” (MedDRA 10000381: drug overdose, accidental) and other joys that generally do only happen in the context of pharmacovigilance. So far, so good.

At the same time, MedDRA is non-hierarchical, at least on the coding level. Thus, while the ICD code V97.33 tells you that you’re dealing with an external cause of mortality and morbidity (V and Y codes), specifically air and space transport (V95-97), more specifically ‘other’ specific air transport accidents, specifically getting sucked into a jet engine (V97.33), there’s no way to extract from MedDRA 10000381 what the hell we’re dealing with. Not only do we not know if it’s a test result, a procedure, a test or a disease, we are hopelessly lost as to figuring out what larger categories it belongs to. To make matters worse, MedDRA is proprietary – which is in and of itself deeply offensive to the idea of open research on VAERS and other public databases (a public database should not rely on proprietary encoding!) – and it lacks the inherent logic of ICD-10. Consider the encoding of the clinical diagnosis of unilateral headache in both:

SOC: System Organ Class, HLGT: High Level Group Term, HLT: High Level Term, PT: Preferred Term, LLT: Lower Level Term

Attempting to encode the same concept in MedDRA and ICD-10, note that the final code in MedDRA (10067040) does not allow for the predecessors to be reverse engineered, unless a look-up table is used – which is proprietary. ICD-10, at the same time, uses a structure that encodes the entire ‘ancestry’ of an entity in the final code. Not only does this allow for intermediate codes, e.g. G44 for an ‘other’ headache syndrome until it is more closely defined, it also allows for structured analysis of the final code, and even without a look-up table (which is public, as is the whole ICD-10), the level of kinship between two ICD-10 codes can be ascertained with ease.

We know that an ICD code beginning with F will be something psychiatric and G will be neurological, and from that alone we can get some easy analytical approaches (a popular one is looking at billed codes and drilling down by hierarchical level of ICD-10 codes, something in which the ICD-10 is vastly superior to its predecessor). MedDRA, alas, offers us no such help.
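To make the contrast concrete, here is a tiny illustration in R – nothing clever, just string handling. With ICD-10, the chapter and block can be read straight off the code itself; a MedDRA code such as 10067040 is an opaque identifier that yields nothing without the (proprietary) look-up table:

# ICD-10 codes carry their own ancestry: chapter and block are substrings.
icd10 <- "G44.1"             # an example code under the G44 'other headache syndromes' block
substr(icd10, 1, 1)          # "G"   -> neurological chapter
substr(icd10, 1, 3)          # "G44" -> the 'other headache syndromes' block

# A MedDRA code, by contrast, reveals nothing about its SOC, HLGT, HLT,
# or whether it is a PT or an LLT.
meddra <- "10067040"
substr(meddra, 1, 3)         # "100" -> structurally meaningless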

Garbage in, garbage out

OK, so we’ve got a nomenclature where the codes for needlestick injury, death, pneumonia, congenital myopathy and a CBC look all the same. That’s already bad enough. It gets worse when you can enter any and all of these into the one single field. Meet VAERS.

The idea of VAERS is to allow physicians, non-physicians and ‘members of the public’ to report incidents. These are then coded by the CDC and depending on seriousness, they may or may not be investigated (all reports that are regarded as ‘serious’ are investigated, according to the CDC). The problem is that this approach is susceptible to three particular vulnerabilities:

  • The single field problem: VAERS has a single field for ‘symptoms’. Everything’s a symptom. This includes pre-existing conditions, new onset conditions, vaccination errors, lab tests (not merely results, just the tests themselves!), interventions (without specifying if they’re before or after the vaccine), and so on. There is also no way to filter out factors that definitely have nothing to do with the vaccine, such as a pre-existing birth defect. The History/Allergies field is not coded.
  • The coding problem: what gets coded and what does not is sometimes imperfect. This being a human process, it’s impossible to expect perfection, but the ramifications of this for certain methods of analysis are immense. For instance, if there are 100 cases of uncontrollable vomiting, that may be a signal. But if half of those are coded as ‘gastrointestinal disorder’ (also an existing code), you have two values of 50, neither of which may end up being a signal.
  • The issue of multiple coding: because MedDRA is non-hierarchical, it is not possible to normalise at a higher level (say, with ICD-10 codes, at chapter or block level), and it is not clear if two codes are hierarchically related. In ICD-10, if a record contains I07 (rheumatic tricuspid valve disease) and I07.2 (tricuspid stenosis with tricuspid insufficiency), one can decide to retain the more specific or the less specific entry, depending on intended purpose of the analysis.

In the following, I will demonstrate each of these based on randomly selected reports from VAERS.

The Single Field Problem (SFP)

The core of the SFP is that there is only one codeable field, ‘symptoms’.

VAERS ID 375693-1 involves a report, in which the patient claims she developed, between the first and second round of Gardasil,

severe stomach pain, cramping, and burning that lasted weeks. Muscle aches and overall feeling of not being well. In August 2009 patient had flu like symptoms, anxiety, depression, fatigue, ulcers, acne, overall feeling of illness or impending death.

Below is the patient’s symptom transposition into MedDRA entities (under Symptoms):

Event details for VAERS report 375693-1

The above example shows the mixture of symptoms, diagnostic procedures and diagnostic entities that are coded in the ‘Symptoms’ field. The principal problem with this is that when considering mass correlations (all drugs vs all symptoms, for instance), this system would treat a blood test just as much as a contributor to a safety signal as anxiety or myalgia, which might be true issues, or depression, which is a true diagnosis. Unfiltered, this makes VAERS effectively useless for market basket (co-occurrence frequency) analyses.

Consider for instance, that PRR is calculated as

PRR_{V,R} = \frac{\Sigma (R \mid V) / \Sigma (V)}{\Sigma (R \mid \neg V) / \Sigma (\neg V)} = \frac{\Sigma (R \mid V)}{\Sigma (V)} \cdot \frac{\Sigma (\neg V)}{\Sigma (R \mid \neg V)}

where V denotes the vaccine of interest, R denotes the reaction of interest, and the \Sigma operator denotes the sum of rows or columns that fulfill the requisite criteria (a more detailed, matrix-based version of this equation is presented here). But if \{R\}, the set of all R, contains not merely diagnoses but also various ‘non-diagnoses’, the PRR calculation will be distorted. For constant V and an unduly large R, the values computationally obtained from the VAERS data that ought to be \Sigma(R \mid V) and \Sigma(R \mid \neg V) will both be inaccurately inflated. This will yield inaccurate final results.
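For illustration, the PRR itself is a one-liner once you have the four counts; the numbers below are made up, and the variable names are mine rather than anything used in VAERS:

# Proportional reporting ratio from report counts (illustrative figures).
# r_v:         reports mentioning both the vaccine of interest V and the reaction R
# total_v:     all reports mentioning V
# r_not_v:     reports mentioning R with any other vaccine
# total_not_v: all reports for other vaccines
prr <- function(r_v, total_v, r_not_v, total_not_v) {
  (r_v / total_v) / (r_not_v / total_not_v)
}

prr(r_v = 25, total_v = 1000, r_not_v = 50, total_not_v = 10000)   # 5: R is reported 5x as often with V

It is also easy to see how letting non-diagnoses into \{R\} muddies the counts that feed these four figures.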

Just how bad IS this problem? About 30% bad, if not more. A manual tagging of the top 1,000 symptoms (by N, i.e. by the number of occurrences) was used as an estimate for how many of the diagnostic entities do not disclose an actual problem with the vaccine.

In this exercise, each of the top 1,000 diagnostic codes (by N, i.e. by occurrence) was categorised into a number of categories, which in turn were divided into includable (yellow) and non-includable (blue) categories. An includable category reveals a relevant (=adverse) test result, a symptom, a diagnosis or a particular issue, while non-includables pertain to procedures, diagnostics without results, negative test results and administration, storage & handling defects.

According to the survey of the top 1,000 codes, only a little more than 70% of the codes themselves disclose a relevant issue with the vaccine. In other words, almost a third of disclosed symptoms must be pruned, and these cannot be categorically pruned because unlike ICD-10, MedDRA does not disclose hierarchies based on which such pruning would be possible. As far as the use of MedDRA goes, this alone should be a complete disaster.

Again, for effect: a third of the codes do not disclose an actual side effect of the medication. These are not separate or identifiable in any way other than manually classifying them and seeing whether they disclose an actual side effect or just an ancillary issue. Pharmacovigilance relies on accurate source data, and VAERS is not set up, with its current use of MedDRA, to deliver that.

The coding problem

Once a VAERS report is received, it is MedDRA coded at the CDC. Now, no manual coding is perfect, but that’s not the point here. The problem is that a MedDRA code does not, in and of itself, indicate the level of detail it holds. For instance, 10025169 and 10021881 look exactly alike, where in fact the first is a lowest-level entity (an LLT – Lower-Level Term – in MedDRA lingo) representing Lyme disease, while the latter is the top-level class (SOC – System Organ Class) corresponding to infectious diseases. What this means is that once we see a MedDRA coded entity as its code, we don’t know what level of specificity we are dealing with.

The problem gets worse with named entities. You see, MedDRA has a ‘leaf’ structure: every branch must terminate in one or more (usually one) LLT. Often enough, LLTs have the same name as their parent PT, so you get PT Lyme disease and LLT Lyme disease. Not that it terrifically matters for most applications, but when you see only the verbose output, as is the case in VAERS, you don’t know if this is a PT, an LLT, or, God forbid, a higher level concept with a similar name.

Finally, to put the cherry on top of the cake, where a PT is also the LLT, they have the same code. So for Lyme disease, the PT and LLT both have the code 10025169. I’m sure this seemed like a good idea at the time.

The issue of multiple coding

As has been touched upon previously, because MedDRA lacks an inherent hierarchy, a code cannot be converted into its next upper level without using a lookup table, whereas with, say, ICD-10, one can simply normalise to the chapter and block (the ‘part left of the dot’). More problematically, however, the same code may be a PT or an LLT, as is the case for Lyme disease (10025169).

Let’s look at this formally. Let the operator \in^* denote membership under the transitive closure of the set membership relation, so that

  1. if x \in A, then x \in^* A,
  2. if x \in A and A \subseteq B, then x \in^* B.

and so on, recursively, ad infinitum. Let furthermore \in^*_{m} denote the depth of recursion, so that

  1. for x \in A:  x \in^*_{0} A,
  2. for x \in A \mid A \subseteq B:  x \in^*_{1} B,

and, once again, so on, recursively, ad infinitum.

Then let a coding scheme \{S_{1...n}\} exhibit the Definite Degree of Transitiveness (DDoT) property iff (if and only if) for any S_m \mid m \leq n, there exists exactly one p for which it is true that S_m \in^*_{p} S.

Or, in other words, two codes S_q, S_r \mid q, r \leq n, may not be representable identically if p_q \neq p_r. Less formally: two codes on different levels may not be identical. This is clearly violated in MedDRA, as the example below shows.

Violating the Definite Degree of Transitiveness (DDoT) property: PT Botulism and LLT Botulism have different p numbers, i.e. different levels, but have nonetheless the same code. This makes the entity 10006041 indeterminate – is it the PT or the LLT for botulism? This is a result of the ‘leaf node constraint’ of MedDRA’s design, but a bug by design is still a bug, not a feature.

Bonus: the ethical problem

To me as a public health researcher, there is a huge ethical problem with the use of MedDRA in VAERS. I believe very strongly in open data and in the openness of biomedical information. I’m not alone: for better or worse, the wealth – terabytes upon terabytes – of biomedical data, genetics, X-ray crystallography, models and sequences proves that if I’m a dreamer, I’m not the only one.

Which is why it’s little short of an insult to the public that a pharmacovigilance system is using a proprietary encoding model.

Downloads from VAERS, of course, provide the verbose names of the conditions or symptoms, but not what hierarchical level they sit at, nor where they fit in the overall structure. For that, unless you are a regulatory authority or a ‘non-profit’ or ‘non-commercial’ (which would already exclude a blogger who unlike me has ads on their blog to pay for hosting, or indeed most individual researchers, who by their nature could not provide the documentation to prove they aren’t making any money), you have to shell out some serious money.

MedDRA is one expensive toy.

Worse, the ‘non-profit’ definition does not include a non-profit research institution or an individual non-profit researcher, or any of the research bodies that are not medical libraries or affiliated with educational institutions but are funded by third party non-profit funding:

This just keeps getting worse. Where would a non-profit, non-patient care provider, non-educational, grant-funded research institution go?

There is something rotten with the use of MedDRA, and it’s not just how unsuitable it is for the purpose: it is also the sheer obscenity of a public database of grave public interest being tied to a proprietary (and, as I hope has been demonstrated above, vastly unsuitable and flawed) nomenclature.

Is VAERS lost?

Resolving the MedDRA issue

Unlike quite a few people in the field, I don’t think VAERS is hopelessly lost. There’s, in fact, great potential in it. But the way it integrates with MedDRA has to be changed. This is both a moral point – a point of commitment to opening up government information – and one of facilitating research.

There are two alternatives at this point for the CDC.

  1. MedDRA has to open up at least the 17% of codes that are used within VAERS, complete with their hierarchy. These should be accessible within VAERS itself, including the CDC WONDER interface.
  2. The CDC has to switch to a more suitable system. ICD-10 alone is not necessarily the best solution, and there are few alternatives, which puts MedDRA into a monopoly position that it seems to mercilessly exploit at this time. This can – and should – change.

Moving past the Single Field Problem

MedDRA apart, it is crucial for VAERS to resolve the Single Field Problem. It is clear from the issues presented in the first paragraph – a broken femur, Henoch-Schönlein purpura, fainting, an expired vaccine and a healthy childbirth – that there is a range of issues that need to be logged. A good structure would be

  1. pre-existing conditions and risk factors,
  2. symptoms that arose within 6 hours of administration,
  3. symptoms that arose within 48 hours of administration,
  4. symptoms that arose later than 48 hours after administration,
  5. non-symptoms,
  6. clinical tests without results,
  7. clinical tests segmented by positive and negative results, and
  8. ancillary circumstances, esp. circumstances pertaining to vaccination errors such as wrong vaccine administered, expired vaccine, etc.

This segmentation would differentiate not only by time of occurrence, but would also allow for adequate filtering to identify the correct denominators for the PRR.
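Purely as an illustration of what such a segmented report might look like as a data structure – every field name and value below is hypothetical, borrowing the examples from the opening paragraph:

# A hypothetical, segmented VAERS-style report record, following the
# structure proposed above. Field names and values are illustrative only.
report <- list(
  vaers_id                = "000000-0",
  preexisting_conditions  = c("pre-existing birth defect"),
  symptoms_within_6h      = c("fainting"),
  symptoms_within_48h     = c("Henoch-Schönlein purpura"),
  symptoms_after_48h      = c("broken femur"),
  non_symptoms            = c("healthy childbirth"),
  tests_without_results   = c("CBC"),
  tests_with_results      = list(),
  ancillary_circumstances = c("expired vaccine administered")
)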

A future with (for?) MedDRA

As said, I am not necessarily hostile to MedDRA, even if the closet libertarian in me bristles at the fact that MedDRA is mercilessly exploiting what is an effective monopoly position. But MedDRA can be better, and needs to be better – if not for its own economic interests, then for the interests of those it serves. There are three particular suggestions MedDRA needs to seriously consider.

  1. MedDRA’s entity structure is valuable – arguably, it’s the value in the entire project. If coding can be structured to reflect its internal hierarchy, MedDRA becomes parseable without a LUT,1 and kinship structures become discernible without that extra look-up step.
  2. MedDRA needs to open up, above all to researchers not falling within its narrowly defined confines of access, especially given the inherent public nature of its use – PhV and regulation are quintessentially public functions, and these need an open system.
  3. MedDRA’s entity structure’s biggest strength is that it comprises a range of different things, from administrative errors through physical injuries to test results and the simple fact of tests.

Conclusion

VAERS is a valuable system with a range of flaws. All of them are avoidable and correctable – but would require the requisite level of will and commitment – both on CDC’s side and that of MedDRA. For any progress in this field, it is imperative that the CDC understand that a public resource maintained in the public interest cannot be driven by a proprietary nomenclature, least of all one that is priced out of the range of the average interested individual: and if they cannot be served, does the entire system even fulfill its governmental function of being of the people and for the people? It is ultimately CDC’s asset, and it has a unique chance to leverage its position to ensure that at least as far as the 17% of MedDRA codes go that are used in VAERS, these are released openly.

In the end, however sophisticated our dissimilarity metrics, when 30% of all entities are non-symptoms and we need to manually prune the key terms to avoid denominator bloat due to non-symptom entities, such as diagnostic tests without results or clearly unconnected causes of morbidity and mortality like motor vehicle accidents, dissimilarity based approaches will suffer from serious flaws. In the absence of detailed administration and symptom tracking at an individual or institutional level, dissimilarity metrics are the cheapest and most feasible ways of creating value out of post marketing passive reports. If VAERS is to be a useful research tool, as I firmly believe it was intended to be, it must evolve to that capability for all.

References

1. Look-up table

Ebola! Graph databases! Contact tracing! Bad puns!

Thanks to the awesome folks at Neo4j Budapest and GraphAware, I will be talking tonight about Ebola, contact tracing, and how graph databases help us understand epidemics and maybe prevent them someday. Now, if flying to Budapest on short notice doesn’t work for you, you can listen to a livestream of the whole event here! It starts today, 13 February, at 1830 CET, 1730 GMT or 1230 Eastern Time, and I sincerely hope you will listen to it, live or later from the recording, also accessible here.

On the challenge of building HTTP REST APIs that don’t suck

Here’s a harsh truth: most RESTful HTTP APIs (in the following, APIs) suck, to some degree or another. Including the ones I’ve written.

Now, to an extent, this is not my fault, your fault or indeed anyone’s fault. APIs occupy a strange no man’s land between stuff designed for machines and stuff designed for humans. On one hand, APIs are intended to allow applications and services to communicate with each other. If humans want to interact with some service, they will do so via some wrapper around an API, be it an iOS app, a web application or a desktop client. Indeed, the tool you need to interact with APIs – an HTTP client – is orders of magnitude less well known and less ubiquitous than a web browser. Everybody has a web browser and knows how to use one. Few people have a dedicated desktop HTTP browser like Paw or know how to use something like curl. Quick, how do you do a token auth header in curl?

At the same time, even if the end user of the API is the under-the-hood part of a client rather than a human end user, humans have to deal with the API at some point, when they’re building whatever connects to the API. Tools like Swagger/OpenAPI were intended to somewhat simplify this process, and the idea was good – let’s have APIs generate a schema that they also serve up from which a generic client can then build a specific client. Except that’s not how it ended up working in practice, and in the overwhelming majority of cases, the way an API handler is written involves Dexedrine, coffee and long hours spent poring over the API documentation.

Which is why your API can’t suck completely. There’s no reason why your API can’t be a jumbled mess of methods from the perspective of your end user, who will interact with your API without needing to know what an API even is. That’s the beauty of it all. But if you want people to use your service – which you should very much want! – you’ll have to have an API that people can get some use out of.

Now, the web is awash with stuff about best practices in developing REST APIs. And, quite frankly, most of these are chock-full of good advice. Yes, use plural nouns. Use HATEOAS. Use the right methods. Don’t create GET methods that can change state. And so on.

But the most important thing to know about rules, as a former CO of mine used to say, is to know when to break them, and what the consequences will be when you do. There’s a philosophy of RESTful API design called pragmatic REST that acknowledges this to an extent, and uses the ideas underlying REST as a guideline, rather than strict, immutable rules. So, the first step of building APIs that don’t suck is knowing the consequences of everything you do. The problem with adhering to doctrine or rules or best practices is that none of that tells you what the consequences of your actions are, whether you follow them or not. That’s especially important when considering the consequences of not following the rules: not pluralizing your nouns and using GET to alter state have vastly different consequences. The former will piss off your colleagues (rightly so), the latter will possibly endanger the safety of your API and lead to what is sometimes referred to in the industry as Some Time Spent Updating Your LinkedIn & Resume.

Secondly, and you can take this from me – no rules are self-explanatory. Even with all the guidance in the world in your hand, there’s a decent chance I’ll have no idea why most of your code is the way it is. The bottom line being: document everything. I’d much rather have an API breaking fifteen rules and giving doctrinaire rule-followers an apoplectic fit but which is well-documented over a super-tidy bit of best practices incarnate (wouldn’t that be incodeate, given that code is not strictly made of meat?) that’s missing any useful documentation, any day of the week. There are several ways to document APIs, and no strictly right one – in fact, I would use several different methods within the same project for different endpoints. So for instance a totally run-of-the-mill DELETE endpoint that takes an object UUID as an argument requires much less documentation than a complex filtering interface that takes fifty different arguments, some of which may be mandatory. A few general principles have served me well in the past when it comes to documenting APIs:

  • Keep as much of the documentation as you can out of the code and in the parts that make it into the documentation. For instance, in Python, this is the docstring.
  • If your documentation allows, put examples into the docstring. An example can easily be drawn from the tests, by the way, which makes it a twofer. (A short sketch of what this can look like follows after this list.)
  • Don’t document for documentation’s sake. Document to help people understand. Avoid tedious, wordy explanation for a method that’s blindingly obvious to everyone.
  • Eschew the concept of ‘required’ fields, values, query parameters, and so on. Nothing is ‘required’ – the world will not end if a query parameter is not provided, and you will be able to make the request at the very least. Rather, make it clear what the consequences of not providing the information will be. What happens if you do not enter a ‘required’ parameter? Merely calling it ‘required’ does not really tell me if it will crash, yield a cryptic error message or simply fail silently (which is something you should also try to avoid).
  • Where something must have a particular type of value (e.g. an integer), where a value will have to be provided in a particular way (e.g. a Boolean encoded as 0/1 or True/False) or has a limited set of possible values (e.g. in an application tracking high school students, the year query parameter may only take the values ['freshman', 'sophomore', 'junior', 'senior']), make sure this is clearly documented, and make it clear whether the values are case sensitive or not.
  • Only put things into inline comments that would only be required for someone who is reading your code. Anything a user of your methods/endpoints ought to know about them should be in the docstring or otherwise end up in your documentation – your docstring, of course, has the added benefit that it will be visible not only for people reading the documentation but also for whoever is reading your code.
  • If you envisage even the most remote possibility that your API will have to handle Unicode, emojis or other fancy things (basically, anything beyond ASCII), make sure you explain how your API handles such values.
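
To make the points about docstrings and examples concrete, here is a minimal sketch in Python. The endpoint, parameter and data are entirely made up, and the ‘year’ parameter deliberately mirrors the high school example above:

def get_students(year=None):
    """Return the list of students, optionally filtered by school year.

    Args:
        year: one of 'freshman', 'sophomore', 'junior' or 'senior'
            (case insensitive). If omitted, students from all years are
            returned; an unrecognised value returns an empty list rather
            than raising an error.

    Example:
        >>> get_students(year='junior')
        [{'name': 'Ada', 'year': 'junior'}]
    """
    # Stand-in data -- in real life this would hit the database.
    students = [{"name": "Ada", "year": "junior"},
                {"name": "Bob", "year": "senior"}]
    if year is None:
        return students
    return [s for s in students if s["year"] == year.lower()]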

Finally, eat your own dog food. Writing a wrapper for your API is not only a commercially sound idea (it is much more fun for other developers to just grab an API wrapper for their language of choice than having to homebrew one), it’s also a great way to gauge how painful it is to work with your API. Unless it’s anything above a 6 on the 1-10 Visual Equivalent Scale of Painful and Grumpy Faces, you’ll be fine. And if you need to make changes, make any breaking change part of a new version. An API version string doesn’t necessarily mean the API cannot change at all, but it does mean you may not make breaking changes – this means any method, any endpoint and any argument that worked on day 0 of releasing v1 will have to work on v1, forever.

Following these rules won’t ensure your API won’t suck. But they’ll make sucking much more difficult, which is half the victory. A great marksmanship instructor I used to know said that the essence of ‘technique’, be it in handling a weapon or writing an API, is to reduce the opportunity of making avoidable mistakes. Correct running technique will force you to run in a way that doesn’t even let you injure your ankle unless you deviate from the form. Correct shooting technique eliminates the risk of elevation divergences due to discrepancies in how much air remains in the lungs by simply making you squeeze the trigger at the very end of your expiration. Good API development technique keeps you from creating APIs that suck by restricting you to practices that won’t allow you to commit some of the more egregious sins of writing APIs. And the more you can see beyond the rules and synthesise them into a body of technique that keeps you from making mistakes, the better your code will be, without cramping your creativity.

Using screen to babysit long-running processes

In machine learning, especially in deep learning, long-running processes are quite common. Just yesterday, I finished running an optimisation process that ran for the best part of four days –  and that’s on a 4-core machine with an Nvidia GRID K2, letting me crunch my data on 3,072 GPU cores!  Of course, I did not want to babysit the whole process. Least of all did I want to have to do so from my laptop. There’s a reason we have tools like Sentry, which can be easily adapted from webapp monitoring to letting you know how your model is doing.

One solution is to spin up another virtual machine, ssh into that machine, then from that ssh into the machine running the code, so that if you drop the connection to the first machine, it will not drop the connection to the second. There is also nohup, which makes sure that the process is not killed when you ‘hang up’ the ssh connection. You will, however, not be able to get back into the process again. There are also reparenting tools like reptyr, but the need they meet is somewhat different. Enter terminal multiplexers.

Terminal multiplexers are old. They date from the era of things like time-sharing systems and other antiquities whose purpose was to allow a large number of users to get their time on a mainframe designed to serve hundreds, even thousands of users. With the advent of personal computers that had decent computational power on their own, terminal multiplexers remained the preserve of universities and other weirdos still using mainframe architectures. Fortunately for us, two great terminal multiplexers, screen (aka GNU Screen ) and tmux , are still being actively developed, and are almost definitely available for your *nix of choice. This gives us a convenient tool to sneak a peek at what’s going on with our long-suffering process. Here’s how.

Step 1
ssh into your remote machine, and launch screen (simply by typing screen). You may need to do this as sudo if you encounter the error where screen, instead of starting up a new shell, returns [screen is terminating] and quits. If screen is started up correctly, you should be seeing a slightly different shell prompt (and if you started it as sudo, you will now be logged in as root).
In some scenarios, you may want to ‘name’ your screen session. Typically, this is the case when you want to share your screen with another user, e.g. for pair programming. To create a named screen, invoke screen using the session name parameter -S, as in e.g. screen -S my_shared_screen.
Step 2
In this step, we will be launching the actual script to run. If your script is Python based and you are using virtualenv (as you ought to!), activate the environment now using source <virtualenv folder>/<virtualenv name>/bin/activate, replacing <virtualenv folder> by the name of the folder where your virtualenvs live (for me, that’s the environments folder, often enough it’s something like ~/.virtualenvs) and <virtualenv name> by the name of your virtualenv (in my case, research). You have to activate your virtualenv even if you have done so outside of screen already (remember, screen means you’re in an entirely new shell, with all environment configurations, settings, aliases &c. gone)!

With your virtualenv activated, launch your script as normal – no need to launch it in the background. Indeed, one of the big advantages is the ability to see verbose mode progress indicators. If your script does not have a progress logger to stdout but logs to a logfile, you can start it using nohup, then put it into the background (Ctrl-Z, then bg) and track progress using tail -f logfile.log (where logfile.log is, of course, to be substituted by the filename of the logfile).
Step 3
Press Ctrl-A followed by Ctrl-D to detach from the current screen. This will take you back to your original shell after noting the address of the screen you’re detaching from. These always follow the format <identifier>.<session id>.<hostname>, where <hostname> is, of course, the hostname of the computer from which the screen session was started, <session id> stands for the name you gave your screen, if any, and <identifier> is an autogenerated 4-6 digit socket identifier. In general, as long as you are on the same machine, the screen identifier or the session name will be sufficient – the full canonical name is only necessary when trying to access a screen on another host.

To see a list of all screens running under your current username, enter screen -list. Refer to that listing or the address echoed when you detached from the screen to reattach to the process using screen -r <socket identifier>[.<session identifier>.<hostname>]. This will return you to the script, which keeps executing in the background.
Result
Reattaching to the process running in the background, you can now follow the progress of the script. Use the key combination from Step 3 to detach from the process at any time, and the reattachment command from the same step to return to it.

Bugs
There is a known issue, caused by strace, that leads to screen immediately closing, with the message [screen is terminating] upon invoking screen as a non-privileged user.

There are generally two ways to resolve this issue.

The overall effect of both solutions is the same. Notably, both may be undesirable from a security perspective. As always, weigh risks against utility.

Do you prefer screen to staying logged in? Do you have any other cool hacks to make monitoring a machine learning process that takes considerable time to run? Let me know in the comments!

Image credits: Zenith Z-19 by ajmexico on Flickr

A deep learning

There are posts that are harder to write than others. This one perhaps has been one of the hardest. It took me the best part of four months and dozens of rewrites.

Because it’s about something I love. And about someone I love. And about something else I love. And how these three came to come into a conflict. And, perhaps, what we all can learn from that.


As many of you might know, deep learning is my jam. Not in a faddish, ‘it’s what cool kids do these days’ sense. Nor, for that matter, in the sense so awfully prevalent in Silicon Valley, whereby the utility of something is measured in how many jobs it will get rid of, presumably freeing up humans to engage in more cerebral pursuits, or how it may someday cure intrinsically human problems if only those pesky humans were to listen to their technocratic betters for once. Rather, I’m a deep learning and AI researcher who believes in what he’s doing. I believe with all I am and all I’ve got that deep learning is right now our best chance to find better ways of curing cancer, producing more with fewer emissions, building structures that can withstand floods on a dime, identifying terrorists and, heck, creating entertaining stuff. I firmly believe that it’s one of the few intellectual pursuits I am somewhat suited for that is also worth my time, not least because I firmly believe that it will make me have more of it – and if not me, maybe someone equally worthy.

Which is why it was so hard for me to watch this video, of my lifelong idol Hayao Miyazaki ripping a deep learning researcher to shreds.

Now, quite frankly, I have little time for the researcher and his proposition. It’s badly made, dumb and pointless. Why one would inundate Miyazaki-san with it is beyond me. His putdown is completely on point, and not an ounce too harsh. All of his words are well deserved. As someone with a neurological chronic pain disorder that makes me sometimes feel like that creature writhing on the floor, I don’t have a shred of sympathy for this chap.1

Rather, it’s the last few words of Miyazaki-san that have punched a hole in my heart and have held my thoughts captive for months now, coming back into the forefront of my thoughts like a recurring nightmare.

“I feel like we are nearing the end of times,” he says, the camera gracefully hovering over his shoulder as he sketches through his tears. “We humans are losing faith in ourselves.”


Deep learning is something formidable, something incredible, something so futuristic yet so simple. Deep down (no pun intended), deep learning is really not much more than a combination of a few relatively simple tricks, some the best part of a century old, that together create something fantastic. Let me try to put it into layman’s terms (if you’re one of my fellow ML /AI nerds, you can just jump over this part).

Consider you are facing the arduous and yet tremendously important task of, say, identifying whether an image depicts a cat or a dog. In ML lingo, this is what we call a ‘classification’ task. One traditional approach used to be to define what cats are versus what dogs are, and provide rules. If it’s got whiskers, it’s a cat. If it’s got big puppy eyes, it’s, well, a puppy. If it’s got forward pointing eyes and a roughly circular face, it’s almost definitely a kitty. If it’s on a leash, it’s probably a dog. And so on, ad infinitum, your model of a cat-versus-dog becoming more and more accurate with each rule you add.

This is a fairly feasible approach, and is still used. In fact, there’s a whole school of machine learning called decision trees that relies on this kind of definition of your subjects. But there are three problems with it.

  1. You need to know quite a bit about cats and dogs to be able to do this. At the very least, you need to be able to, and take the time and effort to, describe cats and dogs. It’s not enough to merely feed images of each to the computer.2
  2. You are limited in time and ability to put down distinguishing features – your program cannot be infinitely large, nor do you have infinite time to write it. You must prioritise by identifying the factors with the greatest differentiating potential first. In other words, you need to know, in advance, what the most salient characteristics of cats versus dogs are – that is, what characteristics are almost omnipresent among cats but hardly ever occur among dogs (and vice versa)? All dogs have a snout and no cat has a snout, whereas some cats do have floppy ears and some dogs do have almost catlike triangular ears.
  3. You are limited to what you know. Silly as that may sound, there might be some differentia between cats and dogs that are so arcane, so mathematical that no human would think of it – but which might come trivially evident to a computer.

Deep learning, like friendship, is magic. Unlike most other techniques of machine learning, you don’t need to have the slightest idea of what differentiates cats from dogs. What you need is a few hundred images of each, preferably with a label (although that is not strictly necessary – classifiers can get by just fine without needing to be told what the names of the things they are classifying are: as long as they’re told how many different classes they are to split the images into, they will find differentiating features on their own and split the images into ‘images with thing 1’ versus ‘images with thing 2’ – magic, right?). Using modern deep learning libraries like TensorFlow and their high level abstractions (e.g. keras, tflearn) you can literally write a classifier that identifies cats versus dogs with very high accuracy in less than 50 lines of Python, and it will be able to classify thousands of cat and dog pics in a fraction of a minute, most of which will be taken up by loading the images rather than the actual classification.
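
To give you a flavour of just how little code that is, here is a minimal sketch using Keras. The folder layout (data/train/cats and data/train/dogs), the image size and the layer sizes are all arbitrary choices for illustration, not a recipe:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models

# Keras infers the two classes (cat, dog) from the subfolder names.
train = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(128, 128), batch_size=32, class_mode="binary")

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # one output: the probability of 'dog'
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)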

Told you it’s magic.


What makes deep learning ‘deep’, though? The origins of deep learning are older than modern computers. In 1943, McCulloch and Pitts published a paper3 that posited a model of neural activity based on propositional logic. Spurred by the mid-20th century advances in understanding how the nervous system works, in particular how nerve cells are interconnected, McCulloch and Pitts simply drew the obvious conclusion: there is a way you can represent neural connections using propositional logic (and, actually, vice versa). But it wasn’t until 1958 that this idea was followed up in earnest. Rosenblatt’s ground-breaking paper4 introduced this thing called the perceptron, something that sounds like the ideal robotic boyfriend/therapist but in fact was intended as a mathematical model for how the brain stores and processes information. A perceptron is a network of artificial neurons. Consider the cat/dog example. A simple single-layer perceptron has a list of input neurons x_1 , x_2 and so on. Each of these describes a particular property. Does the animal have a snout? Does it go woof? Depending on how characteristic they are, they’re multiplied by a weight w_n . For instance, all dogs and no cats have snouts, so w_1 will be relatively high, while there are cats that don’t have long curly tails and dogs that do, so the weight on a ‘long curly tail’ input will be relatively low.

At the end, the output neuron (denoted by the big  \Sigma  ) sums up these results, and gives an estimate as to whether it’s a cat or a dog.
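
In code, that single-layer perceptron is almost embarrassingly small. The features, weights and cut-off below are invented purely for the sake of the example:

import numpy as np

features = np.array([1.0, 0.0, 1.0])   # has a snout? goes woof? wears a leash?
weights  = np.array([0.6, 0.3, 0.1])   # how characteristic each feature is of 'dog'
cutoff = 0.5                           # hand-picked decision threshold

weighted_sum = np.dot(weights, features)          # the big sigma: sum of weighted inputs
print("dog" if weighted_sum > cutoff else "cat")  # prints 'dog' for these made-up values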

What was initially designed to model the way the brain works soon showed remarkable utility in applied computation, to the point that the US Navy was roped into building an actual, physical perceptron machine – the first application of computer vision. However, it was a complete bust. It turned out that a single-layer perceptron couldn’t really recognise a lot of patterns – famously, it cannot even learn a function as simple as XOR. What it lacked was depth.

What do we mean by depth? Consider the human brain. The brain actually doesn’t have a single part devoted to vision. Rather, it has six separate areas5 – the striate cortex (V1) and the extrastriate areas (V2-V6). These form a feedforward pathway of sorts, where V1 feeds into V2, which feeds into V3 and so on. To massively oversimplify: V1 detects optical features like edges, which it feeds on to V2, which breaks these down into more complex features: shapes, orientation, colour &c. As you proceed towards the back of the head, the visual centres detect increasingly complex abstractions from the simple visual information. What was found is that by putting layers and layers of neurons after one another, even very complex patterns can be identified accurately. There is a hierarchy of features, as the facial recognition example below shows.

The first hidden layer recognises simple geometries and blobs at different parts of the zone. The second hidden layer fires if it detects particular manifestations of parts of the face – noses, eyes, mouths. Finally, the third layer fires if it ‘sees’ a particular combination of these. Much like an identikit image, a face is recognised because it contains parts of a face, which in turn are recognised because they contain a characteristic spatial alignment of simple geometries.

There’s much more to deep learning than what I have tried to convey in a few paragraphs. The applications are endless. With the cost of computing decreasing rapidly, deep learning applications have now become feasible in just about all spheres where they can be applied. And they excel everywhere, outpacing not only other machine learning approaches (which makes me absolutely stoked about the future!) but, at times, also humans.


Which leads me back to Miyazaki. You see, deep learning can’t just classify things or predict stock prices. It can also create stuff. To put an old misunderstanding to rest quite early: generative neural networks are genuinely creating new things. Rather than merely combining pre-programmed elements, they come as close as anything non-human can come to creativity.

The pinnacle of it all, generating enjoyable music, is still some ways off, and we have yet to enjoy a novel written by a deep learning engine. But to anyone who has been watching the rapid development of deep learning and especially generative algorithms based on deep learning, these are literally just questions of time.

Or perhaps, as Miyazaki said, questions of the ‘end of times’.

What sets a computer-generated piece apart from a human’s composition? Someday, they will be, as far as quality is concerned, indistinguishable. Yet something that will always set them apart is the absence of a creator.

In what is probably one of the worst-written essays in 20th century literary criticism, a field already overflowing with bad prose for bad prose’s sake, Roland Barthes’s 1967 essay La mort de l’auteur posited a sort of separation between the author and the text, countering centuries of literary criticism that sought to explain the meaning of the latter by reference to the former. According to Barthes, texts (and so, compositions, paintings &c.) have a life and existence of their own. To liberate works of art from an ‘interpretive tyranny’ that is almost self-explanatorily imposed on them, they must be read, interpreted and understood by reference to their audience and not their author. Indeed, Barthes eschews the term ‘author’ altogether in favour of ‘scriptor’, the latter hearkening back to the medieval monks who copied manuscripts: like them, the scriptor is not in control of the narrative or work of art that he or she composes. Devoid of the author’s authority, the work of art is now free to exist in a liberated state that allows you – the recipient – to establish its essential meaning.

Oddly, that’s not entirely what post-modernism seems to have created. If anything, there is now an increased focus on the author, at the very least in one particular sense. Consider the curious case of Wagner’s works in Israel. Because of his anti-Semitic views, arguably as well as due to the favour his music found during the tragic years of the Third Reich, Wagner’s works – even those that do not even remotely express a political position – are rarely played in Israel. Even in recent years, other than Holocaust survivor Mendi Rodan’s performance of the Siegfried Idyll in 2000, there have been very few instances of Wagner played in Israel – despite the curious fact that Theodor Herzl, the founder of Zionism, admired Wagner’s music (if not his vile racial politics). Rather than the death of the author, we more often witness the death of the work. The taint of the author’s life comes to haunt the chords of his compositions and the stanzas of his poetry, every brush-stroke forever imbued with the often very human sins and mistakes of its creator’s life.

Less dramatic, perhaps, than Wagner’s case are the increasingly frequent boycotts, outbursts and protests against works of art solely based on the character of the author or composer. One must only look at the recent past to see protests, for instance, against the works of HP Lovecraft, themselves having to do more with eldritch horrors than racist horridness, due to the author’s admittedly reprehensible views on matters of race. Outrages about one author or another, one artist or the next, are commonplace, acted out on a daily basis on the Twitter gibbets and the Facebook  pillory. Rather than the death of the author, we experience the death of art, amidst an increasingly intolerant culture towards  the works of flawed or sinful creators.

This is, of course, not to excuse any of those sins or flaws. They should not, and cannot, be excused. Rather, perhaps, it is to suggest that part of a better understanding of humanity is that artists are a cross-section of us as a species, equally prone to be misled and deluded into adopting positions that, as the famous German anti-Fascist and children’s book author Erich Kästner said, ‘feed the animal within man’. Nor is this to condone or justify art that actively expresses those reprehensible views – an entirely different issue. Rather, I seek merely to draw attention to the increased tendency to condemn works of art for the artist’s political sins. In many cases, these sins are far from being as straightforward as Lovecraft’s bigotry and Wagner’s anti-Semitism. In many cases, these sins can be as subtle as going against the drift of public opinion, the Orwellian sin of ‘wrongthink’. With the internet having become a haven of mob mentality (something I personally was subjected to a few years ago), the threshold of what sins  of the creator shall be visited upon their creations has significantly decreased. It’s not the end of days, but you can see it from here.

In which case perhaps Miyazaki is right.

Perhaps what we need is art produced by computers.


As Miyazaki-san said, we are losing faith in ourselves. Not in our ability to create wonderful works of art, but in our ability to measure up to some flawless ethos, to some expectation of the artist as the flawless being. We are losing faith in our artists. We are losing faith in our creators, our poets and painters and sculptors and playwrights and composers, because we fear that the inevitable revelation of greater – or perhaps lesser – misdeeds or wrongful opinions from their past shall not merely taint them: it shall no less taint us, the fans and aficionados and cognoscenti. Put not your faith in earthly artists, for they are fickle, and prone to having opinions that might be unacceptable, or be seen as such someday. Is it not a straightforward response, then, to declare one’s love for the intolerable synthetic Baroque of Stanford machine learning genius Cary Kaiming Huang’s research? In a society where the artist’s sins taint the work of art and, through that, all those who confessed to enjoying his works, there’s no other safe bet. Only the AI can cast the first stone.

And if the cost of that is truly the chirps of Cary’s synthetic Baroque generator, Miyazaki is right on the other point, too. It truly is the end of days.

References

1. Not least because I know how rudimentary and lame his work is. I’ve built evolutionary models of locomotion where the first stages look like this. There’s no cutting edge science here.
2. There’s a whole aspect of the story called feature extraction, which I will ignore for the sake of simplicity, and assume that it just happens. It doesn’t, of course, and it plays a huge role in identifying things, but this story is complex enough already as it is.
3. McCulloch, W and Pitts, W (1943). A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5 (4): 115–133. doi:10.1007/BF02478259.
4. Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model For Information Storage And Organization In The Brain. Psych Rev 65 (6): 386–408. doi:10.1037/h0042519
5. Or five, depending on whether you consider the dorsomedial area a separate area of the extrastriate cortex

How I predicted Trump’s victory

Introit

“Can you, just once, explain it in intelligible words?”, my wife asked.

We had been talking about American politics for about an hour, and I had made a valiant effort at trying to explain to her how my predictive model for the election worked, what it took into account and what it did… but twenty minutes in, I was torn between either using terms like stochastic gradient descent and confusing her, or having to start to build everything up from high school times tables onwards.

Now, my wife is no dunce. She is one of the most intelligent people I’ve ever had the honour to encounter, and I’ve spent years moving around academia and industry and science. She’s not only a wonderful artist and a passionate supporter of the arts, she’s also endowed with that clear, incisive intelligence that can whittle down the smooth, impure rock of a nascent theory into the Koh-I-Noor clarity of her theoretical work.

Yet, the fact is, we’ve become a very specialised industry. We, who are in the business of predicting the future, now do so with models that are barely intelligible to outsiders, and some even barely intelligible to those who do not share a subfield with you (I’m looking at you, my fellow topological analytics theorists!). Quite frankly, then: the world is run by algorithms that at best a fraction of us understand.

So when asked to write an account of how I predicted Trump’s victory, I’ve tried to write an account for a ‘popular audience’. 1 That means there’s more I want to get across than the way I built some model that for once turned out to be right. I also want to give you an insight into a world that’s generally pretty well hidden behind a wall made of obscure theory, social anxiety and plenty of confusing language. The latter, in and of itself, takes some time and patience to whittle down. People have asked me repeatedly what this support vector machine I was talking about all the time looked like, and were disappointed to hear it was not an actual machine with cranks and levers, just an algorithm. And the joke is not really on them, it’s largely on us. And so is the duty to make ourselves intelligible.

Prelude

I don’t think there’s been a Presidential election as controversial as Trump’s in recent history. Certainly I cannot remember any recent President having aroused the same sort of fervent reactions from supporters and opponents alike. As a quintessentially apolitical person, that struck me as the kind of odd that attracts data scientists like flies. And so, about a year ago, amidst moving stacks of boxes into my new office, I thought about modelling the outcome of the US elections.

It was a big gamble, and it was a game for a David with his sling. Here I was, with a limited (at best) understanding of the American political system, not much access to private polls the way major media and their court political scientists have, and generally having to rely on my own means to do it. I had no illusions about the chances.

After the first debate, I tweeted this:

Also, as so many asked: post debate indicators included, only 1 of over 200 ensemble models predict a HRC win. Most are strongly Trump win.

– Chris (@DoodlingData), September 28, 2016

To recall, this was a month and a half ago, and chances for Trump looked dim. He was assailed from a dozen sides. He was embroiled in what looked at the time like the largest mass accusation of sexual misconduct ever levelled against a candidate. He had, as many right and left were keen on pointing out, “no ground game”, polling unanimously went against him and I was fairly sure dinner on 10 November at our home would include crow.

But then, I had precious little to lose. I was never part of the political pundits’ cocoon, nor did I ever have a wish to be so. There’s only so much you can offer a man in consideration of a complete commonsensectomy. I do, however, enjoy playing with numbers – even if it’s a Hail Mary pass of predicting a turbulent, crazy election.

I’m not alone with that – these days, the average voter is assailed by a plethora of opinions, quantifications, pontifications and other -fications about the vote. It’s difficult to make sense of most of it. Some speak of their models downright with the same reverence one might once have invoked the name of the Pythiae of the Delphic Oracle. Others brashly assert that ‘math says’ one or other party has ‘already won’ the elections, a month ahead. And I would entirely forgive anyone who were to think that we are, all in all, a bunch of charlatans with slightly more high-tech dowsing rods and flashier crystal balls.

Like every data scientist, I’ve been asked a few times what I ‘really’ do. Do I wear a lab coat? I work in a ‘lab’, after all, so many deduced I would be some sort of experimental scientist. Or am I the Moneyball dude? Or Nate Silver?

Thankfully, none of those is true. I hate working in the traditional experimental science lab setting (it’s too crowded and loud for my tastes), I don’t wear a lab coat (except as a joke at the expense of one of my long-term harassers), I don’t know anything about baseball statistics and, thanks be to God, I am not Nate Silver.

I am, however, in the business of predicting the future. Which sounds very much like theorising about spaceships and hoverboards, but is in fact quite a bit narrower. You see, I’m a data scientist specialising in several fields of working with data, one of which is ‘predictive analytics’ (PA). PA emerged from combinatorics (glorified dice throwing), statistics (lies, damned lies and ~) and some other parts of math (linear algebra, topology, etc.) and altogether aims to look at the past and find features that might help predicting the future. Over the last few years, this field has experienced an absolute explosion, thanks to a concept called machine learning (ML).

ML is another of those notions that evokes more passionate fear than understanding. In fact, when I explained to a kindly old lady with an abundance of curiosity that I worked in machine learning, she asked me what kind of machines I was teaching, and what I was teaching them – and whether I had taught children before. The reality is, we don’t sit around and read Moby Dick to our computers. Nor is ML some magic step towards artificial intelligence, like Cortana ingesting the entire Forerunner archives in Halo. No, machine learning is actually quite simple: it’s the art and science of creating applications that, at least when they work well, perform better each time than the time before.

It is high art and hard science. Most of modern ML is unintelligible without very solid mathematical foundations, and yet knowledge has never really been able to substitute for experience and a flair for constructing, applying and chaining mathematical methods to the point of accomplishing the best, most accurate result.

Wait, I haven’t talked about results yet! In machine learning, we have two kinds of ‘result’. We have processes we call ‘supervised learning’, where we give the computer a pattern and expect it to keep applying it. For instance, we give it a set (known in this context as the training set) of heart rhythm (ECG) tracings, and tell it which ones are fine and which ones are pathological. We then expect the computer to accurately label any heart rhythm we give to it.

There is also another realm of machine learning, called ‘unsupervised learning’. In unsupervised learning, we let the computer find the similarities and connections it wants to. One example would be giving the computer the same set of heart traces. It would then return what we call a ‘clustering’ – a group of heartbeats on one hand that are fine, and the pathological heartbeats on the other. We are somewhat less concerned with this type of machine learning. Electoral prediction is pretty much a straightforward supervised learning task, although there are interesting addenda that one can indeed do by leveraging certain unsupervised techniques. For instance, groups of people designated by certain characteristics might vote together, and a supervised model might be ‘told’ that a given number of people have to vote ‘as a block’.

These results are what we call ‘models’.
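
To make the two flavours concrete, here is the heartbeat example in miniature, with scikit-learn standing in for fancier tooling and randomly generated numbers standing in for real ECG features – purely to show the difference, not to diagnose anything:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))       # 200 'tracings', 5 made-up features each
y = (X[:, 0] > 0).astype(int)       # labels: 0 = fine, 1 = pathological

# Supervised: we hand over the labels and ask the model to reproduce the pattern.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:3]))

# Unsupervised: we withhold the labels and ask for a clustering instead.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)[:3])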

On models

Ever since Nate Silver allegedly predicted the Obama win, there has been a bit of a mystery-and-no-science-theatre around models, and how they work. Quite simply, a model is a function, like any other. You feed it source variables, it spits out a target variable. Like your washing machine:

f(C_d, W, E_{el}, P_w) = (C_c)

That is, put in dirty clothes (C_d ), water (W ), electricity (E_{el} ) and washing powder (P_w ), get clean clothes (C_c ) as a result. Simple, no?

The only reason why a model is a little different is that it is, or is supposed to be, based on the relationship between some real entities on each side of the equality, so that if we know what’s on the left side (generally easy-to-measure things), we can get what’s on the right side. And normally, models were developed in some way by reference to data where we do have both sides of the equation. An example of this is Henssge’s nomogram – a nomogram being a visual representation of certain physical relationships. That particular model was developed from hundreds, if not thousands, of (get your retching bag ready) butthole temperature measurements of dead bodies where the time of death actually was known. As I’m certain you know, when you die, you slowly assume room temperature. There are a million factors that influence this, and calculating the time since death from all of them could certainly break a supercomputer. And it would be accurate, but not much more accurate than Henssge’s method. Turns out, as a gentleman called Claus Henssge discovered, three and a half factors are pretty much enough to estimate the time since death with reasonable accuracy: the ambient temperature, the aforementioned butthole temperature, the decedent’s body weight, and a corrective factor to take account of the decedent’s state of nakedness. Those factors altogether give you 95% or so accuracy – which is pretty good.

The Henssge nomogram illustrates two features of every model:

  1. They’re all based on past or known data.
  2. They’re all, to an extent, simplifications.

Now, traditionally, a model used to be built by people who reasoned deductively, then did some inductive stuff such as testing to assuage the more scientifically obsessed. And so it was with the Henssge nomogram, where data was collected, but everyone had a pretty decent hunch that time of death will correlate best with body weight and the difference between ambient and core (= rectal) temperature. That’s because heat transfer from a body to its environment generally depends on the temperature differential and the area of the surface of exchange:

Q = hA(T_a - T_b)

where Q is heat transferred per unit time, h is the heat transfer coefficient, A is the area of the object and T_a - T_b is the temperature difference. So from that, it then follows that T_a and T_b can be measured, h is relatively constant for humans (most humans are composed of the same substance) and A can be relatively well extrapolated from body weight.2
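
Just to make footnote 2’s simplification concrete, here it is in a few lines of Python. The density figure is an assumption (roughly that of water), and the point is merely that body weight can stand in for surface area:

import math

def surface_area_from_weight(mass_kg, density=1000.0):   # density in kg/m^3
    # Treat the body as a homogeneous sphere: get the radius from the mass...
    radius = ((3.0 * mass_kg) / (4.0 * math.pi * density)) ** (1.0 / 3.0)
    # ...and the surface area from the radius.
    return 4.0 * math.pi * radius ** 2

print(surface_area_from_weight(80))   # roughly 0.9 m^2 for an 80 kg 'sphere'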

The entire story of modelling can be understood to focus on one thing, and do it really well: based on a data set (the training set), it creates a model that seeks to describe the essence of the relationship between the variables involved in the training set. The simplest such relationships are linear: for instance, if the training set consists of {number of hamburgers ordered; amount paid}, the model will be a straight line – for every increase on the hamburger axis, there will be the same increase on the amount paid axis. Some models are more complex – when they can no longer be described as a combination of straight lines, they’re called ‘nonlinear’. And eventually, they get way too complex to be adequately plotted. That is often the consequence of the training dataset consisting not merely of two fields (number of hamburgers and the target variable, i.e. price), but a whole list of other fields. These fields are called elements of the feature vector, and when there’s a lot of them, we speak of a high-dimensional dataset. The idea of a ‘higher dimension’ might sound mysterious, but true to fashion, mathematicians can make it sound boring. In data science, we regularly throw around data sets of several hundred or thousand dimensions or even more – so many, in fact, that there are whole techniques intended to reduce this number to something more manageable.
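
To make the hamburger example concrete, here it is as a fitted straight line, with made-up prices:

import numpy as np

hamburgers = np.array([1, 2, 3, 4, 5])               # training set: orders...
paid       = np.array([3.5, 7.0, 10.5, 14.0, 17.5])  # ...and what was paid

slope, intercept = np.polyfit(hamburgers, paid, deg=1)   # fit a straight line
print(slope, intercept)                 # about 3.5 per hamburger, zero intercept
print(slope * 7 + intercept)            # the model's prediction for 7 hamburgers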

But just how do we get our models?

Building our model

In principle, you can sit down, think about a process and create a model based on some abstract simplifications and some other relationships you are aware of. That’s how the Henssge model was born – you need no experimental data to figure out that heat loss will depend on the radiating area, the temperature difference to ‘radiate away’ and the time the body has been left to assume room temperature: these things more or less follow from an understanding of how physics happens to work. You can then use data to verify or disprove your model, and if all goes well, you will get a result in the end.

There is another way of building models, however. You can feed a computer a lot of data, and have it come up with whatever representation gives the best result. This is known as machine learning, and is generally a bigger field than I could even cursorily survey here. It comes in two flavours – unsupervised ML, in which we let the computer loose on some data and hope it turns out ok, and supervised ML, in which we give the computer a very clear indication of what appropriate outputs are for given input values. We’re going to be concerned with the latter. The general idea of supervised ML is as follows.

  1. Give the algorithm a lot of value pairs from both sides of the function – that is, show the algorithm what comes out given a particular input. The inputs, and sometimes even the outputs, may be high-dimensional – in fact, in the field I deal with normally, known as time series analytics, thousands of dimensions of data are pretty frequently encountered. This data set is known as the training set.
  2. Look at what the algorithm came up with. Start feeding it some more data to which you know the ‘correct’ output, so to speak, data which you haven’t used as part of the training set. Examine how well your model is doing predicting the test set.
  3. Tweak model parameters until you get closer to the desired accuracy. Often, an algorithm called gradient descent is used, which is basically a fancy way of saying ‘look at whether changing a model parameter in a particular direction by \mu makes the model perform better, and if so, keep doing it until it doesn’t’. \mu is known as the ‘learning rate’, and determines on one hand how fast the model will get to a best possible approximation of the result (how fast the model will converge), and on the other, how close it will be to the true best settings. Finding a good learning rate is more a dark art than science, but something people eventually get better at with practice (a toy example follows right after this list).
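
Here is that third step in its most bare-bones form: gradient descent on a deliberately silly one-parameter ‘model’ whose ideal value of 3 is made up, nothing like the election models themselves:

def loss(w):
    return (w - 3.0) ** 2        # pretend the best possible parameter value is 3

def gradient(w):
    return 2.0 * (w - 3.0)       # derivative of the loss with respect to w

w, mu = 0.0, 0.1                 # initial guess and learning rate
for _ in range(100):
    w -= mu * gradient(w)        # nudge w in the direction that lowers the loss

print(w, loss(w))                # w ends up very close to 3, the loss close to 0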

In this case, I was using a modelling approach called a backpropagation neural network. An artificial neural network (ANN) is basically a bunch of nodes, known as neurons, connected to each other. Each node runs a function on the input value and spits it out to its output. An ANN has these neurons arranged in layers, and generally nodes feed in one direction (‘forward’), i.e. from one layer to the next, and never among nodes in the same layer.

Neurons are connected by ‘synapses’ that are basically weighted connections (weighting simply means multiplying each input to a neuron by a value that emphasises its significance, so that these values all add up to 1). The weights are the ‘secret sauce’ to this entire algorithm. For instance, you may have an ANN set to recognise handwritten digits. The layers would get increasingly complex. So one node may respond to whether the digit has a straight vertical line. The output node for the digit 1 would weight the output from this node quite strongly, while the output node for 8 would weight it very weakly. Now, it’s possible to pick the functions and determine the weights manually, but there’s something better – an algorithm called backpropagation that, basically, keeps adjusting weights using gradient descent (as described above) to reach an optimal weighting, i.e. one that’s most likely to return accurate values.
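
To make backpropagation slightly less abstract, here is a toy network – two inputs, one hidden layer, one output – trained by backpropagation on the XOR problem, the classic pattern a single-layer perceptron cannot learn. Layer sizes, learning rate and iteration count are arbitrary choices, and this has nothing to do with the election models beyond the shared technique:

import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR of the two inputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

mu = 0.5                                             # learning rate
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                         # forward pass
    out = sigmoid(h @ W2 + b2)
    # backward pass: push the error back through the network, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= mu * (h.T @ d_out)
    b2 -= mu * d_out.sum(axis=0, keepdims=True)
    W1 -= mu * (X.T @ d_h)
    b1 -= mu * d_h.sum(axis=0, keepdims=True)

print(out.round(2))    # should end up close to [[0], [1], [1], [0]]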

My main premise for creating the models was threefold.

  1. No polling. None at all. The explanation for that is twofold. First, I am not a political scientist. I don’t understand polls as well as I ought to, and I don’t trust things I don’t understand completely (and neither should you!). Most of all, though, I worry that polls are easy to influence. I witnessed the 1994 Hungarian elections, where the incumbent right-wing party won all polls and exit-poll surveys by a mile… right up until eventually the post-communists won the actual elections. How far that was a stolen election is a different question: what matters is that ever since, I have no faith at all in polling, and that hasn’t gotten better lately. Especially in the current elections, a stigma has developed around voting Trump – people have been beaten up, verbally assaulted and professionally ostracised for it. Clearly asking them politely will not give you the truth.
  2. No prejudice for or against particular indicators. The models were generated from a vast pool of indicators, and, to put it quite simply, a machine was created that looked for correlations between electoral results and various input indicators. I’m pretty sure many, even perhaps most, of those correlations were spurious. At the same time, spurious correlations don’t hurt a predictive model if you’re not intending to use the model for anything other than prediction.
  3. Assumed ergodicity. Ergodicity, quite simply, means that the average of an indicator over time is the same as the average of an indicator over space. To give you an example:3 assume you’re interested in the ‘average price’ of shoes. You may either spend a day visiting every shoe store and calculate the average of their prices (average over space), or you may swing past the window of the shoe store on your way to work and look at the prices every day for a year or so. If the price of shoes is ergodic, then the two averages will be the same. I thus made a pretty big and almost certainly false assumption, namely that the effect of certain indicators on individual Senate and House races is the same as on the Presidency. As said, while this is almost certainly false, it did make the model a little more accurate and it was the best model I could use for things for which I do not have a long history of measurements, such as Twitter prevalence.

One added twist was the use of cohort models. I did not want to pick one model to stake all on – I wanted to generate groups (cohorts) of 200 models each, wherein each would be somewhat differently tuned. Importantly, I did not want to create a ‘superteam’ of the best 200 models generated in different runs. Rather, I wanted to select the group of 200 models that is most likely to give a correct overall prediction, i.e. in which the actual outcome would most likely be the outcome predicted by the majority of the models. This allows for picking models where we know they will, ultimately, act together as an effective ensemble, and models will ‘balance out’ each other.
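
The mechanics of the cohort’s verdict are nothing more sophisticated than a majority vote. A minimal sketch, with trivial stand-in ‘models’ instead of the real trained networks:

from collections import Counter

def cohort_verdict(models, features):
    # Each model casts one vote; the cohort's verdict is the most common one.
    votes = Counter(model(features) for model in models)
    return votes.most_common(1)[0][0], votes

# Hypothetical stand-ins: each 'model' weighs a single indicator differently.
models = [lambda f, b=bias: "R" if f["indicator"] + b > 0 else "D"
          for bias in (-0.2, -0.1, 0.05, 0.1, 0.3)]

print(cohort_verdict(models, {"indicator": 0.08}))   # ('R', Counter({'R': 3, 'D': 2}))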

A supercohort of 1,000 cohorts of 200 models each was trained on electoral data since 1900. Because of the ergodicity assumption (as detailed above), the models included non-Presidential elections, but anything ‘learned’ from such elections was penalised. This is a decent compromise if we consider the need for ergodicity. For example, I looked at the (normalised fraction4 of the) two candidates’ media appearances and their volume of bought advertising, but mass media hasn’t always been around for the last 116 years in its current form. So I looked at the effect that this had on smaller elections. All variables were weighted to ‘decay’ depending on their age.

Tuning of model hyperparameters and deep architecture was attempted in two ways. I initially began with a classical genetic algorithm for tuning hyperparameters and architecture, aware that this was less efficient than gradient descent based algorithms but more likely to give you a diversity of hyperparameters and far more suited to multi-objective systems. Compared with gradient descent algorithms, genetic algorithms took longer but performed better. This was an acceptable tradeoff to me, so I eventually adapted a multi-objective genetic algorithm implementation, drawing on the Python DEAP package and some (ok, a LOT of) custom code. Curiously (or maybe not – I recently learned this was a ‘well known’ finding – apparently not as well known after all!), the best models came out of ‘split training’: the convolutional layers and the overall structure were genetically optimised, while the non-convolutional layers were trained using backpropagation.
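
For those who have not met a genetic algorithm before, the gist fits in a few lines. This toy tunes a single made-up hyperparameter; the real genomes encoded whole architectures and used DEAP rather than this hand-rolled sketch:

import random

random.seed(0)

def fitness(learning_rate):
    # Stand-in objective: pretend models score best at a learning rate of 0.05.
    return -abs(learning_rate - 0.05)

population = [random.uniform(0.001, 1.0) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)        # selection: keep the fitter half
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)              # crossover: blend two parents
        child = (a + b) / 2 + random.gauss(0, 0.01)   # mutation: add a little noise
        children.append(min(max(child, 0.001), 1.0))
    population = parents + children

print(max(population, key=fitness))                   # ends up close to 0.05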

Another twist was the use of ‘time contingent parameters’. That’s a fancy way of saying data that’s not available ab initio. An example of that would be post-debate changes of web search volumes for certain keywords associated with each candidate. Trivially, that information does not exist until a week or so post-debate. These models were therefore trained in ‘variants’. So if a particular model had information missing, it defaulted to an equally weighted model without the nodes that would have required that information. Much as this was a hacky solution, it was acceptable to me as I knew that by late October, every model would have complete information.

I wrote a custom model runner in Python with an easy-as-heck output interface – I was not concerned with creating pretty, I was concerned with creating good. The runner first pulled all data it required once again, diffed it against the previous version, reran feature extractors where there was a change, then ran the models over the feature vectors. Outputs went into CSV files, plus a simple console summary that looked like this (welcome to 1983):

CVoncsefalvay @ orinoco ~/Developer/mfarm/election2016 $ mrun --all

< lots of miscellaneous debug outputs go here >

[13:01:06.465 02 Nov 2016 +0000] OK DONE.
[13:01:06.590 02 Nov 2016 +0000] R 167; D 32; DNC 1
[13:01:06.630 02 Nov 2016 +0000] Output written to outputs/021301NOV2016.mconfdef.csv

That’s basically saying that (after spending the best part of a day scoring through all the models) 167 models were predicting a Republican victory, 32 a Democratic victory and one model crashed, did not converge somewhere or otherwise broke. The CSV output file would then give further data about each submodel, such as predicted turnout, predictions of the electoral college and popular vote, etc. The model was run with a tolerance of 1%, i.e. up to two models can break and the model would still be acceptable. Any more than that, and a rerun would be initiated automatically. One cool thing: this was my first application using the Twilio API to send me messages keeping me up to date on the model. Yes, I know, the 1990s called, they want SMS messaging back.

By the end of the week, the first models had phoned back. I was surprised: was Trump really that far ahead? The polls had slammed him, he seemed hopeless, he was not exactly anyone’s idea of the next George Washington and he ran against more money, more media and more political capital. I had to spend the best part of a weekend confirming the models, going over them line by line, doing tests and cross-validation, until I was willing to trust my models somewhat.

But part of our story in science is to believe evidence with the same fervour we disbelieve assertions without it. And so, after being unable to find the much expected error in my code and the models, I concluded they must be right.

Living with the models

The unique exhilaration, but also the most unnerving feature, of creating these models was how different they are from my day-to-day fare. When I write predictive models, the approach is, and remains, quintessentially iterative. We build models, we try them, and iteratively improve on them. It is dangerous to fall in love with one’s own models – today’s hero is in all likelihood destined for tomorrow’s dungheap, with another, better model taking its place – until that model, too, is discarded for a better approach, and so on. We do this because of the understanding that reality is a harsh taskmaster, and it always has some surprises in store for us. This is not to say that data scientists build and sell half-assed, flawed products – quite the opposite: we give you the best possible insight we can with the information we’ve got. But how reality pans out will give us more new information, and we can work with that to move another step closer to the elusive truth of predicting the future. And one day, maybe, we’ll get there. But every day, if we play the game well, we get closer.

Predicting a one-time event is different. You don’t get pointers as to whether you are on the right track or not. There are no subtle indications of whether the model is going to work or not. I have rarely had a problem sticking by a model I built that I knew was correct, because I knew every day that new information would either confirm or improve my model – and after all, turning out the best possible model is the important part, not getting it in one shot, right? It was unnerving to have a model built on fairly experimental techniques, with the world predicting a Clinton win with a shocking unanimity. There were extremely few who predicted a Trump win, and we all were at risk of being labelled either partisans for Trump (a rather hilarious accusation when levelled at me!) or just plain crackpots. So I pledged not to discuss the technical details of my models unless and until the elections confirmed they were right.

So it came to pass that it was me, the almost apolitical one, rather than my extremely clever and politically very passionate wife, who stayed up until the early hours of the morning, watching the results pour in. With CNN, Fox and Twitter over three screens, refreshing all the time, I watched as Trump surged ahead early and maintained a steady lead.

My model was right.

Coda

It’s the 16th of November today. It’s been almost a week since the elections, and America is slowly coming to terms with the unexpected. It is a long process, it is a traumatic process, and the polling and ‘quantitative social science’ professions are, to an extent, responsible for this. There was all manner of sloppiness, multiplication of received wisdom, ‘models’ that in fact were thin confirmations of the author’s prejudices in mathematical terms, and a great deal of stupidity. That does sound harsh, but there’s no better way really to describe articles that, weeks before the election, stated without a shade of doubt that we needed to ‘move on’, for Clinton had already won. I wonder if Mr Frischling had a good family recipe for crow? And on the note of the election night menu, he may exchange tips with Dr Sam Wang, whom Wired declared 2016’s election data hero in an incredibly complimentary puff piece, apparently more because the author, Jeff Nesbit, hoped Wang was right than because of any indication of analytical superiority.

The fact is, the polling profession failed America and has no real reason to continue to exist. The only thing it has done is make campaigns more expensive and add to the pay-to-play of American politics. I don’t really see myself crying salt tears at the polling profession’s funeral.

The jury is still out on the ‘quantitative social sciences’, but it’s not looking good. The ideological homogeneity in social science faculties worldwide, but especially in America, has contributed to the kind of disaster that happens when people live in a bubble. As scientists, we should never forget to sanity check our conclusions against our experiences, and intentionally cultivate the most diverse circle of friends we can to get as many little slivers of the human experience as we can. When one’s entire milieu consists of pro-Clinton academics, it’s hard to even entertain doubt about who is going to win – the availability heuristic is a strong and formidable adversary, and the only way to beat it is by recruiting a wide array of familiar people, faces, notions, ideas and experiences to rely on.

As I write this, I have an inch-thick pile of papers next to me: calculations, printouts, images, drafts of a longer academic paper that explains the technical side of all this in detail. Over the last few days, I’ve fielded my share of calls from the media – which was somewhat flattering, but this is not my field. I’m just an amateur who might have gotten very lucky – or maybe not.

Time will tell.

In a few months, I will once again be sharing a conference room with my academic brethren. We will discuss, theorize, ideate and exchange views; a long, vivid conversation written for a 500-voice chorus, with all the beauty and passion and dizzying heights and tumbling downs of Tallis’s Spem in Alium. The election has featured prominently in those conversations last time, and no doubt that will be the case again. Many are, at least from an academic perspective, energised by what happened. Science is the only game where you actually want to lose from time to time. You want to be proven wrong, you want to see you don’t know anything, you want to be miles off, because that means there is still something else to discover, still some secrets this Creation conceals from our sight with sleights of hand and blurry mirrors. And so, perhaps the real winners are not those few, those merry few, who got it right this time. The real winners are those who, led by their curiosity about their failure to predict this election, find new solutions, new answers and, often enough, new puzzles.

That’s not a consolation prize. That’s how science works.

And while it’s cool to have predicted the election results more or less correctly, the real adventure is not the destination. The real adventure is the journey, and I hope that I have been able to grant you a little insight into this adventure some of us are on every hour of every day.

Give your Twitter account a memory wipe… for free.

The other day, my wife decided to get rid of all the tweets on one of her Twitter accounts while, of course, keeping all the followers. But bulk deleting tweets is far from easy. There are, fortunately, plenty of tools that offer to bulk delete your tweets… for a fee, of course. One had a freemium model that allowed three free deletions per day. I quickly calculated that at that rate – barely over a thousand tweets a year – it would have taken my wife something on the order of twelve years to get rid of all her tweets. No, seriously. That’s silly. I can write some Python code to do that faster, can’t I?

Turns out you can. First, of course, you’ll need to create a Twitter app under the account you wish to wipe and generate an access token and secret, since we’ll be performing actions on behalf of that account.

import tweepy
import time

# Fill these in with the keys and tokens from your Twitter app.
CONSUMER_KEY = "<your consumer key>"
CONSUMER_SECRET = "<your consumer secret>"
ACCESS_TOKEN = "<your access token>"
ACCESS_TOKEN_SECRET = "<your access token secret>"
SCREEN_NAME = "<your screen name, without the @>"

Time to use tweepy’s OAuth handler to connect to the Twitter API:

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

api = tweepy.API(auth)
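
Before letting anything destructive loose, it’s worth confirming that the handshake actually worked. A minimal check, assuming tweepy 3.x, where verify_credentials() returns the authenticated user (or False if the keys are wrong):

# Quick sanity check: are the keys and tokens valid?
user = api.verify_credentials()
if user:
    print(u"Authenticated as @{0}".format(user.screen_name))
else:
    print(u"Authentication failed - check your keys and tokens.")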

Now, we could technically write a far more sophisticated script, one that inspects the rate-limit headers Twitter returns to work out exactly when the API throttle will cut us off… but we’ll take the easy, brutish route of simply holding off for a whole hour whenever we do get cut off. At 350 requests per hour, each request retrieving 100 tweets for deletion, we can clear a 35,000-tweet account in a single hour with no waiting time, which is fairly decent.
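
Incidentally, if you’d rather not babysit the throttle at all, tweepy can do the waiting for you. A sketch, assuming tweepy 3.x, whose API constructor accepts the wait_on_rate_limit flags:

# Alternative: have tweepy sleep automatically whenever Twitter reports
# that we've hit the rate limit, and print a notice when it does so.
api = tweepy.API(auth,
                 wait_on_rate_limit=True,
                 wait_on_rate_limit_notify=True)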

The approach will be simple: we ask for batches of 100 tweets and call the .destroy() method on each of them – tweepy conveniently binds that method to every tweet object it returns. Errors are handled as follows. If we hit a RateLimitError – tweepy’s way of telling us that the rate limit has been exceeded – we hold off for an hour (we could read the exact reset time from the response headers, but this is much simpler… and we’ve got time!). If Twitter cannot find the status, the tweet is already gone – this happens occasionally, especially when someone is deleting tweets manually at the same time – so we simply skip it. Any other error makes us give up. And once a batch comes back empty, there is nothing left to delete and we stop.

def destroy():
    while True:
        # Fetch the next batch of (up to) 100 tweets from the timeline.
        q = api.user_timeline(screen_name=SCREEN_NAME,
                              count=100)
        if not q:
            # Nothing left on the timeline - we're done.
            break
        for each in q:
            try:
                each.destroy()
            except tweepy.RateLimitError as e:
                print(u"Rate limit exceeded: {0}".format(e))
                time.sleep(3600)
            except tweepy.TweepError as e:
                # The tweet is already gone (e.g. deleted manually
                # in parallel) - just skip it.
                if "No status found with that ID." in str(e):
                    continue
            except Exception as e:
                print(u"Encountered undefined error: {0}".format(e))
                return

Finally, we’ll make sure destroy() is called when the script is run directly:

if __name__ == '__main__':
    destroy()
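
If you’d like a rough progress check afterwards, something along these lines should work on tweepy 3.x (a quick sketch; the count Twitter reports can lag behind actual deletions for a while):

# Optional: ask Twitter how many tweets it still thinks the account has.
print(api.me().statuses_count)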

Happy destruction!