Automagic epi curve plotting: part I

As of 24 May 2018, the underlying data schema of the Github repo from which the epi curve plotter draws its data has changed. Therefore, a lot of the code had to be adjusted. The current code can be found here on Github. This also plots a classical epi curve.

One of the best resources during the 2013-16 West African Ebola outbreak was Caitlin Rivers' Github repo, which was probably one of the best ways to stay up to date on the numbers. For the current outbreak, she has also set up a Github repo, with frequent updates drawn straight from the WHO's DON data and the DRC Ministry of Public Health (MdlS) mailing list.1 Using R, I have set up a simple script that I only have to run whenever I want a pre-defined visualisation of the current situation. I usually do this on a remote RStudio server, which makes it quite easy to generate the plots on the fly.

Obtaining the most recent data

Using the following little script, I grab the latest from the ebola-drc Github repo:

# Fetch most recent DRC data.
library(magrittr)
library(curl)
library(readr)
library(dplyr)

current_drc_data <- Sys.time() %>%
  format("%d%H%M%S%b%Y") %>%
  paste("raw_data/drc/", "drc-", ., ".csv", sep = "") %T>%
  curl_fetch_disk("https://raw.githubusercontent.com/cmrivers/ebola_drc/master/drc/data.csv", .) %>%
  read_csv()

This uses curl (the R implementation) to fetch the most recent data and save it as a timestamped2 file in the data folder I set up just for that purpose.3 Simply sourcing this script (source("fetch_drc_data.R")) should then load the current DRC dataset into the environment.4

Data munging

We need to do a little data munging. First, we melt down the data frame using reshape2's melt function. Melting takes 'wide' data and converts it into 'long' data – for example, in our case, the original data had a row for each daily report for each health zone, and a column for each combination of confirmed/probable/suspected status and cases/deaths. Melting the data frame down creates a variable type column (say, confirmed_deaths) and a value column (giving the value, e.g. 3). Using lubridate,5 the dates are parsed, and the values are stored in a numeric format.

library(magrittr)
library(reshape2)
library(lubridate)

current_drc_data %<>% melt(value.name = "value", measure.vars = c("confirmed_cases", "confirmed_deaths", "probable_cases", "probable_deaths", "suspect_cases", "suspect_deaths", "ruled_out"))
current_drc_data$event_date <- lubridate::ymd(current_drc_data$event_date)
current_drc_data$report_date <- lubridate::ymd(current_drc_data$report_date)
current_drc_data$value <- as.numeric(current_drc_data$value)

Next, we drop ruled_out cases, as they play no significant role for the current visualisation.

current_drc_data <- current_drc_data[current_drc_data$variable != "ruled_out",]

We also need to split the type labels into two different columns, so as to allow plotting them as a matrix. Currently, the data type label (the variable column) holds both the certainty status (confirmed, suspected or probable) and the type of indicator (cases vs deaths) in a single variable, separated by an underscore. We'll use stringr's str_split_fixed to split the variable names at the underscore and store the two parts in separate columns, suspicion and mm, the latter denoting mortality/morbidity status.

library(stringr)

current_drc_data %<>% cbind(., str_split_fixed(use_series(., variable), "_", 2)) %>% 
                 subset(select = -c(variable)) %>% 
                 set_colnames(c("event_date", "report_date", "health_zone", "value", "suspicion", "mm"))

Let’s filter out the health zones that are being observed but have no relevant data for us yet:

relevant_health_zones <- current_drc_data %>% 
                         subset(select = c("health_zone", "value")) %>% 
                         group_by(health_zone) %>% 
                         summarise(totals = sum(value, na.rm=TRUE)) %>% 
                         dplyr::filter(totals > 0) %>% 
                         use_series(health_zone)

This gives us a vector of all health zones that are currently reporting cases. We can filter our DRC data for that:

current_drc_data %<>% dplyr::filter(health_zone %in% relevant_health_zones)

This whittles down our table by a few rows. We might also want to create a fake health zone that summarises all other health zones' respective data:

totals <- current_drc_data %>% group_by(event_date, report_date, suspicion, mm) %>%
                               summarise(value = sum(value), health_zone = as.factor("DRC total"))

# Reorder totals to match the core dataset
totals <- totals[,c(1,2,6,5,3,4)]

Finally, we bind these together to a single data frame:

current_drc_data %<>% rbind.data.frame(totals)

Visualising it!

Of course, all this was in pursuance of cranking out a nice visualisation. For this, we need to do a couple of things, including first ensuring that “DRC total” is treated separately and comes last:

regions <- current_drc_data %>% use_series(health_zone) %>% unique()
regions <- regions[regions != "DRC total"]
regions %<>% c("DRC total")

current_drc_data$health_zone_f <- factor(current_drc_data$health_zone, levels = regions)

I normally start out by declaring the colour scheme I will be using. In general, I tend to use the same few colour schemes, which I keep in a few gists. For simple plots, I prefer to use no more than five colours:

colour_scheme <- c(white = rgb(238, 238, 238, maxColorValue = 255),
                   light_primary = rgb(236, 231, 216, maxColorValue = 255),
                   dark_primary = rgb(127, 112, 114, maxColorValue = 255),
                   accent_red = rgb(240, 97, 114, maxColorValue = 255),
                   accent_blue = rgb(69, 82, 98, maxColorValue = 255))

With that sorted, I can invoke the ggplot method, storing the plot in an object, p. This is so as to facilitate later retrieval by the ggsave method.

library(ggplot2)
library(grid)   # for unit() in the theme below

p <- ggplot(current_drc_data, aes(x=event_date, y=value)) +

  # Title and subtitle
  ggtitle(paste("Daily EBOV status", "DRC", Sys.Date(), sep=" - ")) +
  labs(subtitle = "(c) Chris von Csefalvay/CBRD (cbrd.co) - @chrisvcsefalvay") +
  
  # This facets the plot based on the factor vector we created earlier
  facet_grid(health_zone_f ~ suspicion) +
  geom_path(aes(group = mm, colour = mm, alpha = mm), na.rm = TRUE) +
  geom_point(aes(colour = mm, alpha = mm)) +

  # Axis labels
  ylab("Cases") +
  xlab("Date") +

  # The x-axis runs from the first notified case to the present day
  scale_x_date(limits = c(as.Date("2018-05-08"), Sys.Date()),
               date_breaks = "7 days", date_labels = "%m/%d") +

  # Because often there's an overlap and cases that die on the day of registration
  # tend to count here as well, some opacity is useful.
  scale_alpha_manual(values = c("cases" = 0.5, "deaths" = 0.8)) +
  scale_colour_manual(values = c("cases" = colour_scheme[["accent_blue"]], "deaths" = colour_scheme[["accent_red"]])) +

  # Ordinarily, I have these derive from a theme package, but they're very good
  # defaults and starting points
  theme(panel.spacing.y = unit(0.6, "lines"), 
        panel.spacing.x = unit(1, "lines"),
        plot.title = element_text(colour = colour_scheme[["accent_blue"]]),
        plot.subtitle = element_text(colour = colour_scheme[["accent_blue"]]),
        axis.line = element_line(colour = colour_scheme[["dark_primary"]]),
        panel.background = element_rect(fill = colour_scheme[["white"]]),
        panel.grid.major = element_line(colour = colour_scheme[["light_primary"]]),
        panel.grid.minor = element_line(colour = colour_scheme[["light_primary"]]),
        strip.background = element_rect(fill = colour_scheme[["accent_blue"]]),
        strip.text = element_text(colour = colour_scheme[["light_primary"]])
  )

DRC EBOV outbreak, 22 May 2018. The data has some significant gaps, owing to monitoring and recording issues, but some clear trends already emerge from this simple illustration.

The end result is a fairly appealing plot, although if the epidemic goes on, one might want to consider getting rid of the point markers. All that remains is to insert an automatic call to the ggsave function to save the image:

Sys.time() %>%
  format("%d%H%M%S%b%Y") %>%
  paste("DRC-EBOV-", ., ".png", sep="") %>%
  ggsave(plot = p, device="png", path="visualisations/drc/", width = 8, height = 6)

Automation

The cronR package has a built-in cron scheduler add-in for RStudio, allowing you to manage all your code scheduling needs.

Of course, being a lazy epidemiologist, this is the kind of stuff that just has to be automated! Since I run my entire RStudio instance on a remote machine, it makes perfect sense to run this script regularly. The cronR package comes with a nice widget, which will allow you to simply schedule any task. Old-school command line users can, of course, always resort to ye olde command line based scheduling and execution. One important caveat: the context of cron execution will not necessarily be the same as that of your R project or indeed of your R user. Therefore, when you source a file or refer to paths, you may want to use fully qualified paths, i.e. /home/my_user/my_R_stuff/script.R rather than merely script.R. cronR is very good at logging when things go awry, so if the plots do not start to magically accumulate at the requisite rate, do give the log a check.
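For illustration, scheduling the whole fetch-and-plot script from the R console with cronR might look something like the sketch below. The script path, job id and time are placeholders of my own choosing, and the exact arguments should be checked against the cronR documentation for your version.

library(cronR)

# Build the command that cron will run. Use a fully qualified path --
# cron will not know about your project directory.
cmd <- cron_rscript("/home/my_user/my_R_stuff/drc_epi_curve.R")

# Schedule it to run every morning at 09:00.
cron_add(cmd, frequency = "daily", at = "09:00",
         id = "drc_epi_curve", description = "Daily DRC EBOV epi curve")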

The next step is, of course, to automate uploads to Twitter. But that’s for another day.

References

1. Disclaimer: Dr Rivers is a friend, former collaborator and someone I hold in very high esteem. I’m also from time to time trying to contribute to these repos.
2. My convention for timestamps is the military DTG convention of DDHHMMSSMONYEAR, so e.g. 7.15AM on 21 May 2018 would be 21071500MAY2018.
3. It is, technically, bad form to put the path in the file name for the curl::curl_fetch_disk() function, given that curl::curl_fetch_disk() offers the path parameter just for that. However, due to the intricacies of piping data forward, this is arguably the best way to do it so as to allow the filename to be reused for the read_csv() function, too.
4. If for whatever reason you would prefer to keep only one copy of the data around and overwrite it every time, quite simply provide a fixed filename into curl_fetch_disk().
5. As an informal convention of mine, when using the simple conversion functions of lubridate such as ymd, I tend to note the source package, hence lubridate::ymd over simply ymd.

Get your Jupyterhub box set up on Linode with a single go!

At CBRD, a lot of the research work we do is done on remote machines. For various reasons, we like being able to spin up and wind down these boxes at will, and auto-configure them at short notice according to a few standard variables. Depending on the installation options, we then end up with a perfectly set up box with all the features we want, built around R and Python, with RStudio and JupyterHub as frontends.

Where StackScripts fit in

There is a wide range of ways to configure boxes – Puppet, Chef cookbooks, Terraform, Dockerfiles, and all that – but for ease of use, we rely on simple shell scripts that can be run as Linode StackScripts.

StackScripts are an extremely convenient way to configure a single Linode with the software you need. Unlike more complex systems like Terraform, a StackScript is just a plain shell script that runs once when the Linode is first deployed, parameterised by a handful of user-defined fields you fill in at deployment time – no additional tooling required.

Basic usage

The Ares research node generator StackScript does a handful of things:

  • Updates the system and adds the CRAN repo as a source
  • Installs R and the RStudio version of choice
  • Installs Python and the JupyterHub version of choice
  • Installs an opinionated set of system-level packages (i.e. available to all users)
  • Configures ports and some other configuration items for the instance
  • Creates the root user
  • Daemonises the RStudio server and JupyterHub so that they restart automatically on failure and start automatically at reboot

When deployed on Linode from the StackScript, it allows for a wide range of configuration options, including ports for both Jupyter and RStudio, and a completely configured first user set up on both JupyterHub and RStudio. You can also configure some install settings: a 'barebones' install option allows a minimum set of packages to be installed – useful for testing, or if the desired configuration diverges from the ordinary structure – and OpenCV, deep learning tools and cartography tools can be selectively enabled or disabled, as these are not always required.

User administration for Jupyterhub and RStudio

In general, user administration is preset to use PAM, i.e. the built-in Linux authentication structure. JupyterHub has its own administration features, described here. RStudio, on the other hand, authenticates by user group membership. The two share the same usergroup, specified in the configuration (by default and convention, this is jupyter, but you can change it), and because users created by JupyterHub fall into that user group, creating users in JupyterHub automatically grants them access to RStudio. This is acceptable overall, as we tend to use both, but there might be a security concern there. If so, you can point the auth-required-user-group=$USERGROUPNAME setting in /etc/rstudio/rstudio.conf to a different usergroup.

Issues

There are some glitches that we’re trying to iron out:

  • Cartography and GIS tools glitch a little due to issues with PROJ.4.
  • GPU/CUDA support is not implemented as this is not customarily used or provided on Linodes
  • certbot and Let's Encrypt are not really supported yet, as our boxes are never directly public-facing, but you should most definitely find a way to put your server behind some form of SSL/TLS.
  • Currently, only Ubuntu 16.04 LTS is supported, as it's the most widely used version. More recent versions will be added once CRAN supports them.
New to Linodes? Linode is a great service offering a simple, uncomplicated way to set up a Linux server, just how you want it! 👍🏻 In fact, I’ve been using Linode servers for over four years, and have only good experiences with them. I keep recommending them to friends, and they’re all happy! So if this post was helpful, and you’re thinking of signing up for a Linode, would you consider doing so using my affiliate link? I run this blog as a free service and pay for hosting out of pocket, and I don’t do advertisements (and never will!) or paid reviews of things (like what, paid Tyvek reviews? “Best Ebola PPE”? Nope.) – so really, it would make me so happy if you could help out this way.

As always, do let me know how things work out for you! Feel free to leave comments and ideas below. If you’re getting value out of using this script, just let me know.

Installing RStudio Server on Debian 9

Oh, wouldn’t it be just wonderful if you could have your own RStudio installation on a server that you could then access from whatever device you currently have, including an iPad? It totally would. Except it’s sometimes far from straightforward. Here’s how to do it relatively painlessly.

Step 1: Get a server

Choose a suitable (and affordable) server, pick a location near you in the drop-down menu on the bottom, and press Add this Linode! to set up your first Linode.

I use Linode,1 and in general, their 4096 server is pretty good. Linodes can be very easily resized, so this should not be a worry.

Step 2: Set up the server

Once your Linode is up, it will turn up on your dashboard with a random name (linode1234567, typically). If you click on it, you will see your Linode is ‘Brand New’, which means you need to configure it. I usually keep them in groups depending on purpose: blog servers, various processing servers, hosts, research servers. Each of them then gets a name. Choose whatever nomenclature fits your needs best.

Give your Linode a name you can recognise it by, and assign it to a category.

Next, click on the Rebuild tab, and configure the root password, the operating system (we’ll be using Debian 9), the swap disk size (for R, it’s a good idea to set this as large as you can) and finally, set the deployment disk size. I usually set that for 66%.

Step 3: Install R

The Remote Access field shows the SSH access command, complete with the root user, as well as public IPs, default gateways and other networking stuff.

SSH into your Linode and log in as root. You will find your Linode’s IP address and other interesting factoids about it under the Remote access tab. I have obscured some information as I don’t want you scallywags messing around with my server, but the IP address is displayed both in the SSH link (the one that goes ssh root@<your IP> or something along these lines) and below under public IPs.

Using vim, open /etc/apt/sources.list and add the following source:

# CRAN server for Debian stretch (R and related stuff)
deb http://cran.rstudio.com/bin/linux/debian stretch-cran34/

Save the file, and next install dirmngr (sudo apt-get install dirmngr). Then, add the requisite GPG key for CRAN, update the repository and install r-base:

sudo apt-key adv --keyserver keys.gnupg.net --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF'

sudo apt update
sudo apt install r-base

At this point, you can enter R to test if your R installation works, and try to install a package, like ggplot2. It should work, but may ask you to select an installation server. If all is well, this is what you should be getting:

root@localhost:~# R

R version 3.4.4 (2018-03-15) -- "Someone to Lean On"
Copyright (C) 2018 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

In the prompt, enter install.packages("ggplot2"). Select a server if requested. Otherwise, watch ggplot2 (and a bazillion other packages) install. Then, quit R by calling q().

Step 4: Downgrade libssl

For some inscrutable reason, RStudio is currently set up to work with libssl1.0.0, whereas Debian 9 comes with libssl1.1.0 out of the box. Clearly that’s not going to work, so we’ll have to roll back our libssl. We’ll do so by creating a file called /etc/apt/sources.list.d/jessie.list using vim. And we’ll fill it with the following:

deb http://httpredir.debian.org/debian jessie main contrib non-free
deb-src http://httpredir.debian.org/debian jessie main contrib non-free

deb http://security.debian.org/ jessie/updates main contrib non-free
deb-src http://security.debian.org/ jessie/updates main contrib non-free

In case you’re curious: this creates a sources list called jessie, and allows you to draw from Debian Jessie. So let’s have apt get with the program (sudo apt update), and install libssl1.0.0 (sudo apt install libssl1.0.0). It may be prudent to also install the openssl tool corresponding to the libssl version (sudo apt install openssl/jessie).

Step 5: Time to install RStudio Server!

First, install gDebi, a package installer, by typing sudo apt-get install gdebi. Next, we’ll be grabbing the latest RStudio version using wget, and installing it using gDebi. Make sure you either do this in your home directory or in /tmp/, ideally. Note that the versions of RStudio tend to change – 1.1.442 was the current version, released 12 March 2018, at the time of writing this post, but may by now have changed. You can check the current version number here.

wget https://download2.rstudio.org/rstudio-server-1.1.442-amd64.deb
sudo gdebi rstudio-server-1.1.442-amd64.deb

If all is well and you said yes to the dress question about whether you actually want to install RStudio Server (no, you’ve been going through this whole pain in the rear for excrement and jocularity, duh), you should see something like this:

(Reading database ... 55651 files and directories currently installed.)
Preparing to unpack rstudio-server-1.1.442-amd64.deb ...
Unpacking rstudio-server (1.1.442) ...
Setting up rstudio-server (1.1.442) ...
groupadd: group 'rstudio-server' already exists
rsession: no process found
Created symlink /etc/systemd/system/multi-user.target.wants/rstudio-server.service → /etc/systemd/system/rstudio-server.service.
● rstudio-server.service - RStudio Server
   Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-03-31 21:59:25 UTC; 1s ago
  Process: 16478 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
 Main PID: 16479 (rserver)
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/rstudio-server.service
           └─16479 /usr/lib/rstudio-server/bin/rserver

Mar 31 21:59:25 localhost systemd[1]: Starting RStudio Server...
Mar 31 21:59:25 localhost systemd[1]: Started RStudio Server.

Now, if all goes well, you can navigate to your server’s IP at port 8787, and you should behold something akin to this:

The login interface. Technically, you could simply log in with your root password and root as the username. But just don’t yet.

There are a few more things you will wish to install at this point – these are libraries that will help with SSL and other functionality from within R. Head back to the terminal and install the following:

sudo apt-get install libssl-dev libcurl4-openssl-dev libssh2-1-dev

Step 6: Some finishing touches

The login screen will draw on PAM, Unix’s own authentication module, in lieu of a user manager. As such, to create access, you will have to create a new Unix user with adduser, and assign a password to it. This will grant it a directory of its own, and you the ability to fine-tune what they should, and what they shouldn’t, have access to. Win-win! This will allow you to share a single installation among a range of people.

Step 7: To reverse proxy, or to not reverse proxy?

There are diverging opinions as to whether a reverse proxy such as NGINX carries any benefit. In my understanding, they do not, and there’s a not entirely unpleasant degree of security by obscurity in having a hard to guess port (which, by the way, you can change). It also makes uploads, on which you will probably rely quite a bit, more difficult. On the whole, there are more arguments against than in favour of a reverse proxy, but I may add specific guidance on reverse proxying here if there’s interest.

References

1. Using this link gives me a referral bonus of $20 as long as you remain a customer for 90 days. If you do not wish to do so, please use this link. It costs the same either way, and I would be using Linode anyway as their service is superbly reliable.

MedDRA + VAERS: A marriage made in hell?

This post is a Golden DDoS Award winner

So far, this blog was DDoS’d only three times within 24 hours of its publication. That deserves a prize.

Quick: what do a broken femur, Henoch-Schönlein purpura, fainting, an expired vaccine and a healthy childbirth have in common? If your answer was “they’re all valid MedDRA codes”, you’re doing pretty well. If you, from that, deduced that they all can be logged on VAERS as adverse effects of vaccination, you’re exceeding expectations. And if you also realise that the idea that Jane got an expired HPV vaccine, and as a consequence broke her femur, developed Henoch-Schönlein purpura, and suddenly gave birth to a healthy baby boy is completely idiotic and yet can be logged on VAERS, you’re getting where I’m going.

MedDRA is a medical nomenclature specifically developed for the purposes of pharmacovigilance. The idea is, actually, not dreadful – there are some things in a usual medical nomenclature like ICD-10 that are not appropriate for a nomenclature used for pharmacovigilance reporting (V97.33: sucked into jet engine comes to my mind), and then there are things that are specific to pharmacovigilance, such as “oh shoot, that was not supposed to go up his bum!” (MedDRA 10013659: vaccine administered at inappropriate site), “we overdosed little Johnny on the flu vaccine!” (MedDRA 10000381: drug overdose, accidental) and other joys that generally do only happen in the context of pharmacovigilance. So far, so good.

At the same time, MedDRA is non-hierarchical, at least on the coding level. Thus, while the ICD code V97.33 tells you that you’re dealing with an external cause of mortality and morbidity (V and Y codes), specifically air and space transport (V95-97), more specifically ‘other’ specific air transport accidents, specifically getting sucked into a jet engine (V97.33), there’s no way to extract from MedDRA 10000381 what the hell we’re dealing with. Not only do we not know if it’s a test result, a procedure, a test or a disease, we are hopelessly lost as to figuring out what larger categories it belongs to. To make matters worse, MedDRA is proprietary – which in and of itself is offensive to the extreme to the idea of open research on VAERS and other public databases: a public database should not rely on proprietary encoding! -, and it lacks the inherent logic of ICD-10. Consider the encoding of the clinical diagnosis of unilateral headache in both:

SOC: System Organ Class, HLGT: High Level Group Term, HLT: High Level Term, PT: Preferred Term, LLT: Lower Level Term

Attempting to encode the same concept in MedDRA and ICD-10, note that the final code in MedDRA (10067040) does not allow for the predecessors to be reverse engineered, unless a look-up table is used – which is proprietary. ICD-10, at the same time, uses a structure that encodes the entire ‘ancestry’ of an entity in the final code. Not only does this allow for intermediate codes, e.g. G44 for an ‘other’ headache syndrome until it is closer defined, it also allows for structured analysis of the final code, and even without a look-up table (which is public, as is the whole of ICD-10), the level of kinship between two ICD-10 codes can be ascertained with ease.

We know that an ICD code beginning with F will be something psychiatric and G will be neurological, and from that alone we can get some easy analytical approaches (a popular one is looking at billed codes and drilling down by hierarchical level of ICD-10 codes, something in which the ICD-10 is vastly superior to its predecessor). MedDRA, alas, gives us no such help.

Garbage in, garbage out

OK, so we’ve got a nomenclature where the codes for needlestick injury, death, pneumonia, congenital myopathy and a CBC look all the same. That’s already bad enough. It gets worse when you can enter any and all of these into the one single field. Meet VAERS.

The idea of VAERS is to allow physicians, non-physicians and ‘members of the public’ to report incidents. These are then coded by the CDC and depending on seriousness, they may or may not be investigated (all reports that are regarded as ‘serious’ are investigated, according to the CDC). The problem is that this approach is susceptible to three particular vulnerabilities:

  • The single field problem: VAERS has a single field for ‘symptoms’. Everything’s a symptom. This includes pre-existing conditions, new onset conditions, vaccination errors, lab tests (not merely results, just the tests themselves!), interventions (without specifying if they’re before or after the vaccine), and so on. There is also no way to filter out factors that definitely have nothing to do with the vaccine, such as a pre-existing birth defect. The History/Allergies field is not coded.
  • The coding problem: what gets coded and what does not is sometimes imperfect. This being a human process, it’s impossible to expect perfection, but the ramifications of this for certain methods of analysis are immense. For instance, if there are 100 cases of uncontrollable vomiting, that may be a signal. But if half of those are coded as ‘gastrointestinal disorder’ (also an existing code), you have two values of 50, neither of which may end up being a signal.
  • The issue of multiple coding: because MedDRA is non-hierarchical, it is not possible to normalise at a higher level (say, with ICD-10 codes, at chapter or block level), and it is not clear if two codes are hierarchically related. In ICD-10, if a record contains I07 (rheumatic tricuspid valve disease) and I07.2 (tricuspid stenosis with tricuspid insufficiency), one can decide to retain the more specific or the less specific entry, depending on intended purpose of the analysis.

In the following, I will demonstrate each of these based on randomly selected reports from VAERS.

The Single Field Problem (SFP)

The core of the SFP is that there is only one codeable field, ‘symptoms’.

VAERS ID 375693-1 involves a report, in which the patient claims she developed, between the first and second round of Gardasil,

severe stomach pain, cramping, and burning that lasted weeks. Muscle aches and overall feeling of not being well. In August 2009 patient had flu like symptoms, anxiety, depression, fatigue, ulcers, acne, overall feeling of illness or impending death.

Below is the patient’s symptom transposition into MedDRA entities (under Symptoms):

Event details for VAERS report 375693-1

The above example shows the mixture of symptoms, diagnostic procedures and diagnostic entities that are coded in the ‘Symptoms’ field. The principal problem with this is that when considering mass correlations (all drugs vs all symptoms, for instance), this system would treat a blood test just as much as a contributor to a safety signal as anxiety or myalgia, which might be true issues, or depression, which is a true diagnosis. Unfiltered, this makes VAERS effectively useless for market basket analysis based (cooccurrence frequency) analyses.

Consider, for instance, that the PRR is calculated as

PRR_{V,R} = \frac{\Sigma (R \mid V) / \Sigma (V)}{\Sigma (R \mid \neg V) / \Sigma (\neg V)} = \frac{\Sigma (R \mid V)}{\Sigma (V)} \cdot \frac{\Sigma (\neg V)}{\Sigma (R \mid \neg V)}

where V denotes the vaccine of interest, R denotes the reaction of interest, and the \Sigma operator denotes the sum of rows or columns that fulfill the requisite criteria (a more detailed, matrix-based version of this equation is presented here). But if \{R\}, the set of all R, contains not merely diagnoses but also various ‘non-diagnoses’, the PRR calculation will be distorted. For constant V and an unduly large R, the values computationally obtained from the VAERS data that ought to be \Sigma(R \mid V) and \Sigma(R \mid \neg V) will both be inaccurately inflated. This will yield inaccurate final results.

Just how bad IS this problem? About 30% bad, if not more. A manual tagging of the top 1,000 symptoms (by N, i.e. by the number of occurrences) was used as an estimate for how many of the diagnostic entities do not disclose an actual problem with the vaccine.

In this exercise, each of the top 1,000 diagnostic codes (by N, i.e. by occurrence) were categorised into a number of categories, which in turn were divided into includable (yellow) and non-includable (blue) categories. An includable category reveals a relevant (=adverse) test result, a symptom, a diagnosis or a particular issue, while non-includables pertain to procedures, diagnostics without results, negative test results and administration, storage & handling defects.

According to the survey of the top 1,000 codes, only a little more than 70% of the codes themselves disclose a relevant issue with the vaccine. In other words, almost a third of disclosed symptoms must be pruned, and these cannot be categorically pruned because unlike ICD-10, MedDRA does not disclose hierarchies based on which such pruning would be possible. As far as the use of MedDRA goes, this alone should be a complete disaster.

Again, for effect: a third of the codes do not disclose an actual side effect of the medication. These are not separate or identifiable in any way other than manually classifying them and seeing whether they disclose an actual side effect or just an ancillary issue. Pharmacovigilance relies on accurate source data, and VAERS is not set up, with its current use of MedDRA, to deliver that.
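By way of illustration, if such a manual classification were stored in a table – say, a hand-curated CSV with columns symptom and includable, both of which are hypothetical names rather than anything VAERS or MedDRA actually provide – the pruning itself would be trivial:

library(dplyr)
library(readr)

# Hand-curated classification of the top 1,000 MedDRA terms (hypothetical file)
meddra_classes <- read_csv("meddra_top1000_classification.csv")  # columns: symptom, includable

# Keep only rows whose coded 'symptom' discloses an actual adverse event;
# vaers_reports stands in for your own parsed VAERS extract, and terms not
# in the curated list are dropped here.
vaers_pruned <- vaers_reports %>%
  inner_join(meddra_classes, by = "symptom") %>%
  filter(includable)

The point is not the code – it is that the classification itself has to be made by hand, which is exactly the problem.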

The coding problem

Once a VAERS report is received, it is MedDRA coded at the CDC. Now, no manual coding is perfect, but that’s not the point here. The problem is that a MedDRA code does not, in and of itself, indicate the level of detail it holds. For instance, 10025169 and 10021881 look all alike, where in fact the first is a lowest-level entity (an LLT – Lower-Level Term – in MedDRA lingo) representing Lyme disease, while the latter is the top-level class (SOC – System Organ Class) corresponding to infectious diseases. What this means is that once we see a MedDRA coded entity as its code, we don’t know what level of specificity we are dealing with.

The problem gets worse with named entities. You see, MedDRA has a ‘leaf’ structure: every branch must terminate in one or more (usually one) LLT. Often enough, LLTs have the same name as their parent PT, so you get PT Lyme disease and LLT Lyme disease. Not that it terrifically matters for most applications, but when you see only the verbose output, as is the case in VAERS, you don’t know if this is a PT, an LLT, or, God forbid, a higher level concept with a similar name.

Finally, to put the cherry on top of the cake, where a PT is also the LLT, they have the same code. So for Lyme disease, the PT and LLT both have the code 10025169. I’m sure this seemed like a good idea at the time.

The issue of multiple coding

As this has been touched upon previously, because MedDRA lacks an inherent hierarchy, a code cannot be converted into its next upper level without using a lookup table, whereas with, say, ICD-10, one can simply normalise to the chapter and block (the ‘part left of the dot’). More problematically, however, the same code may be a PT or an LLT, as is the case for Lyme disease (10025169).

Let’s look at this formally. Let the operator \in^* denote membership under the transitive closure of the set membership relation, so that

  1. if x \in A, then x \in^* A,
  2. if x \in A and A \subseteq B, then x \in^* B.

and so on, recursively, ad infinitum. Let furthermore \in^*_{m} denote the depth of recursion, so that

  1. for x \in A:  x \in^*_{0} A,
  2. for x \in A \mid A \subseteq B:  x \in^*_{1} B,

and, once again, so on, recursively, ad infinitum.

Then let a coding scheme \{S_{1...n}\} exhibit the Definite Degree of Transitiveness (DDoT) property iff (if and only if) for any S_m \mid m \leq n, there exists exactly one p for which it is true that S_m \in^*_{p} S.

Or, in other words, two codes S_q, S_r \mid q, r \leq n, may not be representable identically if p_q \neq p_r. Less formally: two codes on different levels may not be identical. This is clearly violated in MedDRA, as the example below shows.

Violating the Definite Degree of Transitiveness (DDoT) property: PT Botulism and LLT Botulism have different p numbers, i.e. different levels, but have nonetheless the same code. This makes the entity 10006041 indeterminate – is it the PT or the LLT for botulism? This is a result of the ‘leaf node constraint’ of MedDRA’s design, but a bug by design is still a bug, not a feature.

Bonus: the ethical problem

To me as a public health researcher, there is a huge ethical problem with the use of MedDRA in VAERS. I believe very strongly in open data and in the openness of biomedical information. I’m not alone: for better or worse, the wealth – terabytes upon terabytes – of biomedical data, genetics, X-ray crystallography, models and sequences proves that if I’m a dreamer, I’m not the only one.

Which is why it’s little short of an insult to the public that a pharmacovigilance system is using a proprietary encoding model.

Downloads from VAERS, of course, provide the verbose names of the conditions or symptoms, but not what hierarchical level they are, nor what structure they are on. For that, unless you are a regulatory authority or a ‘non-profit’ or ‘non-commercial’ (which would already exclude a blogger who unlike me has ads on their blog to pay for hosting, or indeed most individual researchers, who by their nature could not provide the documentation to prove they aren’t making any money), you have to shell out some serious money.

MedDRA is one expensive toy.

Worse, the ‘non-profit’ definition does not include a non-profit research institution or an individual non-profit researcher, or any of the research bodies that are not medical libraries or affiliated with educational institutions but are funded by third party non-profit funding:

This just keeps getting worse. Where would a non-profit, non-patient care provider, non-educational, grant-funded research institution go?

There is something rotten with the use of MedDRA, and it’s not just how unsuitable it is for the purpose, it is also the sheer obscenity of a public database of grave public interest being tied to a (vastly unsuitable and flawed, as I hope it has been demonstrated above) nomenclature.

Is VAERS lost?

Resolving the MedDRA issue

Unlike quite a few people in the field, I don’t think VAERS is hopelessly lost. There’s, in fact, great potential in it. But the way it integrates with MedDRA has to be changed. This is both a moral point – a point of commitment to opening up government information – and one of facilitating research.

There are two alternatives at this point for the CDC.

  1. MedDRA has to open up at least the 17% of codes, complete with hierarchy, that are used within VAERS. These should be accessible, complete with the hierarchy, within VAERS, including the CDC WONDER interface.
  2. The CDC has to switch to a more suitable system. ICD-10 alone is not necessarily the best solution, and there are few alternatives, which puts MedDRA into a monopoly position that it seems to mercilessly exploit at the time. This can – and should – change.

Moving past the Single Field Problem

MedDRA apart, it is crucial for VAERS to resolve the Single Field Problem. It is clear from the issues presented in the first paragraph – a broken femur, Henoch-Schönlein purpura, fainting, an expired vaccine and a healthy childbirth – that there is a range of issues that need to be logged. A good structure would be

  1. pre-existing conditions and risk factors,
  2. symptoms that arose within 6 hours of administration,
  3. symptoms that arose within 48 hours of administration,
  4. symptoms that arose later than 48 hours of administration,
  5. non-symptoms,
  6. clinical tests without results,
  7. clinical tests segmented by positive and negative results, and
  8. ancillary circumstances, esp. circumstances pertaining to vaccination errors such as wrong vaccine administered, expired vaccine, etc.

The use of this segmentation would be able to differentiate not only time of occurrence, but also allow for adequate filtering to identify the correct denominators for the PRR.

A future with (for?) MedDRA

As said, I am not necessarily hostile to MedDRA, even if the closet libertarian in me bristles at the fact that MedDRA is mercilessly exploiting what is an effective monopoly position. But MedDRA can be better, and needs to be better – if not for its own economic interests, then for the interests of those it serves. There are three particular suggestions MedDRA needs to seriously consider.

  1. MedDRA’s entity structure is valuable – arguably, it’s the value in the entire project. If coding can be structured to reflect its internal hierarchy, MedDRA becomes parseable without a LUT,1 and kinship structures become identifiable without that extra lookup step.
  2. MedDRA needs to open up, especially to researchers not falling within its narrowly defined confines of access. Especially given the inherent public nature of its use – PhV and regulation are quintessentially public functions, and this needs an open system.
  3. MedDRA’s entity structure’s biggest strength is that it comprises a range of different things, from administrative errors through physical injuries to test results and the simple fact of tests.

Conclusion

VAERS is a valuable system with a range of flaws. All of them are avoidable and correctable – but would require the requisite level of will and commitment – both on CDC’s side and that of MedDRA. For any progress in this field, it is imperative that the CDC understand that a public resource maintained in the public interest cannot be driven by a proprietary nomenclature, least of all one that is priced out of the range of the average interested individual: and if they cannot be served, does the entire system even fulfill its governmental function of being of the people and for the people? It is ultimately CDC’s asset, and it has a unique chance to leverage its position to ensure that at least as far as the 17% of MedDRA codes go that are used in VAERS, these are released openly.

In the end, however sophisticated our dissimilarity metrics, when 30% of all entities are non-symptoms and we need to manually prune the key terms to avoid denominator bloat due to non-symptom entities, such as diagnostic tests without results or clearly unconnected causes of morbidity and mortality like motor vehicle accidents, dissimilarity based approaches will suffer from serious flaws. In the absence of detailed administration and symptom tracking at an individual or institutional level, dissimilarity metrics are the cheapest and most feasible ways of creating value out of post marketing passive reports. If VAERS is to be a useful research tool, as I firmly believe it was intended to be, it must evolve to that capability for all.

References

1. Look-up table

SafeGram: visualising drug safety

Update: an RMarkdown notebook explaining the whole process is available here.

Visualising vaccine safety is hard. Doing so from passive (or, as we say it in Britain, ‘spontaneous’!) pharmacovigilance (PhV) sources is even harder. Unlike in active or trial pharmacovigilance, where you are essentially dividing the number of incidents by the person-time or the number of patients in the cohort overall, in passive PhV, only incidents are reported. This makes it quite difficult to figure out their prevalence overall, but fortunately, we have some metrics we can use to better understand the issues with a particular medication or vaccine. The proportional reporting ratio (PRR) is a metric that can operate entirely on spontaneous reporting, and reflect how frequent a particular symptom is for a particular treatment versus all other treatments.

Defining PRR

For convenience’s sake, I will use the subscript * operator to mean a row or column sum of a matrix, so that

N_{i,*} = \displaystyle \sum_{j=1}^{n} N_{i,j}

and

N_{*,j} = \displaystyle \sum_{i=1}^{m} N_{i,j}

and furthermore, I will use the exclusion operator * \neg to mean all entities except the right hand value. So e.g.

N_{i, * \neg k} = \displaystyle \sum_{j=1, j \neq k}^{n} N_{i,j}

Conventionally, the PRR is often defined with reference to a 2×2 contingency table that cross-tabulates treatments (m axis) with adverse effects (n axis):

                                 Adverse effect of interest (j)   All other adverse effects (\neg j)   TOTAL
Treatment of interest (i)        a = D_{i,j}                      b = D_{i, * \neg j}                  a + b = D_{i, *} = \displaystyle \sum_{j = 1}^{n} D_{i, j}
All other treatments (\neg i)    c = D_{* \neg i, j}              d = D_{* \neg i, * \neg j}           c + d = D_{* \neg i, *} = \displaystyle \sum_{k=1, k \neq i}^{m} \sum_{l = 1}^{n} D_{k, l}

With reference to the contingency table, the PRR is usually defined as

\frac{a / (a+b)}{c / (c+d)} = \frac{a}{a + b} \cdot \frac{c + d}{c}

However, let’s formally define it over any matrix D.

Definition 1. PRR. Let D be an m \times n matrix that represents the frequency with which each of the n adverse effects occurs with each of the m treatments, so that D_{i,j} (i \in m, j \in n) represents the number of times the adverse effect j has occurred with the treatment i.

For convenience’s sake, let D_{*,j} denote \sum_{i=1}^{m} D_{i,j}, let D_{i,*} denote \sum_{j=1}^{n} D_{i,j}, and let D_{*,*} denote \sum_{i=1}^{m} \sum_{j=1}^{n} D_{i,j}. Furthermore, let D_{* \neg i, j} denote \sum_{k \neq i}^{m} D_{k,j} and D_{i, * \neg j} denote \sum_{k \neq j}^{n} D_{i, k}.

Then, PRR can be calculated for each combination D_{i,j} by the following formula:

PRR_{i,j} = \frac{D_{i,j} / D_{i,*}}{D_{* \neg i, j} / D_{* \neg i, *}} = \frac{D_{i,j}}{D_{i,*}} \cdot \frac{D_{*\neg i, *}}{D_{*\neg i, j}}

Expanding this, we get

PRR_{i,j} = \frac{D_{i,j}}{\displaystyle\sum_{q=1}^n D_{i,q}} \cdot \frac{\displaystyle\sum_{r=1, r\ne i}^{m} \displaystyle\sum_{s=1}^{n} D_{r,s}}{\displaystyle\sum_{t=1, t\ne i}^{m} D_{t,j}}

Which looks and sounds awfully convoluted until we start to think of it as a relatively simple query operation: calculate the sum of each row, then calculate the quotient of, on one hand, the ADR of interest associated with the treatment of interest divided by all uses of the treatment of interest and, on the other, the ADR of interest associated with all other treatments (j \mid \neg i, or c) divided by all ADRs associated with all treatments other than the treatment of interest. Easy peasy!
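As a minimal sketch of that query operation in R – an illustration rather than the notebook's actual code, assuming a counts matrix D with treatments in rows and adverse effects in columns – the whole thing vectorises nicely:

# PRR for every treatment/ADR pair from a counts matrix D
# (treatments in rows, adverse effects in columns).
prr_matrix <- function(D) {
  row_totals  <- rowSums(D)                     # D_{i,*}
  col_totals  <- colSums(D)                     # D_{*,j}
  grand_total <- sum(D)                         # D_{*,*}
  other_col   <- sweep(-D, 2, col_totals, "+")  # D_{* \neg i, j}: column total minus row i
  other_total <- grand_total - row_totals       # D_{* \neg i, *}
  (D / row_totals) / (other_col / other_total)  # element-wise PRR_{i,j}
}

# e.g. D <- as.matrix(xtabs(~ vaccine + symptom, data = reports))  # 'reports' being your own
# prr <- prr_matrix(D)                                             # long-format data frame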

Beyond PRR

However, the PRR only tells part of the story. It does show whether a particular symptom is disproportionately often reported – but does it show whether that particular symptom is frequent at all? Evans (1998) suggested using a combination of an N-minimum, a PRR value and a chi-square value to identify a signal.1 In order to represent the overall safety profile of a drug, it’s important to show not only the PRR but also the overall incidence of each risk. The design of the SafeGram is to show exactly that, for every side effect known to have occurred. To give a better estimate, instead of plotting individual points (there are several hundreds, or even thousands, of different side effects), the kernel density is plotted.

This SafeGram shows four vaccines – meningococcal, oral and injectable polio and smallpox -, and their safety record based on VAERS data between 2006 and 2016.
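For a rough idea of how such a figure can be put together with ggplot2 – this is a sketch rather than the notebook's actual code, and the data frame safety, with columns vaccine, prr and n, is a hypothetical long-format table of one row per vaccine/symptom pair – the core of it is a 2D kernel density estimate with a vertical PRR cut-off line:

library(ggplot2)

safegram <- ggplot(safety, aes(x = prr, y = log10(n))) +
  # kernel density estimate of the (PRR, frequency) pairs, one panel per vaccine
  stat_density_2d(aes(fill = after_stat(level)), geom = "polygon", alpha = 0.6) +
  # the PRR significance cut-off discussed below
  geom_vline(xintercept = 3, linetype = "dashed") +
  facet_wrap(~ vaccine) +
  xlab("Proportional reporting ratio (PRR)") +
  ylab("log10 of number of reports")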

The reason why SafeGrams are so intuitive is because they convey two important facts at once. First, the PRR cut-off (set to 3.00 in this case) conclusively excludes statistically insignificant increases of risk.2 Of course, anything above that is not necessarily dangerous or proof of a safety signal. Rather, it allows the clinician to reason about the side effect profile of the particular medication.

  • The meningococcal vaccine (left upper corner) had several side effects that occurred frequently (hence the tall, ‘flame-like’ appearance). However, these were largely side effects that were shared among other vaccines (hence the low PRR). This is the epitome of a safe vaccine, with few surprises likely.
  • The injectable polio vaccine (IPV) has a similar profile, although the widely disseminated ‘margin’ (blue) indicates that it has a wider range of side effects compared to the meningococcal vaccine, even though virtually all of these were side effects shared among other vaccines to the same extent.
  • The oral polio vaccine (OPV, left bottom corner) shows a flattened pattern typical for vaccines that have a number of ‘peculiar’ side effects. While the disproportionately frequently reported instances are relatively infrequent, the ‘tail-like’ appearance of the OPV SafeGram is a cause for concern. The difference between meningococcal and IPV on one hand and OPV on the other is explained largely by the fact that OPV was a ‘live’ vaccine, and in small susceptible groups (hence the low numbers), they could provoke adverse effects.
  • The smallpox vaccine, another live vaccine, was known to have a range of adverse effects, with a significant part of the population (about 20%) having at least one contraindication. The large area covered indicates that there is a rather astonishing diversity of side effects, and many of these – about half of the orange kernel – lie above the significance boundary of 3.00. The large area covered by the kernel density estimate and the reach into the right upper corner indicate a very probable safety signal worth examining.

Interpretation

A SafeGram for each vaccine shows the two-dimensional density distribution of two things – the frequency and the proportional reporting rate of each vaccine (or drug or device or whatever it is applied to). When considering the safety of a particular product, the most important question is whether a particular adverse effect is serious – a product with a low chance of an irreversible severe side effect is riskier than one with a high probability of a relatively harmless side effect, such as localized soreness after injection. But the relative severity of a side effect is hard to quantify, and a better proxy for it is to assume that, in general, most severe side effects will be unique to a particular vaccine. So, for instance, while injection site reactions and mild pyrexia following inoculation are common to all vaccines – and hence the relative reporting rates are relatively low, reflecting roughly the number of inoculations administered – serious adverse effects tend to be more particular to the vaccine (e.g. the association of influenza vaccines with Guillain-Barré syndrome in certain years means that GBS has an elevated PRR for the flu vaccines, despite the low number of occurrences). Discarding vaccines with a very low number of administered cases, the SafeGram remains robust to differences in the number of vaccines administered.

Fig. 1 above shows a number of typical patterns. In general, anything to the left of the vertical significance line can be safely ignored, as these are generally effects shared between most other vaccines and exhibit no specific risk signal for the particular vaccine. On the other hand, occurrences to the right of the vertical significance line may – but do not necessarily – indicate a safety signal. Of particular concern are right upper quadrant signals – these are frequent and at the same time peculiar to a particular vaccine, suggesting that they are not part of the typical post-inoculation syndrome (fever, fatigue, malaise) arising from immune activation but rather a specific issue created by the antigen or the adjuvant. In rare cases, there is a lower right corner ‘stripe’, such as for the OPV, where a wide range of unique but relatively infrequent effects are produced. These, too, might indicate the need for closer scrutiny.

It is crucial to note that merely having a density of signals in the statistically significant range does not automatically mean that there is a PhV concern, but rather that such a concern cannot be excluded. Setting the PRR significance limit is somewhat arbitrary, but Evans et al. (2001) have found a PRR of 2, more than 3 cases over a two-year period and a chi-square statistic of 4 or above to be suggestive of a safety signal. Following this lead, the original SafeGram code looks at a PRR of 3.0 and above, and disregards cases with an overall frequency below 3Y, where Y denotes the number of years considered.

Limitations

The SafeGram inherently tries to make the best out of imperfect data. Acknowledging that passive reporting data is subject to imperfections, some caveats need to be kept in mind.

  • The algorithm assigns equal weight to every ‘symptom’ reported. VAERS uses an unfiltered version of MedDRA, a coding system for regulatory activities, and this includes a shocking array of codes that do not suggest any pathology. For instance, the VAERS implementation of MedDRA contains 530 codes for normal non-pathological states (e.g. “abdomen scan normal”), and almost 18,000 (!) events involve at least one of these ‘everything is fine!’ markers. This may be clinically useful because they may assist in differential diagnosis and excluding other causes of symptoms, but since they’re not treated separately from actually pathological symptoms, they corrupt the data to a minor but not insignificant extent. The only solution is manual filtering, and with tens of thousands of MedDRA codes, one would not necessarily be inclined to do so. The consequence is that some symptoms aren’t symptoms at all – they’re the exact opposite. This is not a problem for the PRR because it compares a symptom among those taking a particular medication against the same symptom among those who are not.
  • A lot of VAERS reports are, of course, low quality reports, and there is no way for the SafeGram to differentiate. This is a persistent problem with all passive reporting systems.
  • The SafeGram gives an overall picture of a particular drug’s or vaccine’s safety. It does not differentiate between the relative severity of a particular symptom.
  • As usual, correlation does not equal causation. As such, none of this proves the actual risk or danger of a vaccine, but rather the correlation or, in other words, potential safety signals that are worth examining.
Grouped by pathogen, the safety of a range of vaccines was examined by estimating the density of adverse event occurrence versus adverse event PRR. Note that adverse events reported in VAERS do not show or prove causation, only correlation. This shows that for the overwhelming majority of vaccines, most AEFI reports are below the PRR required to be considered a true safety signal.

SafeGrams are a great way to show the safety of vaccines, and to identify which vaccines have frequently occurring and significantly distinct (high-PRR) AEFIs that may be potential signals. It is important to note that for most common vaccines, including controversial ones like HPV, the centre of the density kernel estimate is below the margin of the PRR signal limit. The SafeGram is a useful and visually appealing proof of the safety of vaccines that can get actionable intelligence out of VAERS passive reporting evidence that is often disregarded as useless.

References

1. Evans, S. J. W. et al. (1998). Proportional reporting ratios: the uses of epidemiological methods for signal generation. Pharmacoepidemiol Drug Saf, 7(Suppl 2), 102.
2. According to Evans et al., the correct figure for PRR exclusion is 2.00, but they also use N-restriction and a minimum chi-square of 4.0.

Using screen to babysit long-running processes

In machine learning, especially in deep learning, long-running processes are quite common. Just yesterday, I finished running an optimisation process that ran for the best part of four days –  and that’s on a 4-core machine with an Nvidia GRID K2, letting me crunch my data on 3,072 GPU cores!  Of course, I did not want to babysit the whole process. Least of all did I want to have to do so from my laptop. There’s a reason we have tools like Sentry, which can be easily adapted from webapp monitoring to letting you know how your model is doing.

One solution is to spin up another virtual machine, ssh into that machine, then from that ssh into the machine running the code, so that if you drop the connection to the first machine, it will not drop the connection to the second. There is also nohup, which makes sure that the process is not killed when you ‘hang up’ the ssh connection. You will, however, not be able to get back into the process again. There are also reparenting tools like reptyr, but the need they meet is somewhat different. Enter terminal multiplexers.

Terminal multiplexers are old. They date from the era of things like time-sharing systems and other antiquities whose purpose was to allow a large number of users to get their time on a mainframe designed to serve hundreds, even thousands of users. With the advent of personal computers that had decent computational power on their own, terminal multiplexers remained the preserve of universities and other weirdos still using mainframe architectures. Fortunately for us, two great terminal multiplexers, screen (aka GNU Screen) and tmux, are still being actively developed, and are almost definitely available for your *nix of choice. This gives us a convenient tool to sneak a peek at what’s going on with our long-suffering process. Here’s how.

Step 1
ssh into your remote machine, and launch screen. You may need to do this as sudo if you encounter the error where screen, instead of starting up a new shell, returns [screen is terminating] and quits. If screen is started up correctly, you should be seeing a slightly different shell prompt (and if you started it as sudo, you will now be logged in as root).
ssh into your machine, and launch screen (screen).
In some scenarios, you may want to ‘name’ your screen session. Typically, this is the case when you want to share your screen with another user, e.g. for pair programming. To create a named screen, invoke screen using the session name parameter -S, as in e.g. screen -S my_shared_screen.
Step 2
In this step, we will be launching the actual script to run. If your script is Python based and you are using virtualenv (as you ought to!), activate the environment now using source <virtualenvs folder>/<virtualenv name>/bin/activate, replacing <virtualenvs folder> with the name of the folder where your virtualenvs live (for me, that’s the environments folder; often enough it’s something like ~/.virtualenvs) and <virtualenv name> with the name of your virtualenv (in my case, research). You have to activate your virtualenv even if you have done so outside of screen already (remember, screen means you’re in an entirely new shell, with all environment configurations, settings, aliases &c. gone)!

With your virtualenv activated, launch your script as normal – no need to launch it in the background. Indeed, one of the big advantages is the ability to see verbose-mode progress indicators. If your script does not log progress to stdout but to a logfile, you can start it using nohup, then put it into the background (Ctrl-Z, then bg) and track progress using tail -f logfile.log (where logfile.log is, of course, to be substituted by the filename of the logfile).
Step 3
Press Ctrl-A followed by Ctrl-D to detach from the current screen. This will take you back to your original shell after noting the address of the screen you’re detaching from. These always follow the format <identifier>.<session id>.<hostname>, where <hostname> is, of course, the hostname of the computer from which the screen session was started, <session id> stands for the name you gave your screen, if any, and <identifier> is an autogenerated four-to-six-digit socket identifier. In general, as long as you are on the same machine, the screen identifier or the session name will be sufficient – the full canonical name is only necessary when trying to access a screen on another host.

To see a list of all screens running under your current username, enter screen -list. Refer to that listing or the address echoed when you detached from the screen to reattach to the process using screen -r <socket identifier>[.<session identifier>.<hostname>]. This will return you to the script, which keeps executing in the background.
Result
Reattached to the process running in the background, you can now follow the progress of the script. Use the key combination from Step 3 to step out of the process at any time, and the reattachment command from that step to return to it.

Bugs
There is a known issue, caused by strace, that leads to screen immediately closing, with the message [screen is terminating] upon invoking screen as a non-privileged user.

There are generally two ways to resolve this issue.

The overall effect of both solutions is the same. Notably, both may be undesirable from a security perspective. As always, weigh risks against utility.

Do you prefer screen to staying logged in? Do you have any other cool hacks to make monitoring a machine learning process that takes considerable time to run? Let me know in the comments!

Image credits: Zenith Z-19 by ajmexico on Flickr

Fixing the mysterious Jupyter Tensorflow import bug

There’s a weird bug afoot that you might encounter when setting up a ‘lily white’ (brand new) development environment to play around with Tensorflow. As it seems to have vexed quite a few people, I thought I’d put my solution here to help future tensorflowers find their way. The problem presents itself after you have set up your new virtualenv. You install Jupyter and Tensorflow, and when importing, you get this:

In [1]:   import tensorflow as tf

---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 import tensorflow as tf

ModuleNotFoundError: No module named 'tensorflow'

Oh.

Added perplexion

Say you are a dogged pursuer of bugs, and wish to check whether you might have installed Tensorflow and Jupyter into different virtualenvs. One way to do that is to simply activate your virtualenv (using activate or source activate, depending on whether you use virtualenvwrapper) and start a Python shell. Perplexingly, importing Tensorflow there will work just fine.
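A quick sanity check is to compare which interpreter each of the two is actually running – run the following both in that plain Python shell and in a notebook cell (a minimal diagnostic, nothing more):

import sys

# The interpreter actually executing this code: in the working Python shell
# this should point into your virtualenv, while in the failing Jupyter kernel
# it will typically point at the system Python instead.
print(sys.executable)

# Where this interpreter looks for packages; Tensorflow has to live on one
# of these paths for the import to succeed.
print(sys.path)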

The solution

Caution
At this time, this works only for CPython aka ‘regular Python’ (if you don’t know what kind of Python you are running, it is in all likelihood CPython).
Note
In general, it is advisable to start fixing these issues by destroying your virtualenv and starting anew, although that’s not strictly necessary. Create a virtualenv, and note the base Python executable’s version (it has to be a version for which there is a Tensorflow wheel for your platform, i.e. 2.7 or 3.3-3.6).

Step 1

Go to the PyPI website to find the Tensorflow installation appropriate to your system and your Python version (e.g. cp36 for Python 3.6). Copy the path of the correct version, then open up a terminal window and declare it as the environment variable TF_BINARY_URL. Use pip to install from the URL you set as the environment variable, then install Jupyter.

$ export TF_BINARY_URL=https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
$ pip install --upgrade $TF_BINARY_URL jupyter
Collecting tensorflow==1.1.0rc2 from https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
  Using cached tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl
Collecting jupyter
  Using cached jupyter-1.0.0-py2.py3-none-any.whl

(... lots more installation steps to follow ...)

Successfully installed ipykernel-4.6.1 ipython-6.0.0 jedi-0.10.2 jinja2-2.9.6 jupyter-1.0.0 jupyter-client-5.0.1 jupyter-console-5.1.0 notebook-5.0.0 prompt-toolkit-1.0.14 protobuf-3.2.0 qtconsole-4.3.0 setuptools-35.0.1 tensorflow-1.1.0rc2 tornado-4.5.1 webencodings-0.5.1 werkzeug-0.12.1
Step 2
Now for some magic. If you launch Jupyter now, there’s a good chance it won’t find Tensorflow. Why? Because you have only just installed Jupyter, your shell may still resolve the jupyter command to your system Python installation rather than to the copy inside the virtualenv.

Enter which jupyter to find out where the Jupyter link is pointing. If it is pointing to a path within your virtualenvs folder, you’re good to go. Otherwise, open a new terminal window and activate your virtualenv. Check where the jupyter command is pointing now – it should point to the virtualenv.

Step 3
Fire up Jupyter, and import tensorflow. Voila – you have a fully working Tensorflow environment!

As always, let me know if it works for you in the comments, or if you’ve found some alternative ways to fix this issue. Hopefully, this helps you on your way to delve into Tensorflow and explore this fantastic deep learning framework!

Header image: courtesy of Jeff Dean, Large Scale Deep Learning for Intelligent Computer Systems, adapted from Untangling invariant object recognition by DiCarlo and Cox (2007).

cookiecutter-flask-ask: a quick(er) start to Alexa skills!

If you develop for Amazon’s Alexa-powered devices, you must at some point have come across Flask-Ask, a project by John Wheeler that lets you quickly and easily build Python-based Skills for Alexa. It’s so easy, in fact, that John’s quickstart video, showing the creation of a Flask-Ask based Skill from zero to hero, takes less than five minutes! How awesome is that? Very awesome.

Bootstrapping a Flask-Ask project is not difficult – in fact, it’s pretty easy, but also pretty repetitive. And so, being the ingenious lazy developer I am, I’ve come up with a (somewhat opinionated) cookiecutter template for Flask-Ask.

Usage

Using the Flask-Ask cookiecutter should be trivial. Make sure you have cookiecutter installed, either in a virtualenv that you have activated or in your system installation of Python. Then, simply use cookiecutter gh:chrisvoncsefalvay/cookiecutter-flask-ask to get started. Answer the friendly assistant’s questions, and voila! You have the basics of a Flask-Ask project all scaffolded.

Once you have scaffolded your project, you will have to create a virtualenv for your project and install dependencies by invoking pip install -r requirements.txt. You will also need ngrok to test your skill from your local device.

What’s in the box?

The cookiecutter has been configured with my Flask-Ask development preferences in mind, which in turn borrow heavily from John Wheeler‘s. The cookiecutter provides a scaffold of a Flask application, including not only session start handlers and an example intent but also a number of handlers for built-in Alexa intents, such as Yes, No and Help.
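To give you a flavour of what such handlers look like, here is a minimal Flask-Ask sketch – the handler names and responses are illustrative, not the exact code the template generates:

from flask import Flask
from flask_ask import Ask, question, statement

app = Flask(__name__)
ask = Ask(app, '/')

@ask.launch
def launch():
    # Invoked when the skill is opened without a specific intent.
    return question("Welcome! What would you like me to do?")

@ask.intent('AMAZON.HelpIntent')
def help_intent():
    return question("You can ask me to do something, or say stop to exit.")

@ask.intent('AMAZON.StopIntent')
def stop_intent():
    return statement("Goodbye!")

if __name__ == '__main__':
    app.run(debug=True)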

There is also a folder structure you might find useful, including an intent schema for some basic Amazon intents and a corresponding empty sample_utterances.txt file, as well as a gitkeep’d folder for custom slot types. Because I’m a huge fan of Sphinx documentation and strongly believe that voice apps need to be assiduously documented to live up to their potential, there is also a docs/ folder with a Makefile and an opinionated conf.py configuration file.

Is that all?!

Blissfully, yes, it is. Thanks to John’s extremely efficient and easy-to-use Flask-Ask project, you can discourse with your very own skill less than twenty minutes after starting the scaffolding!

You can find the cookiecutter-flask-ask project here. Issues, bugs and other woes are welcome, as are contributions (simply raise a pull request). For help and advice, you can find me on the Flask-Ask Gitter a lot during daytime CET.

cookiecutter-flask-ask is, of course, Swabware.

Diffie-Hellman in under 25 lines

How can you and I agree on a secret when anyone could be eavesdropping on our communications? At first, the idea sounds absurd – for the longest time, encryption without a pre-shared secret was seen as impossible. In World War II, the Enigma machines relied on a fairly complex pre-shared secret – the Enigma configurations (consisting of the rotor drum wirings and number of rotors specific to the model, the Ringstellung of the day, and the Steckbrett configuration) were effectively the pre-shared key. During the Cold War, field operatives were provided with one-time pads (OTPs), randomly (if they were lucky) or pseudorandomly (if they weren’t, which was most of the time) generated,1 with which to encrypt their messages. Cold War era Soviet OTPs were, of course, vulnerable because, like most Soviet things, they were manufactured sloppily.2 But OTPs suffer from a bigger problem: if the key is known, the entire scheme of encryption is defeated. And somehow, you need to get that key to your field operative.

Enter the triad of Merkle, Diffie and Hellman, who in 1976 found a way to exploit the fact that modular exponentiation is easy to compute but very hard to reverse – the discrete logarithm problem. From this, they derived the algorithm that came to be known as the Diffie-Hellman algorithm.3


How to cook up a key exchange algorithm

The idea of a key exchange algorithm is to end up with a shared secret without having to exchange anything that would require transmission of the secret. In other words, the assumption is that the communication channel is unsafe. The algorithm must withstand an eavesdropper knowing every single exchange.

Alice and Bob must first agree on a modulus p (a large prime) and a base g, such that g is a primitive root modulo p.

Alice and Bob each choose a secret key, a and b respectively – ideally randomly generated. The parties then exchange $A = g^a \bmod p$ (for Alice) and $B = g^b \bmod p$ (for Bob).

Alice has now received B. She goes on to compute the shared secret s by calculating $s = B^a \bmod p$, and Bob computes it by calculating $s = A^b \bmod p$.

The whole story is premised on the equality

$A^b \bmod p = B^a \bmod p$

That this holds nearly trivially should be evident from substituting $g^b$ for $B$ and $g^a$ for $A$. Then,

$g^{ab} \bmod p = g^{ba} \bmod p$

Thus, both parties get the same shared secret. An eavesdropper would be able to obtain A and B (and, of course, g and p, which are public). Given a sufficiently large prime p – in the range of 600-700 digits – the discrete logarithm problem of retrieving $a$ from $A = g^a \bmod p$ in the knowledge of $g$ and $p$ is not efficiently solvable, not even given fairly extensive computing resources.
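The exchange described above fits comfortably in a few lines of Python. The sketch below uses the textbook toy parameters p = 23 and g = 5 purely for illustration – a real exchange would, as noted, use a prime hundreds of digits long:

import secrets

# Toy parameters for illustration only; real deployments use standardised
# groups with primes hundreds of digits long.
p = 23
g = 5

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

# Both sides arrive at the same shared secret without ever transmitting it.
s_alice = pow(B, a, p)
s_bob = pow(A, b, p)
assert s_alice == s_bob
print("Shared secret:", s_alice)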

References

1. As a child, I once built a pseudorandom number generator from a sound card, a piece of wire and some stray radio electronics, which basically rested on a sampling of atmospheric noise. I was surprised to learn much later that this was the method the KGB used as well.
2. Under pressure from the advancing German Wehrmacht in 1941, they had duplicated over 30,000 pages worth of OTP code. This broke the golden rule of OTPs – never, ever reuse key material – and created a weakness that two of the most eminent cryptanalysts of the 20th century, Genevieve Grotjan Feinstein and Meredith Gardner, on whose shoulders the success of the Venona project rested, could exploit.
3. It deserves noting that the D-H key exchange algorithm was another of those inventions that were invented twice but published once. In 1975, the GCHQ team around Clifford Cocks invented the same algorithm, but was barred from publishing it. Their achievements weren’t recognised until 1997.