Structuring R projects

There are some things that I call Smith goods:1 things I want, nay, require, but hate doing. A clean room is one of these – I have a visceral need for some semblance of tidiness around me, but I absolutely hate tidying, especially in the summer.2 Starting and structuring packages and projects is another, which is why I'm so happy that things like cookiecutter exist to do it for you in Python. I am famously laid back about structuring R projects – my chill attitude is only occasionally compared to the Holy Inquisition, the other Holy Inquisition and Gunny R. Lee Ermey's portrayal of Drill Sgt. Hartman, and it's been months since I last gutted an intern for messing up namespaces.3 So while I don't like structuring R projects, I keep doing it, because I know it matters. That's a pearl of wisdom that came occasionally at a great price, some of which I am hoping to save you with this post.

Five principles of structuring R projects

Every R project is different, so structuring R projects calls for a lot more adaptability than usual. Nevertheless, when structuring R projects, I try to follow five overarching principles.

  1. The project determines the structure. In a small exploratory data analysis (EDA) project, you might have some leeway as to structural features that you might not have when writing safety-critical or autonomously running code. This variability in R – reflective of the diversity of its use – means that it’s hard to devise a boilerplate that’s universally applicable to all kinds of projects.
  2. Structure is a means to an end, not an end in itself. The reason why gutting interns, scalping them or yelling at them Gunny style is inadvisable is not just the additional paperwork it creates for HR. Rather, the point of the whole exercise is to create people who understand why the rules exist and adopt them organically, understanding how they help.
  3. Rules are good, tools are better. When tools are provided that take the burden of adherence – linters, structure generators like cookiecutter, IDE plugins, &c. – off the developer, adherence is both more likely and simpler.
  4. Structures should be interpretable to a wide range of collaborators. Even if you have no collaborators, think from the perspective of an analyst, a data scientist, a modeller, a data engineer and, most importantly, the client who will receive the overall product at the very end.
  5. Structures should be capable of evolution. Your project may change objectives, it may grow, its scope may shift. What was a pet project might become a client product. What was designed to be a massive, error-resilient superstructure might have to scale down. And most importantly, your single-player adventure may end up turning into an MMORPG. Your structure has to be able to roll with the punches.

A good starting structure

Pretty much every R project can be imagined as a sort of process: data gets ingested, magic happens, then the results – analyses, processed data, and so on – get spit out. The absolute minimum structure reflects this:

.
└── my_awesome_project
    ├── src
    ├── output
    ├── data
    │   ├── raw
    │   └── processed
    ├── README.md
    ├── run_analyses.R 
    └── .gitignore

In this structure, the process is reflected by a data/ folder (the source), a folder for the code that performs the operations (src/) and a place to put the results (output/). The root analysis file (the sole R file at the top level) is responsible for launching and orchestrating the functions defined in the src/ folder's contents.
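To make this concrete, here is a minimal, purely illustrative sketch of what such a root script might look like – the file and function names (observations.csv, clean_observations(), run_analysis()) are hypothetical stand-ins rather than a prescribed convention:

# run_analyses.R – orchestrates the analysis; every function it calls is defined under src/
invisible(lapply(list.files("src", pattern = "\\.R$", full.names = TRUE, recursive = TRUE), source))

raw_observations <- read.csv(file.path("data", "raw", "observations.csv"))
cleaned          <- clean_observations(raw_observations)   # defined somewhere in src/
results          <- run_analysis(cleaned)                   # defined somewhere in src/

write.csv(results, file.path("output", "results.csv"), row.names = FALSE)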

The data folder

The data folder is, unsurprisingly, where your data goes. In many cases, you may not have any file-formatted raw data (e.g. where the raw data is accessed via a *DBC connection to a database), and you might even keep all intermediate files in the database, although that's pretty uncommon on the whole, and might not make you the local DBA's favourite (not to mention data protection issues). So while the raw/ subfolder might be dispensed with, you'll most definitely need a data/ folder.

When it comes to data, it is crucial to make a distinction between source data and generated data. Rich Fitzjohn puts it best when he says to treat

  • source data as read-only, and
  • generated data as disposable.

The preferred implementation I have adopted is to have

  • a data/raw/ folder, which is usually symlinked to a folder that is write-only to clients but read-only to the R user,4
  • a data/temp/ folder, which contains intermediate (temporary) data, and
  • a data/output/ folder, if warranted.
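The read-only/disposable distinction translates into a very simple coding habit: everything under data/raw/ is only ever read, and everything your code generates goes to data/temp/ (or data/output/), where it can be deleted and regenerated at will. A minimal sketch, with purely illustrative file names:

# Source data: read, never written
measurements <- read.csv(file.path("data", "raw", "measurements.csv"))

# Generated data: disposable, safe to delete and regenerate at any time
dir.create(file.path("data", "temp"), recursive = TRUE, showWarnings = FALSE)
saveRDS(na.omit(measurements), file.path("data", "temp", "measurements_clean.rds"))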

The src folder

Some call this folder R/ – I find this a misleading practice, as you might have C++, bash and other non-R code in it, but the name is unfortunately enforced by R if you want to structure your project as a valid R package, which I advocate in some cases. I am a fan of structuring the src/ folder, usually by logical function. There are two systems of nomenclature that have worked really well for me and the people I work with:

  • The library model: in this case, the root folder of src/ holds individual .R scripts that when executed will carry out an analysis. There may be one or more such scripts, e.g. for different analyses or different depths of insight. Subfolders of src/ are named after the kind of scripts they contain, e.g. ETL, transformation, plotting. The risk with this structure is that sometimes it’s tricky to remember what’s where, so descriptive file names are particularly important.
  • The pipeline model: in this case, there is a main runner script, or potentially a small number of them, which execute the other scripts in sequence. In such a case, it is sensible to establish sequentially numbered subfolders or scripts. Typically, this model performs better if there are at most a handful of distinct pipelines.

Whichever approach you adopt, a crucial point is to keep function definition and application separate. This means that only the pipeline or runner scripts are allowed to execute (apply) functions; the other files merely define them. Typically, folder-level segregation works best for this (a minimal sketch follows the list below):

  • keep all function definitions in subfolders of src/, e.g. src/data_engineering, and have the directly-executable scripts directly under src/ (this works better for larger projects), or
  • keep function definitions in src/, and keep the directly executable scripts in the root folder (this is more convenient for smaller projects, where perhaps the entire data engineering part is not much more than a single script).
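The sketch below illustrates the first variant with hypothetical names: files under src/data_engineering/ only define functions, and the runner directly under src/ is the only place where they are applied.

# src/data_engineering/engineer_features.R – definitions only; nothing executes on source()
engineer_features <- function(df) {
    df$log_value <- log(df$value)   # placeholder transformation
    df
}

# src/run_pipeline.R – the runner: sources the definitions, then applies them
for (f in list.files(file.path("src", "data_engineering"), pattern = "\\.R$", full.names = TRUE)) {
    source(f)
}
engineered <- engineer_features(readRDS(file.path("data", "temp", "measurements_clean.rds")))
saveRDS(engineered, file.path("data", "temp", "engineered.rds"))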

output and other output folders

Output may mean a range of things, depending on the nature of your project. It can be anything from a whole D.Phil thesis written in a LaTeX-compliant form to a brief report to a client. There are a couple of conventions with regard to output folders that are useful to keep in mind.

Separating plot output

It is common to have a separate folder for plots (usually called figs/ or plots/), usually so that they can be reused for various purposes. My personal preference is that plot output folders should be subfolders of output folders, rather than top-tier folders, unless the plots themselves are the very output of the project. That is the case, for instance, where the project is intended to create a particular plot on a regular basis – as with the CBRD project, whose purpose was to regularly generate daily epicurves for the DRC Zaire ebolavirus outbreak.

With regard to maps, in general, the principle that has worked best for teams I ran was to treat static maps as plots. However, dynamic maps (e.g. LeafletJS apps), tilesets, layers or generated files (e.g. GeoJSON files) tend to deserve their own folder.

Reports and reporting

Not every project needs a reporting folder, but for business users, having a nice, pre-written reporting script that can be run automatically and produces a beautiful PDF report every day can be priceless. A large organisation I worked for in the past used this very well to monitor their Amazon AWS expenditure.5 A team of over fifty data scientists worked on a range of EC2 instances, and runaway spending – from provisioning oversized instances, leaving instances running and data transfer charges resulting from misconfigured instances6 – was rampant. So the client wanted daily, weekly, monthly and 10-day rolling usage nicely plotted in a report, by user, highlighting people who would go on the naughty list. This was very well accomplished by an RMarkdown template that was 'knit' every day at 0600 and uploaded as an HTML file onto an internal server, so that every user could see who'd been naughty and who'd been nice. EC2 usage costs went down by almost 30% within a few weeks, and that was without having to dismember anyone!7

Probably the only structural rule to keep in mind is to keep reports and reporting code separate. Reports are client products; reporting code is a work product and therefore should reside in src/.
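As an illustration of that separation, the reporting code below would live in src/ while its product lands in output/. The .Rmd file name and paths are hypothetical, and the daily schedule would come from cron or a similar scheduler rather than from R itself:

# src/reporting/render_usage_report.R – work product; the rendered report is the client product
rmarkdown::render(
    input       = file.path("src", "reporting", "usage_report.Rmd"),
    output_file = paste0("usage_report_", Sys.Date(), ".html"),
    output_dir  = file.path("output", "reports")
)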

Requirements and general settings

I am, in general, not a huge fan of loading whole packages outright. Too many users of R don't realise that

  • you do not need to attach (library(package)) a package in order to use a function from it – as long as the package is available to R, you can simply call the function as package::function(arg1, arg2, ...), and
  • importing (attaching) a package using library(package) puts every single exported function from that package onto the search path, masking by default any previously available function of the same name. This means that in order to know deterministically what any given symbol refers to, you would have to know, at all times, the order of package imports. Needless to say, there is enough to keep in one's mind when coding in R without having to worry about this as well (see the sketch below).
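A small illustration of the masking problem, using the well-known case of dplyr's filter() shadowing stats::filter() (the toy data frame is, of course, made up):

sales <- data.frame(month = c(1, 1, 2), amount = c(10, -2, 5))   # toy data

# Fully qualified calls need no attaching and leave the search path untouched:
dplyr::summarise(dplyr::group_by(sales, month), total = sum(amount))

# Attaching masks any earlier symbol of the same name:
library(dplyr)                      # filter() now means dplyr::filter()
filter(sales, amount > 0)           # dplyr's version
stats::filter(1:10, rep(1/3, 3))    # the masked stats version must now be fully qualified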

However, some packages might be useful to import, and sometimes it’s useful to have an initialisation script. This may be the case in three particular scenarios:

  • You need a particular locale setting, or a particularly crucial environment setting.
  • It’s your own package and you know you’re not going to shadow already existing symbols.
  • You are not using packrat or some other package management solution, and definitely need to ensure some packages are installed, but prefer not to put the clunky install-if-not-present code in every single script.

In these cases, it's sensible to have a file you would source before every top-level script – in an act of shameless thievery from Python, I tend to call this requirements.R, and it includes some fundamental settings I like to rely on, such as setting the locale appropriately. It also includes a CRAN install check script, although I would very much advise the use of packrat over the latter, since a simple install check is not version-sensitive.
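For illustration, a requirements.R along these lines – the locale, options and package list are examples of the kind of thing that goes in it, not a prescription:

# requirements.R – sourced at the top of every top-level script
Sys.setlocale("LC_ALL", "en_GB.UTF-8")          # example locale
options(stringsAsFactors = FALSE)

# Naive CRAN install check – not version-sensitive, hence the preference for packrat
required_packages <- c("dplyr", "ggplot2", "jsonlite")
missing_packages  <- setdiff(required_packages, rownames(installed.packages()))
if (length(missing_packages) > 0) {
    install.packages(missing_packages)
}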

Themes, house style and other settings

It is common, in addition to all this, to keep some general settings. If your institution has a 'house style' for ggplot2 (for instance, a ggthemr file), this could be part of your project's config. But where does this best go?

It would normally be perfectly fine to keep your settings in a config.R file at root level, but a config/ folder is much preferred, as it prevents clutter if you derive any of your configurations from a git submodule. I'm a big fan of keeping house styles and other things intended to give a shared appearance to code and outputs (e.g. linting rules, text editor settings, map themes) in separate – and very, very well managed! – repos, as this ensures consistency across the board over time. As a result, most of my projects have a config folder instead of a single configuration file.
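For instance, a house style file in config/ might do no more than define and activate a shared ggplot2 theme – the file name and the theme choices below are illustrative:

# config/ggplot_theme.R – sourced by top-level scripts (or pulled in as a git submodule)
house_theme <- ggplot2::theme_minimal(base_size = 12) +
    ggplot2::theme(legend.position = "bottom")

ggplot2::theme_set(house_theme)   # subsequent plots in the session pick up the house look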

It is paramount to separate project configuration and runtime configuration:

  • Project configuration pertains to the project itself, its outputs, schemes, the whole nine yards. For instance, the paper size to use for generated LaTeX documents would normally be a project configuration item. Your project configuration belongs in your config/ folder.
  • Runtime configuration pertains to parameters that relate to individual runs. In general, you should aspire to have as few of these as possible – and if you do have them, you should keep them as environment variables. But if you do decide to keep them in a file, it's generally a good idea to keep it at the top level, and to store it not as an R file but as e.g. a JSON file (see the sketch below). There is a range of tools that can programmatically edit and change these file formats, while changing R files programmatically is fraught with difficulties.
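As a sketch of the second point, a runtime configuration might live in a top-level JSON file and be read with jsonlite – the file name and fields are hypothetical:

# runtime_config.json (top level), e.g.: {"forecast_horizon_days": 14, "region": "nord-kivu"}
runtime_config   <- jsonlite::fromJSON("runtime_config.json")
forecast_horizon <- runtime_config$forecast_horizon_days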

Keeping runtime configuration editable

A few years ago, I worked on a viral forecasting tool where a range of model parameters to build the forecast from were hardcoded as R variables in a runtime configuration file. It was eventually decided to create a Python-based web interface on top of it, which would allow users to see the results as a dashboard (reading from a database where forecast results would be written) and make adjustments to some of the model parameters. The problem was, that’s really not easy to do with variables in an R file.

On the other hand, Python can easily read a JSON file into memory, change values as requested and export them onto the file system. So instead of that, the web interface would store the parameters in a JSON file, from which R would then read them and execute accordingly. Worked like a charm. Bottom line – configurations are data, and using code to store data is bad form.

Dirty little secrets

Everybody has secrets. In all likelihood, your project is no different: passwords, API keys, database credentials, the works. The first rule, of course, is to never hardcode credentials in code. But you will need to work out how to make your project work, including via version control, while also not divulging credentials to the world at large. My preferred solutions, in order of preference, are:

  1. the keyring package, which interacts with OS X’s keychain, Windows’s Credential Store and the Secret Service API on Linux (where supported),
  2. using environment variables,
  3. using a secrets file that is .gitignored,
  4. using a config file that’s .gitignored,
  5. prompting the user.

Let’s take these – except the last one, which you should consider only as a measure of desperation, as it relies on RStudio and your code should aspire to run without it – in turn.

Using keyring

keyring is an R package that interfaces with the operating system’s keychain management solution, and works without any additional software on OS X and Windows.8 Using keyring is delightfully simple: it conceives of an individual key as belonging to a keyring and identified by a service. By reference to the service, it can then be retrieved easily once the user has authenticated to access the keychain. It has two drawbacks to be aware of:

  • It’s an interactive solution (it has to get access permission for the keychain), so if what you’re after is R code that runs quietly without any intervention, this is not your best bet.
  • A key can only contain a username and a password, so it cannot store more complex credentials, such as four-part secrets (e.g. in OAuth, where you may have a consumer and a publisher key and secret each). In that case, you could split them across separate keyring keys.

However, for most interactive purposes, keyring works fine. This includes single-item secrets, e.g. API keys, where you can use some junk as your username and hold on only to the password. By default, the operating system's 'main' keyring is used, but you're welcome to create a new one for your project. Note that users may be prompted for a keychain password at call time, and it's helpful if they know what's going on, so be sure to document your keyring calls well.

To set a key, simply call keyring::key_set(service = "my_awesome_service", username = "my_awesome_user"). This will launch a dialogue using the host OS's keychain handler to request authentication to access the relevant keychain (in this case, the system keychain, as no keychain is specified), and you can then retrieve

  • the username: using keyring::key_list("my_awesome_service")[1,2], and
  • the password: using keyring::key_get("my_awesome_service").
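Putting the two retrieval calls together, a hedged sketch of how they might be used in practice – building a database connection inside a function, so the password is never assigned to a top-level variable (the service name is, as before, illustrative):

connect_to_warehouse <- function(service = "my_awesome_service") {
    DBI::dbConnect(
        odbc::odbc(),
        UID = keyring::key_list(service)[1, "username"],
        PWD = keyring::key_get(service)
    )
}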

Using environment variables

Using environment variables to hold certain secrets has become extremely popular, especially for Dockerised implementations of R code, since envvars can be set very easily in Docker. The thing to remember about environment variables is that they're 'relatively private': they're not part of the codebase, so they will definitely not get accidentally committed to the VCS, but everyone who has access to the particular user session will be able to read them. This may be an issue when, for example, multiple people are sharing the ec2-user account on an EC2 instance. The other drawback of envvars is that if there's a large number of them, setting them can be a pain. R has a little workaround for that: if you create an envfile called .Renviron in the working directory, R will read it at startup and set the variables in the session's environment. So, for instance, the following .Renviron file will bind an API key and a username:

api_username = "my_awesome_user"
api_key = "e19bb9e938e85e49037518a102860147"

When you then call Sys.getenv("api_username"), you get the correct result. It's worth keeping in mind that the .Renviron file is sourced once, and once only: at the start of the R session. Changes made after that will not propagate into the session until it ends and a new session is started. It's also rather clumsy to edit, although most tools that handle ini-style files will, with the occasional grumble, digest a .Renviron.
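Two small details worth knowing, sketched below: Sys.getenv() takes an explicit default for unset variables, and base R's readRenviron() lets you re-read an edited .Renviron without restarting the session.

Sys.getenv("api_username")            # "" if the variable is unset
Sys.getenv("api_key", unset = NA)     # an explicit default makes missing values easier to test for

readRenviron(".Renviron")             # re-read the file after editing it mid-session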

Needless to say, committing the .Renviron file to the VCS is what is sometimes referred to as making a chocolate fireman in the business, and is generally a bad idea.

Using a .gitignored config or secrets file

config is a package that allows you to keep a range of configuration settings outside your code, in a YAML file, then retrieve them. For instance, you can create a default configuration for an API:

default:
    my_awesome_api:
        url: 'https://awesome_api.internal'
        username: 'my_test_user'
        api_key: 'e19bb9e938e85e49037518a102860147'

From R, you could then access this using the config::get() function:

my_awesome_api_configuration <- config::get("my_awesome_api")

This would then allow you to e.g. refer to the URL as my_awesome_api_configuration$url, and to the API key as my_awesome_api_configuration$api_key. As long as the configuration YAML file is kept out of the VCS, all is well. The problem is that not everything in such a configuration file is supposed to be secret. For database access credentials, for instance, it makes sense to have the other parameters DBI::dbConnect() needs for a connection available to other users, but to keep the password private. So .gitignore-ing the whole config file is not a good idea.

A somewhat better idea is a secrets file. This file can be safely .gitignored, because it definitely only contains secrets. As previously noted, definitely create it in a format that can be widely read and written (JSON, YAML).9 For reasons noted in the next subsection, the thing you should definitely not do is create a secrets file that consists of R variable assignments, however convenient an idea that may appear at first. Because…

Whatever you do…

One of the best ways to mess up is creating a fabulous way of keeping your secret credentials truly secret… then loading them into the global scope. Never, ever assign credentials. Ever.

You might have seen code like this:

dbuser <- Sys.getenv("dbuser")
dbpass <- Sys.getenv("dbpass")

conn <- DBI::dbConnect(odbc::odbc(), UID = dbuser, PWD = dbpass)
This will work perfectly, except once it's done, it will leave the password and the username, in unencrypted plaintext (!), in the global scope, accessible to any code. That's not just extremely embarrassing if, say, your wife of ten years discovers that your database password is your World of Warcraft character's first name, but also a potential security risk. Never put credentials into any environment if possible, and if it has to happen, at least make it happen within a function so that they don't end up in the global scope. The correct way to do the above would be more akin to this:

create_db_connection <- function() {
    # The connection object is the return value; the credentials never enter the global scope
    DBI::dbConnect(odbc::odbc(), UID = Sys.getenv("dbuser"), PWD = Sys.getenv("dbpass"))
}
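The same pattern works for a .gitignored secrets file: read it inside the function, so the credentials never reach the global scope. A minimal sketch, assuming a hypothetical secrets.json with dbuser and dbpass fields:

create_db_connection <- function(secrets_path = "secrets.json") {
    secrets <- jsonlite::fromJSON(secrets_path)   # .gitignored; contains only secrets
    DBI::dbConnect(odbc::odbc(), UID = secrets$dbuser, PWD = secrets$dbpass)
}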

Concluding remarks

Structuring R projects is an art, not just a science. Many best practices are highly domain-specific, and learning them generally happens by trial, error and the occasional pratfall. In many ways, it's the bellwether of an R developer's skill trajectory, because it shows whether they possess the tenacity and endurance it takes to do meticulous, fine and often rather boring work in pursuit of future success – or at the very least, an easier time debugging things in the future. One of the greatest predictors of success in life is said to be the ability to tolerate deferred gratification, and structuring R projects is a pure exercise in that discipline.

At the same time, a well-executed structure can save valuable developer time, prevent errors and allow data scientists to focus on the data rather than on debugging, trying to find where that damn snippet of code is, or scratching their head trying to figure out what a particularly obscurely named function does. What might feel like an utter waste of time has enormous potential to create value for the individual, the team and the organisation alike.

I'm sure there are many aspects of structuring R projects that I have omitted or ignored – in many ways, it is my own experiences that inform and motivate these commentaries on R. Some of these observations are echoed by many authors; others diverge greatly from commonly held wisdom. As with all concepts in development, I encourage you to read widely, get to know as many different ideas about structuring R projects as possible, and synthesise your own style. As long as you keep in mind why structure matters and what its ultimate aims are, you will arrive at a form of order out of chaos that will be productive, collaborative and useful, not just for your own development but for others' work as well.

My last commentary, on defensive programming in R, has spawned a lively and exciting debate on Reddit, and many have made extremely insightful comments there. I'm deeply grateful to all who have contributed. I hope you will also consider posting your observations in the comment section below – that way, comments will remain together with the original content.

References

1. As in, Adam Smith.
2. It took me years to figure out why. It turns out that I have ZF alpha-1 antitrypsin deficiency. As a consequence, even minimal exposure to small particulates and dust can set off violent coughing attacks and impair breathing for days. Symptoms tend to be worse in hot weather due to impaired connective tissue something-or-other.
3. That's a joke. I don't gut interns – they're valuable resources, HR shuns dismembering your coworkers, it creates paperwork and I liked every intern I've ever worked with – but most importantly, once gutted like a fish, they are not going to learn anything new. I prefer gentle, structured discussions on the benefits of good package structure. Please respect your interns – they are the next generation, and you are probably one of their first examples of what software development/data science leadership looks like. The waves you set into motion will ripple through generations, well after you're gone. You better set a good example.
4. Such a folder is often referred to as a 'dropbox', and the typical corresponding octal setting, 0422, guarantees that the R user will not accidentally overwrite data.
5. The organisation consented to me telling this story but requested anonymity, a request I honour whenever legally possible.
6. In case you're unfamiliar with AWS: it's a cloud service where elastic computing instances (EC2 instances) reside in 'regions', e.g. us-west-1a. There are (small but nonzero) charges for data transfer between regions. If you're in one region but you configure the yum repo server of another region as your default, there will be costs and, eventually, tears – provision ten instances with a few GBs' worth of downloads, and there'll be yelling. This is now more or less impossible to do except on purpose, but one must never underestimate what users are capable of from time to time!
7. Or so I'm told.
8. Linux users will need libsecret 0.16 or above, and sodium.
9. XML is acceptable if you're threatened with waterboarding.

Are you looking for a data science sensei?

Maybe you’re a junior data scientist, maybe you’re a software developer who wants to go into data science, or perhaps you’ve dabbled in data for years in Excel but are ready to take the next step.

If so, this post is all about you, and an opportunity I offer every year.

You see, life has been very good to me in terms of training as a data scientist. I have been spoiled, really – I had the chance to learn from some of the best data scientists, work with some exceptional epidemiologists, experience some unusual challenges and face many of the day-to-day hurdles of working in data analytics. I’ve had the fortune to see this profession in all its contexts, from small enterprises to multi-million dollar FTSE100 companies, from well-run agile start-ups to large and sometimes pretty slow dinosaurs, from government through the private sector to NGOs: I’ve seen it all. I’ve done some great things. And I’ve made some superbly dumb mistakes.

And so, at the start of every year, I open applications for young, start-of-career data scientists looking for their Mr. Miyagi. Don't worry: no car waxing involved. I will choose a single promising young data scientist and pass on as much as I can of my so-called wisdom. At the end, your skills will shine like Mr. Miyagi's 1947 Ford Deluxe Convertible. There's no catch, no hidden trap, no fees or charges involved (except the one mentioned below).

Eligibility criteria

To be eligible, you must:

  • Be 18 or above if you are taking a gap year or not attending a university/college.
  • Have completed any degree you are pursuing. You do not need a formal degree in data science or a relevant subject, but if you are in the middle of one, you must have finished it. In other words: if you're in your 3rd year of an English Lit degree, you're welcome to apply, but if you're in the middle of your CS degree, you have to wait until you're finished – sorry. The same goes if you intend to go straight on to a data science-related postgrad within the year.
  • Have a solid basis in mathematics: decent statistics, combinatorics, linear algebra and some high school calculus are the very minimum.
  • Be familiar with Python (3.5 and above), and either familiar with the scientific Python stack (SciPy, NumPy, Pandas, matplotlib) or willing to pick up a lot on the go.
  • Be willing to put in the work: we'll be convening about once every week to ten days by Skype for an hour, and you'll probably be doing 6-10 hours' worth of reading and work for the rest of the week. Please be realistic about whether you can sustain this.
  • If, as recommended, you are working on an AWS EC2 instance, be aware this might cost money and make sure you can cover the costs. In practice, these are negligible.
  • Understand that this is a physically and intellectually strenuous endeavour, and it is your responsibility to know whether you're physically and mentally up for the job. However, no physical or mental disabilities are regarded as automatically excluding you from consideration.
  • Not live in, reside in or be a citizen of any of the countries listed in CFR Title 22 Part 126, §126.1(d)(1) and (2).
  • Not have been convicted of a felony anywhere. This includes 'spent' UK criminal convictions.

Sounds good? Apply here.

Preferred applicants

When assessing applications, the following groups are given preference:

  • Persons with mental or physical disabilities whose disability precludes them from finding conventional employment – please outline this situation on the application form.
  • Honourably discharged (or equivalent) veterans of NATO forces and the IDF – please include member 4 copy of DD-214, Wehrdienstzeitbescheinigung or equivalent document that lists type of discharge.

What we’ll be up to

Don’t worry. None of this car waxing crap.

Over the 42 weeks to follow, you will be undergoing a rigorous and structured semi-self-directed training process. This will take your background, interests and future ambitions into account, but at the core, you will:

  • master Python’s data processing stack,
  • learn how to visualize data in Python,
  • work with networks and graph databases, including Neo4j,
  • acquire the correct way of presenting results in data science to stakeholders,
  • delve into cutting-edge methods of machine learning, such as deep learning using keras,
  • work on problems in computer vision and get familiar with the Python bindings of OpenCV,
  • scrape data from social networks, and
  • learn convenient ways of representing, summarizing and distributing our results.

The programme is divided into three ‘terms’ of 14 weeks each, which each consist of 9 weeks of directed study, 4 weeks of self-directed project work and one week of R&R.

What you’ll be getting out of this

Since the introduction of Docker, tolerance for wanton destruction as part of coursework has increased, but still won’t earn you a passing grade by itself.

In the past years, mentees have noted the unusual breadth of knowledge they have acquired about data science, as well as the diversity of practical topics and the realistic question settings, with an emphasis on practical applications of data science such as presenting data products. I hope that this year, too, I’ll be able to convey the same important topics. Every year is a little different as I try to adjust the course to meet the individual participant’s needs.

The programme is not, of course, accredited by any accreditation body, but a certificate of completion will be issued to any participant who wishes to receive one.

Application process

Simply fill in the form below and send it off by 14 January 2018. The top contenders will be contacted by e-mail or telephone for a brief conversation thereafter. Finally, a lucky winner will be picked by 21 January 2018. Easy peasy!


FAQ

Q: What does ‘semi-self-directed’ mean? Is there a fixed curriculum?

A: No. There are some basic topics (see the list above) that I think are quite likely to come up, but ultimately, this is about making you the data scientist you want to be. For this reason, we'll begin by planning out where you want to improve – kinda like a PT gives you a training plan before you start out at their gym. We will then adjust as needed. This is not exam prep, it's a learning experience, and for that reason, we can focus on delving deeper and getting the fundamentals right rather than cramming in a particular curriculum.

Q: Can I bring my own data?

A: Sure. In general, we’ll be using standard data sets, because they’re well-known and high-quality data. But if you have a dataset you collected or are otherwise entitled to use that would do equally well, there’s no reason why we couldn’t use it! Note that you must have the right to use and share the data set, meaning it’s unlikely you’re able to use data sets from your day job.

Q: Will this give me an employment advantage?

A: I don’t quite know – it’s impossible to predict. The field of data science degrees is something of a Wild West still, and while some reputable degrees have emerged, others are dubious. Employers still don’t know what to go by. However, you will most definitely be better prepared for an employment interview in data science!

Q: Why are you so keen on presenting data the right way?

A: Because as data scientists, we’re expected to not merely understand the data and draw the right conclusions, but also to convey them to stakeholders at various levels, from plant management to C-suite, in a way that gets the right message across at the first go.

Q: You’re a computational epidemiologist. Can I apply even if my work doesn’t really involve healthcare?

A: Sure. The principles are the same, and we’re largely focusing on generic topics. You might be exposed to bits and pieces of epidemiology, but I can guarantee it won’t hurt.

Q: Why do you only take on one mentee?

A: To begin with, my life is pretty busy – I have a demanding job, a family and – shock horror! – I even need to sleep every once in a while. More importantly, I want to devote my undivided attention to a worthy candidate.

Q: How come I’ve never heard of this before?

A: Until now, I’ve largely gotten mentees by word of mouth. I am concerned that this is keeping some talented people out and limiting the pool of people we should have in. That’s why this year, I have tried to make this process much more transparent.

Q: You’re rather fond of General ‘Mad Dog’ Mattis. Will there be yelling?

A: No.

Q: There seems to be no upper age limit. Is that a mistake?

A: No.

Q: I have more questions.

A: You can ask them here.

10 tips for passing the Neo4j Certified Professional examination

Everybody loves a good certification. Doubly so when it's free, and quadruply so if it's in a cool new technology like Neo4j. In case you're unfamiliar with Neo4j, it's a graph database – a novel database concept that belongs to the NoSQL class of databases, i.e. it does not follow a relational model. Rather, it allows for the storage of, and computation on, graphs.

From a purely mathematical perspective, a graph G = (V, E) is formally defined as an ordered pair of a set of vertices V (called nodes in Neo4j) and a set of edges E (known as relationships in Neo4j). In other words, the first-class citizens of a graph are 'things' and 'connections between things'. No doubt you can already think of a lot of problems that can be conceptualised as graph problems. Indeed, for a surprising number of things that don't sound very graph-y at all, it is possible to make use of graph databases. Not that you always should (no single technology is a panacea for every problem, and I would look very suspiciously at someone who implemented a time series as a graph database), but that does not mean it isn't possible in most cases.

Which leads me to the appeal of Neo4j. Until graph databases entered the scene, you generally had two approaches to graph operations. One was to write your own graph object model and keep it in memory. That's not bad, but a database it sure ain't. The alternative was to decompose the graph into a table of vertices and their properties and another table of connections between vertices (essentially an edge list), and then store them in a regular RDBMS or, somewhat more efficiently, in a NoSQL key-value store. That's a little better, but it still requires considerable reinvention of the wheel.

The strength of graph databases is that they facilitate more complex operations, way beyond the storage and retrieval of graphs, such as searching for patterns, properties and paths. One done-to-death example is the famous problem known as Six Degrees of Kevin Bacon, a pop culture version of Erdös numbers: for an actor A and a Kevin Bacon K within the graph of actors, what is the shortest path (and is it below six jumps?) to get from A to K? Graph databases turn this into a simple query. Neo4j is one of the first industrial-grade graph DBs, with an enterprise-grade product that you can safely deploy in a production system without worrying too much about it. Written in Java, it's stable, fast and has enough API wrappers to have some left over for the presents next Christmas. Alongside the more traditional APIs, it's got a very friendly and very visual web-based interface that immediately plots your query results, and a somewhat weird but ultimately not very counter-intuitive query language known as Cypher. As such, if graph problems are the kind of problem you deal with on a regular basis, taking Neo4j for a spin might be a very good idea.

Which in turn leads me to the Neo4j certification. For the unbeatable price of $0.00, you can now sit for the esteemed title of Neo4j Certified Professional – that is, if you pass the 80-question, 60-minute time-capped test with a score of 80% or above. Now, let not the fact that it's offered for free deter you – the test is pretty ferocious. It takes a fairly in-depth knowledge of Neo4j to pass (I've been around Neo4j ever since it has been around, and although I passed at the first attempt recently, it was surprisingly hard even for me!), and the time cap means that even if you do decide to refer to your notes (I am not sure whether that counts as cheating – I personally did not, as it was just too time-intensive), you won't be able to pass merely from notes. Worse, there are no practice exams, and preparation material is scarce outside (rather pricey!) trainings. As such, I've written up the ten things I wish I had known before embarking upon the exam. While I did pass at the first try, it was a lot harder than I expected, and I would definitely have prepared for it differently had I known what it would be like! Fortunately, you can attempt it as often as you would like at no cost, so it's by no means an impossible task,1 but you're in for a ride if you wish to pass with a good score. Fasten your seat belt, flip up the tray table and put your seat in a fully upright position – it's time to get Neo4j'd!

1. This is not a user test… it’s a user and DBA test.

I haven’t heard of a single Neo4j shop that had a dedicated Neo4j DBA to support graph operations. Which is ok – compared to the relatively arcane art of (enterprise) RDBMS DBAs, Neo4j is a breeze to configure. At the same time, the model seems to expect users to know what they’re doing themselves and be confident with some close-to-the-metal database tweaking. Good.

The downside is that about a quarter or so of the questions have to do with the configuration of Neo4j, and they do get into the nitty-gritty. You’re expected, for instance, to know fairly detailed minutiae of Enterprise edition High Availability server settings.

2. Pay attention to Cypher queries. The devil’s in the details.

If you've done as many multiple-choice tests as I have, you've learned one thing for sure: all of them follow the same pattern. Two answers are complete bunk, and anyone who's done their reading can spot that. The remaining two are deceptively similar, however, and both sound 'correct enough'. In the Neo4j test, this is mainly the case in the realm of Cypher queries. A number of questions involve a 'problem' being described and four possible Cypher queries. The candidate must then spot which of these – one or several – answer the problem description. Often the correct answer can be distinguished from the incorrect one by as little as a correctly placed colon or a bracket closed in the right order. When in doubt, have a very sharp look at the Cypher syntax.

Oh, incidentally? The test makes relatively liberal use of the ‘both directions match’ (a)-[:RELATION]-(b) query pattern. This catches (a)-[:RELATION]->(b) as well as (b)-[:RELATION]->(a). The lack of the little arrow is easy to overlook and can lead you down the wrong path…

3. Develop query equivalence to second nature.

Python was built so that there would be one, and exactly one, right way to do everything. Sort of. Cypher is the opposite – there are dozens of ways to express certain relations, largely owing to the equivalence of relationship patterns. As such, be aware of two equivalences. One is the equivalence of inline property matches and WHERE clauses:

MATCH (a:Person {name: "John Smith"})-[:REL]->(b)
RETURN a;
MATCH (a:Person)-[:REL]->(b)
WHERE a.name = "John Smith"
RETURN a;

Also, the following partial patterns are usually, but not always, equivalent:

(a)-[:FIRST_REL]->(b)<-[:SECOND_REL]-(c)
(a)-[:FIRST_REL]->(b)
(c)-[:SECOND_REL]->(b)

When you see a Cypher statement, you should be able to see all of its forms. Recap question: when are the statements in the second pair NOT equivalent?

4. The test is designed on the basis of the Enterprise edition.

Neo4j comes in two 'flavours' – Community and Enterprise. The latter has a lot of cool features, such as an error-resilient, distributed 'High Availability' mode. The certification's premise is that you are familiar – and familiar to a fairly high degree, actually! – with many of the Enterprise-only features of Neo4j. As such, unless you're fortunate enough to be an enterprise user, it might pay off to download the 30-day evaluation version of Neo4j Enterprise.

5. The test is generally well-written.

In other words, most things are fairly clear. By fairly clear, I mean that there is little ambiguity and the test uses the same language as the reference (although, comparing test questions to phrases that stuck in my head and that I checked after the test, just enough words are changed to deter would-be cheaters from Ctrl+F-ing through the manual). There are no trick questions – so try to understand the questions in their most 'mundane', 'trivial' way. Yes, sometimes it is that simple!

6. TRUNCATE BRAINSPACE sql_clauses;

A lot of traditional SQL clauses (yes, TRUNCATE is one example – so is JOIN and its multifarious siblings, which describe a concept that simply does not exist in Neo4j) come up as red herrings in Cypher application questions. Try to force your brain to make the switch from SQL to Cypher – and don't fall into the trap of instinctively reaching for the clauses of the SQL solution! Forget SQL. And most of all, forget its logic of selection – MATCHing is something rather different from SELECTing in SQL.

7. Have a 30,000ft overview of the subject

In particular, have an overview of what your options are to get particular things done. How can you access Neo4j? You might have spent 99% of your time on the web interface and/or interacting using the SDK, but there is actually a shell. How can you back up Neo4j, and what does a backup do? What are your options to monitor Neo4j? Once again, most users are likely to think of one solution, perhaps two, when there are several more. The difficult thing about this test is that it requires you to be exhaustive – both in breadth and in depth.

8. Algorithms, statistics and aggregation

As far as I'm aware, everyone gets slightly different questions, but my test did not include anything about the graph algorithms inherent in Neo4j (good news for people who just want to get stuff done). It did, however, include quite a bit of detail about aggregation functions. Make of that what you will.

9. Practice on Northwind but know the Movie DB like the back of your hand.

Out of the box, if you install Neo4j Community on your computer, there are two sample databases that the Browser offers to load into your instance – Movie and Northwind. The latter should be highly familiar to you if you have a past in relational databases. Meanwhile, the former is a Neo4j favourite, not least for the Kevin Bacon angle. If you did the self-paced Getting Started training (as you should have!), you'll have used the Movie DB enough to get a good grip of it. Most of the questions on the test pertain or relate in some way to that graph, so a degree of familiarity can help you spot errors faster. At the same time, Northwind is both a better and bigger database, more fun to use, and allows for more complex queries. Northwind should therefore be your educational tool, but you should know Movie rather well for that little plus of familiarity that can make the difference between passing and failing. Oh, by the way – while Getting Started is a great course, you will not stand a snowball's chance in hell without the Production course. This is so even if you've done your fill of deployments and integrations – quite simply put, the breadth of the test is statistically very likely to be beyond your own experience, even if you've done e.g. High Availability deployments yourself. In the real world, we specialise – for the test, however, you must be a generalist.

10. Refcards are your friends.

Start with the one for Cypher. Then build your own for High Availability. Laminate them and carry them around, if need be – or take the few functions or clauses that are your weak spots, put them on post-its and plaster them on your wall. Whatever helps – unless you're writing Cypher code 24/7 (in which case, what are you doing here?), there's quite simply no substitute for seeing correct code and getting a feeling for good versus bad code. The test is incredibly fast-paced – 80 questions over 60 minutes gives you 45 seconds per question, start to finish. At least 15-20 seconds of that is reading the question, if not more (it definitely was more for me – as noted, most questions repay a thorough reading). Realistically, if you want to make that and have time to think about the more complex questions, you've got to be able to bang out the simple Cypher questions quickly (I'd say there were about 8-10 of them altogether, worth an average number of points, though – and I do regret this now – I didn't count them).


While the Neo4j certification exam is far from easy, it is doable (hey, if I can do it, so can you!). Graph databases are becoming increasingly important, thanks to the recognition that they can accelerate certain calculations on graph data, and to the understanding that a lot of natural processes are in reality closer to relationship-driven interactions than the static picture that traditional RDBMS logic seeks to convey. Knowing Neo4j is therefore a definite asset for you and your team. Regardless of whether you intend to get certified, and of your view on certifications in general (mine, too, tends towards the less complimentary side), what you learn can be an indispensable asset in research and operations alike. Of course, I'm happy to answer any questions about Neo4j and the certification exam, insofar as my subjective views can make a valid contribution to the matter.

Update 15.02.2016: Neo4j community caretaker Michael Hunger has been so kind as to leave a comment on this article, pointing out that the scant feedback is intentional – it prevents re-takers from simply banging in the correct answers from the feedback e-mail. That makes perfect sense – and is not something I had thought of. Thanks, Michael. He is also encouraging recent test takers to propose questions for the test – to me, it's unprecedented for a certificate provider to actually ask the community what they believe to be the cornerstones and benchmarks of knowledge in a particular field. So do take him up on that offer – his e-mail is in his comment below.


Title image credits: Dr Tamás Nepusz, Reconstructing the structure of the world-wide music scene with Last.fm.

References

1. I've been told that feedback on failed tests is fairly terrible – there is no feedback on most questions, and you're not given the correct answers.