What’s the value of a conditional clause?

No, seriously, bear with me. I haven’t lost my mind. Consider the following.

Joe, a citizen of Utopia, makes Ut$142,000 a year. In Utopia, you pay 25% on your first Ut$120,000 and 35% on all earnings above that. Let’s calculate Joe’s tax.

Trivial, no? JavaScript:

var income = 142000;
var tax_to_pay = 0;

if(income <= 120000){
 tax_to_pay = income * 0.25;
} else {
 tax_to_pay = 30000 + (income - 120000) * 0.35;
}

console.log(tax_to_pay);

And Python:

income = 142000

if income <= 120000:
    tax_to_pay = income * 0.25
else:
    tax_to_pay = 30000 + (income - 120000) * 0.35

print(tax_to_pay)

And so on. Now let’s consider the weirdo in the ranks, Julia:

income = 142000

if income <= 120000
    tax_to_pay = income * 0.25
else
    tax_to_pay = 30000 + (income - 120000) * 0.35
end

Returns 37700.0 all right. But now watch what Julia can do and the other languages (mostly) can’t! The following is perfectly correct Julia code.

income = 142000

tax_to_pay = (if income <= 120000
                  income * 0.25
              else
                  30000 + (income - 120000) * 0.35
              end)

print(tax_to_pay)

This, too, will return 37700.0. Now, you might say, that’s basically no different from a ternary operator. Except that unlike ternary ops in most languages, you can put as much code in each branch as you want, with as many side effects as your heart desires, and the value of the last expression in whichever branch ends up executing is what gets assigned to the variable at the head.
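To make that concrete, here’s a minimal sketch (the println call and the intermediate variable are purely illustrative) of a branch that does several things, where the last expression still becomes the value of the whole block:

income = 142000

tax_to_pay = if income <= 120000
                 income * 0.25
             else
                 over = income - 120000                         # intermediate step
                 println("Excess over the threshold: ", over)   # side effect
                 30000 + over * 0.35                            # last expression = value of the block
             end

println(tax_to_pay)   # 37700.0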

Now, that raises the question of what the value of a while expression is. Any guesses?

i = 0

digits = (while i < 10
              print(i)
              i = i + 1
          end)

Well, Julia says 0123456789 when executing it, so digits surely must be…

julia> digits


Wait, wat?! That must be wrong. Let’s type check it.

julia> typeof(digits)
Void

So there you have it. A conditional has a value, a while loop doesn’t… even if both return a value. Sometimes, you’ve gotta love Julia, kick back with a stiff gin and listen to Gary Bernhardt.
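If you actually want those digits as a value, the idiomatic escape hatch – a quick sketch, nothing more – is to use a construct that is an expression, such as a comprehension:

digits = [i for i in 0:9]    # a comprehension is an expression and has a value: [0, 1, …, 9]

result = while false end     # a loop, by contrast, always evaluates to nothing
typeof(result)               # Void on the Julia of the time; Nothing on current versions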


10 tips for passing the Neo4j Certified Professional examination

Everybody loves a good certification. Twice so when it’s for free and quadruply so if it’s in a cool new technology like Neo4j. In case you’re unfamiliar with Neo4j, it’s a graph database – a novel database concept that belongs to the NoSQL class of databases, i.e. it does not follow a relational model. Rather, it allows for the storage of, and computation on, graphs.

From a purely mathematical perspective, a graph G = (V, E) is formally defined as an ordered pair of a set of vertices V (called nodes in Neo4j) and a set of edges E (known as relationships in Neo4j). In other words, the first-class citizens of a graph are ‘things’ and ‘connections between things’. No doubt you can already think of a lot of problems that can be conceptualised as graph problems. Indeed, for a surprising number of things that don’t sound very graph-y at all, it is possible to make use of graph databases. Not that you should always do so (no single technology is a panacea to every problem, and I would look very suspiciously at someone who would implement time series as a graph database), but that does not mean it’s not possible in most cases.

Which leads me to the appeal of Neo4j. Until graph databases entered the scene, you generally had two approaches to graph operations. One was to write your own graph object model and have it persist in memory. That’s not bad, but a database it sure ain’t. The alternative was to decompose the graph into a table of vertices and their properties and another table of connections between vertices (essentially an edge list) and then store it in a regular RDBMS or, somewhat more efficiently, in a NoSQL key-value store. That’s a little better, but it still requires considerable reinvention of the wheel.

The strength of graph databases is that they facilitate more complex operations, way beyond storage and retrieval of graphs, such as searching for patterns, properties and paths. One done-to-death example would be the famous problem known as Six Degrees of Kevin Bacon, a pop culture version of Erdös numbers: for an actor A and a Kevin Bacon K within a graph G_{Actors} with A, K \in G_{Actors}, what is the shortest path (and is it below six jumps?) to get from A to K? Graph databases turn this into a simple query. Neo4j is one of the first industrial grade graph DBs, with an enterprise grade product that you can safely deploy in a production system without worrying too much about it. Written in Java, it’s stable, fast and has enough API wrappers to have some left over for the presents next Christmas. Alongside the more traditional APIs, it’s got a very friendly and very visual web-based interface that immediately plots your query results and a somewhat weird but ultimately not very counter-intuitive query language known as Cypher. As such, if graph problems are the kind of problem you deal with on a regular basis, taking Neo4j for a spin might be a very good idea.

Which in turn leads me to the Neo4j certification. For the unbeatable price of $0.00, you can now sit for the esteemed title of Neo4j Certified Professional – that is, if you pass the 80-question, 60-minute time-capped test with a score of 80% or above. Now, let not the fact that it’s offered for free deter you – the test is pretty ferocious. It takes a fairly in-depth knowledge of Neo4j to pass (I’ve been around Neo4j since its early days, and while I did pass at the first attempt recently, it was surprisingly hard even for me!), and the time cap means that even if you do decide to refer to your notes (I am not sure whether that counts as cheating – I personally did not, as it was simply too time-intensive), you won’t be able to pass merely from notes. Worse, there are no test exams and preparation material is scarce outside (rather pricey!) trainings. As such, I’ve written up the ten things I wish I had known before embarking upon the exam. While I did pass at the first try, it was a lot harder than I expected and I would definitely have prepared for it differently had I known what it would be like! Fortunately, you can attempt it as often as you would like at no cost, so it’s by no means an impossible task,[1] but you’re in for a ride if you wish to pass with a good score. Fasten your seat belt, flip up the tray table and put your seat in a fully upright position – it’s time to get Neo4j’d!

1. This is not a user test… it’s a user and DBA test.

I haven’t heard of a single Neo4j shop that had a dedicated Neo4j DBA to support graph operations. Which is ok – compared to the relatively arcane art of (enterprise) RDBMS DBAs, Neo4j is a breeze to configure. At the same time, the model seems to expect users to know what they’re doing themselves and be confident with some close-to-the-metal database tweaking. Good.

The downside is that about a quarter or so of the questions have to do with the configuration of Neo4j, and they do get into the nitty-gritty. You’re expected, for instance, to know fairly detailed minutiae of Enterprise edition High Availability server settings.

2. Pay attention to Cypher queries. The devil’s in the details.

If you’ve done as many multiple-choice tests as I have, you’ve learned one thing for sure: they all follow the same pattern. Two answers are complete bunk, and anyone who’s done their reading can spot that. The remaining two are deceptively similar, however, and both sound ‘correct enough’. In the Neo4j test, this is mainly in the realm of the Cypher queries. A number of questions involve a ‘problem’ being described and four possible Cypher queries. The candidate must then spot which of these – or which several of these – answer the problem description. Often the correct answer may be distinguished from the incorrect one by as little as a correctly placed colon or a bracket closed in the right order. When in doubt, have a very sharp look at the Cypher syntax.

Oh, incidentally? The test makes relatively liberal use of the ‘both directions match’ (a)-[:RELATION]-(b) query pattern. This catches (a)-[:RELATION]->(b) as well as (b)-[:RELATION]->(a). The lack of the little arrow is easy to overlook and can lead you down the wrong path…

3. Develop query equivalence to second nature.

Python was built so that there would be one, and exactly one, right way to do everything. Sort of. Cypher is the opposite – there are dozens of ways to express certain relations, largely owing to the equivalence of relationships. As such, be aware of two equivalences. One is the equivalence of inline parameters and WHERE parameters:

MATCH (a:Person {name: "John Smith"})-[:REL]->(b)
RETURN a;

MATCH (a:Person)-[:REL]->(b)
WHERE a.name = "John Smith"
RETURN a;

Also, the following partial patterns are usually – but not always – equivalent:

(a)-[:FIRST_REL]->(b)<-[:SECOND_REL]-(c)
(a)-[:FIRST_REL]->(b)
(c)-[:SECOND_REL]->(b)

When you see a Cypher statement, you should be able to see all of its forms. Recap question: when are the statements in the second pair NOT equivalent?

4. The test is designed on the basis of the Enterprise edition.

Neo4j comes in two ‘flavours’ – Community and Enterprise. The latter has a lot of cool features, such as an error-resilient, distributed ‘High Availability’ mode. The certification’s premise is that you are familiar – and familiar to a fairly high degree, actually! – with many of the Enterprise-only features of Neo4j. As such, unless you’re fortunate enough to be an enterprise user, it might repay itself to download the 30-day evaluation version of Neo4j Enterprise.

5. The test is generally well-written.

In other words, most things are fairly clear. By fairly clear, I mean that there is little ambiguity and the test uses the same language as the reference (although, comparing test questions to phrases that stuck in my head and that I checked after the test, just enough words are changed to deter would-be cheaters from Ctrl+F-ing through the manual!). There are no trick questions – so try to understand the questions in their most ‘mundane’, ‘trivial’ way. Yes, sometimes it is that simple!

6. TRUNCATE BRAINSPACE sql_clauses;

A lot of traditional SQL clauses (yes, TRUNCATE is one example – so is JOIN and its multifarious siblings, which describe a concept that simply does not exist in Neo4j) come up as red herrings in Cypher application questions. Try to force your brain to make a switch from SQL to Cypher – and don’t fall for the trap of instinctively thinking of the clauses in the SQL solution! Forget SQL. And most of all, forget its logic of selection – MATCHing is something rather different than SELECTing in SQL.

7. Have a 30,000ft overview of the subject

In particular, have an overview of what your options are to get particular things done. How can you access Neo4j? You might have spent 99% of your time on the web interface and/or interacting using the SDK, but there is actually a shell. How can you backup from Neo4j, and what does backup do? What are your options to monitor Neo4j? Once again, most users are more likely to think of one solution, perhaps two, when there are several more. The difficult thing about this test is that it requires you to be exhaustive – both in breadth and in depth.

8. Algorithms, statistics and aggregation

As far as I’m aware, everyone gets slightly different questions, but my test did not include anything about the graph algorithms inherent in Neo4j (good news for people who just want to get stuff done). It did, however, include quite a bit of detail about aggregation functions. You make of that what you will.

9. Practice on Northwind but know the Movie DB like the back of your hand.

Out of the box, if you install Neo4j Community on your computer, you have two sample databases that the Browser offers to load into your instance – Movie and Northwind. The latter should be highly familiar to you if you have a past in relational databases. Meanwhile, the former is a Neo4j favourite, not least for the Kevin Bacon angle. If you did the self-paced Getting Started training (as you should have!), you’ll have used the Movie DB enough to get a good grip of it. Most of the questions on the test pertain or relate in some way to that graph, so a degree of familiarity can help you spot errors faster. At the same time, Northwind is both a better and bigger database, more fun to use, and allows for more complex queries. Northwind should therefore be your educational tool, but you should know Movie rather well for that little plus of familiar feeling that can make the difference between passing and failing. Oh, by the way – while Getting Started is a great course, you will not stand a snowball’s chance in hell without the Production course. This is so even if you’ve done your fill of deployments and integrations – quite simply put, the breadth of the test is statistically very likely to be beyond your own experiences, even if you’ve done e.g. High Availability deployments yourself. In the real world, we specialise – for the test, however, you must be a generalist.

10. Refcards are your friends.

Start with the one for Cypher. Then build your own for High Availability. Laminate them and carry them around, if need be – or take the few functions or clauses that are your weak spots, put them on post-its and plaster them on your wall. Whatever helps – unless you’re writing Cypher code 24/7 (in which case, what are you doing here?), which I doubt happens a lot, there’s quite simply no substitute for seeing correct code and being able to get a feeling for good versus bad code. The test is incredibly fast paced – 80 questions over 60 minutes gives you 45 seconds per question. At least 15–20 seconds of that is reading the question, if not more (it definitely was more for me – as noted, most questions repay a thorough reading!). Realistically, if you want to make that and have time to think about the more complex questions, you’ve got to be able to bang out the simple Cypher questions quickly (I’d say there were about 8–10 of them altogether, worth an average number of points, though I regret now that I didn’t count them).

 

While the Neo4j certification exam is far from easy, it is doable (hey, if I can do it, so can you!). Graph databases are becoming increasingly important, both because they can accelerate certain calculations on graph data and because a lot of natural processes are in reality closer to relationship-driven interactions than to the static picture that traditional RDBMS logic conveys – so knowing Neo4j is a definite asset for you and your team. Regardless of your intent to get certified, and of your view of certifications in general (mine, too, is more on the less complimentary side), what you learn can be an indispensable asset in research and operations as well. Of course, I’m happy to answer any questions about Neo4j and the certification exam, insofar as my subjective views can make a valid contribution to the matter.

Update 15.02.2016: Neo4j community caretaker Michael Hunger has been so kind as to leave a comment on this article, pointing out that the scant feedback is intentional – it prevents re-takers from simply banging in the correct answers from the feedback e-mail. That makes perfect sense – and is not something I thought of. Thanks, Michael. He is also encouraging recent test takers to propose questions for the test – to me, it’s unprecedented amazingness for a certificate provider to actually ask the community what they believe to be the cornerstones and benchmarks of knowledge in a particular field. So do take him up on that offer – his e-mail is in his comment below.

 

Title image credits: Dr Tamás Nepusz, Reconstructing the structure of the world-wide music scene with Last.fm.

References

1. I’ve been told that feedback on failed tests is fairly terrible – there is no feedback to most questions, and you’re not given the correct answers.

(I)IoT security is not SCADA security

The other day, at the annual Worldwide Threats hearing of the Senate Armed Services Committee (SASC) – the literal sum of all fears of the intelligence community and the US military – the testimony of DNI James Clapper took note of the emerging threat of hacking the Internet of Things:

“Smart” devices incorporated into the electric grid, vehicles—including autonomous vehicles—and household appliances are improving efficiency, energy conservation, and convenience. However, security industry analysts have demonstrated that many of these new systems can threaten data privacy, data integrity, or continuity of services. In the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.[1]

It’s good to hear a degree of concern for safety where IoT applications are concerned. The problem is, that concern comes in two flavours, neither of which is helpful.

One is the “I can’t see your fridge killing you” approach. Beloved of Silicon Valley, it is generally a weak security approach, resting on the observation that, for what it’s worth, most of IoT is a toy you can live without. So what if your internet-connected egg monitor stops sending packets to your MQTT server about your current egg stock? What if your fridge misregulates because some teenager is playing a prank on you?

The problem with this approach is that it entirely ignores the fact that Industrial IoT exists. Beyond that, you find a multiverse of incredibly safety-critical devices with IoT connectivity. This includes pain pumps, anaesthesia and vitals monitors, navigation systems, road controls and so on. A few of these are air-gapped, but if Stuxnet is anything to go by, that’ll be of little avail altogether. In other words, just because some IoT devices are toys doesn’t mean all of them are.

The other does take security seriously… but it smacks of SCADA talk. Yes, the dangers of interfacing a computer with the controls of any real-world object – up to and including fridges, cat feeders and nuclear power plants – mean that there are particular risks, and whether that interface is a 1980s protocol or the Internet of Things makes no great difference. But the danger of this perspective is that it ignores the IoT-specific part of the risk profile. This includes, for instance, the vulnerabilities inherent in the #1 carrier medium of non-industrial IoT traffic – 802.11b/g. Any vulnerability in the main wireless protocols is in turn a vulnerability of IoT-connected devices. I’d wager that wasn’t much of an issue back when Reagan was President.

Moreover, a scary part of IoT is that a lot of the devices interface directly with customers, rather than professionals. You cannot rely on anything that’s not in the box. The wireless network will be badly configured, the passwords will be the user’s birthday and generally, everything will be as unsafe as it gets. Of course, this may only be true of a minority of users, but it is the worst-case scenario, and that’s what determines the risk profile of a component.

The bottom line is that if you ignore the incredible diversity of devices caught by the new favourite buzzword IoT has become, you get a slanted risk profile, towards toys (IoT connected floating thingies in your pool that measure your cannonballs? Are you serious?) or towards an aging system of industrial control protocols that some hope will be revived by, rather than supplanted by, IoT.

That’s a problem if you scratch anything but the top layer. Below the very specific layers lie a number of fairly generic interconnection layers. And the consequence is that every layer, protocol or method of communication that can conceivably be used for IoT data transmission eventually will be used for it, and as such must have a defensible risk profile.

And in turn, every (I)IoT application must consider, in its risk profile, the risk profiles of the stack it is built on. Every element of the stack contributes to the risk, and quite often these risks are encapsulated in vendor risk estimates of higher-up layers sold as a composite. Thus, for instance, an entire IoT appliance might be sold with a particular risk profile, but in reality its risk profile is the sum of the risk profiles of all of its layers: its sensors, its hardware (especially where sex via hex is concerned – something that might arouse older engineers entirely the wrong way), the radio frequencies, the susceptibility of the radio hardware to certain unpleasant hacks, heck, even the possibility of van Eck phreaking on screens.[2] As a recovering lawyer, I entirely foresee that the expected standard for non-private users of IoT devices, as well as those who build IoT devices, will be to examine the whole stack rather than rely on previous promises. It may be turtles all the way down, but you’re expected to know all about those turtles now. Merely subscribing to one of the two prevalent risk models will not suffice.

References

1. Testimony of DNI James R. Clapper to the Senate Armed Services Committee, 9 February 2016. Available here.
2. A few years ago, a large and undisclosed metallurgical company was concerned about information leaking onto the open market that indicated possible hacking of their output values at one of their largest plants, values that were, due to some oddities in the markets, quite significant in global commodity prices. After an enormously long hunt and scouring through the entire corporate and production network, they were about to give up when they found a van Eck loop antenna curled to the backside of a subsidiary monitor that an engineer has ‘hacked’ never to go into power save.

The sinful algorithm

In 1318, amidst a public sentiment that was less than enthusiastic about King Edward II, a young clerk from Oxford, John Deydras of Powderham, claimed to be the rightful ruler of England. He spun a long and rather fantastic tale that involved sows biting off the ears of children and other assorted lunacy.[1] Edward II took much better to the pretender than his wife, the all-around badass Isabella of France, who was embarrassed by the whole affair, and Edward’s barons, who feared more sedition if they let this one slide. As such, eventually, Deydras was tried for sedition.

Deydras’s defence was that he had been convinced to engage in this charade by his cat, through whom the devil appeared to him.[2] That did not meet with much leniency. It did, however, result in one of those facts that exemplify the degree to which medieval criminal jurisprudence was divorced from reason and reality: besides Deydras, his cat, too, was tried, convicted, sentenced to death and hanged, alongside its owner.

Before the fashionable charge of unreasonableness is brought against the Edwardian courts, let it be noted that other times and cultures have fared no better. In the later Middle Ages, it was fairly customary for urban jurisdictions to remove objects that had been involved in a crime beyond the city limits, giving rise to the term extermination (ex terminare, i.e., [being put] beyond the ends).[3] The Privileges of Ratisbon (1207) allowed the house in which a crime took place, or which harboured an outlaw, to be razed to the ground – the house itself was as guilty as its owner.[4] And even a culture as civilised and rationalistic as the Greeks fared no better, falling victim to the same surge of unreason. Hyde describes:

The Prytaneum was the Hôtel de Ville of Athens as of every Greek town. In it was the common hearth of the city, which represented the unity and vitality of the community. From its perpetual fire, colonists, like the American Indians, would carry sparks to their new homes, as a symbol of fealty to the mother city, and here in very early times the prytanis or chieftain probably dwelt. In the Prytaneum at Athens the statues of Eirene (Peace) and Hestia (Hearth) stood; foreign ambassadors, famous citizens, athletes, and strangers were entertained there at the public expense; the laws of the great law-giver Solon were displayed within it and before his day the chief archon made it his home.
One of the important features of the Prytaneum at Athens were the curious murder trials held in its immediate vicinity. Many Greek writers mention these trials, which appear to have comprehended three kinds of cases. In the first place, if a murderer was unknown or could not be found, he was nevertheless tried at this court. Then inanimate things – such as stones, beams, pieces of iron, etc. – which had caused the death of a man by falling upon him were put on trial at the Prytaneum, and lastly animals, which had similarly been the cause of death.
Though all these trials were of a ceremonial character, they were carried on with due process of law. Thus, as in all murder trials at Athens, because of the religious feeling back of them that such crimes were against the gods as much as against men, they took place in the open air, that the judges might not be contaminated by the pollution supposed to exhale from the prisoner by sitting under the same roof with him.
(…)
[T]he trial of things, was thus stated by Plato:
“And if any lifeless thing deprive a man of life, except in the case of a thunderbolt or other fatal dart sent from the gods – whether a man is killed by lifeless objects falling upon him, or his falling upon them, the nearest of kin shall appoint the nearest neighbour to be a judge and thereby acquit himself and the whole family of guilt. And he shall cast forth the guilty thing beyond the border.”
Thus we see that this case was an outgrowth from, or amplification of the [courts’ jurisdiction trying and punishing criminals in absentia]; for if the murderer could not be found, the thing that was used in the slaying, if it was known, was punished.[5]


The current wave of fashionable statements about the evils of algorithms has reminded me eerily of the superstitious pre-Renaissance courts, convening in damp chambers to mete out punishments not only on people but also on impersonal objects. The same detachment from reality, from the Prytaneum through Xerxes’s flogging of the Hellespont through hanging cats for being Satan’s conduits, is emerging once again, in the sophisticated terminology of ‘systematized biases’:

Clad in the pseudo-sophistication of a man who bills himself as ‘one of the world’s leading thinkers‘, a wannabe social theorist with an MBA from McGill and a career full of buzzwords (everything is ‘foremost’, ‘agenda-setting’ or otherwise ‘ultimative’!) that now apparently qualifies him to discuss algorithms, Mr Haque makes three statements that have now become commonly accepted dogma among certain circles when discussing algorithms.

  1. Algorithms are means to social control, or at the very least, social influence.
  2. Algorithms are made by a crowd of ‘geeks’, a largely homogenous, socially self-selected group that’s mostly white, male, middle to upper middle class and educated to a Masters level.
  3. ‘Systematic biases’, by which I presume he seeks to allude to the concept of institutional -isms in the absence of an actual propagating institution, mean that these algorithms are reflective of various biases, effectively resulting in (at best) disadvantage and (at worst) actual prejudice and discrimination against groups that do not fit the majority demographic of those who develop code.

Needless to say, leading thinkers and all that, this is absolute, total and complete nonsense. Here’s why.


A geek’s-eye view of algorithms

We live in a world governed by algorithms – and we have ever since humans mastered basic mathematics. The Polynesian sailors navigating by the stars and the architects of Solomon’s Temple were no less using algorithms than modern machine learning or data mining outfits are. Indeed, the very word itself is a transliteration of the name of the 9th-century Persian mathematician al-Khwarizmi.[6] And for most of those millennia of unwitting and untroubled use of algorithms, there were few objections.

The problem is that algorithms now play a social role. What you read is determined by algorithms. The ads on a website? Algorithms. Your salary? Ditto. A million other things are algorithmically calculated. This has endowed the concept of algorithms with an air of near-conspiratorial mystery. You totally expect David Icke to jump out of your quicksort code one day.

Whereas, in reality, algorithms are nothing special to ‘us geeks’. They’re ways to do three things:

  1. Executing things in a particular order, sometimes taking the results of previous steps as starting points. This is called sequencing.
  2. Executing things a particular number of times. This is called iteration.
  3. Executing things based on a predicate being true or false. This is conditionality.

From these three building blocks, you can literally reconstruct every single algorithm that has ever been used. There. That’s all the mystery.
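To make this concrete, here is a minimal sketch in Julia (a toy progressive tax calculation, purely for illustration) that uses nothing but those three building blocks:

function tax(income)
    # Sequencing: each step builds on the results of earlier steps
    base = min(income, 120000) * 0.25
    extra = max(income - 120000, 0) * 0.35
    total = base + extra

    # Iteration: do something a set number of times
    for year in 1:3
        println("Year ", year, ": ", total)
    end

    # Conditionality: act on a predicate being true or false
    if total > 30000
        println("That is a hefty bill.")
    end

    return total
end

tax(142000)    # 37700.0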

So quite probably, what people mean when they rant about ‘algorithms’ is not the concept of algorithms but particular types of algorithm – in particular, social algorithms, content filtering, optimisation and routing algorithms.

Now, what you need to understand is that geeks care relatively little about the real-world ‘edges’ of problems. That’s not out of contempt or lack of care, but rather to compartmentalise problems into manageable little bits. It’s easier to solve tiny problems and make sure the solutions can interoperate than to create a single, big solution that eventually never happens.

To put it this way: to us, most things, if not everything, is an interface. And this largely determines what it means when we talk about the performance of an algorithm.

Consider your washing machine: it can be accurately modelled in the following way.

The above models a washing machine. Given supply water and power, if you put in dirty clothes and detergent, you will eventually get clean clothes and grey water. And that is all.

Your washing machine is an algorithm of sorts. It’s got parameters (water, power, dirty clothes) and return values (grey water, clean clothes). Now, as long as your washing machine fulfils a certain specification (sometimes called a promise[7] or a contract), according to which it will deliver a given set of predictable outputs to a given set of inputs, all will be well. Sort of.

“Sort of”, because washing machines can break. A defect in an algorithm is defined as ‘betraying the contract’; in other words, the algorithm has gone wrong if it has been given the right supply and yields the wrong result. Your washing machine might, however, fail internally. The motor might die. A sock might get stuck in it. The main control unit might short out.


Now consider the following (extreme simplification of an) algorithm. MD5 is what we call a cryptographic hash function. It takes something – really, anything that can be expressed in binary – and gives a 128-bit hash value. On one hand, it is generally impossible to invert the process (i.e. it is not possible to conclusively deduce what the original message was), while at the same time the same message will always yield the same hash value.

Without really having an understanding of what goes on behind the scenes,[8] you can rely on the promise given by MD5. This is so in every corner of the universe. The value of MD5("Hello World!") is 0xed076287532e86365e841e92bfc50d8c in every corner of the universe. It was that value yesterday. It will be that value tomorrow. It will be that value at the heat death of the universe. What we mean when we say that an algorithm is perfect is that it upholds, and will uphold, its promise. Always.
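To see that promise in code – a minimal sketch, assuming the third-party MD5.jl package is installed, since MD5 is not part of Julia’s standard library:

using MD5

digest = bytes2hex(md5("Hello World!"))   # the same input always yields the same 128-bit digest
println(digest)                           # ed076287532e86365e841e92bfc50d8c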

At the same time, there are aspects of MD5 that are not perfect. You see, perfection of an algorithm is quite context-dependent, much as the world’s best, most ‘perfect’ hammer is utterly useless when what you need is a screwdriver. As such, for instance, we know that MD5 has to map every possible bit value of every possible length to a limited number of possible hash values (128 bit worth of values, to be accurate, which equates to 2^128 or approximately 3.4×10^38 distinct values). These seem a lot, but are actually teensy when you consider that they are used to map every possible amount of binary data, of every possible length. As such, it is known that sometimes different things can have the same hash value. This is called a ‘collision’, and it is a necessary feature of all hash algorithms. It is not a ‘fault’ or a ‘shortcoming’ of the algorithm, no more than we regard the non-commutativity of division a ‘shortcoming’.

Which is why it’s up to you, when you’re using an algorithm, to know what it can and cannot do. Algorithms are tools. Unlike the weird perception in Mr Haque’s swirl of incoherence, we do not worship algorithms. We don’t tend to sacrifice small animals to quicksort and you can rest assured we don’t routinely bow to a depiction of binary search trees. No more do we believe in the ‘perfection’ of algorithms than a surgeon believes in the ‘perfection’ of his scalpel or a pilot believes in the ‘perfection’ of their aircraft. Both know their tools have imperfections. They merely rely on the promise that if used with an understanding of its limitations, you can stake your, and others’, lives on it. That’s not tool-worship, that’s what it means to be a tool-using human being.


The Technocratic Spectre

We don’t know the name of the first human who banged two stones together to make fire, and became the archetype for Prometheus, but I’m rather sure he was rewarded by his fellow humans with the very literal tearing out of his very literal and very non-regrowing liver. Every advance in the history of humanity has had those who not merely feared progress and the new, but immediately saw seven kinds of nefarious scheming behind it. Beyond (often justified!) skepticism, a critical stance towards new inventions and a reserved approach towards progress (all valid positions!), there is always a caste of professional fear-mongers who, after painting a spectre of disaster, immediately proffer the solution: which, of course, is giving them control over all things new, for they are endowed with the mythical talents one requires to be so presumptuous as to claim to be able to decide for others without even hearing their views.

The difference is that most people have become incredibly lazy. The result is a preference for fear over informed understanding, because the latter comes at the price of investing some time in reading up on the technologies that now play such a transformative role. How many Facebook users do you think have re-posted the “UCC 1-308 and Rome Statute” nonsense? And how many of them, you reckon, actually know how Facebook uses their data? While much of what they do is proprietary, the Facebook graph algorithms are partly primitives[9] and partly open. If you wanted, you could, with a modicum of mathematical and computing knowledge, have a good stab at understanding what is going on. On the other hand, posting bad legalese is easier. Much easier.

And thus we have a degree of skepticism towards ‘algorithms’, mostly from people like Mr Haque who do not quite understand what they are talking about and are not actually referring to algorithms but to their social use.

And there lieth the Technocratic Spectre. It has always been a fashionable argument against progress, good or ill, that it is some mysterious machination by a scientific-technical elite aimed at the common man’s detriment. There is now a new iteration of this philosophy, and it is quite surprising how the backwards, low-information edges of the far right reach hands to the far left’s paranoid and misinformed segment. At least the Know-Nothings of the right live in an honest admission of ignorance, eschewing the over-blown credentials and inflated egos of their left-wing brethren like Mr Haque. But in ignorance, they both are one another’s match.

The left-wing argument against technological progress is an odd one, for the IT business, especially the part heavy on research and innovation that comes up with algorithms and their applications, is a very diverse and rather liberal sphere. Nor does this argument square too well with the traditional liberal values of upholding civil liberties, first and foremost that of freedom of expression and conscience. Instead, the objective seems to be an ever more expansive campaign, conducted entirely outside parliamentary procedure (basing itself on regulating private services from the inside and a goodly amount of shaming people into doing their will through the kind of agitated campaigning that I have never had the displeasure to see in a democracy), of limiting the expression of ideas to a rather narrowly circumscribed set, with the pretense that some minority groups are marginalised and even endangered by wrongthink.[10]


Their own foray into algorithms has not fared well. One need only look at the misguided efforts of a certain Bay Area developer notorious for telling people to set themselves on fire. Her software, intended to block wrongthink on the weirder-than-weird cultural phenomenon of Gamergate by blocking Twitter users who have followed a small number of acknowledged wrongthinkers, expresses the flaws of this ideology beautifully. Not only are subtlety and a good technical understanding lacking; there is also a distinct shortage of good common sense and, most of all, an understanding of how to use algorithms. While terribly inefficient and horrendously badly written,[11] the algorithm behind the GGAutoblocker is sound. It does what its creator intended it to do on a certain level: allow you to block everyone who is following controversial personalities. That this was done without an understanding of the social context (e.g. that this is a great way to block the uncommitted and those who wish to be as widely informed as possible) is of course the very point.

The problem is not with “geeks”.

The problem is when “geeks” decide to play social engineering. When they suddenly throw down their coding gear and decide they’re going to transform who talks with whom and how information is exchanged. The problem is exactly the opposite: it happens when geeks cease to be geeks.

It happens when Facebook experiments with users’ timelines without their consent. It happens when companies implement policies aimed at a really laudable goal (diversity and inclusion) that lead to statements by employees that should make any sane person shudder (you know who you are, Bay Area). It happens when Twitter decides they are going to experiment with their only asset. This is how it is rewarded.

$TWTR crash
Top tip: when you’ve got effectively no assets to speak of, screwing with your user base can be very fatal very fast.

The problem is not geeks seeing a technical solution to every socio-political issue.

The problem is a certain class of ‘geeks’ seeing a socio-political use to every tool.


Sins of the algorithm

Why algorithms? Because algorithms are infinitely dangerous: because they are, as I noted above, within their area of applicability universally true and correct.

But they’re also resilient. An algorithm feels no shame. An algorithm feels no guilt. You can’t fire it. You can’t tell it to set itself on fire or, as certain elements have done to me for a single statistical analysis, threaten to rape my wife and/or kill me. An algorithm cannot be guilted into ‘right-think’. And worst of all, algorithms cannot be convincingly presented as having an internal political bias. Quicksort is not Republican. R/B trees are not Democrats. Neural nets can’t decide to be homophobic.

And for people whose sole argumentation lies on the plane of politics, in particular grievance and identity politics, this is a devastating strike. Algorithms are the greased eels unable to be framed for the ideological sins that are used to attack and remove undesirables from political and social discourse. And to those who wish to govern this discourse by fear and intimidation, a bunch of code that steadfastly spits out results and to hell with threats is a scary prospect.

And so, if you cannot invalidate the code, you have to invalidate the maker. Algorithms perpetuate real equality by being by definition unable to exercise the same kind of bias humans do (not that they don’t have their own kind of bias, but the similarity ends with the word – if your algorithm has a racial or ethnic or gender bias, you’re using it wrong). Algorithms are meritocratic, being immune to nepotism and petty politicking. A credit scorer does not care about your social status the way Mr Jones at the bank might privilege the child of his golf partners over a young unmarried ethnic couple. Trading algorithms don’t care whether you’re a severely ill young man playing the markets from hospital.[12] Without human intervention, algorithms have a purity and lack of bias that cannot easily be replicated once humans have touched the darn things.

And so, those whose stock in life is a thorough education in harnessing grievances for their own gain are going after “the geeks”.


Perhaps the most disgusting thing about Mr Haque’s tweet is the contraposition between “geeks” and “regular humans”, with the assumption that “regular humans” know all about algorithms and unlike the blindly algorithm-worshipping geeks, understand how ‘life is more complicated’ and algorithms are full of geeky biases.

For starters, this is hard to take seriously when in the same few tweets, Mr Haque displays a lack of understanding of algorithms that doesn’t befit an Oregon militia hick, never mind somebody who claims spurious credentials as a foremost thinker.

“Regular humans”, whatever they are that geeks aren’t (and really, I’m not one for geek supremacy, but if Mr Haque had spent five minutes among geeks, he’d know the difference is not what, and where, he thinks it is), don’t have some magical understanding of the shortcomings of algorithms. Heck, usually, they don’t have a regular understanding of algorithms, never mind magical. But it sure sounds good when you’re in the game of shaming some of the most productive members of society unless they contribute to the very problem you’re complaining about. For of course ‘geeks’ can atone for their ‘geekdom’ by becoming more of a ‘regular human’, by starting to engage in various ill-fated political forays that end with the problems that sent the blue bird into a dive on Friday.


Little of this is surprising, though. Anyone who has been paying attention could see the warning signs of a forced politicisation of technology, under the guise of making it more equal and diverse. In my experience, diverse teams perform better, yield better results, work a little faster, communicate better and make fewer big mistakes (albeit a few more small ones). In particular, gender-diverse and ethnically diverse teams are much more than the sum of their parts. This is almost universally recognised, and few businesses that have intentionally resisted creating diverse, agile teams have fared well in the long run.[13] I’m a huge fan of diversity – because it lives up to a meritocratic ideal, one to which I am rather committed after having had to work my way into tech through a pretty arduous journey.

Politicising a workplace, on the other hand, I am less fond of. Quite simply, it’s not our job. It’s not our job, because for what it’s worth, we’re just a bunch of geeks. There are things we’re better at. Building algorithms is one.

But they are now the enemy. And because they cannot be directly attacked, we’ll become targets. With the passion of a zealot, it will be taught that algorithms are not clever mathematical shortcuts but merely geeks’ prejudices expressed in maths.

And that’s a problem. If you look into the history of mathematics, most of it is peppered by people who held one kind of unsavoury view or another. Moore was a virulent racist. Pauli loved loose women. Half the 20th century mathematicians were communists at some point of their career. Haldane thought Stalin was a great man. And I could go on. But I don’t, because it does not matter. Because they took part in the only truly universal human experience: discovery.


But discovery has its enemies and malcontents. The attitude they display, evidenced by Haque’s tweet too, is ultimately eerily reminiscent of the letter that sounded the death knell on the venerable pre-WW II German mathematical tradition. Titled Kunst des Zitierens (The Art of Citing), it was written in 1934 by Ludwig Bieberbach, a vicious anti-Semite and generally unpleasant character, who was obsessed with the idea of a ‘German mathematics’, free of the Hilbertian internationalism, of what he saw as Jewish influence, of the liberalism of the German mathematical community in the inter-war years. He writes:

“Ein Volk, das eingesehen hat, wie fremde Herrschaftsgelüste an seinem Marke nagen, wie Volksfremde daran arbeiten, ihm fremde Art aufzuzwingen, muss Lehrer von einem ihm fremden Typus ablehnen.”

 

“A people that has recognised how foreign lusts for power gnaw at its marrow, how those alien to the Volk work to impose foreign ways upon it, must reject teachers of a type alien to it.”

Algorithms, and the understanding of what they do, protect us from lunatics like Bieberbach. His ‘German mathematics’, suffused with racism and Aryan mysticism, was no less delusional than the idea that a cabal of geeks is imposing a ‘foreign way’ of algorithmically implementing their prejudices, as if geeks actually cared about that stuff.

Every age will produce its Lysenko and its Bieberbach, and every generation has its share of zealots that demand ideological adherence and measure the merit of code and mathematics based on the author’s politics.

As with Lysenko and Bieberbach, history will pass its judgment on them, too.

Head image credits: Max Slevogt, Xerxes at the Hellespont (Allegory on Sea Power). Bildermann 13, Oct. 5, 1916. With thanks to the President and Fellows of Harvard College.

References

1. It is now more or less consensus that Deydras was mentally ill and made the whole story up. Whether he himself believed it or not is another question.
2. As an obedient servant to a kitten, I have trouble believing this!
3. Falcón y Tella, Maria J. (2014). Justice and law, 60. Brill Nijhoff, Leiden
4. Falcón y Tella, Maria J. and Falcón y Tella, Fernando (2006). Punishment and Culture: a right to punish? Nijhoff, Leiden.
5. Hyde, Walter W. (1916). The Prosecution and Punishment of Animals and Lifeless Things in the Middle Ages and Modern Times. 64 U.Pa.LRev. 696.
6. Albeit what we currently regard as the formal definition of an algorithm is largely informed by the work of Hilbert in the 1920s, Church’s lambda calculus and, eventually, the emergence of Turing machines.
7. I discourage the promise terminology here as I’ve seen it confuzzled with the asynchronous meaning of the word way too often
8. In case you’re interested, RFC1321 explains MD5’s internals in a lot of detail.
9. Building blocks commonly used that are well-known and well-documented
10. Needless to say, a multiple-times-over minority in IT, the only people who have marginalised and endangered me were these stalwart defenders of the right never to have to face a controversial opinion.
11. Especially for someone who declaims, with pride, her 15-year IT business experience…
12. It was a great distraction.
13. Not that any statement about this matter is not shut down by reference to ludicrous made-up words like ‘mansplaining’.