# Using screen to babysit long-running processes

In machine learning, especially in deep learning, long-running processes are quite common. Just yesterday, I finished running an optimisation process that ran for the best part of four days – and that’s on a 4-core machine with an Nvidia GRID K2, letting me crunch my data on 3,072 GPU cores! Of course, I did not want to babysit the whole process. Least of all did I want to have to do so from my laptop. There’s a reason we have tools like `Sentry`, which can be easily adapted from webapp monitoring to letting you know how your model is doing.

One solution is to spin up another virtual machine, `ssh` into that machine, then from that
`ssh` into the machine running the code, so that if you drop the connection to the first machine, it will not drop the connection to the second. There is also `nohup`, which makes sure that the process is not killed when you ‘hang up’ the `ssh` connection. You will, however, not be able to get back into the process again. There are also reparenting tools like `reptyr`, but the need they meet is somewhat different. Enter terminal multiplexers.

Terminal multiplexers are old. They date from the era of things like time-sharing systems and other antiquities whose purpose was to allow a large number of users to get their time on a mainframe designed to serve hundreds, even thousands of users. With the advent of personal computers that had decent computational power on their own, terminal multiplexers remained the preserve of universities and other weirdos still using mainframe architectures. Fortunately for us, two great terminal multiplexers, `screen` (aka `GNU Screen` ) and `tmux` , are still being actively developed, and are almost definitely available for your *nix of choice. This gives us a convenient tool to sneak a peek at what’s going on with our long-suffering process. Here’s how.

### Step 1
`ssh` into your remote machine, and launch `screen`. You may need to do this as `sudo` if you encounter the error where `screen`, instead of starting up a new shell, returns `[screen is terminating]` and quits. If `screen` started up correctly, you should see a slightly different shell prompt (and if you started it as `sudo`, you will now be logged in as root).

In some scenarios, you may want to ‘name’ your `screen` session. Typically, this is the case when you want to share your screen with another user, e.g. for pair programming. To create a named screen, invoke `screen` using the session name parameter `-S`, as in e.g. `screen -S my_shared_screen`.

### Step 2
In this step, we will launch the actual script. If your script is Python-based and you are using `virtualenv` (as you ought to!), activate the environment now using `source <virtualenv folder>/<environment name>/bin/activate`, replacing `<virtualenv folder>` by the name of the folder where your `virtualenv`s live (for me, that’s the `environments` folder; often enough it’s something like `~/.virtualenvs`) and `<environment name>` by the name of your virtualenv (in my case, `research`). You have to activate your virtualenv even if you have done so outside of `screen` already (remember, `screen` puts you in an entirely new shell, with all environment configurations, settings, aliases &c. gone)!

With your `virtualenv` activated, launch your script as normal – no need to launch it in the background. Indeed, one of the big advantages of `screen` is the ability to watch verbose progress indicators. If your script does not log progress to `stdout` but to a logfile, you can start it using `nohup`, then put it into the background (`Ctrl+Z`, then `bg`) and track progress using `tail -f logfile.log` (where `logfile.log` is, of course, to be substituted by the filename of the logfile).

### Step 3
Press `Ctrl+A` followed by `Ctrl+D` to detach from the current screen. This will take you back to your original shell after noting the address of the screen you’re detaching from. These always follow the format `<socket identifier>.<session name>.<hostname>`, where `<hostname>` is, of course, the hostname of the computer from which the `screen` session was started, `<session name>` stands for the name you gave your screen, if any, and `<socket identifier>` is an autogenerated 4-6 digit socket identifier. In general, as long as you are on the same machine, the socket identifier or the session name will be sufficient – the full canonical name is only necessary when trying to access a screen on another host.

To see a list of all screens running under your current username, enter `screen -list`. Refer to that listing or the address echoed when you detached from the `screen` to reattach to the process using `screen -r <socket identifier>[.<session identifier>.<hostname>]`. This will return you to the script, which keeps executing in the background.

### Result
Reattaching to the process running in the background, you can now follow the progress of the script. Use the key combination in Step 3 to detach from the process at any time, and the reattachment command from the same step to return to it.

### Bugs
There is a known issue, caused by strace, that leads to `screen` immediately closing, with the message `[screen is terminating]` upon invoking `screen` as a non-privileged user.

There are generally two ways to resolve this issue.

- Use a privileged user account and always invoke `screen` as `sudo`.
- As a privileged user, change the permissions of `screen` to `2775` by entering `sudo chmod 2775 $(which screen)`. The leading `2` sets the setgid bit, so `screen` always executes with the privileges of its group owner, which means that repeated sudoing will not be necessary.

The overall effect of both solutions is the same. Notably, both may be undesirable from a security perspective. As always, weigh risks against utility.

Do you prefer `screen` to staying logged in? Do you have any other cool hacks to make monitoring a machine learning process that takes considerable time to run? Let me know in the comments!

Image credits: Zenith Z-19 by ajmexico on Flickr

# Fixing the mysterious Jupyter Tensorflow import bug

There’s a weird bug afoot that you might encounter when setting up a ‘lily white’ (brand new) development environment to play around with Tensorflow. As it seems to have vexed quite a few people, I thought I’d put my solution here to help future tensorflowers find their way. The problem presents after you have set up your new `virtualenv`. You install Jupyter and Tensorflow, and when importing, you get this:

```
In [1]: import tensorflow as tf

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-...> in <module>()
----> 1 import tensorflow as tf

ModuleNotFoundError: No module named 'tensorflow'
```

Oh.

Say you are a dogged pursuer of bugs, and wish to check whether you might have installed Tensorflow and Jupyter into different virtualenvs. One way to do that is to simply activate your virtualenv (using `source <env>/bin/activate`, or `workon <env>` if you use virtualenvwrapper) and start a Python shell. Perplexingly, importing Tensorflow there will work just fine.
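A quick way to see whether your notebook and your shell disagree about which Python they are using is to compare interpreter paths. This is my own diagnostic sketch, not part of the original workflow – run it once in the Jupyter notebook and once in the virtualenv’s Python shell and compare:

```python
import shutil
import sys

# The interpreter this code is running under: inside Jupyter, this is the
# kernel's Python; inside your activated virtualenv shell, the venv's Python.
print(sys.executable)

# The executable your shell would launch for `jupyter`, if any. If this path
# does not live under your virtualenv, Jupyter runs on a different interpreter
# than the one you installed Tensorflow into.
print(shutil.which("jupyter"))
```

If the two printouts point at different installations, you have found your culprit.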

### The solution

**Caution:** At this time, this works only for CPython aka ‘regular Python’ (if you don’t know what kind of Python you are running, it is in all likelihood CPython).

**Note:** In general, it is advisable to start fixing these issues by destroying your virtualenv and starting anew, although that’s not strictly necessary. Create a virtualenv, and note the base Python executable’s version (it has to be a version for which there is a Tensorflow wheel for your platform, i.e. 2.7 or 3.3-3.6).
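Recreating the environment might look like this (the folder name `tfenv` and the use of the standard library’s `venv` module are my illustrative assumptions; plain `virtualenv` works the same way):

```shell
# Create a fresh virtualenv (the name "tfenv" is illustrative) and check
# which Python version it is based on -- it must be one for which a
# Tensorflow wheel exists for your platform.
python3 -m venv tfenv
tfenv/bin/python --version
```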

### Step 1

Go to the PyPI website to find the Tensorflow wheel appropriate to your system and your Python version (e.g. `cp36` for Python 3.6). Copy the path of the correct version, then open up a terminal window and declare it as the environment variable `TF_BINARY_URL`. Use `pip` to install from the URL you set as the environment variable, then install Jupyter.

```
CVoncsefalvay@orinoco ~ $ export TF_BINARY_URL=https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
CVoncsefalvay@orinoco ~ $ pip install --upgrade $TF_BINARY_URL jupyter
Collecting tensorflow==1.1.0rc2 from https://pypi.python.org/packages/b1/74/873a5fc04f1aa8d275ef1349b25c75dd87cbd7fb84fe41fc8c0a1d9afbe9/tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl#md5=c9b6f7741d955d1d3b4991a7942f48b9
  Using cached tensorflow-1.1.0rc2-cp36-cp36m-macosx_10_11_x86_64.whl
Collecting jupyter
  Using cached jupyter-1.0.0-py2.py3-none-any.whl

(... lots more installation steps to follow ...)

Successfully installed ipykernel-4.6.1 ipython-6.0.0 jedi-0.10.2 jinja2-2.9.6 jupyter-1.0.0 jupyter-client-5.0.1 jupyter-console-5.1.0 notebook-5.0.0 prompt-toolkit-1.0.14 protobuf-3.2.0 qtconsole-4.3.0 setuptools-35.0.1 tensorflow-1.1.0rc2 tornado-4.5.1 webencodings-0.5.1 werkzeug-0.12.1
```

### Step 2
Now for some magic. If you launch Jupyter now, there’s a good chance it won’t find Tensorflow. Why? Because even though you just installed Jupyter into the virtualenv, your shell may still resolve the `jupyter` command to your system Python installation rather than the one in the virtualenv.

Enter `which jupyter` to find out where the `jupyter` command is pointing. If it is pointing to a path within your virtualenvs folder, you’re good to go. Otherwise, open a new terminal window and activate your virtualenv. Check where the `jupyter` command is pointing now – it should point to the virtualenv.

### Step 3
Fire up Jupyter, and `import tensorflow`. Voila – you have a fully working Tensorflow environment!

As always, let me know if it works for you in the comments, or if you’ve found some alternative ways to fix this issue. Hopefully, this helps you on your way to delve into Tensorflow and explore this fantastic deep learning framework!

Header image: courtesy of Jeff Dean, Large Scale Deep Learning for Intelligent Computer Systems, adapted from Untangling invariant object recognition by DiCarlo and Cox (2007).

If you develop for Amazon’s Alexa-powered devices, you must at some point have come across Flask-Ask, a project by John Wheeler that lets you quickly and easily build Python-based Skills for Alexa. It’s so easy, in fact, that John’s quickstart video, showing the creation of a Flask-Ask based Skill from zero to hero, takes less than five minutes! How awesome is that? Very awesome.

Bootstrapping a Flask-Ask project is not difficult – in fact, it’s pretty easy, but also pretty repetitive. And so, being the ingenious lazy developer I am, I’ve come up with a (somewhat opinionated) cookiecutter template for Flask-Ask.

## Usage

Using the Flask-Ask cookiecutter should be trivial. Make sure you have `cookiecutter` installed, either in a virtualenv that you have activated or in your system installation of Python. Then, simply use `cookiecutter gh:chrisvoncsefalvay/cookiecutter-flask-ask` to get started. Answer the friendly assistant’s questions, and voila! You have the basics of a Flask-Ask project all scaffolded.

Once you have scaffolded your project, you will have to create a virtualenv for your project and install dependencies by invoking `pip install -r requirements.txt`. You will also need `ngrok` to test your skill from your local device.

## What’s in the box?

The cookiecutter has been configured with my Flask-Ask development preferences in mind, which in turn borrow heavily from John Wheeler’s. The cookiecutter provides a scaffold of a Flask application, including not only session start handlers and an example intent but also a number of handlers for built-in Alexa intents, such as `Yes`, `No` and `Help`.

There is also a folder structure you might find useful, including an intent schema for some basic Amazon intents and a corresponding empty `sample_utterances.txt` file, as well as a gitkeep’d folder for custom slot types. Because I’m a huge fan of Sphinx documentation and strongly believe that voice apps need to be assiduously documented to live up to their potential, there is also a `docs/` folder with a `Makefile` and an opinionated `conf.py` configuration file.

## Is that all?!

Blissfully, yes, it is. Thanks to John’s extremely efficient and easy-to-use Flask-Ask project, you can discourse with your very own skill less than twenty minutes after starting the scaffolding!

You can find the cookiecutter-flask-ask project here. Issues, bugs and other woes are welcome, as are contributions (simply raise a pull request). For help and advice, you can find me on the Flask-Ask Gitter a lot during daytime CET.

# Diffie-Hellman in under 25 lines

How can you and I agree on a secret without an eavesdropper being able to intercept our communications? At first, the idea sounds absurd – for the longest time, without a pre-shared secret, encryption was seen as impossible. In World War II, the Enigma machines relied on a fairly complex pre-shared secret – the Enigma configurations (consisting of the rotor drum wirings and number of rotors specific to the model, the Ringstellung of the day, and Steckerbrett configurations) were effectively the pre-shared key. During the Cold War, field operatives were provided with one-time pads (OTPs), randomly (if they were lucky) or pseudorandomly (if they weren’t, which was most of the time) generated,[1] with which to encrypt their messages. Cold War era Soviet OTPs were, of course, vulnerable because, like most Soviet things, they were manufactured sloppily.[2] But OTPs suffer from a big problem: if the key is known, the entire scheme of encryption is defeated. And somehow, you need to get that key to your field operative.

Enter the triad of Merkle, Diffie and Hellman, who in 1976 found a way to exploit the fact that modular exponentiation is simple, but its inverse – taking a discrete logarithm – is computationally difficult. From this, they derived the algorithm that came to be known as the Diffie-Hellman algorithm.[3]

## How to cook up a key exchange algorithm

The idea of a key exchange algorithm is to end up with a shared secret without having to exchange anything that would require transmission of the secret. In other words, the assumption is that the communication channel is unsafe. The algorithm must withstand an eavesdropper knowing every single exchange.

Alice and Bob must first agree to use a modulus $p$ and a base $g$, so that the base is a primitive root modulo the modulus.

Alice and Bob each choose a secret key $a$ and $b$ respectively – ideally, randomly generated. The parties then exchange $A = g^a \mod(p)$ (for Alice) and $B = g^b \mod(p)$ (for Bob).

Alice now has received $B$. She goes on to compute the shared secret $s$ by calculating $B^a \mod(p)$ and Bob computes it by calculating $A^b \mod(p)$.

The whole story is premised on the equality of

$A^b \mod(p) = B^a \mod(p)$

That this holds nearly trivially true should be evident from substituting $g^b$ for $B$ and $g^a$ for $A$. Then,

$g^{ab} \mod(p) = g^{ba} \mod(p)$

Thus, both parties get the same shared secret. An eavesdropper would be able to get $A$ and $B$. Given a sufficiently large prime for $p$, in the range of 600-700 digits, the discrete logarithm problem of retrieving $a$ from $A = g^a \mod(p)$ in the knowledge of $g$ and $p$ is not efficiently solvable, not even given fairly extensive computing resources.
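True to the title, the whole exchange fits in a handful of lines of Python. The sketch below uses toy parameters ($p = 23$, $g = 5$); a real implementation would use a prime of hundreds of digits, but the structure of the exchange is identical:

```python
import secrets

def dh_shared_secret(p: int, g: int, a: int, b: int) -> int:
    """Given the public modulus p, public base g and the parties' private
    keys a and b, compute the shared secret both sides arrive at."""
    A = pow(g, a, p)          # Alice transmits A = g^a mod p
    B = pow(g, b, p)          # Bob transmits B = g^b mod p
    s_alice = pow(B, a, p)    # Alice computes B^a mod p
    s_bob = pow(A, b, p)      # Bob computes A^b mod p
    assert s_alice == s_bob   # g^(ab) = g^(ba) mod p
    return s_alice

# Toy parameters: 5 is a primitive root modulo 23. Private keys are drawn
# from 1..p-2 with cryptographically secure randomness.
p, g = 23, 5
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
print(dh_shared_secret(p, g, a, b))
```

Note that only $p$, $g$, $A$ and $B$ ever cross the wire; the private keys $a$ and $b$, and hence the shared secret, never do.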

References

1. As a child, I once built a pseudorandom number generator from a sound card, a piece of wire and some stray radio electronics, which basically rested on a sampling of atmospheric noise. I was surprised to learn much later that this was the method the KGB used as well.
2. Under pressure from the advancing German Wehrmacht in 1941, they had duplicated over 30,000 pages’ worth of OTP code. This broke the golden rule of OTPs of never, ever reusing code, and ended up with a backdoor that two of the most eminent female cryptanalysts of the 20th century, Genevieve Grotjan Feinstein and Meredith Gardner, on whose shoulders the success of the Venona project rested, could exploit.
3. It deserves noting that the D-H key exchange algorithm was another of those inventions that were invented twice but published once. In 1975, the GCHQ team around Clifford Cocks invented the same algorithm, but was barred from publishing it. Their achievements weren’t recognised until 1997.