Windows, and Docker, and Kubernetes (2020)
@ Openthought · Saturday, Jun 27, 2020 · 8 minute read · Update at Jun 27, 2020

In this tutorial, we’re going to cover installing Docker on Windows, enabling Kubernetes, installing helm and testing it all by running a machine learning notebook server. And then we’ll cover a few extras like docker-compose and Visual Studio Code Extensions.

Getting Started

Firstly the example environment I’m setting this up in has a normal internet connection (no complex proxy configuration) and is a 64-bit PC running a 64-bit copy of Windows 10 Pro with 4 cores and 16GB of RAM. Minimally you need around 8GB of RAM and 2 cores to be able to run anything sensible.

Pre Installed:

  • A 64-bit copy of Windows 10 Pro (or another edition that supports Hyper-V)

Download:

  • Docker Desktop for Windows
  • The helm command-line client for Windows (as a zip file)

Bringing Docker to life

  1. Run the Docker Desktop for Windows installer, leaving “Use windows containers…” unchecked.

  2. Select close and restart; this will restart your machine and start Docker when Windows starts.

  3. Once Windows has restarted you will see a little animated Docker icon in your taskbar which shows that Docker is starting; this can take anything from 20 seconds to a couple of minutes.

  4. Once it has completed its startup procedure you will see a welcome card. You do not need to log in or register with Docker to use it on Windows.

  5. Open a bash prompt (Win key -> bash -> Enter). You can then run the Docker hello-world container.
$ docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pulling fs layer
1b930d010525: Verifying Checksum
1b930d010525: Download complete
1b930d010525: Pull complete
Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac7f2fdd86d7e4e
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
...<rest of message removed>

The hello-world container test will verify:

  • Docker service is installed and running correctly.
  • The docker command is in your execution path and can communicate with the docker service.
  • You have permission to run docker commands.
  • The docker service can reach Docker Hub, the public container registry.
  • The docker service can run that container.
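
If you want one more quick sanity check, “docker version” reports both the client and the server (daemon) halves of the install; if the Server section is missing, the docker service isn’t running yet.

$ docker version                                  # prints a Client section and a Server (daemon) section
$ docker version --format '{{.Server.Version}}'   # just the daemon version, if that is all you need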

What if it all goes wrong?

Debugging

The docker install and startup process can be a little temperamental, so you can access the diagnostic logs through the Docker dashboard. Click the Troubleshoot icon and, in the diagnostics pane, there is a link to the logfile, which will open in Notepad. Alternatively you can find it under “C:\Users\<your username>\AppData\Local\Docker”.

Accessing the diagnostics logfile
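
If you prefer the command line, you can tail the same logs from your bash prompt. A minimal sketch, assuming a WSL-style /mnt/c mount (use /c/… from Git Bash) and the default log location above; the exact log file names vary between Docker Desktop versions, so check the directory listing first:

$ cd /mnt/c/Users/<your-username>/AppData/Local/Docker
$ ls -lt | head -n 5           # most recently written files first
$ tail -n 50 <newest-logfile>  # last 50 lines of whichever file is newest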

Set sail with Kubernetes

If you don’t know what Kubernetes (k8s for short) is or what it is good for, then there is a good explanation from the k8s team you can look through. Kubernetes is Greek for helmsman or pilot, hence the icon being a ship’s helm and the nautical themes you’ll see everywhere. K8s on Windows used to be extremely painful to set up, but it has been made a lot easier. Before enabling k8s it is probably sensible to increase the RAM available to Docker from the default 2GB to 4GB; if you have the capacity, 8GB will make it run much quicker. The settings are found in the Docker dashboard, launched from the taskbar icon, by clicking on the cog symbol. Click “Apply & Restart” for the new memory limits to take effect.
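
To confirm the new limits have been picked up after the restart, “docker info” reports the CPUs and total memory the docker VM can see:

$ docker info | grep -E 'CPUs|Total Memory'   # the memory figure should match what you set in the dashboard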

Kubernetes can then be enabled by simply clicking “Enable Kubernetes” and then “Apply & Restart”. This will download and initialise all the k8s containers required to get a single-node k8s cluster running on your local machine. Go and make a coffee; this can take a long time. If after 15–20 minutes it has still not finished (usually due to low memory), restart the machine and it should all start up with Windows. You can test that k8s is running by using the kubectl command below.

$ kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
docker-desktop   Ready    master   167m   v1.15.5
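
If kubectl reports a different cluster, or nothing at all, check that it is pointed at the Docker Desktop context and that the system pods have all come up. A quick sketch of the checks I usually run:

$ kubectl config current-context             # should print docker-desktop
$ kubectl config use-context docker-desktop  # select it if another cluster is active
$ kubectl get pods -n kube-system            # the system pods should all reach Running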

Let’s deploy something!

Grab the Helm

As you may have read, k8s is super powerful and able to run huge clusters and manage very large numbers of resources, but it can be, to put it delicately, horrible to administer. To help with this, and to provide pre-packaged deployments to run on your cluster, the Helm project was created: a relatively easy to use package manager for k8s. Unzip the helm package you downloaded earlier and find the helm.exe file. This file can be added to your path anywhere, but I prefer to put it in “C:\Program Files\Docker\Docker\resources\bin” with all the other Docker command-line tools.
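
Once helm.exe is on your path, a quick check from the prompt confirms it is being picked up (depending on your bash flavour you may need to call it as helm.exe):

$ helm version --short   # prints the helm client version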

Launching an example application

To get a helm application running you need to add the repo, update the index and install the chart. The helm install usually gives some helpful instructions at the end of the deployment information to get you connected to your newly created service.

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo update
$ helm install stable/tensorflow-notebook --generate-name
NAME: tensorflow-notebook-1584196283
LAST DEPLOYED: Sat Mar 14 14:31:25 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w tensorflow-notebook-1584196283'
export SERVICE_IP=$(kubectl get svc --namespace default tensorflow-notebook-1584196283 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo notebook_url=http://$SERVICE_IP:8888
echo tensorboard_url=http://$SERVICE_IP:6006

The chart is sent to the k8s cluster and the application is initialized, but this isn’t immediate. It will take time to download the container images and run them.

The chart is still starting up

The helm chart creates two containers, and here they are still being created. This can take a while.
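
You can also watch the startup progress from the command line rather than the dashboard; a quick sketch, using the release name helm generated above (yours will differ):

$ kubectl get pods -w                                 # -w watches; wait for STATUS to reach Running
$ kubectl get svc -w tensorflow-notebook-1584196283   # watch for the LoadBalancer address to appear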

The chart is up and running

Helm lists the application as deployed even if the pods aren’t ready. Once a chart has deployed there are sometimes default credentials that need to be used to log in. These can be specified in the helm chart, but they can also be generated automatically and be unique to each instance. To get credentials stored in the k8s secrets system, you retrieve them with kubectl’s secret command, e.g.

$ kubectl get secret tensorflow-notebook-<unique_id> -o yaml

apiVersion: v1
data:
  password: XXXXXXXXXXX==
kind: Secret
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
  creationTimestamp: "2020-03-14T14:31:25Z"
  labels:
    app: tensorflow-notebook
    chart: tensorflow-notebook-0.1.3
    heritage: Helm
    release: tensorflow-notebook-1584196283
  name: tensorflow-notebook-1584196283
  namespace: default
  resourceVersion: "8011"
  selfLink: /api/v1/namespaces/default/secrets/tensorflow-notebook-1584196283
  uid: 2fa1812e-b892-48de-b4ee-f7aa33fe27af
type: Opaque

The output of the secrets YAML

The encoded secret can then be decoded using the base64 command.

$ echo 'XXXXXXXXXXX==' | base64 --decode
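
If you would rather skip the YAML round trip, you can pull out just the password field and decode it in one line; a sketch using the release name from the install output above:

$ kubectl get secret tensorflow-notebook-1584196283 -o jsonpath='{.data.password}' | base64 --decode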
Now you can log in to your k8s hosted machine learning Tensorflow Jupyter Notebook service at http://127.0.0.1:8888. And a Tensorboard management interface at http://127.0.0.1:6006.

Tensorflow Jupyter notebook server

Tensorflow Dashboard

Give yourself a high five, you have an awesome setup

Some useful extras

Docker Compose

Before all the fancy cloud and enterprise-ready tooling from k8s appeared, there was docker-compose. It is not usually used these days for deploying enterprise applications, but for messing about with containers in a much simpler, easier-to-get-started way, docker-compose is essential. Using a super simple compose file you can bring up a bunch of services with a single “docker-compose up -d” command.

Simple docker-compose file
version: '3.3'
services:
  web:
    image: nginx
    ports:
      - 80:80

Make a directory like compose_test, put the snippet above in a file called docker-compose.yml, run “docker-compose up” in that directory and you have a running web service. I use docker-compose for development, testing and POC deployments 90% of the time. It’s a lot easier to understand and work with for development work.
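
A few companion commands are worth knowing once the stack is up; a minimal sketch using the example above (the service name “web” comes from the compose file):

$ docker-compose up -d         # start the services in the background
$ docker-compose ps            # list the services and their state
$ docker-compose logs -f web   # follow the logs for the web service
$ docker-compose down          # stop and remove the containers again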

Visual Studio Code

If you’ve not used Visual Studio Code yet, where have you been?! It’s an excellent free, light-weight editor from Microsoft; it has no dependency on Visual Studio and has extensive language support. As part of its ecosystem of extensions there are a large number of excellent add-ons that work with k8s, helm, Docker and compose files. I highly recommend installing the “Kubernetes” and “Docker” extensions by Microsoft.

Something to look out for

Some Caveats

Docker for Windows uses the Hyper-V virtualisation built into more recent versions of Windows. This is the toolset behind WSL 2 and Microsoft’s virtualisation services. Until recently, VirtualBox has been the best option for quick and easy virtualisation on Windows, and it was impossible for Hyper-V and VirtualBox to coexist on the same system. With VirtualBox 6.1.x this limitation has supposedly been removed and you can run VirtualBox on the Hyper-V backend. I have not tested this, so it may not work well if you have existing VMs you want to use. If it doesn’t work, you can roll back to just VirtualBox by disabling Hyper-V again, so there is no harm in trying it out.
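
If you want to check whether Hyper-V is currently active before experimenting, systeminfo will tell you; a sketch run from Git Bash (from a WSL prompt call it as systeminfo.exe):

$ systeminfo | grep -i "hyper-v"   # reports whether a hypervisor has been detected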

Next Steps
