In this tutorial, we’re going to cover installing Docker on Windows, enabling Kubernetes, installing Helm, and testing it all by running a machine learning notebook server. Then we’ll cover a few extras like docker-compose and Visual Studio Code extensions.
Firstly, the example environment I’m setting this up in has a normal internet connection (no complex proxy configuration) and is a 64-bit PC running a 64-bit copy of Windows 10 Pro with 4 cores and 16GB of RAM. As a minimum, you need around 8GB of RAM and 2 cores to be able to run anything sensible. You will need the following downloads:
- Git for Windows (https://gitforwindows.org/), which gives you the git command-line tools and a bash shell. Very useful.
- Docker Desktop for Windows installer ( https://docs.docker.com/docker-for-windows/install/)
- Helm for Windows (https://github.com/helm/helm/releases)
Bringing Docker to life
- Run the Docker for Windows installer, and leave “Use windows containers…” unchecked.
- Select close and restart; this will restart your machine and start Docker when Windows starts.
- Once Windows has restarted you will see a little animated Docker icon in your taskbar, which shows that Docker is starting; this can take anywhere from 20 seconds to a couple of minutes.
- Once it has completed its startup procedure you will see a welcome card. You don’t need to register or log in to use Docker on Windows.
- Open a bash prompt (Win key -> bash -> Enter). You can then run the docker hello-world container.
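The command-line snippet didn’t survive the formatting here; the canonical smoke test is the standard hello-world run:

```shell
# Pulls the hello-world image from Docker Hub (if not already cached),
# runs it, prints a greeting message and exits.
docker run hello-world
```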
The hello-world container test will verify:
- The Docker service is installed and running correctly.
- The docker command is in your execution path and can communicate with the docker service.
- You have permission to run docker commands.
- The docker service can reach Docker Hub, the default container registry.
- The docker service can run that container.
What if it all goes wrong?
The Docker install and startup process can be a little temperamental, so you may need the diagnostic logs, which you can access through the Docker Dashboard. Click the Troubleshoot icon and, in the diagnostics pane, there is a link to the log file, which will open in Notepad. Alternatively, you can find it at “C:\Users\User\AppData\Local\Docker”.
Set sail with Kubernetes
If you don’t know what Kubernetes (k8s for short) is or what it is good for, there is a good explanation from the k8s team you can look through. Kubernetes is Greek for helmsman or pilot, hence the icon being a ship’s helm and the nautical themes you’ll see everywhere. K8s on Windows used to be extremely painful to set up, but it has been made a lot easier. Before enabling k8s it is sensible to increase the RAM available to Docker from the default 2GB to 4GB; if you have the capacity, 8GB will make it run much quicker. The settings are found in the Docker Dashboard, launched from the taskbar icon, by clicking on the cog symbol. Click “Apply & Restart” for the new memory limits to take effect.
Kubernetes can then be enabled by simply ticking “Enable Kubernetes” and clicking “Apply & Restart”. This will download and initialise all the k8s containers required to get a single-node k8s cluster running on your local machine. Go and make a coffee; this can take a long time. If after 15–20 minutes it has still not finished (usually due to low memory), restart the machine and it should all start up with Windows. You can test that k8s is running by using the kubectl command below.
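The kubectl snippet was lost in this version of the article; a reasonable check, assuming the kubectl bundled with Docker Desktop is on your path, is:

```shell
# The single local node should report a STATUS of "Ready"
kubectl get nodes
# Shows the API server and DNS endpoints of the local cluster
kubectl cluster-info
```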
Let’s deploy something!
Grab the Helm
As you may have read, k8s is super powerful and able to run huge clusters and manage very large numbers of resources, but it can be, to put it delicately, horrible to administer. To help with this, and to provide pre-packaged deployments to run on your cluster, the Helm project was created: a relatively easy-to-use package manager for k8s. Unzip the Helm package you downloaded earlier and find the helm.exe file. It can be placed anywhere on your path, but I prefer to put it in “C:\Program Files\Docker\Docker\resources\bin” with all the other Docker command-line tools.
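Once helm.exe is on your path, you can sanity-check it from any shell:

```shell
# Prints the helm client version; if this works, helm is on your path
helm version
```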
Launching an example application
To get a Helm application running you need to add a chart repository, update the index and install the chart. The helm install output usually ends with some helpful instructions for connecting to your newly created service.
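The exact commands depend on the chart you pick; the repository name, URL and release name below are placeholders, not a real repository (the article later uses a tensorflow-notebook chart as its example):

```shell
# Placeholder repo name and URL - substitute the repository hosting your chart
helm repo add example-charts https://charts.example.com
# Refresh the local index of available charts
helm repo update
# Install the chart as a release named "my-notebook"
# (Helm v2 uses "helm install --name my-notebook ..." instead)
helm install my-notebook example-charts/tensorflow-notebook
```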
The chart is sent to the k8s cluster and the application is initialised, but this isn’t immediate: it takes time to download the container images and run them. This chart creates two containers, and while they are still being created the pods aren’t ready, even though Helm already lists the application as deployed. This can take a while.
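You can watch the rollout with kubectl while waiting:

```shell
# READY shows e.g. 0/2 while the two containers are still being created
kubectl get pods
# The release STATUS reads "deployed" even before the pods are ready
helm list
```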
Once a chart has deployed there are sometimes default credentials that need to be used to log in. These can be specified in the helm chart, but they can also be generated automatically and be unique to each instance. Such credentials are stored in the k8s secrets system, and you retrieve them using the secret command, e.g.
$ kubectl get secret tensorflow-notebook-<unique_id> -o yaml
In the output YAML, the secret values are base64-encoded; they can be decoded using the base64 command.
$ echo 'XXXXXXXXXXX==' | base64 --decode
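For example, decoding an illustrative encoded value (not a real secret from any deployment):

```shell
# 'cGFzc3dvcmQxMjM=' is just the string "password123" base64-encoded
echo 'cGFzc3dvcmQxMjM=' | base64 --decode
```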
Give yourself a high five, you have an awesome setup
Some useful extras
Before all the fancy cloud and enterprise-ready k8s tools appeared, there was docker-compose. It is not usually used these days for deploying enterprise applications, but for messing about with containers it is a much simpler and easier way to get started, and for that docker-compose is essential. Using a super simple compose file you can run a bunch of services with a single “docker-compose up -d” command.
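The compose snippet the next paragraph refers to is missing from this version of the article; a minimal stand-in, assuming a plain nginx container as the web service, would be:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # browse to http://localhost:8080
```

Any single-service compose file with a web-facing image will do for the walkthrough.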
Make a directory like compose_test, put the snippet above in a file called docker-compose.yml, run “docker-compose up” in that directory, and you have a running web service. I use docker-compose for development, testing and POC deployments 90% of the time; it’s a lot easier to understand and work with for development work.
Visual Studio Code
If you’ve not used Visual Studio Code yet, where have you been?! It’s an excellent free, lightweight editor from Microsoft; it has no dependencies on or requirements from Visual Studio, and has extensive language support. Its extension ecosystem includes a large number of excellent add-ons that work with k8s, Helm, Docker and compose files. I highly recommend installing the “Kubernetes” and “Docker” extensions by Microsoft.
Something to look out for
Docker for Windows uses the Hyper-V virtualisation built into more recent versions of Windows. This is the toolset behind WSL2 and Microsoft’s virtualisation services. Until recently, VirtualBox was the best option for quick and easy virtualisation on Windows, and it was impossible for Hyper-V and VirtualBox to coexist on the same system. With VirtualBox 6.1.x this limitation has supposedly been removed, and you can run VirtualBox using the Hyper-V backend. I have not tested this, so it may not work well if you have existing VMs you want to use. If it doesn’t work, you can roll back to just VirtualBox by uninstalling Docker and disabling Hyper-V, so there is no harm in trying it out.