This week in Sudo deals with Docker and the power of containerisation. Many of you might not have come across VMs yet, so Docker may seem a strange place to start.

We'll start by explaining Docker and containerisation, then there are installation instructions and some 🍑 juicy demos below that.

What is Containerisation?

In general terms, containerisation means splitting our application up into multiple distinct runtime environments (the places we plan to run each piece in), and specifically having only one container for each component or subroutine of our application.

For example: in a LAMP stack, instead of a single server with our website, application and database all on one virtualised machine, we would separate the components out so that each one a) runs with only the dependencies that component needs, and b) runs as its own container.

Deconstructed, our Apache runtime would serve as the web container, which serves content from our PHP application (call it the app container), which in turn connects to a MySQL db container where we store our data.
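A rough sketch of that idea with plain docker run commands might look like the following. The images are the official Docker Hub ones; the network name, password and port are just placeholders, and a real app would also need configuration and volumes.

# shared network so the containers can find each other by name
docker network create lamp-net
# MySQL as its own db container; the root password here is a placeholder
docker run -d --name lamp-db --network lamp-net -e MYSQL_ROOT_PASSWORD=example mysql
# php:apache bundles the Apache runtime and PHP into one official image for the web/app side
docker run -d --name lamp-app --network lamp-net -p 8080:80 php:apache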

Docker (https://www.docker.com/) is the main engine behind containerisation. It allows us to define and run containers, describe applications and their runtimes in code, cache images, and scale across more than just one host.

Installation

Download here:
https://www.docker.com/get-started (ensure you meet the minimum requirements, incl. Windows 7/10 Pro or better; Home will not do)
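Once it's installed, you can check everything works with a couple of standard Docker commands:

docker --version
docker run hello-world

The hello-world image just prints a confirmation message and exits.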

Sudo Docker Examples:
To get started with the demos, this guide uses the Sudo Docker example repo on GitHub. You can download it by running git clone:

git clone https://github.com/sudouc/docker-example.git

Demo 1 – Basic Website Running

For this example, we'll download an nginx Docker image and run it using the docker run command. With -p we say "our port 8888 will be bound to the container's port 80", so we can communicate with the container through localhost. With -v we mount the public directory inside our current working directory onto /usr/share/nginx/html with read-only permissions. With --name we call the container "sudo-web-example", and lastly -d runs it detached in the background.

Go to docker-example/basic-web in the terminal and run the following commands:

cd docker-example/basic-web
docker run -p 8888:80 -v `pwd`/public:/usr/share/nginx/html:ro --name sudo-web-example -d nginx

Then navigate to http://localhost:8888 and you should see the example page served out of the public directory.
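When you're finished with this demo, you can stop and remove the container with the standard commands:

docker stop sudo-web-example
docker rm sudo-web-example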

Demo 2 – Compile and Run Languages: Rust

This one is pretty simple. Docker also provides a way for us to compile and run programs without having to install any of their dependencies onto our local machine.

Let's say we want to run some Rust code:

fn main() {
    println!("Hello, Sudo!");
}

This becomes really easy in Docker. Using the following commands, we can build an image that compiles the code and then run it as a container (-it gives us an interactive terminal and --rm removes the container once it exits):

cd docker-example/rust-example
docker build -t my-rust-app .
docker run -it --rm --name my-running-app my-rust-app
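For reference, the Dockerfile that docker build picks up inside rust-example will look something like the sketch below. This is based on the example in the official rust image documentation rather than the repo itself, so the exact contents may differ; the binary name my-rust-app is a placeholder for whatever the Cargo package is called.

# Build on the official Rust image, which ships with rustc and cargo
FROM rust:latest
WORKDIR /usr/src/my-rust-app
# Copy Cargo.toml and src/ into the image
COPY . .
# Compile the project and install the binary onto the PATH
RUN cargo install --path .
# Run the compiled binary when the container starts
CMD ["my-rust-app"]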

Demo 3 – Scaling

Last Demo! Scaling is where we balance the load of our app across multiple containers.

For the docker-compose.yml file seen below, we're going to use the dockercloud/haproxy image, which automatically listens on ports 80 (http) and 443 (https) and balances load across 20 replicas of our sudoapp container, which we can pretend is a large, single-threaded application, something that computes a lot of things in order to respond to the user.

It's actually 6 lines of JavaScript, but use your imagination.

You'll also notice that update_config specifies how we want this stack to be updated: 10 containers at a time, with a 0s delay in between update batches.

version: '3'

services:
  sudoapp:
    image: sudoapp
    ports:
      - 8080
    environment:
      - SERVICE_PORTS=8080
    deploy:
      replicas: 20
      update_config:
        parallelism: 10
        delay: 0s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s
    networks:
      - web

  proxy:
    image: dockercloud/haproxy
    depends_on:
      - sudoapp
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  web:
    driver: overlay

So that just leaves deploying our application.

cd docker-example/scaling
docker swarm init
docker build -t sudoapp .
docker stack deploy --compose-file=docker-compose.yml prod

That will result in the prod stack running, with 20 replicas of sudoapp sitting behind the proxy.
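You can also inspect the stack from the terminal with the standard swarm commands:

docker stack services prod
docker stack ps prod

The first lists each service with its replica count (you should see 20/20 for prod_sudoapp), and the second lists the individual tasks and the node each one is running on.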

Deploying a new version of the application.

Probably the most fun part. Deploying a new version can happen in stages, which can prevent downtime: if the newly deployed containers exit early (meaning they crashed), Docker knows to stop the deployment process, and HAProxy will continue serving the containers that are still running the already-deployed code.

This means that although the new code may crash its containers, you'll still be able to serve customers visiting the site.

To do this, we need to modify the JS file in a way that makes it obvious something has changed. I personally said Hello to my friend Chris with the following:

var http = require('http');
var os = require('os');

http.createServer(function (req, res) {
	res.writeHead(200, {'Content-Type': 'text/html'});
	res.end(`<h1>I'm ${os.hostname()} - Hello Christian</h1>`);
}).listen(8080);

Then we'll build the new image with the tag v2 and deploy it to our stack. You'll notice the name of the service to update is `{{stack_name}}_{{app_name}}`, which in our case is prod_sudoapp.

docker build -t sudoapp:v2 .
docker service update --image sudoapp:v2 prod_sudoapp

After you do that, we can see our staged deployment in action:
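If you'd rather watch from the terminal, the standard way is to list the tasks for the service and see the old ones shut down as v2 tasks start, 10 at a time:

docker service ps prod_sudoapp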

Removing the deployment from Docker

All the containers running will continue to run unless removed. To remove the deployment, run the following:

docker stack rm prod
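If you also want to take your machine back out of swarm mode afterwards (optional, and only if you're not using swarm for anything else), the standard command is:

docker swarm leave --force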