Demystifying Docker

Docker containers have been receiving a lot of attention for a while now because they let you create and destroy environments quickly and conveniently. From development to deployment, containers give you consistent, isolated environments in just a few steps. In this post we'll look at what containers are and how they can help you alleviate the pains of environment configuration.

What is Docker?

Docker is a system that enables you to package an application and its dependencies in a self-contained environment with a predefined configuration, for quicker deployments and less maintenance. You define what your application's environment should look like and how your application starts up in a file called a Dockerfile. Deploying your application is then as simple as building an image from that file and telling Docker to start a container from it.

To get a simple container running (for example, one that just starts up a bash shell in an Ubuntu environment), all you need to do is install Docker and run this command:

docker run -it --rm ubuntu  

Output Screenshot

Output from running an Ubuntu container.
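The -i and -t flags attach an interactive terminal to the container, and --rm removes the container once the shell exits. Written out in long form, the same command looks like this:

docker run --interactive --tty --rm ubuntu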

While the image used for the container is being downloaded, you'll notice that multiple parts are downloaded independently. Docker images, which containers are built from, consist of multiple layers that each define a change or a set of changes to the file system. Each command in a Dockerfile generates a new layer, which can be cached and reused by other images that share the same preceding layers. This modular approach means that starting hundreds of containers from a common image requires only a single download of that image.
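You can inspect these layers for an image you have already pulled with the docker history command, which lists each layer along with the instruction that created it:

docker history ubuntu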

Defining environments in this way gives you more assurance that all environments will behave the same way, without manual installation and configuration. You can even use Docker containers on development machines during testing to avoid "it works on my machine" problems. Many of the major cloud providers, such as AWS and Azure, already support deploying Docker containers.

Docker containers are meant to be lightweight and can be created and destroyed within seconds. File storage inside a container is therefore ephemeral. Persistent storage is instead achieved by attaching volumes to the container, which are mappings from directories on the host machine to directories in the container. When files are written to those volumes, they are persisted to the host filesystem and will be available again should the container be recreated for whatever reason.
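As a minimal sketch (the paths here are just illustrative), mounting a host directory into a container looks like this:

docker run -it --rm -v /srv/appdata:/app/data ubuntu

Anything the container writes under /app/data ends up in /srv/appdata on the host and survives the container being removed.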

Docker offers flexible options for communication between containers on the same host and with outside networks. To enable containers to communicate with each other, you can create links between containers, allowing them to access each other via an alias. To allow a container to communicate with the outside world, you can specify ports on the container to be forwarded via the host.
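For example, to make a web server container reachable from outside the host and let another container talk to it by alias (the names and ports here are illustrative), you could run:

docker run -d -p 8080:80 --name web nginx
docker run -it --rm --link web:web ubuntu

The first command forwards port 8080 on the host to port 80 in the container; the second starts a container that can reach the web server using the alias web.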

How do containers differ from Virtual Machines?

One of the most common phrases you might hear when people explain Docker containers is "lightweight VMs". Although the two are similar in purpose, they are technically quite different.

Difference between virtual machines (Left) and Docker containers (Right) (Source)

Virtual Machines

Virtual machines are complete operating systems (guest operating systems) running on top of a hypervisor (also called a virtual machine monitor). Hypervisors run on the host machine and mediate between the guest operating systems and the host's hardware, translating the guests' privileged operations into the equivalent operations on the host. Each virtual machine gets allocated a portion of the available resources (CPUs, disk space etc.) on creation, regardless of how much of them it will actually use, which limits the number of VMs that can run on a host. There is also a startup and performance cost, as each virtual machine has to boot up a whole new copy of an operating system kernel.

Docker containers

Docker containers, on the other hand, only include the libraries and binaries specific to the operating system you want to run. The containers then share the host machine's resources. Resource limits can still be placed on each container to prevent them from overwhelming the system. Containers do not include their own operating system kernel, but instead use the host's kernel, eliminating the overhead caused by the hypervisor. Starting up a container only starts the user processes required, as the host kernel has already been booted. The separation between these container processes is provided by kernel features like namespaces.

Sharing the host kernel allows containers to start up within seconds, as opposed to the minutes required by virtual machines. Containers also do not need to be assigned resources up front, and the host can run as many of them simultaneously as it can handle.
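For example, you can cap a container's memory and CPU usage when you start it (on recent Docker versions; the limits here are arbitrary examples):

docker run -it --rm -m 512m --cpus 1 ubuntu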

Because containers are less isolated from one another than virtual machines are, they might not be the best choice in all situations.

How do Linux containers run on Windows and OSX?

Because containers share the host OS's kernel, Linux containers can natively run only on Linux machines. They can, however, be run on Windows and OSX with the help of a hypervisor.

On Windows, switching to Linux container mode causes the Docker Engine to start a MobyLinux VM in Hyper-V, which acts as the host on which the Linux containers run. This may sound like it defeats the purpose of having containers, but you still get the benefit of being able to run many containers on a single VM without duplicating the kernel.

How to get started

This example covers how to get started using Docker with .NET Core on Windows, but the Docker commands and the structure of a Dockerfile remain the same across platforms.

Prerequisites

Before you can start building and running Docker containers, you will need to install Docker for Windows (Download). Note: the installer will enable Hyper-V, which cannot run alongside VirtualBox. Docker for Windows also requires 64-bit Windows 10, but an alternative tool called Docker Toolbox is available if your system does not meet this requirement.

To follow the tutorial below, you will also need .NET Core installed.

Creating your application

If you have Visual Studio 2017 installed with the .NET Core workload, you will see a new option to add Docker support to your project when creating .NET Core applications. Here, I will be creating a Web Application project with Docker support enabled:

New Application

Creating a new application with Docker support in Visual Studio.

When the project is created, take note of two important parts of the solution that are not present in a normal .NET Core solution: the Dockerfile and a separate project called "docker-compose":

Solution Explorer

Structure of a .NET Core solution with Docker support.

The Dockerfile

As discussed previously, the Dockerfile defines the Docker image that your container will be constructed from. The default Dockerfile created with the project contains everything you need to run your project and looks like this:

FROM microsoft/aspnetcore:1.1

ARG source  
WORKDIR /app  
EXPOSE 80  
COPY ${source:-obj/Docker/publish} .

ENTRYPOINT ["dotnet", "DockerApp.dll"]  

This defines a number of things:

  • The FROM command defines the base image that we want to build upon. Here we want version 1.1 of .NET Core to run our application on. This image is a Debian Linux image with .NET Core already installed and set up. For those interested, you can check out the Dockerfile for that image here.
  • ARG defines a variable that we can set when building the image and use elsewhere in the Dockerfile.
  • WORKDIR specifies what the working directory will be inside the container when it starts up. Commands that modify the file system past this point work relative to this directory.
  • EXPOSE specifies which ports the container will be listening on. Since we are creating a web application here, we expose port 80.
  • COPY ${source:-obj/Docker/publish} . copies our application files into the container at the current working directory (/app). Notice that we use the source argument defined previously, or, if that is not set, the default path obj/Docker/publish.
  • Finally, the ENTRYPOINT command specifies how our application is started. Here we are telling it to run dotnet with our application DLL as a parameter.

Some other important commands include RUN, which allows you to run commands in the image while it is being built (before your application starts up), and ENV, which sets environment variables. A full reference of available Dockerfile keywords is available here.
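As a small illustrative sketch (not part of the generated Dockerfile), RUN and ENV could be used like this to install an extra package and set an environment variable in the image:

FROM microsoft/aspnetcore:1.1

ENV ASPNETCORE_ENVIRONMENT Production
RUN apt-get update && apt-get install -y curl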

To build an image from a Dockerfile, run the following, where the last argument is the build context (the directory containing your Dockerfile):

docker build -t <image_name> <path_to_build_context>
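Once the image is built, you can start a container from it and forward a host port to the port exposed in the Dockerfile, for example:

docker run -d -p 8080:80 <image_name>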

Docker Compose

As an alternative to building and running each of your Docker images manually, you can use Docker Compose. Docker Compose allows you to define your application as a set of microservices and combine their runtime configurations into one configuration file.

docker-compose.yml (Base):

version: '2'

services:  
  dockerapp:
    image: dockerapp
    build:
      context: ./DockerApp
      dockerfile: Dockerfile

docker-compose.vs.debug.yml (Debug override):

version: '2'

services:  
  dockerapp:
    image: dockerapp:dev
    build:
      args:
        source: ${DOCKER_BUILD_SOURCE}
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=1
    volumes:
      - ./DockerApp:/app
      - ~/.nuget/packages:/root/.nuget/packages:ro
      - ~/clrdbg:/clrdbg:ro
    entrypoint: tail -f /dev/null
    labels:
      - "com.microsoft.visualstudio.targetoperatingsystem=linux"

In the files above, you can see that the application is listed under the service definitions. These YAML ("YAML Ain't Markup Language") configuration files define which image to build for each service, as well as its arguments, environment variables, volumes and so on.

In this example, a service called 'dockerapp' is defined, whose container will be created from the dockerapp:dev image when the debug override is used. The path to the Dockerfile is specified, as well as the source path variable used in the Dockerfile. The volumes allow the Docker container to access the source files and dependencies stored on the host filesystem. The entrypoint field overrides the ENTRYPOINT from the Dockerfile. Here tail -f /dev/null simply keeps the container running indefinitely during debugging, so that the Visual Studio debugger can start and stop the .NET app inside the container and attach to it. Normally, you would put the command you want executed in the Docker container here.
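Outside of Visual Studio, you can build and start the services defined in the base file from the command line (the debug override above relies on Visual Studio supplying variables such as DOCKER_BUILD_SOURCE, so it is not used here):

docker-compose up -d
docker-compose down

The first command builds the images if necessary and starts the services in the background; the second stops and removes them again.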

Running the application

To run the application, click on the run button in Visual Studio:

Run Button

And there you have an instance of your .NET Core website, running inside a Docker container!

Running

.NET Core application running in a Docker container.

Conclusion

Docker is a convenient way to quickly create and deploy your application environments. This can be useful for a number of situations, ranging from development environments to production.

Although we looked at how Docker containers run on a single host, you can even orchestrate containers in a cluster of machines using platforms such as DC/OS, Kubernetes or Docker Swarm.
