
An Introduction to Docker


I started using Docker to develop and deploy some of my projects and want to give a short introduction to it. What is Docker?


Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.

(from the official web page)

This short sentence actually contains a whole lot of substance. In short: developers and sysadmins no longer have to make sure dependencies are met on every machine individually, be it development, staging, or production, running OS X, Fedora, or Ubuntu. The dependencies are part of the distributed package. Tests also don't have to be re-run every time a new version of a service is deployed, as the newly deployed instance is already tested. And it only had to be tested once.

To achieve all this, Docker utilizes container virtualization and provides higher level tools on top of that.

The only prerequisite to run a Docker container is an installation of docker. This official guide helps you install Docker pretty much anywhere: http://docs.docker.com/installation/#installation
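Once Docker is installed, a quick way to check that everything works (a common smoke test, not part of the guide above) is:

```shell
# Show client and daemon versions; this fails if the daemon is unreachable
docker version

# Pull and run a tiny test image that prints a confirmation message
docker run hello-world
```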

Docker and Its Official Tools

I will go briefly over some terms and available Docker tools here. The official user guide is very thorough and should help you get a better understanding of all the details.

The basic architecture is a client-server model. The Docker daemon acts as the server and is in charge of running GNU/Linux containers. The Docker client is a command-line interface for accessing the daemon. The client and the daemon talk over a REST(-ish) HTTP API. Together, they make up the Docker Engine.
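Because client and daemon communicate over an API, the client can just as well address a remote daemon. A sketch (the address is a placeholder):

```shell
# Talk to the local daemon (the default)
docker ps

# Point the client at a remote daemon over TCP (example address)
docker -H tcp://192.168.99.100:2376 ps

# The same, via an environment variable
export DOCKER_HOST=tcp://192.168.99.100:2376
docker ps
```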


Images

An image defines what a running container consists of. It represents the configuration of a service, like a web or database server. It usually consists of an operating system, installed packages and binaries, configuration files, a definition of what to do when running the container (e.g. run the web server), and some meta-data.


(Figure: Docker's multi-layer union filesystem. Source: https://docs.docker.com/terms/images/docker-filesystems-multilayer.png)

You always build images by making changes to the filesystem of a parent image. The filesystem is a layered, union filesystem. Every change you commit to the filesystem, e.g. installing a package, results in a new layer and thus in a new image. The new image has the old image as its parent. As a result, an image itself never changes; instead, new images are created by adding filesystem layers.
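You can see these layers with `docker history`, and create a new layer by committing a change made inside a container. A sketch (the image name and package are just examples, and the exact output depends on the image):

```shell
# Each line corresponds to one filesystem layer / build step
docker history ubuntu:14.04

# Make a change inside a container...
docker run -it ubuntu:14.04 bash -c "apt-get update && apt-get install -y curl"

# ...and commit it, producing a new image with one additional layer
# (replace <container-id> with the ID printed by `docker ps -a`)
docker commit <container-id> my/ubuntu-curl
```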

For more details, see the official user guide.


Containers

A container is a running instance of a service (image). A container's filesystem has an additional read-write layer on top of the image's layered read-only filesystem. Every change inside the container does not affect the image that was used to run the container.

By default, each container's network is also namespaced. But it is easy to link containers to give them network access to one another. This way, your application container can easily access your database container.
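Linking can be sketched like this (the application image `my/app` is a placeholder; the database uses the official postgres image):

```shell
# Run a database container named "db"
docker run -d --name db postgres

# Link the application container to it; inside "app", the database
# is then reachable under the hostname "db"
docker run -d --name app --link db:db my/app
```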

For more details, see the official user guide.


Dockerfiles

A Dockerfile defines how a certain image is built. It starts from a parent image and contains instructions on how to alter it. Every change is made in a new layer of the image's filesystem. The resulting image can be shared, e.g. via the Docker Hub, and used to run services anywhere.
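A minimal Dockerfile could look roughly like this (the parent image and the copied file are just examples):

```dockerfile
# Start from an official parent image
FROM nginx:latest

# Every instruction below adds a new layer on top of the parent
COPY index.html /usr/share/nginx/html/

# Meta-data: the port the service listens on
EXPOSE 80
```

Running `docker build -t my/website .` in the directory containing the Dockerfile then turns it into an image.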

For more details, see the official reference and best practices.

Docker Hub

The Docker Hub is a public registry of Docker images. After you've built an image, you can upload it to the Docker Hub, and other people can then run containers from it or use it as a parent image for their own images. You could, for example, use the official php repository as a parent image and create a WordPress image on top of it. For WordPress, however, there is also an image readily available.
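Pulling and pushing can be sketched as follows (`my/website` and the Hub username `yourname` are placeholders; pushing requires a Docker Hub account):

```shell
# Download the official WordPress image from the Docker Hub
docker pull wordpress

# Run a container from it, linked to a MySQL container named "db"
docker run -d --name blog --link db:mysql -p 8080:80 wordpress

# Tag and upload your own image under your Hub username
docker tag my/website yourname/website
docker push yourname/website
```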

For more details, see the official user guide.


Docker Compose

Docker Compose lets you define a combination of services and how they are linked, for example a web server, a PHP application, and a database server. The definition lives in a single YAML file (docker-compose.yml). One command then runs all containers with their respective configurations and links.
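For the web server / PHP application / database example, a docker-compose.yml could look roughly like this (image names, the build path, and the password are assumptions; this uses the first-generation Compose syntax):

```yaml
web:
  image: nginx
  ports:
    - "8080:80"
  links:
    - app
app:
  build: ./app
  links:
    - db
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A single `docker-compose up` then creates, links, and starts all three containers.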

For more details, see the official user guide.


Docker Machine

Docker Machine installs the Docker Engine on various environments, including local virtual machines and cloud-provided (virtual) hosts. For example: with a single command, you can start a DigitalOcean Droplet with a running Docker Engine that is accessible from your local Docker client. That means you can run Docker commands in the terminal as you always do, but everything, e.g. pulling an image or running a container, is executed on the newly provisioned host.
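The DigitalOcean example can be sketched like this (the machine name is arbitrary, and an API token is required):

```shell
# Provision a Droplet with a running Docker Engine
docker-machine create --driver digitalocean \
    --digitalocean-access-token $DO_TOKEN my-droplet

# Point the local Docker client at the new remote Engine
eval "$(docker-machine env my-droplet)"

# This now runs on the Droplet, not on your local machine
docker run -d nginx
```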

For more details, see the official user guide.


Docker Swarm

Docker Swarm exposes multiple Docker Engines as a single one. This means you can run a set-up of multiple services on multiple hosts, and Swarm automatically distributes the services among the hosts. If the services are defined in a Compose file, the compose command will run all of them, distributed across multiple hosts by Swarm.
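With the standalone Swarm tooling contemporary with this article, setting up a small local cluster via Docker Machine could be sketched as follows (machine names are arbitrary):

```shell
# Generate a cluster discovery token using the swarm image
TOKEN=$(docker run --rm swarm create)

# Create a Swarm master and an additional node, each with its own Engine
docker-machine create --driver virtualbox --swarm --swarm-master \
    --swarm-discovery token://$TOKEN swarm-master
docker-machine create --driver virtualbox --swarm \
    --swarm-discovery token://$TOKEN swarm-node-1

# Point the client at the whole cluster; it now looks like one Engine
eval "$(docker-machine env --swarm swarm-master)"
docker info   # lists all nodes of the cluster
```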

For more details, see the official user guide.

Martin Schenck


Martin is CTO of PlugSurfing and has led its technology since 2013, pioneering the European e-mobility market. PlugSurfing is Europe's leading e-mobility service provider.
