Introduction to Containers and Docker

If you work in tech, hearing new buzzwords is nothing new or strange. Like everyone else, you have probably read in an article, or heard in a discussion or on a podcast as you go through your morning routine, that Docker is the future, that Docker is revolutionizing the tech industry, or that it is pushing the microservices agenda. If you are looking for a gentle introduction to Docker and containers, then you are in the right place. I will try to explain what brought about the need for containers and Docker, and why they have become so popular in recent years.

A brief history of containers

In the IT industry, applications run businesses. From the big companies like Facebook, Uber, and Snapchat to the not-so-big ones, you often can't separate the business from the application, as the two are tightly knit together. For these applications (or apps) to reach end users, that is, you and me, they need to be deployed on web servers so that they can be accessed through the internet. A web server is a computer that stores the components that make up a website (e.g. HTML documents, images, CSS stylesheets, and JavaScript files), connects to the internet, and supports physical data interchange with other devices connected to the web. In the early days, a server could hold only one application at a time, which meant that to deploy even the smallest one-page website, you had to buy a big and expensive server and incur its operating costs: power, cooling, and the professionals to set it up and maintain it. All that for an application that might not use even 5% of the server's capacity. Not a reasonable thing to do, right? Virtual machines (or VMs) came to the rescue.

Virtual machines were a game-changer. They allowed us to get the best out of servers by making it possible to run several operating systems, and therefore several applications, on the same computer. Our website that didn't use even 5% of the server can now share it with up to 20 similar websites, reducing cost considerably. Great news, right? Yes, but there were still a few shortcomings. Each virtual machine takes a chunk of the computer's resources, and each needs its own operating system to run, which means extra cost for OS licenses, antivirus software, and so on. Although the VMs run on a single machine, they behave as though they were running on individual machines, because in a sense they are: each VM is oblivious to the other VMs and to the host machine itself, operating as a single independent machine. VMs made our lives considerably easier, but there was still a need to squeeze more out of our servers (sigh, humans never get satisfied) without incurring extra cost, and that is when containers came into the limelight.

Virtual Machine Architecture

Containers: The big picture

Containerization is not a new concept; it has been around since the late 1970s and has been evolving into what it is today. The major setback to its adoption was that containers were complex and hard to manage, so only large companies like Google, Oracle, HP, and IBM, which needed to solve the VM shortcomings and scale fast, and could afford such expertise, used them. Modern Linux containers are built on a kernel feature called control groups (cgroups), developed by Google engineers. Linux containers combine cgroups with support for isolated namespaces to provide an isolated environment for applications.
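To make "isolated namespaces" a little more concrete: on a Linux machine you can see the namespaces the current process belongs to under /proc. The exact entries vary by kernel version; on a recent kernel the listing looks something like this:

```
$ ls /proc/self/ns
cgroup  ipc  mnt  net  pid  pid_for_children  time  time_for_children  user  uts
```

A container is, at its core, an ordinary process that the kernel has placed in its own set of these namespaces (its own view of the network, mount points, process IDs, and so on), with cgroups limiting how much CPU and memory it can consume.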

Containers are similar to VMs in the sense that both allow you to package your application together with its libraries and other dependencies, providing isolated environments for running your software services. The major difference is that containers don't need their own separate OS; all the containers on a computer share the Linux kernel of the host machine's operating system. The benefits are obvious: resource usage drops considerably, and so does the overhead cost of purchasing licenses and of maintenance. Containers brought the following to the table:

  • Portability: They are a lot smaller in size compared to VMs and as a result, they are faster to start up and deploy on servers.
  • Lightweight: Containers leverage and share the host kernel, making them much more efficient in terms of system resources than virtual machines.
  • Scalable: You can increase and automatically distribute container replicas across a datacenter.
  • Loosely coupled: Containers are highly self-sufficient and encapsulated, allowing you to replace or upgrade one without disrupting others.
  • Secure: Containers apply aggressive constraints and isolation to processes without requiring any configuration on the part of the user.

Containers Architecture

What is Docker?

Now that we have a fair idea of what containers are, what they do, and how they are useful, it is time to see how they are managed. That is where Docker finally comes into the picture! What is Docker? "Docker" can refer to one of three things:

  1. Docker Inc. the company
  2. Docker the container runtime and orchestration technology.
  3. Docker the open-source project, also known as Moby.

In this article, we will focus only on the second one: the container runtime and orchestration technology.

From its official website: "Docker is the only independent container platform that enables organizations to seamlessly build, share and run any application, anywhere — from hybrid cloud to the edge. Docker makes running applications in containers easy."

Remember when I said containers were complex and hard to manage? Well, Docker makes it super easy to run and manage containers, whether in the cloud, in datacenters, in VMs, or even on your personal computer. This is made possible by the Docker container runtime, known as the Docker Engine, which operates on several Linux and Windows Server operating systems. Docker Engine enables containerized applications to run consistently anywhere, on any infrastructure, solving "dependency hell" for developers and operations teams and eliminating the "it works on my laptop!" problem.

Docker consists of three main components:

  1. The Docker client, which is the link between the operator and the Docker daemon. The client communicates with the Docker daemon and the Docker registry through a REST API, over a UNIX socket or a network interface.
  2. The Docker daemon, the main component of Docker: it is where the work of building, running, and managing containers is done.
  3. The Docker registry, where images are stored. There is a public registry called Docker Hub that is accessible to everyone; by default, Docker looks for an image there when it doesn't exist on the host machine.

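You can see all three components working together by running the small public hello-world image. The Docker client sends the run request to the daemon; the daemon, not finding the image locally, pulls it from Docker Hub and then starts a container from it. An abridged session looks roughly like this (exact output varies by version):

```
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
```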
Docker Architecture

Docker Terminology

Some Docker terms we have not yet looked at:

  • Docker image: a Docker image is like a template with the instructions for creating a Docker container. To build an image, you need a Dockerfile and a build context.
  • Dockerfile: a Dockerfile is a plain text document containing, in Docker's own instruction syntax, the commands used to assemble an image.
  • Build context: the build context is the set of files, usually the root folder of your application on the host machine, that the build can access.
  • Container: a container is simply a running instance of an image.
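Putting those terms together, here is a minimal, hypothetical Dockerfile for a small Python application (the file names app.py and requirements.txt are assumptions for illustration, not something from a real project):

```dockerfile
# Start from an official Python base image pulled from Docker Hub.
FROM python:3.12-slim

# Set the working directory inside the image.
WORKDIR /app

# Copy the dependency list from the build context and install it.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code from the build context.
COPY . .

# The command a container runs when started from this image.
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` from the application's root folder builds an image from this Dockerfile; the trailing `.` is the build context. `docker run myapp` then starts a container, a running instance, from that image.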


Containerization has been around for a long time, used by top tech companies to build, deploy, and manage their applications, but it became popular with the advent of Docker. Docker brings a lot to the table when it comes to containerization and container management, and it is definitely going to be around for a long time, so it is a good idea to get on the bandwagon and get yourself dockerized.
