{"id":3395,"date":"2021-08-06T13:59:00","date_gmt":"2021-08-06T11:59:00","guid":{"rendered":"https:\/\/blog.besharp.it\/?p=3395"},"modified":"2021-08-05T17:42:20","modified_gmt":"2021-08-05T15:42:20","slug":"docker-and-containers-from-their-birth-up-to-the-present-day","status":"publish","type":"post","link":"https:\/\/blog.besharp.it\/docker-and-containers-from-their-birth-up-to-the-present-day\/","title":{"rendered":"Docker and containers: from their birth up to the present day"},"content":{"rendered":"\n

If you work in the Information Technology field, you will surely have heard of *software containers*, regardless of your job position. Maybe you work with them every day, maybe you've worked with them a few times (like me), or maybe you've just heard of them.

This technology has been in the spotlight for no more than ten years, but the container concept is much older.

This article retraces the history of containers from their origins up to the present day. We will discuss the main standard (the Open Container Initiative) and clarify the different technological components (image, container, runtime). Finally, we will look at possible alternatives and some applications in the AWS world.

## What is a container?

Giving a simple answer to this question is no easy task, so let's start with what is reported on the Docker homepage:

> A software container is a software unit that contains code and all its dependencies, so that the application can be run in the same way in different computing environments.

Let's analyze this answer a little deeper. When an application is packaged in a container, it can be run on different machines with the certainty that it will behave in the same way. From this point of view, it's very similar to a Virtual Machine.

But a container is much more than that.

Let's start with the etymology of the word *container*. We are all familiar with the containers used on ships, trains, and trucks to transport goods. So let's imagine *software containers* as a sort of envelope for our application, with its own size, standards, and locking mechanisms towards the outside world. Thanks to its shape, any container can be moved from one place to another without major problems, regardless of its content.

But what is the content of a software container? It's a set of dependencies and libraries, together with the application code, saved in the form of an image (the *container image*) that can be run on any machine that supports the execution of containers. It follows that the same software, packaged in a container, will behave the same way whether it is run on the developer's machine, on an on-premise server, or on a virtual machine in the cloud.

In general, any application also needs its dependencies, which can vary from machine to machine (due to different hardware architectures and operating systems). To keep behavior consistent, the code should be shipped together with its dependencies.

Indeed, Virtual Machines (VMs) have been used in the software industry for years. A virtual machine is nothing more than the emulation of a computer, including its operating system, on which you can install all the dependencies you need, making it independent from the host machine (code, dependencies, and operating system). However, it becomes challenging to use VMs as a software delivery tool, as you would have to ship the entire image, operating system included. Anyone who has had to create a virtual machine on their computer knows how slow it can be (try it!).

Another problem with VMs is that they are tied to virtual hardware (via the hypervisor). A developer shouldn't have to worry about storage, networking, or processor type (or at least, not at a low level). Do you remember the initial metaphor? A container should be independent of whatever transports it (ship, train, truck, etc.).

As already mentioned, another problem with VMs is performance: they require a lot of hardware resources and generally suffer from long boot times. A virtual machine, although virtual, is still a complete machine, and it is not well suited to software delivery.

In summary, to release reliable and repeatable software on different computers, we need an airtight box in which to put our code. This box should be agnostic to the system it runs on, so that the developer can focus on the software and its immediate dependencies rather than on machine details. Furthermore, it must perform better than a virtual machine.

\"container<\/figure><\/div>\n\n\n\n

<\/p>\n\n\n\n

## Container history

Let's briefly retrace the history of containers: the Linux kernel features, the first open-source projects, and the first companies that saw the potential of this technology, up to its widespread adoption thanks to Docker.

#### 1979: Unix V7

This year, Unix V7 introduces *chroot*, a system call that allows you to change the root directory of a process. This is only the beginning of process isolation, a mechanism that today's containers depend on.
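
To get a concrete feel for what *chroot* does, here is a minimal Go sketch (Go is also the language Docker itself is written in). It is only an illustration under stated assumptions: the directory `/tmp/newroot` is a hypothetical new root prepared beforehand, and the program must run as root on Linux:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Change the root directory of this process: after this call,
	// "/" resolves to /tmp/newroot (a hypothetical, pre-populated
	// directory) and the rest of the host filesystem becomes
	// invisible to the process.
	if err := syscall.Chroot("/tmp/newroot"); err != nil {
		fmt.Fprintln(os.Stderr, "chroot failed (are you root?):", err)
		os.Exit(1)
	}
	// Move the working directory inside the new root.
	if err := os.Chdir("/"); err != nil {
		fmt.Fprintln(os.Stderr, "chdir failed:", err)
		os.Exit(1)
	}
	fmt.Println("now running inside the new root")
}
```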

#### 2001: Linux VServer

Linux VServer is among the first pieces of software to support the so-called *jail* mechanism, a sort of virtualization at the operating-system level that allows isolating and partitioning resources on a machine.

#### 2002: Namespaces

The Linux kernel gains a new feature: *namespaces*. Namespaces allow you to partition system resources among a set of processes, limiting their visibility of the rest of the system.
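
A minimal sketch of the same idea in Go, assuming a Linux host and root privileges: the child shell is started in fresh UTS, PID, and mount namespaces, so it gets its own hostname and its own PID numbering (inside it, `echo $$` prints 1):

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell in its own UTS, PID, and mount namespaces:
	// inside it, the hostname can be changed without affecting
	// the host, and the shell sees itself as PID 1.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```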

#### 2001–2007

Many companies start investing in this technology: Solaris Containers (Oracle) and OpenVZ, among others.

#### 2007: Cgroups

Another Linux kernel feature arrives: *cgroups* (short for control groups), which limit the resources (CPU, memory, disk I/O, network, etc.) that a group of processes can use.
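
As a hedged illustration of how a cgroup is driven, here is a Go sketch that creates a group and caps its memory. It assumes a cgroup-v2 unified hierarchy mounted at `/sys/fs/cgroup` with the memory controller enabled, root privileges, and an arbitrary group name `demo`:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a new cgroup under the cgroup-v2 unified hierarchy
	// (mounted at /sys/fs/cgroup on most modern distributions).
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	// Cap the memory available to the group at 64 MiB.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644); err != nil {
		panic(err)
	}
	// Enroll the current process: from now on, the kernel enforces
	// the limit on it and on every child it spawns.
	pid := []byte(fmt.Sprintf("%d", os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("process now limited to 64 MiB of memory")
}
```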

#### 2008: LXC

LXC (Linux Containers) is the first example of a modern container engine. Indeed, it exploits the same Linux kernel features (cgroups, namespaces) used by the most recent container engines.

#### 2013: Docker

Starting from LXC, Docker is born in 2013 as open-source software to run containers. From then on, the world of containers would never be the same.

Over time, Docker has implemented its own container manager on top of its own library (*libcontainer*).

#### 2013 – Today

Since 2013, the technology has consolidated: standards (OCI), container orchestrators (Kubernetes, Docker Swarm), and several alternatives (such as micro Virtual Machines) have been created.

## Docker and the explosion of containers

Quite often, the words *Docker* and *container* are used interchangeably, but why? Simply because it is thanks to Docker that containers have become so popular.

Docker is a set of PaaS (Platform as a Service) products that enable the development and delivery of containerized software.

dotCloud, now Docker Inc., released Docker to facilitate software development by creating "standard boxes". Starting from the Linux kernel features described above, Docker set out to make these low-level mechanisms accessible by providing easy-to-use interfaces.

Docker open-sourced three key components that facilitated the use of containers, thus favoring their widespread adoption: