Software goes through many phases throughout the development life cycle. After the initial development, the software has to be tested and deployed. Since almost all modern-day software depends on external libraries and dependencies, testing and deploying code in the same environment it was developed in is almost impossible. Docker is an open-source, intuitive solution to this problem: it makes software easier to create and deploy through a concept called containerization. Containers let developers package an application together with all of its dependencies and external libraries, and ship it to testers or deploy it as needed.
Why not Hypervisor?
Before Docker was introduced, there were similar solutions built on the same concept of packaging the intended software and shipping it as-is. One of the most popular was the hypervisor.
To better understand how hypervisors worked, let us first look at how an application was deployed to one. The hypervisor was installed on the host operating system, which ran on top of the hardware (servers, in the general case) and might host other applications as well. Whenever an application needed to be shipped, a guest OS instance was created on top of the hypervisor and the application was installed into that guest OS, along with all of its associated dependencies and libraries. An image of the guest OS could then be created whenever necessary and shipped as intended. The tester could run the image, which included all the dependencies together with the guest OS itself, and the application could likewise be deployed to a server running a hypervisor.
But the caveat was that every new application to be packaged required a new guest OS instance on top of the hypervisor. This wasted a lot of resources, which in turn meant a huge overhead for these applications. Another issue concerned licensing: if the guest OS was, for instance, Windows or macOS, every new instance carried a licensing cost, making the approach really expensive.
On the flip side, Docker introduced a concept called containers, which package only the application and its associated dependencies, not a whole guest OS. Docker containers run on top of the Docker engine.
The idea of isolating a process from the rest of the operating system is not new by any means. The original Linux container technology, LXC, is an OS-level virtualization method for running multiple isolated Linux systems on a single host.
A container is a logical separation of an application from its execution environment (the operating system and the hardware underneath). This decoupling ensures that the application can be deployed to a server with the same versions of its dependencies, in the same execution environment it was developed in. This made life so much easier for people working in DevOps.
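As a minimal sketch of this decoupling (assuming Docker is installed on the host), running a pinned, versioned image reproduces the same execution environment on any machine:

```shell
# Pull a fixed, versioned runtime image; the tag pins the exact environment,
# so every host that runs this image gets the same interpreter and libraries.
docker pull python:3.12-slim

# Start a throwaway container from that image; --rm removes it on exit.
docker run --rm python:3.12-slim python --version
```

Whether this runs on a developer laptop, a test machine, or a production server, the stack inside the container is identical.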
Container technology comprises three software categories:
- Builder: technology used to build a container.
- Engine: technology used to run a container.
- Orchestration: technology used to manage many containers.
(We will look at this further in the next section)
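In Docker's own tooling these three categories map roughly onto familiar commands (a sketch, assuming Docker with the Compose plugin is installed; `myapp` is a hypothetical image name):

```shell
# Builder: turn the Dockerfile in the current directory into an image.
docker build -t myapp:1.0 .

# Engine: run a container from that image, mapping a network port.
docker run --rm -p 8080:8080 myapp:1.0

# Orchestration: start and manage several containers together,
# as described in a docker-compose.yml file.
docker compose up
```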
Introduction to Docker
To understand how Docker works, let us first try to grasp some of the components behind a containerized Docker application.
1. Docker engine
The Docker engine is the underlying layer on top of which Docker containers run; it sits on top of the host operating system. It comes in two editions: an enterprise edition and a community edition.
2. Docker image
A Docker image is a portable file that specifies which software components the container will run.
3. Dockerfile
The Dockerfile is the initial point of entry into a Docker container: a text file with the instructions for building a Docker image. It specifies things like environment variables, the language runtime, network ports, other containers the application requires, and the service the container realizes.
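As a sketch, a minimal Dockerfile for a hypothetical Python service might look like the following (the file name `app.py`, the `requirements.txt`, and port 8000 are all assumptions):

```dockerfile
# Start from a pinned base image so the environment is reproducible.
FROM python:3.12-slim

# An environment variable the application can read at runtime.
ENV APP_ENV=production

# Copy the application and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Document the network port the service listens on.
EXPOSE 8000

# The command that realizes the service when the container starts.
CMD ["python", "app.py"]
```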
Docker is very versatile when it comes to the language of the code it runs. This gives developers the freedom to easily containerize an application and send it off to be tested or deployed to servers. The following links show how to get started with Docker.
I hope Docker makes testing and deploying your applications easier.