Modern systems are dynamic. A service may run on a single cloud server most of the time, yet need a few more during peaks of load (such as Black Friday). On top of that, those servers may come from different providers. This is one of the main reasons Docker was born.
A container is an isolated environment that packages the code, data, and executable files needed to run a service with that configuration.
Containers are sometimes confused with virtual machines, but they are not the same. In a virtual machine everything is virtualized: the hardware, the BIOS, the operating system, and so on. A container instead uses the operating system already installed on the host machine (except in Windows environments, where Docker runs under VirtualBox or Hyper-V).
This is why containers are often described as lightweight. It also makes them more efficient and effective than running the same workload in a virtual machine.
Creating a Docker container from scratch has some complexity, but its script-based build system makes the process simpler and allows creating containers derived from others, which also keeps them maintainable.
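As a minimal sketch of such a build script, a Dockerfile can derive a new image from an existing one; the image tag, file paths, and site content here are illustrative assumptions, not from the original:

```dockerfile
# Derive from the official nginx image (one container building on another)
FROM nginx:1.25

# Copy the site's static files into the directory nginx serves by default
COPY ./site /usr/share/nginx/html

# Document the port the service listens on
EXPOSE 80
```

Such a file is then built into an image with `docker build -t my-site .` and run with `docker run -d -p 80:80 my-site`.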
Docker has a public registry (Docker Hub) where images can be published. As a result, images for the main technologies are already available: web servers (nginx / Apache), database managers (MySQL, PostgreSQL, MongoDB, ...), etc.
Another great advantage of containers is the flexibility they add when changing provider or server instance: you simply recreate the container on the new instance and attach the data volume. The container can be rebuilt from its script or pulled directly as an image from a registry (public or private).
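On a new instance, that recreation boils down to pulling the image and reattaching the existing data volume. A sketch of the commands, where the image tag, container name, volume name, and password are all assumptions for illustration:

```shell
# Pull the image from a registry (Docker Hub in this example)
docker pull mysql:8.0

# Recreate the container, attaching the pre-existing data volume
docker run -d --name db \
  -v mysql_data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0
```

Because the state lives in the volume rather than in the container itself, the container remains disposable and portable across instances.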
You usually have one container per service. A blog project built with WordPress typically consists of three services: a web server, the PHP application (WordPress), and a MySQL database.
Orchestrating services manually with Docker alone is complex and error-prone. For this reason another tool was created: Docker Compose.
With Compose, the services are defined in a YAML file: how they communicate with each other, which service must start before another, and so on. Then a single command lets docker-compose work its magic and bring the services up.
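For the WordPress example above, a docker-compose.yml could look roughly like this; the service names, image tags, and credentials are illustrative assumptions:

```yaml
version: "3.8"

services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db_data:/var/lib/mysql    # database state lives in a named volume

  wordpress:
    image: wordpress:php8.2-fpm
    depends_on:
      - db                        # start the database before the application
    environment:
      WORDPRESS_DB_HOST: db       # services reach each other by service name
      WORDPRESS_DB_PASSWORD: secret
    volumes:
      - wp_files:/var/www/html

  web:
    image: nginx:1.25
    depends_on:
      - wordpress
    ports:
      - "80:80"
    volumes:
      - wp_files:/var/www/html:ro

volumes:
  db_data:
  wp_files:
```

A single `docker-compose up -d` then starts all three services in dependency order.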
One step further is not even having to worry about keeping the system running, which is useful in large projects. For that there are tools such as Docker Swarm or Kubernetes.
We use Docker and Docker Compose in almost all projects where there is a backend.
We prepare the generation scripts with care so that the created containers are robust.
Depending on the project, we generate several runtime environments: a "local" one for active development, a "testing" one for tests, and the production environment. docker-compose orchestrates this very well.
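One common way to set this up (the file names here are an assumption) is a base compose file plus a small per-environment override file, combined with the `-f` flag:

```yaml
# docker-compose.testing.yml — overrides applied on top of the base file
services:
  web:
    ports:
      - "8080:80"   # testing is exposed on a different port than production
```

Starting the testing environment then becomes `docker-compose -f docker-compose.yml -f docker-compose.testing.yml up -d`, with each environment differing only in its override file.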
All the scripts are also kept under version control with Git.