I have been reading various articles about migrating my Docker application to a different machine. All the articles talk about “docker commit” or “export/import”, but these only cover a single container, which is first converted to an image and then started with “docker run” on the new machine.
But my application is usually made up of several containers, because I am following the best practice of segregating different services.
The question is: how do I migrate or move all the containers that have been configured to join together and run as one? I don’t know whether “swarm” is the correct term for this.
The alternative I see is to simply copy the docker-compose.yml and Dockerfile to the new machine, do a fresh setup of the architecture, and then copy over all the application files. It runs fine.
My proposal is, of course, not the only solution, but it works quite well:
1. Create your Docker images on one machine (where you have your Dockerfile).
2. Upload the images to a Docker registry (you can use your own Docker Hub account, or maybe a Nexus, or whatever).
2.1. It is also recommended to tag your images with a version, and to protect against overwriting an image of the same version with different code.
3. Use docker-compose to deploy (it is recommended to define a Docker network for all containers that have to interact with each other); docker-compose up is like several docker run commands, but easier to maintain (see the sketch after this list).
4. You can deploy on several machines just by using the same docker-compose.yml and giving each machine access to your registry.
4.1. The deployment can target a single host, a Swarm, Kubernetes... (for Kubernetes you would have to translate your docker-compose.yml into kubectl YAML files).
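As a minimal sketch of steps 1-3 (the image name myorg/myapp, the version tag, and the service names are placeholders of mine, not anything prescribed):

docker build -t myorg/myapp:1.0.0 .
docker push myorg/myapp:1.0.0

And a docker-compose.yml that pulls that image and puts the services on a shared network:

version: '3'
services:
  app:
    image: myorg/myapp:1.0.0
    networks:
      - backend
  db:
    image: postgres:13
    networks:
      - backend
networks:
  backend:

On any machine that can reach the registry, docker-compose up -d then brings everything up.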
I agree with the docker-compose suggestion, and with storing your images in a registry or on your local machine. Each section in your docker-compose file is separated per service, and each service is written in YAML format.
You are going to want version 3 of the compose file format, I believe. From there you write something like the skeleton below, where each service uses the image built from your Dockerfile, either from your registry or locally from your folder.
version: '3'
services:
  drupal:
    image: ...
    # ...ports, volumes, etc.
  postgres:
    image: ...
    # ...ports, volumes, etc.
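For illustration, a filled-in version might look like the sketch below; the specific tags, ports, and volume names are assumptions of mine, not values from the question:

version: '3'
services:
  drupal:
    image: drupal:9
    ports:
      - "8080:80"
    volumes:
      - drupal-data:/var/www/html
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  drupal-data:
  db-data: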
Disclosure: I took a Docker Course from Bret Fisher on Udemy.
I am installing OpenStack Keystone now.
A standalone Keystone needs three components: MySQL, Python, and Apache2.
Obviously I can't pick all of them as the base, so I made Python the base image, and the others were added as RUN statements installing MySQL and Apache2.
I think these RUN statements are duplication, because all three components already exist on the Docker public registry.
Is there any good solution or proper way to reuse the existing external Dockerfiles?
There seems to be some confusion here about what a Dockerfile does: it defines a single Docker image. In general, the recommended way to run applications in Docker is to have a container for each service and have them connect to other services in other containers as needed (more on this later).
In your case, it sounds like your application consists of OpenStack Keystone (which requires Python and Apache to run) and MySQL. So I would install Python & Apache in your Dockerfile, and set up MySQL (possibly just using the image from the public repository) as a separate container that the OpenStack container connects to over the network.
As mentioned above, this scenario is the recommended way to run Docker applications; it follows the Unix paradigm of "each app does only one thing, but does it very well". Each container does one thing only and connects to any other services in other containers. But it is possible to run multiple services in the same container, e.g. Keystone running on Apache/Python AND MySQL in the same container. If this is your goal, you would write a Dockerfile that installs everything and gets everything running together. This Dockerfile is likely to be pretty complicated and will require an ENTRYPOINT that gets both MySQL and Apache working together. You're likely to wind up duplicating a lot of the work that has already gone into the standard MySQL and Apache images.
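A sketch of that recommended two-container layout with Compose (the published port and the MySQL settings are illustrative assumptions, not Keystone's required configuration):

version: '3'
services:
  keystone:
    build: .              # your Dockerfile installing Python and Apache
    ports:
      - "5000:5000"       # Keystone's public API port
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7      # reuse the official image instead of RUN apt-get install
    environment:
      MYSQL_ROOT_PASSWORD: example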
You can make use of Docker Compose to run your application consisting of MySQL, Python, and Apache2.
Using Docker Compose will allow you to control the application setup with a single command. You just need to write a docker-compose.yml file, plus Dockerfiles for the containers you will set up.
In your case you can have one Dockerfile for setting up the Python and Apache2 container, and let the mysql image serve as the base for the other container.
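A minimal sketch of the Python-plus-Apache2 Dockerfile; the base tag and package names are assumptions for a Debian-based image:

FROM python:3.9
# Install Apache2 and mod_wsgi on top of the Python base image
RUN apt-get update \
    && apt-get install -y apache2 libapache2-mod-wsgi-py3 \
    && rm -rf /var/lib/apt/lists/*
# Run Apache in the foreground so the container stays alive
CMD ["apache2ctl", "-D", "FOREGROUND"]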
I'm pretty new to Docker and Docker Compose. I have managed to build my image and push it to Docker Hub. The app I built is simple and consists of 2 images, php7-apache and the official mysql image, all declared in docker-compose.yml.
I told my team to pull the image I built from my Docker Hub repository using docker pull ... and start it using docker run -d .... But when we run docker ps on the production server, only 1 process is running, and there is no MySQL.
Usually, when I run locally using docker-compose up I get this in the terminal:
Creating network "myntrelease_default" with the default driver
Creating myntrelease_mysql_1
Creating myntrelease_laravel_1
I can then access MySQL using docker-compose exec mysql bash and tweak some tables there. So far so good.
The question is: how can I use docker-compose.yml on the production server when it isn't available there? Or is it somehow contained in the image itself?
Short answer: Yes, you need the docker-compose.yml in the production environment.
Explanation: Every image is independent. Since your image is independent of the MySQL image (at least that's what I understand from your question), and docker-compose.yml defines the relationship between the two (e.g. how MySQL is reachable from the php7-apache container), you definitely need the docker-compose.yml in production. Even if you only have a single image, it's usually good to use a docker-compose.yml so that settings and configuration like volume mounts, ports, etc. are clearly defined.
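A sketch of what that rollout can look like, assuming a hypothetical /opt/myapp directory on the production server and images already pushed to Docker Hub:

# Copy the compose file (not the images) to the server
scp docker-compose.yml user@prod-server:/opt/myapp/
ssh user@prod-server 'cd /opt/myapp && docker-compose pull && docker-compose up -d'
# docker ps should now show both the php7-apache and mysql containers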
I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes; some of them were started from the CLI with docker run and some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system, and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
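For example, individual settings can be pulled out with a Go-template format string (pick whichever fields you need):

docker container inspect --format '{{json .HostConfig.PortBindings}}' $container_id
docker container inspect --format '{{json .Mounts}}' $container_id
docker container inspect --format '{{json .Config.Env}}' $container_id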
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the container instance, but the commands you're using back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the road of saving the instance's current status, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the contents of volumes; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
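A minimal sketch of the export/import flow, with a separate tar for a volume since export skips volume contents; the container and volume names are placeholders:

# On the source machine
docker export mycontainer > mycontainer.tar
docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine \
    tar czf /backup/myvolume.tar.gz -C /data .
# On the target machine
docker import mycontainer.tar mycontainer:restored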
There's Docker Swarm (now built into Docker) and Docker Compose. People seem to use Docker Compose when running containers on a single node only. However, Docker Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include memory and CPU limits, and those seem nice/important to be able to set.
So maybe I should use Docker Swarm instead, even though I'm deploying on a single node only? Also, the installation instructions would then be simpler for other people to follow (they won't need to install Docker Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion-based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker Compose, because only Docker Compose is able to read your Dockerfiles and build images for you; Docker Stack instead needs pre-built images. Also, with Docker Compose you can easily start and stop single containers, with docker-compose kill ... and docker-compose start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database. Then you don't want Swarm to auto-restart the database immediately.
In production, use Docker Swarm (unless: see below), so you can configure memory limits. Docker Compose has less functionality than Docker Swarm (no memory or CPU limits, for example) and doesn't have anything that Swarm lacks (right?). So there is no reason to use Compose in production. (Except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands.)
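For reference, the deploy section that Swarm honors (and Compose, at the time, ignored) looks roughly like this; the numbers and the image name are placeholders:

version: '3'
services:
  app:
    image: myapp:1.0
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M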
Docker Swarm doesn't, however, support .env files the way Docker Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then have image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set the environment variables "manually": IMAGE_VERSION=SOMETHING docker stack deploy ... (This actually made me stick with Docker Compose, plus the fact that I didn't reasonably quickly find out how to view a container's logs via Swarm; Swarm seemed more complicated.)
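One workaround I've seen suggested (my assumption, not something from that issue thread) is to let docker-compose resolve the .env substitutions first and feed the result to the stack, e.g. in bash:

# docker-compose config prints the compose file with ${...} already substituted
docker stack deploy -c <(docker-compose config) mystack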
In addition to @KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker Compose does. You can learn about this issue and dive into the Docker community discussions here.
We are trying to upgrade a Docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a Docker image with version 1.1 and created a container from that image. Now we have fixed some issue in image 1.1 and uploaded it as 1.2, and I want to update the container running 1.1 to 1.2.
These are the steps I thought I would follow:
1. Pull the latest image.
2. Inspect the Docker container to get all the info (ports, mapped volumes, etc.).
3. Stop the current container.
4. Remove the current container.
5. Create a container with the values from step 2, using the latest image.
The problem I am facing is that I don't know how to use the output of docker inspect while creating the container.
What you should have done in the first place:
In production environments with lots of containers, you will lose track of docker run commands. To keep up with the complexity, use docker-compose.
First you need to install docker-compose; refer to the official documentation for that.
Then create a YAML file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm, and mysql).
With all that done, when you want to upgrade containers to newer versions, you just change the version in the YAML file and do a docker-compose down followed by a docker-compose up.
Refer to the Compose documentation for more info.
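A sketch of that upgrade flow in practice; myapp and the tags are placeholders:

# After editing docker-compose.yml to change image: myapp:1.1 to image: myapp:1.2
docker-compose pull    # fetch the 1.2 image
docker-compose down    # stop and remove the 1.1 containers
docker-compose up -d   # recreate the containers from 1.2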
What to do now:
Start by reading the docker inspect output. Then gather these facts (see the sketch after this list):
Ports published (host and container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
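Each of these facts can be read directly from docker inspect with a Go-template format string, for example (mycontainer is a placeholder):

docker inspect --format '{{json .NetworkSettings.Ports}}' mycontainer      # published ports
docker inspect --format '{{json .NetworkSettings.Networks}}' mycontainer   # networks
docker inspect --format '{{json .Mounts}}' mycontainer                     # volumes
docker inspect --format '{{json .Config.Cmd}}' mycontainer                 # command arguments
docker inspect --format '{{json .Config.Env}}' mycontainer                 # environment variables
docker inspect --format '{{json .HostConfig.RestartPolicy}}' mycontainer   # restart policy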
Then try to create a docker-compose YAML file with those facts on a test machine, and test your setup.
When you're confident enough, roll it out in production and keep the latest compose YAML for later reference.