I already have containers running on my server.
They were created on the command line with docker run.
I believe using docker-compose will result in a more portable setup (I wouldn't need to repeat every docker run on a different server, just copy over the same file).
Is there a way to translate the already created containers to a docker-compose file?
On the Docker website (https://docs.docker.com/compose/gettingstarted/), it is mentioned that you can use docker-compose migrate_to_labels, but the command does not seem to work in the latest version of docker-compose.
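To make the ask concrete, here is a made-up example of the kind of translation I'm after (the image, name and ports are just placeholders):

# what I run today
docker run -d --name web -p 8080:80 nginx:alpine

# what I'd like to end up with, so that "docker-compose up -d"
# recreates the same container on any server
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF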
TLDR: When using docker compose, I can simply recreate a container by changing its configuration and/or image in the docker-compose.yml file along with running docker-compose up. Is there any generic equivalent for recreating a container (to apply changes) which was created by a bare docker create/run command?
Elaborating a bit:
The associated docker compose documentation states:
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes).
I'm having trouble understanding which underlying steps are actually performed during this recreation, as the docker (without compose) documentation doesn't really seem to use the term recreate at all.
Is it safe to simply run docker container rm xy and then docker container create/run (along with passing the full and modified configuration)? Or is docker compose actually doing more under the hood?
I already found answers about applying specific configuration changes, like this one about port mappings, but I'm still wondering whether there is a more general answer to this.
I'm having trouble understanding which underlying steps are actually performed during this recreation, as the docker (without compose) documentation doesn't really seem to use the term recreate at all.
docker-compose is a high-level tool; it performs in a single operation what would require multiple commands with the docker CLI. When the docker-compose documentation says, "docker-compose up picks up the changes by stopping and recreating the containers", it means it is doing the equivalent of:
docker stop <somecontainer>
docker rm <somecontainer>
docker run ...
(Where ... represents whatever configuration is implied by the service definition in your docker-compose.yaml).
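For instance, with a minimal service like this (the name and image are only an example of mine, not anything prescribed by compose):

cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  web:
    image: nginx:1.25    # changing this tag counts as a configuration change
    ports:
      - "8080:80"
EOF

docker-compose up -d                                   # first run: creates and starts the container
sed -i 's/nginx:1.25/nginx:1.27/' docker-compose.yaml
docker-compose up -d                                   # picks up the change: stops and recreates the container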
Let's say it recognizes a change in container1. It then does the equivalent of (not literally these commands, since it works via the API):
docker compose rm -fs container1
docker compose create (--build) container1
docker compose start container1
which is roughly equivalent to (depending on your compose config):
docker rm -f projectname_container1
(docker build --flags)
docker create --allDozensOfAttributes projectname_container1
docker start projectname_container1
docker network connect (--flags) projectname_networkname projectname_container1
and possibly more.
So I would advise using the docker compose commands for individual services instead of the docker CLI where suitable.
The issue is that the variables and settings are not exposed through any Docker APIs. It may be possible by connecting directly to the Docker socket, parsing out the variables, and then stopping/removing the container and recreating it.
This would be prone to all kinds of errors and would require lots of debugging to get these values.
What I do is simply store my docker commands in a shell script. You can just save the command you need to run into a text file, name it something.sh, make it executable (chmod +x), then run it. Then when you stop/delete the container, you can just rerun the shell script.
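For example, a minimal sketch of such a script (the container name, image and flags are all hypothetical):

#!/bin/sh
# run-myapp.sh -- recreate the container with a known-good configuration
docker rm -f myapp 2>/dev/null    # remove the old container if it exists
docker run -d \
  --name myapp \
  -p 8080:80 \
  -v /srv/myapp/data:/data \
  --restart unless-stopped \
  myimage:latest

Make it executable with chmod +x run-myapp.sh and rerun it whenever the configuration changes.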
Another thing you can do is replace the docker command with a shell function (in something like your ~/.bashrc) that stores the arguments in a text file and replays them when given a particular argument (like "recreate" followed by a name). However, I'm more a fan of keeping docker containers in their own shell scripts, as it's more portable.
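A rough sketch of that wrapper idea (purely illustrative; the log file path is made up):

# in ~/.bashrc: log every docker invocation before running it
docker() {
  echo "$(date '+%F %T') docker $*" >> ~/.docker-commands.log
  command docker "$@"
}

You can then grep ~/.docker-commands.log for the original docker run line whenever you need to recreate a container.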
I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows (not on a Unix subsystem; I could not install Docker on Ubuntu 20.04). astro dev start breaks with an error, but in Docker Desktop I see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one could only use files in the cloud.)
In general, you can declare a volume at container runtime in Docker using the -v switch on your docker run command to mount a local folder on your host to a mount point in your container, and you can then access that mount point from inside the container.
If you go on to use docker-compose up to orchestrate your containers, you can instead specify volumes in the docker-compose.yml file, which configures the volumes for the containers it runs.
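For example (the paths and image name are placeholders):

# mount ./local-data on the host at /data inside the container
docker run -d -v "$(pwd)/local-data:/data" myimage:latest

The same mount in a compose file would be a volumes entry such as - ./local-data:/data under the service.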
In your case, the Astronomer docs here suggest it is possible to add a custom directive to the Astronomer docker-compose.override.yml file to mount volumes in the Airflow containers created by your astro commands, which should then be visible from your DAGs.
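As a sketch only (the scheduler service name and the paths are my assumptions; check the Astronomer docs for the exact service names in your stack), such an override file could look like:

cat > docker-compose.override.yml <<'EOF'
version: "3.1"
services:
  scheduler:
    volumes:
      # hypothetical mount: host folder -> path the DAGs can read
      - ./include/data:/usr/local/airflow/include/data
EOF

# restart the stack so the override is picked up
astro dev stop && astro dev start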
I have recently gotten familiar with Docker but am still somewhat new to docker-compose. I have a server running an application and a database, each in its own container. In the application's directory (/opt/my-app) there is a docker-compose.yml file that references a specific image to run for the application. Our application now lives in a different repo.
My question is: if I want to update to the new app (that resides in a different repo), do I manually pull the image, alter the docker-compose.yml file to reference it, and then run docker-compose start/up?
With just docker, I would run docker stop containerName.
Then I would pull the new image and run docker run -d -p 8080:80 newImageName.
I'm just not certain of how this should be done with docker-compose.
Is there a docker command which works like the vagrant up command?
I'd like to use the arangodb docker image and provide a Dockerfile for my team without forcing my teammates to get educated on the details of its operation; it should 'just work'. Within the project root, I would expect the database to start and stop with a standard docker command. Does this not exist? If not, why not?
Docker Compose could do it.
docker-compose up builds the image, creates the container, and starts it.
docker-compose stop stops the container.
docker-compose start starts the stopped container again.
docker-compose down stops and removes the container and its networks (and, with the --rmi flag, the image as well).
In the Compose file you can configure ArangoDB (expose ports, map a volume for db initialisation, etc.). Place the compose file in the project root and run the up command.
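A minimal sketch of such a compose file (8529 is ArangoDB's default port; the password is obviously a placeholder):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  arangodb:
    image: arangodb:latest
    environment:
      - ARANGO_ROOT_PASSWORD=changeme    # one of the auth options the image accepts
    ports:
      - "8529:8529"
    volumes:
      - arangodb_data:/var/lib/arangodb3   # persist the database between runs
volumes:
  arangodb_data:
EOF

docker-compose up -d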
We are trying to upgrade a docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a docker image at version 1.1
and created a container with image 1.1.
Now we have fixed some issues in image 1.1 and uploaded it as 1.2.
After that I want to update the container running on 1.1 to 1.2.
Below are the steps I thought I would follow:
Pull the latest image
Inspect the docker container to get all its info (ports, mapped volumes, etc.)
Stop the current container
Remove the current container
Create a container with the values obtained in step 2, using the latest image
The problem I am facing is that I don't know how to use the output of the docker inspect command when creating the new container.
What you should have done in the first place:
In production environments with lots of containers, you will lose track of your docker run commands. To keep up with the complexity, use docker-compose.
First you need to install docker-compose; refer to the official documentation for that.
Then create a yaml file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm and mysql).
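For example, a bare-bones sketch for that nginx + php-fpm + mysql case (image tags and the password are placeholders):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
  php:
    image: php:fpm
  mysql:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
EOF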
Having done all that, when you want to upgrade containers to newer versions, you just change the version in the yaml file and do a docker-compose down followed by a docker-compose up.
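Illustrated with a hypothetical service pinned to an image tag:

# docker-compose.yml contains, say:
#   services:
#     app:
#       image: myrepo/myapp:1.1
sed -i 's|myrepo/myapp:1.1|myrepo/myapp:1.2|' docker-compose.yml
docker-compose down
docker-compose up -d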
Refer to compose documentation for more info.
What to do now:
Start by reading the docker inspect output, then gather the facts (a sketch of the relevant commands follows this list):
Ports published (host and container mapping)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
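For instance, the --format flag of docker inspect can pull each of those facts out directly (the container name is a placeholder):

docker inspect -f '{{json .HostConfig.PortBindings}}'  mycontainer   # published ports
docker inspect -f '{{json .NetworkSettings.Networks}}' mycontainer   # networks
docker inspect -f '{{json .Mounts}}'                   mycontainer   # volumes and bind mounts
docker inspect -f '{{json .Config.Cmd}}'               mycontainer   # runtime command arguments
docker inspect -f '{{json .Config.Env}}'               mycontainer   # environment variables
docker inspect -f '{{json .HostConfig.RestartPolicy}}' mycontainer   # restart policy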
Then try to create a docker-compose yaml file from those facts on a test machine, and test your setup.
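As a sketch, those facts might translate into something like this (every value here is hypothetical):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: myimage:1.2
    ports:
      - "8080:80"
    volumes:
      - appdata:/data
    environment:
      - APP_ENV=production
    restart: unless-stopped
volumes:
  appdata:
EOF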
When confident enough, roll it out in production and keep the latest compose yaml for later reference.