I'm using Docker Desktop on Windows 10.
I use multiple docker-compose files for multiple projects,
and I switch between projects, stopping/starting the relevant docker-compose.yml files depending on my needs.
When I leave my workstation, I put Windows 10 into standby or shut it down, but when I come back, I always find two Docker Compose projects active.
I don't know why they are already started, or where Docker Desktop gets the list of containers to start.
I want to come back to my workstation with zero containers running. What should I check?
It's very likely happening because of the restart or restart_policy setting declared in your docker-compose.yml file (depending on which version of docker-compose you are using). More information can be found in the official documentation: look for the restart_policy section to find out which options are available and what they mean (or the restart option if you are using the older version of docker-compose). The docker run reference likewise describes the equivalent --restart argument you can pass on the command line.
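As a minimal sketch (the service names and images are placeholders, not taken from the question), the two behaviours look like this in a docker-compose.yml:

services:
  web:
    image: nginx:alpine
    restart: "no"             # default: not restarted automatically
  worker:
    image: myapp:latest
    restart: unless-stopped   # brought back up whenever the Docker daemon starts

Check the two projects that keep coming back for restart: always or restart: unless-stopped, and remove the setting (or set it to "no") if you want zero containers running after a reboot.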
--always-recreate-deps is described as:
Recreate dependent containers. Incompatible with --no-recreate.
--build is described as:
Build images before starting containers.
What is the difference between "Recreate dependent containers" and "Build images before starting containers"?
When a Dockerfile changes, I use docker compose up --build. Do I also need to use --always-recreate-deps?
What are the use cases for --always-recreate-deps, given that we already have --build and --force-recreate?
--always-recreate-deps: This option tells Docker Compose to always recreate the dependencies of a service, even if they haven't changed. If a service depends on another service whose image hasn't been updated, Docker Compose will still recreate that dependency when the up command is run with --always-recreate-deps.
--build: This option tells Docker Compose to build the images for all services defined in the docker-compose.yml file before starting the containers. This is useful if you have made changes to your services and want to ensure that the images are rebuilt and the containers are running the latest version of your code.
In summary, the --always-recreate-deps option ensures that all dependent services are recreated even if they haven't changed, whereas the --build option ensures that the images are rebuilt and the containers run the latest version of your code.
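As a rough illustration (assuming a project where an app service depends_on a database):

docker compose up --build -d

rebuilds the images from your Dockerfiles and recreates only the containers whose image or configuration changed, while

docker compose up --build --always-recreate-deps -d

additionally recreates the services the changed service depends_on (the database in this example), even though their images and configuration are untouched. So --always-recreate-deps is only needed when you also want fresh containers for those unchanged dependencies.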
How to configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems to only take effect for commands like 'docker pull'.
Secondly,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx...' every time I start a container.
So I wondered whether there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set '~/.docker/config.json' (How to configure docker container proxy?), but this still does not work for me.
(My host machine runs CentOS 7; docker -v reports: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my Docker version, or to Docker being started by the systemd service, which is why ~/.docker/config.json does not take effect.
Finally,
I just hoped that modifying a configuration file would make all my containers automatically get the environment variables 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when they start (like the ENV instruction in a Dockerfile). I want to know if there is such a way; if so, I could make the containers use the host's proxy, since the proxy on my host works properly.
But I was wrong: I ran a container, set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, and then 'wget facebook.com' failed with 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can run the same wget successfully, and from inside the container I can ping the host. I don't know why; it might be related to the firewall and port 8118.
That's where I'm stuck.
OMG... I have no other ideas. Can anyone help me?
==============================
PS:
As you can see from the screenshots below, I actually want to install goa and goagen, but I get an error, probably for network reasons, so I want to try going through the proxy; hence the question above.
1. My Go Docker container (screenshot: wget fails inside the container)
2. My host (screenshot: the same wget succeeds on the host)
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
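For reference, on 17.07+ the client-side proxy configuration in ~/.docker/config.json looks roughly like this (HostIP:8118 is taken from the question; adjust it to your proxy):

{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}

Docker then sets the matching http_proxy/https_proxy environment variables in the containers you create.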
I have been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server and just run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
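For example, you can pull out the most commonly needed settings directly (the field paths are standard inspect output; use your own container id):

docker container inspect --format '{{json .HostConfig.PortBindings}}' $container_id
docker container inspect --format '{{json .Mounts}}' $container_id
docker container inspect --format '{{json .Config.Env}}' $container_id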
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, hand-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
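A minimal sketch of such a file (the image name, port and volume below are placeholders, not taken from the question):

version: "3"
services:
  myapp:
    image: registry.example.com/myapp:1.0
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/lib/myapp
    restart: unless-stopped

With that file in version control, redeploying 1:1 on another server is just a matter of running docker-compose up -d (or docker stack deploy) against the same file.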
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the path of saving the instance's current status, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
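A hedged sketch of that export/import round trip (container and image names are placeholders):

docker export my_container > my_container.tar
docker import my_container.tar myapp:exported
docker run -it --name my_container myapp:exported /bin/sh

Note that docker import produces a flat image: metadata such as CMD/ENTRYPOINT, exposed ports and environment variables is lost, which is why a command (/bin/sh here) has to be passed explicitly, and why rebuilding from a Dockerfile is usually the better route.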
There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values, see https://docs.docker.com/compose/compose-file/#deploy, which include memory and CPU limits, which seem nice/important to be able to set.
So maybe I should use Docker Swarm, even though I'm deploying on a single node only? Also, the installation instructions would then be simpler for other people to follow (they won't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion-based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack instead needs pre-built images. Also, with Docker-Compose you can easily start and stop single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database. Then you don't want Swarm to auto-restart the database right away.
In production, use Docker Swarm (unless: see below), so you can configure mem limits. Docker-Compose has less functionality than Docker Swarm (no mem or cpu limits, for example) and doesn't have anything that Swarm lacks (right?). So there is no reason to use Compose in production. (Except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands.)
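For example, the resource limits live under the deploy: key, which docker stack deploy honours but classic docker-compose ignores (the service name and values are placeholders):

version: "3"
services:
  app:
    image: myapp:1.2.3
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M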
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then have image: name:${IMAGE_VERSION} in the docker-compose.yml file. See https://github.com/moby/moby/issues/29133; instead you'll need to set the env vars "manually": IMAGE_VERSION=SOMETHING docker stack up ... (this actually made me stick with Docker-Compose, plus the fact that I didn't reasonably quickly find out how to view a container's log via Swarm; Swarm seemed more complicated.)
In addition to @KajMagnus's answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussion here.
We are trying to upgrade a Docker container to the latest image.
Here is the process I am trying to follow.
Let's say I have already pulled a Docker image with version 1.1
and created a container from image 1.1.
Now we have fixed some issue in image 1.1 and uploaded it as 1.2.
After that, I want to update the container running on 1.1 to 1.2.
Below are the steps I thought I would follow:
1. Pull the latest image
2. Inspect the Docker container to get all the info (ports, mapped volumes etc.)
3. Stop the current container
4. Remove the current container
5. Create a container with the values obtained in step 2, using the latest image
The problem I am facing is that I don't know how to use the output of "docker inspect" while creating the container.
What you should have done in the first place:
In production environments with lots of containers, you will lose track of docker run commands. In order to keep up with the complexity, use docker-compose.
First you need to install docker-compose; refer to the official documentation for that.
Then create a YAML file describing your environment. You can specify more than one container (for apps that require multiple services, for example nginx, php-fpm and mysql).
Having done all that, when you want to upgrade containers to newer versions you just change the version in the YAML file and do a docker-compose down and docker-compose up.
Refer to the Compose documentation for more info.
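A rough sketch of what that looks like for the 1.1 to 1.2 upgrade in the question (the service name, port and volume are placeholders):

version: "3"
services:
  myservice:
    image: myrepo/myimage:1.2   # bump this tag from 1.1 to 1.2 to upgrade
    ports:
      - "8080:80"
    volumes:
      - appdata:/data
volumes:
  appdata:

Then:

docker-compose pull
docker-compose down
docker-compose up -d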
What to do now:
Start by reading the docker inspect output. Then gather the facts (a sketch of how to query each one follows this list):
Ports published (host-to-container mappings)
Networks used (names, drivers)
Volumes mounted (bind/volume, driver, path)
Possible runtime command arguments
Possible environment variables
Restart policy
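For example (mycontainer is a placeholder name; the field paths are standard docker inspect output):

docker container inspect --format '{{json .HostConfig.PortBindings}}' mycontainer
docker container inspect --format '{{json .NetworkSettings.Networks}}' mycontainer
docker container inspect --format '{{json .Mounts}}' mycontainer
docker container inspect --format '{{json .Config.Cmd}} {{json .Config.Env}}' mycontainer
docker container inspect --format '{{json .HostConfig.RestartPolicy}}' mycontainer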
Then try to create a docker-compose YAML file with those facts on a test machine, and test your setup.
When confident enough, roll it out to production and keep the latest compose YAML for later reference.