To update some images I used 'docker-compose pull'.
Then I built them: 'docker-compose build'.
I only wanted to update the application container, so I removed it and restarted it:
'docker-compose rm app' and 'docker-compose up -d app'.
But something unwanted happened: the data container was recreated too, and the old data is lost.
Dockerfile for the data container:
FROM gitlab/gitlab-ce:latest
VOLUME ["/etc/gitlab", "/var/log/gitlab", "/var/opt/gitlab"]
ENTRYPOINT ["hostname"]
docker-compose.yml:
version: '2'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    domainname: example.com
    hostname: gitlab
    networks:
      - devenv
    restart: always
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.example.com'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    ports:
      - '80:80'
      - '2224:22'
    volumes_from:
      - gitlabdata
  gitlabdata:
    build: gitlab-data
How can I avoid this next time?
The docker-compose up command has a --no-recreate flag.
This flag avoids recreating containers if they already exist.
Therefore you can run

docker-compose up -d --no-recreate app
Your issue occurred because you created volumes inside a container and then removed the container; removing the container also removed its volumes.
You should change your volumes so that they bind mount a host directory. This way your files are stored on the host, and you can reattach those directories if the container goes away. Another benefit is that you can access the files directly from the host.
Here is roughly what your compose file would look like with the new volume config.
version: '2'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    domainname: example.com
    hostname: gitlab
    networks:
      - devenv
    restart: always
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.example.com'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
    ports:
      - '80:80'
      - '2224:22'
    volumes_from:
      - gitlabdata
  gitlabdata:
    build: gitlab-data
    volumes:
      - /dir/on/host:/etc/gitlab
      - /dir/on/host2:/var/log/gitlab
      - /dir/on/host3:/var/opt/gitlab
You can mount whatever host directories you like (note that the host path comes first in the host:container mapping). More info about volumes here: https://docs.docker.com/engine/userguide/containers/dockervolumes/#mount-a-host-directory-as-a-data-volume
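If you would rather not manage host directories yourself, named volumes are another option worth considering: they live outside the container lifecycle, so 'docker-compose rm' of a service does not delete them unless you remove them explicitly. A minimal sketch, with hypothetical volume names (gitlab_config, gitlab_logs, gitlab_data) chosen only for illustration:

version: '2'
services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    volumes:
      - gitlab_config:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
volumes:
  gitlab_config:
  gitlab_logs:
  gitlab_data:

The trade-off versus bind mounts is that the data lives under Docker's own storage area ('docker volume inspect gitlab_data' shows where) rather than in a path you picked on the host.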
I'm facing an issue with this: in my case, every time I restart the docker-compose service, the containers are recreated.
My /docker-compose-app.service:
[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/dockercnf
ExecStart=/usr/local/bin/docker-compose up --no-recreate -d
ExecStop=/usr/local/bin/docker-compose down
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
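One detail worth noting about this unit file, as a hedged observation: ExecStop runs 'docker-compose down', which removes the containers (and the default network), so every restart of the systemd unit necessarily ends up creating fresh containers. If the goal were to keep the same containers across unit restarts, a sketch of an alternative would be to only stop them:

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/srv/dockercnf
ExecStart=/usr/local/bin/docker-compose up --no-recreate -d
ExecStop=/usr/local/bin/docker-compose stop
TimeoutStartSec=0

Either way, any data that must survive a recreation should live in a volume, as in the edit further below.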
The docker-compose.yml
version: '2'
services:
  mysql2:
    image: mysql/mysql-server:8.0
    container_name: mysql2-container
    networks:
      static-network:
        ipv4_address: 172.18.1.2
  mysql57:
    image: mysql/mysql-server:5.7
    container_name: mysql57-container
    networks:
      static-network:
        ipv4_address: 172.18.1.3
networks:
  static-network:
    ipam:
      config:
        - subnet: 172.18.0.0/16
          # docker-compose v3+: do not use ip_range
          ip_range: 172.18.1.0/24
I don't know what I'm missing or doing wrong.
Edited - Solved
After reading the StackOverflow thread and drawing my own conclusions, I edited my docker-compose.yml file as follows:
version: '2'
services:
  mysql2:
    image: mysql/mysql-server:8.0
    container_name: mysql2-container
    networks:
      static-network:
        ipv4_address: 172.18.1.2
  mysql57:
    image: mysql/mysql-server:5.7
    container_name: mysql57-container
    restart: always
    volumes:
      - /srv/mysql57-container:/var/lib/mysql
    networks:
      static-network:
        ipv4_address: 172.18.1.3
networks:
  static-network:
    ipam:
      config:
        - subnet: 172.18.0.0/16
          # docker-compose v3+: do not use ip_range
          ip_range: 172.18.1.0/24
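To double-check that the bind mount is doing its job, a rough sanity test is to recreate only the mysql57 service and confirm that the files in /srv/mysql57-container on the host are still there afterwards:

docker-compose stop mysql57
docker-compose rm -f mysql57
docker-compose up -d mysql57
ls /srv/mysql57-container   # data written by the previous container should still be present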
Related
I have the following docker-compose configuration:
version: '2'
services:
  nginx:
    image: 'nginx:latest'
    expose:
      - '80'
      - '8080'
    container_name: nginx
    ports:
      - '80:80'
      - '8080:8080'
    volumes:
      - '/home/ubuntu/nginx.conf:/etc/nginx/nginx.conf'
    networks:
      - default
    restart: always
  inmates:
    image: 'xxx/inmates:mysql'
    container_name: 'inmates'
    expose:
      - '3000'
    env_file: './inmates.env'
    volumes:
      - inmates_documents_images:/data
      - inmates_logs:/logs.log
    networks:
      - default
    restart: always
  we19:
    image: 'xxx/we19:dev'
    container_name: 'we19'
    expose:
      - '3000'
    env_file: './we19.env'
    volumes:
      - we19_logs:/logs.log
    networks:
      - default
    restart: always
  desktop:
    image: 'xxx/desktop:dev'
    container_name: 'desktop'
    expose:
      - '3000'
    env_file: './desktop.env'
    volumes:
      - desktop_logs:/logs.log
    networks:
      - default
    restart: always
volumes:
  inmates_documents_images:
  inmates_logs:
  desktop_logs:
  we19_logs:
Assume I ran docker-compose up -d --build.
Now the 4 containers (services) are running.
Now I want to update the ./desktop.env file with new content. Is there a way to recreate only the desktop container with the new env file, or is a full docker-compose restart necessary?
Basically, I'm trying to restart only the desktop container with the new env file while keeping the other 3 containers up and running without restarting them.
Extract from docker-compose up --help
[...]
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
[...]
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
[...]
The following command should do the trick in your case.
docker-compose up -d desktop
If not, see the documentation for other options you can use to meet your exact requirement (e.g. --force-recreate, --renew-anon-volumes, ...)
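In other words, a sketch of the whole workflow would be: change the env file, then ask Compose to bring up only that one service. Compose compares the new configuration (including the variables read from the env file) against the running container and should recreate just that service, leaving the others alone; if it ever fails to pick the change up, the --force-recreate option mentioned above forces it.

# edit ./desktop.env first, then:
docker-compose up -d desktop
docker-compose ps   # nginx, inmates and we19 keep their original uptime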
I am getting the error below while running my docker-compose file.
I have created a bridge network with the name my_network.
Unsupported config option for services.networks: 'my_network'
docker-compose.yml
version: '3.3'
services:
  webapp1:
    image: nginx:latest
    container_name: my_container
    ports:
      - "8080:8080"
    networks:
      - my_network
    volumes:
      - /home/ajay/nginx:/www/data
That looks like a docker-compose configuration issue. Use the docker-compose file below: it declares the network, which you can then use from the services in the same file.
version: '3.5'
services:
  webapp1:
    image: nginx:latest
    container_name: my_container
    ports:
      - "8080:80"
    networks:
      - my_network
networks:
  my_network:
    driver: bridge
Also, the Nginx port inside the container should be 80, unless you modified the default configuration.
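Alternatively, since you said you already created a bridge network named my_network by hand, a hedged sketch would be to declare it as external, so Compose attaches to the existing network instead of creating its own:

docker network create my_network   # only needed if it does not exist yet

version: '3.5'
services:
  webapp1:
    image: nginx:latest
    ports:
      - "8080:80"
    networks:
      - my_network
networks:
  my_network:
    external: true

With external: true, Compose uses the network exactly as named and errors out if it is missing, rather than creating a project-prefixed one.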
I have a 'master' container that should already be running when all the others start.
In it I have a conf/ directory that this service monitors, applying the relevant changes.
How can I have each new container drop a file in this directory?
Real scenario:
Given my docker-compose.yml below, I want each service (portainer, whoami, apache) to drop a .yml file into the "./traefik/conf/:/etc/traefik/conf/" path mapping of the traefik service.
docker-compose.yml
version: "3.5"
services:
traefik:
image: traefik
env_file: ./traefik/env
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/conf/:/etc/traefik/conf/
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
portainer:
image: portainer/portainer
depends_on: [traefik]
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
whoami:
image: containous/whoami
depends_on: [traefik]
portainer.traefik.yml
http:
  routers:
    portainer:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: portainer-preauth#docker
      rule: Host(`portainer.docker.mydomain`)
whoami.traefik.yml
http:
  routers:
    whoami:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: whoami-preauth#docker
      rule: Host(`whoami.docker.mydomain`)
Where are the files portainer.traefik.yml and whoami.traefik.yml located? If they are on the host machine, you can directly copy them to ./traefik/conf/. – Shashank V
The thing is, I can't have all the files in traefik/conf.
That would require manually dropping a file there every time I create a new image.
I believe every service should be responsible for its own files.
Also, when traefik starts and finds files for services that haven't started yet, it logs lots of errors.
To avoid this behavior, I would like to put each file there only when its container is started.
Below is the project file structure.
You can use a volume across all services. Just define it in your docker-compose.yml and assign it to each service:
version: "3.5"
services:
traefik:
image: traefik
env_file: ./traefik/env
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/conf/:/etc/traefik/conf/
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
- foo:/path/to/share/
portainer:
image: portainer/portainer
depends_on: [traefik]
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- foo:/another/path/to/share/
whoami:
image: containous/whoami
depends_on: [traefik]
volumes:
- foo:/and/another/path/
volumes:
foo:
driver: local
This is the equivalent of the --volumes-from feature of "plain" Docker, or at least what comes closest to it.
Your master container would then have to use the same volume. If that container doesn't run within the same Docker Compose context, you have to define the volume externally beforehand.
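A minimal sketch of that external case, assuming the volume keeps the name foo: create it once on the host, then mark it as external in every compose file that needs it.

docker volume create foo

volumes:
  foo:
    external: true

Compose will then fail with an error if the volume does not exist, instead of silently creating a new, empty one.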
I don't know how to run the docker-compose equivalent of my command:
docker run -d --name=server --restart=always --net network --ip 172.18.0.5 -p 5003:80 -v $APP_PHOTO_DIR:/app/mysql-data -v $APP_CONFIG_DIR:/app/config webserver
I've done this:
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
Are you sure you need a fixed IP address for the container? It is not a recommended practice; why do you want to set it explicitly?
docker-compose.yml
version: '3'
services:
  server:                                # correct, this will be the service name
    image: webserver                     # this should be the image name from your command line
    ports:
      - "5003:80"                        # correct, but only needed if you access the service from outside
    volumes:                             # the volumes just repeat your command line; you can use env vars
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    command: ["python", "/app/app.py"]   # JSON notation strongly recommended
    restart: always
Then docker-compose up -d and that's it. You can access your service from the host at localhost:5003; there is no need for an internal IP.
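Since the compose file relies on $APP_PHOTO_DIR and $APP_CONFIG_DIR, remember that Compose substitutes those from the shell environment (or from an .env file next to docker-compose.yml). A rough usage sketch, with made-up host paths:

export APP_PHOTO_DIR=/srv/app/photos
export APP_CONFIG_DIR=/srv/app/config
docker-compose up -d
curl http://localhost:5003/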
For networks, I always include the network specification in the docker-compose file. If the network already exists, Docker will not create a new one.
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
networks:
  app_net:
    name: NETWORK_NAME
    driver: bridge
    ipam:
      config:
        - subnet: NETWORK_SUBNET
volumes:
  VOLUME_NAME:
    driver: local
And you will need to add the volumes separately to match the docker run command.
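For reference, the volume lines matching the original docker run command would look roughly like this under the server service ($APP_PHOTO_DIR and $APP_CONFIG_DIR come from that command and still need to be set in the environment):

    volumes:
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config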
I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have several services including, for example, elasticsearch, rabbitmq and a webapp.
My problem is that a service cannot reach another service by its hostname, because docker-compose does not put all the service hosts into the /etc/hosts file. I don't know their IPs because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have circular references and links don't support that. But using networks does not put all the service hosts into each service's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml:
version: '2'
services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp
  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp
  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp
  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp
  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp
  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp
networks:
  webapp:
    driver: bridge
This is how I understand that they can't communicate with each other; I get this error during elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when it starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
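A minimal sketch of such an entrypoint wrapper, assuming the image's original entrypoint is /docker-entrypoint.sh and that getent is available inside the container (check the actual image and adjust the exec line accordingly):

#!/bin/sh
# wait-for-hosts.sh - hypothetical wrapper entrypoint
# Block until every peer hostname resolves on the compose network,
# then hand control (and the original command) to the image's entrypoint.
for host in elasticsearch1 elasticsearch2 elasticsearch3; do
  until getent hosts "$host" > /dev/null 2>&1; do
    echo "waiting for $host to become resolvable..."
    sleep 2
  done
done
exec /docker-entrypoint.sh "$@"

You would mount this script into each elasticsearch container and point entrypoint: at it in docker-compose.yml; the existing command: lines are then passed through as "$@".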