Docker-compose volume resetting when upgrading image - docker

I have a Docker image of Grafana 8.0.5. I created a volume using docker volume create grafana-storage.
I can stop the container and bring it back up with no data loss.
However, if I update my docker-compose.yml to point to the latest version, 8.0.6, and re-run docker-compose up -d, the volume goes back to a default install, losing any of my previously created dashboards, accounts, data sources, etc.
As far as I understand, I shouldn't lose any data, since it should all be in the volume. How do you update images without resetting the volume?
docker-compose.yml:
version: "3.3"

volumes:
  grafana-storage:
    external: true

services:
  grafana:
    image: "grafana/grafana:8.0.6"
    container_name: "grafana"
    volumes:
      - "grafana-storage:/usr/src/grafana"
Docker Version:
Docker version 20.10.7, build f0df350
Docker-Compose Version:
docker-compose version 1.29.2, build 5becea4c
docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fb6da4a8de9 grafana/grafana:8.0.6 "/run.sh" 17 minutes ago Up 17 minutes 3000/tcp grafana
046892ab0a7b traefik:v2.0 "/entrypoint.sh --pr…" 46 minutes ago Up 23 minutes 80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp traefik
docker volume ls:
DRIVER VOLUME NAME
local grafana-storage

The data is not stored in /usr/src/grafana but in /var/lib/grafana. As a consequence, the volume definition in your docker-compose.yml is wrong, and every time the container is recreated the data is lost.
Change the path to /var/lib/grafana and it should work:
services:
  grafana:
    [...]
    volumes:
      - "grafana-storage:/var/lib/grafana"
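Putting the fix into the file from the question, a complete corrected docker-compose.yml might look like this (volume and service names are taken from the question):

```yaml
version: "3.3"

volumes:
  grafana-storage:
    external: true

services:
  grafana:
    image: "grafana/grafana:8.0.6"
    container_name: "grafana"
    volumes:
      # /var/lib/grafana is where Grafana keeps its database,
      # dashboards, plugins, etc., so this is what must persist
      - "grafana-storage:/var/lib/grafana"
```

With the mount pointing at the real data directory, bumping the image tag and re-running docker-compose up -d recreates the container without touching the data.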

Related

Service added in second docker-compose file is orphaned and not visible in docker-compose ps output

I have the following docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres:${PG_VERSION}
    ports:
      - "${DB_PORT}:5432"
    environment:
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_PASS}
      - POSTGRES_DB=${DB_NAME}
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_PASS}
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - DB_ANON_ROLE=${DB_ANON_ROLE}
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d

  # PostgREST
  postgrest:
    image: postgrest/postgrest
    ports:
      - "${API_PORT}:3000"
    links:
      - db:db
    environment:
      - PGRST_DB_URI=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}
      - PGRST_DB_SCHEMA=${DB_SCHEMA}
      - PGRST_DB_ANON_ROLE=${DB_ANON_ROLE}
      - PGRST_JWT_SECRET=${JWT_SECRET}
    depends_on:
      - db

  swagger:
    image: swaggerapi/swagger-ui
    ports:
      - ${SWAGGER_PORT}:8080
    environment:
      API_URL: ${SWAGGER_API_URL:-http://localhost:${API_PORT}/}
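As an aside, Compose uses the same ${VAR:-default} default-value syntax as POSIX shells, so that last expansion (easy to mistype as ${VAR-:...}) can be sanity-checked in any shell:

```shell
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to the default (which may itself contain expansions)
unset SWAGGER_API_URL
API_PORT=3000
echo "${SWAGGER_API_URL:-http://localhost:${API_PORT}/}"
# prints: http://localhost:3000/
```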
And another file docker-compose.prod.yml
version: '3'
services:
  db:
    volumes:
      - ./initdb/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./var/postgres-data:/var/lib/postgresql/data
      - ./var/log/postgresql:/var/log/postgresql
      - ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf

  nginx:
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./var/log/nginx:/var/log/nginx
    depends_on:
      - postgrest
As you can see I am adding a few volumes to the db service, but importantly I have also added a new nginx service.
The reason I am adding it in this file is because nginx is not needed during development.
However, what is strange is when I issue the docker-compose up command as follows:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
And then list the processes with
docker-compose ps
I get the following output
Name Command State Ports
-----------------------------------------------------------------------------------------
api_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
api_postgrest_1 /bin/sh -c exec postgrest ... Up 0.0.0.0:3000->3000/tcp
api_swagger_1 /docker-entrypoint.sh sh / ... Up 80/tcp, 0.0.0.0:8080->8080/tcp
Notice that nginx is not here. However, it is actually running; when I issue:
docker ps
I get the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba281fd80743 nginx "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp api_nginx_1
d0028fdaecf5 postgrest/postgrest "/bin/sh -c 'exec po…" 8 minutes ago Up 8 minutes 0.0.0.0:3000->3000/tcp api_postgrest_1
1d6e3d689210 postgres:11.2 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:5432->5432/tcp api_db_1
ed5fa7a71848 swaggerapi/swagger-ui "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 80/tcp, 0.0.0.0:8080->8080/tcp api_swagger_1
So my question is, why is docker-compose not seeing nginx as part of the group of services?
NOTE: The reason I am using this override approach, and not using extends, is that extends does not support services with links and depends_on properties. My understanding is that combining files like this is the recommended approach; however, I do not understand why it should not be possible to add new services in a secondary file.
For example, see https://docs.docker.com/compose/extends/#example-use-case; there the docs add a new dbadmin service using this method, but with no mention that the service won't be included in the output of docker-compose ps, or that there will be warnings about orphans, for example:
$ docker-compose down
Stopping api_postgrest_1 ... done
Stopping api_db_1 ... done
Stopping api_swagger_1 ... done
WARNING: Found orphan containers (api_nginx_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing api_postgrest_1 ... done
Removing api_db_1 ... done
Removing api_swagger_1 ... done
Removing network api_default
Tested on:
Docker version 20.10.4, build d3cb89e
Docker version 19.03.12-ce, build 48a66213fe
and:
docker-compose version 1.27.0, build unknown
docker-compose version 1.29.2, build 5becea4c
So I literally figured it out as I was typing, after noticing a related question.
The trick is this: https://stackoverflow.com/a/45515734/2685895
The reason my new nginx service was not visible is that docker-compose ps by default only looks at the docker-compose.yml file.
In order to get the expected output, one needs to specify both files. In my case:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
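An alternative that avoids repeating the -f flags on every command is the COMPOSE_FILE environment variable, which docker-compose reads as a colon-separated list of files:

```shell
# List the files once; ':' is the path separator on Linux/macOS
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
echo "$COMPOSE_FILE"
# From here on, plain `docker-compose ps` and `docker-compose down`
# see both files, so nginx is listed and no orphan warnings appear.
```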

How to run a dockercompose file from another dockercompose file?

I have looked at the Docker docs for the answer to this question and I don't see it laid out simply anywhere. I want to start my app in Docker using docker-compose.yml, and I want that docker-compose.yml to start up other containers defined in another docker-compose.yml file in a different project.
version: '3.4'

# we need a network if we want the services to talk to each other
networks:
  services:
    driver: bridge

services:
  jc:
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        - PORT=8080
      network: host
    networks:
      - services
    image: jc
    container_name: jc
    ports:
      - 8080:8080
How can I edit this file so that I can run another docker-compose.yml file in a different file path when I run this docker-compose.yml file?
After trying to use the extends option of the docker-compose file, in several variations, I consistently received an error indicating that extends is not supported, even after updating to the latest Docker version. I did, however, solve the problem by running this command:
docker-compose -f /path/to/other/docker-compose-file/docker-compose.yml up
I was sure that I had to add something to the docker-compose file, so I overlooked this in the docs. You can read more about it here:
docker-compose docs
This is kind of a hack, but you can add another container that starts Docker Compose with another docker-compose file.
For example:
Dockerfile for the starter container:
FROM ubuntu:bionic
RUN mkdir -p /compose
WORKDIR /compose
CMD ["docker-compose", "up", "-d"]
The main docker-compose file (which starts a Redis server and the starter container). Note that the Compose binary, the Docker socket, and another docker-compose.yml file are mounted into the starter container:
version: '2.1'
services:
  from-main-compose:
    image: redis:3

  starter:
    image: starter:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose:ro
      - /home/shay/source/composeFromContainer/another-compose/docker-compose.yml:/compose/docker-compose.yml:ro
Second docker compose file:
version: '2.1'
services:
  redis-from-container:
    image: redis:3
The result is this:
b1faa975df49 redis:3 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 6379/tcp compose_redis-from-container_1
7edca79d3d99 redis:3 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 6379/tcp composefromcontainer_from-main-compose_1
Note that if this hack is used as-is, the services will be placed on different networks, so it might need to be tweaked a bit.
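One way to tweak it is to have the second file join the first project's network as an external network. This is a sketch: the network name composefromcontainer_default is an assumption based on Compose's default <project>_default naming, so check the real name with docker network ls first.

```yaml
version: '2.1'

networks:
  default:
    external:
      # assumed name of the main project's auto-created network
      name: composefromcontainer_default

services:
  redis-from-container:
    image: redis:3
```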

docker-compose up not binding ports

/usr/local/bin/docker-compose up
I am using this command on Amazon Linux. It does not bind the ports, so I could not connect to the services running inside the container. The same configuration is working on a local development server. Not sure what I am missing.
[root@ip-10-0-1-42 ec2-user]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec6320747ef3 d8bd4345ca7f "/bin/sh -c 'gulp bu…" 30 seconds ago Up 30 seconds vigilant_jackson
Here is the docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: gulp serve
    env_file:
      - .env
    volumes:
      - .:/app/code
    ports:
      - "8050:8000"
      - "8005:8005"
      - "8888:8888"
$ npm -v
5.6.0
$ docker -v
Docker version 18.06.1-ce, build e68fc7a215d7133c34aa18e3b72b4a21fd0c6136
Are you sure the ports are not published? Use docker inspect; I would guess that they are published. If so, then my guess is that since you are on AWS, you are not reaching the opened ports (8050, 8005, and 8888 are ports of the AWS Linux instance, if I got your question correctly).
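Two commands make this check concrete (a sketch; vigilant_jackson is the auto-generated container name from the docker ps output above):

```shell
# Show the port mappings Docker actually created for the container
docker port vigilant_jackson

# Or dump the full host->container mapping as JSON
docker inspect --format '{{json .NetworkSettings.Ports}}' vigilant_jackson
```

If these show mappings like 0.0.0.0:8050->8000/tcp, the ports are published and the problem is reaching the instance itself rather than anything in Compose.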

docker compose up nginx reverse proxy not adding containers to docker0 bridge

After I carry out docker-compose up, it starts the containers.
When I do docker ps I get the output below, which tells me that the containers are running. However, when I do docker network inspect bridge, the result shows that no containers are part of the docker0 bridge.
When I then carry out docker run meanchat_myserver, it actually does show up on docker0, and I also get the output that the server is running on port 3000, which I don't get when using docker-compose.
What am I doing wrong here?
I have read that with docker0 I can only refer to IPs, not names, to connect to other containers. Can I assume the containers' IPs don't change, and that this works without issue when deploying the app in production?
02cf08b1c3da d57f06ba9c68 "npm start" 33 minutes ago Up 33 minutes 4200/tcp meanchat_client_1
e257063c9e21 meanchat_myserver "npm start" 33 minutes ago Up 33 minutes 3000/tcp meanchat_myserver_1
02441c2e43f5 e114a298eabd "npm start" About an hour ago Up 33 minutes 0.0.0.0:80->80/tcp meanchat_nginx_1
88d9841d2553 mongo "docker-entrypoint..." 3 hours ago Up 3 hours 27017/tcp meanchat_mongo_1
compose
version: '3'
services:
  # Build the container using the client Dockerfile
  client:
    build: ./
    # This line maps the contents of the client folder into the container.
    volumes:
      - ./:/usr/src/app

  myserver:
    build: ./express-server
    volumes:
      - ./:/usr/src/app
    depends_on:
      - mongo

  nginx:
    build: ./nginx
    # Map Nginx port 80 to the local machine's port 80
    ports:
      - "80:80"
    # Link the client container so that Nginx will have access to it

  mongo:
    environment:
      - AUTH=yes
      - MONGO_INITDB_ROOT_USERNAME=superAdmin
      - MONGO_INITDB_ROOT_PASSWORD=admin123
      - MONGO_INITDB_DATABASE=d0c4ae452a5c
    image: mongo
    volumes:
      - /var/mongodata/data:/data/db
By default, Compose sets up a single network for your app.
For more detail, refer to this link.
This means containers started with Compose won't be placed in the default bridge network.
You can check which network the Compose containers are using with this command:
docker inspect $container_name -f "{{.NetworkSettings.Networks}}"
However, if you want the containers to be in the default bridge network, you can use network_mode.
services:
  service_name:
    # other options....
    network_mode: bridge

Docker containers not running on VM

I am new to Docker and I am following the 'Getting Started' documentation at the Docker site.
I am trying to run 3 containers on a VM.
OS: Centos 7.3
Docker: 17.03.1-ce
I followed the first part and could get hello-world running on a container inside the VM.
Then I moved on to the Docker compose example.
I have the following directory structure:
home
|
- docker-compose.yml
|
- docker-test
|
- app.py
- Dockerfile
- requirements.txt
The files under docker-test are from the python app example on the docker website.
With the docker-compose, I was attempting to run 3 containers of the hello-world example.
My docker-compose.yml:
version: "3"
services:
  web:
    image: hello-world
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
Then I ran the following commands:
sudo docker swarm init
sudo docker stack deploy -c docker-compose.yml getstartedlab
sudo docker stack ps getstartedlab shows:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iytr4ptz3m8l getstartedlab_web.1 hello-world:latest <node1> Shutdown Complete 16 minutes ago
s5t41txo05ex getstartedlab_web.2 hello-world:latest <node2> Shutdown Complete 16 minutes ago
91iitdnc49fk getstartedlab_web.3 hello-world:latest <node3> Shutdown Complete 16 minutes ago
However, sudo docker ps shows no containers, and when I curl http://localhost:80 it can't connect.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
What am I missing?
Your docker-compose.yml file says that the web service should use the hello-world image, which just prints a message and exits when run, leading to all of the containers stopping. Presumably you meant to use the image created by building docker-test/; to do this, simply replace the image: hello-world line with build: docker-test.
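The substituted service definition would start like the sketch below. One caveat: docker stack deploy ignores the build: key, so for the swarm workflow in the question the image needs to be built and tagged beforehand (e.g. with docker-compose build); the docker-test:latest tag here is a hypothetical name.

```yaml
version: "3"
services:
  web:
    # replaces `image: hello-world`; builds from the docker-test/ directory
    build: docker-test
    # hypothetical tag, so `docker stack deploy` has a prebuilt image to run
    image: docker-test:latest
    deploy:
      replicas: 3
```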
