I have this docker-compose.yml file code:
version: '3.3'
services:
  redis:
    container_name: redis
    image: 'redis:latest'
    environment:
      X_REDIS_PORT: ${REDIS_PORT}
    ports:
      - ${REDIS_PORT}:${REDIS_PORT}
    volumes:
      - redis:/data
networks:
  app_network:
    external: true
    driver: bridge
volumes:
  postgres:
  pgadmin:
  supertoken:
  redis:
I want the cached data to be saved inside the Redis container, but it is not being saved in the container; instead it gets saved on my local machine.
How can I change this behaviour?
Inspect your volume:
docker volume inspect redis
The mountpoint is under /var/lib/docker/volumes/; check the redis volume's data there.
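The output looks roughly like the following (note that Compose usually prefixes volume names with the project name, e.g. myproject_redis, so check docker volume ls if the plain name is not found):
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/redis/_data",
        "Name": "redis",
        "Scope": "local"
    }
]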
I use a folder instead; there is a demo in my GitHub repo:
volumes:
  - ./data:/data
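In the context of the question's file, that would look like the sketch below; ./data is resolved relative to the docker-compose.yml, so the cached data shows up in a data folder next to it:
services:
  redis:
    image: 'redis:latest'
    volumes:
      - ./data:/data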
I have two containers:
docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:14.1
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
    network_mode: bridge
  web:
    container_name: web
    build: .
    ........
    network_mode: bridge
    external_links:
      - postgres
    depends_on:
      - db
volumes:
  postgres_data:
    name: postgres_data
After docker-compose up, when I recreate only one container, "db", everything works, but I cannot connect to the container "web"; I get the error: "Failure
Cannot link to a non running container: /postgres AS /web/postgres".
In the container "web" I call the db as host=postgres.
What am I doing wrong?
The external_links: setting is obsolete and you don't need it. You can just remove it with no adverse consequences.
network_mode: bridge and container_name: are also unnecessary, though they shouldn't specifically cause problems; still, I'd delete them. What you show can be reduced to
version: '3.8'
services:
  db:
    image: postgres:14.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
  web:
    build: .
    ........
    depends_on:
      - db
volumes:
  postgres_data: # empty
Since Compose creates a network named default for you and attaches containers to it, your application container can still reach the database container using the hostname db. Networking in Compose in the Docker documentation describes this further.
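For example, you could point the application at the database through an environment variable (POSTGRES_HOST here is hypothetical; use whatever setting your application actually reads):
services:
  web:
    build: .
    depends_on:
      - db
    environment:
      POSTGRES_HOST: db  # the Compose service name doubles as its DNS hostname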
I have an application where I need to reset the database (wipe it completely).
I ran all the commands I could find:
docker system prune
docker system prune -a -f
docker volume prune
Using docker volume ls, I copied the volume ID and then ran
docker volume rm "the volume id"
When I do docker system df nothing is shown anymore. However, once I run my app again
docker-compose up --build
the database still contains old values.
What am I doing wrong?
EDIT: here is my compose file:
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
networks:
- postgres
extra_hosts:
- "host.docker.internal:host-gateway"
restart: always
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/var/lib/postgresql/data
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
It seems the database in your config is bind-mounted to a directory on your host system:
volumes:
  - /data/postgres:/var/lib/postgresql/data
so the data in the container's /var/lib/postgresql/data is read from and written to your local /data/postgres directory. This is also why the docker volume commands had no effect: a bind mount is not a named volume, so docker volume prune and docker volume rm never touch it.
If you want to delete the data, you should empty out that directory (or move it until you are sure you can delete it).
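A minimal sketch of that reset, using the paths from the compose file above (adjust the sudo usage to your setup):
docker-compose down
sudo mv /data/postgres /data/postgres.bak   # or: sudo rm -rf /data/postgres
docker-compose up --build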
Using Docker Toolbox on Windows 10 Home (Docker version 19.03), we created a docker-compose.yml and added a secrets file as JSON. It runs fine on a Mac system, but it fails on Windows 10 Home.
Error after running docker-compose up:
ERROR: for orthancserver Cannot create container for service orthanc: invalid mount config for type
"bind": invalid mount path: 'C:/Users/ABC/Desktop/Project/orthanc.json' mount path must be absolute
docker-compose.yml:
version: "3.7"
services:
orthanc:
image: jodogne/orthanc-plugins:1.6.1
command: /run/secrets/
container_name: orthancserver
restart: always
ports:
- "4242:4242"
- "8042:8042"
networks:
- mynetwork
volumes:
- /tmp/orthanc-db/:/var/lib/orthanc/db/
secrets:
- orthanc.json
dcserver:
build: ./dc_node_server
depends_on:
- orthanc
container_name: dcserver
restart: always
ports:
- "5001:5001"
networks:
- mynetwork
volumes:
- localdb:/database
volumes:
localdb:
external: true
networks:
mynetwork:
external: true
secrets:
orthanc.json:
file: orthanc.json
The orthanc.json file is kept next to docker-compose.yml.
Found an alternative solution for Windows 10 Home with Docker Toolbox. As commented by @Schwarz54, file sharing via a Docker volume works well for the Dockerized Orthanc server.
Add a shared folder:
Open the Oracle VM VirtualBox Manager
Go to the settings of the default VM
Click Shared Folders
Add the C:\ drive to the list
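Equivalently, this can be scripted (a sketch; VBoxManage ships with VirtualBox, and the Docker Toolbox VM is typically named default; with boot2docker's automount the share named c appears as /c inside the VM):
docker-machine stop default
VBoxManage sharedfolder add default --name c --hostpath C:\ --automount
docker-machine start default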
Edit docker-compose.yml to pass the config file to Orthanc via a volume:
version: "3.7"
services:
orthanc:
image: jodogne/orthanc-plugins:1.6.1
command: /run/secrets/
container_name: orthancserver
restart: always
ports:
- "4242:4242"
- "8042:8042"
networks:
- mynetwork
volumes:
- /tmp/orthanc-db/:/var/lib/orthanc/db/
- /c/Users/ABCUser/Desktop/Project/orthanc.json:/etc/orthanc/orthanc.json:ro
dcserver:
build: ./dc_node_server
depends_on:
- orthanc
container_name: dcserver
restart: always
ports:
- "5001:5001"
networks:
- mynetwork
volumes:
- localdb:/database
volumes:
localdb:
external: true
networks:
mynetwork:
external: true
I'm trying to deploy a simple Node - Redis architecture using docker-compose.
I have a dump.rdb with a backup of the Redis data, and I want to launch a container with that data loaded.
My docker-compose.yml looks like this:
version: '3'
services:
  redis:
    image: redis:alpine
    container_name: "redis"
    ports:
      - "6379:6379"
  server:
    build: ./src
    image: hubName:imageName
    container_name: containerName
    links:
      - redis
    depends_on:
      - "redis"
    ports:
      - "8443:8443"
    restart: always
Should I include volumes? What if I want persistence of the Redis data?
Thanks :)
You can use a docker-compose.yml like:
version: '3'
services:
  redis:
    image: redis:alpine
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - /data/redis:/data
  server:
    build: ./src
    image: hubName:imageName
    container_name: containerName
    links:
      - redis
    depends_on:
      - "redis"
    ports:
      - "8443:8443"
    restart: always
Copy your dump.rdb into the /data/redis folder on your host machine, then start docker-compose.
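For example (assuming dump.rdb sits in the current directory, and using the host path from the compose file above):
sudo mkdir -p /data/redis
sudo cp dump.rdb /data/redis/
docker-compose up -d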
Regarding Redis persistence: you need a Docker volume, and Redis offers two persistence modes, RDB and AOF.
RDB: performs point-in-time snapshots of your dataset at specified intervals (for example, every 60 seconds if at least 10000 keys have changed).
AOF: logs every write operation received by the server (e.g. a SET command); the log is replayed at server startup, reconstructing the original dataset.
For more: https://redis.io/topics/persistence
You should decide based on how critical your data is. In this case you already have an RDB dump, so you can use RDB; it is the default option.
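If you later want AOF instead, a minimal sketch (--appendonly is a standard redis-server flag; the host path is the same one used above):
redis:
  image: redis:alpine
  command: redis-server --appendonly yes  # enable append-only persistence
  volumes:
    - /data/redis:/data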
I'm trying to save data using a volume, but it won't restore my data when I docker stack deploy it. How do I set up the volume?
I'm running docker stack deploy -c compose-db.yml db.
This is my compose file:
compose-db.yml
version: '3'
services:
  redis:
    image: 172.16.12.154:5000/redis
    networks:
      pitbull-overlay:
        aliases:
          - redis
    volumes:
      - redis-volume:/data
    ports:
      - 6379:6379
  mongodb:
    image: 172.16.12.154:5000/mongodb
    networks:
      pitbull-overlay:
        aliases:
          - mongodb
    volumes:
      - mongodb-volume:/data/db
    ports:
      - 27017:27017
networks:
  pitbull-overlay:
    external:
      name: pitbull-overlay
volumes:
  mongodb-volume:
  redis-volume: