Start redis container with backup dump.rdb - docker

I'm trying to deploy a simple Node-Redis architecture using docker-compose.
I have a dump.rdb with a backup of the Redis data, and I want to launch a container with that data already loaded.
My docker-compose.yml looks like this:
version: '3'
services:
  redis:
    image: redis:alpine
    container_name: "redis"
    ports:
      - "6379:6379"
  server:
    build: ./src
    image: hubName:imageName
    container_name: containerName
    links:
      - redis
    depends_on:
      - "redis"
    ports:
      - "8443:8443"
    restart: always
Should I include volumes? What if I want persistence of that Redis data?
Thanks :)

You can use a docker-compose.yml like this:
version: '3'
services:
  redis:
    image: redis:alpine
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - /data/redis:/data
  server:
    build: ./src
    image: hubName:imageName
    container_name: containerName
    links:
      - redis
    depends_on:
      - "redis"
    ports:
      - "8443:8443"
    restart: always
Copy your dump.rdb into the /data/redis folder on your host machine, then start docker-compose.
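A minimal sketch of those two steps, assuming dump.rdb sits next to the compose file and /data/redis is the host folder from the volume mapping above:
sudo mkdir -p /data/redis
# Redis reads dump.rdb from its working directory, which is /data inside the container (bind-mounted from /data/redis on the host)
sudo cp dump.rdb /data/redis/dump.rdb
docker-compose up -d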
Regarding Redis persistence: you need a Docker volume, and Redis has two persistence modes, RDB and AOF.
RDB: RDB persistence performs point-in-time snapshots of your dataset at specified intervals (for example, every 60 seconds if at least 10000 keys have changed).
AOF: logs every write operation received by the server (e.g. SET commands); the log is replayed at server startup to reconstruct the original dataset.
For more: https://redis.io/topics/persistence
You should decide based on how critical your data is. In this case you already have an RDB dump, so you can use RDB; it is the default option.
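If you ever need to tune persistence, one hedged option (not part of the answer above) is to override the container command in the compose file; the snapshot interval and AOF flag below are example values only:
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - /data/redis:/data
    # snapshot every 60 seconds if at least 1000 keys changed, and also keep an append-only log
    command: ["redis-server", "--save", "60", "1000", "--appendonly", "yes"]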

Related

Redis is not saving data in the docker container

I have this docker-compose.yml file code:
version: '3.3'
services:
  redis:
    container_name: redis
    image: 'redis:latest'
    environment:
      X_REDIS_PORT: ${REDIS_PORT}
    ports:
      - ${REDIS_PORT}:${REDIS_PORT}
    volumes:
      - redis:/data
networks:
  app_network:
    external: true
    driver: bridge
volumes:
  postgres:
  pgadmin:
  supertoken:
  redis:
I want to save the cached data inside the Redis container, but it is not getting saved in the container; instead it gets saved on my local machine.
How do I change this behaviour?
Inspect your volume:
docker volume inspect redis
The mountpoint is under /var/lib/docker/volumes/; check the redis volume's data there.
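For example (the volume name myproject_redis is hypothetical; Compose normally prefixes the volume name with the project name):
docker volume ls
# the "Mountpoint" field in the JSON output shows where the data lives on the host,
# typically /var/lib/docker/volumes/<volume-name>/_data
docker volume inspect myproject_redis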
I use a folder instead; there is a demo in my GitHub repo:
volumes:
  - ./data:/data

Docker Postgres database not running or accessible

Below is my Dockerfile:
FROM node:14
WORKDIR /workspace
COPY . .
COPY /prisma ./prisma/
RUN npm install
EXPOSE 3333
EXPOSE 9229
CMD [ "npm", "run", "start" ]
And my docker-compose.yml
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres:
networks:
  nestjs-crud:
And my .env:
DATABASE_URL="postgresql://myuser:mypassword@192.168.1.1/mydb?schema=public"
After struggling with making the database run and be accessible, I found out that one possible solution was to change the DATABASE_URL. As you can see, I am writing my IP Address there to get it to run and this works for me. However, when I replace 192.168.1.1 with the name of the service: postgres, it stops working and I get the error:
Can't reach database server at postgres:5432
Writing the IP address is not ideal of course. However, if I don't write the IP address then the database server just doesn't work.
I think you need to assign the networks in the container specs. You already defined the networks in the YAML, but they also need to be referenced in each container's spec, like:
  todoapp-api:
    container_name: todoapp-api
    networks:
      - nestjs-crud
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
networks:
  nestjs-crud:
    internal: true
My recommendation is to create one network for the db and another for the API, then assign the db network to the db and both networks to the API; that way the API can access the db network. Then you can access the db via the host nestjs-crud.postgres.
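A rough sketch of that layout (the network names db-net and api-net are made up here for illustration):
services:
  todoapp-api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - api-net
      - db-net
  postgres:
    image: postgres:13.5
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    networks:
      - db-net
networks:
  api-net:
  db-net: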
To build on the point in the comment above: the two services are not on the same network, which is why you have this problem. To solve it, put the services on the same network by adding
networks:
  - nestjs-crud
(plus depends_on in todoapp-api)
to both the todoapp-api and postgres services, which gives:
version: '3.8'
services:
  todoapp-api:
    container_name: todoapp-api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nestjs-crud
    depends_on:
      - postgres
  postgres:
    image: postgres:13.5
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - nestjs-crud
volumes:
  postgres:
networks:
  nestjs-crud:
And in .env, use the database service name as the host.
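For example, reusing the credentials and database name from the question (the explicit port is an assumption based on the compose file above):
DATABASE_URL="postgresql://myuser:mypassword@postgres:5432/mydb?schema=public"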

Why do I get "Docker cannot link to a non running container" after recreating a container with docker-compose?

I have two containers:
docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:14.1
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
    network_mode: bridge
  web:
    container_name: web
    build: .
    ........
    network_mode: bridge
    external_links:
      - postgres
    depends_on:
      - db
volumes:
  postgres_data:
    name: postgres_data
After docker-compose up, when I recreate only the "db" container, everything works, but I cannot connect to the "web" container; I get the error: "Failure
Cannot link to a non running container: /postgres AS /web/postgres".
In the "web" container I reach the db as host=postgres.
What am I doing wrong?
The external_links: setting is obsolete and you don't need it. You can just remove it with no adverse consequences.
network_mode: bridge and container_name: are also unnecessary, though they shouldn't specifically cause problems; still, I'd delete them. What you show can be reduced to
version: '3.8'
services:
  db:
    image: postgres:14.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ......
  web:
    build: .
    ........
    depends_on:
      - db
volumes:
  postgres_data:  # empty
Since Compose creates a network named default for you and attaches containers to it, your application container can still reach the database container using the hostname db. Networking in Compose in the Docker documentation describes this further.
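As an illustration only (the environment variable and credentials below are placeholders, not taken from the question), the web service can point at the database through the service name db:
  web:
    build: .
    depends_on:
      - db
    environment:
      # "db" resolves through Compose's default network to the database container
      - DATABASE_URL=postgresql://myuser:mypassword@db:5432/mydb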

ConnectionError: Error -3 connecting to redis:6379. Try again

I cannot connect a Redis client to a Redis container that uses a custom redis.conf file. Even if I remove the code that points Redis at the custom redis.conf, Docker still attempts to use the custom config file.
docker-compose.yml
version: '2'
services:
  data:
    environment:
      - RHOST=redis
    command: echo true
    networks:
      - redis-net
    depends_on:
      - redis
  redis:
    image: redis:latest
    build:
      context: .
      dockerfile: Dockerfile_redis
    ports:
      - "6379:6379"
    command: redis-server /etc/redis/redis.conf
    volumes:
      - ./redis.conf:/etc/redis/redis.conf
networks:
  redis-net:
volumes:
  redis-data:
Dockerfile_redis
FROM redis:latest
COPY redis.conf /etc/redis/redis.conf
CMD [ "redis-server", "/etc/redis/redis.conf" ]
This is where I connect to Redis. I use requirepass in the redis.conf file.
redis_client = redis.Redis(host='redis',password='password1')
Is there a way to find the original redis.conf file that Docker uses, so I could just change the password to make Redis secure? On a regular server I just use the original redis.conf file that comes with "apt install redis" and then change requirepass.
I finally fixed this issue with the help of https://github.com/sameersbn/docker-redis.
There is no need to use a Dockerfile for Redis in this case.
docker-compose.yml:
version: '2'
services:
  data:
    command: echo true
    environment:
      - RHOST=Redis
    depends_on:
      - Redis
  Redis:
    image: sameersbn/redis:latest
    ports:
      - "6379:6379"
    environment:
      - REDIS_PASSWORD=changeit
    volumes:
      - /srv/docker/redis:/var/lib/redis
    restart: always
redis_connect.py
redis_client = redis.Redis(host='Redis',port=6379,password='changeit')

How does Docker manage volumes when scaling up a Compose project?

What if I have 10 instances of a container that needs persistent storage? How will Docker manage the volume for those 10 instances? I've defined the volume in docker-compose.yml.
I couldn't find anything about what happens when multiple instances run:
1. Will Docker create new folders for each instance, or
2. share the same folder with all of them (which would lead to data corruption)?
Here is my sample docker-compose.yml:
version: '2'
services:
  consul:
    #image: myappteam/consul:3.4.0
    build: ./consul
    container_name: consul
    hostname: consul
    domainname: consul
    restart: always
    volumes:
      - myapp-data:/data/consul
  consului:
    #image: myappteam/consul-ui:3.4.0
    build: ./consul-ui
    container_name: consul-ui
    hostname: consul-ui
    domainname: consul-ui
    ports:
      - 8500:8500
    restart: always
    volumes:
      - myapp-data:/data/consului
  nginx:
    #image: myappteam/nginx:3.4.0
    build: ./nginx
    container_name: nginx
    hostname: nginx
    domainname: nginx
    ports:
      - "80:80"
    volumes:
      - myapp-logs:/logs/nginx_access_logs
      - myapp-logs:/logs/nginx_error_logs
    restart: always
volumes:
  myapp-data:
  myapp-logs:
  myapp-bundle:
  myapp-source:
So in the above example, myapp-data is the volume where I plan to keep all the data. When I increase the number of nginx and consul instances, will they use the same myapp-data volume or create new volumes? If they share the same volume, the data will get corrupted because two instances will write the same files.
So what should I do in that case?
