So I have been trying to persist the data of a Redis container. I created a shared folder on the host called ./redis-volume and am trying to store the Redis data in it.
I was trying to persist data across Redis container restarts with docker-compose file version 3, and this is what finally worked. Please let me know if this is the right way to do it:
redis:
  container_name: redis_db
  command: redis-server --appendonly yes
  image: redis
  ports:
    - "6379:6379"
  volumes:
    - ./redis-volume:/data
Inside ./redis-volume on the host you will find a file called appendonly.aof; this is what persists the data. Also, while restarting the container, you will see a line like this if you look closely at the logs:
redis_db | 1:M 08 Jun 2020 19:40:28.024 * DB loaded from append only file: 0.000 seconds
Hope this helps!
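As a quick sanity check (a sketch; the container name matches the compose snippet above, and the service name is assumed to be redis), you can write a key, restart the stack, and confirm the key survives:

```shell
# write a key, restart the container, then read the key back
docker exec redis_db redis-cli set testkey hello
docker-compose restart redis
docker exec redis_db redis-cli get testkey   # "hello" if the AOF was replayed
```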
Related
I can't start the Redis container in my docker-compose file. I know the docker-compose file is OK, because my colleagues can start the project successfully. I read that one solution is to delete the dump.rdb file, but I can't find it. I'm on a Windows machine. Any suggestions would be very helpful.
Error
2023-02-09 16:41:28 1:M 09 Feb 2023 13:41:28.699 # Can't handle RDB format version 10
Redis in docker-compose:
redis:
  container_name: redis
  hostname: redis
  image: redis:5.0
  ports:
    - "6379:6379"
  volumes:
    - redis:/data
  restart: always
The solution was very simple: remove the volume that still holds the incompatible dump.rdb:
docker volume ls
docker volume rm <volume_name>
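An equivalent shortcut (assuming the named volume is only used by this project) is to let compose remove its own volumes in one step:

```shell
# removes the containers and their named volumes, including the stale redis volume
docker-compose down -v
# recreates everything; redis starts with an empty, compatible data directory
docker-compose up -d
```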
I'm having an issue with data persistence in InfluxDB running in a Docker container. The InfluxDB container is spun up with the following docker-compose file:
services:
  influxdb:
    image: influxdb:1.8.6
    restart: always
    container_name: influxDB
    volumes:
      - ./influxDB_data:/var/lib/influxdb
    ports:
      - "8083:8083"
      - "8086:8086"
    environment:
      - INFLUXDB_ADMIN_USER=admin
      - INFLUXDB_ADMIN_PASSWORD=xxxx
      - INFLUXDB_DB=openhab_db
So the data should be persisted in the local folder ./influxDB_data, and when I shut the container down with docker-compose down and restart it with docker-compose up -d, this seems to be the case, because all the time-series data is still there.
But if I shut the container down and move all files from the local ./influxDB_data folder to a different machine and spin up the container there, only the database settings are persisted; all the series data is lost.
It seems that not all of InfluxDB's data is stored at /var/lib/influxdb (maybe in RAM or a different location?). But if that were the case, why is the data persisted on the same machine in the first place? Does anyone know how this could be fixed?
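One way to rule out permission and partial-copy problems when moving a bind-mounted data directory is to stop the container first and transfer the folder as an archive with ownership preserved (a sketch; the archive name is made up):

```shell
# stop influxd cleanly so it flushes its in-memory state to disk
docker-compose down
# archive the data directory, preserving ownership and permissions
sudo tar -czpf influxdb-data.tar.gz ./influxDB_data
# copy the archive to the other machine, then there:
sudo tar -xzpf influxdb-data.tar.gz
docker-compose up -d
```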
I am running standalone Redis using the Docker Hub image with a volume for persistent storage (--appendonly yes); however, after a while all the keys in Redis disappear.
I haven't set an EXPIRE time for any of the keys.
Running Docker with the following command:
docker run -p 6379:6379 -v redis-vol:/data -d redis redis-server --appendonly yes
Can anyone please let me know what might be going wrong?
Thank you.
Yeah, you will lose the keys each time you create a new container, even though you have permanent storage in a volume.
What you are missing is setting the environment variables ALLOW_EMPTY_PASSWORD=yes and DISABLE_COMMANDS=FLUSHDB,FLUSHALL,CONFIG.
If you are using a docker-compose file, you can simply add them as:
redis:
  image: 'bitnami/redis:latest'
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - DISABLE_COMMANDS=FLUSHDB,FLUSHALL,CONFIG
  container_name: haproxy_redis_auth_redis
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/bitnami/redis/data
In this case, do not forget to make your application depend on the redis service:
application:
  build: .
  depends_on:
    - db
    - redis
I had the same issue, which was fixed after I took a look here:
https://hub.docker.com/r/bitnami/redis/
I tried a very simple first test of a Python app with Redis, following the Docker documentation. It crashes after a while because Redis cannot persist, and I don't have a clue why. You can find the public repo here: Github repo
My current docker-compose.yml is:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis:latest
  volumes:
    - ./data:/data
Edit: this is an excerpt of the log:
1:M 09 Feb 10:51:15.130 # Background saving error
1:M 09 Feb 10:51:21.072 * 100 changes in 300 seconds. Saving...
1:M 09 Feb 10:51:21.073 * Background saving started by pid 345
345:C 09 Feb 10:51:21.074 # Failed opening .rdb for saving: Permission denied
1:M 09 Feb 10:51:21.173 # Background saving error
1:M 09 Feb 10:51:27.011 * 100 changes in 300 seconds. Saving...
1:M 09 Feb 10:51:27.011 * Background saving started by pid 346
346:C 09 Feb 10:51:27.013 # Failed opening .rdb for saving: Permission denied
Edit 2: this is the complete error Redis throws in Python:
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error
The funny thing is, I don't do anything to the redis image.
It is a permission error. Log into the Redis container via docker exec -it redis_container_name bash and check whether it has write permissions on /data.
It probably does not, and you can fix it in several ways: use a Docker volume instead of bind-mounting from the host, or fix permissions on the host by making the uid/gid match the owner inside the container.
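If you want to keep the bind mount, a minimal fix from the host side could look like this (999 is the uid/gid of the redis user in the official image; verify it first rather than taking my word for it):

```shell
# check which uid/gid the redis user has inside the container
docker exec redis_container_name id redis
# give that uid/gid ownership of the bind-mounted data directory on the host
sudo chown -R 999:999 ./data
```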
Also, as stated on the Docker Hub page, you should set the redis service's command as follows:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis:latest
  command: redis-server --appendonly yes
  volumes:
    - ./data:/data
if you intend to persist data.
Since your data folder has the wrong permissions set, start by deleting it and letting docker-compose create a new one.
I have updated my repo with a working version, tag 0.2.
Once I switched to version 2 of the compose file format, it worked fine.
I have this app I'm trying to orchestrate using docker + fig, which worked great for the first day. It uses a data container where I want to persist my database files, plus redis and mysql containers used by the app.
Once booted up, the mysql container looks inside /var/lib/mysql for data files and, if none are found, it creates the default db, which I can then populate; the files are created and persisted in my data volume.
While learning fig I had to do a fig rm --force mysql, which deleted my mysql container. I did this without fear, knowing that my data was safe in the data container. Running ls on my host shows the mysql files still intact.
The problem occurs when I run fig up again, which recreates the mysql container. Even though the same volumes are shared and my old mysql files are still present, the new container creates a new database as if the shared volume were empty. This only happens if I rm the container, not if I shut fig down and bring it back up.
Here's my fig file if it helps:
data:
  image: ubuntu:12.04
  volumes:
    - /data/mysql:/var/lib/mysql
redis:
  image: redis:latest
mysql:
  image: mysql:latest
  ports:
    - 3306
  environment:
    MYSQL_DATABASE: *****
    MYSQL_ROOT_PASSWORD: *****
  volumes_from:
    - data
web:
  build: .
  dns: 8.8.8.8
  command: python manage.py runserver 0.0.0.0:8000
  environment:
    - DEBUG=True
    - PYTHONUNBUFFERED=1
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - data
    - mysql
    - redis
Any ideas why the new mysql container won't use the existing files?
I have not used fig but will be looking into it, as the simple syntax in your post looks pretty great. Every day I spend a couple more hours expanding my knowledge of Docker and rewiring my brain as to what is possible. For about 3 weeks now I have been running a stateless data container for my MySQL instances, and it has been working great with no issues.
If the content inside /var/lib/mysql does not exist when the container starts, a script installs the needed database files. The script checks whether the initial database files exist, not just whether the /var/lib/mysql path does:
if [[ ! -f $VOLUME_HOME/ibdata1 ]]; then
    echo "=> An empty or uninitialized MySQL volume is detected in $VOLUME_HOME"
    echo "=> Installing MySQL ..."
else
    echo "=> Using an existing volume of MySQL"
fi
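To see what the check does, here is a self-contained toy run of the same logic ($VOLUME_HOME is pointed at a temp dir instead of the real MySQL data path):

```shell
# simulate the script's check against an empty, then an initialized, volume
VOLUME_HOME=$(mktemp -d)

check_volume() {
    if [ ! -f "$VOLUME_HOME/ibdata1" ]; then
        echo "empty"
    else
        echo "existing"
    fi
}

first=$(check_volume)           # no ibdata1 yet -> "empty"
touch "$VOLUME_HOME/ibdata1"    # pretend MySQL initialized the volume
second=$(check_volume)          # marker file present -> "existing"
echo "$first $second"           # prints: empty existing
```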
Here is a direct link to a MySQL repo I am continuing to enhance
This seems to be a bug; see this related question, which has links to the relevant fig/docker issues: https://stackoverflow.com/a/27562669/204706
Apparently the situation is improved in docker 1.4.1, so you should try that version if you aren't already using it.