Docker Stack/Compose Redis Instance Not Persisting Data

I'm trying to use Redis as a cache in my Sails.js application, but am having a hard time getting the volumes to work correctly and to get the Redis instance to persist the data automatically.
Here's (the relevant part of) my stack.yml file:
redis:
  image: 'redis:latest'
  command: ['redis-server', '--appendonly', 'yes', '--requirepass', 'mypasswordgoeshere']
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
I have a Sails.js service and an Angular app service as well, but didn't show them as they aren't really needed here.
The above doesn't work exactly as expected out of the box. I expected Docker to create the ./redis-data folder for me automatically if not present, but it doesn't. So I had to create the folder and make sure the permissions were set correctly. Then I expected the Redis container to persist its data to an appendonly.aof file and have that automatically be saved periodically. That's what happens on my local machine when I tried this.
However, on the server, it doesn't save the appendonly.aof file for me, and thus the data I want to persist doesn't get saved so that it's there when the stack is restarted.
The weirdest part of this is that I created a volume for the Sails.js app to get to the log files, and I didn't need to create the folder for the application; it just created it and everything worked as expected on its own. Am I missing something obvious here? Any help would be greatly appreciated. Thanks!
EDIT
Also, if I go into the container (docker exec -it container_id /bin/bash), there's nothing in the /data folder it drops you into, which explains why there's nothing on the host machine either.

For anyone using Redis Stack, what resolved my problem was adding the following
environment:
  REDIS_ARGS: --save 20 1
I assume any values for the parameters, or using the AOF mode, would also work, but I have not verified this.
I initially tried specifying this as
command: redis-server --save 20 1
which was based on this tutorial. However, this resulted in a ConnectionError("Connection is not ready") (source) when running docker-compose up.
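For reference, here is a minimal sketch of how the whole service could look with that setting (the redis/redis-stack image name and the host path are my assumptions, not something from the original question):
redis:
  # Assumption: the Redis Stack image, which forwards extra server
  # arguments from the REDIS_ARGS environment variable to redis-server.
  image: 'redis/redis-stack:latest'
  environment:
    # Take an RDB snapshot if at least 1 key changed in the last 20 seconds;
    # '--appendonly yes' should also work here for AOF persistence.
    REDIS_ARGS: '--save 20 1'
  volumes:
    # Redis writes its RDB/AOF files under /data inside the container.
    - ./redis-data:/data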

To make this work, I used the following definition for the Redis service:
redis:
  image: 'redis:latest'
  environment:
    - REDIS_PASS=mypassword
    - REDIS_APPENDONLY=yes
    - REDIS_APPENDFSYNC=always
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
Using the environment variables or settings seemed to work as expected.
I got this idea from this mini tutorial in the Docker docs. I'm not exactly sure why this worked, as the documentation on Docker Hub for the redis image didn't specify any of those environment variables, and actually said to use the steps mentioned in the original question, but regardless of that, this solution does work.

Related

Problems writing log to a shared docker volume

I have not been able to connect my app's container and promtail via volumes so that promtail can read the logs.
I have an app that writes log files (via log4j2 in Java) with names like appXX.log to a folder; when I share that folder as a volume, my app is no longer able to write the file.
Here is my docker-compose (I have skipped the loki/grafana containers).
My app writes fine to that path without the shared volume, so it must be something about how Docker manages the volumes. Any ideas what could be going on?
promtail:
  image: grafana/promtail:2.4.1
  volumes:
    - "app-volume:/var/log/"
    - "path/config.yml:/etc/promtail/config.yml"
  command: -config.file=/etc/promtail/config.yml
app:
  image: app/:latest
  volumes:
    - "app-volume:/opt/app/logs/"
  command: ["/bin/sh","-c","java $${JAVA_ARGS} -jar app-*.jar"]
volumes:
  app-volume:
On the other hand, I don't know if this is even the correct way to get an application's logs into promtail. I have seen that promtail usually reads the container's log directly (which does not work for me, because that only works on Docker for Linux), and I can think of these other possibilities. What would be the right one in case sharing via volumes is impossible?
other alternatives
Any idea is welcome, thanks!

Rails 5+, WebPacker and Docker development workflow

One of the advantages of using Docker is single environment for entire team. Some time ago I was using Vagrant to unify development environment in a team, and it worked pretty well. Our workflow was like:
Run vagrant up; the command takes some time to download the base image and run the provisioning scripts. It also maps a directory from the local filesystem to the container filesystem.
Change a file on the host system and all changes will be mapped to the guest filesystem (container), so no container restart is needed.
Some folks use Docker for a similar development workflow, but I usually use docker-compose just to run satellite services, and I was always running the Rails monolith on the host operating system, i.e. natively.
So my development workflow is pretty standard:
All the satellite services are up and located inside Docker containers; I just have a bunch of exposed ports. I don't need to brew-install lots of software to support them, which is good.
The Rails monolith runs in the host OS, so every time I make, for example, a JavaScript file change, WebPacker comes into play, rebuilds, and applies the changes without a page refresh. This is important to emphasize: a page refresh takes time, and I don't want to refresh the page every time I change a JavaScript or CSS file.
With Vagrant the above scheme works just as well. But with Docker things are different.
The development workflow some folks use with Docker is as follows:
Run a bunch of services with the docker-compose command, except the Rails monolith (same step as in my development workflow above).
Every time you make a change in your app (for example, a JavaScript file) you need to rebuild the container, because you're making changes on your local filesystem, not inside the Docker container. So you 1) stop, 2) build, 3) run the Docker container again.
In other words, with a Docker-only approach we have the following cons:
No webpacker js/css refresh
Container rebuild, which takes time
Application restart, which sometimes takes a while; even a zero-code Rails app takes ~3 seconds to start
So my question is: what's the best way to go with Docker-only approach? How you can take advantage of Docker while using WebPacker with Rails and avoid page refresh and application restart?
I've been reading a good book on this recently (Docker for Rails developers). The gist seems to be that you run Rails in a Docker container and use a volume to 'link' your local files into the container, so that any file changes will take immediate effect. With that, you should not need to restart/rebuild the container. On top of that, you should run webpack-dev-server as a separate container (which also needs the local files mounted as a volume) which will do JavaScript hot reloading - so no need to reload the page to see JS updates.
Your docker-compose.yml file would look something like this (uses Redis and Postgres as well):
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACKER_DEV_SERVER_HOST=webpack_dev_server
  webpack_dev_server:
    build: .
    command: ./bin/webpack-dev-server
    ports:
      - 3035:3035
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/web
      - .env/development/database
    environment:
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
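For the 'no page refresh' part, hot module replacement usually also has to be enabled on the Webpacker side. Here is a sketch of the relevant piece of config/webpacker.yml, assuming a fairly standard generated file (the hmr flag is the key bit; the host and port mirror the compose file above):
development:
  dev_server:
    host: localhost
    port: 3035
    # Enable hot module replacement so JS/CSS changes are applied in the
    # browser without a full page reload.
    hmr: true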

Docker-compose restart policy causes logs to be cut

I run two containers from a docker compose yaml file. One is an application server (with the application deployed) and the second one is an Oracle database. I have the following yaml file.
services:
  jboss-7.0.3:
    image: jboss-7.0.3
    build: ../dockerfiles/jboss-eap-7.0.3
    ports:
      - '8080:8080'
      - '9990:9990'
      - '9999:9999'
      - '8787:8787'
    restart: always
  oracle11:
    image: oracle11
    build: ../dockerfiles/oracle-11xe-dima
    ports:
      - "48088:48088"
      - "1521:1521"
      - "40022:40022"
    restart: always
I wanted to debug why the server can't connect to the database (in the standalone-full.xml file I have oracle11 specified as the host). What is strange is that I can't see the error which actually causes JBoss to restart. It's always around the DB connection, and I should be able to see some error in the logs, but before the error appears in the log, JBoss restarts, so I can't see what caused it. Even without the restart policy it gets a kill signal and the log stops immediately. How can I solve this issue?
From your yaml file, I can see that you have not linked your server to the database. Add a links: oracle11 entry to your jboss-7.0.3 service. And the DB URI should contain your db container address/db service name.
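As a sketch, that links entry in the jboss-7.0.3 service would look like this (with current Compose versions both services usually already share a default network and can reach each other by service name, so it may be redundant):
jboss-7.0.3:
  image: jboss-7.0.3
  # Make the database container resolvable from this container under the
  # hostname "oracle11", matching the host used in standalone-full.xml.
  links:
    - oracle11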
I finally figured out what was going on. It was a really simple mistake: the reason my logs were cut was that... they actually were not cut. I had too little memory on my Docker host machine and JBoss was being killed by the system, so that was the reason. Now, after increasing the memory of the Docker host machine, everything works like a charm.
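In case someone hits the same thing: docker inspect <container> shows an OOMKilled flag in the container state, which points at memory right away. You can also give the container an explicit memory ceiling in the compose file, so an out-of-memory kill is attributable to that limit rather than to overall host memory pressure (a sketch; the value is arbitrary, and depending on your Compose file version this may need to go under deploy.resources.limits instead):
jboss-7.0.3:
  image: jboss-7.0.3
  # Cap this container's memory; if JBoss exceeds it, the kill shows up
  # as OOMKilled on this container rather than as a host-level OOM.
  mem_limit: 2g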

How to share a value between all docker containers spun up by the same "docker-compose up" call?

Context
We are migrating an older application to Docker, and as a first step we're working against some constraints. The database cannot yet be put in a container, and moreover, it is shared between all developers in our team. So this question is about finding a fix for a temporary problem.
To not clash with other developers using the same database, there is a system in place where each developer machine starts the application with a value that is unique to that machine. Each container should use this same value.
Question
We are using docker-compose to start the containers. Is there a way to provide an (environment) variable to it that gets propagated to all containers?
How I'm trying to do it:
My docker-compose.yml looks kind of like this:
my_service:
  image: my_service:latest
  command: ./my_service.sh
  extends:
    file: base.yml
    service: base
  environment:
    - batch.id=${BATCH_ID}
then I thought running BATCH_ID=$somevalue docker-compose up my_service would fill in the ${BATCH_ID}, but it doesn't seem to work that way.
Is there another way? A better way?
Optional: Ideally everything should be contained so that a developer can just call docker-compose up my_service leading to compose itself calculating a value to pass to all the containers. But from what I see online, I think this is not possible.
You are correct. Alternatively you can just specify the env var name:
my_service:
  environment:
    - BATCH_ID
So the variable BATCH_ID is taken from the environment in which docker-compose is executed and passed into the container under the same name.
I don't know what I changed, but suddenly it works as described.
BATCH_ID is the name of the environment variable on the host.
batch.id will be the name of the environment variable inside the container.
my_service:
  environment:
    - batch.id=${BATCH_ID}
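Regarding the 'Optional' part of the question: newer Compose versions also pick up variable values from a .env file placed next to docker-compose.yml, so the per-developer value can live in a (git-ignored) file containing a line like BATCH_ID=42 instead of being typed on the command line each time. The service definition itself stays unchanged:
my_service:
  environment:
    # ${BATCH_ID} is substituted by Compose from the shell environment,
    # or, if unset there, from the .env file in the project directory.
    - batch.id=${BATCH_ID}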

How can I link an image-created volume with a docker-compose-specified named volume?

I have been trying to use docker-compose to spin up a postgres container with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, another takes it place without losing previously persisted data.
As I understand it, "named volumes" are supposed to replace "Data Volume Containers".
However, so far either one of two things happen:
The postgres container fails to start up, with error message "ERROR: Container command not found or does not exist."
I achieve persistence for only that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. That would be fine if I could just get THAT volume aliased or linked or somehow tied to the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state where it was left with the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image
If using Spring Boot (like I was) with Docker Compose (as I was) and passing environment options to a service linked to your database container, do not wrap the profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things configured incorrectly at first, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use the in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
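To make point 2 concrete, here is a sketch of the difference (SPRING_PROFILES_ACTIVE is my assumption for how the profile was being passed; the original setup may have used a different mechanism):
app:
  environment:
    # Works: Spring Boot receives the literal value docker.
    - SPRING_PROFILES_ACTIVE=docker
    # Problematic: in this list form the double quotes become part of the
    # value, so Spring looks for a profile literally named "docker"
    # (quotes included), which does not exist.
    - SPRING_PROFILES_ACTIVE="docker"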
You need to tell Compose that it should manage creation of the Volume, otherwise it assumes it should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
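Conversely, if the volume is created outside Compose (for example with docker volume create myappdb) and should only be reused rather than managed by Compose, it would be declared like this:
volumes:
  myappdb:
    external: true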
