Problems writing log to a shared docker volume

I have not been able to connect my app container and promtail via a shared volume so that promtail can read the logs.
My app writes log files (via log4j2 in Java) named appXX.log to a folder, but when I share that folder as a volume the app is no longer able to write them.
Here is my docker-compose file (I have omitted the loki/grafana containers).
The app writes fine to that path without the shared volume, so it must be something about how docker manages volumes. Any ideas what could be going on?
promtail:
  image: grafana/promtail:2.4.1
  volumes:
    - "app-volume:/var/log/"
    - "path/config.yml:/etc/promtail/config.yml"
  command: -config.file=/etc/promtail/config.yml
app:
  image: app/:latest
  volumes:
    - "app-volume:/opt/app/logs/"
  command: ["/bin/sh","-c","java $${JAVA_ARGS} -jar app-*.jar"]
volumes:
  app-volume:
On the other hand, I do not know if a shared volume is even the correct way to feed an application's logs to promtail. I have seen that promtail usually reads the container logs directly (which does not work for me, because that only works on docker-linux), and I can think of other alternatives. Which would be the right approach in case volumes turn out to be impossible?
Any idea is welcome, thanks!
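For what it's worth, a minimal promtail config for reading files from that shared mount could look like the following (a sketch only: the loki push URL, job name, and labels are assumptions, and /var/log/app*.log matches the appXX.log naming above):

server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: app
          __path__: /var/log/app*.log

Note also that if the two containers run as different users, the app may lack write permission on the shared volume; that is a common cause of "cannot write" symptoms with named volumes.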

Related

Docker-compose: how to add a volume to save logs from a Java app on Windows/Mac

my docker compose file looks like:
app:
  image: app
  restart: always
  ports:
    - 127.0.0.1:8080:8080
As far as I know docker stores logs on a virtual disk, so how can I copy the logs from there and store them on my host machine?
In fact, I tried to add
volumes:
  - ./logs:/home/logs
but only the logs directory gets created; there are no logs in it. What am I doing wrong?
I have a suspicion that the target folder in the docker container is wrong. You specify /home/logs, which seems like an odd place: it would mean the logs are stored in the home folder of a user named 'logs'.
Are you sure that is the path where the logs are stored in the docker container?
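If the log4j2 appender writes to, say, /opt/app/logs (a hypothetical path: check your log4j2 appender configuration for the real one), the bind mount target has to match it exactly:

app:
  image: app
  restart: always
  ports:
    - 127.0.0.1:8080:8080
  volumes:
    - ./logs:/opt/app/logs   # target must be the directory log4j2 actually writes to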

Docker compose Volume - Uploaded files

I have a basic application running inside a docker container. The application is a page where users can upload files. The uploaded files are stored inside app/myApp/UploadedFiles (app is the folder where the container installs my application).
If I restart the container I lose all files stored inside the folder app/myApp/UploadedFiles.
What is the best approach to persist the uploaded files even if I restart the container?
I tried to use volumes inside my docker compose file:
volumes:
  - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles
This creates a folder at home>var>myFiles, but if I upload files I never see them in that directory.
How can I do that?
My goal is to persist the files and be able to access them, for example to download them.
Thanks
EDIT:
I created an App Service in Azure using Container Registry and this docker compose:
version: '2.0'
services:
  myWebSite:
    image: myWebSite.azurecr.io/myWebSite:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/var/myFiles:/myApp/UploadedFiles
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    ports:
      - 5001:443
If I upload a file on the web site, the file goes to /myApp/UploadedFiles.
Using BASH I can go to /home/var/myFiles, but there aren't any files inside.
I don't know if this is the correct approach. I could have the same problem with my application logs: I don't know how to read them either.
Besides declaring the volume in the service that uses it, you need to add a top-level volumes section so the named volume is actually created and shared between the container and the host.
Like this:
  volumes:
    - database:/var/lib/postgresql/data
volumes:
  database:
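Applied to the question above, a minimal sketch using a named volume (the name uploads is made up) would be:

version: '2.0'
services:
  myWebSite:
    image: myWebSite.azurecr.io/myWebSite:latest
    volumes:
      - uploads:/myApp/UploadedFiles   # named volume survives container restarts
    ports:
      - 5001:443
volumes:
  uploads: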

Docker Stack/Compose Redis Instance Not Persisting Data

I'm trying to use Redis as a cache in my Sails.js application, but am having a hard time getting the volumes to work correctly and to get the Redis instance to persist the data automatically.
Here's (the relevant part of) my stack.yml file:
redis:
  image: 'redis:latest'
  command: redis-server --appendonly yes
  command: ['redis-server', '--requirepass', 'mypasswordgoeshere']
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
I have a Sails.js service and an Angular app service as well, but didn't show them as they aren't really needed here.
The above doesn't work exactly as expected out of the box. I expected Docker to create the ./redis-data folder for me automatically if not present, but it doesn't. So I had to create the folder and make sure the permissions were set correctly. Then I expected the Redis container to persist its data to an appendonly.aof file and have that automatically be saved periodically. That's what happened on my local machine when I tried this.
However, on the server, it doesn't save the appendonly.aof file for me, and thus the data I want to persist doesn't get saved so that it's there when the stack is restarted.
The weirdest part of this is that I created a volume for the Sails.js app to get to the log files, and I didn't need to create the folder for the application; it just created it and everything worked as expected on its own. Am I missing something obvious here? Any help would be greatly appreciated. Thanks!
EDIT
Also, if I go into the container (docker exec -it container_id /bin/bash), there's nothing in the /data folder that it drops you into. Which explains why there's nothing there on the host machine either.
For anyone using Redis Stack, what resolved my problem was adding the following
environment:
  REDIS_ARGS: --save 20 1
I assume any values for the parameters, or using the AOF mode, would also work, but I have not verified this.
I initially tried specifying this as
command: redis-server --save 20 1
which was based on this tutorial. However, this resulted in a ConnectionError("Connection is not ready") (source) when running docker-compose up.
To make this work, I used the following definition for the Redis service:
redis:
  image: 'redis:latest'
  environment:
    - REDIS_PASS=mypassword
    - REDIS_APPENDONLY=yes
    - REDIS_APPENDFSYNC=always
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
Using the environment variables or settings seemed to work as expected.
I got this idea from this mini tutorial in the Docker docs. I'm not exactly sure why this worked, as the documentation on Docker Hub for the redis image doesn't specify any of those environment variables, and actually says to use the steps mentioned in the original question; but regardless of that, this solution does work.
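Putting the Redis Stack variant together, a sketch of a full service definition (the redis/redis-stack image, the password, and the host path are assumptions; adjust them to your setup):

redis:
  image: redis/redis-stack:latest        # Redis Stack image, not redis:latest
  environment:
    REDIS_ARGS: "--save 20 1 --requirepass mypasswordgoeshere"
  volumes:
    - ./redis-data:/data                 # RDB/AOF snapshots land here
  deploy:
    replicas: 1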

RabbitMQ docker container persistence on Windows

I've created a docker-compose.yml file, and when trying to "up" it, my RabbitMQ docker container fails to persist to my host volume. It complains that the erlang cookie file must be accessible by its owner only.
Any help with this would be greatly appreciated.
EDIT
So I added the above volume binding, and rabbitmq seems to place files into that directory when I do a docker-compose up. I then add 2 messages and can see via the rabbitmq console that the 2 messages are sitting in the queue... but when I perform a docker-compose down followed by a docker-compose up, expecting the 2 messages to still be there since the directory and files were created, they aren't, and the message count is 0 :(.
Maybe it's trying to access some privileged user functions.
Try adding a privileged: true section to your docker-compose service in the yml and do docker-compose up again.
If that works and you prefer to grant only the privileges RabbitMQ actually needs, replace privileged: true with capability sections for adding or dropping privileges:
cap_add:
  - ALL
  - <WHAT_YOU_PREFER>
cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
  - <WHAT_YOU_PREFER>
For further information, please check Compose file documentation
EDIT:
In order to keep data persistent when containers fail, add a volumes section to the docker-compose.yml file:
volumes:
  - /your_host_dir_with_data:/destination_in_docker
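One more thing worth checking for RabbitMQ in particular: it keys its data directory to the node name (rabbit@<hostname>), and compose gives containers a fresh random hostname on every up, so previously written data can become invisible after a restart. A sketch of a stable setup (the hostname value and host path are made up):

rabbitmq:
  image: rabbitmq:3-management
  hostname: my-rabbit                      # keeps the node name rabbit@my-rabbit stable
  volumes:
    - ./rabbitmq-data:/var/lib/rabbitmq    # mnesia data, including queues and messages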

Docker-compose restart policy causes logs to be cut

I run two containers from a docker compose yaml file. One is an application server (with a deployed application) and the second is an oracle database. I have the following yaml file:
services:
  jboss-7.0.3:
    image: jboss-7.0.3
    build: ../dockerfiles/jboss-eap-7.0.3
    ports:
      - '8080:8080'
      - '9990:9990'
      - '9999:9999'
      - '8787:8787'
    restart: always
  oracle11:
    image: oracle11
    build: ../dockerfiles/oracle-11xe-dima
    ports:
      - "48088:48088"
      - "1521:1521"
      - "40022:40022"
    restart: always
I wanted to debug why the server can't connect to the database (in the standalone-full.xml file I have oracle11 specified as the host). What is strange is that I can't see the error which exactly causes jboss to restart. It is always around the db connection, and I should be able to see some error in the logs, but jboss restarts before the error appears in the log, so I can't see what caused it. Even without the restart policy it gets a kill signal and the log stops immediately. How can I solve this issue?
From your yaml file, I can see that you have not linked your server to the database. Add a links: entry with oracle11 to your jboss-7.0.3 service, and make sure the DB URI contains your db container address/db service name.
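For reference, a sketch of that suggestion (note that links is legacy compose syntax; on the default compose network the service name oracle11 already resolves, so the DB URI can simply use oracle11 as the host):

services:
  jboss-7.0.3:
    image: jboss-7.0.3
    links:
      - oracle11   # legacy alias; service-name DNS usually suffices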
I finally figured out what was going on. It was a really simple mistake: the reason my logs were cut was that... they actually were not cut. I had too little memory on my docker host machine and JBoss was being killed by the system. After increasing the docker host machine's memory, everything works like a charm.
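To make that failure mode visible up front, one option is to cap the container and JVM memory explicitly (a sketch: the sizes and the JAVA_OPTS variable are assumptions; check how your jboss image passes JVM options to standalone.sh):

jboss-7.0.3:
  image: jboss-7.0.3
  environment:
    - JAVA_OPTS=-Xms256m -Xmx768m   # keep the heap below the container limit
  mem_limit: 1g                     # compose v2 syntax; v3 swarm uses deploy.resources.limits
  restart: always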
