Docker-compose restart policy causes logs to be cut - docker

I run two containers from a docker-compose yaml file. One is an application server (with a deployed application) and the other is an Oracle database. I have the following yaml file:
services:
  jboss-7.0.3:
    image: jboss-7.0.3
    build: ../dockerfiles/jboss-eap-7.0.3
    ports:
      - '8080:8080'
      - '9990:9990'
      - '9999:9999'
      - '8787:8787'
    restart: always
  oracle11:
    image: oracle11
    build: ../dockerfiles/oracle-11xe-dima
    ports:
      - "48088:48088"
      - "1521:1521"
      - "40022:40022"
    restart: always
I wanted to debug why the server can't connect to the database (in the standalone-full.xml file I have oracle11 specified as the host name). What is strange is that I can't see the error that actually causes JBoss to restart. It always happens around the db connection, and I should be able to see some error in the logs, but JBoss restarts before the error is logged, so I can't see what caused it. Even without the restart policy it receives a kill signal and the log stops immediately. How can I solve this issue?

From your yaml file, I can see that you have not linked your server to the database. Use a links: oracle11 entry in your jboss-7.0.3 service, and the DB URI should contain your db container address/db service name.
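A minimal sketch of what that could look like in the compose file (with compose's default network the oracle11 service name is already resolvable as a hostname from the jboss container, so links mainly documents the dependency):

services:
  jboss-7.0.3:
    image: jboss-7.0.3
    links:
      - oracle11
    # the datasource in standalone-full.xml then points at host oracle11, port 1521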

I finally figured out what was going on. It was a really simple mistake: the reason my logs looked cut was that they actually were not cut. I had too little memory on my docker host machine and JBoss was being killed by the system. After increasing the memory of the docker host machine, everything works like a charm.
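If you want to confirm that the container was OOM-killed rather than crashing on its own, a few checks on the docker host help (the container name is a placeholder, since compose prefixes it with the project name):

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' <jboss_container>   # exit code 137 = SIGKILL, often the OOM killer
dmesg | grep -i -E 'killed process|out of memory'                                      # kernel log of OOM kills
docker stats                                                                           # live per-container memory usage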

Related

Gitlab-runner + Docker-compose deploying scheme: how to properly restart containers after reboot of host server

Suppose I have a repository on Gitlab and the following deployment scheme:
- Set up docker and gitlab-runner with the docker executor on the host server.
- In .gitlab-ci.yml, set up docker-compose to build and bring up my service together with its dependencies.
- Set up the pipeline to be triggered by pushing commits to the production branch.
Suppose docker-compose.yml has two services: app (with restart: always) and db (without a restart rule). app depends on db, so docker-compose up starts db and then app.
It works perfectly until the host server reboots. After that, only the app container restarts.
Workarounds I've found and their cons:
- Add restart: always to the db service. But app can start before db and hence fail.
- Use docker-compose on the host machine and set docker-compose up to run automatically. But in that case I would have to set up docker-compose, deploy ssh keys, clone the code somewhere on the host server and keep it updated. That seems to violate the DRY principle and overcomplicate the scheme.
- Trigger the pipeline after reboot. The only way I've found is to trigger it via the API with a trigger token. But then I have to set up a trigger token, which is not as bad as the previous option but still violates the DRY principle and overcomplicates the scheme.
How can one improve the deployment scheme to make docker restart the containers in the right order after a reboot?
P.S. The configs are as follows:
.gitlab-ci.yml:
image:
  name: docker/compose:latest
services:
  - docker:dind
stages:
  - deploy
deploy:
  stage: deploy
  only:
    - production
  script:
    - docker image prune -f
    - docker-compose build --no-cache
    - docker-compose up -d
docker-compose.yml:
version: "3.8"
services:
app:
build: .
container_name: app
depends_on:
- db
ports:
- "80:80"
restart: always
db:
image: postgres
container_name: db
ports:
- "5432:5432"
When you add restart: always to the db service, your app can start before db and fail. But your app should then be restarted after it fails because of the restart: always policy; if that doesn't happen, your failed app probably exits with the wrong exit code.
You can also add a healthcheck and have the app restarted after a delay within which you expect it to be working.
A simple check of port 80 can help.
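A minimal sketch of what such a healthcheck could look like on the app service (assuming curl is available in the app image; interval, retries and start_period are placeholder values):

app:
  build: .
  restart: always
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:80/"]
    interval: 30s
    timeout: 5s
    retries: 3
    start_period: 60s   # delay within which the app is expected to become ready

Note that with plain docker-compose an unhealthy container is only marked as such; the restart: always policy reacts to exit codes, so the app still needs to exit on failure (under swarm, unhealthy tasks are rescheduled).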
This basically happens because your app fails fast when the database is unavailable.
Failing fast can be useful in some cases, but for your use case you can implement the app so that it retries establishing the connection if it fails. Ideally a backoff strategy should be implemented so that you don't overload your database in case of a real issue.
Losing the connection to the database can happen, but does it make sense to kill your app if the database is unavailable? Can you implement a fallback, e.g. "Sorry, we have an issue but we are working on it"? From a user's perspective, letting them know that you have an issue and are working to fix it is a much better experience than the app simply not opening. (A simple wait-and-retry entrypoint is sketched below.)
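If changing the app itself is not an option, one alternative is a small entrypoint wrapper that waits for the database with a growing delay before starting the app. This is only a sketch: it assumes the postgres client tool pg_isready is installed in the app image, and DB_HOST and the start command are placeholders.

#!/bin/sh
# wait-for-db.sh: retry the database check with a simple backoff before starting the app
delay=1
until pg_isready -h "${DB_HOST:-db}" -p 5432 > /dev/null 2>&1; do
  echo "db not ready, retrying in ${delay}s..."
  sleep "$delay"
  delay=$(( delay * 2 ))            # exponential backoff
  [ "$delay" -gt 30 ] && delay=30   # cap the wait at 30 seconds
done
exec "$@"                           # hand over to the real app command

It would be wired in as the container entrypoint, e.g. entrypoint: ["/wait-for-db.sh", "your-app-command"] in the compose file (script path and command are hypothetical).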

Problems writing log to a shared docker volume

I have not been able to connect my app container and promtail via a shared volume so that promtail can read the logs.
I have an app that writes log files (via log4j2 in Java) named appXX.log into a folder; when I share the volume, my app is not able to write the file.
Here is my docker-compose (I have left out the loki/grafana containers).
My app writes fine to that path without the shared volume, so it must be something about how docker manages the volumes. Any ideas what could be going on?
promtail:
  image: grafana/promtail:2.4.1
  volumes:
    - "app-volume:/var/log/"
    - "path/config.yml:/etc/promtail/config.yml"
  command: -config.file=/etc/promtail/config.yml
app:
  image: app/:latest
  volumes:
    - "app-volume:/opt/app/logs/"
  command: ["/bin/sh","-c","java $${JAVA_ARGS} -jar app-*.jar"]
volumes:
  app-volume:
On the other hand, I do not know if this is the correct way to ship an application's logs to promtail. I have seen that promtail usually reads the container logs directly (which does not work for me, because it only works with docker on Linux), and I can think of other alternatives. What would be the right approach in case it is impossible via volumes?
Any idea is welcome, thanks!
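For reference, the volume-based setup in the compose file above implies a file scrape config in /etc/promtail/config.yml along these lines (a sketch only; the Loki URL, labels and file pattern are assumptions, the path matches the /var/log mount):

server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://loki:3100/loki/api/v1/push   # assumed loki service name
scrape_configs:
  - job_name: app-logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: app
          __path__: /var/log/app*.log   # files the app writes into the shared volume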

Error when running Asp.net core website using https on Docker

I'm running an ASP.NET Core site on Docker that works normally when not using https. When using https it gives me an error.
The error is being caused by this environment variable: ASPNETCORE_URLS=https://+:443. I've seen many solutions to similar problems, and their fix was simply to remove it. Removing the https part allows the server to start, but then I can't connect to it from my browser. In any case, I'm still looking for a solution where https works, not just a general workaround.
Here is the error: link. Correct me if I'm wrong, but I don't think it's a certificate problem. I'm usually decent at interpreting errors, but I can't make sense of this one for the life of me.
My dockerfile is pretty much the same as the Microsoft example dockerfile, except with different names and paths.
Here is my docker-compose.yml file. The same error happens even if I get rid of 443 and 80 from the environment variable.
version: "3.0"
services:
webapp:
image: webapp
stdin_open: true
tty: true
environment:
- ASPNETCORE_URLS=https://+:443;http://+:80
- ASPNETCORE_Kestrel__Certificates__Default__Password=crypticpassword
- ASPNETCORE_HTTPS_PORT=443
- ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
volumes:
- /home/user/.aspnet/https:/https/
network_mode: "host"
The same error happens when manually running using docker run with -e environment variables, etc.
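For reference, the docker run equivalent of the compose file above would look roughly like this (just a transcription of the settings already shown; the image name and mounts are taken from the compose file):

docker run -it \
  -e ASPNETCORE_URLS="https://+:443;http://+:80" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password=crypticpassword \
  -e ASPNETCORE_HTTPS_PORT=443 \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx \
  -v /home/user/.aspnet/https:/https/ \
  --network host \
  webapp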
Credit to Hans Kilian.
Set the environment variable TZ to your timezone.
For docker run: TZ="Asia/example"
For docker-compose: just remove the quotes.
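In other words (keeping the placeholder timezone value from the answer):

# docker run: quoting the value on the command line is fine
docker run -e TZ="Asia/example" webapp

# docker-compose: drop the quotes, or they end up as part of the value
environment:
  - TZ=Asia/example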

Docker Stack/Compose Redis Instance Not Persisting Data

I'm trying to use Redis as a cache in my Sails.js application, but am having a hard time getting the volumes to work correctly and to get the Redis instance to persist the data automatically.
Here's (the relevant part of) my stack.yml file:
redis:
  image: 'redis:latest'
  command: redis-server --appendonly yes
  command: ['redis-server', '--requirepass', 'mypasswordgoeshere']
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
I have a Sails.js service and an Angular app service as well, but didn't show them as they aren't really needed here.
The above doesn't work exactly as expected out of the box. I expected Docker to create the ./redis-data folder for me automatically if not present, but it doesn't. So I had to create the folder and make sure the permissions were set correctly. Then I expected the Redis container to persist its data to an appendonly.aof file and have that automatically be saved periodically. That's what happens on my local machine when I tried this.
However, on the server, it doesn't save the appendonly.aof file for me, and thus the data I want to persist doesn't get saved so that it's there when the stack is restarted.
The weirdest part of this is that I created a volume for the Sails.js app to get to the log files, and I didn't need to create the folder for the application; it just created it and everything worked as expected on its own. Am I missing something obvious here? Any help would be greatly appreciated. Thanks!
EDIT
Also, if I go into the container (docker exec -it container_id /bin/bash), there's nothing in the /data folder it drops you into, which explains why there's nothing on the host machine either.
For anyone using Redis Stack, what resolved my problem was adding the following:
environment:
  REDIS_ARGS: --save 20 1
I assume any values for the parameters, or using the AOF mode, would also work, but I have not verified this.
I initially tried specifying this as
command: redis-server --save 20 1
which was based on this tutorial. However, this resulted in a ConnectionError("Connection is not ready") (source) when running docker-compose up.
To make this work, I used the following definition for the Redis service:
redis:
  image: 'redis:latest'
  environment:
    - REDIS_PASS=mypassword
    - REDIS_APPENDONLY=yes
    - REDIS_APPENDFSYNC=always
  volumes:
    - ./redis-data:/data
  deploy:
    replicas: 1
Using the environment variables or settings seemed to work as expected.
I got this idea from this mini tutorial in the Docker docs. I'm not exactly sure why this worked, as the documentation on Docker Hub for the redis image doesn't mention any of those environment variables and actually points to the approach used in the original question, but regardless, this solution does work.
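To check whether persistence is actually in effect, a couple of quick checks help (the container name is a placeholder; the data path is the bind mount from the compose file above):

docker exec -it <redis_container> redis-cli CONFIG GET save        # add -a <password> if a password is actually set
docker exec -it <redis_container> redis-cli CONFIG GET appendonly
ls -l ./redis-data                                                 # should contain dump.rdb and/or an appendonly file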

How can I link an image created volume with a docker-compose specified named volume?

I have been trying to use docker-compose to spin up a postgres container with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, another takes its place without losing the previously persisted data.
As I understand it, "named volumes" are supposed to replace "Data Volume Containers".
However, so far one of two things happens:
- The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
- I achieve persistence only for that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. That would be fine if I could just get THAT volume aliased or linked to the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
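When debugging this kind of setup, it can help to check which volumes actually exist and where the data ends up (the prefixed volume name and container ID below are assumptions, since compose prefixes volume names with the project name):

docker volume ls                                               # list named volumes
docker volume inspect myproject_myappdb                        # shows the Mountpoint on the host
docker inspect --format '{{json .Mounts}}' <db_container_id>   # shows which volume the running container actually uses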
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state it was left in by the down command).
In general, a few things:
1. Don't use the PGDATA environment option with the official postgres image.
2. If you are using Spring Boot (like I was) with docker compose and passing environment options to a service linked to your database container, do not wrap the profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile (see the sketch below).
I had some subtle and strange things incorrectly configured initially, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use an in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And, when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
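A minimal sketch of what point 2 refers to, in compose terms (SPRING_PROFILES_ACTIVE and the profile name are illustrative; the point is the quoting):

app:
  environment:
    - SPRING_PROFILES_ACTIVE=docker      # correct: profile "docker" is activated
    # - SPRING_PROFILES_ACTIVE="docker"  # wrong: the quotes become part of the value,
    #                                    # so Spring looks for a profile literally named "docker" including quotes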
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume already exists on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
