Standard deployment of JasperReports (docker pull bitnami/jasperreports, under Ubuntu 20.04.3 LTS):
version: '3.7'
services:
  jasperServerDB:
    container_name: jasperServerDB
    image: docker.io/bitnami/mariadb:latest
    ports:
      - '3306:3306'
    volumes:
      - './jasperServerDB_data:/bitnami/mariadb'
    environment:
      - MARIADB_ROOT_USER=mariaDbUser
      - MARIADB_ROOT_PASSWORD=mariaDbPassword
      - MARIADB_DATABASE=jasperServerDB
  jasperServer:
    container_name: jasperServer
    image: docker.io/bitnami/jasperreports:latest
    ports:
      - '8085:8080'
    volumes:
      - './jasperServer_data:/bitnami/jasperreports'
    depends_on:
      - jasperServerDB
    environment:
      - JASPERREPORTS_DATABASE_HOST=jasperServerDB
      - JASPERREPORTS_DATABASE_PORT_NUMBER=3306
      - JASPERREPORTS_DATABASE_USER=dbUser
      - JASPERREPORTS_DATABASE_PASSWORD=dbPassword
      - JASPERREPORTS_DATABASE_NAME=jasperServerDB
      - JASPERREPORTS_USERNAME=adminUser
      - JASPERREPORTS_PASSWORD=adminPassword
    restart: on-failure
The reporting server is behind an nginx reverse proxy which points to port 8085 of the Docker machine.
Everything works as expected on the https://my.domain.com/jasperserver/ URL.
It is required to have the JasperReports server respond on the https://my.domain.com/ URL only.
What is the recommended/best approach to configure the container (i.e. make JasperReports the default Tomcat application) so that it survives container restarts and updates?
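For context, the nginx side of this setup presumably looks something like the sketch below; the server name, certificate paths, and upstream address are assumptions, not taken from the question. With this in place, /jasperserver/ reaches the app, but / hits Tomcat's empty ROOT application, which is why the fix needs to happen inside the container.
# hypothetical nginx server block for the proxy described above
server {
    listen 443 ssl;
    server_name my.domain.com;
    ssl_certificate     /etc/nginx/certs/my.domain.com.crt;  # assumed path
    ssl_certificate_key /etc/nginx/certs/my.domain.com.key;  # assumed path
    location / {
        # forward everything to port 8085 on the Docker host
        proxy_pass http://127.0.0.1:8085;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}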
Some results from searching the net:
https://cwiki.apache.org/confluence/display/tomcat/HowTo#HowTo-HowdoImakemywebapplicationbetheTomcatdefaultapplication?
https://coderanch.com/t/85615/application-servers/set-application-default-application
https://benhutchison.wordpress.com/2008/07/30/how-to-configure-tomcat-root-context/
These are doubtfully applicable to Bitnami containers.
Hopefully there is a simple image configuration option which could be included in the docker-compose.yml file.
For reference, the same question is posted in the GitHub Bitnami JasperReports issues list.
After trying all the recommended ways to achieve the requirement, it seems that Addendum 1 from the cwiki.apache.org link above is the best one.
I submitted a PR to Bitnami with a single-parameter fix for this use case: a ROOT URL setting.
Here is a workaround in case the above PR doesn't get accepted.
Step 1
Create a .sh file (e.g. start.sh) in the docker-compose.yml folder with the following content (see the polling sketch after Step 3 for an alternative to the fixed sleep):
#!/bin/bash
docker-compose up -d
echo "Building JasperReports Server..."
# Long waiting period to ensure the container is up and running (health checks didn't work out well)
sleep 180
echo "...completed!"
docker exec -u 0 -it jasperServer sh -c "rm -rf /opt/bitnami/tomcat/webapps/ROOT && rm /opt/bitnami/tomcat/webapps/jasperserver && ln -s /opt/bitnami/jasperreports /opt/bitnami/tomcat/webapps/ROOT"
echo "Ready to rock!"
Note that the container name must match the one from your docker-compose.yml file.
Step 2
Start the stack by typing sh ./start.sh instead of docker-compose up -d.
Step 3
Give it some time and try https://my.domain.com/.
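As an alternative to the fixed sleep 180, the script can poll the server until Tomcat actually responds before swapping the ROOT application. A rough sketch, assuming curl is available on the host and that the app answers on http://localhost:8085/jasperserver/ once deployed:
#!/bin/bash
docker-compose up -d
echo "Waiting for JasperReports Server..."
# Poll the known-good context path until it returns HTTP 200 (give up after ~10 minutes)
for i in $(seq 1 60); do
  curl -fso /dev/null http://localhost:8085/jasperserver/ && break
  sleep 10
done
docker exec -u 0 -it jasperServer sh -c "rm -rf /opt/bitnami/tomcat/webapps/ROOT && rm /opt/bitnami/tomcat/webapps/jasperserver && ln -s /opt/bitnami/jasperreports /opt/bitnami/tomcat/webapps/ROOT"
echo "Ready to rock!"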
I am trying to get a simple docker-compose file working on Windows.
version: "2"
volumes:
db_data: {}
services:
db:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: test123
volumes:
- db_data:/var/lib/mysql/data
I need to persist the db data. I've created the db_data directory, and I found this solution in a GitHub issue: https://github.com/docker-library/mysql/issues/69. I had previously been using MySQL 5.6. I'm simply running
docker-compose up -d
when I check
docker ps
I do not get any running processes. Any help with this would be greatly appreciated. I've added the output from running the command below:
PS D:\test-exercise> docker-compose up -d
Starting test-exercise_db_1 ... done
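Since up -d reports "done" but docker ps shows nothing, the container most likely started and then exited immediately. A standard way to confirm this (not shown in the question) is to list exited containers and read the container's logs:
# show the container even after it has exited, including its exit status
docker ps -a
# print the database startup output to see why it stopped
docker logs test-exercise_db_1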
I cannot start the Keycloak container using Ansible and docker-compose. I'm getting the error: User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
I have 3 Ansible jobs:
Create network:
- name: Create an internal network
  docker_network:
    name: internal
Set up Postgres:
- name: "Install Postgres"
  docker_compose:
    project_name: posgressdb
    restarted: true
    pull: yes
    definition:
      version: '2'
      services:
        postgres:
          image: postgres:12.1
          container_name: postgres
          restart: always
          env_file:
            - /etc/app/db.env
          networks:
            - internal
          volumes:
            - postgres-data:/var/lib/postgresql/data
            - /etc/app/createdb.sh:/docker-entrypoint-initdb.d/init-app-db.sh
          ports:
            - "5432:5432"
      volumes:
        postgres-data:
      networks:
        internal:
          external:
            name: internal
Create the Keycloak container:
- name: Install keycloak
  docker_compose:
    project_name: appauth
    restarted: true
    pull: yes
    definition:
      version: '2'
      services:
        keycloak:
          image: jboss/keycloak:8.0.1
          container_name: keycloak
          restart: always
          environment:
            - DB_VENDOR=POSTGRES
            - DB_ADDR=postgres
            - DB_PORT=5432
            - DB_SCHEMA=public
            - DB_DATABASE=keycloak
            - DB_USER=keycloak
            - DB_PASSWORD=keycloak
            - KEYCLOAK_USER=admin
            - KEYCLOAK_PASSWORD=admin
          networks:
            - internal
      networks:
        internal:
          external:
            name: internal
Does anyone have any idea why I get this error?
EDIT
If I downgrade Keycloak to version 7, it starts normally!
Just to clarify the other answers: I had the same issue, and what helped for me was:
stop all containers
comment out the two relevant lines:
version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      # KEYCLOAK_USER: admin
      # KEYCLOAK_PASSWORD: pass
    ...
start all containers
wait until the keycloak container has successfully started
stop all containers, again
comment back in the two lines from above:
version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: pass
    ...
start all containers
This time (and subsequent times) it worked. Keycloak was running and the admin user was registered and working as expected.
This happens when Keycloak is interrupted during boot. After this, the command which attempts to add the admin user starts to fail. In Keycloak 7 this wasn't fatal, but in 8.0.1 this line was added to /opt/jboss/tools/docker-entrypoint.sh, which aborts the entire startup script:
set -eou pipefail
Related issue: https://issues.redhat.com/browse/KEYCLOAK-12896
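A minimal illustration of why that line is fatal; the add-user call here stands in for the one in the real entrypoint:
#!/bin/bash
set -eou pipefail
# if the admin user is already recorded in keycloak-add-user.json, this exits non-zero...
/opt/jboss/keycloak/bin/add-user-keycloak.sh -u admin -p admin
# ...and errexit aborts the script right here, so the server start below is never reached
echo "starting server"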
The reason commenting out the KEYCLOAK_USER works is that it forces a recreation of the container. The same can be accomplished with:
docker rm -f keycloak
docker compose up keycloak
I had the same issue. After commenting out the KEYCLOAK_USER environment variables in docker-compose and updating the stack, the container started again.
docker_compose:
  project_name: appauth
  restarted: true
  pull: yes
  definition:
    version: '2'
    services:
      keycloak:
        image: jboss/keycloak:8.0.1
        container_name: keycloak
        restart: always
        environment:
          - DB_VENDOR=POSTGRES
          - DB_ADDR=postgres
          - DB_PORT=5432
          - DB_SCHEMA=public
          - DB_DATABASE=keycloak
          - DB_USER=keycloak
          - DB_PASSWORD=keycloak
          #- KEYCLOAK_USER=admin
          #- KEYCLOAK_PASSWORD=admin
        networks:
          - internal
    networks:
      internal:
        external:
          name: internal
According to my findings, the best way to set this default user is NOT by adding it via environment variables, but via the following command:
docker exec <CONTAINER> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
As per the official documentation.
I use Keycloak 12, where I still see this problem when the startup is interrupted. I could see that removing the file keycloak-add-user.json and restarting the container works.
The idea is to integrate this logic into the container startup, so I developed a simple custom entrypoint script:
#!/bin/bash
set -e
echo "executing the custom entry point script"
FILE=/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json
if [ -f "$FILE" ]; then
  echo "keycloak-add-user.json exists, hence deleting it"
  rm "$FILE"
fi
echo "executing the entry point script from the original image"
source "/opt/jboss/tools/docker-entrypoint.sh"
And I made sure to rebuild the Keycloak image with the appropriate ENTRYPOINT adaptation in the Dockerfile during the initial deployment:
ARG DEFAULT_IMAGE_BASEURL_APPS
FROM "${DEFAULT_IMAGE_BASEURL_APPS}/jboss/keycloak:12.0.1"
COPY custom-entrypoint.sh /opt/jboss/tools/custom-entrypoint.sh
ENTRYPOINT [ "/opt/jboss/tools/custom-entrypoint.sh" ]
As our deployment is on-premise, access for the development team is not that easy. All that our first-line support could do was try restarting the server where we deployed, hence the idea of this workaround.
The way I got past this was to replace set -eou pipefail with # set -eou pipefail within the container file systems.
I logged in as root on my Docker host and then edited each of the files returned by this search:
find /var/lib/docker/overlay2 | grep /opt/jboss/tools/docker-entrypoint.sh
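A sketch of the same edit done non-interactively; the same caveat applies, since this mutates files under Docker's overlay2 store:
# comment out the fatal line in every copy of the entrypoint found in the overlay2 layers
find /var/lib/docker/overlay2 -path '*/opt/jboss/tools/docker-entrypoint.sh' \
  -exec sed -i 's/^set -eou pipefail/# set -eou pipefail/' {} +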
Thomas's solution is good, but restarting all containers and starting over is wasteful because my docker-compose file has 7 services.
I resolved the issue in two steps.
First I commented out these two lines, like the other fellows did:
#- KEYCLOAK_USER=admin
#- KEYCLOAK_PASSWORD=admin
Then in a new terminal I ran this command, and it works:
docker-compose up keycloak
(keycloak is the service name)
For other users with this problem: if none of the previous answers have helped, check your connection to the database; this error usually appears if Keycloak cannot connect to the database.
Tested with Keycloak 8 in Docker.
I have tried the solution by Thomas, but it sometimes works and sometimes does not.
The issue is that Keycloak on boot does not find the required db, so it gets interrupted, as Zmey mentions. Have you tried adding depends_on: - postgres to the second Ansible job (see the sketch below)?
Having the same issue but with docker-compose, I first started only the postgres container in order to create the necessary dbs (a manual step), docker-compose up postgres, and then I booted the entire setup with docker-compose up.
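A sketch of that suggestion applied to the Keycloak definition from the question. Note that depends_on can only reference services defined in the same compose definition, so the Postgres and Keycloak jobs would have to be merged into one project for this to work:
services:
  keycloak:
    image: jboss/keycloak:8.0.1
    # start the database service before Keycloak boots
    depends_on:
      - postgres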
This was happening to me when I shut down the Keycloak containers in Portainer and tried to get them up and running again.
I can prevent the error by also removing the container after I've shut it down (both in Portainer) and then running docker-compose up. Make sure not to remove any volumes attached to your containers, or you may lose data.
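The same shut-down/remove/recreate cycle from the command line, assuming the container is named keycloak as in the question (volumes are left untouched):
docker stop keycloak
docker rm keycloak      # removes the container only, not its volumes
docker-compose up -d keycloak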
In case you want to add the user before the server starts, or want it to look like a classic migration, build a custom image with the admin parameters passed as build arguments:
FROM quay.io/keycloak/keycloak:latest
ARG ADMIN_USERNAME
ARG ADMIN_PASSWORD
RUN /opt/jboss/keycloak/bin/add-user-keycloak.sh -u $ADMIN_USERNAME -p $ADMIN_PASSWORD
docker-compose:
auth_service:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      ADMIN_USERNAME: ${KEYCLOAK_USERNAME}
      ADMIN_PASSWORD: ${KEYCLOAK_PASSWORD}
(do not add KEYCLOAK_USERNAME/KEYCLOAK_PASSWORD to the environment section)
I was facing this issue with Keycloak jboss/keycloak:11.0.3 running in Docker, error:
User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
Additional info: it was running with PostgreSQL v13.2, also in Docker. I had created some schemas for other resources but wasn't creating the schema for Keycloak, so the solution in my case was to run the create schema command in postgres:
CREATE SCHEMA IF NOT EXISTS keycloak AUTHORIZATION postgres;
NOTE: Hope this helps; none of the other solutions shared in this post solved my issue.
You can also stop the containers and simply remove the associated volumes.
If you don't know which volume is associated with your keycloak container, run:
docker-compose down
for vol in $(docker volume ls --format '{{.Name}}'); do
  docker volume rm $vol
done
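Note that the loop above removes every volume on the host, not just Keycloak's. If the goal is only to drop the volumes declared in the compose file, docker-compose can do that in one step:
# stops the stack and also removes the named volumes declared in docker-compose.yml
docker-compose down -v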
My Docker container keeps restarting when running docker-compose up -d. When inspecting the logs with docker logs --tail 50 --follow --timestamps db, I get the following error:
/usr/local/bin/docker-entrypoint.sh: line 37: "/run/secrets/db_mysql_root_pw": No such file or directory
This probably means that no secrets are created. The output of docker secret ls also shows no secrets.
My docker-compose.yml file looks something like this (excluding port info etc.):
version: '3.4'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
      - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
    secrets:
      - db_mysql_user
      - db_mysql_user_pw
      - db_mysql_root_pw
    volumes:
      - "./mysql-data:/docker-entrypoint-initdb.d"
secrets:
  db_mysql_user:
    file: ./db_mysql_user.txt
  db_mysql_user_pw:
    file: ./db_mysql_user_pw.txt
  db_mysql_root_pw:
    file: ./db_mysql_root_pw.txt
In the same directory I have the 3 text files which simply contain the values for the environment variables. e.g. db_mysql_user_pw.txt contains password.
I am running Linux containers on a Windows host.
This is pretty dumb, but changing
environment:
  - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
  - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
  - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
to
environment:
  - MYSQL_USER_FILE=/run/secrets/db_mysql_user
  - MYSQL_PASSWORD_FILE=/run/secrets/db_mysql_user_pw
  - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_mysql_root_pw
made it work: in the list syntax the quotes become part of the value, so the entrypoint was looking for a file literally named "/run/secrets/db_mysql_root_pw", quotes included, exactly as the error message shows. As for docker secret ls showing nothing, that command only lists Swarm-managed secrets; with plain docker-compose, file-based secrets are simply bind-mounted into the container, so they never appear there.
I would like to know if it's possible to execute a PSQL command inside the docker-compose file.
I have the following docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    ports:
      - "5432:5432"
    network_mode: host
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
      - POSTGRES_PORT=5432
And after this is running OK, I run the following command:
docker exec -i postgres-container psql -U username -d databasename < data.sql
These 2 steps work fine, but I would like to know if it's possible to make it one single step.
Every time I want to run this command, it's important that the database is brand new; that's why I don't persist it in a volume.
Is it possible to run docker-compose up and also run the psql command?
Thanks in advance!
Pure docker-compose solution, with a volume mount:
volumes:
  - ./data.sql:/docker-entrypoint-initdb.d/init.sql
According to the image's Dockerfile and entrypoint, at startup it executes every SQL file placed in /docker-entrypoint-initdb.d, but only when the data directory is empty, which is always the case here since nothing is persisted.
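Merged into the compose file from the question, the whole thing would look roughly like this; the mount is the only addition (the ports mapping is dropped because with network_mode: host the container already shares the host's network stack):
version: '3'
services:
  postgres:
    image: postgres:9.6
    container_name: postgres-container
    network_mode: host
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=databasename
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=
      - POSTGRES_PORT=5432
    volumes:
      # executed automatically by the image's entrypoint on first start,
      # i.e. whenever the (non-persisted) data directory is empty
      - ./data.sql:/docker-entrypoint-initdb.d/init.sql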