JasperReports as Tomcat default application in URL - docker

Standard deployment of JasperReports (docker pull bitnami/jasperreports) under Ubuntu 20.04.3 LTS:
version: '3.7'
services:
  jasperServerDB:
    container_name: jasperServerDB
    image: docker.io/bitnami/mariadb:latest
    ports:
      - '3306:3306'
    volumes:
      - './jasperServerDB_data:/bitnami/mariadb'
    environment:
      - MARIADB_ROOT_USER=mariaDbUser
      - MARIADB_ROOT_PASSWORD=mariaDbPassword
      - MARIADB_DATABASE=jasperServerDB
  jasperServer:
    container_name: jasperServer
    image: docker.io/bitnami/jasperreports:latest
    ports:
      - '8085:8080'
    volumes:
      - './jasperServer_data:/bitnami/jasperreports'
    depends_on:
      - jasperServerDB
    environment:
      - JASPERREPORTS_DATABASE_HOST=jasperServerDB
      - JASPERREPORTS_DATABASE_PORT_NUMBER=3306
      - JASPERREPORTS_DATABASE_USER=dbUser
      - JASPERREPORTS_DATABASE_PASSWORD=dbPassword
      - JASPERREPORTS_DATABASE_NAME=jasperServerDB
      - JASPERREPORTS_USERNAME=adminUser
      - JASPERREPORTS_PASSWORD=adminPassword
    restart: on-failure
The reporting server is behind an nginx reverse proxy which points to port 8085 of the Docker host.
Everything works as expected at the https://my.domain.com/jasperserver/ URL.
It is required to have the JasperReports server responding at https://my.domain.com/ only.
What is the recommended/best approach to configure the container (making JasperReports the default Tomcat application) so that the change survives container restarts and updates?
Some results from searching the net:
https://cwiki.apache.org/confluence/display/tomcat/HowTo#HowTo-HowdoImakemywebapplicationbetheTomcatdefaultapplication?
https://coderanch.com/t/85615/application-servers/set-application-default-application
https://benhutchison.wordpress.com/2008/07/30/how-to-configure-tomcat-root-context/
These are doubtfully applicable to Bitnami containers.
Hopefully there is a simple image configuration which could be included in the docker-compose.yml file.
Reference to GitHub Bitnami JasperReports Issues List where the same question is posted.

After trying all recommended ways to achieve the requirement, it seems that Addendum 1 from cwiki.apache.org is the best one.
Submitted a PR to Bitnami with a single-parameter fix for this use case: ROOT URL setting.
Here is a workaround in case the above PR doesn't get accepted.
Step 1
Create a .sh file (e.g. start.sh) in the docker-compose.yml folder with the following content:
#!/bin/bash
docker-compose up -d
echo "Building JasperReports Server..."
# Long waiting period to ensure the container is up and running (health checks didn't work out well)
sleep 180;
echo "...completed!"
docker exec -u 0 -it jasperServer sh -c "rm -rf /opt/bitnami/tomcat/webapps/ROOT && rm /opt/bitnami/tomcat/webapps/jasperserver && ln -s /opt/bitnami/jasperreports /opt/bitnami/tomcat/webapps/ROOT"
echo "Ready to rock!"
Note that the container name must match the one from your docker-compose.yml file.
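As a possible refinement (not part of the original workaround), the fixed sleep 180 could be replaced with a polling loop that waits until Tomcat actually answers on the published port 8085; this assumes curl is available on the Docker host:

until curl -sf -o /dev/null http://localhost:8085/jasperserver/; do
  echo "Still waiting for JasperReports Server..."
  sleep 10
done

Once the loop exits, the docker exec symlink swap from the script can run as before.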
Step 2
Start the containers by typing sh ./start.sh instead of docker-compose up -d.
Step 3
Give it some time and try https://my.domain.com/.

Related

Docker Compose Detach Mode Parameter Error

Here is my docker-compose file, mysql.yml:
# Use root/example as user/password credentials
version: '3'
services:
  db:
    image: mysql
    tty: true
    stdin_open: true
    command: --default-authentication-plugin=mysql_native_password
    container_name: db
    restart: always
    networks:
      - db
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example1
    command: bash -c "apt update"
  adminer:
    image: adminer
    restart: always
    container_name: web
    networks:
      - db
    ports:
      - 8080:8080
    volumes:
      - ./data/db:/var/lib/mysql
networks:
  db:
    external: true
When I run this file with "docker-compose -f mysql.yml up -d" it starts working, but after 5 or 10 seconds the db container dies with exit code 0. Then it restarts because of the restart: always parameter.
I searched the internet for my problem and found some suggested solutions:
First one, adding the
tty: true
stdin_open: true
parameters, but they do not work; the container dies anyway.
Second one,
entrypoint:
  - bash
  - -c
command:
  - |
    tail -f /dev/null
This solution works, but it overrides the default entrypoint, so my MySQL service does not actually run in the end.
Yes, I could chain entrypoints or create a Dockerfile (I actually want to keep all of this in a single file), but I don't think that's the right way, and I need some advice.
Thanks in advance!
When your Compose setup says:
command: bash -c "apt update"
This is the only thing the container does; this runs instead of the normal container process. Once that command completes (successfully) the container will exit (with status code 0).
In normal operation you shouldn't need to specify the command: for a container; the Dockerfile will have a CMD line that provides a useful default. (The notable exception is a setup where you have both a Web server and a background worker sharing substantial code, so you can set CMD to run, say, the Flask application but override command: to run a Celery worker.)
Many of the other options you include in the docker-compose.yml file are unnecessary. You can safely remove tty:, stdin_open:, container_name:, and networks: with no ill effects. (You can configure the Compose-provided default network if you specifically need containers running on a pre-created network.)
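Putting that together, a trimmed-down compose file might look like the sketch below (it keeps your images, ports, and password; as a side note, the ./data/db mount is shown on the db service, where MySQL actually writes /var/lib/mysql, rather than on adminer):

version: '3'
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example1
    volumes:
      - ./data/db:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080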
The comments hint at trying to run package updates at container startup time. I'd echo @xdhmoore's comment here: you should only run APT or similar package managers during an image build, never on a running container. (You don't want your application startup to fail because a Debian mirror is down, or because an incompatible update has gotten deployed.)
The standard Docker Hub images generally update fairly frequently, especially if you're not pinning to a specific patch release. If you run
docker-compose pull
docker-compose up
it will ask Docker Hub for a newer version of the image, and recreate the container on it if needed.
The standard Docker Hub packages also frequently download and install the thing they're packaging outside their distribution's package manager system, so running an upgrade isn't necessarily useful.
If you must, though, the best way to do this is to write a minimal Dockerfile
FROM mysql
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get upgrade --assume-yes
and reference it in the docker-compose.yml file
services:
  db:
    build: .
    # replacing the image: line
    # do NOT leave `image: mysql` behind

docker-compose COPY before running entrypoint

Using Docker Desktop with WSL2, the ultimate aim is to run a shell command to generate local SSL certs before starting an nginx service.
To docker-compose up, we have:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
So far so good. Now we would like to add a pre-startup script, /home/scripts/certs.sh, to create the SSL certs before launching the nginx server:
mkdir -p /home/ssl/certs
mkdir -p /home/ssl/private
openssl req -x509 -nodes -days 365 -subj "/C=CA/ST=QC/O=Company, Inc./CN=zero.url" -addext "subjectAltName=DNS:mydomain.com" -newkey rsa:2048 -keyout /home/ssl/private/nginx-zero.key -out /home/ssl/certs/nginx-zero.crt;
Now adding the following to docker-compose.yml causes the container to bounce between running and restarting: it keeps recreating the certs via the script, which then exits the container. There is no error message. I assume the exit code means the container is exiting cleanly, which then triggers the restart.
command: /bin/sh -c "/home/scripts/certs.sh"
Following other answers, adding exec "$@" makes no difference.
As an alternative, I tried to copy the script into the nginx pre-launch folder /docker-entrypoint.d. This creates an error on docker-compose up:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
This generates an error:
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 18, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 64
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
So what are the options for running a script before starting the primary docker-entrypoint.sh script?
UPDATE:
As per the suggestion in a comment, changing the format of the flag did not help:
version: '3.6'
services:
  # Frontend
  rp:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS: 1
    container_name: revproxy
    image: nginx:latest
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\dc_scripts:/home/scripts
    COPY /home/scripts/certs.sh /docker-entrypoint.d/certs.sh
ERROR: yaml.scanner.ScannerError: while scanning a simple key
in ".\docker-compose.yml", line 17, column 7
could not find expected ':'
in ".\docker-compose.yml", line 18, column 7
The terminal process "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -Command docker-compose -f "docker-compose.yml" up -d --build" terminated with exit code: 1.
Dockerfiles are used to build images, and contain a list of instructions like RUN, COPY, and ENTRYPOINT. They have a very shell-script-like syntax, with one instruction per line (for the most part).
A docker-compose file, on the other hand, is a YAML-formatted file used to deploy built images to Docker as running services. You cannot put Dockerfile instructions like COPY in this file.
You can, for local deployments on non-Windows systems, map individual files in the volumes section:
volumes:
  - .\conf:/home/conf
  - .\scripts:/home/scripts
  - ./scripts/certs.sh:/usr/local/bin/certs.sh
But I believe this syntax only works on Linux and macOS hosts.
An alternative is to restructure your project with a Dockerfile and a docker-compose.yml file.
With a Dockerfile
FROM nginx:latest
COPY --chmod=0755 scripts/certs.sh /usr/local/bin
ENTRYPOINT ["certs.sh"]
In the docker-compose.yml, add a build: node with the path to the Dockerfile; "." will do. docker-compose build will be needed to force a rebuild if the Dockerfile changes after the first build.
version: '3.9'
services:
  revproxy:
    environment:
      COMPOSE_CONVERT_WINDOWS_PATHS: 1
    image: nginx:custom
    build: .
    user: root
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .\conf:/home/conf
      - .\scripts:/home/scripts
Now that you've changed the entrypoint of the nginx container to your custom script, you need to chain to the original one and call it with the original command.
So, certs.sh needs to look like:
#!/bin/sh
# your cert setup here

# With ENTRYPOINT ["certs.sh"], "$@" already contains only the image's default
# command (nginx -g "daemon off;"), so it can be passed through unchanged.
# Hand control back to the image's original entrypoint with that command line.
exec /docker-entrypoint.sh "$@"
docker inspect nginx:latest was used to discover the original entrypoint.
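For reference, a format string like the following prints just those two fields (the output shown is what current nginx images typically report and may vary by tag):

docker inspect --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}' nginx:latest
# ["/docker-entrypoint.sh"] ["nginx","-g","daemon off;"]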
Added after edit:
Also, COMPOSE_CONVERT_WINDOWS_PATHS doesn't look like an environment variable that nginx is going to care about. This variable should probably be set in your Windows user environment so it is available before running docker-compose.
C:\> set COMPOSE_CONVERT_WINDOWS_PATHS=1
C:\> docker-compose build
...
C:\> docker-compose up
...
Also, the nginx page on Docker Hub indicates that /etc/nginx is the proper configuration folder for nginx, so I don't think that mapping things to /home/... is going to do anything; nginx should display its default page, however.
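If the intent of the .\conf mount is to actually feed nginx its configuration, a mapping along these lines is closer to what the image expects (this assumes .\conf contains server-block files; it is a sketch, not taken from the question):

volumes:
  - .\conf:/etc/nginx/conf.d:ro   # nginx includes *.conf files from this folder by default
  - .\scripts:/home/scripts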

Unable to access site in docker-compose

I've been using:
docker build -t devstack .
docker run --rm -p 443:443 -it -v ~/code:/code devstack
That has been working fine for me so far. I've been able to access the site as expected through my browser. I set my hosts file to point devstack.com to 127.0.0.1 and the site loads nicely. Now I'm trying to use docker-compose so I can use some of the functionality there to more easily connect with AWS.
services:
  web:
    build:
      context: .
    network_mode: "bridge"
    ports:
      - "443"
      - "80"
    volumes:
      - ~/code:/code
    image: devstack:latest
So I run docker-compose build which gives me the familiar build stuff from Dockerfile.
Then I run docker-compose run web which puts me into the VM where I start apache (doing it manually at the moment), hit top to verify it’s running, then tail the log files. But when I attempt to hit the site in my browser, I get: devstack.com refused to connect. and no logs in the apache log files, so it's not even getting to apache. So something about the ports isn't opening up to me. Any idea what I need to change to make this work?
Edit: Updated file. Still same problem:
version: "3"
services:
web:
build:
context: .
# Same issue with both of these:
# network_mode: "bridge"
# network_mode: "host"
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
tty: true
This is what I did to get it working. I used the example project shown in the Docker Compose documentation, which runs a test app on port 5000. That worked, so I knew it could be done.
I updated my docker-compose.yml to be very similar to the one in the test project. So it looks like this now:
version: "3"
services:
web:
build: .
ports:
- "443:443"
- "80:80"
volumes:
- ~/code:/code
Then I created an entry.sh file which will start apache, and added this to my Dockerfile:
# copy the entry file which will start apache
COPY entry.sh entry.sh
RUN chmod +x entry.sh
# start apache
CMD ./entry.sh; tail -f /var/log/apache2/*.log
So now when I do docker-compose up, it starts apache and tails the apache log files, so I immediately see the apache log output in the terminal. Then I'm able to access the site. Basically the problem was just the container exiting. This was the only way I could find to keep it from exiting without setting tty: true in the docker-compose file, which kept it from exiting but wouldn't publish the ports (docker-compose run doesn't publish the service's ports unless you pass --service-ports).
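The entry.sh itself isn't shown above; a minimal sketch, assuming a standard Apache installation where apachectl is on the PATH inside the image, could be as small as:

#!/bin/sh
# Start Apache in the background; the Dockerfile CMD then tails the log files,
# which keeps the container's main process alive and the published ports usable.
apachectl start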

Docker Compose can't access service using hostname

I am quite new to docker but am trying to use docker compose to run automation tests against my application.
I have managed to get docker compose to run my application and run my automation tests, however, at the moment my application is running on localhost when I need it to run against a specific domain example.com.
From research into docker it seems you should be able to hit the application on the hostname by setting it within links, but I still don't seem to be able to.
Below is the code for my docker compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and am using the following command to run...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3.5'
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3.5'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build:
      context: test/integration/
      dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: ":1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
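As a sketch of that last suggestion (the script name and the use of xdpyinfo are assumptions, not taken from the question), a wrapper could replace the long sh -c chain and avoid the blind sleep 20:

#!/bin/sh
set -e
# Start a virtual X server in the background and log its output.
Xvfb :1 -screen 0 1024x768x16 >xvfb.log 2>&1 &
export DISPLAY=:1.0

# Wait until the X server accepts connections instead of sleeping a fixed time.
for i in $(seq 1 30); do
  if xdpyinfo -display "$DISPLAY" >/dev/null 2>&1; then
    break
  fi
  sleep 1
done

# Run the tests against the aliased hostname.
ENVIRONMENT=qa BASE_URL=https://example.com npm run automation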

Mounted volume is empty inside container

I've got a docker-compose.yml like this:
db:
  image: mongo:latest
  ports:
    - "27017:27017"
server:
  image: artificial/docker-sails:stable-pm2
  command: sails lift
  volumes:
    - server/:/server
  ports:
    - "1337:1337"
  links:
    - db
server/ is relative to the folder of the docker-compose.yml file. However when I docker exec -it CONTAINERID /bin/bash and check /server it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in the Docker settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and then re-enable it (and click "Apply"). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
  - ./server:/server
instead of server/ -- there are some cases where Docker doesn't like the trailing slash.
As per docker volumes documentation,
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you
supply an absolute path for the host-dir, Docker bind-mounts to the
path you specify. If you supply a name, Docker creates a named volume
by that name
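In compose terms the distinction looks like this (serverdata is a made-up name, just for illustration):

volumes:
  - ./server:/server      # starts with ./ or /: bind-mounts the host folder
  - serverdata:/server    # bare name: treated as a named volume (declared under a top-level volumes: key in v2/v3 files)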
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> that points to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
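On Windows, the symlink itself can be created with mklink from an elevated command prompt (the paths mirror the example above):

mklink /D c:\Users\<username>\my_project d:\my_project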
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people made the same mistake, but the host directory path has to start from /home.
So my mistake was that in my docker-compose file I was wrongly specifying the following:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /Desktop/subfolder/subfolder2:/app/subfolder
When the host path should have been the full path from /home, something like:
services:
  myservice:
    build: .
    ports:
      - 8888:8888
    volumes:
      - /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
  ...
  volumeConsumingService:
    container_name: production-001-volumeConsumingService
    hostname: production-001-volumeConsumingService
    image: group/production-001-volumeConsumingService
    build:
      context: .
      dockerfile: volumeConsumingService.Dockerfile
    depends_on:
      - anotherServiceDefinedEarlier
    restart: always
    volumes:
      - ../data/certbot/conf:/etc/letsencrypt # mounting
      - ../data/certbot/www:/var/www/certbot # not mounting
      - ../data/www/public:/var/www/public # not mounting
      - ../data/www/root:/var/www/root # not mounting
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    networks:
      - default
      - external
  ...
networks:
  external:
    name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case when the volumes are not mounted after a host reboot, adding a cron task to restart the service once should do.
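A sketch of such a cron entry (the 120-second delay is an arbitrary guess to let Compose bring the stack up first, and the crontab user needs permission to run docker):

@reboot sleep 120 && docker restart production-001-volumeConsumingService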
In my case, the volume was empty because I did not use the right path format: I wrapped the path in double quotes.
If you have a relative or absolute path with spaces in it, you do not need double quotes around the path; a path with spaces is understood as-is, since docker-compose uses ":" as the delimiter and does not split on spaces.
Ways that do not work (the double quotes are the problem!):
volumes:
  - "MY_PATH.../my server":/server
  - "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
  - "./my server":/server
  - ."/my server":/server
  - "./my server:/server"
  - ."/my server:/server"
Two ways you can do it (no double quotes!):
volumes:
  - MY_PATH.../my server:/server
  - ./my server:/server
