I am setting up an NGINX node as a container using a Docker Compose file.
My compose file is as below:
version: '3.9'
services:
  reverse_proxy_nginx:
    image: nginx:1.19.10-alpine
    container_name: reverse_proxy_nginx
    networks:
      external_net:
        ipv4_address: 192.168.100.10
    ports:
      - "80:80"
    volumes:
      - ./static/:/usr/share/nginx/html/
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - /etc/localtime:/etc/localtime:ro
    command: sh -c "rm -f /etc/nginx/conf.d/* &&
      nginx -c /etc/nginx/nginx.conf"
    tty: true
networks:
  external_net:
    external:
      name: localsw
After running docker-compose up:
[root@Site Reverse_Proxy]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e1498798339d nginx:1.19.10-alpine "/docker-entrypoint.…" 6 seconds ago **Exited (0) 5 seconds ago** reverse_proxy_nginx
Both the docker logs command and the container's -json.log file output nothing:
[root@Site Reverse_Proxy]# docker logs reverse_proxy_nginx
[root@Site Reverse_Proxy]# cat /var/lib/docker/containers/e1498798339d022ec5d744e1c557ca3e9f0779be114ffb0349acc9be1a70e5c7/e1498798339d022ec5d744e1c557ca3e9f0779be114ffb0349acc9be1a70e5c7-json.log
[root@Site Reverse_Proxy]#
In my previous attempts, I did see docker logs output showing docker-entrypoint.sh executing the other shell scripts (10-listen-on-ipv6-by-default.sh, 20-envsubst-on-templates.sh, etc.) and looking for the default.conf file to initialize the NGINX server. But now it outputs nothing!
I searched on this issue for quite a long time and did not find any solid, complete solution yet. Does anyone know where I should report this bug?
Putting the directive below in the nginx configuration file solves the problem:
daemon off;
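For context on why the container exited cleanly: the stock nginx image stays in the foreground because its entrypoint runs nginx -g 'daemon off;'. Overriding command with a plain nginx -c ... lets nginx daemonize, so PID 1 exits and Docker reports Exited (0). An equivalent fix, as a sketch keeping the original conf.d cleanup, is to pass the directive on the command line instead of editing nginx.conf:

```yaml
command: sh -c "rm -f /etc/nginx/conf.d/* &&
  nginx -c /etc/nginx/nginx.conf -g 'daemon off;'"
```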
Related
I'm trying to understand why I can't see containers created with docker-compose up -d using docker ps. If I go to the folder where the docker-compose.yaml is located and run docker-compose ps, I can see the container running. I tried the same on Windows (I'm using Ubuntu) and there it works as expected; I can see the container just by running docker ps. Could anyone give me a hint about this behavior, please? Thanks in advance.
Environment:
Docker version 20.10.17, build 100c701
docker-compose version 1.25.0, build unknown
Ubuntu 20.04.4 LTS
In my terminal I see this output:
/GIT/project$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project$ cd scripts/
/GIT/project/scripts$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
/GIT/project/scripts$ docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------
scripts_db_1 docker-entrypoint.sh --def ... Up 0.0.0.0:3306->3306/tcp,:::3306->3306/tcp,
33060/tcp
/GIT/project/scripts$
docker-compose.yaml
version: '3.3'
services:
  db:
    image: mysql:5.7
    # NOTE: use of "mysql_native_password" is not recommended: https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password
    # (this is just an example, not intended to be a production configuration)
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      # <Port exposed> : <MySQL Port running inside container>
      - 3306:3306
    expose:
      # Opens port 3306 on the container
      - 3306
    # Where our data will be persisted
    volumes:
      - treip:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: changeit
      MYSQL_DATABASE: treip
volumes:
  treip:
I executed the container with sudo and the problem was solved. The container now appears in docker ps, so instead of docker-compose up I ran sudo docker-compose up. Sorry, my bad.
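For background on why sudo changed the result: plain docker and sudo docker can end up talking to two different daemons (for example a rootless daemon versus the system one, or a DOCKER_HOST override), and each daemon has its own container list. A rough sketch of how the endpoint is chosen (docker_endpoint is an illustrative helper, not a docker command):

```shell
# Sketch: resolve which daemon endpoint the docker CLI would talk to.
# If plain docker and sudo docker resolve to different sockets, each
# one lists a different set of containers.
docker_endpoint() {
  if [ -n "${DOCKER_HOST:-}" ]; then
    echo "$DOCKER_HOST"                           # explicit override wins
  elif [ -S "${XDG_RUNTIME_DIR:-}/docker.sock" ]; then
    echo "unix://$XDG_RUNTIME_DIR/docker.sock"    # rootless daemon
  else
    echo "unix:///var/run/docker.sock"            # system daemon
  fi
}

docker_endpoint
```

On newer Docker versions docker context ls shows similar information, and adding your user to the docker group removes the need for sudo when using the system daemon.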
Standard deployment of jasperreports (docker pull bitnami/jasperreports - under Ubuntu 20.04.3 LTS)
version: '3.7'
services:
  jasperServerDB:
    container_name: jasperServerDB
    image: docker.io/bitnami/mariadb:latest
    ports:
      - '3306:3306'
    volumes:
      - './jasperServerDB_data:/bitnami/mariadb'
    environment:
      - MARIADB_ROOT_USER=mariaDbUser
      - MARIADB_ROOT_PASSWORD=mariaDbPassword
      - MARIADB_DATABASE=jasperServerDB
  jasperServer:
    container_name: jasperServer
    image: docker.io/bitnami/jasperreports:latest
    ports:
      - '8085:8080'
    volumes:
      - './jasperServer_data:/bitnami/jasperreports'
    depends_on:
      - jasperServerDB
    environment:
      - JASPERREPORTS_DATABASE_HOST=jasperServerDB
      - JASPERREPORTS_DATABASE_PORT_NUMBER=3306
      - JASPERREPORTS_DATABASE_USER=dbUser
      - JASPERREPORTS_DATABASE_PASSWORD=dbPassword
      - JASPERREPORTS_DATABASE_NAME=jasperServerDB
      - JASPERREPORTS_USERNAME=adminUser
      - JASPERREPORTS_PASSWORD=adminPassword
    restart: on-failure
The reporting server is behind an nginx reverse proxy which points to port 8085 of the Docker machine.
Everything works as expected on the https://my.domain.com/jasperserver/ url.
It is required to have the JasperReports server responding on the https://my.domain.com/ url only.
What is the recommended/best approach to configure the container (default Tomcat application) which can survive container's restarts and updates?
Some results from searching the net:
https://cwiki.apache.org/confluence/display/tomcat/HowTo#HowTo-HowdoImakemywebapplicationbetheTomcatdefaultapplication?
https://coderanch.com/t/85615/application-servers/set-application-default-application
https://benhutchison.wordpress.com/2008/07/30/how-to-configure-tomcat-root-context/
None of which seem clearly applicable to Bitnami containers.
Hopefully there is a simple image configuration which could be included in the docker-compose.yml file.
Reference to GitHub Bitnami JasperReports Issues List where the same question is posted.
After trying all the recommended ways to achieve the requirement, it seems that Addendum 1 from cwiki.apache.org is the best one.
Submitted a PR to Bitnami with a single-parameter fix for the use case: ROOT URL setting
Here is a workaround in case the above PR doesn't get accepted
Step 1
Create a .sh file (e.g. start.sh) in the docker-compose.yml folder with the following content:
#!/bin/bash
docker-compose up -d
echo "Building JasperReports Server..."
# Long waiting period to ensure the container is up and running (health checks didn't work out well)
sleep 180;
echo "...completed!"
docker exec -u 0 -it jasperServer sh -c "rm -rf /opt/bitnami/tomcat/webapps/ROOT && rm /opt/bitnami/tomcat/webapps/jasperserver && ln -s /opt/bitnami/jasperreports /opt/bitnami/tomcat/webapps/ROOT"
echo "Ready to rock!"
Note that the container name must match the one from your docker-compose.yml file.
Step 2
Start the container by typing sh ./start.sh instead of docker-compose up -d.
Step 3
Give it some time and try https://my.domain.com/.
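As a possible refinement of the fixed sleep 180 in Step 1, the script could poll until the server actually answers instead of waiting a fixed period. This is only a sketch: wait_for is an illustrative helper, and the curl URL is an assumption based on the setup above.

```shell
# Sketch: poll until a readiness check succeeds, rather than a fixed sleep.
wait_for() {
  until "$@" >/dev/null 2>&1; do
    sleep 1   # poll interval; a few seconds is kinder to a slow boot
  done
}

# In start.sh this would replace "sleep 180;", for example:
# wait_for curl -fsS http://localhost:8085/jasperserver/
```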
I have the following docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres:${PG_VERSION}
    ports:
      - "${DB_PORT}:5432"
    environment:
      - POSTGRES_USER=${SUPER_USER}
      - POSTGRES_PASSWORD=${SUPER_PASS}
      - POSTGRES_DB=${DB_NAME}
      - SUPER_USER=${SUPER_USER}
      - SUPER_USER_PASSWORD=${SUPER_PASS}
      - DB_NAME=${DB_NAME}
      - DB_USER=${DB_USER}
      - DB_PASS=${DB_PASS}
      - DB_ANON_ROLE=${DB_ANON_ROLE}
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d
  # PostgREST
  postgrest:
    image: postgrest/postgrest
    ports:
      - "${API_PORT}:3000"
    links:
      - db:db
    environment:
      - PGRST_DB_URI=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/${DB_NAME}
      - PGRST_DB_SCHEMA=${DB_SCHEMA}
      - PGRST_DB_ANON_ROLE=${DB_ANON_ROLE}
      - PGRST_JWT_SECRET=${JWT_SECRET}
    depends_on:
      - db
  swagger:
    image: swaggerapi/swagger-ui
    ports:
      - ${SWAGGER_PORT}:8080
    environment:
      API_URL: ${SWAGGER_API_URL:-http://localhost:${API_PORT}/}
And another file docker-compose.prod.yml
version: '3'
services:
  db:
    volumes:
      - ./initdb/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./var/postgres-data:/var/lib/postgresql/data
      - ./var/log/postgresql:/var/log/postgresql
      - ./etc/postgresql/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
  nginx:
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./var/log/nginx:/var/log/nginx
    depends_on:
      - postgrest
As you can see I am adding a few volumes to the db service, but importantly I have also added a new nginx service.
The reason I am adding it in this file is because nginx is not needed during development.
However, what is strange is that when I issue the docker-compose up command as follows:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
And then list the processes with
docker-compose ps
I get the following output
Name Command State Ports
-----------------------------------------------------------------------------------------
api_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
api_postgrest_1 /bin/sh -c exec postgrest ... Up 0.0.0.0:3000->3000/tcp
api_swagger_1 /docker-entrypoint.sh sh / ... Up 80/tcp, 0.0.0.0:8080->8080/tcp
Notice that nginx is not here. However it is actually running, when I issue:
docker ps
I get the following output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba281fd80743 nginx "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp api_nginx_1
d0028fdaecf5 postgrest/postgrest "/bin/sh -c 'exec po…" 8 minutes ago Up 8 minutes 0.0.0.0:3000->3000/tcp api_postgrest_1
1d6e3d689210 postgres:11.2 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:5432->5432/tcp api_db_1
ed5fa7a71848 swaggerapi/swagger-ui "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 80/tcp, 0.0.0.0:8080->8080/tcp api_swagger_1
So my question is, why is docker-compose not seeing nginx as part of the group of services?
NOTE: The reason I am using this override approach, and not using extends, is that extends does not support services with links and depends_on properties. My understanding is that combining files like this is the recommended approach. However, I don't understand why it would not be possible to add new services in a secondary file.
For example see https://docs.docker.com/compose/extends/#example-use-case, here the docs are adding a new dbadmin service using this method, but no mention that the service won't be included in the output of docker-compose ps, and that there will be warnings about orphans, for example:
$ docker-compose down
Stopping api_postgrest_1 ... done
Stopping api_db_1 ... done
Stopping api_swagger_1 ... done
WARNING: Found orphan containers (api_nginx_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Removing api_postgrest_1 ... done
Removing api_db_1 ... done
Removing api_swagger_1 ... done
Removing network api_default
Tested on:
Docker version 20.10.4, build d3cb89e
Docker version 19.03.12-ce, build 48a66213fe
and:
docker-compose version 1.27.0, build unknown
docker-compose version 1.29.2, build 5becea4c
So I literally figured it out as I was typing, and noticed a related question.
The trick is this https://stackoverflow.com/a/45515734/2685895
The reason why my new nginx service was not visible, is because docker-compose ps by default only looks at the docker-compose.yml file.
In order to get the expected output, one needs to specify both files,
In my case:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
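For convenience, the documented COMPOSE_FILE environment variable achieves the same thing: exporting the file list once avoids repeating both -f flags on every command. A sketch:

```shell
# COMPOSE_FILE lists the files to merge, separated by ':' on Linux/macOS
# (the separator is configurable via COMPOSE_PATH_SEPARATOR).
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml

# With this exported, plain commands see the merged project, e.g.:
#   docker-compose ps      # now lists nginx too
#   docker-compose down    # no orphan warning for api_nginx_1
```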
We use the docker image nginx:stable-alpine in a docker compose setup:
core-nginx:
  image: nginx:stable-alpine
  restart: always
  environment:
    - NGINX_HOST=${NGINX_HOST}
    - NGINX_PORT=${NGINX_PORT}
    - NGINX_APP_HOST=${NGINX_APP_HOST}
  volumes:
    - ./nginx/conf/dev.template:/tmp/default.template
    - ./log/:/var/log/nginx/
  depends_on:
    - core-app
  command: /bin/sh -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_APP_HOST' < /tmp/default.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
  ports:
    - 5001:5001
Log files grow without limit in this setup.
Can anybody provide some pointers on how to limit the size of access.log and error.log?
There are a couple of ways of tackling this problem.
Docker log driver
The Nginx container you're using, by default, configures the access.log to go to stdout and the error.log to go to stderr. If you were to remove the volume you're mounting on /var/log/nginx, you would get this default behavior, which means you would be able to manage logs via the Docker log driver. The default json-file log driver has a max-size option that would do exactly what you want.
With this solution, you would use the docker logs command to inspect the nginx logs.
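As a sketch of this approach (the size values are illustrative), the service would drop the log volume and cap the json-file driver per container:

```yaml
core-nginx:
  image: nginx:stable-alpine
  # No volume on /var/log/nginx: logs go to stdout/stderr instead.
  logging:
    driver: json-file
    options:
      max-size: "10m"   # rotate each log file at 10 MB
      max-file: "3"     # keep at most 3 rotated files
```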
Containerized log rotation
If you really want to log to local files instead of using the Docker log driver, you can add a second container to your docker-compose.yml that:
Runs cron
Periodically calls a script to rename the log files
Sends an appropriate signal to the nginx process
To make all this work:
The cron container needs access to the nginx logs. Because you're storing the logs on a volume, you can just mount that same volume in the cron container.
The cron container needs to run in the nginx pid namespace in order to signal the nginx process. This is the --pid=container:... option to docker run, or the pid: option in docker-compose.yml.
For example, something like this:
version: "3"
services:
  nginx:
    image: nginx:stable-alpine
    restart: always
    volumes:
      - ./nginx-logs:/var/log/nginx
      - nginx-run:/var/run
    ports:
      - 8080:80
  logrotate:
    image: alpine:3.13
    restart: always
    volumes:
      - ./nginx-logs:/var/log/nginx
      - nginx-run:/var/run
      - ./cron.d:/etc/periodic/daily
    pid: service:nginx
    command: ["crond", "-f", "-L", "/dev/stdout"]
volumes:
  nginx-run:
In cron.d in my local directory, I have a script rotate-nginx-logs (mode 0755) that looks like this:
#!/bin/sh

pidfile=/var/run/nginx.pid
logdir=/var/log/nginx

if [ -f "$pidfile" ]; then
    echo "rotating nginx logs"
    for logfile in access error; do
        mv ${logdir}/${logfile}.log ${logdir}/${logfile}.log.old
    done
    kill -USR1 $(cat "$pidfile")
fi
With this configuration in place, the logrotate container will rename the logs once a day and send a USR1 signal to nginx, causing it to re-open its log files.
My preference would in general be for the first solution (gathering logs with Docker and using Docker log driver options to manage log rotation), since it reduces the complexity of the final solution.
I'm fairly new to Docker. I was trying to containerize a project with separate development and production versions. I came up with a very basic docker-compose configuration and then tried the override feature, which doesn't seem to work.
I added volume overrides to the web and celery services, but they are not actually mounted in the containers; I can confirm this by looking at the inspect output for both containers.
Contents of compose files:-
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - redis
  redis:
    image: redis:5.0.9-alpine
  celery:
    build: .
    command: celery worker -A facedetect.celeryapp -l INFO --concurrency=1 --without-gossip --without-heartbeat
    depends_on:
      - redis
    environment:
      - C_FORCE_ROOT=true
docker-compose.override.yml
version: '3'
services:
  web:
    volumes:
      - .:/code
    ports:
      - "8000:8000"
  celery:
    volumes:
      - .:/code
I use Docker with Pycharm on Windows 10.
Command executed to deploy the compose configuration:-
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full-path>/docker-compose.yml up -d
Command executed to inspect one of the containers:-
docker container inspect <container_id>
Any help would be appreciated! :)
I just figured out that I had provided only the docker-compose.yml file explicitly to the Run Configuration created in PyCharm, as it was mandatory to provide at least one of these files.
The command used by PyCharm explicitly mentions the .yml files using the -f option when running the configuration. Adding the docker-compose.override.yml file to the Run Configuration changed the command to
"C:\Program Files\Docker Toolbox\docker-compose.exe" -f <full_path>\docker-compose.yml -f <full_path>/docker-compose.override.yml up -d
This solved the issue. Thanks to Exadra37 for directing me to look at the command that was being executed.