We use the Docker image nginx:stable-alpine in a Docker Compose setup:
core-nginx:
image: nginx:stable-alpine
restart: always
environment:
- NGINX_HOST=${NGINX_HOST}
- NGINX_PORT=${NGINX_PORT}
- NGINX_APP_HOST=${NGINX_APP_HOST}
volumes:
- ./nginx/conf/dev.template:/tmp/default.template
- ./log/:/var/log/nginx/
depends_on:
- core-app
command: /bin/sh -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_APP_HOST' < /tmp/default.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
ports:
- 5001:5001
Logfiles are unlimited in size in this setup.
Can anybody provide some pointers on how to limit the size of access.log and error.log?
There are a couple of ways of tackling this problem.
Docker log driver
The Nginx container you're using, by default, configures the access.log to go to stdout and the error.log to go to stderr. If you were to remove the volume you're mounting on /var/log/nginx, you would get this default behavior, which means you would be able to manage logs via the Docker log driver. The default json-file log driver has a max-size option that would do exactly what you want.
With this solution, you would use the docker logs command to inspect the nginx logs.
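For instance, something like the following could go on the core-nginx service in your compose file (compose file format 2+); the size and file count are illustrative, not required values:
core-nginx:
  image: nginx:stable-alpine
  logging:
    driver: json-file
    options:
      max-size: "10m"   # rotate once the JSON log file reaches ~10 MB
      max-file: "3"     # keep at most three rotated files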
Containerized log rotation
If you really want to log to local files instead of using the Docker log driver, you can add a second container to your docker-compose.yml that:
Runs cron
Periodically calls a script to rename the log files
Sends an appropriate signal to the nginx process
To make all this work:
The cron container needs access to the nginx logs. Because you're storing the logs on a volume, you can just mount that same volume in the cron container.
The cron container needs to run in the nginx pid namespace in order to signal the nginx process. This is the --pid=container:... option to docker run, or the pid: option in docker-compose.yml.
For example, something like this:
version: "3"
services:
nginx:
image: nginx:stable-alpine
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
ports:
- 8080:80
logrotate:
image: alpine:3.13
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
- ./cron.d:/etc/periodic/daily
pid: service:nginx
command: ["crond", "-f", "-L", "/dev/stdout"]
volumes:
nginx-run:
In cron.d in my local directory, I have rotate-nginx-logs (mode 0755) that looks like this:
#!/bin/sh
pidfile=/var/run/nginx.pid
logdir=/var/log/nginx
if [ -f "$pidfile" ]; then
echo "rotating nginx logs"
for logfile in access error; do
mv ${logdir}/${logfile}.log ${logdir}/${logfile}.log.old
done
kill -USR1 $(cat "$pidfile")
fi
With this configuration in place, the logrotate container will rename the logs once a day and send a USR1 signal to nginx, causing it to re-open its log files.
My preference would in general be for the first solution (gathering logs with Docker and using Docker log driver options to manage log rotation), since it reduces the complexity of the final solution.
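If you would rather make this a host-wide default than configure it per service, the same options can also go into the Docker daemon's /etc/docker/daemon.json (a sketch; it requires a daemon restart and only affects newly created containers):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}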
Related
Standard deployment of jasperreports (docker pull bitnami/jasperreports - under Ubuntu 20.04.3 LTS)
version: '3.7'
services:
jasperServerDB:
container_name: jasperServerDB
image: docker.io/bitnami/mariadb:latest
ports:
- '3306:3306'
volumes:
- './jasperServerDB_data:/bitnami/mariadb'
environment:
- MARIADB_ROOT_USER=mariaDbUser
- MARIADB_ROOT_PASSWORD=mariaDbPassword
- MARIADB_DATABASE=jasperServerDB
jasperServer:
container_name: jasperServer
image: docker.io/bitnami/jasperreports:latest
ports:
- '8085:8080'
volumes:
- './jasperServer_data:/bitnami/jasperreports'
depends_on:
- jasperServerDB
environment:
- JASPERREPORTS_DATABASE_HOST=jasperServerDB
- JASPERREPORTS_DATABASE_PORT_NUMBER=3306
- JASPERREPORTS_DATABASE_USER=dbUser
- JASPERREPORTS_DATABASE_PASSWORD=dbPassword
- JASPERREPORTS_DATABASE_NAME=jasperServerDB
- JASPERREPORTS_USERNAME=adminUser
- JASPERREPORTS_PASSWORD=adminPassword
restart: on-failure
The reporting server is behind an nginx reverse proxy which points to port 8085 of the Docker machine.
Everything works as expected on the https://my.domain.com/jasperserver/ URL.
It is required to have the JasperReports server respond only on the https://my.domain.com/ URL.
What is the recommended/best approach to configure the container (the default Tomcat application) so that it survives container restarts and updates?
Some results from searching the net:
https://cwiki.apache.org/confluence/display/tomcat/HowTo#HowTo-HowdoImakemywebapplicationbetheTomcatdefaultapplication?
https://coderanch.com/t/85615/application-servers/set-application-default-application
https://benhutchison.wordpress.com/2008/07/30/how-to-configure-tomcat-root-context/
It is doubtful, though, that these are applicable to Bitnami containers.
Hopefully there is a simple image configuration which could be included in the docker-compose.yml file.
Reference to GitHub Bitnami JasperReports Issues List where the same question is posted.
After trying all the recommended ways to achieve the requirement, it seems that Addendum 1 from cwiki.apache.org is the best one.
Submitted a PR to Bitnami with a single-parameter fix for this use case: ROOT URL setting
Here is a workaround in case the above PR doesn't get accepted
Step 1
Create a .sh file (e.g. start.sh) in the docker-compose.yml folder with the following content:
#!/bin/bash
docker-compose up -d
echo "Building JasperReports Server..."
# Long waiting period to ensure the container is up and running (health checks didn't work out well)
sleep 180;
echo "...completed!"
docker exec -u 0 -it jasperServer sh -c "rm -rf /opt/bitnami/tomcat/webapps/ROOT && rm /opt/bitnami/tomcat/webapps/jasperserver && ln -s /opt/bitnami/jasperreports /opt/bitnami/tomcat/webapps/ROOT"
echo "Ready to rock!"
Note that the container name must match the one from your docker-compose.yml file.
Step 2
Start the container by typing sh ./start.sh instead of docker-compose up -d.
Step 3
Give it some time and try https://my.domain.com/.
I want to deploy some services to my server, and all of them will use nginx as the web server. Every project has its own .conf file, and I want to share all of them with the nginx container. I tried to use named volumes, but when a volume is used by more than one container the data gets replaced. I want to collect all these .conf files from the different containers and put them in a volume so they can be read by the nginx container. I also tried to use subdirectories in named volumes, but using namedVolumeName/path does not work.
Note: I'm using docker-compose in all projects.
version: "3.7"
services:
backend:
container_name: jzmimoveis-backend
image: paulomesquita/jzmimoveis-backend
command: uwsgi --socket :8000 --wsgi-file jzmimoveis/wsgi.py
volumes:
- nginxConfFiles:/app/nginx
- jzmimoveisFiles:/app/src
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 8000
frontend:
container_name: jzmimoveis-frontend
image: paulomesquita/jzmimoveis-frontend
command: serve -s build/
volumes:
- nginxConfFiles:/app/nginx
networks:
- jzmimoveis
restart: unless-stopped
expose:
- 5000
volumes:
nginxConfFiles:
external: true
jzmimoveisFiles:
external: true
networks:
jzmimoveis:
external: true
For example, in this case I linked both the frontend and backend nginx files to the named volume nginxConfFiles, but when I run docker-compose up -d with this file, just one of the .conf files appears in the volume; I think it gets overwritten by the other container.
You could probably mount the shared volume at /etc/nginx/conf.d on the nginx container, and then use a different name for each project's conf file.
Below is a proof-of-concept: three servers, each with a config file to be attached, and a proxy (your nginx) with the shared volume bound to /config:
version: '3'
services:
server1:
image: busybox:1.31.1
volumes:
- deleteme_after_demo:/config
- ./server1.conf:/app/server1.conf
command: sh -c "cp /app/server1.conf /config; tail -f /dev/null"
server2:
image: busybox:1.31.1
volumes:
- deleteme_after_demo:/config
- ./server2.conf:/app/server2.conf
command: sh -c "cp /app/server2.conf /config; tail -f /dev/null"
server3:
image: busybox:1.31.1
volumes:
- deleteme_after_demo:/config
- ./server3.conf:/app/server3.conf
command: sh -c "cp /app/server3.conf /config; tail -f /dev/null"
proxy1:
image: busybox:1.31.1
volumes:
- deleteme_after_demo:/config:ro
command: tail -f /dev/null
volumes:
deleteme_after_demo:
Let's create 3 config files to be included:
➜ echo "server 1" > server1.conf
➜ echo "server 2" > server2.conf
➜ echo "server 3" > server3.conf
then:
➜ docker-compose up -d
Creating network "deleteme_default" with the default driver
Creating deleteme_server2_1 ... done
Creating deleteme_server3_1 ... done
Creating deleteme_server1_1 ... done
Creating deleteme_proxy1_1 ... done
And finally, let's verify the config files are accessible from proxy container:
➜ docker-compose exec proxy1 sh -c "cat /config/server1.conf"
server 1
➜ docker-compose exec proxy1 sh -c "cat /config/server2.conf"
server 2
➜ docker-compose exec proxy1 sh -c "cat /config/server3.conf"
server 3
I hope it helps.
Cheers!
Note: you should think of mounting a volume exactly the same way as using the Unix mount command. If you already have content inside the mount point, after mounting you are not going to see it; you will see the content of the mounted device (unless the device was empty and first created here). Whatever you want to see there needs to already be on the device, or you need to move it there afterwards.
So I did it by mounting the files, because I had no data in the containers I used, and then copying them with the startup command. You could address it a different way, e.g. by copying the config file to the mounted volume with an entrypoint script in your image (see the sketch below).
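A minimal sketch of that entrypoint approach, assuming the image ships its conf at /app/server1.conf and the shared volume is mounted at /config (both paths are illustrative):
#!/bin/sh
# Copy this image's nginx snippet into the shared volume,
# then hand control back to the container's normal command.
cp /app/server1.conf /config/server1.conf
exec "$@"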
A named volume is initialized when it's empty/new and a container is started using that volume. The initialization is from the image filesystem, and after that, the named volume is persistent and will retain the state from the previous use.
In this case, what you have is a race condition. The volume is sharing the files, but it depends on which container compose starts up first to control which image is used to initialize the volume. The named volume is shared between multiple images, it's just the content that you want to be different.
For your use case, you may be better off putting some logic in the image build and entrypoint to save the files you want to mirror in the volume to a different location in the image on build, and then update the volume on container startup. By moving this out of the named volume initialization steps, you avoid the race condition, and allow the volume to be updated with future changes from the image. An example of this is in my base image with the save-volume you'd run in the Dockerfile, and load-volume you'd run in your entrypoint.
As a side note, it's also a good practice to mount that named volume as read-only in the containers that have no need to write to the config files.
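For instance, in the asker's setup the nginx container only reads the config files, so it could mount the same named volume read-only (a sketch reusing the volume name from the question):
nginx:
  image: nginx:stable-alpine
  volumes:
    - nginxConfFiles:/etc/nginx/conf.d:ro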
There are several issues similar to this one, such as:
Redis is configured to save RDB snapshots, but it is currently not able to persist on disk - Ubuntu Server
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled
but none of these solves my problem.
The problem is that I am running my redis in docker-compose, and just cannot understand how to fix this at docker-compose startup.
The redis docs say this is the fix:
echo 1 > /proc/sys/vm/overcommit_memory
And this works when Redis is installed outside of docker. But how do I run this command with docker-compose?
I tried the following:
1) adding the command:
services:
cache:
image: redis:5-alpine
command: ["echo", "1", ">", "/proc/sys/vm/overcommit_memory", "&&", "redis-server"]
ports:
- ${COMPOSE_CACHE_PORT:-6379}:6379
volumes:
- cache:/data
this doesn't work:
docker-compose up
Recreating constructor_cache_1 ... done
Attaching to constructor_cache_1
cache_1 | 1 > /proc/sys/vm/overcommit_memory && redis-server
constructor_cache_1 exited with code 0
2) Mounting the /proc/sys/vm/ directory.
This failed: it turns out I cannot mount to the /proc/ directory.
3) Overriding the entrypoint:
custom-entrypoint.sh:
#!/bin/sh
set -e
echo 1 > /proc/sys/vm/overcommit_memory
# first arg is `-f` or `--some-option`
# or first arg is `something.conf`
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$#"
fi
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
find . \! -user redis -exec chown redis '{}' +
exec su-exec redis "$0" "$#"
fi
exec "$#"
docker-compose.yml:
services:
cache:
image: redis:5-alpine
ports:
- ${COMPOSE_CACHE_PORT:-6379}:6379
volumes:
- cache:/data
- ./.cache/custom-entrypoint.sh:/usr/local/bin/custom-entrypoint.sh
entrypoint: /usr/local/bin/custom-entrypoint.sh
This doesn't work either.
How to fix this?
TL;DR Your redis is not secure
UPDATE:
Use expose instead of ports so the service is only available to linked services
Expose ports without publishing them to the host machine - they’ll
only be accessible to linked services. Only the internal port can be
specified.
expose:
- 6379
ORIGINAL ANSWER:
long answer:
This is possibly due to an unsecured redis-server instance. The default redis image in a docker container is unsecured.
I was able to connect to redis on my webserver using just redis-cli -h <my-server-ip>
To sort this out, I went through this DigitalOcean article and many others and was able to close the port.
You can pick a default redis.conf from here
Then update your docker-compose redis section to the following (update file paths accordingly):
redis:
restart: unless-stopped
image: redis:6.0-alpine
command: redis-server /usr/local/etc/redis/redis.conf
env_file:
- app/.env
volumes:
- redis:/data
- ./app/conf/redis.conf:/usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
The path to redis.conf in command and volumes should match.
Rebuild redis or all the services as required.
Try redis-cli -h <my-server-ip> to verify (it stopped working for me).
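For reference, the directive in that redis.conf that actually locks out anonymous clients is requirepass; a minimal excerpt (the password is a placeholder):
# redis.conf (excerpt) -- clients must now authenticate with AUTH
protected-mode yes
requirepass change-me-to-a-long-random-password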
I am stuck trying to configure Docker volumes to share files between my host and my container so that the container can use those files. Let me explain.
I have a Rails Docker app with Puma as the web server. I want Puma to be able to see and use the SSL .key and .crt files, so for this project I am also using docker-compose in "production mode", but I do not know how to make this work.
My setup is this:
The Ubuntu 18.04 production server host has the SSL files inside /home/ubuntu/my_app_keys; the containers also run on this host.
/home/ubuntu/docker-compose.yml
version: '3'
services:
postgres:
image: postgres:10.5
environment:
POSTGRES_DB: my_app_production
env_file:
- ~/production.env
redis:
image: redis:4.0.11
web:
image: my_app:latest
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' -e production
ports:
- '3000:3000'
volumes:
- /home/ubuntu/my_app_keys
depends_on:
- postgres
- redis
env_file:
- ~/production.env
restart: always
sidekiq:
image: my_app_sidekiq:latest
command: bundle exec sidekiq -C config/sidekiq.yml
depends_on:
- postgres
- redis
env_file:
- ~/production.env
restart: always
So, as you can see, command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' is looking for the SSL files in /home/ubuntu/my_app_keys. When I execute docker-compose up, Puma cannot find the SSL files and exits with:
/usr/local/bundle/gems/puma-3.9.1/lib/puma/minissl.rb:180:in `key=': No such key file '/home/ubuntu/my_app_keys/server.key' (ArgumentError)
I think it is because key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt points at paths in the container context, but I have the cert and key in my host context.
So I included a volume in the compose file in order to bind-mount the files:
volumes:
- /home/ubuntu/my_app_keys
but without luck, same error.
In the container context my app lives in the /var/www/my_app directory, so I tried to specify an absolute path (for some reason I imagined the SSL files could not be shared because they were not in the same directory where my app lived), so I added, as the compose-file docs say:
volumes:
- /home/ubuntu/my_app_keys:/var/www/my_app
and change in compose file:
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=server.key&cert=server.crt' -e production
When I execute the compose up, my web service exits with the error:
web | Could not locate Gemfile or .bundle/ directory
The only way the web service runs is with (but then no SSL files exist):
volumes:
- /home/ubuntu/my_app_keys
So, I do not know what to do now. Any help?
When your Docker Compose YAML file says:
volumes:
- /home/ubuntu/my_app_keys
It means, "make /home/ubuntu/my_app_keys in container space persist across restarts of the container; it will start off empty unless the Dockerfile did something special; it's not connected to any specific host content".
When you say:
volumes:
- /home/ubuntu/my_app_keys:/var/www/my_app
It means, "totally replace the contents of /var/www/my_app in container space with the contents of /home/ubuntu/my_app_keys on the host". (The path names in host and container space don't need to be the same.)
As a bonus question, when you say:
rails server -b 'ssl://127.0.0.1:3000?...'
It means, "only listen for inbound connections on port 3000 initiated from within this Docker container; don't accept any connections from outside the container at all, whether from the same physical host, other containers, or elsewhere."
I've got a docker-compose.yml like this:
db:
image: mongo:latest
ports:
- "27017:27017"
server:
image: artificial/docker-sails:stable-pm2
command: sails lift
volumes:
- server/:/server
ports:
- "1337:1337"
links:
- db
server/ is relative to the folder of the docker-compose.yml file. However, when I docker exec -it CONTAINERID /bin/bash into the container and check /server, it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in Docker Settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and re-enable it (and click "Apply"). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
- ./server:/server
instead of server/ -- without a leading ./ Docker treats the value as a named volume rather than a host path, and there are also cases where it doesn't like the trailing slash.
As per docker volumes documentation,
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you
supply an absolute path for the host-dir, Docker bind-mounts to the
path you specify. If you supply a name, Docker creates a named volume
by that name
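To illustrate the distinction (a compose v3-style sketch; data is a hypothetical volume name added only for contrast):
version: "3"
services:
  server:
    image: artificial/docker-sails:stable-pm2
    volumes:
      - ./server:/server   # leading ./ -> bind mount of the host directory
      - data:/data         # bare name  -> named volume managed by Docker
volumes:
  data: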
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container has been started successfully but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> that points to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people made the same mistake, but the host directory path has to start from /home.
So my mistake was that in my docker-compose file I was wrongly specifying the following:
services:
myservice:
build: .
ports:
- 8888:8888
volumes:
- /Desktop/subfolder/subfolder2:/app/subfolder
when the host path should have been the full path from /home, something like:
services:
myservice:
build: .
ports:
- 8888:8888
volumes:
- /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
...
volumeConsumingService:
container_name: production-001-volumeConsumingService
hostname: production-001-volumeConsumingService
image: group/production-001-volumeConsumingService
build:
context: .
dockerfile: volumeConsumingService.Dockerfile
depends_on:
- anotherServiceDefinedEarlier
restart: always
volumes:
- ../data/certbot/conf:/etc/letsencrypt # mounting
- ../data/certbot/www:/var/www/certbot # not mounting
- ../data/www/public:/var/www/public # not mounting
- ../data/www/root:/var/www/root # not mounting
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
networks:
- default
- external
...
networks:
external:
name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case where the volumes are not mounted after a host reboot, adding a cron task to restart the service once should do (for example, as sketched below).
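A minimal sketch of such a cron entry, assuming the container name from the compose file above and that the docker CLI is on the host's PATH (the file path and the 60-second delay are illustrative):
# /etc/cron.d/restart-volume-consumer
@reboot root sleep 60 && docker restart production-001-volumeConsumingService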
In my case, the volume was empty because I did not use the right path format: I had put quotes around the path.
If you have a relative or absolute path with spaces in it, you do not need to put double quotes around the path; you can just use the path with spaces and it will be understood, since docker-compose uses ":" as the delimiter and does not split on spaces.
Ways that do not work (double quotes are the problem!):
volumes:
- "MY_PATH.../my server":/server
- "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
- "./my server":/server
- ."/my server":/server
- "./my server:/server"
- ."/my server:/server"
Two ways you can do it (no double quotes!):
volumes:
- MY_PATH.../my server:/server
- ./my server:/server