There are several issues similar to this one, such as:
Redis is configured to save RDB snapshots, but it is currently not able to persist on disk - Ubuntu Server
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled
but none of these solves my problem.
The problem is that I am running Redis via docker-compose, and I cannot figure out how to apply this fix at docker-compose startup.
The redis docs say this is the fix:
echo 1 > /proc/sys/vm/overcommit_memory
And this works when Redis is installed outside of docker. But how do I run this command with docker-compose?
I tried the following:
1) adding the command:
services:
cache:
image: redis:5-alpine
command: ["echo", "1", ">", "/proc/sys/vm/overcommit_memory", "&&", "redis-server"]
ports:
- ${COMPOSE_CACHE_PORT:-6379}:6379
volumes:
- cache:/data
This doesn't work: the exec-form command runs without a shell, so echo just prints the redirection and && as literal arguments and the container exits:
docker-compose up
Recreating constructor_cache_1 ... done
Attaching to constructor_cache_1
cache_1 | 1 > /proc/sys/vm/overcommit_memory && redis-server
constructor_cache_1 exited with code 0
2) Mounting the /proc/sys/vm/ directory.
This failed: it turns out I cannot mount anything under /proc/ (a privileged workaround is sketched just after this question).
3) Overriding the entrypoint:
custom-entrypoint.sh:
#!/bin/sh
set -e
echo 1 > /proc/sys/vm/overcommit_memory
# first arg is `-f` or `--some-option`
# or first arg is `something.conf`
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$#"
fi
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
find . \! -user redis -exec chown redis '{}' +
exec su-exec redis "$0" "$#"
fi
exec "$#"
docker-compose.yml:
services:
cache:
image: redis:5-alpine
ports:
- ${COMPOSE_CACHE_PORT:-6379}:6379
volumes:
- cache:/data
- ./.cache/custom-entrypoint.sh:/usr/local/bin/custom-entrypoint.sh
entrypoint: /usr/local/bin/custom-entrypoint.sh
This doesn't work either: inside an unprivileged container, /proc/sys is mounted read-only, so the echo fails.
How to fix this?
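For what it's worth, vm.overcommit_memory is a host-wide (non-namespaced) sysctl, so it cannot be set from inside an unprivileged container; it can, however, be set by a privileged one-shot service that runs before Redis. A minimal sketch (the sysctl-init service name and the busybox image choice are illustrative assumptions):

services:
  sysctl-init:
    image: busybox
    command: ["sysctl", "-w", "vm.overcommit_memory=1"]
    privileged: true   # required: this sysctl applies to the whole host kernel
  cache:
    image: redis:5-alpine
    depends_on:
      - sysctl-init    # run the one-shot sysctl container first
    ports:
      - ${COMPOSE_CACHE_PORT:-6379}:6379
    volumes:
      - cache:/data

Alternatively, run echo 1 > /proc/sys/vm/overcommit_memory (or sysctl vm.overcommit_memory=1) on the Docker host itself before docker-compose up.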
TL;DR: Your Redis is not secure.
UPDATE:
Use expose instead of ports so the service is only available to linked services
Expose ports without publishing them to the host machine - they’ll
only be accessible to linked services. Only the internal port can be
specified.
expose:
  - 6379
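Applied to the compose file from the question, that change might look like this (a sketch; only ports is swapped for expose):

services:
  cache:
    image: redis:5-alpine
    expose:
      - "6379"   # reachable by other services on the same compose network, not from the host
    volumes:
      - cache:/data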
ORIGINAL ANSWER:
long answer:
This is possibly due to an unsecured redis-server instance. The default redis image in a docker container is unsecured.
I was able to connect to redis on my webserver using just redis-cli -h <my-server-ip>
To sort this out, I went through this DigitalOcean article and many others and was able to close the port.
You can pick a default redis.conf from here
Then update your docker-compose redis section to the following (update file paths accordingly):
redis:
restart: unless-stopped
image: redis:6.0-alpine
command: redis-server /usr/local/etc/redis/redis.conf
env_file:
- app/.env
volumes:
- redis:/data
- ./app/conf/redis.conf:/usr/local/etc/redis/redis.conf
ports:
- "6379:6379"
- The path to redis.conf in command and volumes should match.
- Rebuild redis, or all the services, as required.
- Use redis-cli -h <my-server-ip> to verify (it stopped working for me).
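For reference, the relevant hardening directives inside redis.conf look roughly like this (a sketch; the values are placeholders to adapt):

# redis.conf -- minimal hardening (illustrative values)
protected-mode yes        # refuse remote clients unless a password/bind is configured
requirepass change-me     # placeholder; use a strong password
bind 127.0.0.1            # note: inside a container this also blocks other containers,
                          # so in compose setups requirepass + expose is often preferred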
Related
We use the docker image nginx:stable-alpine in a docker compose setup:
core-nginx:
image: nginx:stable-alpine
restart: always
environment:
- NGINX_HOST=${NGINX_HOST}
- NGINX_PORT=${NGINX_PORT}
- NGINX_APP_HOST=${NGINX_APP_HOST}
volumes:
- ./nginx/conf/dev.template:/tmp/default.template
- ./log/:/var/log/nginx/
depends_on:
- core-app
command: /bin/sh -c "envsubst '$$NGINX_HOST $$NGINX_PORT $$NGINX_APP_HOST'< /tmp/default.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
ports:
- 5001:5001
Logfiles are unlimited in size in this setup.
Can anybody provide some pointers on how to limit the size of access.log and error.log?
There are a couple of ways of tackling this problem.
Docker log driver
The Nginx container you're using, by default, configures the access.log to go to stdout and the error.log to go to stderr. If you were to remove the volume you're mounting on /var/log/nginx, you would get this default behavior, which means you would be able to manage logs via the Docker log driver. The default json-file log driver has a max-size option that would do exactly what you want.
With this solution, you would use the docker logs command to inspect the nginx logs.
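For example, with the default json-file driver, the service definition might look like this (a sketch; the size and file-count limits are illustrative):

core-nginx:
  image: nginx:stable-alpine
  restart: always
  logging:
    driver: json-file
    options:
      max-size: "10m"   # rotate once a log file reaches 10 MB
      max-file: "3"     # keep at most 3 rotated files per container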
Containerized log rotation
If you really want to log to local files instead of using the Docker log driver, you can add a second container to your docker-compose.yml that:
Runs cron
Periodically calls a script to rename the log files
Sends an appropriate signal to the nginx process
To make all this work:
The cron container needs access to the nginx logs. Because you're storing the logs on a volume, you can just mount that same volume in the cron container.
The cron container needs to run in the nginx pid namespace in order to send the restart signal. This is the --pid=container:... option to docker run, or the pid: option in docker-compose.yml.
For example, something like this:
version: "3"
services:
nginx:
image: nginx:stable-alpine
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
ports:
- 8080:80
logrotate:
image: alpine:3.13
restart: always
volumes:
- ./nginx-logs:/var/log/nginx
- nginx-run:/var/run
- ./cron.d:/etc/periodic/daily
pid: service:nginx
command: ["crond", "-f", "-L", "/dev/stdout"]
volumes:
nginx-run:
In cron.d in my local directory, I have rotate-nginx-logs (mode 0755) that looks like this:
#!/bin/sh
pidfile=/var/run/nginx.pid
logdir=/var/log/nginx
if [ -f "$pidfile" ]; then
echo "rotating nginx logs"
for logfile in access error; do
mv ${logdir}/${logfile}.log ${logdir}/${logfile}.log.old
done
kill -HUP $(cat "$pidfile")
fi
With this configuration in place, the logrotate container will rename the logs once a day and send a USR1 signal to nginx, causing it to re-open its log files.
My preference would in general be for the first solution (gathering logs with Docker and using Docker log driver options to manage log rotation), since it reduces the complexity of the final solution.
I need a Redis container that starts with some predefined data which my application will use.
I found a solution by loading the data (via CMD) from within the Dockerfile.
Dockerfile:
FROM redis:latest
COPY my-data.redis /my-dir/
COPY my-redis.sh /my-dir/
CMD ["sh", "/my-dir/my-redis.sh"]
my-redis.sh:
redis-server --daemonize yes && sleep 1
redis-cli < /my-dir/my-data.redis
redis-cli save
redis-cli shutdown
redis-server
my-data.redis:
SET key1 val1
SET key2 val2
docker-compose:
redis:
image: my-redis:latest
networks:
- back-tier
deploy:
replicas: 1
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
#command: redis-server --appendonly yes
ports:
- 6379:6379
volumes:
- ./data/redis-data:/data
restart: always
My predefined data gets populated into the Redis container, but when I modify the data, my changes are overwritten whenever the container restarts.
PS: I have mapped redis-data to a directory outside the container, but it still gets reset on restart.
Any help would be appreciated.
If you don't want to re-initialize Redis every time the container starts, then you need to include logic in your startup script to prevent that behavior. Something as simple as:
if ! [ -f /etc/redis-was-configured ]; then
    redis-server --daemonize yes && sleep 1
    redis-cli < /my-dir/my-data.redis
    redis-cli save
    redis-cli shutdown
    touch /etc/redis-was-configured
fi

redis-server
This would create a flag file after configuring Redis, and if that file exists when the container starts it causes it to skip over the initial data load.
Rather than relying on a flag file, you could instead run a Redis query to check if the expected data is available, but in general what I've presented here is sufficient (and is a fairly common solution to this sort of issue).
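One caveat: /etc lives in the container's writable layer, so the flag survives plain restarts but not a recreation of the container (e.g. docker-compose up after an image change). A variant that keeps the flag on the mounted /data volume instead (a sketch of the same script):

#!/bin/sh
# Store the flag on the /data volume so it also survives container recreation.
if ! [ -f /data/.redis-was-configured ]; then
    redis-server --daemonize yes && sleep 1
    redis-cli < /my-dir/my-data.redis
    redis-cli save
    redis-cli shutdown
    touch /data/.redis-was-configured
fi
exec redis-server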
I'm a beginner with Docker. I created a docker-compose file that provides our production environment, and I want to use it on our clients' servers and also locally, without internet access.
I have the Docker and docker-compose binaries, plus saved images that I want to load onto a server without internet. This is my init bash script on Linux:
#!/bin/sh -e
#docker
tar xzvf docker-18.09.0.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &
#docker-compose
cp docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
#load images
docker load --input images.tar
my structure :
code/*
nginx/
site.conf
logs/
phpfpm/
postgres/
data/
custom.ini
.env
docker-compose.yml
docker-compose file:
version: '3'
services:
web:
image: nginx:1.15.6
ports:
- "8080:80"
volumes:
- ./code:/code
- ./nginx/site.conf:/etc/nginx/conf.d/default.conf
- ./nginx/logs:/var/log/nginx
restart: always
depends_on:
- php
php:
build: ./phpfpm
restart: always
volumes:
- ./phpfpm/custom.ini:/opt/bitnami/php/etc/conf.d/custom.ini
- ./code:/code
db:
image: postgres:10.1
volumes:
- ./postgres/data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
ports:
- 5400:5432
There are some questions:
Why doesn't Docker show up in the list of Linux services, even though it does when I install it via apt-get? How can I register Docker as a service and enable it to load on startup?
How can I set up docker-compose to run on system startup?
Install Docker from a package with sudo dpkg -i /path/to/package.deb; you can download the package from https://download.docker.com/linux/ubuntu/dists/.
Then, as a post-install step, run sudo systemctl enable docker. This will start Docker at system boot; combined with restart: always, the services in your compose file will be restarted automatically.
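Put together, the offline flow might look like this (the package filename is illustrative; newer releases also need the docker-ce-cli and containerd.io packages installed the same way):

# On the machine without internet access:
sudo dpkg -i ./docker-ce_18.09.0~3-0~ubuntu-bionic_amd64.deb   # illustrative filename
sudo systemctl enable docker     # start the daemon on every boot
sudo systemctl start docker      # start it now
docker load --input images.tar   # load the pre-saved images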
I think dockerd starts the daemon, but you have to enable the service so it persists across boots:
$ sudo systemctl enable docker
Add restart: always to your db container.
How the docker restart policies work
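Applied to the compose file from the question, that is a one-line addition:

db:
  image: postgres:10.1
  restart: always   # restart on failure and on daemon/system startup
  volumes:
    - ./postgres/data:/var/lib/postgresql/data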
I'm having problems with starting my containers via docker-compose up.
I guess this is a Windows problem, because my colleague owns a MacBook and has no problems when he runs docker-compose up.
ERROR: for oracle-apex Cannot start service oracle-apex: oci runtime
error: container_linux.go:265: starting container process caused
"exec: \"/temp/entrypoint.sh\": stat /temp/entrypoint.sh: no such file
or directory"
The directory docker/apex/scripts does exists on my machine, isn't empty and contains the file entrypoint.sh.
I've found some similar problems when I googled this error, which told me to create an env file with COMPOSE_CONVERT_WINDOWS_PATHS=1, which I've done.
Versions:
Docker version 1.13.1, build 092cba3
docker-compose version 1.8.0, build unknown
Windows 10
docker-compose.yml
version: '2'
services:
proxy:
build: ./docker/proxy/
container_name: searchkit_proxy
ports:
- "8000:80"
volumes:
- ./docker/searchkit-v2/dist:/public/static
oracle-apex:
image: araczkowski/oracle-apex-ords
container_name: vanditmar-apex
volumes:
- ./docker/apex/scripts/:/temp/
ports:
- "49160:22"
- "8080:8080"
- "1521:1521"
entrypoint: ["/temp/entrypoint.sh"]
volumes:
esdata1:
driver: local
oracle-data:
driver: local
networks:
esnet:
Dockerfile in folder docker/apex/
FROM araczkowski/oracle-apex-ords
ADD ./scripts/ /temp/
RUN /temp/install.sh
entrypoint.sh
#!/bin/bash
exec >> >(tee -ai /docker_log.txt)
exec 2>&1
# # Update hostname
sed -i -E "s/HOST = [^)]+/HOST = $HOSTNAME/g" /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
sed -i -E "s/HOST = [^)]+/HOST = $HOSTNAME/g" /u01/app/oracle/product/11.2.0/xe/network/admin/tnsnames.ora
sed -i -E "s/PORT = [^)]+/PORT = 1521/g" /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
#
/etc/init.d/oracle-xe start
/etc/init.d/tomcat start
/etc/init.d/ssh start
/temp/install.sh
##
## Workaround for graceful shutdown. ....ing oracle... ‿( ́ ̵ _-`)‿
##
while [ "$END" == '' ]; do
sleep 1
trap "/etc/init.d/oracle-xe stop && END=1" INT TERM
done
;;
I'm new to Docker, but I would like to know how to fix this problem. We think the problem is that the paths to the files are different on Windows.
I use Bash on Ubuntu on Windows as command line. If there is any information missing please tell me, so I can add it :)
I've got a docker-compose.yml like this:
db:
image: mongo:latest
ports:
- "27017:27017"
server:
image: artificial/docker-sails:stable-pm2
command: sails lift
volumes:
- server/:/server
ports:
- "1337:1337"
links:
- db
server/ is relative to the folder of the docker-compose.yml file. However when I docker exec -it CONTAINERID /bin/bash and check /server it is empty.
What am I doing wrong?
Aside from the answers here, it might have to do with drive sharing in the Docker settings. On Windows, I discovered that drive sharing needs to be enabled.
In case it is already enabled and you recently changed your PC's password, you need to disable drive sharing (and click "Apply") and re-enable it again (and click "Apply"). In the process, you will be prompted for your PC's new password. After this, run your docker command (run or compose) again.
Try using:
volumes:
- ./server:/server
instead of server/ -- without the leading ./, Docker treats the host part as a named volume rather than a path relative to the compose file (see the docs quote below).
As per docker volumes documentation,
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume
The host-dir can either be an absolute path or a name value. If you
supply an absolute path for the host-dir, Docker bind-mounts to the
path you specify. If you supply a name, Docker creates a named volume
by that name
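In compose short syntax, the distinction is whether the host part looks like a path. A sketch of the two forms:

volumes:
  - ./server:/server   # starts with ./ (or /) -> bind-mounts the host directory
  - server:/server     # bare name -> named volume, which starts out empty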
I had a similar issue when I wanted to mount a directory from the command line:
docker run -tid -p 5080:80 -v /d/my_project:/var/www/html/my_project nimmis/apache-php5
The container started successfully, but the mounted directory was empty.
The reason was that the mounted directory must be under the user's home directory. So I created a symlink under c:\Users\<username> pointing to my project folder d:\my_project and mounted that one:
docker run -tid -p 5080:80 -v /c/Users/<username>/my_project/:/var/www/html/my_project nimmis/apache-php5
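For reference, the symlink itself can be created with mklink from an elevated command prompt (paths are illustrative):

:: Create a directory symlink under the user's home pointing at the project folder
mklink /D C:\Users\<username>\my_project D:\my_project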
If you are using Docker for Mac then you need to go to:
Docker Desktop -> Preferences -> Resources -> File Sharing
and add the folder you intend to mount.
I don't know if other people made the same mistake, but the host directory path has to start from /home.
So my mistake was that in my docker-compose I was WRONGLY specifying the following:
services:
myservice:
build: .
ports:
- 8888:8888
volumes:
- /Desktop/subfolder/subfolder2:/app/subfolder
When the host path should have been the full path from /home, something like:
services:
myservice:
build: .
ports:
- 8888:8888
volumes:
- /home/myuser/Desktop/subfolder/subfolder2:/app/subfolder
On Ubuntu 20.04.4 LTS, with Docker version 20.10.12, build e91ed57, I started observing a similar symptom with no apparent preceding action. After a docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command, with no changes to one of the services (production-001-volumeConsumingService is up-to-date), some of the volumes stopped mounting.
# deploy/docker-compose.yml
version: "3"
services:
...
volumeConsumingService:
container_name: production-001-volumeConsumingService
hostname: production-001-volumeConsumingService
image: group/production-001-volumeConsumingService
build:
context: .
dockerfile: volumeConsumingService.Dockerfile
depends_on:
- anotherServiceDefinedEarlier
restart: always
volumes:
- ../data/certbot/conf:/etc/letsencrypt # mounting
- ../data/certbot/www:/var/www/certbot # not mounting
- ../data/www/public:/var/www/public # not mounting
- ../data/www/root:/var/www/root # not mounting
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
networks:
- default
- external
...
networks:
external:
name: routing
A workaround that seems to be working is to enforce a restart on the failing service immediately after the docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build command:
docker-compose -p production-001 -f deploy/docker-compose.yml up -d --build && docker stop production-001-volumeConsumingService && docker start production-001-volumeConsumingService
In the case where the volumes are not mounted after a host reboot, adding a cron task that restarts the service once should do (a sketch follows).
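Such a one-shot restart could be an @reboot cron entry on the host (a sketch; the delay and container name follow the example above):

# /etc/cron.d/restart-volume-consumer (illustrative):
@reboot root sleep 60 && docker restart production-001-volumeConsumingService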
In my case, the volume was empty because I did not use the right path format: the path must not be wrapped in double quotes.
If you have a relative or absolute path with spaces in it, you do not need double quotes around the path; any path with spaces is understood as-is, since docker-compose uses ":" as the delimiter and does not split on spaces.
Ways that do not work (double quotes are the problem!):
volumes:
- "MY_PATH.../my server":/server
- "MY_PATH.../my server:/server" (I might have missed testing this, not sure!)
- "./my server":/server
- ."/my server":/server
- "./my server:/server"
- ."/my server:/server"
Two ways that do work (no double quotes!):
volumes:
- MY_PATH.../my server:/server
- ./my server:/server