How to prevent Minio MC from exiting in docker-compose

I use Minio and Minio/MC in my docker-compose as follows:
version: '3'
services:
  minio:
    image: minio/minio
    command: server --address 0.0.0.0:9000 --console-address 0.0.0.0:9001 /data
    volumes:
      - minio-data:/data
    expose:
      - 9000
    ports:
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
  ready_minio:
    image: minio/mc
    env_file:
      - ./envs/local.env
    entrypoint: >
      /bin/sh -c "
      until (/usr/bin/mc config host add myminio http://minio:9000 minio minio123) do echo '...waiting...' &&
      sleep 1;
      done;
      /usr/bin/mc alias set myminio http://minio:9000 minio minio123;
      /usr/bin/mc admin user add myminio/ $${MINIO_ACCESS_KEY} $${MINIO_SECRET_KEY};
      /usr/bin/mc admin policy set myminio/ readwrite user=$${MINIO_ACCESS_KEY};
      /usr/bin/mc mb myminio/$${MINIO_MEDIA_FILES_BUCKET};
      /usr/bin/mc policy set public myminio/$${MINIO_MEDIA_FILES_BUCKET};
      exit 0;
      "
    depends_on:
      - minio
volumes:
  minio-data:
After all the entrypoint commands have executed, the ready_minio container exits.
I want minio/mc to keep running after the entrypoint commands finish, so that I can run further commands via docker exec -it ready_minio /bin/sh or something like that.
How can I do that?
What I have already tried:
I removed exit 0; from the last line of the entrypoint, but that did not solve the problem.

If you want the container to continue running, then you need to provide a command that will not exit. A common option for "do nothing, forever" is the sleep inf command:
ready_minio:
  image: minio/mc
  env_file:
    - ./envs/local.env
  entrypoint: >
    /bin/sh -c "
    until (/usr/bin/mc config host add myminio http://minio:9000 minio minio123) do echo '...waiting...' &&
    sleep 1;
    done;
    /usr/bin/mc alias set myminio http://minio:9000 minio minio123;
    /usr/bin/mc admin user add myminio/ $${MINIO_ACCESS_KEY} $${MINIO_SECRET_KEY};
    /usr/bin/mc admin policy set myminio/ readwrite user=$${MINIO_ACCESS_KEY};
    /usr/bin/mc mb myminio/$${MINIO_MEDIA_FILES_BUCKET};
    /usr/bin/mc policy set public myminio/$${MINIO_MEDIA_FILES_BUCKET};
    exec sleep inf;
    "

Related

When scaling, run some docker container startup commands only once

I have the following docker-compose.yml.
version: "3.1"
services:
  db:
    container_name: ${MYSQL_CONTAINER}
    image: mysql:5.7.30
    volumes:
      - ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
      - ./slow_log.cnf:/etc/mysql/my.cnf
      - ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
    ports:
      - ${MYSQL_PORT}:3306
    entrypoint: ""
    command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
    restart: on-failure
  backend:
    container_name: ${BACKEND_CONTAINER}
    image: ${BACKEND_IMAGE}
    depends_on:
      - db
    ports:
      - ${BACKEND_PORT}
    command: >
      bash -c "command A
      && command B
      && ... "
    restart: unless-stopped
I am scaling the backend service, so my startup command is sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=10.
The problem I am facing is that command A and command B in the backend service run on every one of the 10 containers' startup (that is, they are each run 10 times).
But I want command A to run only once across all the backend containers, while command B should run in every container.
Any suggestions on accomplishing this?
I'm not entirely sure there is an out-of-the-box solution for your requirement.
However, I can suggest a workaround: duplicate your backend service in docker-compose, and run one backend service with both command A and command B, while the other backend runs only command B.
Then, when you want to scale, you scale the backend that has only command B.
version: "3.1"
services:
  db:
    container_name: ${MYSQL_CONTAINER}
    image: mysql:5.7.30
    volumes:
      - ${VOLUMES_DIR}/mysql_data:/var/lib/mysql
      - ./slow_log.cnf:/etc/mysql/my.cnf
      - ${VOLUMES_DIR}/mysql_logs:/var/log/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
    ports:
      - ${MYSQL_PORT}:3306
    entrypoint: ""
    command: bash -c "chown -R mysql:mysql /var/log/mysql && exec /entrypoint.sh mysqld --default-authentication-plugin=mysql_native_password"
    restart: on-failure
  backend_default:
    container_name: ${BACKEND_CONTAINER}
    image: ${BACKEND_IMAGE}
    depends_on:
      - db
    ports:
      - ${BACKEND_PORT}
    command: >
      bash -c "command A
      && command B
      && ... "
    restart: unless-stopped
  backend:
    # no container_name here: a fixed container name prevents scaling the service
    image: ${BACKEND_IMAGE}
    depends_on:
      - db
    ports:
      - ${BACKEND_PORT}
    command: >
      bash -c "command B
      && ... "
    restart: unless-stopped
Now you can use the scale option like below:
sudo docker-compose -p ${COMPOSE_PROJECT_NAME} up -d --scale backend=9
If there happens to be a scenario where you need only one backend running, you can use profiles in docker-compose so that backend runs only when a specific profile is passed on the docker-compose command line. That means only backend_default runs when that profile is not given, and hence the scale is 1; a sketch of this follows below.
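A minimal sketch of that profiles idea (the profile name scaled is invented for illustration; profiles need docker-compose 1.28 or newer):
  backend:
    image: ${BACKEND_IMAGE}
    profiles:
      - scaled
    # ...rest of the service unchanged...
With this, sudo docker-compose --profile scaled up -d --scale backend=9 starts the scalable backends, while a plain up -d starts only backend_default.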
Hope this helps you. Cheers 🍻 !!!
If BACKEND_IMAGE is built by you, you can do RUN command A in your Dockerfile. The RUN line is executed only once, at build time (so make sure that meshes with your needs), while the ENTRYPOINT and CMD lines run only when the container starts. The command: in the docker-compose file overrides the CMD line.
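A sketch of that build-time approach, with invented script names standing in for command A and command B (none of these names come from the question):
# Dockerfile (sketch)
FROM your-base-image      # hypothetical base image
WORKDIR /app
COPY . .
RUN ./command-a.sh        # command A: runs once, at build time, baked into the image
CMD ["./command-b.sh"]    # command B: runs on every container start
This only works when command A's effects can live inside the image (generated files, compiled assets); anything that must touch a shared runtime resource, such as a database migration, still needs a run-once mechanism at container start.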

docker: MISCONF Redis is configured to save RDB snapshots

There are several issues similar to this one, such as:
Redis is configured to save RDB snapshots, but it is currently not able to persist on disk - Ubuntu Server
MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled
but none of these solves my problem.
The problem is that I am running my redis in docker-compose, and just cannot understand how to fix this at docker-compose startup.
The redis docs say this is the fix:
echo 1 > /proc/sys/vm/overcommit_memory
And this works when Redis is installed outside of docker. But how do I run this command with docker-compose?
I tried the following:
1) Adding the command:
services:
  cache:
    image: redis:5-alpine
    command: ["echo", "1", ">", "/proc/sys/vm/overcommit_memory", "&&", "redis-server"]
    ports:
      - ${COMPOSE_CACHE_PORT:-6379}:6379
    volumes:
      - cache:/data
this doesn't work:
docker-compose up
Recreating constructor_cache_1 ... done
Attaching to constructor_cache_1
cache_1 | 1 > /proc/sys/vm/overcommit_memory && redis-server
constructor_cache_1 exited with code 0
2) Mounting the /proc/sys/vm/ directory.
This failed: it turns out I cannot mount into /proc/.
3) Overriding the entrypoint:
custom-entrypoint.sh:
#!/bin/sh
set -e
echo 1 > /proc/sys/vm/overcommit_memory
# first arg is `-f` or `--some-option`
# or first arg is `something.conf`
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
  set -- redis-server "$@"
fi
# allow the container to be started with `--user`
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
  find . \! -user redis -exec chown redis '{}' +
  exec su-exec redis "$0" "$@"
fi
exec "$@"
docker-compose.yml:
services:
  cache:
    image: redis:5-alpine
    ports:
      - ${COMPOSE_CACHE_PORT:-6379}:6379
    volumes:
      - cache:/data
      - ./.cache/custom-entrypoint.sh:/usr/local/bin/custom-entrypoint.sh
    entrypoint: /usr/local/bin/custom-entrypoint.sh
This doesn't work either.
How to fix this?
TL;DR Your redis is not secure
UPDATE:
Use expose instead of ports so the service is only available to linked services
Expose ports without publishing them to the host machine; they'll only be accessible to linked services. Only the internal port can be specified.
expose:
  - 6379
ORIGINAL ANSWER:
long answer:
This is possibly due to an unsecured redis-server instance. The default redis image in a docker container is unsecured.
I was able to connect to redis on my webserver using just redis-cli -h <my-server-ip>
To sort this out, I went through this DigitalOcean article and many others and was able to close the port.
You can pick a default redis.conf from here
Then update your docker-compose redis section as follows (update file paths accordingly):
redis:
  restart: unless-stopped
  image: redis:6.0-alpine
  command: redis-server /usr/local/etc/redis/redis.conf
  env_file:
    - app/.env
  volumes:
    - redis:/data
    - ./app/conf/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
- The path to redis.conf in command and volumes should match.
- Rebuild redis, or all the services, as required.
- Try redis-cli -h <my-server-ip> to verify (it stopped working for me).
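For reference, a minimal sketch of the redis.conf directives that matter for locking the server down (my assumptions, not taken from the linked article; adjust to your setup):
# require a password from every client
requirepass some-strong-password
# refuse remote connections unless a password or explicit bind is configured
protected-mode yes
# listen only on the interfaces redis should serve
bind 127.0.0.1
Note that binding only 127.0.0.1 inside the container would also cut off other compose services, so in a compose setup requirepass plus expose (instead of ports) is usually the safer combination.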

Docker for Mac | Docker Compose | Cannot access containers using localhost

I've been trying to figure out why I cannot access containers using "localhost:3000" from the host. I've tried installing Docker via Homebrew, as well as the Docker for Mac installer. I believe I have the docker-compose file configured correctly.
Here is the output from docker-compose ps
Name                         Command                          State   Ports
------------------------------------------------------------------------------------------------------
ecm-datacontroller_db_1      docker-entrypoint.sh postgres    Up      0.0.0.0:5432->5432/tcp
ecm-datacontroller_kafka_1   supervisord -n                   Up      0.0.0.0:2181->2181/tcp, 0.0.0.0:9092->9092/tcp
ecm-datacontroller_redis_1   docker-entrypoint.sh redis ...   Up      0.0.0.0:6379->6379/tcp
ecm-datacontroller_web_1     npm start                        Up      0.0.0.0:3000->3000/tcp
Here is my docker-compose.yml
version: '2'
services:
  web:
    ports:
      - "3000:3000"
    build: .
    command: npm start
    env_file: .env
    depends_on:
      - db
      - redis
      - kafka
    volumes:
      - .:/app/user
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  kafka:
    image: heroku/kafka
    ports:
      - "2181:2181"
      - "9092:9092"
I cannot access any of the ports exposed by docker-compose. Running curl localhost:3000 gives the following result:
curl: (52) Empty reply from server
I should be getting {"hello":"world"}.
Dockerfile:
FROM heroku/heroku:16-build
# Which version of node?
ENV NODE_ENGINE 10.15.0
# Locate our binaries
ENV PATH /app/heroku/node/bin/:/app/user/node_modules/.bin:$PATH
# Create some needed directories
RUN mkdir -p /app/heroku/node /app/.profile.d
WORKDIR /app/user
# Install node
RUN curl -s https://s3pository.heroku.com/node/v$NODE_ENGINE/node-v$NODE_ENGINE-linux-x64.tar.gz | tar --strip-components=1 -xz -C /app/heroku/node
# Export the node path in .profile.d
RUN echo "export PATH=\"/app/heroku/node/bin:/app/user/node_modules/.bin:\$PATH\"" > /app/.profile.d/nodejs.sh
ADD package.json /app/user/
RUN /app/heroku/node/bin/npm install
ADD . /app/user/
EXPOSE 3000
Anyone have any ideas?
Ultimately, I ended up having a service that was listening on 127.0.0.1 instead of 0.0.0.0. Inside a container, 127.0.0.1 is the container's own loopback interface, so a port published by Docker has nothing reachable to forward to; the server must listen on 0.0.0.0. Updating this resolved the connectivity issue I was having.
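A quick way to confirm this kind of binding problem (a sketch; it relies on curl existing inside the image, which holds here because the Dockerfile above already uses curl):
docker exec -it ecm-datacontroller_web_1 curl -s localhost:3000
If this prints {"hello":"world"} while the same curl from the host returns an empty reply, the app is listening on 127.0.0.1 inside the container and needs to bind 0.0.0.0 instead.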

docker-compose: keep container running after the entrypoint finishes

My docker-compose.yml
version: '3.1'
services:
  redis:
    container_name: redis
    image: redis:3.0
  app_prod:
    container_name: app_prod
    build:
      dockerfile: .docker/app/prod.Dockerfile
      context: ./../
    ports:
      - "8080:80"
    links:
      - mysql:mysql
      - redis:redis
    depends_on:
      - mysql
      - redis
    environment:
      PRODUCTION_MODE: 'true'
    entrypoint: .docker/app/sh/entry-point.sh
  mysql:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: 'my-db'
    build:
      context: ./mysql # path to folder containing Dockerfile
My .docker/app/sh/entry-point.sh
#!/usr/bin/env bash
set -e # exit script if any command fails (non-zero value)
echo Waiting for redis service start...;
while ! nc -z redis 6379;
do
  sleep 1;
done;
echo Waiting for mysql service start...;
while ! nc -z mysql 3306;
do
  sleep 1;
done;
echo Connected!;
php www/index.php orm:schema-tool:update --force
exec "$@"
I am building with the command:
docker-compose -f .docker/docker-compose-prod.yml up -d --build
All containers build successfully, and at the end the entrypoint script of the app_prod container (.docker/app/sh/entry-point.sh) runs. The entrypoint script completes successfully too, but once it finishes, the app_prod container stops.
Is there some way to keep the container running?
Thanks
Definitionally, no: once the entrypoint exits, the container exits.
Your entrypoint is a shell script ending in exec "$@" (good!), which means that, after it successfully waits for its databases to be up, it will run whatever is passed in the docker-compose.yml as command:. (Note that if you declare entrypoint: in docker-compose.yml, it ignores a CMD in the Dockerfile.) So you just need a command: that starts your service and you should be set:
entrypoint: .docker/app/sh/entry-point.sh
command: php-fpm
Because of the exec, php-fpm replaces the shell as the container's main process, so the container stays up for as long as php-fpm runs and receives docker stop signals directly.

Pass parameters and execute scripts in docker-compose

I would like to make a service like this:
datastax:
  image: luketillman/datastax-enterprise:5.1.0
  ports:
    - "9042:9042"
  volumes:
    - /datasets:/tmp/scripts
  command: [ "-s",
             "bash -c \"sleep 40 &&
             cqlsh -f /tmp/scripts/init.cql\"" ]
The idea is to start Cassandra in search mode (-s) and then, once it's up, execute init.cql via cqlsh.
Is it possible to do this with compose? How should I proceed?
You can run a command with multiple sub-commands like this:
datastax:
  image: luketillman/datastax-enterprise:5.1.0
  ports:
    - "9042:9042"
  volumes:
    - /datasets:/tmp/scripts
  command: bash -c "dse cassandra -s; sleep 40; cqlsh -f /tmp/scripts/init.cql"
Note that you must use the full dse cassandra -s command; you can't reuse the default command from the image, AFAIK.
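The fixed sleep 40 is fragile, since Cassandra can take more or less time to come up. A more robust variant, in the same spirit as the mc retry loop in the first question, polls with cqlsh until the node answers (a sketch; it assumes cqlsh can reach the local node without extra connection flags, which the original command already relies on):
command: bash -c "dse cassandra -s; until cqlsh -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do echo '...waiting for cassandra...'; sleep 2; done; cqlsh -f /tmp/scripts/init.cql"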
