Print status after docker-compose container starts - docker

In a docker-compose file, is it possible to wait for a container to start and then print a status?
e.g. sleep 10 && echo started mysql on http://${HOST}:${PORT}
A Dockerfile has a run command, but there isn't such a thing in a compose file. How can I do this?

With docker-compose, just as with a regular docker run [...], you can specify entrypoint and command.
In your case, however, what I would do is build your own Docker image based on your preferred MySQL image and COPY a simple entrypoint script into it that does what you want, e.g.
#!/bin/sh
sleep 10
[command to run MySQL]
echo "Started MySQL on xyz"
Then specify this script as ENTRYPOINT in your Dockerfile.
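A minimal Dockerfile sketch of that approach (the base image tag and script name are placeholders, not from the original answer):
FROM mysql:8.0
# copy the status-printing entrypoint script into the image and make it executable
COPY status-entrypoint.sh /usr/local/bin/status-entrypoint.sh
RUN chmod +x /usr/local/bin/status-entrypoint.sh
ENTRYPOINT ["status-entrypoint.sh"]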

The best way is to just add this to the compose file:
print-status:
  image: busybox
  env_file: .env
  command: "sh -c 'sleep 10 && echo \"http://localhost:${PORT}\"'"
  depends_on:
    - mysql
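If the fixed sleep 10 feels fragile, newer Compose versions can instead wait for the database to report healthy, assuming the mysql service defines a healthcheck. A sketch (not part of the original answer):
mysql:
  image: mysql
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
    interval: 5s
    retries: 10
print-status:
  image: busybox
  env_file: .env
  command: "sh -c 'echo \"http://localhost:${PORT}\"'"
  depends_on:
    mysql:
      condition: service_healthy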

Can't execute command to set environment variables in docker container

I have seen the following links to execute multiple commands in docker-compose file:
Docker-Compose + Command
Using Docker-Compose, how to execute multiple commands
docker-compose run multiple commands for a service
which tell us how to execute multiple commands in a docker-compose file (and thus in the Docker container).
In order to run the sburn/apache-atlas image properly, I have to set some environment variables, which are defined in the /opt/apache-atlas-2.1.0/conf/atlas-env.sh file.
I have tried the following docker-compose.yml file:
version: "3.3"
services:
atlas:
image: sburn/apache-atlas
container_name: atlas
ports:
- "21000:21000"
volumes:
- "./bash_script:/app"
command: bash -c "
source ./opt/apache-atlas-2.1.0/conf/atlas-env.sh
&& chmod 777 /app/import-hive.sh
&& /opt/apache-atlas-2.1.0/bin/atlas_start.py
"
Unfortunately, the first command (I mean source ./opt/apache-atlas-2.1.0/conf/atlas-env.sh) doesn't work. It doesn't produce any error, but environment variables such as JAVA_HOME aren't set.
How are you checking that the variables are not set?
Run docker exec -it atlas bash in the terminal.
Run set in the terminal; it shows all the environment variables.
Check whether the environment variables are set or not.
Your question involves a lot of stuff; if you can narrow it down, people can help better. Here are my suggestions to debug it:
bash -exc "
echo home1=$JAVA_HOME
source ./opt/apache-atlas-2.1.0/conf/atlas-env.sh
echo home2=$JAVA_HOME
chmod 777 /app/import-hive.sh
echo home3=$JAVA_HOME
/opt/apache-atlas-2.1.0/bin/atlas_start.py
"
If JAVA_HOME is never set, there's something wrong with the .sh file; either fix that file or set the variable manually with
export JAVA_HOME=/aaa/bbb/ccc
or define it in your compose YAML file.
Also, the way you're checking for env vars is wrong: running docker exec -it atlas bash won't run in the same bash as bash -c "source ./opt/apache-a...", so variables set by that source won't be visible there.
To set environment variables you can add this to the service:
environment:
  - JAVA_HOME=/usr/bin/java
  - OTHER_VARIABLE=example
Or you can set your variables in the Dockerfile with:
ENV JAVA_HOME="Your variable"
ENV OTHER_VARIABLE="example"
If you want to execute the ./opt/apache-atlas-2.1.0/conf/atlas-env.sh script at container start, because this script has all the environment variables you need, you can include it in the entrypoint or in the Dockerfile (e.g. via CMD/exec).
Example:
FROM source_image
RUN source ./opt/apache-atlas-2.1.0/conf/atlas-env.sh
ENTRYPOINT []
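Note that a RUN source ... only affects that single build step's shell, so a more reliable variant (a sketch, not the image's stock setup; paths are taken from the question) is a small wrapper entrypoint that sources the file in the same shell that then starts Atlas:
#!/bin/bash
# atlas-entrypoint.sh: load the Atlas environment, then start the server
source /opt/apache-atlas-2.1.0/conf/atlas-env.sh
exec /opt/apache-atlas-2.1.0/bin/atlas_start.py
with, in the Dockerfile:
COPY atlas-entrypoint.sh /atlas-entrypoint.sh
RUN chmod +x /atlas-entrypoint.sh
ENTRYPOINT ["/atlas-entrypoint.sh"]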
To execute commands from your docker-compose try this:
command: sh -c "source ./opt/apache-atlas-2.1.0/conf/atlas-env.sh"
Regards
Sources: docker-compose, run a script after container has started?

How to put the command used to run a docker container into the Dockerfile?

I have this script: docker run -it -p 4000:4000 bitgosdk/express:latest --disablessl -e test
How do I put this command into a Dockerfile with its arguments?
FROM bitgosdk/express:latest
EXPOSE 4000
???
I have gone through your Dockerfile contents.
The command running inside the container is:
/ # ps -ef | more
PID USER TIME COMMAND
1 root 0:00 /sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express --disablessl -e test
The command is so because the entrypoint set in the Dockerfile is ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/node", "/var/bitgo-express/bin/bitgo-express"] and the arguments --disablessl -e test are the ones provided while running the docker run command.
The --disablessl -e test arguments can be set inside your Dockerfile using CMD:
CMD ["--disablessl", "-e","test"]
New Dockerfile:
FROM bitgosdk/express:latest
EXPOSE 4000
CMD ["--disablessl", "-e","test"]
Refer to this to know the difference between ENTRYPOINT and CMD.
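With that Dockerfile, the original one-liner reduces to a plain build and run (the image tag my-bitgo-express is just an illustrative name):
docker build -t my-bitgo-express .
docker run -it -p 4000:4000 my-bitgo-express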
You don't.
This is what docker-compose is used for.
i.e. create a docker-compose.yml with contents like this:
version: "3.8"
services:
test:
image: bitgodsdk/express:latest
command: --disablessl -e test
ports:
- "4000:4000"
and then execute the following in a terminal to access the interactive terminal for the service named test.
docker-compose run test
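One caveat worth noting: docker-compose run does not publish the ports: mapping by default, so if port 4000 must be reachable from the host, add --service-ports (or use docker-compose up instead):
docker-compose run --service-ports test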
Even if #mchawre's answer seems to directly answer the OP's question "syntactically speaking" (as a Dockerfile was asked for), a docker-compose.yml is definitely the way to go to make a docker run command, as custom as it might be, reproducible in a declarative way (a YAML file).
Just to complement #ChrisBecke's answer, note that the writing of this YAML file can be automated. See e.g. the FOSS tool (MIT license) https://github.com/magicmark/composerize
FTR, the snippet below was automatically generated from the following docker run command, using the accompanying webapp https://composerize.com/:
docker run -it -p 4000:4000 bitgosdk/express:latest
version: '3.3'
services:
  express:
    ports:
      - '4000:4000'
    image: 'bitgosdk/express:latest'
I omitted the CMD arguments --disablessl -e test on purpose, as composerize does not seem to support these extra arguments. This may sound like a bug (and FTR a related issue is open), but meanwhile it might just be viewed as a feature, in line with #DavidMaze's comment…

How can I run cron in a MariaDB container?

I want to include a cron task in a MariaDB container, based on the latest mariadb image, but I'm stuck on this.
I have tried many things without success because I can't get both MariaDB and cron to launch.
Here is my current Dockerfile:
FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cron
With these settings, the database starts correctly, but cron is not running. To make it work, I have to get into the container and execute the cron command by hand, and then everything works perfectly.
So I'm looking for a way to launch both the db and cron from the Dockerfile used by my docker-compose.
If this is not possible, maybe there is another way to run scheduled tasks? The purpose is to execute a script against the db.
Elaborating on #k0pernikus's comment, I would recommend using a separate container that runs cron. The cron jobs in that container can then work with your mysql database.
Here's how I would approach it:
1. Create a Cron Docker Container
You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:
FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -f
Just put your crontab into a crontab file next to that Dockerfile and you should have a working cron container.
An example crontab file:
* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"
2. Add the cron container to your docker-compose.yml as a service
Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:
networks:
  my_network:
services:
  mysql:
    image: mariadb
    networks:
      - my_network
  cron:
    image: my_cron
    depends_on:
      - mysql
    build:
      context: ./path/to/my/cron-docker-folder
    networks:
      - my_network
I recommend the solution provided by fjc. Treat this answer as nice-to-know background on why your approach is not working.
Docker's RUN instructions are only executed during build, not on container startup.
It also has CMD (or ENTRYPOINT) for executing a specific command or script when the container starts.
Since you are using mariadb, its entrypoint and CMD are:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
(You can find the link to the actual Dockerfiles on Docker Hub.)
This tells docker to run:
docker-entrypoint.sh mysqld
on startup.
You'd have to override its docker-entrypoint.sh to allow for the startup of the cron job as well.
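A minimal sketch of such an override (the wrapper file name is illustrative; it assumes the stock docker-entrypoint.sh is on the PATH, as in the official mariadb image):
#!/bin/sh
# wrapper-entrypoint.sh: start the cron daemon in the background,
# then hand control to the image's original entrypoint
cron
exec docker-entrypoint.sh "$@"
and in the Dockerfile:
COPY wrapper-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/wrapper-entrypoint.sh
ENTRYPOINT ["wrapper-entrypoint.sh"]
CMD ["mysqld"]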
See the relevant part of the Dockerfile reference for the CMD instruction:
CMD
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
Note: If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
Note: The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.
If the user specifies arguments to docker run then they will override the default specified in CMD.
Note: Don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.

Running and configuring an MQTT broker with Docker Compose

In the Docker Compose version 3 documentation, from what I understood, to run some commands after a container has started I need to add the "command" key as follows:
version: "3"
services:
broker:
image: "toke/mosquitto"
restart: always
ports:
- "1883:1883"
- "9001:9001"
command: ["cd /etc/mosquitto", "echo \"\" > mosquitto.pwd", "mosquitto_passwd -b /etc/mosquitto/mosquitto.pwd user pass", "echo \"password_file mosquitto.pwd\" >> mosquitto.conf", "echo \"allow_anonymous false\" >> mosquitto.conf"]
The log returns /usr/bin/docker-entrypoint.sh: 5: exec: cd /etc/mosquitto: not found
A workaround could be to specify in the compose file which Dockerfile to build and to add the commands that should run there, so I created this Dockerfile:
FROM toke/mosquitto
WORKDIR .
EXPOSE 1883:1883 9001:9001
ENTRYPOINT cd /etc/mosquitto
ENTRYPOINT echo "" > mosquitto.pwd
ENTRYPOINT mosquitto_passwd -b mosquitto.pwd usertest passwordtest
ENTRYPOINT echo "password_file mosquitto.pwd" >> mosquitto.conf
ENTRYPOINT echo "allow_anonymous false" >> mosquitto.conf
The container keeps restarting and the log doesn't return anything. I've also tried changing ENTRYPOINT to CMD with no change in the output.
As an addendum, when I specify in the compose file that it should use a specific Dockerfile, it fails to parse and says:
ERROR: The Compose file '.\docker-compose.yml' is invalid because:
Unsupported config option for services.broker: 'dockerfile'
As in, it can't parse or doesn't understand the "dockerfile" key. Does anyone know how to configure a Dockerfile, or even docker-compose, to run the commands intended in this post to configure an MQTT broker?
The command entry in the compose file is not a list of commands to run; it's a single command and its arguments,
e.g. to run mosquitto -c /etc/mosquitto/mosquitto.conf
command: ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
As for the Dockerfile, there should only be one ENTRYPOINT or CMD. If you want to run multiple commands then you should create a shell script to run them, add it to the container, and then use ENTRYPOINT or CMD to run the script.
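A hedged sketch of that script for this setup (the user name, password, and /etc/mosquitto paths are taken from the question; the toke/mosquitto image is assumed to read /etc/mosquitto/mosquitto.conf):
#!/bin/sh
# setup-and-run.sh: create the password file, harden the config, then start the broker
cd /etc/mosquitto
touch mosquitto.pwd
mosquitto_passwd -b mosquitto.pwd usertest passwordtest
echo "password_file /etc/mosquitto/mosquitto.pwd" >> mosquitto.conf
echo "allow_anonymous false" >> mosquitto.conf
exec mosquitto -c /etc/mosquitto/mosquitto.conf
with a Dockerfile along the lines of:
FROM toke/mosquitto
COPY setup-and-run.sh /setup-and-run.sh
RUN chmod +x /setup-and-run.sh
ENTRYPOINT ["/setup-and-run.sh"]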

Running a simple script using docker-compose

I am new to Docker. I am trying to run a very simple script as a service along with another service using docker-compose. I have created an image using a Dockerfile with the following contents:
FROM bash
CMD bash test_script.sh
I have the redis and test-script images available. My docker-compose.yml looks like this:
version: '2'
services:
  redis:
    image: redis
  test-script:
    image: test-script
My test_script.sh looks like this:
#!/bin/bash
echo "***************** Sleeping *****************"
sleep 10
echo "***************** Woke Up ******************"
When I run "docker-compose up", Redis starts properly but I get "bash: test_script.sh: No such file or directory". Any help is appreciated.
Docker is telling you the truth. You need to COPY your test_script.sh into the container. Something like:
FROM bash
COPY test_script.sh /test_script.sh
CMD bash test_script.sh
This assumes that there is a file named test_script.sh in the same directory as your Dockerfile.
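If you prefer to let Compose build the image as well, a build: key pointing at the Dockerfile's directory keeps the image fresh on docker-compose up --build (a sketch; the directory name ./test-script is an assumption):
version: '2'
services:
  redis:
    image: redis
  test-script:
    build: ./test-script
    image: test-script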
