I am extending the APIMan / Wildfly Docker image with my own image which will do two things:
1) Drop my .war file application into the Wildfly standalone/deployments directory
2) Execute a series of cURL commands that would query the Wildfly server in order to configure APIMan.
Initially, I tried creating two Docker images (the first to drop in the .war file and the second to execute the cURL commands). However, I incorrectly assumed that the CMD instruction in the innermost image would be executed first and that the outer images' CMDs would then be executed in turn.
For example:
ImageA:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
And
ImageB:
FROM ImageA
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/configure.sh"]
I had initially assumed that at runtime Wildfly / APIMan would be started first (per ImageA's CMD instruction) and then my custom script would be run (per ImageB's CMD instruction). I'm assuming that's incorrect because, throughout the entire hierarchy, only one CMD instruction is executed (the last one in the outermost Dockerfile in the chain)?
So, I then attempted to merge everything into one Dockerfile which would (in the build process) start up Wildfly / APIMan, run the cURL commands, and shut down the Wildfly server; the CMD instruction would then start it back up at runtime with Wildfly / APIMan already configured. This, however, does not work, because when I start Wildfly as part of the build it takes over the console and waits for log messages, so the build never completes. If I append an '&' to the end of the RUN command, it does not run (that RUN in the Dockerfile becomes a no-op).
Here is my Dockerfile for this attempt:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
RUN /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone-apiman.xml
RUN /opt/jboss/wildfly/configure.sh
RUN /opt/jboss/wildfly/bin/jboss-cli.sh --connect controller=127.0.0.1 command=:shutdown
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
Are there any solutions to this? I'm trying to have my "configure.sh" script run AFTER Wildfly / APIMan are started up. It wouldn't matter to me whether this is done during the build process or at runtime; however, I don't see any way to do it during the build process because Wildfly doesn't have a daemon mode.
only 1 CMD instruction is executed (the last one in the outermost Dockerfile within the chain)?
Yes, this is correct. Keep in mind that CMD is not run at build time but at instantiation time; in essence, the CMD in your second Dockerfile overrides the one in the first when you instantiate a container from ImageB.
If you are using some sort of REST API, CLI, or cURL to connect to your Wildfly server, I suggest you do that configuration after the container's instantiation, not after the container's build. This way:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]`
is always your last command.
If you need some extra files or changes to configuration files you can put them in the Dockerfile so that they get copied at build time before CMD gets called at instantiation.
So in summary:
1) Build your Docker container with this Dockerfile (docker build):
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
2) Run (instantiate your container from your newly created image)
docker run <image-id>
3) Run the following from a container, or from your host if it has Wildfly configured the same way. This assumes you are using some REST API to configure things (e.g. using cURL):
/opt/jboss/wildfly/configure.sh
You can instantiate a second container to run this command with something like this:
docker run -ti <image-id> /bin/bash
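Note that if configure.sh targets 127.0.0.1 (i.e. it expects to run next to Wildfly), a second container won't reach the server out of the box. One option (a sketch; the container name wildfly is made up for illustration) is to share the first container's network namespace:
docker run -d --name wildfly <image-id>
docker run --rm --net container:wildfly <image-id> /opt/jboss/wildfly/configure.sh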
The original premise behind my problem (though it was not explicitly stated in the original post) was to configure APIMan within the image and without any intervention outside of the image.
It's a bit of a hack, but I was able to solve this by creating three scripts: one for starting up Wildfly, one for running the configuration commands, and a third to execute them both. Hopefully, this saves some other poor soul from spending a day figuring all of this out.
Because a Dockerfile only allows one execution call at runtime, that call needed to be to a custom script.
Below are the files with comments.
Dockerfile
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
COPY /execute.sh /opt/jboss/wildfly/
COPY /runWF.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/execute.sh"]
Note, all three scripts are built into the image. The execute.sh script is executed at runtime (instantiation), not at build time.
execute.sh
#!/bin/sh
/opt/jboss/wildfly/configure.sh &
/opt/jboss/wildfly/runWF.sh
Note, the configure.sh script is sent to the background so that we can move on to the runWF.sh script while configure.sh is still running.
configure.sh
#!/bin/sh
done=""
while [ "$done" != "200" ]
do
done=$(curl --write-out %{http_code} --silent --output /dev/null -u username:password -X GET -H "Accept: application/json" http://127.0.0.1:8080/apiman/system/status)
sleep 3
done
# configuration curl commands
curl ...
curl ...
The above configure.sh script runs in a loop, querying the Wildfly / APIMan server via cURL every 3 seconds to check its status. Once it gets back an HTTP status code of 200 (representing an "up and running" state), it exits the loop and moves on to the configuration. Note, this should probably be made a bit 'safer' by providing another way to exit the loop (e.g. after a certain number of queries; see the sketch below). I imagine this would give a production developer heart palpitations and I wouldn't suggest deploying it as-is in production, but it works for the time being.
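For example, a bounded version of the wait loop might look like this (an illustrative sketch only; the limit of 100 attempts, roughly five minutes, is arbitrary):
#!/bin/sh
attempts=0
status=""
# give up after 100 tries instead of looping forever
while [ "$status" != "200" ] && [ "$attempts" -lt 100 ]
do
status=$(curl --write-out %{http_code} --silent --output /dev/null -u username:password -X GET -H "Accept: application/json" http://127.0.0.1:8080/apiman/system/status)
attempts=$((attempts + 1))
sleep 3
done
if [ "$status" != "200" ]; then
echo "APIMan never came up; aborting configuration" >&2
exit 1
fi
# configuration curl commands follow here, as before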
runWF.sh
#!/bin/sh
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone-apiman.xml
This script simply starts up the server. The parameters bind the various interfaces to 0.0.0.0 and direct Wildfly to use the APIMan standalone XML file for its configuration.
It takes Wildfly + APIMan (with my custom war file) about 10-15 seconds to fully load, depending on which computer I run it on, but once it does, the configure script is able to query it successfully and then move on with the configuration cURL commands. Meanwhile, Wildfly still controls the console because it was started last, so you can monitor the activity and terminate the process with Ctrl-C.
Build one image:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
Then start it up. After startup is complete, use the docker exec command to launch your configure script within the running container.
docker run -d --name wildfly <image name>
docker exec wildfly /opt/jboss/wildfly/configure.sh
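If you would rather script the wait than eyeball the startup, a rough sketch (assuming curl is available inside the image, and using the same username:password placeholder as in configure.sh above) is to poll APIMan's status endpoint via docker exec before launching the configure script:
until docker exec wildfly curl -s -f -u username:password http://127.0.0.1:8080/apiman/system/status > /dev/null
do
sleep 3
done
docker exec wildfly /opt/jboss/wildfly/configure.sh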
Related
I want to include a cron task in a MariaDB container, based on the latest mariadb image, but I'm stuck.
I tried many things without success because I can't get both MariaDB and cron to launch.
Here is my current Dockerfile:
FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cron
With these settings, the database starts correctly, but cron is never started. To make it work, I have to get into the container and run the cron command myself, after which everything works perfectly.
So I'm looking for a way to launch both the db and cron from my Dockerfile, which is used by my docker-compose.
If this is not possible, maybe there is another way to schedule tasks? The purpose is to execute a script against the db.
Elaborating on k0pernikus's comment, I would recommend using a separate container that runs cron. The cron jobs in that container can then work with your mysql database.
Here's how I would approach it:
1. Create a Cron Docker Container
You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:
FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -f
Just put your crontab into a crontab file next to that Dockerfile and you should have a working cron container.
An example crontab file:
* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"
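Note that the stock alpine image does not ship a MySQL client, so for a crontab entry like the one above you would also need to install one in the cron image; a sketch, assuming Alpine's mariadb-client package:
RUN apk add --no-cache mariadb-client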
2. Add the cron container to your docker-compose.yml as a service
Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:
networks:
  my_network:

services:
  mysql:
    image: mariadb
    networks:
      - my_network

  cron:
    image: my_cron
    depends_on:
      - mysql
    build:
      context: ./path/to/my/cron-docker-folder
    networks:
      - my_network
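You would then bring both services up together with:
docker-compose up -d --build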
I recommend the solution provided by fjc. Treat this as nice-to-know to understand why your approach is not working.
Docker has RUN instructions that are only executed during the build, not on container startup.
It also has a CMD (or ENTRYPOINT) for executing specific scripts.
Since you are using mariadb, its ENTRYPOINT and CMD are:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
(You can find the link to the actual Dockerfiles on Docker Hub.)
This tells docker to run:
docker-entrypoint.sh mysqld
on startup.
You'd have to override its docker-entrypoint.sh to allow for the startup of the cron job as well.
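For completeness, a minimal sketch of such an override, if you did want cron inside the mariadb container (the wrapper name cron-entrypoint.sh is made up for illustration, and cron is assumed to be installed as in your Dockerfile):
cron-entrypoint.sh:
#!/bin/bash
# start the cron daemon in the background, then hand off to the stock mariadb entrypoint
cron
exec docker-entrypoint.sh "$@"
And in the Dockerfile:
COPY cron-entrypoint.sh /usr/local/bin/cron-entrypoint.sh
RUN chmod +x /usr/local/bin/cron-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/cron-entrypoint.sh"]
CMD ["mysqld"]
The exec keeps docker-entrypoint.sh (and ultimately mysqld) as the container's main process, so the container still stops cleanly.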
See the relevant part of the Dockerfile reference for the CMD instruction:
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
Note: If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
Note: The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.
If the user specifies arguments to docker run then they will override the default specified in CMD.
Note: Don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
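To make that last override rule concrete (my own example, not part of the quoted reference):
FROM ubuntu
CMD ["echo", "default command"]
Running docker run <image> prints "default command", while docker run <image> echo hi overrides the CMD and prints "hi".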
I'm trying to setup a Dockerfile for keycloak and I want to run some commands once my container has started
The reason for this is that, once the server is started, I want to add some custom configuration each time the container is run. I've tried using the RUN instruction, however since my container hasn't started yet when RUN executes, it causes the whole Dockerfile build to bomb out.
I thought that, to run a command after the container has started, I could use CMD; however, even when I try running CMD ["echo", "hi"] or CMD ["sh", "echo", "hi"], I get the error "invalid option echo".
Is there a way to get commands to run once a container is running and if so how?
The way to define what your container does when you start it is to specify either CMD or ENTRYPOINT. These commands are executed when you use docker run. You can use RUN to perform various tasks during the build phase. Depending on what you want to do it may or may not be appropriate.
Try CMD sh -c 'echo hi' or CMD ["sh", "-c", "echo hi"]
The exec (list style) format is preferred but the shell format is also acceptable.
Also, keep in mind that the Dockerfile is used only for the build process. Containers are generally designed to be stateless. You shouldn't have to rebuild every time you change something in your application config.
I have built my docker image using openjdk.
# config Dockerfile
FROM openjdk:8
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp
# build image
docker build -t shantanuo/dbt .
It is working as expected using this command...
docker run -p 8081:8080 -it shantanuo/dbt
Once I log-in, I have to run this command...
sh bin/startup.sh
My Question: Is it possible to add the startup command to the Dockerfile? I tried adding this line to my Dockerfile.
CMD ["sh", "bin/startup.sh"]
But after building the image, I cannot use the -d parameter to start the container.
You can use the ENTRYPOINT to run the startup script. In the ENTRYPOINT you can specify your custom script and then run catalina.sh.
Example:
ENTRYPOINT "bin/startup.sh && catalina.sh run"
This will run your startup script and then start your tomcat server, and it won't exit the container.
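If you prefer the exec form, a roughly equivalent sketch (still assuming, as above, that catalina.sh is on the PATH) would be:
ENTRYPOINT ["/bin/sh", "-c", "bin/startup.sh && exec catalina.sh run"]
The exec lets Tomcat replace the shell as the container's main process, so it receives stop signals directly.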
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
A Docker container must have one dedicated task, and it is important that this task / startup script does not terminate. When it does, the task is done and, as far as Docker is concerned, the container is finished.
It makes no sense to start a container with only the JDK in it; you have to put your application into it.
I think it would help if you posted exactly what you want to do.
The Docker reference is always a good place to look at: https://docs.docker.com/engine/reference/builder/#entrypoint
I have the following Dockerfile:
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
CMD ["node", "server.js"]
My /build/conf/ssh-setup.sh looks like the following:
#!/bin/sh
set -e
echo "${SSH_PUBKEY}" >> /var/www/.ssh/authorized_keys
chown www-data:www-data -R /var/www/.ssh
chmod go-rwx -R /var/www/.ssh
It just adds SSH_PUBKEY env to /var/www/.ssh/authorized_keys to enable ssh access.
I run my container just like the following:
docker run -d -p 192.168.99.100:80:80 -p 192.168.99.100:2222:22 \
-e SSH_PUBKEY="$(cat ~/.ssh/id_rsa.pub)" \
--name dev hub.core.test/dev
My container starts fine but unfortunately the /etc/my_init.d/ssh-setup.sh script doesn't get executed and I'm unable to ssh into my container.
Could you help me figure out why /etc/my_init.d/ssh-setup.sh isn't executed (and /var/www/.ssh/authorized_keys isn't populated) when my container starts?
I had a pretty similar issue, also using phusion/baseimage. It turned out that my start script needed to be executable, e.g.
RUN chmod +x /etc/my_init.d/ssh-setup.sh
Note:
I noticed you're not using baseimage's init system (maybe on purpose?). But, from my understanding of their manifesto, doing that forgoes their whole "a better init system" approach.
My understanding is that they want you to, in your case, move your node server.js start command into a script within my_init.d, e.g. /etc/my_init.d/start.sh, and use their init system as the start command in your Dockerfile instead, e.g.
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/start.sh /etc/my_init.d/start.sh
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
RUN chmod +x /etc/my_init.d/start.sh
RUN chmod +x /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
That'll start baseimage's init system, which will then go and look in your /etc/my_init.d/ and execute all the scripts in there in alphabetical order. And, of course, they should all be executable.
My references for this are: Running start scripts and Getting Started.
As the previous answer states, you did not execute ssh-setup.sh. You can only have one process in a Docker container (that is a lie, but it will do for now). Why not run ssh-setup.sh as your CMD/ENTRYPOINT process and have ssh-setup.sh exec into your final command, i.e.
exec node server.js
Or cleaner, have a script, like boot.sh, which runs any init scripts, like ssh-setup.sh, then execs to node.
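A minimal boot.sh along those lines might look like this (an illustrative sketch, not taken from the image):
#!/bin/sh
# run the one-off init script, then replace this shell with node as the main process
/etc/my_init.d/ssh-setup.sh
exec node server.js
with CMD ["/boot.sh"] (or an ENTRYPOINT) in the Dockerfile instead of CMD ["node", "server.js"].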
Because you didn't invoke /etc/my_init.d/ssh-setup.sh when you started your container.
You should call it in CMD or ENTRYPOINT; read more here:
RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
CMD sets the default command and/or parameters, which can be overwritten from the command line when the docker container runs.
ENTRYPOINT configures a container that will run as an executable.
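A toy Dockerfile putting the three side by side (purely illustrative):
FROM ubuntu
# RUN executes at build time and bakes its result into the image
RUN apt-get update && apt-get install -y curl
# ENTRYPOINT is the executable the container runs at startup
ENTRYPOINT ["curl"]
# CMD supplies default arguments, which arguments to docker run can override
CMD ["--help"]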
I'm looking at http://progrium.viewdocs.io/dokku/process-management/ and trying to work out how to get several services running from a single project.
I have a repo with a Dockerfile:
FROM wjdp/flatcar
ADD . app
RUN /app/bin/install.sh
EXPOSE 8000
CMD /app/bin/run.sh
run.sh starts up a single threaded web server. This works fine but I'd like to run several services.
I tried making a Procfile with the single line web: /app/bin/run.sh and removing the CMD line from the Dockerfile. This doesn't work: without a command to run, the Docker container doesn't stay alive and dokku gets sad:
remote: Error response from daemon: Cannot kill container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e: Container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e is not running
remote: Error: failed to kill containers: [ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e]
Your best bet is probably to use supervisord. Supervisord is a very lightweight process manager.
You would launch supervisord with your CMD, and then put all the processes you want to launch into the supervisord.conf file.
For more information, look at the Docker documentation about this: https://docs.docker.com/articles/using_supervisord/ . The most relevant excerpts (taken from that page, but reworded):
You would put this into your Dockerfile:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
And the supervisord.conf file would contain something like this:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Obviously, you will also need to make sure that supervisord is installed in your image to begin with. It's part of most distros, so you can probably use yum or apt-get to install it.
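On a Debian/Ubuntu-based image, for example, the install step in the Dockerfile would typically be:
RUN apt-get update && apt-get install -y supervisor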