Dokku multi-process (container) with Dockerfile project - docker

I'm looking at http://progrium.viewdocs.io/dokku/process-management/ and trying to work out how to get several services running from a single project.
I have a repo with a Dockerfile:
FROM wjdp/flatcar
ADD . app
RUN /app/bin/install.sh
EXPOSE 8000
CMD /app/bin/run.sh
run.sh starts up a single threaded web server. This works fine but I'd like to run several services.
I tried making a Procfile with a single line of web: /app/bin/run.sh
and removing the CMD line from the Dockerfile. This doesn't work: without a command to run, the Docker container doesn't stay alive and dokku gets sad:
remote: Error response from daemon: Cannot kill container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e: Container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e is not running
remote: Error: failed to kill containers: [ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e]

Your best bet is probably to use supervisord. Supervisord is a very lightweight process manager.
You would launch supervisord with your CMD, and then put all the processes you want to launch into the supervisord.conf file.
For more information, look at the Docker documentation about this: https://docs.docker.com/articles/using_supervisord/ . The most relevant excerpts (taken from that page, but reworded):
You would put this into your Dockerfile:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
And the supervisord.conf file would contain something like this:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Obviously, you will also need to make sure that supervisord is installed in your image to begin with. It's part of most distros, so you can probably use yum or apt-get to install it.
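As a rough sketch of how that could map onto the project in the question, the supervisord.conf might look like the following. The web entry reuses /app/bin/run.sh from the question; the worker entry and /app/bin/worker.sh are purely hypothetical placeholders for whatever second service you want to run:
[supervisord]
nodaemon=true
[program:web]
command=/app/bin/run.sh
[program:worker]
command=/app/bin/worker.sh
The Dockerfile would then keep CMD ["/usr/bin/supervisord"] as above, and the Procfile isn't needed.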

Related

Docker multistage doesn't call entrypoint

I have a grails app running in Docker, and I was trying to add the Apache Derby server to run in the same image using a Docker multi-stage build. But when I add Derby, the grails app doesn't run.
So I started with this:
$ cat build/docker/Dockerfile
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
So far, so good: this starts off grails as a web server, and I can connect to the web app. But then I added Derby....
FROM azul/zulu-openjdk:13.0.3
EXPOSE 8080
VOLUME ["/AppData/derby"]
WORKDIR /app
COPY holder-0.1.jar application.jar
COPY app-entrypoint.sh app-entrypoint.sh
RUN chmod +x app-entrypoint.sh
RUN apt-get update && apt-get install -y dos2unix && dos2unix app-entrypoint.sh
ENTRYPOINT ["/app/app-entrypoint.sh"]
FROM datagrip/derby-server
WORKDIR /derby
Now when I start the container, Derby runs, but the grails app doesn't run at all. This is obvious from what is printed on the terminal, but I also logged in and did a ps aux to verify it.
Now I suppose I could look into creating my own startup script to start the Derby server, although this would seem to violate the independence of the two images' configurations.
Other people might say I should use two containers, but I was hoping to keep it simple; Derby is a very simple database and I don't feel the need for that complexity here.
Am I just trying to push the concept of multi stage docker containers too far?
Is it actually normal at all for docker containers to have more than one process start up? Will I have to fudge it and come up with my own entry point that starts Derby server in the background before starting grails in the foreground? Or is this all just wrong, and I really should be using multiple containers?
It is technically fine for a Docker container to run multiple processes, but the intended concept is different: one container, one process. Running the database separately is certainly how it should be done.
Now the problem with your Dockerfile is that after you've declared a second FROM, you have effectively discarded most of what you've done so far. You may use a previous stage to copy some files from it (this is normally used to build some binaries), but Docker will not do that for you unless you explicitly define what to copy. Thus your actual entrypoint is the one declared in the datagrip/derby-server image.
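To illustrate, copying from an earlier stage has to be spelled out with COPY --from; a minimal sketch (the stage name and target path are made up for illustration, not taken from the question):
FROM azul/zulu-openjdk:13.0.3 AS app
WORKDIR /app
COPY holder-0.1.jar application.jar
FROM datagrip/derby-server
# nothing from the first stage exists here unless it is copied explicitly
COPY --from=app /app/application.jar /derby/application.jar
Even then, only files are carried over; the first stage's ENTRYPOINT is not, which is why only Derby starts.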
I suggest you get started with docker-compose. It's a nice tool to run several containers without complications. With a file like this:
version: "3.0"
services:
app:
build:
context: .
database:
image: datagrip/derby-server
docker-compose will build an image for the app (if the Dockerfile is in the same directory, but this can be customised) and start the database as well. The database can be accessed from the application container as just 'database' (it is a resolvable hostname). See this reference for more options.
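Assuming the file above is saved as docker-compose.yml next to the app's Dockerfile, both services can then be brought up with a single command:
docker-compose up --build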

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (I am in general trying to migrate from Vagrant to Docker), I encountered a huge problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now in example_1, composer complains that the composer.json file/directory is not found, and in example_2, Apache says that the root dir is not found.
file/directory = /var/www/dev
I guess it's because it's a volume and it won't be mounted until the container is fully up: if I build without those commands (which otherwise lead to an error), I can then log in to the running container and execute the same commands from the command line without any error.
How can I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
In both Dockerfiles, remember that each RUN command runs in a new intermediate container, executes its command, and commits the result as a new layer. That means commands like RUN cd ... have no effect on later steps (use WORKDIR instead), and you can't start a service in the background in one RUN command and have it available later; it will get stopped before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
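A minimal sketch of that pattern for the end of the second, Apache-based Dockerfile (the vhost path is taken from the question; the exact Apache invocation may need adjusting for your base image):
COPY docker/config/vhosts /etc/apache2/sites-available/
# make sure the mount point exists even before the volume is attached
RUN mkdir -p /var/www/dev
# run Apache in the foreground instead of `service apache2 restart`
CMD ["apache2ctl", "-D", "FOREGROUND"]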
The problem seems to be the execution order. At image build time /var/www/dev is available; when you start a container from that image, the container's /var/www/dev is overwritten with your local mount.
If you need no access from your host, then you can simply skip the extra volume.
In case you want to use it in other containers too, then you should work with symlinks.

Why doesn't the container execute scripts inside /etc/my_init.d/ on startup?

I have the following Dockerfile:
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
CMD ["node", "server.js"]
My /build/conf/ssh-setup.sh looks like the following:
#!/bin/sh
set -e
echo "${SSH_PUBKEY}" >> /var/www/.ssh/authorized_keys
chown www-data:www-data -R /var/www/.ssh
chmod go-rwx -R /var/www/.ssh
It just adds SSH_PUBKEY env to /var/www/.ssh/authorized_keys to enable ssh access.
I run my container just like the following:
docker run -d -p 192.168.99.100:80:80 -p 192.168.99.100:2222:22 \
-e SSH_PUBKEY="$(cat ~/.ssh/id_rsa.pub)" \
--name dev hub.core.test/dev
My container starts fine but unfortunately the /etc/my_init.d/ssh-setup.sh script doesn't get executed and I'm unable to ssh into my container.
Could you help me understand why /etc/my_init.d/ssh-setup.sh doesn't get executed when my container starts?
I had a pretty similar issue, also using phusion/baseimage. It turned out that my start script needed to be executable, e.g.
RUN chmod +x /etc/my_init.d/ssh-setup.sh
Note:
I noticed you're not using baseimage's init system (maybe on purpose?). But, from my understanding of their manifesto, doing that forgoes their whole "a better init system" approach.
My understanding is that they want you to, in your case, move your start command of node server.js to a script within my_init.d, e.g. /etc/my_init.d/start.sh, and in your Dockerfile use their init system as the start command, e.g.
FROM phusion/baseimage:0.9.16
RUN mv /build/conf/start.sh /etc/my_init.d/start.sh
RUN mv /build/conf/ssh-setup.sh /etc/my_init.d/ssh-setup.sh
RUN chmod +x /etc/my_init.d/start.sh
RUN chmod +x /etc/my_init.d/ssh-setup.sh
EXPOSE 80 22
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
That'll start baseimage's init system, which will then go and look in your /etc/my_init.d/ and execute all the scripts in there in alphabetical order. And, of course, they should all be executable.
My references for this are: Running start scripts and Getting Started.
As the previous answer states, you did not execute ssh-setup.sh. You can only have one process in a Docker container (that is a lie, but it will do for now). Why not run ssh-setup.sh as your CMD/ENTRYPOINT process and have ssh-setup.sh exec into your final command, i.e.
exec node server.js
Or cleaner, have a script, like boot.sh, which runs any init scripts, like ssh-setup.sh, then execs to node.
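A minimal sketch of such a boot.sh, assuming the paths from the question (the script name and location are up to you):
#!/bin/sh
set -e
# one-off init steps first
/etc/my_init.d/ssh-setup.sh
# then replace the shell with the long-running process
exec node server.js
In the Dockerfile you would COPY it in, chmod +x it, and point CMD at it instead of node server.js.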
Because you didn't invoke /etc/my_init.d/ssh-setup.sh when you started your container. You should call it in CMD or ENTRYPOINT; read more here:
RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages.
CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
ENTRYPOINT configures a container that will run as an executable.

Chaining Docker Images and execute in order

I am extending the APIMan / Wildfly Docker image with my own image which will do two things:
1) Drop my .war file application into the Wildfly standalone/deployments directory
2) Execute a series of cURL commands that would query the Wildfly server in order to configure APIMan.
Initially, I tried creating two Docker images (the first to drop in the .war file and the second to execute the cURL commands); however, I incorrectly assumed that the CMD instruction in the innermost image would be executed first and CMDs would be executed outward.
For example:
ImageA:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
And
ImageB:
FROM ImageA
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/configure.sh"]
I had initially assumed that during runtime Wildfly / APIMAN would be started first (per the ImageA CMD instruction) and then my custom script would be run (per ImageB CMD instruction). I'm assuming that's incorrect because throughout the entire hierarchy, only 1 CMD instruction is executed (the last one in the outermost Dockerfile within the chain)?
So, I then attempted to merge everything into one Dockerfile which would (in the build process) start up Wildfly / APIMan, run the cURL commands, and shut down the Wildfly server, and then the CMD command would start it back up at runtime with Wildfly / APIMan configured. This, however, does not work because when I start Wildfly (as part of the build), it takes over the console and waits for log messages to display, so the build never completes. If I append an '&' at the end of the RUN command, it does not run (the Dockerfile RUN results in a no-op).
Here is my Dockerfile for this attempt:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
RUN /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone-apiman.xml
RUN /opt/jboss/wildfly/configure.sh
RUN /opt/jboss/wildfly/bin/jboss-cli.sh --connect controller=127.0.0.1 command=:shutdown
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
Are there any solutions to this? I'm trying to have my "configure.sh" script run AFTER Wildfly / APIMan are started up. It wouldn't matter to me whether this is done during the build process or at run time, however I don't see any way to do it during the build process because Wildfly doesn't have a daemon mode.
only 1 CMD instruction is executed (the last one in the outermost Dockerfile within the chain)?
Yes, this is correct, and keep in mind that CMD is not run at build time but at instantiation time. In essence, the CMD in your second Dockerfile overrides the first one when you instantiate a container from ImageB.
If you are doing some sort of REST API, CLI or cURL call to configure your Wildfly server, I suggest you do that configuration after the container's instantiation, not at the container's build. This way:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
is always your last command.
If you need some extra files or changes to configuration files you can put them in the Dockerfile so that they get copied at build time before CMD gets called at instantiation.
So in summary:
1) Build your Docker container with this Dockerfile (docker build):
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
2) Run (instantiate your container from your newly created image)
docker run <image-id>
3) Run the following from a container, or from your host that has Wildfly configured the same way. This assumes you are using some REST API to configure things (i.e. using cURL):
/opt/jboss/wildfly/configure.sh
You can instantiate a second container to run this command with something like this:
docker run -ti <image-id> /bin/bash
The original premise behind my problem (though it was not explicitly stated in the original post) was to configure APIMan within the image and without any intervention outside of the image.
It's a bit of a hack, but I was able to solve this by creating 3 scripts: one to start up Wildfly, one to run the configuration script, and a third to execute them both. Hopefully, this saves some other poor soul from spending a day figuring all of this out.
Because the Dockerfile only allows one execution call at runtime, that call needed to be to a custom script.
Below are the files with comments.
Dockerfile
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
COPY /execute.sh /opt/jboss/wildfly/
COPY /runWF.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/execute.sh"]
Note, all 3 scripts are built into the image. The execute.sh script is executed at runtime (instantiation) not build-time.
execute.sh
#!/bin/sh
/opt/jboss/wildfly/configure.sh &
/opt/jboss/wildfly/runWF.sh
Note, the configure.sh script is sent to the background so that we can move on to the runWF.sh script while configure.sh is still running.
configure.sh
#!/bin/sh
done=""
while [ "$done" != "200" ]
do
done=$(curl --write-out %{http_code} --silent --output /dev/null -u username:password -X GET -H "Accept: application/json" http://127.0.0.1:8080/apiman/system/status)
sleep 3
done
# configuration curl commands
curl ...
curl ...
The above configure.sh script runs in a loop, querying the Wildfly / APIMan server via curl every 3 seconds to check its status. Once it gets back an HTTP status code of 200 (representing an "up and running" state), it exits the loop and moves on to the configuration curl commands. Note, this should probably be made a bit 'safer' by providing another way to exit the loop (e.g. after a certain number of queries). I imagine this would give a production developer heart palpitations and I wouldn't suggest deploying it in production, however it works for the time being.
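One sketch of such an escape hatch is a simple attempt counter in the polling loop (the limit of 60 attempts is an arbitrary example):
attempts=0
status=""
while [ "$status" != "200" ]
do
    if [ "$attempts" -ge 60 ]; then
        echo "server did not come up in time, giving up" >&2
        exit 1
    fi
    status=$(curl --write-out %{http_code} --silent --output /dev/null -u username:password -X GET -H "Accept: application/json" http://127.0.0.1:8080/apiman/system/status)
    attempts=$((attempts + 1))
    sleep 3
done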
runWF.sh
#!/bin/sh
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone-apiman.xml
This script simply starts up the server. The parameters bind various modules to 0.0.0.0 and directs wildfly to use the apiman standalone xml file for configuration.
On my machine, it takes wildfly + apiman (with my custom war file) about 10-15 seconds (depending on which computer I run it on) to fully load, but once it does, the configure script will be able to query it successfully and then move on with the configuration curl commands. Meanwhile, wildfly still controls the console because it was started up last and you can monitor the activity and terminate the process with ctrl-c.
Build one image:
FROM jboss/apiman-wildfly:1.1.6.Final
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
COPY /RatedRestfulServer/target/RatedRestfulServer-1.0-SNAPSHOT.war /opt/jboss/wildfly/standalone/deployments/
COPY /configure.sh /opt/jboss/wildfly/
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "-c", "standalone-apiman.xml"]
Then start it up. After startup is complete, use the docker exec command to launch your configure script within the running container.
docker run -d --name wildfly <image name>
docker exec wildfly /opt/jboss/wildfly/configure.sh

frequent restart - docker containers in marathon/mesos

I have been successful in completely dockerizing my webserver application. Now I want to explore more by deploying it directly to a Mesos slave through the Marathon framework.
I can deploy a docker container to Marathon in two different ways, either through the command line or through the Marathon web UI.
Both worked for me, but the challenge is that when I try to deploy my docker image, Marathon frequently restarts the job, and in the Mesos UI page I can see many finished tasks for the same container, close to 10 tasks per minute, which is not what I expect.
My docker file looks like below:
FROM ubuntu:latest
#---------- file Author / Maintainer
MAINTAINER "abc"
#---------- update the repository sources list
RUN apt-get update && apt-get install -y \
apache2 \
curl \
openssl \
php5 \
php5-mcrypt \
unzip
#--------- installing composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN a2enmod rewrite
#--------- modifying the 000default file
COPY ./ /var/www/airavata-php-gateway
WORKDIR /etc/apache2/sites-available/
RUN sed -i 's/<\/VirtualHost>/<Directory "\/var\/www"> \n AllowOverride All \n <\/Directory> \n <\/VirtualHost>/g' 000-default.conf
RUN sed -i 's/DocumentRoot \/var\/www\/html/DocumentRoot \/var\/www/g' 000-default.conf
WORKDIR /etc/php5/mods-available/
RUN sed -i 's/extension=mcrypt.so/extension=\/usr\/lib\/php5\/20121212\/mcrypt.so/g' mcrypt.ini
WORKDIR /var/www/airavata-php-gateway/
RUN php5enmod mcrypt
#--------- making storage folder writable
RUN chmod -R 777 /var/www/airavata-php-gateway/app/storage
#-------- starting command
CMD ["sh", "-c", "sh pga-setup.sh ; service apache2 restart ; /bin/bash"]
#--------- exposing apache to default port
EXPOSE 80
Now I am clueless about how to resolve this issue; any guidance will be highly appreciated.
Thanks
Marathon is meant to run long-running tasks. So in your case, if you start a Docker container whose main process does not keep running (meaning it exits, successfully or unsuccessfully), Marathon will start it again.
For example, I started a Docker container using the simplest image hello-world. That generated more than 10 processes in Mesos UI in a matter of seconds! This was expected. Code inside Docker container was executing successfully and exiting normally. And since it exited, Marathon made sure that another instance of the app was started immediately.
On the other hand, when I start an nginx container which keeps listening on port 80, it becomes a long running task and a new task (Docker container) is spun up only when the existing container exits (successfully or unsuccessfully).
You probably need to work on the CMD section of your Dockerfile. Does the container in question keep running when started normally? That is, without Marathon - just using plain docker run? If yes, check if it keeps running in detached mode - docker run -d. If it exits, then CMD is the part you need to work on.
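A hedged fix along those lines is to run the setup script and then keep Apache in the foreground, instead of restarting it as a background service and dropping into bash; a sketch of the replacement CMD (pga-setup.sh is the script from the question, and apache2ctl -D FOREGROUND assumes the stock Ubuntu apache2 package):
CMD ["sh", "-c", "sh pga-setup.sh && exec apache2ctl -D FOREGROUND"]
With that, the container keeps running for as long as Apache does, so Marathon treats it as a long-running task.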
