frequent restart - docker containers in marathon/mesos - docker

I have successfully dockerized my webserver application. Now I want to explore further by deploying it directly to a Mesos slave through the Marathon framework.
I can deploy a Docker container to Marathon in two different ways: either through the command line or through the Marathon web UI.
Both worked for me, but the challenge is that when I deploy my Docker image, Marathon keeps restarting the job, and on the Mesos UI page I can see many finished tasks for the same container, close to 10 tasks per minute. This is not expected, I believe.
My docker file looks like below:
FROM ubuntu:latest
#---------- file Author / Maintainer
MAINTAINER "abc"
#---------- update the repository sources list
RUN apt-get update && apt-get install -y \
    apache2 \
    curl \
    openssl \
    php5 \
    php5-mcrypt \
    unzip
#--------- installing composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
RUN a2enmod rewrite
#--------- modifying the 000default file
COPY ./ /var/www/airavata-php-gateway
WORKDIR /etc/apache2/sites-available/
RUN sed -i 's/<\/VirtualHost>/<Directory "\/var\/www"> \n AllowOverride All \n <\/Directory> \n <\/VirtualHost>/g' 000-default.conf
RUN sed -i 's/DocumentRoot \/var\/www\/html/DocumentRoot \/var\/www/g' 000-default.conf
WORKDIR /etc/php5/mods-available/
RUN sed -i 's/extension=mcrypt.so/extension=\/usr\/lib\/php5\/20121212\/mcrypt.so/g' mcrypt.ini
WORKDIR /var/www/airavata-php-gateway/
RUN php5enmod mcrypt
#--------- making storage folder writable
RUN chmod -R 777 /var/www/airavata-php-gateway/app/storage
#-------- starting command
CMD ["sh", "-c", "sh pga-setup.sh ; service apache2 restart ; /bin/bash"]
#--------- exposing apache to default port
EXPOSE 80
Now I am clueless about how to resolve this issue; any guidance would be highly appreciated.
Thanks

Marathon is meant to run long-running tasks. So in your case, if you start a Docker container whose main process does not keep running, meaning it exits (successfully or unsuccessfully), Marathon will start it again.
For example, I started a Docker container using the simplest image, hello-world. That generated more than 10 tasks in the Mesos UI in a matter of seconds! This was expected: the code inside the Docker container executed successfully and exited normally, and since it exited, Marathon made sure that another instance of the app was started immediately.
On the other hand, when I start an nginx container, which keeps listening on port 80, it becomes a long-running task, and a new task (Docker container) is spun up only when the existing container exits (successfully or unsuccessfully).
You probably need to work on the CMD section of your Dockerfile. Does the container in question keep running when started normally, that is, without Marathon, just using plain docker run? If yes, check whether it keeps running in detached mode (docker run -d). If it exits, then CMD is the part you need to work on.
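In this case the CMD ends with /bin/bash, which exits immediately when no TTY is attached (and Marathon does not attach one), so the container stops and Marathon restarts it. A sketch of a fix, keeping Apache itself in the foreground as the main process (based on the Dockerfile above; pga-setup.sh is the setup script from the question):
#-------- run setup, then keep Apache in the foreground so the container stays alive
CMD ["sh", "-c", "sh pga-setup.sh && exec apache2ctl -D FOREGROUND"]
With Apache as the foreground process, the container keeps running and Marathon treats it as a long-running task instead of restarting it.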

Related

Testing an application that needs MySQL/MariaDB in Jenkins

This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests; basically, just a clean database it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but I fail to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its own user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
    && apt-get install -y git mariadb-client mariadb-server wget \
    && apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described above: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without the -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon, or to set this up completely differently (an external database?), but I can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the container without the -u option (as root).
Create a jenkins user inside the container (via the Dockerfile).
At the end of the entrypoint.sh, switch to the jenkins user with su - jenkins.
One disadvantage is that every time you enter the container you will be the root user.
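A minimal sketch of that suggestion (the useradd UID and the entrypoint change are assumptions, untested against your Jenkins setup):
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
    && apt-get install -y git mariadb-client mariadb-server wget \
    && apt-get clean
# assumption: create a jenkins user matching the UID Jenkins passes (1000)
RUN useradd -m -u 1000 jenkins
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The entrypoint then drops privileges at the end:
#!/bin/sh
# still running as root here, so starting the daemon works
service mysql start
# switch to the jenkins user for the actual command; note that "$*" flattens
# the argument list, so arguments containing spaces need extra quoting care
exec su - jenkins -c "$*"
You would also need to stop Jenkins from forcing -u 1000:1001 onto docker run, since the entrypoint has to start as root.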

Docker container exits immediately upon invocation

I am using Docker version 18.09.0. The image builds without errors. Upon creating a container from the image, the container runs and exits immediately with exit status 0, even though I use the -it option. Here is the Dockerfile.
FROM node:8.15-alpine
WORKDIR /usr/src/app
COPY package*.json ./
COPY middleware middleware
COPY hfc-key-store hfc-key-store
COPY app.js ./
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    g++ \
    && npm install \
    && npm install -g forever
ENTRYPOINT ["forever", "start", "-l", "/logsBackEnd.txt", "--spinSleepTime", "10000", "app.js"]
Command to build image:
docker image build -t nid-api:1.0 .
Command to run container:
docker run -it nid-api:1.0
You need to run it in detached mode using -d.
There are two reasons I can think of for a container to exit:
there is no service running inside the container, or
the service is running but the container was started without the detach option.
The first case seems more related to your error. In general, run long-lived service containers in detached mode (docker run -d).
Also see the following:
Docker container will automatically stop after "docker run -d"
forever start runs your app as a daemon inside the Docker container, and that is likely what makes the container exit immediately: the launching process forks the app into the background and exits, and since it is the container's main process, the container stops with it.
You can try to use dumb-init to start any process running in a docker container so that exit signals are handled correctly.
dumb-init enables you to simply prefix your command with dumb-init. It acts as PID 1 and immediately spawns your command as a child process, taking care to properly handle and forward signals as they are received.
dumb-init runs as PID 1, acting like a simple init system. It launches a single process and then proxies all received signals to a session rooted at that child process.
Since your actual process is no longer PID 1, when it receives signals from dumb-init, the default signal handlers will be applied, and your process will behave as you would expect. If your process dies, dumb-init will also die, taking care to clean up any other processes that might still remain.
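A sketch of what this could look like for the Dockerfile above (assumptions: dumb-init is installable via apk, and running app.js directly with node in the foreground replaces the forever daemon):
RUN apk add --no-cache dumb-init
# run node in the foreground under dumb-init instead of daemonizing via "forever start"
ENTRYPOINT ["dumb-init", "node", "app.js"]
Alternatively, forever itself should stay in the foreground if you drop the start subcommand (forever app.js), which is also enough to keep the container alive.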

PHP and redis in same docker image

I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but Redis is never started. When I run /bin/bash inside my container and manually enter "service redis_6379 start" it works, so I'm assuming my .conf and init.d files are okay.
While I'm aware it'd be much easier to use docker-compose, I'm specifically trying to avoid it for specific reasons.
There are multiple things wrong here:
Starting processes in a Dockerfile has no effect. A Dockerfile builds an image; the processes need to be started when a container is created from it. This can be done with an entrypoint, defined in the Dockerfile using ENTRYPOINT; that entrypoint is typically a script that is executed when an actual container is started.
There is no init process in Docker by default, so service calls will fail without further work. If you need to start multiple processes, look at the docs of the supervisord program.
Running both Redis and a webserver in one container is not best practice. For a PHP application using Redis you'd typically have two containers, one running Redis and one running Apache, and let them interact via the network.
I suggest you read the Docker documentation before continuing. All of this is described in depth there.
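That said, if you accept the trade-off and keep both in one container without supervisord, a minimal entrypoint sketch (assuming the php:7.0-apache base image, which ships an apache2-foreground helper) could look like this:
#!/bin/sh
# start redis in the background using the config copied into the image
redis-server /etc/redis/6379.conf --daemonize yes
# keep apache in the foreground as the container's main process
exec apache2-foreground
In the Dockerfile you would then replace the two RUN service ... lines with COPY docker-entrypoint.sh / and ENTRYPOINT ["/docker-entrypoint.sh"].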
I agree with @Richard. Use two or more containers according to your needs, then --link them to get things working!

How to run any commands in docker volumes?

After a couple of days of testing and working with Docker (in general I am trying to migrate from Vagrant to Docker), I have encountered a huge problem which I am not sure how or where to fix.
docker-compose.yml
version: "3"
services:
server:
build: .
volumes:
- ./:/var/www/dev
links:
- database_dev
- database_testing
- database_dev_2
- mail
- redis
ports:
- "80:8080"
tty: true
#the rest are only images of database redis and mailhog with ports
Dockerfile
example_1
FROM ubuntu:latest
LABEL Yamen Nassif
SHELL ["/bin/bash", "-c"]
RUN apt-get install vim mc net-tools iputils-ping zip curl git -y
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN cd /var/www/dev
RUN composer install
Dockerfile
example_2
....
RUN apt-get install apache2 openssl php7.2 php7.2-common libapache2-mod-php7.2 php7.2-fpm php7.2-mysql php7.2-curl php7.2-dom php7.2-zip php7.2-gd php7.2-json php7.2-opcache php7.2-xml php7.2-cli php7.2-intl php7.2-mbstring php7.2-redis -y
# basically 2 files with just rooting to /var/www/dev
COPY docker/config/vhosts /etc/apache2/sites-available/
RUN service apache2 restart
....
Now in example_1 the composer.json file/directory is not found, and in example_2 Apache says the document root is not found; in both cases the missing path is /var/www/dev.
I guess this is because it's a volume, and it isn't mounted until the container is fully up: if I build the image without those commands (which otherwise fail), I can then log in to the container and execute the same commands from the command line without any error.
How do I fix this?
In your first Dockerfile, use the COPY directive to copy your application into the image before you do things like RUN composer install. It'd look something like
FROM php:7.0-cli
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN composer install
(cribbed from the php image documentation; that image may not have composer preinstalled).
In both Dockerfiles, remember that each RUN command runs in a new container created from the image built so far, and only its filesystem changes are kept as a new layer. That means commands like RUN cd ... have no effect on later steps, and you can't start a service in the background in one RUN command and have it still be running later; it will be gone before the Dockerfile moves on to the next line.
In the second Dockerfile, commands like service or systemctl or initctl just don't work in Docker and you shouldn't try to use them. Standard practice is to start the server process as a foreground process when the container launches via a default CMD directive. The flip side of this is that, since the server won't start until docker run time, your volume will be available at that point. I might RUN mkdir in the Dockerfile just to be sure it exists.
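For example, the RUN cd /var/www/dev in the first Dockerfile silently does nothing for the composer install that follows; WORKDIR is the instruction that actually persists:
# the directory change below is lost as soon as this RUN step finishes
RUN cd /var/www/dev
# WORKDIR persists for all later instructions (and for the running container)
WORKDIR /var/www/dev
RUN composer install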
The problem seems to be the execution order. At image build time, /var/www/dev contains whatever the image itself provides; when you start a container from that image, /var/www/dev is shadowed by your local bind mount.
If you need no access from your host, you can simply skip the extra volume.
If you want to use it in other containers too, you could work with symlinks.

Dokku multi-process (container) with Dockerfile project

I'm looking at http://progrium.viewdocs.io/dokku/process-management/ and trying to work out how to get several services running from a single project.
I have a repo with a Dockerfile:
FROM wjdp/flatcar
ADD . app
RUN /app/bin/install.sh
EXPOSE 8000
CMD /app/bin/run.sh
run.sh starts up a single threaded web server. This works fine but I'd like to run several services.
I tried making a Procfile with a single line of web: /app/bin/run.sh and removing the CMD line from the Dockerfile. This doesn't work: without a command to run, the Docker container doesn't stay alive, and dokku gets sad:
remote: Error response from daemon: Cannot kill container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e: Container ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e is not running
remote: Error: failed to kill containers: [ae9d50af17deed4b50bc8327e53ee942bbb3080d3021c49c6604b76b25bb898e]
Your best bet is probably to use supervisord. Supervisord is a very lightweight process manager.
You would launch supervisord with your CMD, and then put all the processes you want to launch into the supervisord.conf file.
For more information, look at the Docker documentation about this: https://docs.docker.com/articles/using_supervisord/ . The most relevant excerpts (taken from that page, but reworded):
You would put this into your Dockerfile:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
And the supervisord.conf file would contain something like this:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Obviously, you will also need to make sure that supervisord is installed in your image to begin with. It's part of most distros, so you can probably use yum or apt-get to install it.
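For a Debian/Ubuntu-based image, for instance, that install step would be something like:
# the Debian/Ubuntu package providing supervisord is named "supervisor"
RUN apt-get update && apt-get install -y supervisor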
