Run drush command after apache starts - docker

I am using a CentOS Docker image to build a container and host a Drupal site. I need to run drush commands after Apache starts, but none of the commands placed after the Apache start line run. Is there any way to run drush commands after Apache starts? My startup script has the following lines:
/usr/sbin/httpd -DFOREGROUND
drush updb -y

Apache might be taking some time to start after you fire that command, so your drush command might be executing before Apache has successfully started.
There are two ways to handle this:
Either put a static sleep before running the drush commands, or,
after the Apache start command, add a loop that checks Apache's status: if it is not running yet, sleep; otherwise break out of the loop. After that, run your drush commands (a sketch follows below).
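A minimal sketch of that approach, assuming Apache is started in the background first and that curl exists in the image (the port and the 30-second limit are illustrative):
#!/bin/sh
/usr/sbin/httpd -DFOREGROUND &        # start Apache, backgrounded so the script can continue

# poll until Apache answers on port 80, for up to ~30 seconds
for i in $(seq 1 30); do
    if curl -s -o /dev/null http://localhost/; then
        break
    fi
    sleep 1
done

drush updb -y                         # now it should be safe to run drush
wait                                  # block on httpd so the container keeps running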

"Where" is this been executed? If it is not in your drupal home directory or below probably drush can't find your site.
Try "cd /var/www/html/orwhateverdirthesitehave" before drush.
On the other hand it doesn't seem safe to run this data model changes automatically. Only need to run that when a module is updated AND the updated requires a data model change... IMHO it's to risky to automate like this.
If it's not about updb exactly consider using a secondary script with all the drush stuff and starting changing the working dir to the drupal instalation path.
/usr/sbin/httpd -DFOREGROUND
/home/myuser/drushcommands.sh
and in drushcommands.sh
#!/bin/sh
cd /var/www/html/mydrupal
drush cim -y
...

Related

Docker container does not run crontab

I have a Dockerfile image based on Ubuntu. I am trying to make a bash script run each day, but the cron job never runs. When the container is running, I check that cron is running, and it is. The bash script works perfectly and the crontab file is correctly copied into the container. I can't seem to find where the problem is coming from.
Here is the Dockerfile:
FROM snipe/snipe-it:latest
ENV TZ=America/Toronto
RUN apt-get update \
&& apt-get install awscli -y \
&& apt-get clean \
&& apt-get install cron -y \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /var/www/html/backups_scripts /var/www/html/config/scripts
COPY config/crontab.txt /var/www/html/backups_scripts
RUN /usr/bin/crontab /var/www/html/backups_scripts/crontab.txt
COPY config/scripts/backups.sh /var/www/html/config/scripts
CMD ["cron","-f"]
The last CMD command doesn't work. And as soon as I remove the CMD command, I get this message when I check the cron status inside the container:
root@fcfb6052274a:/var/www/html# /etc/init.d/cron status
* cron is not running
Even if I start the cron process before loading the crontab, the crontab is still not launched.
This Dockerfile is called by a Docker swarm file (compose file type). Maybe cron must be activated from the compose file.
How can I tackle this problem? Thank you.
You need to approach this differently, as you have to remember that container images and containers are not virtual machines. They're a single process that starts and is maintained through its lifecycle. As such, background processes (like cron) don't exist in a container.
What I've seen most people do is have the container just execute whatever you're looking for it to do in a job, like do_the_thing.sh, and then use the docker run command on the host machine to call it via cron.
So for sake of argument, let's say you had an image called myrepo/task with a default entrypoint of do_the_thing.sh
On the host, you could add an entry to crontab:
# m h dom mon dow user command
0 */2 * * * root docker run --rm myrepo/task
Then it's down to a question of design. If the task needs files, you can pass them down via volume. If it needs to put something somewhere when it's done, maybe look at blob storage.
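For example, the crontab entry above could mount a host directory into the task container (the paths here are illustrative):
# m h dom mon dow user command
0 */2 * * * root docker run --rm -v /srv/task-data:/data myrepo/task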
I think this question is a duplicate, with a detailed and heavily upvoted response here. I followed the top-most Dockerfile example without issues.
Your CMD running cron in the foreground isn't the problem. I ran a quick version of your Dockerfile and, exec'ing into the container, I could confirm cron was running. I recommend checking how the cron entries in your crontab file redirect their output (see the example below).
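For example, an entry in the crontab file you install with /usr/bin/crontab could redirect its output somewhere you can inspect afterwards (the paths are illustrative):
# run the backup script daily at 02:00 and keep the output visible
0 2 * * * /var/www/html/config/scripts/backups.sh >> /var/log/backups.log 2>&1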
Expanding on one of the other answers here: a container is actually a lot like a virtual machine, and often containers do run many processes concurrently. If you happen to have any other containers running, you can see this most easily by running docker stats and looking at the PIDS column.
Also, easy to examine interactively yourself like this:
$ # Create a simple ubuntu running container named my-ubuntu
$ docker run -it -h my-ubuntu ubuntu
root@my-ubuntu$ ps aw # Shows bash and ps processes running.
root@my-ubuntu$ # Launch a ten minute sleep in the background.
root@my-ubuntu$ sleep 600 &
root@my-ubuntu$ ps aw # Now shows sleep also running.

Running cron in a docker container on a windows host

I am having some problems trying to make a container that runs a cron job. I can see cron running using top in the container, but it doesn't write to the log file as the example below attempts to. The file stays empty.
I have read answers to the same question here:
How to run a cron job inside a docker container?
Output of `tail -f` at the end of a docker CMD is not showing
But I could not make any of the suggestions work. For example I used the dockerfile from here: https://github.com/Ekito/docker-cron/
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
It didn't work on my machine (Windows 10). Apparently there is a Windows-specific issue, also reported by someone else: https://github.com/Ekito/docker-cron/issues/3
To test whether it was just me doing something wrong, I tried the same thing in a virtual machine running Ubuntu (an Ubuntu host instead of my Windows host), and there it worked: the log file is extended as expected.
So what can I do to try to make this work?
I tried writing to a mounted (bind) folder and making a volume to write to. Neither worked.
rferalli's answer on the github issue did the trick for me:
"Had the same issue. Fixed it by changing line ending of the crontab file from CRLF to LF. Hope this helps!"
I have this problem too.
My workaround is to use Task Scheduler to run a .bat file that starts the container instead.
Using Task Scheduler: https://active-directory-wp.com/docs/Usage/How_to_add_a_cron_job_on_Windows.html
hello.bat
docker run hello-world
TaskScheduler Action
cmd /c hello.bat >> hello.log 2>&1
Hope this helps :)

initctl too old upstart check

I am trying to do a syntax check on an upstart script using init-checkconf. However when I run it, it returns ERROR: version of /sbin/initctl too old.
I have no idea what to do; I have tried reinstalling upstart, but nothing changes. This is being run from within a Docker container (ubuntu:14.04), which might have something to do with it.
I just ran into the same issue.
Looking in the container:
root@puppet-master:/# cat /sbin/initctl
#!/bin/sh
exit 0
I haven't tested it completely yet, but I added the following to my Dockerfile:
# Fix upstart
RUN rm -rf /sbin/initctl && ln -s /sbin/initctl.distrib /sbin/initctl
I thought this link explained it pretty well:
When your Docker container starts, only the CMD command is run. The only processes that will be running inside the container is the CMD command, and all processes that it spawns. That's why all kinds of important system services are not run automatically – you have to run them yourself.
Digging around some more, I found an official Ubuntu image containing a working version of upstart:
https://registry.hub.docker.com/_/ubuntu-upstart/

Docker multiple entrypoints

Say I have the following Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y mongod #pretend this exists
EXPOSE 80
ENTRYPOINT ["/usr/sbin/apache2"]
The ENTRYPOINT command makes it so that apache2 starts when the container starts. I also want to be able to start mongod when the container starts, with the command service mongod start. According to the documentation, however, there must be only one ENTRYPOINT in a Dockerfile. What would be the correct way to do this, then?
As Jared Markell said, if you want to launch several processes in a Docker container, you have to use supervisor. You will have to configure supervisor to tell it to launch your different processes.
I wrote about this in this blog post, but there is also a really nice article here detailing how and why to use supervisor in Docker.
Basically, you will want to do something like:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
RUN apt-get install -y mongod #pretend this exists
RUN apt-get install -y supervisor # Installing supervisord
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
ENTRYPOINT ["/usr/bin/supervisord"]
And add a configuration file supervisord.conf:
[supervisord]
nodaemon=true
[program:mongodb]
command=/etc/mongod/mongo #To adapt, I don't know how to launch your mongodb process
[program:apache2]
command=/usr/sbin/apache2 -DFOREGROUND
EDIT: As this answer has received quite a lot of upvotes, I want to add a warning: using Supervisor is not considered a best practice for running several jobs. Instead, you may be interested in creating several containers for your different processes and managing them through Docker Compose.
In a nutshell, Docker Compose allows you to define in one file all the containers needed for your app and launch them with one single command (see the sketch below).
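A minimal docker-compose.yml sketch in that spirit, splitting the two processes into separate containers (the image names and the port mapping are assumptions, not something from the question):
version: "3"
services:
  web:
    image: my-apache-image        # hypothetical image that runs apache2 -DFOREGROUND
    ports:
      - "80:80"
    depends_on:
      - mongo
  mongo:
    image: mongo                  # official MongoDB image
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: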
My solution is to throw individual scripts into /opt/run/ and execute them with:
#!/bin/bash
LOG=/var/log/all
touch $LOG
for a in /opt/run/*
do
$a >> $LOG &
done
tail -f $LOG
And my entry point is just the location of this script, say it's called /opt/bin/run_all:
ADD 00_sshd /opt/run/
ADD 01_nginx /opt/run/
ADD run_all /opt/bin/
ENTRYPOINT ["/opt/bin/run_all"]
The simple answer is that you should not, because it breaks the single responsibility principle: one container, one service. Imagine that you want to spawn additional cloud instances of MongoDB because of a sudden workload spike; why increase the number of Apache2 instances as well, and at a 1:1 ratio?
Instead, you should link the containers and make them speak over TCP. See https://docs.docker.com/userguide/dockerlinks/ for more info.
Typically, you would not do this. It is an anti-pattern because:
You typically have different update cycles for the two processes
You may want to change base filesystems for each of these processes
You want logging and error handling for each of these processes that are independent of each other
Outside of a shared network or volume, the two processes likely have no other hard dependencies
Therefore the best option is to create two separate images, and start the two containers with a compose file that handles the shared private network.
If you cannot follow that best practice, then you end up in a scenario like the following. The parent image contains a line:
ENTRYPOINT ["/entrypoint-parent.sh"]
and you want to add the following to your child image:
ENTRYPOINT ["/entrypoint-child.sh"]
Then the value of ENTRYPOINT in the resulting image is replaced with /entrypoint-child.sh, in other words, there is only a single value for ENTRYPOINT. Docker will only call a single process to start your container, though that process can spawn child processes. There are a couple techniques to extend entrypoints.
Option A: Call your entrypoint, and then run the parent entrypoint at the end, e.g. /entrypoint-child.sh could look like:
#!/bin/sh
echo Running child entrypoint initialization steps here
/usr/bin/mongodb ... &
exec /entrypoint-parent.sh "$@"
The exec part is important: it replaces the current shell with the /entrypoint-parent.sh process, which avoids issues with signal handling. The result is that you run the first bit of initialization in the child entrypoint and then delegate to the original parent entrypoint. This does require that you keep track of the name of the parent entrypoint, which could change between versions of your base image. It also means you lose error handling and graceful termination for mongodb, since it is run in the background. That could result in a falsely healthy container and data loss, neither of which I would recommend for a production environment.
Option B: Run the parent entrypoint in the background. This is less than ideal since you will no longer have error handling on the parent process unless you take some extra steps. At the simplest, this looks like the following in your /entrypoint-child.sh:
#!/bin/sh
# other initialization steps
/entrypoint-parent.sh "$@" &
# potentially wait for parent to be running by polling
# run something new in the foreground, that may depend on parent processes
exec /usr/bin/mongodb ...
Note, the "$@" notation I keep using passes the value of CMD through as arguments to the parent entrypoint.
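To make that concrete, a small illustration (the image fragment and command are made up for the example):
# Dockerfile fragment
ENTRYPOINT ["/entrypoint-child.sh"]
CMD ["mongod", "--config", "/etc/mongod.conf"]

# inside /entrypoint-child.sh, "$@" then expands to: mongod --config /etc/mongod.conf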
Option C: Switch to a tool like supervisord. I'm not a huge fan of this since it still implies running multiple daemons inside your container, and you are usually best to split that into multiple containers. You need to decide what the proper response is when a single child process keeps failing.
Option D: Similar to Options A and B, I often create a directory of entrypoint scripts that can be extended at different levels of the image build. The entrypoint itself is unchanged; I just add new files into a directory that gets called sequentially based on the filename. In my scenarios, these scripts are all run in the foreground, and I exec the CMD at the end. You can see an example of this in my base image repo, in particular the entrypoint.d directory and the bin/entrypointd.sh script, which includes this section:
# ...
for ep in /etc/entrypoint.d/*; do
ext="${ep##*.}"
if [ "${ext}" = "env" -a -f "${ep}" ]; then
# source files ending in ".env"
echo "Sourcing: ${ep}"
set -a && . "${ep}" && set +a
elif [ "${ext}" = "sh" -a -x "${ep}" ]; then
# run scripts ending in ".sh"
echo "Running: ${ep}"
"${ep}"
fi
done
# ...
# run command with exec to pass control
echo "Running CMD: $#"
exec "$#"
However, the above is more for extending the initialization steps, and not for running multiple daemons inside the container. Given the bad options and issues they each have, I hope it's clear why running two containers would be preferred in your scenario.
I was not able to get the usage of && to work. I was able to solve this as described here: https://stackoverflow.com/a/19872810/2971199
So in your case you could do:
RUN echo "/usr/sbin/apache2" >> /etc/bash.bashrc
RUN echo "/path/to/mongodb" >> /etc/bash.bashrc
ENTRYPOINT ["/bin/bash"]
You may need/want to edit your start commands.
Be careful if you build your Dockerfile more than once: you probably don't want multiple copies of the commands appended to your bash.bashrc file. You could use grep and an if statement to make your RUN command idempotent (see the sketch below).
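A minimal sketch of such an idempotent RUN line, using the apache2 command from above (the grep guard is my addition):
# append the line only if it is not already present
RUN grep -qxF "/usr/sbin/apache2" /etc/bash.bashrc || echo "/usr/sbin/apache2" >> /etc/bash.bashrc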
You can't specify multiple entrypoints in a Dockerfile. To run multiple servers in the same Docker container, you must use a command that is able to launch your servers. Supervisord has already been cited, but I could also recommend multirun, a project of mine which is a lighter alternative.
There is an answer in docker docs:
https://docs.docker.com/config/containers/multi-service_container/
But in short
If you need to run more than one service within a container, you can accomplish this in a few different ways.
The first one is to run a script which manages your processes.
The second one is to use a process manager like supervisord.
I can think of several ways:
you can write a script to put in the container (via ADD) that runs all the startup commands, then put that script in the ENTRYPOINT
I think you can put any shell command in the ENTRYPOINT, so you could do service mongod start && /usr/sbin/apache2 (see the sketch below)
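In a Dockerfile, that shell-form ENTRYPOINT might look like the following sketch (adding -DFOREGROUND is my assumption, so apache2 keeps the container alive instead of daemonizing):
ENTRYPOINT service mongod start && /usr/sbin/apache2 -DFOREGROUND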
If you are trying to run multiple concurrent npm scripts such as a watch script and a build script for example, check out:
How can I run multiple npm scripts in parallel?

Tiny tiny rss monit

Help!
I want to set up a monitoring service on my Debian server that will monitor, and start when needed, the updater for Tiny Tiny RSS. The problem is that it is a PHP foreground process, normally run in a screen session under a non-root user.
I can run it as:
php ./update_daemon2.php
or, better, putting it in the background so it can be run from a different account:
sudo -u tinyrssuser php ./update_daemon2.php -daemon > /dev/null & disown $!
I have installed monit, but I can't seem to find a way to have it detect whether the updater is running.
I would prefer to stick with monit, but it is not essential.
Any ideas would be appreciated.
Found the answer at:
http://510x.se/notes/posts/Install_Tiny_Tiny_RSS_on_Debian/
But use this instead under /etc/init.d/
http://mylostnotes.blogspot.co.il/2013/03/tiny-tiny-rss-initd-script.html
Make sure to set the user and group in that script.
Create an upstart script /etc/init/ttrss.conf:
description "TT-RSS Feed Updater"
author "The Epyon Avenger <epyon_avenger on TT-RSS forums>"
env USER=www-data
env TTRSSDIR=/var/www/ttrss
start on started mysql
stop on stopping mysql
respawn
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/ttrss.pid --chdir $TTRSSDIR --chuid $USER --group $USER --exec /usr/bin/php ./update_daemon2.php >> /var/log/ttrss/ttrss.log 2>&1
Start the script:
sudo start --system ttrss
Add the following lines to your monit conf:
check process ttrss with pidfile /var/run/ttrss.pid
start program = "/sbin/start ttrss"
stop program = "/sbin/stop ttrss"
