I have the following /etc/init/uwsgi.conf:
description "uWSGI"
start on runlevel [2345]
stop on runlevel [06]
respawn
env UWSGI=/var/www/my_project/venv/bin/uwsgi
env LOGTO=/var/log/uwsgi/emperor.log
exec $UWSGI --master --emperor /etc/uwsgi/vassals --die-on-term --uid www-data --gid www-data --logto $LOGTO
As far as I can tell, this is best practice, as seen here.
I also have /etc/uwsgi/vassals/my_project_uwsgi.ini:
[uwsgi]
#application's base folder
base = /var/www/my_project
#python module to import
app = server
module = %(app)
home = %(base)/venv
pythonpath = %(base)
#socket file's location
socket = /var/www/my_project/%n.sock
#permissions for the socket file
chmod-socket = 666
callable = app
#location of log files
logto = /var/log/uwsgi/%n.log
processes = 10
Now, is it that uWSGI isn't being called on startup, or is there something wrong with the overall config (uWSGI config, nginx config, my app logic, file permissions, etc.)?
I think the init script simply isn't being run. The reason I think this is that when I run it manually, i.e.
# /var/www/my_project/venv/bin/uwsgi --master --emperor /etc/uwsgi/vassals --die-on-term --uid www-data --gid www-data --logto /var/log/uwsgi/emperor.log
everything works as it should. On the other hand, when I reboot the machine (# reboot), nothing seems to happen: uWSGI isn't running after the reboot and, more importantly, NOTHING is written to /var/log/uwsgi/emperor.log. From my experience with uWSGI, even with a configuration mistake something still gets written to emperor.log. So I conclude that /etc/init/uwsgi.conf isn't being run on startup.
How can I check this, and fix it?
EDIT/update: I tried sudo apt install upstart. Also, this says that upstart needs inotify to detect changes to files in /etc/init, so I also did sudo apt install inotify-tools. However, my script still doesn't run on startup.
This solved it:
apt install upstart-sysv
Inspired by this (under "Permanent switch back to upstart").
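To answer the "how can I check this" part: look at what is actually running as PID 1. On Ubuntu 15.04 and later that is systemd by default, and systemd ignores upstart jobs in /etc/init unless you switch back with upstart-sysv:
# as root: show which binary PID 1 really is
stat -c '%N' /proc/1/exe
# or, without root:
ps -p 1 -o comm=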
Okay, so first of all I read some posts on this topic; that's how I ended up with my current setup. Still, I can't find my mistake. Also, I'm more of a beginner.
So this is my docker file:
FROM conda/miniconda3
WORKDIR /app
RUN apt-get update -y
RUN apt-get install cron -y
RUN apt-get install curl -y
RUN conda update -n base -c defaults conda
RUN conda install mamba -n base -c conda-forge
COPY ./environment.yml ./environment.yml
RUN mamba env create -f environment.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "--no-capture-output", "-n", "d2", "/bin/bash", "-c"]
#Setup cron
COPY ./cronjob /etc/cron.d/cronjob
RUN crontab /etc/cron.d/cronjob
RUN chmod 0600 /etc/cron.d/cronjob
RUN touch ./cron.log
COPY ./ ./
RUN ["chmod", "+x", "run.sh"]
ENTRYPOINT ["sh", "run.sh"]
CMD ["cron", "-f"]
What I want to do:
Run my run.sh (I managed to do that.)
Setup a cronjob inside my container which is defined in a file called cronjob (see content below)
My cronjob is not working. Why?
Note that cron.log is empty. It is never triggered.
Also, the output of crontab -l (run inside the container) is:
$ crontab -l
# Updates every 15 minutes.
*/15 * * * * /bin/sh /app/cron.sh >> /app/cron.log 2&>1
And here is the content of the cronjob file:
# Updates every 15 minutes.
*/15 * * * * /bin/sh /app/cron.sh >> /app/cron.log 2&>1
As Saeed pointed out already, there is reason to believe you did not place your cron.sh script inside the container.
On top of that, cron is programmed such that it does not log failed invocations anywhere. You can try to turn on some debug logging (I almost had to search cron's source to find the right settings years ago). Finally, cron sends its debug output to syslog, but in your container only cron is running, so the log entries are probably lost at that stage as well.
That ultimately means you are in the dark and need to find the needle. Installing the missing script is a good first step.
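If you do want to see what a job prints without running a syslog daemon, one common workaround (my suggestion, not part of the asker's setup) is to redirect the job's output to the stdout of the container's main process, which docker logs captures:
*/15 * * * * /bin/sh /app/cron.sh >> /proc/1/fd/1 2>&1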
As Saeed said in this comment:
First of all, your cronjob command is wrong: you should have 2>&1 instead of 2&>1. Second, run ls -lh /app/cron.sh to see if your file was copied. Also be sure cron.sh is in the directory where your Dockerfile is.
2&>1 was the mistake I had made.
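For anyone wondering why it matters: 2>&1 duplicates stderr onto stdout, whereas the shell parses 2&>1 as two separate commands, roughly /bin/sh /app/cron.sh >> /app/cron.log 2 & (background the script with a stray extra argument 2) followed by >1 (truncate a file literally named 1), so stderr is never captured. The corrected entry:
*/15 * * * * /bin/sh /app/cron.sh >> /app/cron.log 2>&1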
I had a similar issue with the crontab not being read
I was also using something like:
COPY ./cronjob /etc/cron.d/cronjob
Locally the cronjob file had permissions of 664 instead of 644. This was causing cron to log
Sep 29 16:21:01 0f2c2e0ddbfd cron[389]: (*system*crontab) INSECURE MODE (group/other writable) (/etc/cron.d/crontab)
(I actually had to install syslog-ng to see this happen.)
It turns out cron refuses to read cron configurations if they are writable by group or others. I guess it makes sense in hindsight, but I was completely oblivious to this.
Changing my cronjob file's permissions to 644 fixed this for me (I did this on my local filesystem; the Dockerfile COPY carries the permissions over).
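If you'd rather not depend on local file modes at all, you can also normalize the permissions during the build (a small sketch, not part of the original answer):
COPY ./cronjob /etc/cron.d/cronjob
# cron refuses group/other-writable files, so force sane permissions
RUN chmod 0644 /etc/cron.d/cronjob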
You only need to add the user field (root) to the crontab entry, then it will solve the issue:
*/15 * * * * root /bin/sh /app/cron.sh >> /app/cron.log 2>&1
My Dockerfile extends from php:8.1-apache. The following happens while developing:
The application creates log files (as www-data, 33:33)
I create files (as the image's default user root, 0:0) within the container
These files are mounted on my host where I'm acting as user (1000:1000). Of course I'm running into file permission issues now. I'd like to update/delete files created in the container on my host and vice versa.
My current solution is to set the image's user to www-data. In that way, all created files belong to it. Then, I change its user and group id from 33 to 1000. That solves my file permission issues.
However, this leads to another problem:
I'm prepending sudo -E to the entrypoint and command. I'm doing that because they're normally running as root and my custom entrypoint requires root permissions. But in that way the stop signal stops working and the container has to be killed when I want it to stop:
~$ time docker-compose down
Stopping test_app ... done
Removing test_app ... done
Removing network test_default
real 0m10,645s
user 0m0,167s
sys 0m0,004s
Here's my Dockerfile:
FROM php:8.1-apache AS base
FROM base AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
ARG user_id=1000
ARG group_id=1000
RUN set -xe \
# Create a home directory for www-data
&& mkdir --parents /home/www-data \
&& chown --recursive www-data:www-data /home/www-data \
# Make www-data's user and group id match my host user's ones (1000 and 1000)
&& usermod --home /home/www-data --uid $user_id www-data \
&& groupmod --gid $group_id www-data \
# Add sudo and let www-data execute it without asking for a password
&& apt-get update \
&& apt-get install --yes --no-install-recommends sudo \
&& rm --recursive --force /var/lib/apt/lists/* \
&& echo "www-data ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/www-data
USER www-data
# Run entrypoint and command as sudo, as my entrypoint does some config substitution and both normally run as root
ENTRYPOINT [ "sudo", "-E", "custom-entrypoint.sh" ]
CMD [ "sudo", "-E", "apache2-foreground" ]
Here's my custom-entrypoint.sh
#!/bin/sh
set -e
sed --in-place 's#^RemoteIPTrustedProxy.*#RemoteIPTrustedProxy '"$REMOTEIP_TRUSTED_PROXY"'#' $APACHE_CONFDIR/conf-available/remoteip.conf
exec docker-php-entrypoint "$@"
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again? Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again?
First, get rid of sudo, if you need to be root in your container, run it as root with USER root in your Dockerfile. There's little value add to sudo in the container since it should be an environment to run one app and not a multi-user general purpose Linux host.
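Applied to the question's Dockerfile, that would mean something like this (a minimal sketch reusing the question's file names):
USER root
ENTRYPOINT [ "custom-entrypoint.sh" ]
CMD [ "apache2-foreground" ]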
Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
The pattern I go with is to have developers launch the container as root, and have the entrypoint detect the uid/gid of the mounted volume, and adjust the uid/gid of the user in the container to match that id before running gosu to drop permissions and run as that user. I've included a lot of this logic in my base image example (note the fix-perms script that tweaks the uid/gid). Another example of that pattern is in my jenkins-docker image.
You'll still need to either configure root's login shell to automatically run gosu inside the container, or remember to always pass -u www-data when you exec into your image, but now that uid/gid will match your host.
This is primarily for development. In production, you probably don't want host volumes; use named volumes instead, or at least hardcode the uid/gid of the user in the image to match the desired id on the production hosts. That means the Dockerfile would still have USER www-data, but the docker-compose.yml for developers would have user: root, which doesn't exist in the compose file used in production. You can find a bit more on this in my DockerCon 2019 talk (video here).
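A minimal sketch of that entrypoint pattern (assuming gosu is installed in the image and /var/www/html is the mounted volume; the real fix-perms script does more, e.g. fixing ownership of existing files):
#!/bin/sh
set -e
# align www-data's uid/gid with whoever owns the mounted volume
uid=$(stat -c '%u' /var/www/html)
gid=$(stat -c '%g' /var/www/html)
if [ "$uid" != "0" ] && [ "$uid" != "$(id -u www-data)" ]; then
    groupmod --gid "$gid" www-data
    usermod --uid "$uid" --gid "$gid" www-data
fi
# drop from root to www-data and hand over PID 1
exec gosu www-data apache2-foreground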
You can use user namespaces to map a different user/group in your container to your user on the host.
For example, the group www-data (gid 33) in the container could be the group docker-www-data (gid 100033) on the host; you just have to be in that group to access the log files.
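For example (a sketch; the remap user and ranges shown are the Docker defaults, adjust as needed), enable remapping in /etc/docker/daemon.json and check the subordinate ID ranges:
# /etc/docker/daemon.json
{
  "userns-remap": "default"
}
# /etc/subgid -- with this range, gid 33 in the container maps to gid 100033 on the host
dockremap:100000:65536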
I am using a CentOS docker image to build a container and host a Drupal site. I need to run drush commands after Apache starts, but none of the commands after Apache starts ever run. Is there any way to run drush commands after Apache starts? My startup script has the following lines:
/usr/sbin/httpd -DFOREGROUND
drush updb -y
Apache might be taking some time to start after you fire that command, and your drush command might be executing even before Apache has successfully started.
There can be two ways:
Either put a static sleep before running the drush commands,
or, after the Apache start command, put a loop that checks Apache's status and sleeps while it is not running yet, then breaks out of the loop and runs your drush commands (see the sketch below).
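A rough sketch of the second option (assuming curl is available in the image and the site answers on localhost):
# start Apache in the background instead of blocking on it
/usr/sbin/httpd -DFOREGROUND &
# wait until Apache actually answers
until curl --silent --fail http://localhost/ > /dev/null 2>&1; do
    sleep 2
done
drush updb -y
# keep the container attached to the Apache process
wait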
"Where" is this been executed? If it is not in your drupal home directory or below probably drush can't find your site.
Try "cd /var/www/html/orwhateverdirthesitehave" before drush.
On the other hand it doesn't seem safe to run this data model changes automatically. Only need to run that when a module is updated AND the updated requires a data model change... IMHO it's to risky to automate like this.
If it's not about updb exactly consider using a secondary script with all the drush stuff and starting changing the working dir to the drupal instalation path.
/usr/sbin/httpd -DFOREGROUND
/home/myuser/drushcommands.sh
and in drushcommands.sh
#!/bin/sh
cd /var/www/html/mydrupal
drush cim -y
...
I'm using Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after the container starts.
Here is the problem: Unless elasticsearch is running, the script will not initialize any security index.
The script's content is :
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
For now I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and get recreated automatically for some reason. In that case, the plugin has to be initialized automatically after the container starts!
So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script: for example, create a new script, mount it, have it call /run/entrypoint.sh first, and then wait for Elasticsearch to start before running your init_sg.sh.
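A sketch of such a wrapper (the health-check URL is an assumption; with Search Guard enforcing TLS you may need https plus credentials instead):
#!/bin/sh
# wrapper entrypoint: run the original entrypoint in the background,
# then initialize Search Guard once Elasticsearch is reachable
/run/entrypoint.sh &
until curl --silent --output /dev/null http://localhost:9200; do
    sleep 5
done
cd /usr/share/elasticsearch && bin/init_sg.sh
# stay attached to the Elasticsearch process
wait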
Not sure this will solve your problem, but it's worth checking the Dockerfile in my repo.
I have created a simple run.sh file, copied it to the docker image, and wrote CMD ["run.sh"] in the Dockerfile. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. You can find another example below:
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
# start the application in the background
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
# start a background stress workload
nohup stress-ng --vm 4 &
# keep the container's main process alive
while true; do
    sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks for ES status, and only start initialization of SG when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
verifier=$?
if [ 0 = $verifier ]
then
echo "Running search guard plugin initialization"
/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
break
else
echo "ES is not running yet"
sleep 5
fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after the container starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way sginit.sh will start in the background, and will only initialize SG after ES has started.
The reason to have this sginit.sh script start before ES, in the background, is so that it does not block ES from starting. The same logic applies if you put it after starting ES: it will never run unless you put the starting of ES in the background.
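Putting it together, the start of the image's run script might look like this (a sketch; the exact Elasticsearch start command depends on your image):
# start the Search Guard init watcher first, in the background
/sginit.sh &
# then start Elasticsearch in the foreground as usual
exec bin/elasticsearch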
I would suggest putting a CMD in your Dockerfile that executes the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, check it against your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example, using && to separate the commands:
ENTRYPOINT service nginx start && service ssh start && /bin/bash
You can also use the wait-for-it script. It waits for the availability of a host and TCP port, which is useful for synchronizing the spin-up of interdependent services, and it works like a charm with containers. It has no external dependencies, so you can just run it from a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make elasticsearch write data to a folder that is not declared as a volume in Elasticsearch's official Dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
Help!
I want to set up a monitoring service on my Debian server that will monitor, and start when needed, the updater for Tiny Tiny RSS. The problem is that the updater is a PHP foreground process, normally run in a screen session by a non-root user.
I can run it as:
php ./update_daemon2.php
or better, putting it in the background in order to run it from a different account:
sudo -u tinyrssuser php ./update_daemon2.php -daemon > /dev/null & disown $!
I have installed monit, but I can't seem to find a way to have it detect whether the updater is running.
I would prefer to stay with monit, but it is not a requirement.
Any ideas would be appreciated.
Found the answer at:
http://510x.se/notes/posts/Install_Tiny_Tiny_RSS_on_Debian/
But use this script instead, under /etc/init.d/:
http://mylostnotes.blogspot.co.il/2013/03/tiny-tiny-rss-initd-script.html
Make sure to set the user and group.
Create an upstart script /etc/init/ttrss.conf:
description "TT-RSS Feed Updater"
author "The Epyon Avenger <epyon_avenger on TT-RSS forums>"
env USER=www-data
env TTRSSDIR=/var/www/ttrss
start on started mysql
stop on stopping mysql
respawn
exec start-stop-daemon --start --make-pidfile --pidfile /var/run/ttrss.pid --chdir $TTRSSDIR --chuid $USER --group $USER --exec /usr/bin/php ./update_daemon2.php >> /var/log/ttrss/ttrss.log 2>&1
Start the script:
sudo start --system ttrss
Add the following lines to your monit conf:
check process ttrss with pidfile /var/run/ttrss.pid
start program = "/sbin/start ttrss"
stop program = "/sbin/stop ttrss"