nginx-alpine: how to view its Dockerfile

I have the following image: nginx:1.19.0-alpine
I want to know about its Dockerfile. I checked https://github.com/nginxinc/docker-nginx but could not work out how to find it.
Basically I want to change /docker-entrypoint.sh.
Currently it is:
#!/usr/bin/env sh
# vim:sw=4:ts=4:et

set -e

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"
        echo "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -n | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo "$0: Ignoring $f";;
            esac
        done
        echo "$0: Configuration complete; ready for start up"
    else
        echo "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

# Handle enabling SSL
if [ "$ENABLE_SSL" = "True" ]; then
    echo "Enabling SSL support!"
    cp /etc/nginx/configs/default_ssl.conf /etc/nginx/conf.d/default.conf
fi

exec "$@"
I want to modify it as:

#!/usr/bin/env sh
# vim:sw=4:ts=4:et

set -x -o verbose;
echo $1

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"
        echo "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -n | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo "$0: Ignoring $f";;
            esac
        done
        echo "$0: Configuration complete; ready for start up"
    else
        echo "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

# Handle enabling SSL
if [ "$ENABLE_SSL" = "True" ]; then
    echo "Enabling SSL support!"
    cp /etc/nginx/configs/default_ssl.conf /etc/nginx/conf.d/default.conf
fi

exec "$@"
I see $1 is being passed, but I have no clue what to pass.

Normally, the CMD from the Dockerfile is sent as parameters to the ENTRYPOINT. In the generated Dockerfile (found here for current alpine: https://github.com/nginxinc/docker-nginx/blob/master/stable/alpine/Dockerfile), you can see that the CMD is CMD ["nginx", "-g", "daemon off;"].
So in normal use, the parameters for the entrypoint script are "nginx", "-g" and "daemon off;".
Since the first parameter is "nginx", the script will enter the if block and run the code in there.
The if is there in case you want to run a different command. For instance, if you want to enter the shell and look around inside the image, you could do docker run -it nginx:alpine /bin/sh. Now the parameter to the entrypoint script is just "/bin/sh", and the script won't enter the if-block.
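To make this concrete, here is what reaches the entrypoint in both cases:

# CMD from the Dockerfile becomes the entrypoint's $1, $2, $3:
docker run --rm nginx:1.19.0-alpine
#   -> /docker-entrypoint.sh nginx -g 'daemon off;'

# Anything placed after the image name replaces CMD entirely:
docker run --rm -it nginx:1.19.0-alpine /bin/sh
#   -> /docker-entrypoint.sh /bin/sh   (the if-block is skipped)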
If you want to build your own, modified version of the image, you can fork https://github.com/nginxinc/docker-nginx/ and build from your own version of the repo.
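If all you actually need is the modified /docker-entrypoint.sh, a small derived image is often enough instead of a full fork; a minimal sketch, assuming your edited script sits next to the Dockerfile in the build context:

FROM nginx:1.19.0-alpine
# replace the stock entrypoint with the edited copy from the build context
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
# ENTRYPOINT and CMD are inherited unchanged from the base image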
If you need the 1.19 version of the Dockerfile, you'll have to look through the old versions on the repo. The 1.19.0 version is here: https://github.com/nginxinc/docker-nginx/blob/1.19.0/stable/alpine/Dockerfile

Related

How to make docker keep running in the foreground and not exit, so that I can see the running log output

I want to make a docker command run in the foreground so that I can see the log output. Currently I am using this command to run my docker container:
docker run -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server dolphinjiang/canal-server:v1.1.5
this is the Dockerfile of my project:
FROM centos:7
RUN cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo ZONE=\"Asia/Shanghai\" > /etc/sysconfig/clock
RUN rm -rf /etc/yum.repos.d/*.repo
COPY CentOS6-Base-163.repo /etc/yum.repos.d/
RUN yum clean all
RUN groupadd -g 2500 canal; useradd -u 2501 -g canal -d /home/canal -m canal
RUN echo canal:De#2018er | chpasswd; echo root:dockerroot | chpasswd
RUN yum -y update && yum -y install wget vi openssl.x86_64 glibc.x86_64 tar tar.x86_64 inetutils-ping net-tools telnet which file
RUN yum clean all
COPY jdk-8u291-linux-x64.tar.gz /opt
RUN tar -zvxf /opt/jdk-8u291-linux-x64.tar.gz -C /opt && \
    rm -rf /opt/jdk-8u291-linux-x64.tar.gz && \
    chmod -R 755 /opt/jdk1.8.0_291 && \
    chown -R root:root /opt/jdk1.8.0_291
RUN echo 'export JAVA_HOME=/opt/jdk1.8.0_291' >> /etc/profile
RUN echo 'export JRE_HOME=$JAVA_HOME/jre' >> /etc/profile
RUN echo 'export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH' >> /etc/profile
RUN echo 'export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH' >> /etc/profile
RUN source /etc/profile
RUN yum install kde-l10n-Chinese -y
RUN yum install glibc-common -y
RUN localedef -c -f UTF-8 -i zh_CN zh_CN.utf8
ENV JAVA_HOME /opt/jdk1.8.0_291
ENV PATH $PATH:$JAVA_HOME/bin
ENV LANG zh_CN.UTF-8
ENV LC_ALL zh_CN.UTF-8
ADD canal-server /home/canal/
RUN chmod 755 /home/canal/bin
WORKDIR /home/canal/bin
RUN chmod 777 /home/canal/bin/restart.sh
RUN chmod 777 /home/canal/bin/startup.sh
RUN chmod 777 /home/canal/bin/stop.sh
RUN chmod 777 /home/canal/bin/config.sh
CMD /home/canal/bin/config.sh
this is the config.sh:
cat > /home/canal/conf/canal.properties <<- EOF
# register ip
canal.register.ip = ${HOSTNAME}.canal-server-discovery-svc-stable.testcanal.svc.cluster.local
# canal admin config
canal.admin.manager = canal-admin-stable:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster =
EOF
sh /home/canal/bin/restart.sh
and this is the restart.sh:
#!/bin/bash

args=$@

case $(uname) in
Linux)
    bin_abs_path=$(readlink -f $(dirname $0))
    ;;
*)
    bin_abs_path=$(cd $(dirname $0) || exit; pwd)
    ;;
esac

sh "$bin_abs_path"/stop.sh $args
sh "$bin_abs_path"/startup.sh $args
and this is the startup.sh:
#!/bin/bash

current_path=`pwd`

case "`uname`" in
Linux)
    bin_abs_path=$(readlink -f $(dirname $0))
    ;;
*)
    bin_abs_path=`cd $(dirname $0); pwd`
    ;;
esac

base=${bin_abs_path}/..
canal_conf=$base/conf/canal.properties
canal_local_conf=$base/conf/canal_local.properties
logback_configurationFile=$base/conf/logback.xml
export LANG=en_US.UTF-8
export BASE=$base

if [ -f $base/bin/canal.pid ] ; then
    echo "found canal.pid , Please run stop.sh first ,then startup.sh" 2>&2
    exit 1
fi

if [ ! -d $base/logs/canal ] ; then
    mkdir -p $base/logs/canal
fi

## set java path
if [ -z "$JAVA" ] ; then
    JAVA=$(which java)
fi

ALIBABA_JAVA="/usr/alibaba/java/bin/java"
TAOBAO_JAVA="/opt/taobao/java/bin/java"
if [ -z "$JAVA" ]; then
    if [ -f $ALIBABA_JAVA ] ; then
        JAVA=$ALIBABA_JAVA
    elif [ -f $TAOBAO_JAVA ] ; then
        JAVA=$TAOBAO_JAVA
    else
        echo "Cannot find a Java JDK. Please set either set JAVA or put java (>=1.5) in your PATH." 2>&2
        exit 1
    fi
fi

case "$#" in
0 )
    ;;
1 )
    var=$*
    if [ "$var" = "local" ]; then
        canal_conf=$canal_local_conf
    else
        if [ -f $var ] ; then
            canal_conf=$var
        else
            echo "THE PARAMETER IS NOT CORRECT.PLEASE CHECK AGAIN."
            exit
        fi
    fi;;
2 )
    var=$1
    if [ "$var" = "local" ]; then
        canal_conf=$canal_local_conf
    else
        if [ -f $var ] ; then
            canal_conf=$var
        else
            if [ "$1" = "debug" ]; then
                DEBUG_PORT=$2
                DEBUG_SUSPEND="n"
                JAVA_DEBUG_OPT="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=$DEBUG_SUSPEND"
            fi
        fi
    fi;;
* )
    echo "THE PARAMETERS MUST BE TWO OR LESS.PLEASE CHECK AGAIN."
    exit;;
esac

str=`file -L $JAVA | grep 64-bit`
if [ -n "$str" ]; then
    JAVA_OPTS="-server -Xms2048m -Xmx3072m -Xmn1024m -XX:SurvivorRatio=2 -XX:PermSize=96m -XX:MaxPermSize=256m -Xss256k -XX:-UseAdaptiveSizePolicy -XX:MaxTenuringThreshold=15 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError"
else
    JAVA_OPTS="-server -Xms1024m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:MaxPermSize=128m "
fi

JAVA_OPTS=" $JAVA_OPTS -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8"
CANAL_OPTS="-DappName=otter-canal -Dlogback.configurationFile=$logback_configurationFile -Dcanal.conf=$canal_conf"

if [ -e $canal_conf -a -e $logback_configurationFile ]
then
    for i in $base/lib/*;
    do CLASSPATH=$i:"$CLASSPATH";
    done
    CLASSPATH="$base/conf:$CLASSPATH";
    echo "cd to $bin_abs_path for workaround relative path"
    cd $bin_abs_path
    echo LOG CONFIGURATION : $logback_configurationFile
    echo canal conf : $canal_conf
    echo CLASSPATH :$CLASSPATH
    $JAVA $JAVA_OPTS $JAVA_DEBUG_OPT $CANAL_OPTS -classpath .:$CLASSPATH com.alibaba.otter.canal.deployer.CanalLauncher 2>&1
    echo $! > $base/bin/canal.pid
    echo "cd to $current_path for continue"
    cd $current_path
else
    echo "canal conf("$canal_conf") OR log configration file($logback_configurationFile) is not exist,please create then first!"
fi
After I start the container, it exits automatically and there is no log output. What should I do to make it run in the foreground (and, after it comes up successfully, move it to the background)? I also tried to run it as a daemon like this (to make the container run in the background and not exit):
docker run -it -d -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server canal/canal-server:v1.1.5
The process still exits automatically, and the container does not start up.
Basically, you should get the point (based on your latest comment).
A Docker container runs one command; when that command finishes, the container stops.
So to make it run continuously, the command itself must run indefinitely.
Also check this answer, where there is more explanation:
Why docker exiting with code 0
One of the easiest solutions is to tail some logs, like:
tail -f /dev/null
Taken from here
You can use tail -f /dev/null to keep the container from stopping; try this:
docker run -it -d -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server canal/canal-server:v1.1.5 tail -f /dev/null
see also this post
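If you would rather have the container actually run canal in the foreground instead of being kept alive artificially, another option is to end config.sh with a long-running foreground command; a minimal sketch, assuming canal writes its log to /home/canal/logs/canal/canal.log (the exact log path is an assumption):

# end of config.sh: start canal, then follow its log in the foreground
sh /home/canal/bin/startup.sh
exec tail -F /home/canal/logs/canal/canal.log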

How to pass a target argument in a sh script to run Docker build for a specific environment

I'm building a sh script to be able to run my Docker containers all at once, so I don't need to start single containers every time.
What I need to add now is a way to run my docker-compose up --build -target=<ENV>, where ENV will be DEV or PROD. That way I can run the right environment in my Docker setup.
At this time my script looks as follows, but when I try to pass $2 = $DEV it gives me an error of [: =: unary operator expected, and I don't know what the right fix could be.
#!/bin/bash

CLEAN="clean"
RUN="run"
STOP="stop"
DEV="dev"
PROD="prod"

if [ "$#" -eq 0 ] || [ $1 = "-h" ] || [ $1 = "--help" ]; then
    echo "Usage: ./myapp [OPTIONS] COMMAND [arg...]"
    echo "       ./myapp [ -h | --help ]"
    echo ""
    echo "Options:"
    echo "  -h, --help    Prints usage."
    echo ""
    echo "Commands:"
    echo "  $CLEAN  - Stop and Remove containers."
    echo "  $RUN    - Build and Run containers."
    echo "  $STOP   - Stop containers."
    exit
fi

clean() {
    stop_existing
    remove_stopped_containers
    remove_unused_volumes
}

run() {
    echo "Cleaning..."
    clean
    echo "Running docker..."
    if [ $2 = $DEV ]; then
        echo "$DEV - Running in - $DEV - environment"
        docker-compose up --build -target=$DEV
    fi
}

stop_existing() {
    MYAPP="$(docker ps --all --quiet --filter=name=wetaxitask_api_dev)"
    REDIS="$(docker ps --all --quiet --filter=name=wetaxitask_redis)"
    MONGO="$(docker ps --all --quiet --filter=name=wetaxitask_mongodb)"

    if [ -n "$MYAPP" ]; then
        docker stop $MYAPP
    fi
    if [ -n "$REDIS" ]; then
        docker stop $REDIS
    fi
    if [ -n "$MONGO" ]; then
        docker stop $MONGO
    fi
}

remove_stopped_containers() {
    CONTAINERS="$(docker ps -a -f status=exited -q)"
    if [ ${#CONTAINERS} -gt 0 ]; then
        echo "Removing all stopped containers."
        docker rm $CONTAINERS
    else
        echo "There are no stopped containers to be removed."
    fi
}

remove_unused_volumes() {
    CONTAINERS="$(docker volume ls -qf dangling=true)"
    if [ ${#CONTAINERS} -gt 0 ]; then
        echo "Removing all unused volumes."
        docker volume rm $CONTAINERS
    else
        echo "There are no unused volumes to be removed."
    fi
}

if [ $1 = $CLEAN ]; then
    echo "Cleaning..."
    clean
    exit
fi

if [ $1 = $RUN ]; then
    run
    exit
fi

if [ $1 = $STOP ]; then
    stop_existing
    exit
fi
What I want to achieve is to be able to run my script as follows:
./script.sh run dev (or prod)
When you invoke a shell function, it has its own argument list. The POSIX shell spec indicates:
The operands to the command temporarily shall become the positional parameters during the execution of the compound-command
So, if you define and call a shell function
print_two_things() {
    echo "dollars one is $1"
    echo "dollars two is $2"
}
print_two_things foo bar
print_two_things
$1 and $2 are the arguments to the function, not the script.
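Run as above, this prints the following; the second call shows empty values because the function itself received no arguments:

dollars one is foo
dollars two is bar
dollars one is
dollars two is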
In your script, there are two errors in the run function
run() {
    if [ $2 = $DEV ]; then :; fi
}
Here, again, $2 is the second argument to the function, not the script; since you invoke this as just run with no arguments it's an empty string. Second, you don't quote either of these variables, so the empty string just gets dropped. This expands to the nonsensical [ = dev ] which produces the error you see.
You need to either capture the positional parameters in variables at the top level, or pass them down to the shell function. An example of the latter approach could be:
run() {
    environment="$1" # first parameter _to this function_
    clean
    if [ "$environment" = "$DEV" ]; then :; fi
}

# at the top level; parameters _to the script_
if [ "$1" = "$RUN" ]; then
    run "$2"
    exit 0
fi
In a Docker context you might find it easier just to pass this in as an environment variable. Anything you set with a docker run -e option or similar settings will be directly available as environment variables in shell scripts. It's also usually considered a best practice to run identical images in dev, test, and prod if at all possible.
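For example (a sketch; the TARGET_ENV name and the myapp image are placeholders, not part of the question's setup):

docker run -e TARGET_ENV=dev myapp
# inside any shell script in the container:
if [ "$TARGET_ENV" = "dev" ]; then
    echo "running in dev mode"
fi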

cron not running in alpine docker

I have created and added the below entry in my entry-point.sh for my Docker image.
# start cron
/usr/sbin/crond &

exec "${DIST}/bin/ss" "$@"
my crontab.txt looks like below:
bash-4.4$ crontab -l
*/5 * * * * /cleanDisk.sh >> /apps/log/cleanDisk.log
So when I run the docker container, I don't see any file created called cleanDisk.log.
I have set up all the permissions, and crond is running as a process in my container, see below:
bash-4.4$ ps -ef | grep cron
12 sdc 0:00 /usr/sbin/crond
208 sdc 0:00 grep cron
So, can anyone guide me on why the log file is not getting created?
My cleanDisk.sh looks like below. Since it runs for the very first time and doesn't match all the criteria, I would expect it at least to print "No Error file found on Host $(hostname)" in cleanDisk.log.
#!/bin/bash

THRESHOLD_LIMIT=20
RETENTION_DAY=3

df -Ph /apps/ | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output
do
    #echo $output
    used=$(echo $output | awk '{print $1}' | sed s/%//g)
    partition=$(echo $output | awk '{print $2}')
    if [ $used -ge ${THRESHOLD_LIMIT} ]; then
        echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)"
        FILE_COUNT=$(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print | wc -l)
        if [ ${FILE_COUNT} -gt 0 ]; then
            echo "There are ${FILE_COUNT} files older than ${RETENTION_DAY} days on Host $(hostname)."
            for FILENAME in $(find ${SDC_LOG} -maxdepth 1 -mtime +${RETENTION_DAY} -type f -name "sdc-*.sdc" -print);
            do
                ERROR_FILE_SIZE=$(stat -c%s ${FILENAME} | awk '{ split( "B KB MB GB TB PB" , v ); s=1; while( $1>1024 ){ $1/=1024; s++ } printf "%.2f %s\n", $1, v[s] }')
                echo "Before Deleting Error file ${FILENAME}, the size was ${ERROR_FILE_SIZE}."
                rm -rf ${FILENAME}
                rc=$?
                if [[ $rc -eq 0 ]];
                then
                    echo "Error log file ${FILENAME} with size ${ERROR_FILE_SIZE} is deleted on Host $(hostname)."
                fi
            done
        fi
        if [ ${FILE_COUNT} -eq 0 ]; then
            echo "No Error file found on Host $(hostname)."
        fi
    fi
done
Edit: my Dockerfile looks like this:
FROM adoptopenjdk/openjdk8:jdk8u192-b12-alpine

ARG SDC_UID=20159
ARG SDC_GID=20159
ARG SDC_USER=sdc

RUN apk add --update --no-cache bash \
        busybox-suid \
        sudo && \
    echo 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4' >> /etc/nsswitch.conf

RUN addgroup --system ${SDC_USER} && \
    adduser --system --disabled-password -u ${SDC_UID} -G ${SDC_USER} ${SDC_USER}

ADD --chown=sdc:sdc crontab.txt /etc/crontabs/sdc/
RUN chgrp sdc /etc/cron.d /etc/crontabs /usr/bin/crontab
# Also tried to run like this but not working
# RUN /usr/bin/crontab -u sdc /etc/crontabs/sdc/crontab.txt

USER ${SDC_USER}
EXPOSE 18631
RUN /usr/bin/crontab /etc/crontabs/sdc/crontab.txt

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["dc", "-exec"]

Run process with non-root user in docker container

I'm building a Redis Sentinel image whose run.sh should run as a non-root user.
run.sh:
function launchsentinel() {
  while true; do
    master=$(redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' ' | cut -d' ' -f1)
    if [[ -n ${master} ]]; then
      master="${master//\"}"
    else
      master=$(hostname -i)
    fi

    redis-cli -h ${master} INFO
    if [[ "$?" == "0" ]]; then
      break
    fi
    echo "Connecting to master failed. Waiting..."
    sleep 10
  done

  sentinel_conf=/home/ubuntu/sentinel.conf

  echo "sentinel monitor mymaster ${master} 6379 2" > ${sentinel_conf}
  echo "sentinel down-after-milliseconds mymaster 60000" >> ${sentinel_conf}
  echo "sentinel failover-timeout mymaster 180000" >> ${sentinel_conf}
  echo "sentinel parallel-syncs mymaster 1" >> ${sentinel_conf}
  echo "bind 0.0.0.0" >> ${sentinel_conf}

  redis-sentinel ${sentinel_conf} --protected-mode no
}
function launchslave() {
  while true; do
    master=$(redis-cli -h ${REDIS_SENTINEL_SERVICE_HOST} -p ${REDIS_SENTINEL_SERVICE_PORT} --csv SENTINEL get-master-addr-by-name mymaster | tr ',' ' ' | cut -d' ' -f1)
    if [[ -n ${master} ]]; then
      master="${master//\"}"
    else
      echo "Failed to find master."
      sleep 60
      exit 1
    fi
    redis-cli -h ${master} INFO
    if [[ "$?" == "0" ]]; then
      break
    fi
    echo "Connecting to master failed. Waiting..."
    sleep 10
  done

  sed -i "s/%master-ip%/${master}/" /redis-slave/redis.conf
  sed -i "s/%master-port%/6379/" /redis-slave/redis.conf
  redis-server /redis-slave/redis.conf --protected-mode no
}
Dockerfile
FROM alpine:3.4
RUN apk add --no-cache redis sed bash busybox-suid
#su: must be suid to work properly
COPY redis-master.conf /redis-master/redis.conf
COPY redis-slave.conf /redis-slave/redis.conf
RUN adduser -D ubuntu
USER ubuntu
COPY run.sh /home/ubuntu/run.sh
CMD [ "sh", "/home/ubuntu/run.sh" ]
ENTRYPOINT [ "bash", "-c" ]
I deployed it in OpenShift. The container is continuously restarting and I don't see any logs either. I did see some logs before, when run.sh ran as root (the default), i.e. without any adduser in the Dockerfile.
According to the docker documentation:
Both CMD and ENTRYPOINT instructions define what command gets executed when running a container.
There are a few rules that describe their co-operation:
1. Dockerfile should specify at least one of CMD or ENTRYPOINT commands.
2. CMD will be overridden when running the container with alternative arguments.
In the above Dockerfile the CMD and ENTRYPOINT layers don't combine the way you expect: when both are in exec form, the CMD list is appended to the ENTRYPOINT list as arguments, so the container runs bash -c sh /home/ubuntu/run.sh, which starts an idle sh that exits immediately, and run.sh is never executed.
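Spelled out, the resulting command line is:

# ENTRYPOINT and CMD are concatenated into a single argv:
#   bash -c sh /home/ubuntu/run.sh
# bash -c takes only the first operand ("sh") as the command string;
# "/home/ubuntu/run.sh" merely becomes $0, so an idle sh starts
# with nothing to execute and exits at once.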
Just delete ENTRYPOINT layer from the Dockerfile, it is not needed here:
FROM alpine:3.4
RUN apk add --no-cache redis sed bash busybox-suid
#su: must be suid to work properly
COPY redis-master.conf /redis-master/redis.conf
COPY redis-slave.conf /redis-slave/redis.conf
RUN adduser -D ubuntu
USER ubuntu
COPY run.sh /home/ubuntu/run.sh
CMD [ "sh", "/home/ubuntu/run.sh" ]
Update:
I see that [[ ]] is used in the run.sh script. This construct works in bash, not in sh. That's why the Dockerfile should be the following:
FROM alpine:3.4
RUN apk add --no-cache redis sed bash busybox-suid
#su: must be suid to work properly
COPY redis-master.conf /redis-master/redis.conf
COPY redis-slave.conf /redis-slave/redis.conf
RUN adduser -D ubuntu
USER ubuntu
COPY run.sh /home/ubuntu/run.sh
CMD [ "bash", "/home/ubuntu/run.sh" ]

Celery Daemon "cannot connect to amqp"

I'm trying to run celery as a daemon in the background on Ubuntu 14.04.
I've followed the instructions at http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html and used the celeryd shell script
#!/bin/sh -e
# ============================================
# celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts
### BEGIN INIT INFO
# Provides: celeryd
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
# cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
echo "Error: This program can only be used by the root user."
echo " Unprivileged users must use the 'celery multi' utility, "
echo " or 'celery worker --detach'."
exit 1
fi
# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
SCRIPT_FILE=$(readlink "$0")
else
SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
local path="$1"
local owner=$(ls -ld "$path" | awk '{print $3}')
local iwgrp=$(ls -ld "$path" | cut -b 6)
local iwoth=$(ls -ld "$path" | cut -b 9)
if [ "$(id -u $owner)" != "0" ]; then
echo "Error: Config script '$path' must be owned by root!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with mailicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change ownership of the script:"
echo " $ sudo chown root '$path'"
exit 1
fi
if [ "$iwoth" != "-" ]; then # S_IWOTH
echo "Error: Config script '$path' cannot be writable by others!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
if [ "$iwgrp" != "-" ]; then # S_IWGRP
echo "Error: Config script '$path' cannot be writable by group!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
_config_sanity "$CELERY_DEFAULTS"
echo "Using config script: $CELERY_DEFAULTS"
. "$CELERY_DEFAULTS"
fi
# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
CELERYD_PID_FILE="$DEFAULT_PID_FILE"
CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
if [ ! -c /dev/null ]; then
echo "/dev/null is not a character device!"
exit 75 # EX_TEMPFAIL
fi
}
maybe_die() {
if [ $? -ne 0 ]; then
echo "Exiting: $* (errno $?)"
exit 77 # EX_NOPERM
fi
}
create_default_dir() {
if [ ! -d "$1" ]; then
echo "- Creating default directory: '$1'"
mkdir -p "$1"
maybe_die "Couldn't create directory $1"
echo "- Changing permissions of '$1' to 02755"
chmod 02755 "$1"
maybe_die "Couldn't change permissions for $1"
if [ -n "$CELERYD_USER" ]; then
echo "- Changing owner of '$1' to '$CELERYD_USER'"
chown "$CELERYD_USER" "$1"
maybe_die "Couldn't change owner of $1"
fi
if [ -n "$CELERYD_GROUP" ]; then
echo "- Changing group of '$1' to '$CELERYD_GROUP'"
chgrp "$CELERYD_GROUP" "$1"
maybe_die "Couldn't change group of $1"
fi
fi
}
check_paths() {
if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
create_default_dir "$CELERYD_LOG_DIR"
fi
if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
create_default_dir "$CELERYD_PID_DIR"
fi
}
create_paths() {
create_default_dir "$CELERYD_LOG_DIR"
create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
# note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}
_get_pids() {
found_pids=0
my_exitcode=0
for pidfile in $(_get_pidfiles); do
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
my_exitcode=1
else
found_pids=1
echo "$pid"
fi
if [ $found_pids -eq 0 ]; then
echo "${SCRIPT_NAME}: All nodes down"
exit $my_exitcode
fi
done
}
_chuid () {
su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
start_workers () {
if [ ! -z "$CELERYD_ULIMIT" ]; then
ulimit $CELERYD_ULIMIT
fi
_chuid $* start $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
dryrun () {
(C_FAKEFORK=1 start_workers --verbose)
}
stop_workers () {
_chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers () {
_chuid restart $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
kill_workers() {
_chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
echo "WARNING: Use with caution in production"
echo "The workers will attempt to restart, but they may not be able to."
local worker_pids=
worker_pids=`_get_pids`
[ "$one_failed" ] && exit 1
for worker_pid in $worker_pids; do
local failed=
kill -HUP $worker_pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
one_failed=true
else
echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
check_status () {
my_exitcode=0
found_pids=0
local one_failed=
for pidfile in $(_get_pidfiles); do
if [ ! -r $pidfile ]; then
echo "${SCRIPT_NAME} down: no pidfiles found"
one_failed=true
break
fi
local node=`basename "$pidfile" .pid`
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
else
local failed=
kill -0 $pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
one_failed=true
else
echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
fi
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
which I put in /etc/init.d/celeryd.
I've also got the following config file (also called celeryd), which lives in /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/drmclean/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="drmclean"
CELERYD_GROUP="drmclean"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I can easily start the celery service by using the command:
sudo /etc/init.d/celeryd start
and the service runs as I expect.
However, on start-up the service never runs. When I inspect the celery logfile, it says:
"[2014-09-17 16:27:41,541: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**#127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds..."
Can anyone help with this error? I can't see why the connection would be refused at boot but not when I start it myself with sudo /etc/init.d/celeryd start.
Are you connecting from a remote server? If yes, the guest user is not allowed to connect from a remote server. See https://www.rabbitmq.com/access-control.html
No, the entire thing is running on my own machine. Actually, it appears that my celeryd script, which is in /etc/init.d/celeryd, is never run on start-up. It's unclear why, though.
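One thing worth checking (an assumption, since the question doesn't show it): on Ubuntu 14.04 an init.d script only runs at boot after it has been registered in the runlevels, e.g.:

# register the init script so it runs at boot
sudo update-rc.d celeryd defaults
# verify the rc symlinks were created
ls -l /etc/rc*.d/ | grep celeryd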
