Docker stop/start resets to initial passwords

I've just started using Docker by copy-pasting pre-made repos from GitHub.
Here is the scenario, in steps:
1. I pass the MySQL/SSH root passwords via -e environment variables, and these passwords are set as expected inside entry.sh.
2. I then go inside the container and reset the SSH/MySQL root passwords to something different.
3. Now the main issue: each time I do docker stop + start from the host, the passwords are reset to the initial ones from step 1.
Please suggest changes so that the modified passwords from step 2 are retained even when I do docker stop/start.
The entry.sh and Dockerfile scripts can be checked in this GitHub repo.
Thanks.

I just noticed that entry.sh always resets the root password (with $MYSQL_ROOT_PASSWORD or $MYSQL_RANDOM_ROOT_PASSWORD) on every docker start. So, assuming we already persist /var/lib/mysql on the host, we can edit entry.sh a bit so it only sets the password when /var/lib/mysql doesn't exist yet:
#!/bin/sh
# start apache
echo "Starting httpd"
httpd
echo "Done httpd"

# check if the mysql data directory is nuked;
# if so, install the db
echo "Checking /var/lib/mysql folder"
if [ ! -f /var/lib/mysql/ibdata1 ]; then
    echo "Installing db"
    mariadb-install-db --user=mysql --ldata=/var/lib/mysql > /dev/null
    echo "Installed"

    # from the official mysql docker repo
    if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
        echo >&2 'error: database is uninitialized and password option is not specified '
        echo >&2 '  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_RANDOM_ROOT_PASSWORD'
        exit 1
    fi

    # random password
    if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
        echo "Using random password"
        MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
        echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
        echo "Done"
    fi

    tfile=`mktemp`
    if [ ! -f "$tfile" ]; then
        exit 1
    fi
    cat << EOF > $tfile
USE mysql;
DELETE FROM user;
FLUSH PRIVILEGES;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY "$MYSQL_ROOT_PASSWORD" WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
UPDATE user SET password=PASSWORD("") WHERE user='root' AND host='localhost';
FLUSH PRIVILEGES;
EOF
    echo "Querying user"
    /usr/bin/mysqld --user=root --bootstrap --verbose=0 < $tfile
    rm -f $tfile
    echo "Done query"

    # setting ssh root password
    if [ -z "$SSH_ROOT_PASSWORD" ]; then
        echo >&2 'You need to specify SSH_ROOT_PASSWORD'
        exit 1
    fi
    # Set the ssh root password; format is 'user:password'.
    echo "root:$SSH_ROOT_PASSWORD" | chpasswd
fi

echo "Generating ssh keys"
if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
    # generate fresh rsa key
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
fi
if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
    # generate fresh dsa key
    ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
fi

# prepare run dir
if [ ! -d "/var/run/sshd" ]; then
    mkdir -p /var/run/sshd
fi
ssh-keygen -A
/usr/sbin/sshd

# start mysql
# nohup mysqld_safe --skip-grant-tables --bind-address 0.0.0.0 --user mysql > /dev/null 2>&1 &
echo "Starting mariadb database"
exec /usr/bin/mysqld --user=root --bind-address=0.0.0.0
Basically, we just move the password-setting block inside the if block above it, so it only runs when the database is first initialized.
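For completeness, the fix assumes /var/lib/mysql really is persisted on the host. A hypothetical run command for that (the image name and host path are placeholders, not from the original repo):

docker run -d --name lamp \
    -e MYSQL_ROOT_PASSWORD=initial-secret \
    -e SSH_ROOT_PASSWORD=initial-secret \
    -v /srv/mysql-data:/var/lib/mysql \
    my-lamp-image

On the first run the data directory is empty, so entry.sh initializes the database and sets the passwords; on later docker stop/start cycles the ibdata1 check skips that block, so any passwords you changed by hand inside the container survive.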

retain the data inside database using docker

I have created a database.sql file and added it to a folder inside the container so that it can be used by the application. I want my data to remain persistent even when my container is removed. I tried using a volume, but how do I add the .sql file and keep that data persistent?
sudo docker -v /datadir sqldb
Here, sqldb is the database image name and datadir is the mount folder.
Dockerfile of sqldb:
FROM ubuntu:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server curl
ADD ./my.cnf /etc/mysql/my.cnf
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD database.sql /var/db/database.sql
ENV user root
ENV password password
ENV url file:/var/db/database.sql
ENV right WRITE
ADD ./start-database.sh /usr/local/bin/start-database.sh
RUN chmod +x /usr/local/bin/start-database.sh
EXPOSE 3306
CMD ["/usr/local/bin/start-database.sh"]
start-database.sh file:
#!/bin/bash
# This script starts the database server.
echo "Creating user $user for databases loaded from $url"

# Import database if provided via 'docker run --env url="http://ex.org/db.sql"'
echo "Adding data into MySQL"
/usr/sbin/mysqld &
sleep 5
curl $url -o import.sql

# Fixing some phpmysqladmin export problems
sed -ri.bak 's/-- Database: (.*?)/CREATE DATABASE \1;\nUSE \1;/g' import.sql

# Fixing some mysqldump export problems (when run without --databases switch)
# This is not tested so far
# if grep -q "CREATE DATABASE" import.sql; then :; else sed -ri.bak 's/-- MySQL dump/CREATE DATABASE `database_1`;\nUSE `database_1`;\n-- MySQL dump/g' import.sql; fi

mysql --default-character-set=utf8 < import.sql
rm import.sql
mysqladmin shutdown
echo "finished"

# Now the provided user credentials are added
/usr/sbin/mysqld &
sleep 5
echo "Creating user"
echo "CREATE USER '$user' IDENTIFIED BY '$password'" | mysql --default-character-set=utf8
echo "REVOKE ALL PRIVILEGES ON *.* FROM '$user'@'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "GRANT SELECT ON *.* TO '$user'@'%'; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
echo "finished"
if [ "$right" = "WRITE" ]; then
    echo "adding write access"
    echo "GRANT ALL PRIVILEGES ON *.* TO '$user'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql --default-character-set=utf8
fi

# And we restart the server to go operational
mysqladmin shutdown
cp /var/db/database.sql /var/lib/docker/volumes/mysqlvol/database.sql
echo "Starting MySQL Server"
/usr/sbin/mysqld
To keep data persistent even when the container is removed, please use volumes.
For more info, look at:
SO: How to deal with persistent storage (e.g. databases) in docker
Docker Webinar Q&A: Persistent Storage & Docker
Manage data in containers
Compose and volumes
Volume plugins
I hope that helps. If not, please ask.
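For instance, a named volume keeps the MySQL data directory outside the container's lifecycle (the volume and container names here are illustrative):

docker volume create mysqlvol
docker run -d --name sqldb-instance -v mysqlvol:/var/lib/mysql -p 3306:3306 sqldb

Removing the container and recreating it with the same -v mysqlvol:/var/lib/mysql mapping brings the data back, because the volume outlives the container.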

Jenkins to automate deb reprepro repository creation/signing

Problem statement:
I can sign repos from inside a normal terminal (also inside docker). From a Jenkins job, repo creation/signing fails and the job hangs.
Configuration:
Jenkins spawns docker container to create/sign deb repository.
Private and public keys all present.
gpg-agent installed on the docker container to sign the packages.
~/.gnupg/gpg.conf file has "use-agent" enabled
Progress:
Can start gpg-agent using jenkins on the docker container.
Can use gpg-preset-passphrase to cache passphrase.
Can run, outside Jenkins,
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
to fetch the passphrase from gpg-agent and sign the repo.
Problem:
From inside a Jenkins job, the command "reprepro --ask-passphrase -Vb ..." hangs.
Code:
starting gpg-agent:
GPGAGENT=/usr/bin/gpg-agent
GNUPG_PID_FILE=${GNUPGHOME}/gpg-agent-info
GNUPG_CFG=${GNUPGHOME}/gpg.conf
GNUPG_CFG=${GNUPGHOME}/gpg-agent.conf

function start_gpg_agent {
    GPG_TTY=$(tty)
    export GPG_TTY
    if [ -r "${GNUPG_PID_FILE}" ]
    then
        source "${GNUPG_PID_FILE}"
        count=$(ps lax | grep "${GPGAGENT}" | grep "$SSH_AGENT_PID" | wc -l)
        if [ $count -eq 0 ]
        then
            if ! ${GPGAGENT} 2>/dev/null; then
                $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
                    --daemon --enable-ssh-support \
                    --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
                if [[ $? -eq 0 ]]
                then
                    echo "INFO::agent started"
                else
                    echo "INFO::Agent could not be started. Exit."
                    exit -101
                fi
            fi
        fi
    else
        $GPGAGENT --debug-all --options ${BASE_PATH}/sign/gpg-agent.options \
            --daemon --allow-preset-passphrase --write-env-file ${GNUPG_PID_FILE}
    fi
}
options file:
default-cache-ttl 31536000
default-cache-ttl-ssh 31536000
max-cache-ttl 31536000
max-cache-ttl-ssh 31536000
enable-ssh-support
debug-all
Saving the passphrase:
/usr/lib/gnupg2/gpg-preset-passphrase -v --preset --passphrase ${_passphrase} ${_fp}
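Note that gpg-preset-passphrase caches against a keygrip, so ${_fp} above must be the key's keygrip rather than its fingerprint. With GnuPG 2.1 or later the keygrips can be listed with:

gpg --list-secret-keys --with-keygrip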
Finally (for completion), sign the repo:
reprepro --ask-passphrase -Vb . includedeb ${_repo_name} ${_pkg_location}
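One likely culprit, though this is my assumption rather than something the post establishes: GPG_TTY=$(tty) in start_gpg_agent misbehaves under Jenkins, because the job has no controlling terminal, and a pinentry waiting on a nonexistent TTY looks exactly like a hung reprepro. A defensive variant:

# only export GPG_TTY when the job actually has a terminal
if tty -s; then
    GPG_TTY=$(tty)
    export GPG_TTY
fi

And if the passphrase really is preset in the agent, --ask-passphrase can simply be dropped from the reprepro call so nothing ever tries to prompt.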

Share files between host system and docker container using specific UID

I'm trying to share files with a Docker guest using volume sharing. In order to get the same UID, and therefore interoperability with those files, I would like to create a user in the Docker guest with the same UID as my own user.
In order to test out the idea, I wrote the following simplistic Dockerfile:
FROM phusion/baseimage
RUN touch /root/uid-$UID
Testing it with docker build -t=docktest . and then docker run docktest ls -al /root reveals that the file is simply named uid-.
Is there a means to share host environment variables with Docker during the guest build process?
While researching a solution to this problem, I have found the following article to be a great resource: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
In my scripts, the solution boiled down to the following:
docker run --user $(id -u):$(id -g) -v /hostdirectory:/containerdirectory -v /etc/passwd:/etc/passwd myimage
Of course, id -u can be replaced by other means of retrieving a uid, such as stat -c "%u" /somepath for the owner of a path.
The environment is not shared at build time; you can use the -e/--env options to set environment variables in the container at run time.
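For example (the HOST_UID variable name is just an illustration):

docker run -e HOST_UID=$(id -u) busybox sh -c 'echo "host uid is $HOST_UID"'

Note this only works at run time; during docker build there is no access to the host environment, as a later answer explains.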
I usually use this approach when I want the container user to match the owner of the mapped volume: I check the uid & gid of the directory in the container and then create a corresponding user. Here is my script (setuser.sh), which creates a user for a directory:
#!/bin/bash
setuser() {
    if [ -z "$1" ]; then
        echo "Usage: $0 <path>"
        return
    fi
    CURRENT_UID=`id -u`
    DEST_UID=`stat -c "%u" $1`
    if [ $CURRENT_UID = $DEST_UID ]; then
        return
    fi
    DEST_GID=`stat -c "%g" $1`
    if [ -e /home/$DEST_UID ]; then
        return
    fi
    groupadd -g $DEST_GID $DEST_GID
    useradd -u $DEST_UID -g $DEST_GID $DEST_UID
    mkdir -p /home/$DEST_UID
    chown $DEST_UID:$DEST_GID /home/$DEST_UID
}
setuser $1
And this is the wrapper script, which runs commands as that user; the directory with the permissions is specified either in $USER_DIR or in /etc/user_dir:
#!/bin/bash
if [ -z "$USER_DIR" ]; then
    if [ -e /etc/user_dir ]; then
        export USER_DIR=`head -n 1 /etc/user_dir`
    fi
fi
if [ -n "$USER_DIR" ]; then
    if [ ! -d "$USER_DIR" ]; then
        echo "Please mount $USER_DIR before running this script"
        exit 1
    fi
    . `dirname $BASH_SOURCE`/setuser.sh $USER_DIR
fi
if [ -n "$USER_DIR" ]; then
    cd $USER_DIR
fi
if [ -e /etc/user_script ]; then
    . /etc/user_script
fi
if [ $CURRENT_UID = $DEST_UID ]; then
    "$@"
else
    su $DEST_UID -p -c "$*"
fi
P.S. Alleo suggested a different approach: map the users and groups files into the container and specify the uid and gid. Then your container does not depend on built-in users/groups and you can use it without additional scripts.
This is not possible and will probably never be possible because of the design philosophy of keeping builds independent of machines. Issue 6822.
I slightly modified @ISanych's answer:
#!/usr/bin/env bash

user_exists() {
    id -u $1 > /dev/null 2>&1
}

group_exists() {
    id -g $1 > /dev/null 2>&1
}

setuser() {
    if [[ "$#" != 3 ]]; then
        echo "Usage: $0 <path> <user> <group>"
        return
    fi
    local dest_uid=$(stat -c "%u" $1)
    local dest_gid=$(stat -c "%g" $1)
    if user_exists $dest_uid; then
        id -nu $dest_uid
        return
    fi
    local dest_user=$2
    local dest_group=$3
    if user_exists $dest_user; then
        userdel $dest_user
    fi
    if group_exists $dest_group; then
        groupdel $dest_group
    fi
    groupadd -g $dest_gid $dest_group
    useradd -u $dest_uid -g $dest_gid -s $DEFAULT_SHELL -d $DEFAULT_HOME -G root $dest_user
    chown -R $dest_uid:$dest_gid $DEFAULT_HOME
    id -nu $dest_user
}

REAL_USER=$(setuser $SRC_DIR $DEFAULT_USER $DEFAULT_GROUP)
REAL_USER=$(setuser $SRC_DIR $DEFAULT_USER $DEFAULT_GROUP)
The setuser function accepts the user and group names that you want to assign to the uid and gid of the provided directory. If a user with that uid already exists, it simply prints the login corresponding to that uid; otherwise it creates the user and group and prints the login originally passed to the function.
Either way, you get the login of the user that owns the destination directory.
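For context, here is a hypothetical entrypoint wiring for that snippet (the DEFAULT_* values, paths, and user/group names are assumptions; the answer doesn't show them):

#!/usr/bin/env bash
# illustrative defaults; bake them into the image or pass them with -e
export DEFAULT_SHELL=/bin/bash
export DEFAULT_HOME=/home/app
export SRC_DIR=${SRC_DIR:-/data}     # the mounted host directory
export DEFAULT_USER=app
export DEFAULT_GROUP=app
. /usr/local/bin/setuser.sh          # the script above; sets REAL_USER
exec su "$REAL_USER" -p -c "$*"      # run the requested command as the owner of $SRC_DIR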

Celery Daemon "cannot connect to amqp"

I'm trying to run celery as a daemon in the background on Ubuntu 14.04.
I've followed the instructions at http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html and used the celeryd shell script:
#!/bin/sh -e
# ============================================
#  celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts

### BEGIN INIT INFO
# Provides:          celeryd
# Required-Start:    $network $local_fs $remote_fs
# Required-Stop:     $network $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
#   cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
    echo "Error: This program can only be used by the root user."
    echo "       Unprivileged users must use the 'celery multi' utility, "
    echo "       or 'celery worker --detach'."
    exit 1
fi

# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
    SCRIPT_FILE=$(readlink "$0")
else
    SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"

DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"

CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}

# Make sure executable configuration script is owned by root
_config_sanity() {
    local path="$1"
    local owner=$(ls -ld "$path" | awk '{print $3}')
    local iwgrp=$(ls -ld "$path" | cut -b 6)
    local iwoth=$(ls -ld "$path" | cut -b 9)

    if [ "$(id -u $owner)" != "0" ]; then
        echo "Error: Config script '$path' must be owned by root!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change ownership of the script:"
        echo "    $ sudo chown root '$path'"
        exit 1
    fi

    if [ "$iwoth" != "-" ]; then  # S_IWOTH
        echo "Error: Config script '$path' cannot be writable by others!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
    if [ "$iwgrp" != "-" ]; then  # S_IWGRP
        echo "Error: Config script '$path' cannot be writable by group!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
}

if [ -f "$CELERY_DEFAULTS" ]; then
    _config_sanity "$CELERY_DEFAULTS"
    echo "Using config script: $CELERY_DEFAULTS"
    . "$CELERY_DEFAULTS"
fi

# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
    CELERY_APP_ARG="--app=$CELERY_APP"
fi

CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}

# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
    CELERYD_PID_FILE="$DEFAULT_PID_FILE"
    CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
    CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
    CELERY_CREATE_LOGDIR=1
fi

CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}

export CELERY_LOADER

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`

# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi

check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 75  # EX_TEMPFAIL
    fi
}

maybe_die() {
    if [ $? -ne 0 ]; then
        echo "Exiting: $* (errno $?)"
        exit 77  # EX_NOPERM
    fi
}

create_default_dir() {
    if [ ! -d "$1" ]; then
        echo "- Creating default directory: '$1'"
        mkdir -p "$1"
        maybe_die "Couldn't create directory $1"
        echo "- Changing permissions of '$1' to 02755"
        chmod 02755 "$1"
        maybe_die "Couldn't change permissions for $1"
        if [ -n "$CELERYD_USER" ]; then
            echo "- Changing owner of '$1' to '$CELERYD_USER'"
            chown "$CELERYD_USER" "$1"
            maybe_die "Couldn't change owner of $1"
        fi
        if [ -n "$CELERYD_GROUP" ]; then
            echo "- Changing group of '$1' to '$CELERYD_GROUP'"
            chgrp "$CELERYD_GROUP" "$1"
            maybe_die "Couldn't change group of $1"
        fi
    fi
}

check_paths() {
    if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
        create_default_dir "$CELERYD_LOG_DIR"
    fi
    if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
        create_default_dir "$CELERYD_PID_DIR"
    fi
}

create_paths() {
    create_default_dir "$CELERYD_LOG_DIR"
    create_default_dir "$CELERYD_PID_DIR"
}

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

_get_pidfiles () {
    # note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
    ${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}

_get_pids() {
    found_pids=0
    my_exitcode=0

    for pidfile in $(_get_pidfiles); do
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
            my_exitcode=1
        else
            found_pids=1
            echo "$pid"
        fi

        if [ $found_pids -eq 0 ]; then
            echo "${SCRIPT_NAME}: All nodes down"
            exit $my_exitcode
        fi
    done
}

_chuid () {
    su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}

start_workers () {
    if [ ! -z "$CELERYD_ULIMIT" ]; then
        ulimit $CELERYD_ULIMIT
    fi
    _chuid $* start $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        $CELERY_APP_ARG \
        $CELERYD_OPTS
}

dryrun () {
    (C_FAKEFORK=1 start_workers --verbose)
}

stop_workers () {
    _chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

restart_workers () {
    _chuid restart $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        $CELERY_APP_ARG \
        $CELERYD_OPTS
}

kill_workers() {
    _chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

restart_workers_graceful () {
    echo "WARNING: Use with caution in production"
    echo "The workers will attempt to restart, but they may not be able to."
    local worker_pids=
    worker_pids=`_get_pids`
    [ "$one_failed" ] && exit 1

    for worker_pid in $worker_pids; do
        local failed=
        kill -HUP $worker_pid 2> /dev/null || failed=true
        if [ "$failed" ]; then
            echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
            one_failed=true
        else
            echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}

check_status () {
    my_exitcode=0
    found_pids=0
    local one_failed=
    for pidfile in $(_get_pidfiles); do
        if [ ! -r $pidfile ]; then
            echo "${SCRIPT_NAME} down: no pidfiles found"
            one_failed=true
            break
        fi
        local node=`basename "$pidfile" .pid`
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
        else
            local failed=
            kill -0 $pid 2> /dev/null || failed=true
            if [ "$failed" ]; then
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
                one_failed=true
            else
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
            fi
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}

case "$1" in
    start)
        check_dev_null
        check_paths
        start_workers
    ;;
    stop)
        check_dev_null
        check_paths
        stop_workers
    ;;
    reload|force-reload)
        echo "Use restart"
    ;;
    status)
        check_status
    ;;
    restart)
        check_dev_null
        check_paths
        restart_workers
    ;;
    graceful)
        check_dev_null
        restart_workers_graceful
    ;;
    kill)
        check_dev_null
        kill_workers
    ;;
    dryrun)
        check_dev_null
        dryrun
    ;;
    try-restart)
        check_dev_null
        check_paths
        restart_workers
    ;;
    create-paths)
        check_dev_null
        create_paths
    ;;
    check-paths)
        check_dev_null
        check_paths
    ;;
    *)
        echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
        exit 64  # EX_USAGE
    ;;
esac

exit 0
which I put in /etc/init.d/celeryd.
I've also got the following config file (also called celeryd), which lives in /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/drmclean/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="drmclean"
CELERYD_GROUP="drmclean"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I can easily start the celery service by using the command:
sudo /etc/init.d/celeryd start
and the service runs as I expect.
However, on start-up the service never runs. When I inspect the logfile for celery, it says:
"[2014-09-17 16:27:41,541: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds..."
Can anyone help with this error? I can't see why the connection would be refused on start-up when it isn't refused after sudo /etc/init.d/celeryd start.
Are you connecting from a remote server? If yes, the guest user is not allowed to connect from a remote server. See https://www.rabbitmq.com/access-control.html
No, the entire thing is running on my own machine. Actually, it appears that my celeryd script, which is in
/etc/init.d/celeryd
is never running on start-up. It's unclear why, though.
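Since the manual sudo /etc/init.d/celeryd start works, a plausible explanation (an assumption; the thread doesn't resolve it) is boot ordering: if the init script isn't registered for the default runlevels, or runs before rabbitmq-server is listening on port 5672, the first connection attempts are refused. On Ubuntu 14.04 this can be checked with:

# register the script for the default runlevels
sudo update-rc.d celeryd defaults
# confirm the broker is actually up at the time celery starts
sudo service rabbitmq-server status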

Bash script for psql

Here is the shell script I am trying to run. The statement works when run directly as a command, but I get errors when it is run from the script.
#!/bin/bash
# sets CE IP addresses to act as LUS on pgsql

# Checks that user is logged in as root
if [ $(id -u) = "0" ]; then
    # Asks user for IP of CE1
    echo -n "Enter the IP address of your first CE's management module > "
    read CE1
    $(psql -U asm -d asm -t -c) echo """update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '$CE1';"""
    echo "LUS settings have been completed"
else
    # Warns user of error and sends status to stderr
    echo "You must be logged in as root to run this script." >&2
    exit 1
fi
Here is the error:
psql: option requires an argument -- 'c'
Try "psql --help" for more information.
update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '10.134.39.139';
Instead of
$(psql -U asm -d asm -t -c) echo
"""update zr_fsinstance
set lu_order='1' where managementaccesspointhostname = '$CE1';"""
try:
$(psql -U asm -d asm -t -c "UPDATE zr_fsinstance SET lu_order='1'
where managementaccesspointhostname = '${CE1}';")
OR (if you prefer):
`psql -U asm -d asm -t -c "UPDATE zr_fsinstance SET lu_order='1'
where managementaccesspointhostname = '${CE1}';"`
Either way, the SQL string has to be the argument of -c; in the original command, -c had no argument (hence "option requires an argument") and the query was passed to echo instead.
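For clarity, a minimal standalone version of the working step (same database, user, and table as above; the command-substitution wrapper isn't needed just to execute the statement):

#!/bin/bash
read -r -p "Enter the IP address of your first CE's management module > " CE1
psql -U asm -d asm -t \
     -c "UPDATE zr_fsinstance SET lu_order='1' WHERE managementaccesspointhostname = '${CE1}';"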
