Supervisord keeps restarting a process that exits with status 0 - docker

I am working on having supervisord send a curl request upon start. However, despite my best efforts, and despite setting autorestart to false, it keeps re-running the script.
[program:slack-client]
priority=99
autorestart=false
command=bash -c 'SOME BASH COMMAND'
The error that keeps repeating is
INFO exited: slack-client (exit status 0; not expected)

I faced similar behavior with rsyslog. This workaround helped me (though I'm not sure it's exactly your case):
[program:rsyslog]
command = rsyslogd -n -c3
startsecs = 5
stopwaitsecs = 5
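If your program is a one-shot command that exits quickly, the likely culprit is startsecs rather than autorestart: by default supervisord expects a process to stay up for at least 1 second, otherwise it marks the start as failed and retries it, which produces exactly this "exit status 0; not expected" loop. A minimal sketch for a run-once program (these are standard supervisord options, but verify against your version):
[program:slack-client]
priority=99
command=bash -c 'SOME BASH COMMAND'
autorestart=false
startsecs=0     ; an immediate clean exit still counts as a successful start
startretries=0  ; don't retry a start that is considered failed
exitcodes=0     ; exit status 0 is an expected exit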

Related

docker-compose healthcheck retry frequency != interval

I recently set up healthchecks in my docker-compose config.
It is doing great and I like it. Here's a typical example:
services:
  app:
    healthcheck:
      test: curl -sS http://127.0.0.1:4000 || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 30s
My container is quite slow to boot, hence I set up a 30-second start_period.
But it doesn't really fit my expectation: I don't need a check every 5 seconds, but I need to know as soon as possible when the container is ready for the first time, for my orchestration. And since my start_period is approximate, if the container is not ready yet at the first check, I have to wait a full interval before the next try.
What I'd like to have is:
- while the container is not healthy, retry every 5 seconds
- once it is healthy, check every 1 minute
Isn't there a way to achieve this out of the box with docker-compose?
I could write a custom script to achieve this, but I'd rather have a native solution if it is possible.
Unfortunately, this is not possible out of the box.
All the durations you set are final. They can't be changed depending on the container state.
However, according to the documentation, the probe does not seem to wait for start_period to finish before running your test. The only effect is that any failure happening during start_period will not be considered an error.
Below is the sentence that makes me think that:
start_period provides initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
I encourage you to test whether this is really the case, as I've never paid close attention to whether the healthcheck runs during the start period or not.
And if it does, you can probably increase your start_period if you're unsure about the duration, and also increase the interval, in order to find a good compromise (see the sketch below).
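For instance, a sketch of that compromise, reusing the healthcheck from the question (the values are only illustrative):
services:
  app:
    healthcheck:
      test: curl -sS http://127.0.0.1:4000 || exit 1
      interval: 1m
      timeout: 3s
      retries: 3
      start_period: 60s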
I wrote a script that does this, though I'd rather find a native solution:
#!/bin/sh
HEALTHCHECK_FILE="/root/.healthchecked"
COMMAND=${*?"Usage: healthcheck_retry <COMMAND>"}
if [ -r "$HEALTHCHECK_FILE" ]; then
    LAST_HEALTHCHECK=$(date -r "$HEALTHCHECK_FILE" +%s)
    # POSIX sh: 'date -d "now - 5 minutes"' is GNU-only, so compute manually
    FIVE_MINUTES_AGO=$(( $(date +%s) - 5 * 60 ))
    echo "Healthcheck file present"
    if [ "$LAST_HEALTHCHECK" -gt "$FIVE_MINUTES_AGO" ]; then
        echo "Healthcheck too recent"
        exit 0
    fi
fi
if $COMMAND; then
    echo "\"$COMMAND\" succeeded: updating file"
    touch "$HEALTHCHECK_FILE"
    exit 0
else
    echo "\"$COMMAND\" failed: exiting"
    exit 1
fi
Which I use: test: /healthcheck_retry.sh curl -fsS localhost:4000/healthcheck
The pain is that I need to make sure the script is available in every container, so I have to create an extra volume for this:
image: postgres:11.6-alpine
volumes:
  - ./scripts/utils/healthcheck_retry.sh:/healthcheck_retry.sh
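One more gotcha worth noting: the script has to be executable on the host, since the bind mount preserves its permissions, e.g.:
chmod +x ./scripts/utils/healthcheck_retry.sh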

How to monitor resources during slurm job?

I'm running jobs on our university cluster (regular user, no admin rights), which uses the SLURM scheduling system, and I'm interested in plotting the CPU and memory usage over time, i.e. while the job is running. I know about sacct and sstat, and I was thinking of including these commands in my submission script, e.g. something along the lines of:
#!/bin/bash
#SBATCH <options>

# Run the actual job in the background
srun my_program input.in output.out &

# While loop that records resources
JobStatus="$(sacct -j $SLURM_JOB_ID | awk 'FNR == 3 {print $6}')"
FIRST=0
# sleep time in seconds
STIME=15
while [ "$JobStatus" != "COMPLETED" ]; do
    # update job status
    JobStatus="$(sacct -j $SLURM_JOB_ID | awk 'FNR == 3 {print $6}')"
    if [ "$JobStatus" == "RUNNING" ]; then
        if [ $FIRST -eq 0 ]; then
            sstat --format=AveCPU,AveRSS,MaxRSS -P -j ${SLURM_JOB_ID} >> usage.txt
            FIRST=1
        else
            sstat --format=AveCPU,AveRSS,MaxRSS -P --noheader -j ${SLURM_JOB_ID} >> usage.txt
        fi
        sleep $STIME
    elif [ "$JobStatus" == "PENDING" ]; then
        sleep $STIME
    else
        sacct -j ${SLURM_JOB_ID} --format=AllocCPUS,ReqMem,MaxRSS,AveRSS,AveDiskRead,AveDiskWrite,ReqCPUS,NTasks,Elapsed,State >> usage.txt
        JobStatus="COMPLETED"
        break
    fi
done
However, I'm not really convinced of this solution:
- sstat unfortunately doesn't show how many CPUs are used at the moment (only the average)
- MaxRSS is also not helpful if I try to record memory usage over time
- there still seems to be some error (the script doesn't stop after the job finishes)
Does anyone have an idea how to do that properly? Maybe even with top or htop instead of sstat? Any help is much appreciated.
Slurm offers a plugin to record a profile of a job (CPU usage, memory usage, even disk/network I/O for some technologies) into an HDF5 file. The file contains a time series for each measure tracked, and you can choose the time resolution.
You can activate it with
#SBATCH --profile=<all|none|[energy[,|task[,|filesystem[,|network]]]]>
See the documentation here.
To check that this plugin is installed, run
scontrol show config | grep AcctGatherProfileType
It should output AcctGatherProfileType = acct_gather_profile/hdf5.
The files are created in the folder referred to by the ProfileHDF5Dir Slurm configuration parameter (in slurm.conf).
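As a sketch, assuming the hdf5 plugin is enabled on your cluster (check the exact option syntax against your Slurm version):
#!/bin/bash
#SBATCH --profile=task        # sample per-task CPU and memory into HDF5
#SBATCH --acctg-freq=task=15  # take a sample every 15 seconds
srun my_program input.in output.out
After the job finishes, the per-node HDF5 files can be merged with sh5util -j $SLURM_JOB_ID -o profile.h5 and the time series plotted from there.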
As for your script, you could try replacing sstat with an SSH connection to the compute nodes to run ps. Assuming pdsh or clush is installed, you could run something like:
pdsh -j $SLURM_JOB_ID ps -u $USER -o pid,state,cputime,%cpu,rssize,command --columns 100 >> usage.txt
This will give you CPU and memory usage per process.
As a final note, your script never stops because it is itself part of the job: the while loop is waiting for sacct to report COMPLETED, but sacct will only report COMPLETED once the batch script, including that loop, has ended. The condition "$JobStatus" == "COMPLETED" can therefore never be observed from within the script; when the job completes, the script is killed.
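A common workaround is to key the loop on the srun step itself instead of on sacct; a minimal, untested sketch assuming a single job step:
# Run the step in the background and remember its PID
srun my_program input.in output.out &
SRUN_PID=$!

# Poll resource usage while the step is still alive
while kill -0 "$SRUN_PID" 2>/dev/null; do
    sstat --format=AveCPU,AveRSS,MaxRSS -P --noheader -j "${SLURM_JOB_ID}" >> usage.txt
    sleep 15
done
wait "$SRUN_PID"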

opensipsctl start gives an error: opensips.pid does not exist

When I run the opensipsctl start command to start OpenSIPS, I get this error:
ERROR: PID file /var/run/opensips.pid does not exist -- OpenSIPS start failed
So please help me to solve it.
Open up opensipsctl; it includes the file opensipsctlrc, which defines $PID_FILE as /var/run/opensips.pid.
Then in opensipsctl, when you run start, one of the checks is:
if [ ! -s $PID_FILE ] ; then
    echo
    merr "PID file $PID_FILE does not exist -- OpenSIPS start failed"
    exit 1
fi
That is: if the check "does /var/run/opensips.pid exist and is it bigger than 0 bytes?" fails, echo the above error.
This means the file isn't being created.
If you look just above that line, you'll see:
if [ $SYSLOG = 1 ] ; then
    $OSIPSBIN -P $PID_FILE $STARTOPTIONS 1>/dev/null 2>/dev/null
else
    $OSIPSBIN -P $PID_FILE -E $STARTOPTIONS
fi
This is where opensips actually starts. I would suggest adding the following to your opensips.cfg if you haven't already:
# Logging
debug=6
log_stderror=no
log_facility=LOG_LOCAL0
Now everything will be logged to /var/log/syslog on boot.
Try booting again, then look at that log for information about what happened.
Another thing to check is that the user you're running opensips as has permission to access the directory in which it's trying to create the pid file.
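A quick way to check that (the opensips user and subdirectory below are just examples; substitute your own):
ls -ld /var/run
# if needed, give the service user a writable subdirectory:
# mkdir /var/run/opensips && chown opensips:opensips /var/run/opensips
# ...and point PID_FILE in opensipsctlrc at /var/run/opensips/opensips.pid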
I had the same error and it was driving me mad as well. I managed to trace it down to one of two things (I had both!):
1/ A misconfiguration in the OpenSIPS config file. journalctl -xe should be able to tell you what the error is.
2/ Something else is listening on the port that you are trying to listen on.
For 2, if you are on Ubuntu, you can try the command below to see if anything is already listening on that port:
lsof -i :5060
I was able to see the logs and fix the issue with the steps below:
Set log_level=4 in opensips.cfg to view debug logs in /var/log/syslog.
debug is deprecated in version 2.4 and higher.
You can refer here for the different log levels.
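Putting that together with the logging block from the earlier answer, the 2.4+ equivalent would be:
# Logging (OpenSIPS 2.4+: log_level replaces the deprecated debug)
log_level=4
log_stderror=no
log_facility=LOG_LOCAL0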

Capistrano many failed status on linked_files and linked_dirs what do they mean?

OK, so everything is working perfectly well as far as I can see, but I do see a lot of "failed" statuses on most of the linked_files and linked_dirs tasks, and I am wondering if they deserve any attention. Here are a few examples:
DEBUG [423a17e1] Running [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [423a17e1] Command: [ -L /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [423a17e1] Finished in 0.470 seconds with exit status 1 (failed).
DEBUG [541d2f8a] Running [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ] on xxx.xxx.xxx.xxx
DEBUG [541d2f8a] Command: [ -d /home/caluebat/www/ravenfort/releases/20160312213815/tmp/pids ]
DEBUG [541d2f8a] Finished in 0.476 seconds with exit status 1 (failed).
I was unable to find any detail in the official Capistrano docs, which point either here or to the mailing list for questions.
I would appreciate any clarification on the above failures.
Thank you very much.
Don't worry about it!
Whenever cap runs a command that returns a non-zero result, it prints the line in red and says "failed". This can be misleading, because it runs a lot of commands just to find out what already exists on the target. For instance, [ -d foo ] means "is there a directory named foo?", and [ -L path ] means "is there a symlink at path?". A non-zero exit status isn't actually a failure; it's just cap inspecting the target machine to find out what work it needs to do.
If cap hits a real error, it will quit early and you'll get a stack trace and/or actual error message.
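You can reproduce this in any shell; the [ ... ] test simply answers yes or no through its exit status:
$ [ -d /tmp ]; echo $?           # directory exists: exit status 0
0
$ [ -d /nonexistent ]; echo $?   # directory missing: exit status 1 ("failed")
1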

Supervisord as Windows Service on Cygwin

I am attempting to run Celery as a Windows service using Supervisord. I followed the configuration laid out on the Celery site and here. I have set up a virtual environment to run supervisord through Cygwin. I have highlighted the lines I think are most important (with **). It appears supervisord and RabbitMQ are working; the problem is with Celery.
I set up the service with the commands:
$ cygrunsrv --install supervisord --path /usr/bin/python --args "/usr/bin/supervisord -n -c /usr/etc/supervisord.conf"
$ supervisord
UPDATED: I now have the following in my supervisord.log file:
2014-08-07 12:46:40,676 INFO exited: celery (exit status 1; not expected)
2014-08-07 12:47:07,187 INFO Increased RLIMIT_NOFILE limit to 1024
2014-08-07 12:47:07,238 INFO RPC interface 'supervisor' initialized
2014-08-07 12:47:07,251 INFO daemonizing the supervisord process
2014-08-07 12:47:07,253 INFO supervisord started with pid 7508
2014-08-07 12:47:08,272 INFO spawned: 'celery' with pid 8056
**2014-08-07 12:47:08,833 INFO success: celery entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)**
The config file is:
[inet_http_server] ; inet (TCP) server disabled by default
port=127.0.0.1:8072 ; (ip_address:port specifier, *:port for all iface)
username = user
password = 123
[supervisord]
logfile= /home/HBA/venv/logFiles/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
;user=HBA ; (default is current user, required if root)
childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=http://127.0.0.1:8072 ; use an http:// url to specify an inet socket
[program:celery]
command= celery worker -A runLogProject --loglevel=INFO ; the program (relative uses PATH, can take args)
directory= /home/HBA/venv/runLogProject
environment=PATH="/home/HBA/venv/;/home/HBA/venv/Scripts/"
numprocs=1
stdout_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stdout log path, NONE for none; default AUTO
stderr_logfile= /home/HBA/venv/logFiles/%(program_name)s/worker.log ; stderr log path, NONE for none; default AUTO
autostart=true ; start at supervisord start (default: true)
autorestart=true ; whether/when to restart (default: unexpected)
startsecs=0
stopwaitsecs=1000
killasgroup=true
My celery log file gives me:
**[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-4' pid:12284 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-3' pid:4432 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-2' pid:9120 exited with 'signal -1'
[2014-08-07 19:46:40,584: ERROR/MainProcess] Process 'Worker-1' pid:6280 exited with 'signal -1'**
C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2014-08-07 19:47:08,822: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\apps\worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.
The pickle serializer is a security concern as it may give attackers
the ability to execute any command. It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
**[2014-08-07 19:47:08,944: INFO/MainProcess] Connected to amqp://guest:**#127.0.0.1:5672//
[2014-08-07 19:47:08,954: INFO/MainProcess] mingle: searching for neighbors
[2014-08-07 19:47:09,963: INFO/MainProcess] mingle: all alone**
C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] C:\Python27\lib\site-packages\celery\fixups\django.py:236: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2014-08-07 19:47:09,982: WARNING/MainProcess] celery#CORONADO ready.
I solved my issue using the following command: /home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
My biggest issue was an unfamiliarity with virtual environments. I needed to make sure the files were in the correct folders within the venv.
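In supervisord terms, that means pointing command= at the venv's own celery binary instead of relying on PATH; a sketch based on the config above:
[program:celery]
command=/home/HBA/venv/Scripts/celery worker -A runLogProject --loglevel=INFO
directory=/home/HBA/venv/runLogProject
autostart=true
autorestart=true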
