Start monit as a foreground process in Docker

I want to start the monit process in Docker.
Since monit daemonizes itself, the container exits as soon as monit starts. What is the best way to run it as a foreground process?

From https://wiki.gentoo.org/wiki/Monit:
Running monit in the foreground
To run monit in the foreground and provide feedback on everything it is detecting, use the -Ivv option:
root #monit -Ivv
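In a Dockerfile this usually means making monit the container's main process and passing -I so it never daemonizes. A minimal sketch, assuming a Debian base image and a monitrc you provide yourself (the image tag, package name and paths are assumptions, not from the answer above):
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y monit && rm -rf /var/lib/apt/lists/*
# your own control file; it should contain 'set daemon <seconds>' plus your checks,
# and monit insists on owner-only permissions for it
COPY monitrc /etc/monit/monitrc
RUN chmod 600 /etc/monit/monitrc
# -I keeps monit in the foreground as the container's main process, so the
# container stays up; add -v or -vv for the verbose output mentioned above
CMD ["monit", "-I"]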

Related

jenkins kills ssh session when supervisord restarts

I'm using Jenkins to do a few actions on a remote server.
I have an Execute Shell command in which I do the following:
sudo ssh <remote server> 'sudo service supervisor restart'
sleep 30
When Jenkins reaches the first line I can see 'Restarting Supervisor', but after a moment I see that Jenkins closed the ssh connection and moved on to the second line.
I tried adding a 'sleep 30' after the restart command, but it still doesn't work.
It seems Jenkins doesn't wait for the supervisor restart command to complete.
The problem is that it doesn't happen every time, only sometimes, but it causes a lot of trouble when it does fail.
I think you can never be certain that all processes started by supervisord are in a 'ready' state after a restart. Even if the restart action waited for the processes to be started, it wouldn't know whether they are 'ready'.
In docker-compose setups that need to know whether a certain service is available, I've added an extra 'really ready' check for this, optionally in a loop with a sleep/wait. If the process you are starting opens a port, you can use one of the variations of 'wait-for' for this; see the sketch below.
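A minimal sketch of such a 'really ready' loop, assuming the service listens on port 8080 on remote-host (both are placeholders) and that nc is available:
# poll the port for up to 30 seconds before moving on
for i in $(seq 1 30); do
  nc -z remote-host 8080 && break
  sleep 1
done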

How to detect orphaned sidekiq process after capistrano deploy?

We have a Rails 4.2 application that runs alongside a sidekiq process for long tasks.
Somehow, during a deploy a few weeks ago, something went south (the Capistrano deploy process didn't effectively stop it; I wasn't able to figure out why) and an orphaned sidekiq process was left running, competing with the current one for jobs on the Redis queue. Because that process was running outdated code, it started producing random results in our application (depending on which process picked up the job), and we had a very hard time until we figured this out.
How can I stop this from ever happening again? I mean, I could ssh into the VPS after every deploy and run ps aux | grep sidekiq to check whether there is more than one, but that's not practical.
Use your init system to manage Sidekiq, not Capistrano.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
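As an illustration, a minimal systemd unit in the spirit of that wiki page; the user, working directory and bundler path here are assumptions:
# /etc/systemd/system/sidekiq.service (paths below are placeholders)
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/app/current
ExecStart=/usr/local/bin/bundle exec sidekiq -e production
Restart=on-failure

[Install]
WantedBy=multi-user.target
With the init system owning the process, a deploy only has to run systemctl restart sidekiq, and there is exactly one tracked instance instead of whatever Capistrano happened to spawn and lose track of.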

Docker with a Rails app: workers not running

So I have a Rails application with multiple types of workers. I decided to try running the Rails app with Docker, with a separate image for each type of worker (Resque, DelayedJob, a scheduler, different configurations). The problem is that the workers with queues (DelayedJob + Resque) are not picking up jobs (I'm using both to rule out the queuing system itself). I can see the jobs enqueued, they're there, but the workers never pick up anything off the queue. If I run a worker from the console, it works just fine.
The images are based on Cedarish: https://github.com/progrium/cedarish
The web workers sitting behind NGINX seem to be doing fine, though I have noticed them sometimes becoming non-responsive after a while; I'm not sure if that's related.
Any idea as to what could cause a worker, run under Docker and successfully connecting to Redis + MySQL, to just ignore the job queue and not pick anything up?
Guessing this has something to do with my Docker configuration...
It turns out this was an operating-system problem: Docker was running at up to 100% CPU usage and generally misbehaving.
This was on a GCE instance running Debian 7 with backports.
The following fixed the problem:
sudo aptitude install bridge-utils libvirt-bin debootstrap
# edit /etc/default/grub and add the cgroup parameters to the kernel command line:
vi /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# regenerate the grub config so the change takes effect, then reboot
sudo update-grub
sudo reboot
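To verify the change actually took effect after the reboot (this check is not part of the original answer), the memory controller should show as enabled:
grep memory /proc/cgroups
# the last column ('enabled') for the memory line should be 1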

Why is Supervisor not recognizing code changes?

I'm using Supervisor to manage my node.js application on an EC2 instance, with git for deployment. Supervisor does a good job of making sure my application stays up, but whenever I push new server-side code to my remote server, it tends not to recognize those changes. I need to kill the supervisor process and restart it. Is there something I'm doing wrong, or is this standard behavior?
This is standard behaviour; supervisord does not detect changes in code. It only restarts processes if they themselves stop or die.
Just instruct supervisord to restart the application whenever you push changes. supervisorctl restart programname is fine, no need to kill and restart supervisord itself.
If the supervisord configuration changed, use supervisorctl update.
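Concretely, the deploy step just needs something like the following once the new code is in place; 'myapp' is a placeholder for whatever [program:x] name your config defines:
supervisorctl restart myapp
# only when the supervisord configuration itself has changed:
supervisorctl update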

Unicorn and upstart

I'm having a hard time writing an upstart configuration file to start (and keep alive) the unicorn web server on an Ubuntu box.
How should I set the respawn and expect parameters? With respawn enabled the process is continuously restarted (in top I see its PID changing continuously and the old process becoming a zombie). If I remove the directive, the process isn't restarted when it dies.
According to the upstart documentation the expect parameter might be crucial: what is unicorn's behaviour fork-wise? Any clue?
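For reference, one common pattern is to not daemonize unicorn at all, so upstart tracks the master process directly and no expect stanza is needed; it is only when unicorn is started with -D (forking into the background) that expect daemon comes into play. A minimal sketch, with paths and environment as assumptions:
# /etc/init/unicorn.conf - run unicorn in the foreground so upstart tracks the master PID
description "unicorn"
start on runlevel [2345]
stop on runlevel [016]
respawn
script
  cd /var/www/app/current
  exec bundle exec unicorn -c config/unicorn.rb -E production
end script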
