I have a daemon that I'm starting along with the server using an initializer file.
I want to stop this daemon once the server stops, but I'm not sure where to put a script that would run when the server stops.
Initializers get automatically loaded when the server starts. Is there a similar "destroyers" folder? Where would I put code that I want to run when the server stops?
Thanks!
Here's a link that might be of interest: http://github.com/costan/daemonz
I am using MongooseIM 3.2.0, built from source, on an Ubuntu server. My concerns are below:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already-running MongooseIM instance on the server, something like "mongooseimctl live"? My guess is that running "mongooseimctl live" would try to create another instance. I just want to watch the live logs and interact with the node, without scrolling through long log files.
I apologize if the answers to the above are obvious, but I just want to follow the best guidance.
mongooseimctl live or mongooseimctl foreground is mostly useful for development or for smoke-testing a deployment (unless you're running inside a container). For real-world use cases you should start the server in the background with mongooseimctl start.
Back to the container: the best approach for containerised applications is to run them in the foreground, so in a container startup script use mongooseimctl foreground.
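For example, the tail end of an image definition could look roughly like this (the install layout here is an assumption, not the official image):
# Hypothetical Dockerfile ending for a MongooseIM image -- layout is an assumption.
# Running in the foreground keeps the Erlang VM as the container's main process,
# so docker stop and restart policies behave as expected.
CMD ["mongooseimctl", "foreground"]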
Once the server is running (no matter how it was started), attaching a shell to troubleshoot issues can be done with mongooseimctl debug. This is the command to use when you get the "Protocol 'inet_tcp': the name mongooseim@localhost seems to be in use by another Erlang node" error. Be careful if it's a production environment - you can easily take the server down with access to this shell.
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
Ubuntu nowadays uses systemd for managing its services' lifetimes. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging into Debian/Ubuntu .deb packages.
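For reference, a trimmed-down unit of that kind looks roughly like the sketch below (the paths, user and service type here are assumptions; the linked file is the authoritative version):
# /etc/systemd/system/mongooseim.service -- sketch only; paths, user and type are assumptions
[Unit]
Description=MongooseIM XMPP server
After=network.target

[Service]
Type=simple
User=mongooseim
# Running in the foreground lets systemd supervise the process directly
ExecStart=/usr/lib/mongooseim/bin/mongooseimctl foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
Once installed, systemctl enable mongooseim makes it start on boot, and Restart=on-failure covers the crash-restart case from your first question.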
I'm using an EC2 instance to host a Rails application. I deploy with Capistrano and have already included Sidekiq, which is working fine. However, sometimes on deploy, and sometimes sporadically, Sidekiq stops running and I don't notice until some tasks that rely on it don't run.
I could do something on deploy to check that, but if it stops working some time after the deploy, that would still be a problem.
I would like to know the best way, in this scenario, to periodically check whether Sidekiq is running and, if it isn't, to start it again.
I thought of writing a bash script for that, but apparently when I run Sidekiq from the command line it spawns another process with a PID different from the one it was launched with... so I think it could get messy.
Any help is appreciated. Thanks!
Learn and use systemd to manage the service.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
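The linked wiki page has a full example; a minimal sketch of such a unit, assuming a bundler-managed app under /var/www/myapp (a hypothetical path, user and Ruby setup), looks roughly like this:
# /etc/systemd/system/sidekiq.service -- sketch; paths, user and Ruby setup are assumptions
[Unit]
Description=Sidekiq background worker
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/local/bin/bundle exec sidekiq -e production
Restart=on-failure

[Install]
WantedBy=multi-user.target
With systemctl enable sidekiq and systemctl start sidekiq, systemd supervises the process and restarts it if it dies, so you no longer have to track Sidekiq's PIDs yourself; on deploy, Capistrano can simply run systemctl restart sidekiq.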
I have a console application written in .NET Core that I will be running in a Docker container. I would like to stop the process gracefully when a docker stop command is given, rather than letting it get killed in the middle of doing something. Is there a way to listen for this signal from within the console application? Before containers, I would just have the console app listen for something to be typed in the console window. If there is a way to have Docker send a message through standard input, I could work with that; I just don't know enough about Docker yet to know what is possible.
My solution is going to end up being to not use a console application after all. I learned that if I create a web project, I can tell when the container has requested a shutdown via the ApplicationStopping CancellationToken in the website's Startup. So, rather than having the container start a long-running console application, it will just host a website with no web content. The website will start my long-running process, and when the container signals the website that it is shutting down, my process can stop gracefully.
public void Configure(IApplicationBuilder app,
                      IHostingEnvironment env,
                      IApplicationLifetime applicationLifetime)
{
    // Each Register call takes an Action; ApplicationStarted, ApplicationStopping
    // and ApplicationStopped here are your own handler methods -- the Stopping
    // handler is where the long-running work gets told to shut down gracefully.
    applicationLifetime.ApplicationStarted.Register(ApplicationStarted);
    applicationLifetime.ApplicationStopping.Register(ApplicationStopping);
    applicationLifetime.ApplicationStopped.Register(ApplicationStopped);
}
You can run a command inside a running Docker container using docker exec.
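For example (the container name and command here are just placeholders):
# Run a one-off command inside the already-running container (names are placeholders)
docker exec my-running-container /app/request-shutdown.sh

# Or open an interactive shell to inspect the process
docker exec -it my-running-container /bin/sh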
I am building a web service where users submit PDF files and the text content is extracted from them using Tika. I am running Tika in server mode on the same machine that hosts my Django website.
My question is: is there a way to automate restarting the Tika server when it shuts down for any reason? How can I build and run a script so that whenever the Tika server goes down this is detected and the server is restarted? My ultimate goal is not to have to check from the console every day whether Tika is down, nor to find out the service is down only when a user complains that her PDF doesn't get extracted.
Since you're using a recent copy of Ubuntu, your easiest option is probably to create a custom Upstart job for it. On other unixes, you'd want something similar for their init system, and on Windows I think something with Apache Commons Daemon to wrap it as a Windows service is likely the best bet.
As covered in this post over on Ask Ubuntu, the key thing you'll want is the respawn option, to tell Upstart to re-launch the Tika server if it happens to fail, and a respawn limit in case it gets really broken for some reason.
You'll want to create a file /etc/init/tika-server.conf, with contents along the lines of:
description "Apache Tika Server"
start on filesystem or runlevel [2345]
stop on shutdown
respawn
respawn limit 3 12
exec java -jar /path/to/tika/tika-server-1.10-SNAPSHOT.jar
Tweak the path to your Tika Server jar, and add any options / parameters you want to the end.
With that done, run init-checkconf /etc/init/tika-server.conf to check it's valid, then service tika-server start to start it.
At that point, you can head to http://localhost:9998/ and see it running! If it dies, Upstart will restart it for you.
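If you'd rather check from a shell than a browser, you can probe the same port with curl and ask Upstart for the job's status:
# Quick liveness check against Tika's default port, plus the Upstart job status
curl http://localhost:9998/
service tika-server status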
We have a Docker application that has to do a warm-up when it is deployed, otherwise the first request is really slow.
The warm-up is a shell script that just caches the routes and classes.
We are using the same Dockerfile for development and would like to keep doing that.
How can we do that?
You would override the entrypoint with a custom script that runs your original entrypoint command and then the warm-up shell script.
You have to make sure the last command is long-running so the container stays up. You could use Supervisor for this purpose.
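A sketch of such a wrapper entrypoint, with placeholder script names standing in for whatever your image actually runs, could look like this:
#!/bin/sh
# docker-entrypoint.sh -- hypothetical wrapper; script names are placeholders
set -e

# Start the original entrypoint command (the app server) in the background
/app/original-entrypoint.sh &
APP_PID=$!

# Run the warm-up script (caches the routes and classes)
/app/warmup.sh

# Keep the container alive by waiting on the long-running server process;
# alternatively a supervisor such as Supervisord could manage both steps.
wait "$APP_PID"
In the Dockerfile you would then point ENTRYPOINT at this wrapper instead of the original command, which keeps the same image usable for development.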