I've written a check_mk Nagios plugin that monitors a REST API. I only want a single instance of this service/script on the entire monitoring server, not a service instance per host.
However, when I add the script to the site's local/lib/nagios/plugins directory and configure a classical active and passive monitoring check in WATO, it creates a service on every host.
Is this possible or am I doing this the wrong way?
Dropping the scripts into /opt/omd/sites/{site}/local/lib/nagios/plugins and then defining custom checks in /opt/omd/sites/{site}/etc/check_mk/conf.d/wato/rules.mk is how I ended up with a single check.
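For reference, the rule that ends up in that rules.mk looks roughly like the sketch below. This is only an illustrative snippet: the service description, command line and host name are placeholders, and the exact tuple layout varies between Check_MK versions.

    # Illustrative sketch of a "classical active and passive checks" rule in
    # etc/check_mk/conf.d/wato/rules.mk; names and command are placeholders.
    custom_checks = [
        ( {'service_description': u'REST API health',
           'command_line': 'check_rest_api.sh --url https://api.example.com/status'},
          [],                      # host tags (empty = no tag condition)
          ['monitoring-server'],   # explicit host list: only one host gets the service
          {} ),                    # rule options
    ] + custom_checks

Pinning the rule to a single explicit host (instead of ALL_HOSTS) is what keeps you at exactly one service instance.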
Currently my web application is running on a server where all the services (nginx, php, etc.) are installed directly on the host system. Now I want to use Docker to separate these services into specific containers. Nginx and php-fpm are working fine, but the web application can also generate PDFs, which is done using wkhtmltopdf. Since I want to follow the single-service-per-container pattern, I'd like to add an additional container that houses wkhtmltopdf and takes care of this specific task.
The problem is: how can I do that? How can I call the wkhtmltopdf binary from the php-fpm container?
One solution is to share the Docker socket, but that is a big security flaw, so I really don't want to do it.
So, is there any other way to achieve this? And isn't this kind of "microservice separation" one of the main purposes/goals of Docker?
Thanks for your help!
You can't directly call binaries from one container to another. ("Filesystem isolation" is also a main goal of Docker.)
In this particular case, you might consider "generate a PDF" an action your service performs rather than a separate service in itself, and so executing the binary as a subprocess is a means to an end. This doesn't even raise any complications, since presumably wkhtmltopdf isn't a long-running process: you'll launch it once per request and not respond until the subprocess runs to completion. I'd install or include it in the Dockerfile that packages your PHP application, and be architecturally content with that.
Otherwise the main communication between containers is via network I/O and so you'd have to wrap this process in a simple network protocol, probably a minimal HTTP service in your choice of language/framework. That's probably not worth it for this, but it's how you'd turn this binary into "a separate service" that you'd package and run as a separate container.
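If you do decide the separate container is worth it, the wrapper can stay tiny. Below is a hypothetical sketch in Python (not taken from any existing project): it assumes wkhtmltopdf is installed in that container's image and accepts "-" to read HTML from stdin and write the PDF to stdout; the port and names are placeholders. The PHP container would then POST the HTML to this service over the shared Docker network instead of exec'ing a binary.

    # Minimal PDF-rendering service: accepts HTML via POST, returns the PDF.
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PdfHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the HTML body sent by the client (e.g. the php-fpm container).
            length = int(self.headers.get("Content-Length", 0))
            html = self.rfile.read(length)

            # wkhtmltopdf runs once per request; "-" "-" means stdin -> stdout,
            # so no temporary files are needed.
            result = subprocess.run(["wkhtmltopdf", "-", "-"],
                                    input=html, stdout=subprocess.PIPE, check=True)

            self.send_response(200)
            self.send_header("Content-Type", "application/pdf")
            self.send_header("Content-Length", str(len(result.stdout)))
            self.end_headers()
            self.wfile.write(result.stdout)

    HTTPServer(("0.0.0.0", 8080), PdfHandler).serve_forever()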
I want to communicate between 2 apps running in different Docker containers, both part of the same Docker network. I'll be using a message queue for this (RabbitMQ).
Should I make a 3rd Docker container that will run as my RabbitMQ server, and then just make a channel on it for those 2 specific containers? That way I can add more channels later if, for example, a 3rd app needs to communicate with the other 2.
Regards!
Yes, that is the best way to use containers, and it will allow you to scale. You can also use the official RabbitMQ container and concentrate on your application.
If you have started using containers, then it's the right way to go. But if your app is deployed in the cloud (AWS, Azure and so on), it's better to use a managed cloud queue service which is already configured, is updated automatically, has monitoring and so on.
I'd also like to point out that Docker containers are only a way to deploy your application components. The application shouldn't care how its components (services, databases, queues and so on) are deployed. To an application service, a message queue is simply a service located somewhere, accessible via connection parameters.
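To make that last point concrete, here is a hypothetical sketch using the Python pika client. The only thing the application knows about the broker is its hostname on the shared Docker network; "rabbitmq" and the queue name are placeholder values (e.g. the name you give the RabbitMQ container).

    # Publish a message to a RabbitMQ broker reachable as "rabbitmq" on the
    # shared Docker network; hostname and queue name are placeholders.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()

    channel.queue_declare(queue="tasks", durable=True)  # create the queue if missing
    channel.basic_publish(
        exchange="",          # default exchange routes directly by queue name
        routing_key="tasks",
        body=b"hello from app 1",
    )
    connection.close()

The consumer in the second container uses the same connection parameters and simply subscribes to the queue, so adding a third app later is just another client pointed at the same hostname.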
I'm testing GUI apps on a slave machine, so I want the Jenkins slave to be connected to the master whenever I am logged in. Right now I use some batch scripts to invoke the slave connection. If there is another way to accomplish the same thing, please let me know.
Guessing from your "batch scripts" I assume you're using Windows. See Install Slave as a Windows service.
BUT: See also Windows Service needs desktop and network access:
A Windows service can run under either a network-authenticated user or the local system. Network-user services do not interact with the desktop, and local-system services do not have access to network resources. To my knowledge, there is no way around this without spawning sub-processes as different users.
However, there is a work-around: split your service into two services. One runs under the local system and can interact with the desktop; the other runs under the network user account with access to the desired network resources. Set up these services to communicate with each other, and each can supply the functionality that it has access to. NOTE: when setting up the services in your install package, you may want to make one of the services dependent on the other to make sure that both run together.
I couldn't have said it better.
UPDATE
In other words: when you need both desktop and network access, batch scripts launched via:
the Autostart folder
setting the appropriate group policy
logging in programmatically: Windows Vista/7 Programmatically login, C# - Programmatically Log-off and Log-on a user
are the ways to go.
I am evaluating Icinga and Sensu for general service/host monitoring. One of the things we do with our services is manage them via orchestration tools (Mesos in our case). This prevents a service from necessarily running on any given host (it can run on any worker node).
Because we use service discovery, I can definitely write a monitoring plugin to execute my checks without having to know a priori which host the service is executing on.
Icinga's service definitions seem to mandate that a service is tied to a host though. However, its host definitions don't require you to specify much of anything about the host. My question is this: Can I make a dummy host for a service or otherwise specify that a service isn't correlated with a particular host?
Late answer, but you can of course create a dummy host.
Use check_command check_dummy for it.
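In classic Icinga 1.x / Nagios-style object configuration, such a dummy host plus the host-independent service could look roughly like this sketch (host name, templates and the actual REST check command are placeholders):

    # Dummy host: check_dummy!0 always reports OK, so the host never "goes down".
    define host {
        use             generic-host
        host_name       service-checks-dummy
        address         127.0.0.1
        check_command   check_dummy!0
    }

    # Service attached to the dummy host; the plugin finds the real endpoint
    # via service discovery, so no real host association is needed.
    define service {
        use                  generic-service
        host_name            service-checks-dummy
        service_description  My REST API
        check_command        check_my_rest_api
    }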
As part of setting up continuous integration with Bitten, I would like to set up some bitten-slaves on Windows. However, the Bitten documentation lacks instructions on how to register the bitten-slave as a service.
Looking at Microsoft's documentation on How to create a Windows service by using Sc.exe, I've tried the following:
sc create bitten-slave binPath= "C:\Python26\Scripts\bitten-slave.exe --verbose --log=C:\dev\bitten.log http://svn/cgi-bin/trac.cgi/builds"
The service was indeed created. But trying to start it, I get the following error:
The bitten-slave service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
What am I doing wrong?
Not just any program can run as a service on Windows; the application needs to be specially written to talk to the service controller.
An application that wants to be a service needs to first be written in such a way that it can handle start, stop, and pause messages from the Service Control Manager.
However, Microsoft does provide a generic service wrapper, SRVANY, which can be used to run an arbitrary program as a service. I use SRVANY to run several Python scripts as services, so it should work properly.
This page on the Bitten wiki describes a simple Python script that can be configured as a scheduled task to ensure the Bitten slave is kept running.
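That watchdog approach is only a few lines of code. The following is a hypothetical sketch of such a keep-alive script (not the one from the wiki): run it every few minutes as a scheduled task, and it restarts the slave if no bitten-slave process is found. Paths and the Trac URL are taken from the question and would need adjusting.

    # Crude keep-alive for bitten-slave, intended to run as a scheduled task.
    import subprocess

    SLAVE_EXE = r"C:\Python26\Scripts\bitten-slave.exe"
    BUILD_URL = "http://svn/cgi-bin/trac.cgi/builds"

    def slave_is_running():
        # Ask the Windows task list whether a bitten-slave process exists.
        proc = subprocess.Popen(
            ["tasklist", "/FI", "IMAGENAME eq bitten-slave.exe"],
            stdout=subprocess.PIPE)
        output = proc.communicate()[0]
        return b"bitten-slave.exe" in output

    if not slave_is_running():
        # Start a fresh slave in the background; it keeps polling the master itself.
        subprocess.Popen([SLAVE_EXE, "--verbose",
                          r"--log=C:\dev\bitten.log", BUILD_URL])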