Are services intrinsically tied to hosts in Icinga?

I am evaluating Icinga and Sensu for general service/host monitoring. One of the things we do with our services is manage them via orchestration tools (Mesos in our case). This means a service is not pinned to any given host; it can run on any worker node.
Because we use service discovery, I can definitely write a monitoring plugin that executes my checks without knowing a priori which host the service is running on.
Icinga's service definitions seem to mandate that a service is tied to a host, though. However, its host definitions don't require you to specify much of anything about the host. My question is this: can I create a dummy host for a service, or otherwise specify that a service isn't correlated with a particular host?

Late answer, but you can of course create a dummy host.
Use the check_command check_dummy.
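A minimal Icinga 2 sketch of that idea, assuming the ITL's built-in dummy check command and a hypothetical discovery-aware plugin check_alpha_via_discovery:

```
object Host "orchestrated-services" {
  // Dummy host: never actually probed, always reports the state below
  check_command = "dummy"
  vars.dummy_state = 0          // 0 = UP
  vars.dummy_text = "Placeholder for orchestrator-scheduled services"
}

object Service "alpha-health" {
  host_name = "orchestrated-services"
  // Hypothetical plugin that locates the service via service discovery
  check_command = "check_alpha_via_discovery"
}
```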


How to containerize database-dependent services?

Example: I have a microservice 'Alpha', which usually connects to 'http://localhost:3306/dbforalpha'. The service depends on that database. Now I want to containerize both the database and the service. Of course the address of the database changes, so I can't even build an image for service 'Alpha'.
Now I am wondering how to deal with this problem. There must be an easier way than waiting until the database container is running to check its ip:port. Do tools like Kubernetes solve this issue?
Docker comes with a service discovery mechanism (this is the basic term for how services know how to talk to each other): containers can be linked together, and you can use DNS to talk to them.
For example, your alpha service could be linked to your database and connect to db:3306, and Docker would set the necessary /etc/hosts entries in alpha so it could resolve db to an IP.
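As a sketch with Compose (the names dbforalpha and alpha come from the question; DB_URL is a hypothetical setting your app would read), alpha simply connects to the service name db instead of localhost:

```yaml
version: "2"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example      # illustrative only
      MYSQL_DATABASE: dbforalpha
  alpha:
    build: .
    environment:
      # "db" is resolved by Docker's embedded DNS on the Compose network
      DB_URL: "mysql://db:3306/dbforalpha"
    depends_on:
      - db
```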

When should I use the host network with Docker?

I understand that if I use the host network driver for a container, that container’s network stack is not isolated from the Docker host.
I also believe I understand conceptually that a good reason to still do it might be when "security is not an issue or concern" and network throughput performance is important, but I am struggling to think of a real-world example of when I can or should do this. A naive example I can think of is a public-facing load balancer or static-file web server.
I realize it may be possible to mitigate the security concerns via the hosting environment, using services from AWS or Google Cloud if hosted there, but what if that isn't an option?
When would or should you use it in a production environment?
How can you mitigate the security concerns regardless of hosting environment?
How should you interact with other services in other docker networks?
I am struggling to think of a real-world example of when I can or should do this. ... When would or should you use it in a production environment?
Your application does not run over TCP or UDP, but over another protocol
Your application requires a large range of incoming ports to be published (by default a docker-proxy process is spawned per published port, which can be excessive for a large range)
Your application works with multicast or broadcast network traffic
Your application needs to modify the networking layer of the host itself, e.g. a VPN (see the sketch after this list)
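A minimal sketch of the simplest of these cases (the image name is illustrative):

```sh
# Run a container directly on the host's network stack; no -p flags
# are needed (or honored), and multicast/broadcast traffic just works.
docker run --rm --network host myorg/multicast-listener
```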
How can you mitigate the security concerns regardless of hosting environment?
You need to trust this application. You've removed a layer of Docker namespacing; at that point the container is a packaging format that likely fits in with the rest of your tooling, but it doesn't require the same security approach you may have for other containers.
How should you interact with other services in other docker networks?
You would interact via the published ports of the other containers, the same as you would with an application running outside a container that needs to connect to an application inside one.
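For instance (the images are real, the wiring is illustrative):

```sh
# The other service publishes a port onto the host as usual...
docker run -d --name web -p 8080:80 nginx
# ...and a host-networked container reaches it via the host's own address:
docker run --rm --network host curlimages/curl http://127.0.0.1:8080/
```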
but I am struggling to think of a real world example of when I can or should do this.
Here is a real-world example: we use the host network to speed up the build stage of our GitLab CI/CD pipeline.
The container in question is up and running only during the build phase, doesn't have any port exposed, and needs a faster network to download all the necessary pieces to build and push a Docker image. We experienced (on some intermittent occasions) throughput issues and inconsistent behavior during the build stage that we resolved with the host network. Although with the host network we "expose" the IP of such a container, we still don't expose any ports, and after the build phase is finished the container is discarded.
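Concretely, the change boils down to passing the host network to the build; a minimal sketch (image and registry names are illustrative):

```sh
# Use the host's network stack for build-time RUN steps and downloads
docker build --network host -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
```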
I know this doesn't answer all of your questions, but it is the requested real-world example.

How can I deploy to Docker Swarm using docker stack with services started in order?

I have an issue with Docker Swarm.
I have tried to deploy my app with Docker Swarm mode,
but I cannot arrange for my services to start in order, even though I used depends_on (which is documented as not supported by docker stack deploy).
How can I deploy so that the services start in order?
Ex:
Service 1 starts.
Service 2 waits for Service 1.
Please help.
This is not supported by Swarm.
Swarm is designed for high availability. When encountering problems (services or hosts fail), services will be restarted in the order they failed.
If you have clear dependencies between your services and they can't handle waiting for the other service to be available or reconnecting, your system won't work.
Your services should be written in a way that they can handle any service being redeployed at any time.
There is no orchestration system that recommends or supports this feature, so forget about it; it is a very bad idea.
Application infrastructure (here the container) should not depend on the database health, but your application itself must depend on the database health.
You see the difference?
For instance, the application could display an error message "Not ready yet" or "This feature is disabled because elasticsearch is down" etc...
So even if it is possible to implement this pattern (aka "wait-for"; with Kubernetes, you can use an initContainer to "wait" for another service to be up and ready), I strongly recommend moving this logic into your application.
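If you still need a stop-gap at the container level, the usual pattern is a retry loop in the image's entrypoint rather than orchestrator-level ordering; a minimal sketch, assuming a dependency reachable as db:3306:

```sh
#!/bin/sh
# entrypoint.sh: block until the dependency answers, then start the app.
until nc -z db 3306; do
  echo "waiting for db:3306 ..."
  sleep 2
done
exec "$@"   # hand off to the real service command
```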

How to manage multiple projects on Docker?

In our company we have ~7 projects, each based on Docker. Each project contains base services like MySQL, Nginx, and PHP. Some projects communicate with other projects. Because many services sit on the same ports, we create a new Docker host (docker-machine) for each project. From here a few problems arise:
VirtualBox assigns a random IP to each Docker host, depending on the order of execution.
It is hard to switch from project to project; you need to set different shell envs all the time, and it is easy to make a mistake.
Well, I'm searching for a more enterprise-grade solution to manage many Docker machines, or some technique that can help with the current situation.
I had similar problems last summer.
First, I started to deploy my projects to a swarm cluster as services, instead of clustering several Docker VMs. This enabled me to work with services using only their service IDs. How you separate projects into services is important; this part may be cumbersome depending on your project.
https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
Then, I built my configuration and monitoring software once on the swarm manager and used it from there. You can use your automation tools on the swarm manager to control services.
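For example, each project component becomes a swarm service that you can manage by name from the manager; a quick sketch (service and image names are illustrative):

```sh
docker service create --name project1-php --replicas 2 myorg/project1-php
docker service ls                    # every project's services in one place
docker service scale project1-php=4  # scale without touching per-project VMs
```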
A virtual machine consumes resources, and it is better to avoid it if it is not necessary. Instead you could deploy the projects in a Docker swarm on bare metal.
But because every project has an entry point that needs to be accessible from the outside world (i.e. https://site1.com and https://site2.com), you can't expose the same port (443 or 80) for all the frontend services in the same swarm. For this you can use a reverse proxy like HAProxy or Nginx that forwards requests to the right service based on the hostname. The reverse proxy could itself be a service in the swarm. In this setup you no longer need to expose the projects' ports at all.
A reverse proxy has many other advantages, like SSL termination (which makes SSL certificate management a lot easier).
If you add the projects to the same custom network, then services from different projects can communicate securely and directly, using their Docker service names and the internal port (i.e. 80).
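As a sketch, an nginx fragment for such a proxy, assuming it sits on the same custom network and site1-frontend is the Docker service name of one project's frontend (certificate paths illustrative):

```nginx
server {
    listen 443 ssl;
    server_name site1.com;
    # The proxy terminates SSL, so only it needs the certificates
    ssl_certificate     /etc/nginx/certs/site1.com.crt;
    ssl_certificate_key /etc/nginx/certs/site1.com.key;

    location / {
        # "site1-frontend" resolves via Docker's DNS on the shared network
        proxy_pass http://site1-frontend:80;
    }
}
```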

Docker, Registrator and Consul by example

I am new to both Docker and Consul, and am trying to get a feel for how containerized apps could use Consul for both service registry and KV pair config management ("configuration").
My understanding was that I could:
Create an image that runs Consul server, so something like this; then
Spin up three of these Docker-Consul containers (thus forming a cluster/quorum) on myvm01.example.com (an Ubuntu VM); then
Refactor my app to use Consul and create a Docker image that runs my app and Consul agent, with the agent configured to join the 3-node quorum at startup. On startup, my app uses the local Consul agent to pull down all of its configurations, stored as KV pairs. It also pulls in registered/healthy services, and uses a local load balancing tool to balance the services it integrates with.
Run my app's containers on, say, myvm02.example.com (another Ubuntu VM).
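For reference, step 2 above would look roughly like this with the official consul image (the join address is illustrative):

```sh
# First server bootstraps a 3-node quorum
docker run -d --name consul1 consul agent -server -bootstrap-expect=3 -client=0.0.0.0
# The other two servers join the first one
docker run -d --name consul2 consul agent -server -retry-join=172.17.0.2
docker run -d --name consul3 consul agent -server -retry-join=172.17.0.2
```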
So to begin with, if any of this seems like I am misunderstanding the normal/proper uses of Docker and Consul (sans Registrator), please begin by correcting me!
Assuming I'm more or less correct, I recently stumbled across Registrator and am now even more confused. Registrator seems to be some middleman between your app containers and your Consul (or whatever registry you use) servers.
After reading their Quickstart tutorial, it sounds like what you're supposed to do is:
Deploy my Consul cluster/quorum containers to myvm01.example.com like before
Instead of "Dockerizing" my app to use Consul directly, I simply integrate it with Registrator
Then I deploy a Registrator container somewhere, and configure it to integrate with Consul
Then I deploy my app containers. They integrate with Registrator, and Registrator in turn integrates with Consul.
My concerns:
Is my understanding here correct or way off base? If so, how?
What is actually gained by the addition of Registrator? It doesn't seem (to the untrained eye at least) like anything more than a layer of indirection between the app and the service registry.
Will I still be able to leverage Consul's KV config service through Registrator?
Is my understanding here correct or way off base? If so, how?
It seems to me that it's not a good solution to have all cluster/quorum members running inside the same VM. It's not so bad if you use it for development or testing or something where you don't care much about reliability, but not for production.
Once your VM dies, you'll lose all the advantages you gained by creating a cluster. Even worse, you can lose all the data you have in the K/V store, because you are running the Consul servers inside Docker containers, which must be additionally configured to persist their data between runs.
As for the rest, I see it the same as you.
What is actually gained by the addition of Registrator?
From my point of view, the main thing is that you don't have to provide an instance of the Consul agent in every container you run. The container with the image you run is responsible only for its main function, not for registering itself somewhere. You can simply pull an image and run a container with it to make its service available, without additional work.
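Concretely, the Quickstart has you run one Registrator container per Docker host, pointed at the registry backend, roughly:

```sh
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500
```

After that, any container started with published ports gets registered in Consul automatically.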
Will I still be able to leverage Consul's KV config service through Registrator?
Unfortunately, no. At least, we didn't find a way to use it like that when we were looking for something to handle service discovery and configuration management. We came to the conclusion that Registrator is not a proxy for the K/V store and is used only to automate service discovery. So you have to use some other logic to access Consul's K/V store.
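For what it's worth, that "other logic" can be as simple as calling Consul's HTTP KV API directly (the key name is illustrative):

```sh
# Write a config value...
curl -X PUT -d 'some-value' http://localhost:8500/v1/kv/config/alpha/db_url
# ...and read it back (the value comes back base64-encoded inside JSON)
curl http://localhost:8500/v1/kv/config/alpha/db_url
```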
Update: furthermore, here are two articles, "Automatic Docker Service Announcement with Registrator" and "Automatic container registration with Consul and Registrator", which I found useful for understanding Registrator's role in the service discovery process.
