Monitor other containers with Telegraf (TIG stack) - Docker

I would like to monitor my docker containers with a TIG (Telegraf, InfluxDB and Grafana) stack running in containers too.
I would like my architecture to look like this: [architecture diagram]
I'm using this stack for TIG, but I'm open to any ideas.
Do you have any idea how I could achieve that? Thanks.

Rather than that, you should aim for something like this: [architecture diagram]
Here you would only need to create a base Docker image that has the Telegraf agent installed and configured to connect to InfluxDB, plus a selection of plugins for collecting information from your containers. From that point everything should be trivial.

Take a look at the Telegraf docker input plugin. If you do not need to monitor anything complex, this may be all you need: a single Telegraf instance on the host, with no need to build anything into a Docker image.
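For illustration, here is a minimal sketch of that setup as a docker-compose file (the same single Telegraf instance could just as well run directly on the host). It assumes a telegraf.conf next to the compose file that enables the docker input plugin, i.e. an [[inputs.docker]] section with endpoint = "unix:///var/run/docker.sock", and an [[outputs.influxdb]] section pointing at http://influxdb:8086; the image tags are only examples:

```yaml
# docker-compose.yml - one Telegraf container monitoring all other containers
version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
  telegraf:
    image: telegraf:1.25
    volumes:
      # telegraf.conf (TOML) must enable [[inputs.docker]] and point
      # [[outputs.influxdb]] at http://influxdb:8086
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      # Mount the Docker socket so the docker input plugin can query the
      # daemon about every container running on this host
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - influxdb
```

With this in place, Grafana only needs to be pointed at the InfluxDB instance as a data source.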

Related

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought about joining everything into one container, but I have read that it is not recommended to run several processes in one container. Secondly, I thought about wrapping everything up into a VM, but that is not really a "program" that a user can launch. And my third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure whether that is best practice, or whether there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known one for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file (see the sketch below)
your application's Docker images (e.g. through Docker Hub)
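For the project described in the question, such a docker-compose file could look roughly like the sketch below; every image name is a hypothetical placeholder for an image you would publish yourself:

```yaml
# docker-compose.yml - sketch; all image names are hypothetical placeholders
version: "3"
services:
  osrm-car:                  # one of the three OSRM routing servers
    image: yourorg/osrm-car:latest
  osrm-bike:
    image: yourorg/osrm-bike:latest
  osrm-foot:
    image: yourorg/osrm-foot:latest
  nominatim:
    image: yourorg/nominatim:latest
  web:                       # the webpage code with all needed dependencies
    image: yourorg/webpage:latest
    ports:
      - "80:80"
    depends_on:
      - osrm-car
      - osrm-bike
      - osrm-foot
      - nominatim
```

The user then only runs docker-compose up: Compose pulls each image from Docker Hub and starts the whole stack.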
Regarding clusters/cloud, we talk more about orchestrators like Docker Swarm, Kubernetes, or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/

How do I implement Prometheus monitoring in Openshift projects?

We have an OpenShift Container Platform URL that contains multiple projects, like:
project1
project2
project3
Each project contains several pods that we are currently monitoring with New Relic, like:
pod1
pod2
pod3
We are trying to implement Prometheus + Grafana for all these projects separately.
The online articles are confusing, as none of them describe a configuration like ours.
Where do we start?
What do we add to docker images?
Is there any procedure to monitor the containers using cAdvisor on OpenShift?
Some say we need to add a Maven dependency to the project. Some say we need to modify the code. Some say we need to add Prometheus annotations to the Docker containers. Some say to add node-exporter. What is node-exporter in the first place? Is it another container that looks for container metrics? Can I install it as part of my Docker images? Can anyone point me to an article or something with a similar configuration?
Your question is pretty broad, so the answer will be the same :)
Just to clarify - in your question:
implement Prometheus + Grafana for all these projects separately
Are you going to have a dedicated installation of Kubernetes and Prometheus + Grafana for each project, or are you going to have one cluster for all of them?
In general, I think, the answer should be:
Use Prometheus Operator as recommended (https://github.com/coreos/prometheus-operator)
Once the operator is installed, you'll be able to get most of your data just by config changes; for example, you will get Grafana and the node exporters in the cluster through single config changes.
In our case (we are not running OpenShift, but a vanilla k8s cluster), we run multiple namespaces (like your projects), each of which has its representation in Prometheus.
To be able to monitor a pod's application-level metrics, you need to use the Prometheus client library for your language, and to tell Prometheus to scrape the metrics (usually this is done with ServiceMonitors, as sketched below).
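As an illustration, here is a minimal ServiceMonitor sketch for the Prometheus Operator; the name, namespace, labels and port are hypothetical and must match your own Service definition:

```yaml
# servicemonitor.yaml - ask the Prometheus Operator to scrape an application
# (name, namespace, labels and port are hypothetical placeholders)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: project1-app
  namespace: project1
spec:
  selector:
    matchLabels:
      app: project1-app   # must match the labels on the application's Service
  endpoints:
    - port: metrics       # named Service port exposing the /metrics endpoint
      interval: 30s
```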
Hope this will shed some light.

Docker general configuration

For my personal development environment, I'm trying to learn Docker and the best practices for a good configuration. But now I have a lot of questions about how Docker works.
I'm on Windows, so Hyper-V is used to run MobyLinuxVM. First, I would like to know if it's possible to connect to the VM to see what is inside it. Secondly, I would like to know what this VM is used for: is it for the daemon? Thirdly, I would like to know where the daemon is set up (what is actually running), and what the job of the com.docker.service service is. Finally, is there a way, from the command line, to show the daemon's current IP and port, and an example of how the Docker CLI connects to it?
Thanks to anyone able to help me, because I'm a little bit lost.

How to use Cloudify to auto-heal/scale Docker containers

In my project, I'm using Cloudify to start and configure the Docker containers.
Now I'm wondering how to write YAML files to auto-heal/scale those containers.
My topology is like this: a Compute node contains a Docker-Container node, and the latter runs several containers.
I've noticed Cloudify performs auto-healing based on the Compute node. So can't I trigger an auto-heal workflow from the containers' statuses?
As for auto-scale, I installed the monitoring agent and configured the basic collectors. The CPU usage percentage does not seem to be able to trigger the workflow. The Cloudify docs about the Diamond plugin mention some built-in collectors. Unfortunately, I have failed to figure out how to configure the collectors.
I'm hoping for some inspiration; any opinions are appreciated. Thanks~
The Docker nodes should be in the right groups for scale and heal; a rough sketch of such a group follows below.
You can look at this example:
scale-heal example
It does exactly what you are looking for.
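To give a flavour of what "the right groups" means, here is a rough sketch of a heal group, adapted from Cloudify's public examples; the member name and metric filter are placeholders, and the exact properties depend on your Cloudify version, so treat this as a starting point rather than a working blueprint:

```yaml
# blueprint.yaml (excerpt) - group that triggers the heal workflow when a
# monitored host stops reporting (names and filters are placeholders)
groups:
  autohealing_group:
    members: [docker_host]
    policies:
      simple_autoheal_policy:
        type: cloudify.policies.types.host_failure
        properties:
          service:
            - .*docker_host.*cpu.total.system
        triggers:
          auto_heal_trigger:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [SELF, node_id] }
                diagnose_value: Heart-Beat-Failure
```

Auto-scale is wired the same way, with a group whose trigger runs the scale workflow when a threshold policy (e.g. on a CPU collector) fires; the linked example shows both.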

How to collect metrics from services running in Docker containers using collectd, Telegraf or similar tools

What is the common practice for getting metrics from services running inside Docker containers, using tools like collectd or InfluxDB's Telegraf?
These tools are normally configured to run as agents in the system and get metrics from localhost.
I have read the collectd docs, and some plugins allow getting metrics from remote systems, so I could, for example, have an NGINX container and then a collectd container to get the metrics (see the sketch below); but isn't there a simpler way?
Also, I don't want to use Supervisor or similar tools to run more than "1 process per container".
I am thinking about this in conjunction with a system like DC/OS or Kubernetes.
What do you think?
Thank you for your help.
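To make the "NGINX container plus a separate collector container" idea concrete, here is a sketch using Telegraf instead of collectd; it assumes an nginx.conf with stub_status enabled and a telegraf.conf whose [[inputs.nginx]] section points at http://nginx/nginx_status, with both files sitting next to the compose file:

```yaml
# docker-compose.yml - an nginx container plus a telegraf container
# scraping its stub_status endpoint over the compose network
version: "3"
services:
  nginx:
    image: nginx:1.25
    volumes:
      # nginx.conf must expose stub_status, e.g.
      #   location /nginx_status { stub_status; }
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  telegraf:
    image: telegraf:1.25
    volumes:
      # telegraf.conf (TOML) must enable [[inputs.nginx]] with
      #   urls = ["http://nginx/nginx_status"]
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
    depends_on:
      - nginx
```

On Kubernetes or DC/OS the same idea becomes the sidecar pattern: the collector runs next to the service in the same pod or task group and scrapes it over localhost.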
