How do I implement Prometheus monitoring in OpenShift projects?

We have an OpenShift Container Platform URL that contains multiple projects, like
project1
project2
project3
Each project contains several pods that we are currently monitoring with New Relic, like
pod1
pod2
pod3
We are trying to implement Prometheus + Grafana for all these projects separately.
The online articles are confusing because none of them describe a configuration like the one we have now.
Where do we start?
What do we add to the Docker images?
Is there a procedure to monitor the containers using cAdvisor on OpenShift?
Some say we need to add a Maven dependency to the project. Some say we need to modify the code. Some say we need to add Prometheus annotations to the Docker containers. Some say to add node-exporter. What is node-exporter in the first place? Is it another container that looks for container metrics? Can I install it as part of my Docker images? Can anyone point me to an article or something with a similar configuration?

Your question is pretty broad, so the answer will be the same :)
Just to clarify - in your question:
implement Prometheus + Grafana for all these projects separately
Are you going to have a dedicated installation of Kubernetes and Prometheus + Grafana for each project, or are you going to have one cluster for all of them?
In general, I think, the answer should be:
Use Prometheus Operator as recommended (https://github.com/coreos/prometheus-operator)
Once the operator is installed, you'll be able to get most of your data just by config changes - for example, you will get Grafana and the Node Exporters in the cluster with a single config change
In our case (we are not running OpenShift, but a vanilla k8s cluster) we run multiple namespaces (like your projects), which have their representation in Prometheus
To be able to monitor your pods' application metrics, you need to use a Prometheus client library for your language and tell Prometheus to scrape those metrics (usually this is done with ServiceMonitors).
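For illustration, a minimal ServiceMonitor could look like the sketch below. The names, labels and port are assumptions, not from the question - they have to match your own Service and your Prometheus Operator's serviceMonitorSelector.

```yaml
# Hypothetical ServiceMonitor for an app in "project1"; every name and label
# here is a placeholder that must match your actual Service and operator setup.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: project1-app
  namespace: project1
  labels:
    release: prometheus          # must match the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: project1-app          # labels on the Service exposing your pods
  endpoints:
    - port: metrics              # name of the Service port serving /metrics
      interval: 30s
```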
Hope this will shed some light.

Related

How to create a single project out of multiple docker images

I have been working on a project where I have had several docker containers:
Three OSRM routing servers
Nominatim server
Container where the webpage code is with all the needed dependencies
So, now I want to prepare a version that a user could download and run. What is the best practice to do such a thing?
Firstly, I thought maybe to join everything into one container, but I have read that it is not recommended to run several processes in one container. Secondly, I thought about wrapping everything up in a VM, but that is not really a "program" that a user can launch. And my third idea was to write a script that would download each container from Docker Hub separately and launch the webpage. But I am not sure if that is best practice, or maybe there are better ideas.
When you need to deploy a full project composed of several containers, you may use a specialized tool.
A well-known tool for single-server usage is docker-compose:
Compose is a tool for defining and running multi-container Docker applications
https://docs.docker.com/compose/
You could provide your users with:
a docker-compose file (a sketch follows below)
your application's Docker images (e.g. through Docker Hub)
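For example, a docker-compose file for the setup described in the question could look roughly like this (a sketch only - all image names, tags and ports below are placeholders, not the asker's actual images):

```yaml
version: "3"
services:
  osrm-car:                  # one of the three OSRM routing servers
    image: yourrepo/osrm-car
    ports:
      - "5000:5000"
  osrm-bike:
    image: yourrepo/osrm-bike
    ports:
      - "5001:5000"
  osrm-foot:
    image: yourrepo/osrm-foot
    ports:
      - "5002:5000"
  nominatim:                 # the Nominatim geocoding server
    image: yourrepo/nominatim
    ports:
      - "8080:8080"
  web:                       # the webpage with its dependencies
    image: yourrepo/webpage
    ports:
      - "80:80"
    depends_on:
      - osrm-car
      - osrm-bike
      - osrm-foot
      - nominatim
```

With the images published on Docker Hub, your users would only need to run docker-compose up -d to pull and start everything.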
Regarding clusters/cloud, we talk more about orchestrators like Docker Swarm, Kubernetes or Nomad.
Kubernetes's documentation is the following:
https://kubernetes.io/

Dockerfile/ Docker Compose for multiple projects with different ports

I need your suggestions to architect a solution. I have a solution (.NET Core) with 5 API projects.
Solution
Project 1
Project 2
Project 3
Project 4
Project 5
which run on different ports, like
http://localhost:10500/api/values ,
http://localhost:10501/api/values .. so on
http://localhost:10504/api/values
(Only the port number changes.)
The requirement is to dockerize this solution and run it in a Kubernetes cluster via Kube Ingress. What is the better way of implementing this?
1) Create one image, deploy the solution and expose multiple ports?
2) Use Docker Compose: build project 1 and expose its port, build project 2 and expose its port, etc.?
Any ideas would be much appreciated.
I think the right solution depends on the requirements. Choosing option 1 would have these consequences:
one container with all services inside it; if it crashes, all services are down
scaling: each service will be scaled up, even if only one of them has to handle most of the traffic.
updates: changes in one service result in the »redeployment« of all the others within the container.
monitoring: metrics can be gathered per running container; if all services run in one container you can only monitor them as one, or you need to implement your own way to separate the logs.
In short: if you use docker-compose, you can use a dedicated image for each service, which gives you much finer granularity for updates, scaling and monitoring.
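A minimal sketch of that option 2 layout, assuming each project folder contains its own Dockerfile and the apps listen on port 80 inside their containers (both assumptions - adjust to your solution):

```yaml
version: "3"
services:
  project1:
    build: ./Project1        # each project gets its own Dockerfile and image
    ports:
      - "10500:80"           # hostPort:containerPort
  project2:
    build: ./Project2
    ports:
      - "10501:80"
  project3:
    build: ./Project3
    ports:
      - "10502:80"
  # ...and so on for Project4 and Project5
```

In Kubernetes the same idea applies: one image and Deployment per project, with the Ingress routing to each project's Service by path or host.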

how to use cloudify to auto-heal/scale docker containers

In my project, I'm using cloudify to start and configure the docker containers.
Now I'm wondering how to write YAML files to auto-heal/scale those containers.
My topology is like this: a Compute node contains a Docker-Container node, and in the latter runs several containers.
I've noticed Cloudify does auto-healing based on the Compute node. So can't I trigger an auto-heal workflow based on the containers' statuses?
And for auto-scale, I installed the monitoring agent and configured the basic collectors. The CPU usage percentage doesn't seem to be able to trigger the workflow. The Cloudify docs about the Diamond plugin mention some built-in collectors. Unfortunately, I failed to figure out how to configure the collectors.
I'm hoping for some inspiration. Any opinions are appreciated. Thanks!
The Docker nodes should be in the right groups for the scale and heal policies.
You can look at this example:
scale-heal example
It does exactly what you are looking for.
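For orientation, the groups/policies section of that example follows roughly this shape. This is only a sketch: the node names, the metric regex and the parameters below are placeholders, so take the exact values from the linked blueprint.

```yaml
groups:
  docker_host_group:
    members: [docker_host]          # the Compute node hosting the containers
    policies:
      autoheal_policy:
        type: cloudify.policies.types.host_failure
        properties:
          service:
            - .*docker_host.*cpu.total.system   # Diamond metric used as the liveness signal
        triggers:
          heal_trigger:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [SELF, node_id] }
```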

Can I use docker to automatically spawn instances of whole microservices?

I have a microservice with about 6 separate components.
I am looking to sell instances of this microservice to people who need dedicated versions of it for their project.
Docker seems to be the solution to doing this as easily as possible.
What is still very unclear to me: is it possible to use Docker to deploy whole instances of microservices within a cloud service like GCP or AWS?
Is this something more specific to the Cloud provider itself?
Basically, in the end I'd like to be able to, via code, start up a whole new instance of my microservice within its own network, with each component able to speak to the others.
One big problem I see is assigning IPs to the containers so that they will find each other, independent of which network they are in. Is this even possible, or is this not yet feasible with current cloud technology?
Thanks a lot in advance, I know this is a big one...
This is definitely feasible and is nowadays one of the most popular ways to ship and deploy applications. However, the procedure to deploy varies slightly based on the cloud provider that you choose.
The good news is that the packaging of your microservices with Docker is independent from the cloud provider you use. You basically need to package each component in a Docker image, and deploy these images to a cloud platform.
All popular cloud platforms nowadays support deployment of Docker containers. In addition, you can use orchestration frameworks such as Docker Swarm or Kubernetes on these cloud platforms to orchestrate the microservices deployment.
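As a rough sketch of what one orchestrated instance can look like with Docker Compose / Swarm (the service and image names below are made up), each component reaches the others by service name through the stack's internal DNS, so you do not have to assign fixed IPs yourself:

```yaml
version: "3"
services:
  api:
    image: yourrepo/api
    environment:
      - DB_HOST=db           # other components are reached by service name, not IP
    ports:
      - "8080:8080"
  worker:
    image: yourrepo/worker
    environment:
      - DB_HOST=db
  db:
    image: postgres:11
```

Each stack you deploy (for example with docker stack deploy) gets its own network with this kind of name-based discovery; on Kubernetes a namespace per customer plays a similar role, which is how you can spin up dedicated instances via code.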

Any recommendation about run Prometheus in docker container or not?

Our team decided to switch to Prometheus monitoring, so I wonder how to set up a highly available, fault-tolerant Prometheus installation.
We have a bunch of small projects, running on AWS ECS, almost all services are containerized. So I have some questions.
Should we containerize Prometheus?
That would mean running 2 EC2 instances with one Prometheus container and one node_exporter per instance, and running a highly available Alertmanager (clustered via Weave Mesh) in containers on separate instances.
Or should we just install the Prometheus binary and other components directly on EC2 and forget about containerizing them?
Any ideas? Do any best practices exist for a highly available Prometheus setup?
Don't run node_exporter inside of a container as you'll greatly limit the number of metrics exposed.
There is also an HA guide for Prometheus setups that may be of use to you.
Also, this question would be better suited to the Prometheus users mailing list.
Running Prometheus inside a container works if you configure some additional options, especially for the node_exporter. The challenges of this approach relate to the fact that node_exporter gathers metrics from what it sees as the local machine - a container in this case - and we want it to gather metrics from the host instead. Prometheus offers options to override the filesystem mount points from which this data is gathered.
See "Step 2" in https://www.digitalocean.com/community/tutorials/how-to-install-prometheus-using-docker-on-ubuntu-14-04 for detailed instructions.
