Our team has decided to switch to Prometheus for monitoring, so I'm wondering how to set up a highly available, fault-tolerant Prometheus installation.
We have a bunch of small projects running on AWS ECS, and almost all services are containerized. So I have some questions.
Should we containerize Prometheus?
That would mean running two EC2 instances with one Prometheus container and one node_exporter container per instance, and running a highly available Alertmanager (clustered via Weave Mesh) in containers on separate instances.
Or should we just install the Prometheus binary and the other components directly on EC2 and forget about containerizing them?
Any ideas? Do any best practices exist for a highly available Prometheus setup?
Don't run node_exporter inside a container, as you'll greatly limit the number of metrics exposed.
There is also an HA guide for Prometheus setups that may be of use to you.
Also, this question would be better suited to the Prometheus users mailing list.
Running Prometheus inside a container works if you configure some additional options, especially for node_exporter. The challenge with this approach is that node_exporter gathers metrics from what it sees as the local machine, which in this case is the container, while we want it to gather metrics from the host instead. node_exporter offers flags to override the filesystem mount points from which this data is gathered.
See "Step 2" in https://www.digitalocean.com/community/tutorials/how-to-install-prometheus-using-docker-on-ubuntu-14-04 for detailed instructions.
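For illustration, here is a minimal docker-compose sketch of that approach. The image tag, mounts and flags follow common upstream examples; flag names have changed between node_exporter releases, so treat this as a starting point rather than a definitive setup:

```yaml
# Sketch: node_exporter in a container, reading metrics from the host.
# Flag names vary between node_exporter releases; verify against yours.
version: "2"
services:
  node-exporter:
    image: prom/node-exporter
    network_mode: host        # report the host's network stats
    pid: host                 # see the host's processes
    volumes:
      - /proc:/host/proc:ro   # host procfs
      - /sys:/host/sys:ro     # host sysfs
      - /:/rootfs:ro          # host root filesystem
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
```

The host mounts plus the overridden paths are what make the exporter report host metrics instead of container metrics.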
A little background first: I am building a small service (website) that provides users with various tools that run according to parameters the users specify. In my implementation, the tools amount to one big script that runs in Docker, which means my service would have to launch a new Docker container for each user.
I was thinking about using AWS Fargate or Google Cloud Run, or any other service that makes it possible to run a Docker container.
But I'm curious: what if there are 1,000 or 10,000 users, each with their own Docker container? Is that reasonable? Do the services (AWS, Google Cloud) have any restrictions, or is this simply a bad design?
Based on my understanding, you are suggesting that you instantiate a Docker container for each of your users. I think there are a couple of issues with this:
Depending on how many users you have, you get into the realm of too many containers (each container consumes resources: not just memory and CPU, but also ports from the TCP/IP pool, which can be exhausted).
Isolation: read up on why containers are not VMs.
I'm new to Docker and want to accomplish something, but I'm unsure how to orchestrate my Docker containers to do it.
What I want to do:
I have an API that, simply put, performs a calculation on a requested file. It loads the file (around 80 MB) from disk into memory, then keeps it in memory for 2 hours (caching).
I want an architecture where, for example, when a container gets overwhelmed with requests, a new one fires up; and when the original container frees its memory and the requests slow down, the extra container shuts down.
Is memory- and CPU-based container orchestration possible?
Thank You,
/Jeremy
Docker itself is not designed for orchestrating multiple containers. You need a container orchestration environment. The most popular are Kubernetes, Docker Swarm, and Apache Mesos; or, if you want to run in the cloud, something vendor-specific like AWS ECS.
Here's a good list of container clustering toolkits.
In all of these environments it's possible to configure what you described. If you're completely new to the topic, I recommend installing Docker Desktop, which comes with built-in Kubernetes, and playing with it locally.
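For a taste of what that configuration looks like, here is a minimal Kubernetes sketch for the API you describe (all names and numbers are made up for illustration). The resource requests and limits are what the scheduler and autoscaler act on:

```yaml
# Hypothetical Deployment for the file-calculation API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calc-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: calc-api
  template:
    metadata:
      labels:
        app: calc-api
    spec:
      containers:
      - name: calc-api
        image: example/calc-api:latest  # placeholder image
        resources:
          requests:
            memory: "256Mi"  # baseline; the ~80 MB cached file fits here
            cpu: "250m"
          limits:
            memory: "512Mi"  # hard cap before the container is killed
            cpu: "500m"
```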
A container orchestration system is definitely what you want in order to manage your Docker containers efficiently.
You can find a current, fairly complete list of solutions for production environments in this spreadsheet.
Tools like Kubernetes will give you a rich set of benefits, e.g.:
Provisioning and deployment of containers
Redundancy and availability of containers
Scaling up or removing containers to spread application load evenly across the host infrastructure
Allocation of resources between containers
Load balancing and service discovery between containers
Health monitoring of containers and hosts
In Kubernetes there is the Horizontal Pod Autoscaler, which automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
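As a concrete sketch, an autoscaler for a hypothetical Deployment named calc-api, such as the one sketched in the previous answer (the name, bounds and thresholds are illustrative; the autoscaling/v2 API shown here needs a reasonably recent cluster and a metrics pipeline such as metrics-server):

```yaml
# Hypothetical HPA: scale between 1 and 5 replicas on CPU and memory.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: calc-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: calc-api        # target must declare resource requests
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods above 70% of requested CPU
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 75   # add pods above 75% of requested memory
```

Utilization here is measured against the pod's resource requests, which is why the Deployment must declare them.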
To begin with, I would recommend you start with minikube.
More advanced approaches are setting up a cluster manually using kubeadm, or looking into the cloud providers' managed offerings.
Please be aware that you will not have the option to modify a cloud-based control plane. More info in my related answer.
I'm using Docker on a bare-metal server. I'm pretty happy with docker-compose for configuring and setting up applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open to recommendations for solving the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I have at most a few bare-metal servers).
I hope the question's scope is not too broad; otherwise, please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements, and it is what you need.
Let's go through them one by one.
I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I have at most a few bare-metal servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions to help you do it. Also, it is easy to deploy it on your local PC using Minikube, which will prepare a local cluster for you inside VMs or directly in your OS (Linux only).
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in that article.
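As a minimal sketch (names and values are made up), a ConfigMap attached to a container as environment variables; a Secret works the same way, with base64-encoded values:

```yaml
# Hypothetical ConfigMap with plain key/value settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  FEATURE_FLAGS: "beta-ui"
---
# Pod consuming every key of the ConfigMap as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:latest  # placeholder image
    envFrom:
    - configMapRef:
        name: app-config
```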
Moreover, tools like Helm can provide you with more automation and a range of preconfigured applications which you can install with a single command. And you can prepare your own charts for it.
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can get many kinds of information: current application status, configuration, statistics and more. Also, Kubernetes integrates with Heapster, which can be used with Grafana for powerful visualization of almost anything.
Remote container control (SSH is okay with only one server)
Kubernetes' control tool, kubectl, can fetch logs and connect to containers in the cluster without any problems. For example, to connect to a container in the pod "myapp" you just need to call kubectl exec -it myapp sh, and you will get an sh session inside the container. Also, you can reach any application inside your cluster using the kubectl proxy command, which forwards the port you need to your PC.
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future - I'm already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one; it is your choice. Independent of cluster size, you will get production-grade networking, service discovery and load balancing.
So, don't be afraid; just try it locally with Minikube. It will make many operational tasks simpler, not more complex.
What is the common practice for getting metrics from services running inside Docker containers, using tools like collectd or InfluxData's Telegraf?
These tools are normally configured to run as agents on the system and gather metrics from localhost.
I have read the collectd docs, and some plugins allow gathering metrics from remote systems, so I could have, for example, an NGINX container and a collectd container that collects its metrics. But isn't there a simpler way?
Also, I don't want to use Supervisor or similar tools to run more than one process per container.
I am thinking about this in conjunction with a system like DC/OS or Kubernetes.
What do you think?
Thank you for your help.
I have been looking into Docker containerization for a while now, but a few things are still confusing to me. I understand that containers are grouped into a cluster, and that cluster management tools like Docker Swarm, DC/OS, Kubernetes or Rancher can be used to manage Docker containers. I have been testing container cluster management with DC/OS and Kubernetes, but a few questions remain unanswered for me.
How does autoscaling at the container level help us on production servers? How does the application serve traffic from multiple containers?
Suppose we have deployed a web application using containers and it has autoscaled. How does traffic flow to the containers? How are sessions managed?
What metrics are used to autoscale containers?
Autoscaling in DC/OS (note: Mesosphere is the company, DC/OS the open source project) is described in detail in the docs. Essentially, as with Kubernetes, you can use either low-level metrics such as CPU utilization to decide when to increase the number of instances of an app, or higher-level signals such as app throughput, for example using the Microscaling approach.
Regarding your question about how the routing works (how requests are forwarded to an instance, that is, a single running container): you need a load balancer, and again DC/OS provides this out of the box. The options are detailed in the docs; essentially there are HAProxy-based north-south and iptables-based east-west (cluster-internal) load balancers.
Kubernetes has a concept called a Service. A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. Kubernetes uses Services to serve traffic from multiple containers. You can read more about Services here.
AFAIK, sessions are managed outside Kubernetes, but client-IP-based session affinity can be selected by setting service.spec.sessionAffinity to "ClientIP". You can read more about Services and session affinity here.
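Putting both points together, here is a sketch of a Service (the selector and ports are made up) that load-balances across all matching pods, with client-IP session affinity switched on:

```yaml
# Hypothetical Service spreading traffic across pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # every pod carrying this label receives traffic
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # port the container listens on
  sessionAffinity: ClientIP   # pin each client IP to a single pod
```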
Multiple metrics, such as CPU and memory, can be used for autoscaling containers. There is a good blog post you can read about autoscaling: when and how.