Measure average resource utilization of Docker containers from JMeter

I'm doing performance testing for a server that is deployed in Docker containers (there are three containers), and I'm using JMeter to conduct the tests.
How can I get the average resource utilization of the three Docker containers into JMeter during the test run?

Depending on what you're trying to achieve, from simplest to most complex:
The docker stats command (see the averaging sketch after this list)
Dedicated solutions for Docker container health monitoring, such as cAdvisor
JMeter-specific solutions for monitoring remote servers, such as:
PerfMon Plugin
SSHMon Plugin
APM tools like AppDynamics or Dynatrace
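For the simplest option, a one-liner along these lines can already produce an average; the container names (app1, app2, app3) are placeholders for your three containers:

    # Take one docker stats sample and average the CPU percentage across the three containers.
    docker stats --no-stream --format "{{.Name}} {{.CPUPerc}} {{.MemPerc}}" app1 app2 app3 \
      | awk '{gsub("%","",$2); cpu += $2; n++} END {if (n) printf "average CPU%% = %.2f\n", cpu/n}'

You could run this periodically during the test (for example from a JMeter OS Process Sampler or a plain cron/watch loop) and post-process the samples afterwards.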

Related

How to limit the number of Docker containers created per host without using schedulers like Docker Swarm or Kubernetes

A simple question about docker!
I want to deploy my containerised application in a setup where I will be launching more containers over time; right now I am not using any scheduler for resource allocation. I have configured cgroup and namespace settings, but these only help allocate system resources to the Docker Engine. What I would like is to restrict the number of containers per host. Is this possible by tuning Linux kernel parameters or Docker Engine configuration? I plan to manage this with Kubernetes eventually, but I am asking just for my own clarification.

Can I run Kubernetes and Swarm at the same time?

Pretty basic question. We have an existing swarm and I want to start migrating to Kubernetes. Can I run both using the same docker hosts?
See the official documentation for Docker for Mac at https://docs.docker.com/docker-for-mac/kubernetes/ stating:
When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
So: yes, both should be able to run in parallel.
If you're using Docker on Linux you won't have the convenient tooling of Docker for Mac/Windows, but both orchestrators should still be able to run in parallel without further issues. At the system level, details such as ports on a network interface are still shared resources, so the same port cannot be bound by different orchestrators.
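As a rough illustration (assuming swarm mode is already initialised and a Kubernetes server such as the Docker Desktop one is enabled, and with web-swarm/web-k8s as placeholder names), the same host can run one workload under each orchestrator side by side:

    docker service create --name web-swarm --publish 8081:80 nginx   # Swarm workload
    kubectl create deployment web-k8s --image=nginx                  # Kubernetes workload
    kubectl expose deployment web-k8s --type=NodePort --port=80

Just keep the published host ports distinct, since, as noted above, they are a shared resource.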

Kubernetes vs. Docker: What Does It Really Mean?

I know that Docker and Kubernetes aren't direct competitors. Docker is the container platform, and Kubernetes is a tool that coordinates and schedules containers.
What does this really mean, and how can I deploy my app on Docker for Azure?
Short answer:
Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.
Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.
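A rough sketch of that division of labour (the image name and registry are placeholders):

    docker build -t registry.example.com/my-app:1.0 .    # Docker: package the app and its dependencies
    docker push registry.example.com/my-app:1.0
    kubectl create deployment my-app --image=registry.example.com/my-app:1.0   # Kubernetes: run the containers...
    kubectl scale deployment my-app --replicas=3                               # ...and manage/scale them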
My advice: because the landscape is huge, start learning and putting the pieces of the puzzle together by following a course. Below I have added some information from:
Introduction to Kubernetes, a free online course from The Linux Foundation.
Why do we need Kubernetes (and other orchestrators) above containers?
In quality assurance (QA) environments, we can get away with running containers on a single host to develop and test applications. However, when we go to production, we do not have the same liberty, as we need to ensure that our applications:
Are fault-tolerant
Can scale, and do this on-demand
Use resources optimally
Can discover other applications automatically, and communicate with each other
Are accessible from the external world
Can update/rollback without any downtime.
Container orchestrators are the tools which group hosts together to form a cluster, and help us fulfill the requirements mentioned above.
Nowadays, there are many container orchestrators available, such as:
Docker Swarm: Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.
Kubernetes: Kubernetes was started by Google, but it is now a Cloud Native Computing Foundation project.
Mesos Marathon: Marathon is one of the frameworks to run containers at scale on Apache Mesos.
Amazon ECS: Amazon EC2 Container Service (ECS) is a hosted service provided by AWS to run Docker containers at scale on its infrastructure.
Hashicorp Nomad: Nomad is the container orchestrator provided by HashiCorp.
Kubernetes is built on Docker technology. It is an orchestration tool for Docker containers, whereas Docker is a technology for creating and deploying containers.
Docker started at a platform-as-a-service (PaaS) provider named dotCloud.
All in all, Kubernetes works with Docker containers, allowing you to achieve application portability and extensibility through container orchestration.
Docker
Easy and fast to install and configure
Functionality is provided and limited by the Docker API
Quick container deployment and scaling even in very large clusters
Automated internal load balancing through any node in the cluster
Simple shared local volumes
Kubernetes
Requires some work to get up and running
Client, API and YAML definitions are unique to Kubernetes
Provides strong guarantees about cluster state at the expense of speed
Enabling load balancing requires manual service configuration
Volumes are shared within pods
This is just a basic overview that at least explains the difference. If you want to go in depth, see my posts:
http://www.thecreativedev.com/an-introduction-to-kubernetes/
http://www.thecreativedev.com/learn-docker-works/
Docker and Kubernetes are complementary. Docker provides an open standard for packaging and distributing containerized applications, while Kubernetes provides for the orchestration and management of distributed, containerized applications created with Docker. In other words, Kubernetes provides the infrastructure needed to deploy and run applications built with Docker.

Docker telemetry and performance monitoring

What will telemetry and monitoring tools show if I launch them in (two options):
a Docker container
the host system
Will they show CPU/memory etc. usage of the container only, or of the host system?
What is the best practice: monitoring software in each container, or on the host system?
What you want to do is monitor both the host(s) and the containers running on them. A good way to do that is to run a container that collects all data on each Docker host. That is how Sematext Docker Agent runs, for example: it runs as a tiny container on each Docker host and collects all host and container metrics, events, and logs. It then parses logs, can route them, blacklist/whitelist them, auto-discovers new containers, and so on. In the end, logs end up in Logsene and metrics and events end up in SPM, which gives you a single-pane-of-glass view into all your Docker ops bits, with alerting, anomaly detection, correlation, and so on. I hope this helps and points you in the right direction.
The results should be roughly the same, because Docker containers share the host's resources (unlike virtual machines).
Putting an agent in each of your containers is not advisable: apart from the performance overhead, it is an anti-pattern in the Docker world, where each container should run a single process. It is better to run a monitoring agent on the host, or in a separate container that can be configured to extract metrics from the other containers. This is the way we work at CoScale. If you are interested, have a look at our solution for monitoring Docker.
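As an illustration of the "separate monitoring container" pattern, a cAdvisor container (mentioned elsewhere in this thread) can be started per host roughly like this; the mounts and image tag follow the cAdvisor README and may need adjusting for your setup:

    docker run -d --name=cadvisor -p 8080:8080 \
      -v /:/rootfs:ro \
      -v /var/run:/var/run:ro \
      -v /sys:/sys:ro \
      -v /var/lib/docker/:/var/lib/docker:ro \
      gcr.io/cadvisor/cadvisor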

Working monitoring solution for Docker Containers and Swarm?

I'm looking for a monitoring solution for a web application deployed as a Swarm of Docker containers spread across 7-10 VMs. The high-level requirements are:
Configurable Web and REST interface to performance dashboard
General performance metrics on VM levels (CPU/Memory/IO)
Alerts when containers and/or VMs are going offline/restart
Possibility to drill down into containers process activity when needed
Host OSes are CoreOS and Ubuntu
Any recommendations/best practices here?
NOTE: external Kibana installation is being used to collect application logs from Logstash agents deployed on VMs.
Based on your requirements, it sounds like Sematext Docker Agent would be a good fit. It runs as a tiny container on each Docker host and collects all host and container metrics, events, and logs. It can parse logs, route them, blacklist/whitelist them, has container auto-discovery, and so on. In the end, logs end up in Logsene and metrics and events end up in SPM, which gives you a single-pane-of-glass view into all your Docker ops bits, with alerting, anomaly detection, correlation, and so on.
I am currently evaluating Bosun with scollector + cAdvisor support. Looks OK so far.
Edit:
It should meet all the listed requirements and a little bit more. :)
Take a look at the Axibase Time Series Database (ATSD) / Google cAdvisor / collectd stack.
Disclosure: I work for the company that develops ATSD.
Deploy one cAdvisor container per VM to collect Docker container statistics. The cAdvisor front end allows you to view top container processes.
Deploy one ATSD container to ingest data from the multiple cAdvisor instances.
Deploy a collectd daemon on each VM to collect host statistics; configure the collectd daemons to stream data into ATSD using the write_atsd plugin.
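For the ingest side, the ATSD container can be started roughly as follows; the image name and port numbers are my recollection of the ATSD documentation, so verify them against the GitHub repository linked below:

    docker run -d --name=atsd -p 8088:8088 -p 8081:8081 axibase/atsd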
Dashboards are provided at both the host level and the container level.
API / SQL:
https://github.com/axibase/atsd/tree/master/api#api-categories
Alerts:
ATSD comes with a built-in rule engine. You can configure a rule that watches for containers that stop collecting data and triggers an email or a system command.
