The Kubernetes documentation states it's possible to use Elasticsearch and Kibana for cluster level logging.
Is it possible to do this on the instance of Kubernetes that's shipped with Docker for Windows, as per the documentation? I'm not interested in third-party Kubernetes manifests or Helm charts that mimic this behavior.
Kubernetes is an open-source system for automating deployment, scaling,
and management of containerized applications.
It is a complex environment that produces a huge amount of information about the state of the cluster: events generated during the pod lifecycle and health checks of all nodes and of the whole Kubernetes cluster.
I have no hands-on experience with Docker for Windows, so my point of view is based on Kubernetes with Linux containers.
To collect and analyze all of this information there are tools like Fluentd and Logstash, typically accompanied by Elasticsearch and Kibana.
This kind of cluster-level log aggregation can be realized within the Kubernetes orchestration framework itself, so we can expect some containers to take care of gathering the data while other containers handle the other layers of the abstraction, such as analysis and presentation.
Please note that some solutions depend on features of the cloud platform where the Kubernetes environment is running; for example, GCP offers Stackdriver Logging.
We can distinguish a few layers of log probing and analysis:
- Monitoring a pod is the most rudimentary form of viewing Kubernetes logs: you use kubectl commands to fetch log data for each pod individually (see the example after this list). These logs are stored with the pod, and when the pod dies, the logs die with it.
- Monitoring a node. The collected logs for each node are stored in a JSON file, which can get really large. Node-level logs are more persistent than pod-level ones.
- Monitoring a cluster. Kubernetes doesn't provide a default logging mechanism for the entire cluster, but leaves this up to the user and third-party tools to figure out. One approach is to build on node-level logging: assign an agent to collect logs on every node and combine their output.
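As a quick illustration of the pod-level approach from the first item (with the typical node-level log location for comparison), using a placeholder pod name:

```sh
# Pod level: fetch logs for one pod (add -c <container> for multi-container pods)
kubectl logs <pod-name>
kubectl logs <pod-name> --previous    # logs of the previous, crashed instance

# Node level: on the node itself, container logs usually live under
#   /var/log/containers/  (symlinks into /var/log/pods/...)
```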
As you can see, there is a gap at the cluster level, so there is good reason to aggregate the existing logs and offer a practical way to analyze and present the results.
For node-level logging, a popular log aggregator is Fluentd. It is implemented as a Docker container and runs in parallel with the pod lifecycle. Fluentd does not store the logs itself;
instead, it sends them to an Elasticsearch cluster that stores the log information on a replicated set of nodes.
In other words, Elasticsearch is used as the data store for the logs aggregated from the worker nodes.
This aggregator cluster consists of a pod with two instances of Elasticsearch.
The aggregated logs in the Elasticsearch cluster can then be viewed using Kibana.
Kibana presents a web interface, which provides a more convenient, interactive method for querying the ingested logs.
The Kibana pods are also monitored by the Kubernetes system to ensure they are running healthily and that the expected number of replicas is present.
The lifecycle of these pods is controlled by a replication-controller specification, similar in nature to how the Elasticsearch cluster is configured.
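A minimal sketch of the node-level collector idea described above: one Fluentd pod per node, reading the host's log directory and shipping entries to Elasticsearch. The image tag, Elasticsearch Service name and namespace are illustrative, not taken from any official manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # Illustrative image; the fluentd-kubernetes-daemonset project
        # provides Elasticsearch-enabled variants
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"   # assumed Elasticsearch Service name
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log      # read the node's container logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```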
Back to your question: I'm pretty sure that the approach described above also works with Kubernetes on Docker for Windows. On the other hand, I think a cloud platform or an on-premises Linux environment is a more natural home for these tools.
This answer was inspired by the Cluster-level Logging of Containers with Containers and Kubernetes Logging articles. I also like the Configuring centralized logging from Kubernetes page and used An Introduction to logging in Kubernetes when I was getting started with Kubernetes.
I'm new to Docker and want to accomplish something, but I am unsure how to orchestrate my Docker containers to do this.
What I want to do:
I have an API that, simply put, does a calculation from a requested file. It loads the file (around 80 MB) from disk into memory, then keeps it in memory for 2 hours (caching).
I want an architecture where, for example, when a container gets overwhelmed with requests a new one fires up, and when the original container frees its memory and the requests slow down, the extra container shuts down.
Is memory- and CPU-based container orchestration possible?
Thank You,
/Jeremy
Docker itself is not dedicated to orchestrating multiple containers. You need to use a container orchestration environment. The most popular are Kubernetes, Docker Swarm and Apache Mesos; or, if you want to run in the cloud, a vendor-specific one such as AWS ECS.
Here's a good list of container clustering toolkits.
In all these environments it's possible to configure what you described. If you're completely new to the topic, then I recommend installing Docker Desktop, which comes with built-in Kubernetes, and playing with it locally.
A container orchestration system is indeed what you need to be able to manage your Docker containers efficiently.
You can find a current, fairly complete list of solutions for production environments in this spreadsheet.
Tools like Kubernetes will give you a rich set of benefits, e.g.:
- Provisioning and deployment of containers
- Redundancy and availability of containers
- Scaling up or removing containers to spread application load evenly across the host infrastructure
- Allocation of resources between containers
- Load balancing and service discovery between containers
- Health monitoring of containers and hosts
In Kubernetes there is the Horizontal Pod Autoscaler, which
automatically scales the number of pods in a replication controller,
deployment, replica set or stateful set based on observed CPU
utilization (or, with custom metrics support, on some other
application-provided metrics). Note that Horizontal Pod Autoscaling
does not apply to objects that can’t be scaled, for example,
DaemonSets.
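For example, a minimal way to try this, assuming a Deployment named my-api (the name and thresholds are illustrative):

```sh
# Scale my-api between 1 and 5 replicas, targeting ~70% average CPU utilization
kubectl autoscale deployment my-api --cpu-percent=70 --min=1 --max=5

# Inspect the autoscaler
kubectl get hpa
```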
To begin with, I would recommend starting with minikube.
More advanced options are setting up a cluster manually with kubeadm, or looking into the cloud providers (see the commands sketched below).
Please be aware that with a cloud-based control plane you will not have the option to modify it. More info in my related answer.
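The commands for those two options look roughly like this (the kubeadm join arguments are printed by kubeadm init, so the values below are placeholders):

```sh
# Local single-node cluster for experimenting
minikube start

# Manually managed cluster with kubeadm
kubeadm init                                   # on the control-plane node
kubeadm join <control-plane-endpoint> --token <token> \
    --discovery-token-ca-cert-hash <hash>      # on each worker node
```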
I've read the Kubernetes and minikube docs, and it's not explicit whether the minikube implementation supports automatic log rotation (deleting pod logs periodically) in order to prevent storage from being overloaded by the logs.
I'm not talking about the various centralized logging stacks used to collect, persist and analyze logs, but about the standard pod log management of minikube.
The official Kubernetes documentation specifies:
An important consideration in node-level logging is implementing log rotation, so that logs don’t consume all available storage on the node. Kubernetes currently is not responsible for rotating logs, but rather a deployment tool should set up a solution to address that. For example, in Kubernetes clusters, deployed by the kube-up.sh script, there is a logrotate tool configured to run each hour. You can also set up a container runtime to rotate application’s logs automatically, for example by using Docker’s log-opt. In the kube-up.sh script, the latter approach is used for COS image on GCP, and the former approach is used in any other environment. In both cases, by default rotation is configured to take place when log file exceeds 10MB.
Of course, if we're not on GCP, don't use kube-up.sh to start the cluster (or don't use Docker as the container tool), but instead spin up our cluster with minikube, what happens?
As per the implementation:
Minikube now uses systemd, which has built-in log rotation.
Refer to this issue.
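For reference, the Docker log-opt approach mentioned in the quoted documentation is usually configured in the daemon's /etc/docker/daemon.json; the values below are illustrative:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```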
I am running a Kubernetes cluster on GKE. I'm running a monolithic application and am now migrating to microservices, so both run in parallel on the cluster.
The monolithic application is a simple Python app taking around 200 MB of memory.
The K8s cluster is a simple single-node GKE cluster with 15 GB of memory and 4 vCPUs.
Now I am thinking of applying HPA to my microservices and the monolithic application.
On the single node I have also installed the Graylog stack (Elasticsearch, MongoDB and Graylog pods), separated into the devops namespace.
In another namespace, monitoring, there are Grafana, Prometheus and Alertmanager running.
There is also an ingress controller and cert-manager running.
In the default namespace there is another Elasticsearch for application use, plus Redis and RabbitMQ. These are all single pods, of type StatefulSet or Deployment, with volumes.
Can someone suggest how to add a node pool on GKE and auto-scale? When I added a node to the pool and deleted the old node from the GCP console, the whole cluster restarted and the services went down for a while.
I am also thinking of using affinity/anti-affinity, so can someone suggest how to divide the infrastructure and implement HPA?
From the wording in your question, I suspect that you want to move your current workloads to the new pool without disruption.
Since this action represents a voluntary disruption, you can start by defining a PodDisruptionBudget to control the number of pods that can be evicted in this voluntary disruption operation:
A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.
The settings in the PDB depend on your application and your business needs; for a reference on the values to apply, you can check this.
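A minimal PDB sketch, assuming your application's pods carry the label app: web (the label and the minAvailable value are illustrative):

```yaml
apiVersion: policy/v1          # policy/v1beta1 on older clusters
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 1              # or use maxUnavailable, depending on your needs
  selector:
    matchLabels:
      app: web
```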
Following this, you can drain the nodes where your application is scheduled, since it will be "protected" by the budget. drain uses the Eviction API instead of deleting the pods directly, which should make the evictions graceful.
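The drain step itself, with the node name as a placeholder, typically looks like this:

```sh
kubectl cordon <old-node-name>    # stop scheduling new pods onto the node
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data
# (--delete-emptydir-data is called --delete-local-data on older kubectl versions)
```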
Regarding affinity, I'm not sure how it fits into the aforementioned goal you're trying to achieve. However, there is an answer on that particular point in the comments.
I have been looking into Docker containerization for a while now, but a few things are still confusing to me. I understand that containers are grouped into a cluster, and that cluster management tools like Docker Swarm, DC/OS, Kubernetes or Rancher can be used to manage Docker containers. I have been testing out container cluster management with DC/OS and Kubernetes, but a few questions remain unanswered for me.
How does autoscaling at the container level help us on production servers? How does the application serve traffic from multiple containers?
Suppose we have deployed a web application using containers and they have auto scaled. How does the traffic flow to the containers? How are the sessions managed?
What metrics are calculated for autoscaling containers?
Autoscaling in DC/OS (note: Mesosphere is the company, DC/OS the open source project) is described in detail in the docs. Essentially, it works the same as with Kubernetes: you can use either low-level metrics such as CPU utilization to decide when to increase the number of instances of an app, or higher-level signals such as app throughput, for example using the Microscaling approach.
Regarding your question about how routing works (how requests are forwarded to an instance, that is, a single running container): you need a load balancer, and again DC/OS provides this out of the box. The options are detailed in the docs; essentially, HAProxy-based north-south or IPtables-based east-west (cluster-internal) load balancers.
Kubernetes has a concept called a Service. A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them. Kubernetes uses Services to serve traffic from multiple containers. You can read more about Services here.
AFAIK, sessions are managed outside Kubernetes, but client-IP-based session affinity can be selected by setting service.spec.sessionAffinity to "ClientIP" (see the sketch below). You can read more about Services and session affinity here.
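A minimal Service sketch with client-IP session affinity enabled (the name, labels and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP   # route a given client IP to the same pod
```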
Multiple metrics, such as CPU and memory, can be used for autoscaling containers (see the example below). There is a good blog post you can read about autoscaling: when and how.
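As an illustration, a HorizontalPodAutoscaler combining CPU and memory metrics might look like this (the Deployment name, bounds and targets are illustrative; older clusters use the autoscaling/v2beta2 API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out above ~70% average CPU
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 500Mi       # scale out above ~500 MiB average memory
```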
I know that GKE is driven by Kubernetes underneath. But what I still don't get is which part is handled by GKE and which by k8s in the layering. The main purpose of both, as it appears to me, is to manage containers in a cluster. Basically, I am looking for a simpler explanation with an example.
GKE is managed/hosted Kubernetes (i.e. it is managed for you, so you can concentrate on running your pods/container applications).
Kubernetes handles:
- Running pods: scheduling them on nodes and guaranteeing the number of replicas per the Replication Controller settings (i.e. relaunching pods if they fail, relocating them if the node fails)
- Services: proxying traffic to the right pod wherever it is located
- Jobs
In addition, there are several 'add-ons' to Kubernetes, some of which are part of what makes GKE:
- DNS (you can't really live without it, even though it's an add-on)
- Metrics monitoring: with InfluxDB and Grafana
- Dashboard
None of these come out of the box; although they are fairly easy to set up, you need to maintain them.
There is no real 'logging' add-on, but there are various projects to do this (using Logspout, Logstash, Elasticsearch, etc.).
In short, Kubernetes does the orchestration; the rest are services that run on top of Kubernetes.
GKE brings you all these components out of the box, and you don't have to maintain them. They're set up for you, and they're more 'integrated' with the Google portal.
One important thing that everyone needs is the LoadBalancer part:
- Since Pods are ephemeral containers that can be rescheduled anywhere at any time, they are not static, so ingress traffic needs to be managed separately.
This can be done within Kubernetes by using a DaemonSet to pin a Pod to a specific node and using a hostPort for that Pod to bind to the node's IP (a minimal sketch is below).
Obviously this lacks fault tolerance, so you could run several of them and do DNS round-robin load balancing.
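A minimal sketch of that DaemonSet/hostPort approach (the name and image are stand-ins for your own ingress or proxy container):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      containers:
      - name: proxy
        image: nginx            # stand-in for your ingress/proxy image
        ports:
        - containerPort: 80
          hostPort: 80          # binds directly to the node's IP on port 80
```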
GKE takes care of all this too with external Load Balancing.
(On AWS, it's similar, with ALB taking care of load balancing in Kubernetes)
GKE (Google Container Engine) is just a container platform, which Kubernetes can manage. It is not a Kubernetes-like system with "differences".
As mentioned in "Docker and Kubernetes and AppC" (May 2015, so this can change):
Docker is currently the only supported runtime in GKE (Google Container Engine) our commercial containers product, and in GAE (Google App Engine), our Platform-as-a-Service product.
You can see Kubernetes used on GKE in this example: "Spinning Up Your First Kubernetes Cluster on GKE" from Rimantas Mocevicius.
The gcloud tooling will still issue Kubernetes commands behind the scenes.
GKE organizes its platform through the Kubernetes master:
Every container cluster has a single master endpoint, which is managed by Container Engine.
The master provides a unified view into the cluster and, through its publicly-accessible endpoint, is the doorway for interacting with the cluster.
The managed master also runs the Kubernetes API server, which services REST requests, schedules pod creation and deletion on worker nodes, and synchronizes pod information (such as open ports and location) with service information.
In short, without getting into technical details,
GKE is managed Kubernetes, similar to how Google's Cloud Composer is managed Apache Airflow and Cloud Dataflow is managed Apache Beam.
So, some of Google Cloud Platform's services (GKE, Cloud Composer, Cloud Dataflow) are managed implementations of various open source technologies (Kubernetes, Airflow, Beam).