We have OpenWhisk set up on-premises on Docker. I want to monitor it with Prometheus and Grafana. How can I integrate Kamon with my cluster and Prometheus? As a DevOps guy, I need to monitor every single point of my OpenWhisk cluster.
Also, there is a Docker image "kamon/grafana_graphite" which may help with cluster monitoring, but there is no documentation on how to connect it to my OpenWhisk cluster.
You can find the documentation inside the docs directory:
https://github.com/apache/openwhisk/blob/master/docs/metrics.md
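If you decide to try the kamon/grafana_graphite image, here is a rough, unverified sketch of how it could be wired into a Docker-based deployment. METRICS_KAMON, METRICS_KAMON_TAGS and METRICS_LOG are the switches documented on that metrics page; the published ports and the CONFIG_kamon_statsd_* variables (which rely on OpenWhisk's images translating CONFIG_a_b_c into the Java property a.b.c) are assumptions you should verify against the image README and your OpenWhisk version. The same metrics page also covers exposing metrics in Prometheus format, which is the more direct route if you would rather have Prometheus scrape the controller than go through StatsD/Graphite.

```yaml
# Hedged sketch: docker-compose style fragment adding the all-in-one
# Grafana + Graphite + StatsD image next to an existing OpenWhisk setup.
version: "3"
services:
  metrics-ui:
    image: kamon/grafana_graphite
    ports:
      - "8080:80"        # Grafana web UI (assumed default port inside the image)
      - "8125:8125/udp"  # StatsD ingest
      - "2003:2003"      # Graphite/Carbon line protocol
  controller:
    # ... your existing OpenWhisk controller definition ...
    environment:
      METRICS_KAMON: "true"        # documented in docs/metrics.md
      METRICS_KAMON_TAGS: "false"
      METRICS_LOG: "true"
      # Assumed mapping: CONFIG_kamon_statsd_hostname -> -Dkamon.statsd.hostname
      CONFIG_kamon_statsd_hostname: "metrics-ui"
      CONFIG_kamon_statsd_port: "8125"
```

Once metrics arrive, you add the Graphite (or Prometheus) data source in Grafana and build dashboards for the controller and invoker metrics listed in that document.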
I have a project to containerize several applications (GitLab, Jenkins, WordPress, a Python Flask app...). Currently each application runs on its own Compute Engine VM at GCP. My goal is to move everything to a cluster (Swarm or Kubernetes).
However I have different questions about Docker Swarm on Google Cloud Platform:
How can I expose my Python application to the outside (HTTP load balancer) while keeping the other applications available only inside my private VPC?
From what I've seen on the internet, I have the impression that Docker Swarm is very little used. Should I go for a Kubernetes cluster instead? (I have good knowledge of Docker/Kubernetes.)
It is difficult to find information about Docker Swarm on cloud providers. What would an architecture with Docker Swarm on GCP look like?
Thanks for your help.
I'd create an instance template and, from that, an instance group for all the VMs that will host the Docker Swarm, plus a separate instance or instance group for the internal-only applications, so that there is a strict separation which can then be used to route internal and external traffic accordingly (this would apply in any case). Google Kubernetes Engine is roughly the same as such an instance group, but on Google-managed infrastructure. See the tutorial; there's not much difference, except that GKE integrates better with gcloud and kubectl. Unless you want or need to maintain the underlying infrastructure yourself, GKE is probably less effort.
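To make the external-vs-internal split concrete on GKE, a hedged sketch of the two Service definitions might look like the following; the names flask-app and gitlab are placeholders, and the internal load balancer annotation differs between GKE versions, so check the current GCP documentation. With plain Docker Swarm on Compute Engine you would achieve the same split with firewall rules and separate load balancers in front of the external and internal instance groups, as described above.

```yaml
# External exposure: GKE provisions a public load balancer for this Service.
apiVersion: v1
kind: Service
metadata:
  name: flask-app                  # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: flask-app
  ports:
    - port: 80
      targetPort: 5000             # assumed Flask container port
---
# Internal-only exposure: reachable from inside the VPC, not from the internet.
apiVersion: v1
kind: Service
metadata:
  name: gitlab                     # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # annotation varies by GKE version
spec:
  type: LoadBalancer
  selector:
    app: gitlab
  ports:
    - port: 80
      targetPort: 80
```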
What you are basically asking is:
Kubernetes vs. Docker Swarm: What’s the Difference?
Docker Swarm vs Kubernetes: A Helpful Guide for Picking One
Kubernetes vs. Docker: What Does it Really Mean?
Docker Swarm vs. Kubernetes: A Comparison
Kubernetes vs Docker Swarm
I'm currently working on making a game where the servers need to be replicated on pods inside the Minikube cluster (I'm using Minikube since it's just a school project). However, I cannot seem to figure out how to make these game servers accessible from other networks. I have tried making a NodePort service that points to the deployment of my server image, and when using the Minikube cluster IP + my service's port, I can connect fine.
What would be the best approach to solve this issue?
By default, a Pod is only accessible by its internal IP address within the Kubernetes cluster. To make your container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service, in this case of type LoadBalancer.
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
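As an illustration, a Service of this type might look like the sketch below (the name, labels, and port are placeholders to adapt to your game server deployment); on Minikube you would then reach it via minikube service game-server or by running minikube tunnel.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-server           # placeholder name
spec:
  type: LoadBalancer          # on Minikube, exposed via `minikube service` / `minikube tunnel`
  selector:
    app: game-server          # must match the labels on your game server Pods
  ports:
    - port: 7777              # example game port
      targetPort: 7777
```

Keep in mind that even then the address is local to the machine running Minikube, so reaching it from another network still requires forwarding a port on that host.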
Minikube is great for local setups but not for real clusters. It spins up only a one-node cluster for development and testing, so it is not suitable for a multi-server environment.
Given that you want to experiment with scalability across nodes, it is worth creating a proper Kubernetes cluster using Kubespray or kubeadm.
If you want to run an actual local Kubernetes cluster, use local VMs and then create the cluster using these tools for automated deployment.
Kubeadm
The kubeadm tool is a good choice if you are new to cloud technologies and are running a Kubernetes cluster for the first time. You can install and use kubeadm on various machines: your laptop, a set of cloud servers, and more. Whether you're deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform.
kubeadm encodes domain knowledge of Kubernetes cluster life-cycle management, including self-hosted layouts, dynamic discovery services, and so on.
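To give a flavour of how kubeadm plugs into such provisioning systems, below is a hedged sketch of a configuration file that Ansible or Terraform could template and pass to kubeadm init --config; the apiVersion string and defaults vary between kubeadm releases, so compare it with the output of kubeadm config print init-defaults.

```yaml
# cluster.yaml - minimal, unverified kubeadm init configuration.
apiVersion: kubeadm.k8s.io/v1beta3     # version string depends on your kubeadm release
kind: ClusterConfiguration
kubernetesVersion: v1.27.0             # placeholder Kubernetes version
controlPlaneEndpoint: "k8s-api.example.internal:6443"   # placeholder endpoint
networking:
  podSubnet: 10.244.0.0/16             # example pod CIDR matching common CNI defaults
```

Running kubeadm init --config cluster.yaml on the first node then prints the kubeadm join command for the remaining nodes.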
Kubespray
Kubespray supports deployments on AWS, Google Compute Engine, Microsoft Azure, OpenStack, and bare metal. It enables deployment of highly available Kubernetes clusters. It supports a variety of Linux distributions and CI, and it supports kubeadm.
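As a rough idea of what Kubespray needs, the cluster is described with an Ansible inventory (e.g. inventory/mycluster/hosts.yaml) along the lines of the sketch below. Host names and addresses are placeholders and the group names have changed between Kubespray releases, so start from the sample inventory shipped with the release you use; the cluster is then built with ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml.

```yaml
# hosts.yaml - hedged sketch of a Kubespray inventory (group names vary by release).
all:
  hosts:
    node1:
      ansible_host: 192.0.2.10   # placeholder addresses
      ip: 192.0.2.10
    node2:
      ansible_host: 192.0.2.11
      ip: 192.0.2.11
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```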
There isn't really a good way to do this. Minikube is for local development. If you want to run an actual local Kubernetes cluster, start local VMs directly and use something like kubeadm or Kubespray.
I have a few servers which are on-premises. I need to monitor Nginx and Docker containers using Stackdriver. I installed the monitoring agent from GCP, and I'm having difficulties configuring it. I'm actually new to monitoring and logging in GCP; if someone can help me or post some related articles, it would be very helpful. Thanks.
I am new to cluster container management, and this question is probably basic for all the freshers over here.
I have read some documentation, but my understanding is still not clear, so any leads to help me understand would be appreciated.
Somewhere it is mentioned that Minikube is used to run Kubernetes locally. So if I want cluster management on my four-node Raspberry Pi setup, Minikube is not an option?
Does Minikube support only a one-node system?
Docker Compose is a set of instructions and a YAML file to configure and start multiple Docker containers. Can we use this to start containers on different hosts? Then, for simple orchestration where I need to call a container on a second host, I don't need any cluster management, right?
What is the link between Docker Swarm and Kubernetes? Both are independent cluster management tools. Is it efficient to use Kubernetes on a Raspberry Pi? Any issues? I was told that Kubernetes on a single node consumes all the memory and CPU. Is that true?
Is there any other cluster management option for the Raspberry Pi?
I think answers to these 4-5 questions will help me understand better.
Presuming that your goal here is to run a set of containers over a number of different Raspberry Pi based nodes:
Minikube isn't really appropriate. It starts a single virtual machine on Windows, macOS, or Linux and installs a Kubernetes cluster into it. It's generally used by developers to quickly start up a cluster on their laptops or desktops for development and testing purposes.
Docker Compose is a system for managing sets of related containers. So for example if you had a web server and database that you wanted to manage together you could put them in a single Docker Compose file.
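For instance, the web-server-plus-database case could look roughly like this (image names and the credential are placeholders):

```yaml
# docker-compose.yml - two related containers described in one file.
version: "3"
services:
  web:
    image: nginx:alpine              # placeholder web server image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15               # placeholder database image
    environment:
      POSTGRES_PASSWORD: example     # placeholder credential
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

A single docker-compose up -d then starts (and docker-compose down stops) both containers together.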
Docker Swarm is a system for managing sets of containers across multiple hosts. It's essentially an alternative to Kubernetes. It has fewer features than Kubernetes, but it is much simpler to set up.
If you want a really simple multi-node container cluster, I'd say that Docker Swarm is a reasonable choice. If you explicitly want to experiment with Kubernetes, I'd say that kubeadm is a good option here. Kubernetes in general has higher resource requirements than Docker Swarm, so it could be somewhat less suited to the Pi, although I know people have successfully run Kubernetes clusters on Raspberry Pis.
Docker Compose
A utility to start multiple Docker containers on a single host using a single docker-compose up. This makes it easier to start multiple containers at once, rather than having to do multiple docker run commands.
Docker Swarm
A native container orchestrator for Docker. Docker Swarm allows you to create a cluster of Docker containers running on multiple machines. It provides features such as replication, scaling, and self-healing (i.e., starting a new container when one dies).
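To give an idea of the replication part, a Swarm stack reuses the Compose file format with an extra deploy section; the service and image here are placeholders. After docker swarm init on the first node (and docker swarm join on the others), the stack is started with docker stack deploy -c stack.yml mystack.

```yaml
# stack.yml - Compose-format file consumed by `docker stack deploy` on a Swarm.
version: "3"
services:
  web:
    image: nginx:alpine          # placeholder image
    ports:
      - "80:80"
    deploy:
      replicas: 3                # Swarm keeps three copies running across the cluster
      restart_policy:
        condition: on-failure    # a failed container is replaced automatically
```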
Kubernetes
Also a container orchestrator. Kubernetes and Docker Swarm can be considered alternatives to one another. They both handle managing and starting containers in a cluster.
Minikube
Creating a real Kubernetes cluster requires having multiple machines, either on-premises or on a cloud platform. This is not always convenient if someone is new to Kubernetes and just trying to learn by playing around with it. To solve that, Minikube lets you start a very basic Kubernetes cluster that consists of a single VM on your machine, which you can use to play around with Kubernetes.
Minikube is not for a production or multi-node cluster. There are many tools that can be used to create a multi-node Kubernetes cluster, such as kubeadm.
Containers are the future of application deployment. A container is the smallest unit of deployment in Docker. There are three components in Docker: Docker Engine to run a single container, Docker Compose to run a multi-container application on a single host, and Docker Swarm to run multi-container applications across hosts; the latter is also an orchestration tool.
In Kubernetes, the smallest unit of deployment is the Pod (which can be composed of multiple containers). Minikube is a single-node cluster that you can install locally to try, test, and get a feel for Kubernetes features. But you can't scale it to more than a single machine. Kubernetes is an orchestration tool like Docker Swarm, but more prominent than Docker Swarm with respect to features, scaling, resiliency, and security.
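To illustrate the multiple-containers-per-Pod point, a minimal (hypothetical) Pod manifest could look like this; both images are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar           # placeholder name
spec:
  containers:
    - name: web
      image: nginx:alpine          # placeholder main container
      ports:
        - containerPort: 80
    - name: log-sidecar
      image: busybox               # placeholder sidecar; shares the Pod's network and volumes
      command: ["sh", "-c", "while true; do sleep 3600; done"]
```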
You can do the analysis and think about which tool fits your requirements. Each one has its own pros and cons: Docker Swarm is good for and easy at managing small clusters, whereas Kubernetes is much better for larger ones. There is another orchestration tool, Mesos, which is also popular and used in the largest clusters.
Check this out: Choose Your Own Adventure. It's just a general analogy, and only meant to aid understanding, because all three technologies are evolving rapidly.
I get the impression you're mostly looking for confirmation and am happy to help with that if I can.
Yes, minikube is local-only
Yes, minikube is intended to be single-node
Docker-compose isn't really an orchestration system like swarm and Kubernetes are. It helps with running related containers on a single host, but it is not used for multi-host.
Kubernetes and Docker Swarm are both container orchestration systems. These systems are good at managing and scaling up containers, but they have an overhead associated with them, so they're better suited to multi-node setups.
I don't know the range of orchestration options for Raspberry Pi, but there are Kubernetes examples out there such as Build Your Own Cloud with Kubernetes and Some Raspberry Pi.
For Pi, you can use Docker Swarm Mode on one or more Pis. You can even run ARM emulation for testing on Docker for Windows/Mac before trying to get it all working directly on a Pi. The same goes for Kubernetes, as it's built into Docker for Windows/Mac now (no Minikube needed).
Alex Ellis has a good blog on Pi and Docker and this post may help too.
I've been playing around with orchestrating Docker containers on a subnet of Raspberry Pis (3Bs).
I found Docker-swarm easiest to set up and work with, and adequate for my purposes. Guide: https://docs.docker.com/engine/swarm/swarm-tutorial/
For Kubernetes there are two main options: k3s and microk8s. Some guides:
k3s
https://bryanbende.com/development/2021/05/07/k3s-raspberry-pi-initial-setup
microk8s
https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-raspberry-pi#1-overview
I know that Docker and Kubernetes aren't direct competitors. Docker is the container platform, and containers are coordinated and scheduled by Kubernetes, which is an orchestration tool.
What does that really mean, and how can I deploy my app on Docker for Azure?
Short answer:
Docker (and containers in general) solve the problem of packaging an application and its dependencies. This makes it easy to ship and run everywhere.
Kubernetes is one layer of abstraction above containers. It is a distributed system that controls/manages containers.
My advice: because the landscape is huge, start learning and putting the pieces of the puzzle together by following a course. Below I have added some information from:
Introduction to Kubernetes, free online course from The Linux Foundation.
Why do we need Kubernetes (and other orchestrators) above containers?
In quality assurance (QA) environments, we can get away with running containers on a single host to develop and test applications. However, when we go to production, we do not have the same liberty, as we need to ensure that our applications:
Are fault-tolerant
Can scale, and do this on-demand
Use resources optimally
Can discover other applications automatically, and communicate with each other
Are accessible from the external world
Can update/rollback without any downtime.
Container orchestrators are the tools which group hosts together to form a cluster, and help us fulfill the requirements mentioned above.
Nowadays, there are many container orchestrators available, such as:
Docker Swarm: Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.
Kubernetes: Kubernetes was started by Google, but now, it is a part of the Cloud Native Computing Foundation project.
Mesos Marathon: Marathon is one of the frameworks to run containers at scale on Apache Mesos.
Amazon ECS: Amazon EC2 Container Service (ECS) is a hosted service provided by AWS to run Docker containers at scale on its infrastructure.
Hashicorp Nomad: Nomad is the container orchestrator provided by HashiCorp.
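To make a couple of the requirements above concrete (scaling on demand, updating without downtime), here is a hedged sketch of the kind of object you would hand to Kubernetes; the name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # placeholder name
spec:
  replicas: 3                      # scale on demand, e.g. `kubectl scale deployment my-app --replicas=10`
  strategy:
    type: RollingUpdate            # roll out a new image version without downtime
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```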
Kubernetes builds on container technology such as Docker: it is an orchestration tool for Docker containers, whereas Docker is a technology to create and deploy containers.
Docker started at a platform-as-a-service (PaaS) provider named dotCloud.
All in all, Kubernetes works hand in hand with Docker containers, allowing you to achieve application portability and extensibility through container orchestration.
Docker
Easy and fast to install and configure
Functionality is provided and limited by the Docker API
Quick container deployment and scaling even in very large clusters
Automated internal load balancing through any node in the cluster
Simple shared local volumes
Kubernetes
Requires some work to get up and running
Client, API and YAML definitions are unique to Kubernetes
Provides strong guarantees to cluster states at the expense of speed
Enabling load balancing requires manual service configuration
Volumes shared within pods
This is just a basic idea which at least explains the difference. If you want to go in depth, see my posts:
http://www.thecreativedev.com/an-introduction-to-kubernetes/
http://www.thecreativedev.com/learn-docker-works/
Docker and Kubernetes are complementary. Docker provides an open standard for packaging and distributing containerized applications, while Kubernetes provides for the orchestration and management of distributed, containerized applications created with Docker. In other words, Kubernetes provides the infrastructure needed to deploy and run applications built with Docker.