When I create the RC given in the Kubernetes NFS tutorial to create the NFS server, it uses 100% of the CPU of an n1-standard-1 node on GCE:
The pod logs show nothing wrong:
> kubectl logs nfs-server-*****
Serving /exports
NFS started
Is it normal for NFS to consume so much CPU?
There is an issue in the NFS image you were using.
While playing around with Docker and orchestration (Kubernetes), I had to install and use minikube to create a simple sandbox environment. At the beginning I thought that minikube installed some kind of VM and ran the "minified" Kubernetes environment inside it; however, after the installation, when I listed my locally running Docker containers, I found minikube itself running as a container!
Why does minikube itself run as a Docker container? And how can it run other containers?
Experimental Docker support looks to have been added in minikube 1.7.0, and it started becoming the default runtime in minikube 1.9.0. As I'm writing this, the current version is 1.15.1.
The minikube documentation on the "docker" driver notes that, particularly on a native Linux host, there is no intermediate virtual machine: if you can run Kubernetes in a container, it can use the entire host system's resources without special configuration or partitioning. The previous minikube-on-VirtualBox installation required preallocating memory and disk to the VM, and it was easy to get those settings wrong. Even on non-Linux hosts, if you're running Docker Desktop, sharing its hidden Linux VM can improve resource utilization, and you don't need to decide to allocate exactly 2 GB RAM to Docker Desktop and exactly 4 GB to the minikube VM.
For a long time it's been possible, but discouraged, to run a separate Docker daemon inside a Docker container; similarly, it's possible, but usually discouraged, to run a multi-process init system in a container. If you do both of these things then you can have the core Kubernetes components (etcd, apiserver, kubelet, ...) inside a single container pretending to be a Kubernetes node. It also helps here that Kubernetes already knows how to pull Docker images, which minimizes some of the confusing issues with running Docker in Docker.
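A quick way to see this for yourself (a sketch, assuming minikube 1.9+ and a local Docker daemon):
# Start minikube using the Docker driver explicitly
minikube start --driver=docker
# The whole "node" shows up as a single container on the host
docker ps --filter name=minikube
# ...but Kubernetes still reports it as a node
kubectl get nodes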
Initially we used Docker containers and Ansible to deploy our containers. Now management has suggested using Kubernetes (CRI-O) for the same deployment. Previously we used the docker stats and top commands to get CPU and memory usage statistics. Can we get the same stats in Kubernetes? Is there any tool that reports CPU and memory usage, like PRTG or Prometheus? Are there any PRTG sensors for Kubernetes? I am new to this; can someone help me with it, please?
You can deploy metrics-server onto your existing Kubernetes cluster. It is a cluster-wide aggregator of resource usage data.
You can download the tar file below and deploy the metrics server as a pod on an existing Kubernetes cluster.
curl -LO https://github.com/kubernetes-incubator/metrics-server/archive/v0.3.4.tar.gz
tar -xzf v0.3.4.tar.gz
kubectl apply -f metrics-server-0.3.4/deploy/1.8+/
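Once that's applied, you can check that the deployment came up (the 0.3.4 manifests install it into the kube-system namespace):
kubectl -n kube-system get deployment metrics-server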
Then, if you run the kubectl top command, you can see the CPU/memory usage details.
Example:
kubectl top nodes
kubectl top pods
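kubectl top also accepts the usual namespace and label selectors, so you can narrow things down; for example (the namespace and label below are just placeholders):
kubectl top pods -n kube-system
kubectl top pods -l app=my-app --containers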
I'm using my laptop as a single-node Docker Swarm cluster.
After deploying my cluster, running a docker build command becomes extremely slow. Even if a step is cached (e.g. RUN chmod ...), it sometimes takes minutes to complete.
How can I debug this and understand the cause of the slowdown?
Context
Number of services in my swarm cluster: 22
Docker version: 18.04-ce
Host OS: Linux 4.15.15
Host Arch: x86_64
Host specs: i7, 16GB of RAM, SSD/HDD hybrid disk (docker images are stored in the HDD part)
Using VMs or docker-machine: No
In this case, it turned out to be too much disk I/O.
As I mentioned above, my laptop's storage is split between an SSD and an HDD. The Docker images are stored on the HDD, but so are the Docker volumes that get created (which I initially overlooked).
The cluster that I am running locally contains a PostgreSQL database that receives a lot of writes. Those writes were clogging my HDD, so the solution to this specific problem was to mount PostgreSQL's storage on a directory stored on the SSD. The debugging procedure is below.
I found this out by using iostat like instructed in this blog post:
iostat -x 2 5
By looking at the output of this command, it became clear that my HDD's %util column was up to 99%, so it was probably the culprit. Next, I ran iotop, and dockerd+postgres was at the top of the list.
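If you want iotop to show only the processes that are actually doing I/O, grouped per process, something like this works (-o shows only active processes, -P groups by process rather than thread):
sudo iotop -o -P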
In conclusion, if your containers are very I/O intensive, they could slow down the whole docker infrastructure to a crawl.
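As an illustration of the fix, here is a minimal sketch of bind-mounting an SSD-backed directory over the PostgreSQL data directory; the service name, image tag, and /mnt/ssd/pgdata path are assumptions, not my exact setup:
# Put the PostgreSQL data directory on the SSD instead of an HDD-backed volume
docker service create --name postgres \
  --mount type=bind,source=/mnt/ssd/pgdata,target=/var/lib/postgresql/data \
  postgres:10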
I have been working with Docker to run my scripts on the chrome-node and firefox-node images and debug with the selenium-hub image, where everything runs smoothly, but when I use the same setup with Kubernetes the whole system slows down. Why is this happening? Any ideas? I am using minikube for Kubernetes, and Docker Toolbox and Docker Compose for Docker.
Thanks,
There would definitely be an additional overhead when you start Kubernetes using minikube locally, compared to just starting a Docker container on the host.
In order to have a Kubernetes cluster, minikube creates a VM on the machine, and the Kubernetes components run inside that VM in addition to your Docker containers.
Anyway, minikube is not a production-grade way of running Kubernetes. It is mostly meant for local development and testing. Therefore, you shouldn't evaluate Kubernetes performance based on a minikube installation.
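If you still want the local setup to feel less sluggish, one thing to try (the values below are just examples; tune them to your machine) is to give the minikube VM more resources when you start it:
minikube start --cpus 4 --memory 8192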
I would like to run an Aerospike cluster in Docker containers managed by Kubernetes on CoreOS on Google Compute Engine (GCE). But since GCE does not permit multicast, I have to use mesh heartbeat as described here, which has to be set up by specifying all nodes' IP addresses and ports; that seems very inflexible to me.
Are there any recommended cloud-config settings for an Aerospike cluster on Kubernetes/CoreOS/GCE that keep the cluster flexible?
An alternative to specifying all mesh seed IP addresses is to use the asinfo tip command.
Please see the documentation for the tip command:
http://www.aerospike.com/docs/reference/info/#tip
asinfo -v 'tip:host=172.16.121.138;port=3002'
The above command could be added to a script or run from an orchestration tool with the correct IPs.
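For example, a minimal sketch of such a script (the peer IPs and port 3002 below are placeholders for your actual mesh nodes):
# Tell the local Aerospike node about each mesh peer
for peer in 10.240.0.2 10.240.0.3 10.240.0.4; do
  asinfo -v "tip:host=${peer};port=3002"
done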
You may also find additional info on the Aerospike Forum.
You can get the pod IPs for a service via a DNS query with the integrated cluster DNS. If you set clusterIP: None on the service (a headless service), then
dig +short svcname.namespace.svc.cluster.local
will return each pod IP in the service.
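Combining that with the tip command from the answer above, here is a hedged sketch of seeding the mesh from such a headless service (the service name aerospike and the default namespace are assumptions):
# Resolve the headless service to pod IPs, then tip each peer
for peer in $(dig +short aerospike.default.svc.cluster.local); do
  asinfo -v "tip:host=${peer};port=3002"
done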
When we talked with the Aerospike engineers during pre-sales, they recommended against running Aerospike inside a Docker container (Kubernetes or not). Their reasoning was that, when running inside Docker, Aerospike is prevented from accessing the SSD hardware directly, and the SSD drivers running through Docker aren't as efficient as running on bare metal (or a VM). Many of the optimizations they have written can't be taken advantage of.