Kubernetes bare metal NFS PVs error with elasticsearch helm chart - docker

I deployed Kubernetes on a bare metal dedicated server using conjure-up kubernetes on Ubuntu 18.04 LTS. This also means the nodes are LXD containers.
I need persistent volumes for Elasticsearch and MongoDB, and after some research I decided that the simplest way of getting that to work in my deployment was an NFS share.
I created an NFS share in the host OS, with the following configuration:
/srv/volumes 127.0.0.1(rw) 10.78.69.*(rw,no_root_squash)
10.78.69.* appears to be the bridge network used by Kubernetes; at least, looking at ifconfig, there's nothing else there.
Then I proceeded to create two folders, /srv/volumes/1 and /srv/volumes/2.
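For reference, creating the backing directories and applying the export on the host typically looks roughly like this (standard NFS server tooling; exact steps may differ):
# create the directories that will back the PVs
sudo mkdir -p /srv/volumes/1 /srv/volumes/2
# re-read /etc/exports and verify the share is exported
sudo exportfs -ra
showmount -e localhost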
I created two PVs from these folders with this configuration for the first (the second is similar):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv1
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/volumes/1
    server: 10.78.69.1
Then I deploy the Elasticsearch helm chart (https://github.com/helm/charts/tree/master/incubator/elasticsearch) and it creates two claims which successfully bind to my PVs.
The issue is that afterwards the containers seem to encounter errors:
Error: failed to start container "sysctl": Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/17848/fdinfo/24: no such file or directory
Back-off restarting failed container
(Screenshots of the Pods view and the Persistent Volume Claims view omitted.)
I'm kinda stuck here. I've tried searching for the error but I haven't been able to find a solution to this issue.
Previously, before I set the allowed IP in /etc/exports to 10.78.69.*, Kubernetes would tell me it got "permission denied" from the NFS server while trying to mount, so I assume the mount now succeeds, since that error disappeared.
EDIT:
I decided to purge the helm deployment and try again, this time with a different storage type, local-storage volumes. I created them following the guide from Canonical, and I know they work because I set up one for MongoDB this way and it works perfectly.
The configuration for the Elasticsearch helm deployment changed, since now I have to set affinity for the nodes on which the persistent volumes were created:
values.yaml:
data:
  replicas: 1
  nodeSelector:
    elasticsearch: data
master:
  replicas: 1
  nodeSelector:
    elasticsearch: master
client:
  replicas: 1
cluster:
  env: {MINIMUM_MASTER_NODES: "1"}
I deployed using
helm install --name site-search -f values.yaml incubator/elasticsearch
These are the only changes; however, Elasticsearch still presents the same issues.
Additional information:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The elasticsearch image is the default one in the helm chart:
docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1
The various pods' (master, client, data) logs are empty.
The error is the same.

I was able to solve the issue by running sysctl -w vm.max_map_count=262144 myself on the host machine and removing the "sysctl" init container, which was trying (and failing) to do the same thing.
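To make that setting survive host reboots, it can also be persisted in sysctl configuration (standard procedure; the sysctl.d file name below is just illustrative):
# apply immediately
sudo sysctl -w vm.max_map_count=262144
# persist across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system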

This seems to be a common issue, observed in various environments and configurations, but it's quite unclear what exactly is causing it. Could you provide more details about your software versions, log fragments, etc.?

Related

How can I deploy Elasticsearch to Kubernetes?

I installed minikube on my Mac and I'd like to deploy Elasticsearch on this k8s cluster. I followed these instructions: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
The file I created is:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
When I run kubectl apply -f es.yaml, I get this error: error: unable to recognize "es.yaml": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"
It says the kind is not recognized. I wonder how I can make it work. I searched the k8s docs and it seems kind can be Service, Pod, Deployment, etc. But why do the instructions above use Elasticsearch as the kind? What value of kind should I specify?
I think you might have missed the step of installing the CRDs and the operator for Elasticsearch. Have you followed this step: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html?
Service, Pod, Deployment, etc. are Kubernetes native resources. Kubernetes also provides a way to write custom resources, using CRDs. Elasticsearch is one such example, so you have to define the custom resource before using it, for Kubernetes to understand it.
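For reference, installing the ECK CRDs and operator for that generation of the chart looked roughly like this (the version in the URL is illustrative; check the install page linked above for the current one):
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
# watch the operator come up, then re-apply es.yaml
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator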

Kubernetes cannot find the server after a clean installation

I have been trying to learn Kubernetes with Docker to run containers and manage them with Kubernetes.
I use this web-page for installations: https://kubernetes.io/docs/tasks/tools/install-kubectl/
I have my own Debian/Linux server machine that I want to build and configure Kubernetes.
After following the kubectl installation steps, I get an error like:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Error from server (NotFound): the server could not find the requested resource
kubectl version --short:
Client Version: v1.16.3
Error from server (NotFound): the server could not find the requested resource
microk8s.kubectl version --short:
Client Version: v1.16.3
Server Version: v1.16.3
I have tried the local microk8s and used as microk8s.kubectl and with that installation, I was able to configure and even make the container work. However, the regular kubectl can not find the server. These two have different installations and different names, folders etc. I assume that one will not break or have any impact on the other one.
Edit: Based on the suggestion of Suresh, I did kubectl config view and the result is:
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
Does anyone have any idea how to solve this problem?
If you already have kubectl installed and you want to use it to access the MicroK8s deployment, you can export the cluster's config (see "accessing Kubernetes" in the MicroK8s documentation):
microk8s.kubectl config view --raw > $HOME/.kube/config
MicroK8s puts its kubeconfig file in a different location.
To avoid colliding with an already installed kubectl and to avoid overwriting any existing Kubernetes configuration file, MicroK8s adds a microk8s.kubectl command, configured to exclusively access the new MicroK8s install. When following instructions online, make sure to prefix kubectl with microk8s., for example:
microk8s.kubectl get nodes
microk8s.kubectl get services
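If you prefer to keep typing plain kubectl, MicroK8s also supports adding a snap alias (assuming a snap-based install):
sudo snap alias microk8s.kubectl kubectl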

Is there a better way to change the source code of a container than creating a new image?

What is the best way to change the source code of my application running as a Kubernetes pod without creating a new version of the image, so I can avoid the time taken to push and pull the image from the repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
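Since the question is about Kubernetes pods, the usual equivalent is kubectl exec rather than docker exec on the node (the pod name below is a placeholder):
kubectl exec -it <pod-name> -- /bin/bash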
Have the container pull from git on creation?
Set up CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
Putting it all together, this is what an nginx deployment would look like on Kubernetes, mounting a local folder containing the website being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: sources
          readOnly: true
      volumes:
      - name: sources
        hostPath:
          path: /Users/<username>/<source_folder>
          type: Directory
Finally, we resolved the issue. We changed our image repository from Docker Hub to AWS ECR in the same region where we run the Kubernetes cluster. Now it takes much less time to push/pull images.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be treated as the standard process.
A few benefits:
immutable images: no intervention in a running instance, which ensures the image runs the same in any environment
rollback: if you encounter issues in the new version, roll back to the previous version
dependencies: new versions may have new dependencies

How to reboot a Kubernetes pod and keep the data

I'm using Kubernetes to run a Docker container. I just created the container and I use SSH to connect to my pods.
I need to make some system config changes, so I need to reboot the container, but when I reboot the container it loses all the data in the pod; Kubernetes runs a new pod that looks just like the original Docker image.
So how can I reboot the pod and keep the data in it?
The Kubernetes cluster is offered by Bluemix.
You need to learn more about containers, as your question suggests you are not fully grasping the concepts.
Running SSH in a container is an anti-pattern; a container is not a virtual machine. So remove the SSH server from it.
The fact that you run SSH indicates that you may be running more than one process per container. This is usually bad practice, so remove that supervisor and call your main process directly in your entrypoint.
Set up your container image's main process to use environment variables or configuration files for configuration at runtime.
The last item means that you can define environment variables in your Pod manifest or use Kubernetes ConfigMaps to store configuration files. Your Pod will read those and the process in your container will be configured properly. If not, your Pod will die or your process will not run properly, and you can just edit the environment variable or the ConfigMap.
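A minimal sketch of the ConfigMap approach, with illustrative names (myapp-config, myapp), injected into the container as environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myrepo/myapp:latest
    # every key in the ConfigMap becomes an environment variable in the container
    envFrom:
    - configMapRef:
        name: myapp-config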
My main suggestion here is to not use Kubernetes until you have your Docker image properly written and your configuration thought through; you should not have to exec into the container to get your process running.
Finally, more generally, you should not keep state inside a container.
To store your data you need to set up persistent storage. If you're using, for example, Google Cloud as your platform, you would need to create a disk to store your data on and define the use of this disk in your manifest.
With Bluemix it looks like you just have to create the volumes and use them.
bx ic volume-create myapplication_volume ext4
bx ic run --volume myapplication_volume:/data --name myapplication registry.eu-gb.bluemix.net/<my_namespace>/my_image
Bluemix - Persistent storage documentation
I don't use Bluemix myself, so I'll proceed with an example manifest using Google's persistent disks.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapplication
  namespace: default
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      containers:
      - name: myapplication
        image: eu.gcr.io/myproject/myimage:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - mountPath: /data
          name: myapplication-volume
      volumes:
      - name: myapplication-volume
        gcePersistentDisk:
          pdName: mydisk-1
          fsType: ext4
Here the disk mydisk-1 is mapped to the /data mountpoint.
The only data that will persist after reboots will be inside that folder.
If you want to store your logs for example you could symlink the logs folder.
/var/log/someapplication -> /data/log/someapplication
It works, but this is NOT recommended!
It's not clear to me if you're SSHing to the nodes or using some tool to execute a shell inside the containers. Even though running multiple processes per container is bad practice, it seems to work very well if you keep tabs on memory and CPU use.
Running an SSH server and cron jobs in the same container, for example, will absolutely work, though it's not the best of solutions.
We've been using supervisor with multiple (2-5) processes in production for over a year now and it's working surprisingly well.
For more information about persistent volumes on a variety of platforms, see:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Can Kubernetes be used like Docker Compose?

I have been digging through the Kubernetes documentation for hours. I understand the core design, and the notion of services, controllers, pods, etc.
What I don't understand, however, is the process by which I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc. without running a ton of CLI commands.
Is there docker-compose functionality for Kubernetes?
I want my application to be defined in git and version controlled, without relying on manual CLI interactions.
Is this possible to do in a concise way? Is there a reference that is more clear than the official documentation?
If you're still looking, maybe this tool can help: https://github.com/kelseyhightower/compose2kube
You can create a compose file:
# sample compose file with 3 services
web:
  image: nginx
  ports:
    - "80"
    - "443"
database:
  image: postgres
  ports:
    - "5432"
cache:
  image: memcached
  ports:
    - "11211"
Then use the tool to convert it to kubernetes objects:
compose2kube -compose-file docker-compose.yml -output-dir output
Which will create these files:
output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
Then you can use kubectl to apply them to Kubernetes.
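For example, pointing kubectl at the generated directory (directory name from the command above):
kubectl create -f output/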
If you have existing Docker Compose files, you may take a look at the Kompose project.
kompose is a tool to help users who are familiar with docker-compose move to Kubernetes. kompose takes a Docker Compose file and translates it into Kubernetes resources.
kompose is a convenience tool to go from local Docker development to managing your application with Kubernetes. Transformation of the Docker Compose format to Kubernetes resources manifest may not be exact, but it helps tremendously when first deploying an application on Kubernetes.
To run the docker-compose.yaml file (or your own), run:
kompose up
To convert docker-compose.yaml into Kubernetes deployments and services with one simple command:
$ kompose convert -f docker-compose.yaml
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
For more info, check: http://kompose.io/
Docker has officially announced docker-compose functionality for Kubernetes clusters, so from now on you can compose your Kubernetes resources in a single file and apply them from that file.
First, we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the Stack resource to the Kubernetes API. Check the full documentation to install the Docker Compose controller:
https://github.com/docker/compose-on-kubernetes
Let's write a simple compose yaml file:
version: "3.7"
services:
  web:
    image: dockerdemos/lab-web
    ports:
      - "33000:80"
  words:
    image: dockerdemos/lab-words
    deploy:
      replicas: 3
      endpoint_mode: dnsrr
  db:
    image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running...
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/db 1 1 1 1 57s
deployment.apps/web 1 1 1 1 57s
deployment.apps/words 3 3 3 3 57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME STATUS PUBLISHED PORTS PODS AGE
words Running 33000 5/5 4m
Kubernetes certainly has its own yaml (as shown in "Deploying Applications")
But as "Docker Clustering Tools Compared: Kubernetes vs Docker Swarm" points out, it was not written (just) for Docker, and it has its own system.
You could use docker-compose to start Kubernetes, though, as shown in "vyshane/kid": that wraps some of the kubectl CLI commands in scripts (which can be versioned).
