How can I deploy Elasticsearch to Kubernetes?

I installed minikube on my Mac and I'd like to deploy Elasticsearch on this k8s cluster. I followed these instructions: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
The file I created is:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
When I run kubectl apply -f es.yaml, I get this error: error: unable to recognize "es.yaml": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"
It says the kind isn't recognized, and I wonder how I can make it work. I searched the Kubernetes docs and it seems kind can be Service, Pod, Deployment, and so on. Why do the instructions above use Elasticsearch as the kind? What value of kind should I specify?

I think you might have missed the step of installing the CRDs and the operator for Elasticsearch (ECK). Have you followed this step: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html?
Service, Pod, Deployment, etc. are Kubernetes native resources. Kubernetes also lets you define custom resources via CRDs; Elasticsearch is one such example, so you have to install the custom resource definition before Kubernetes can understand manifests of that kind.
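As a rough sketch (the exact manifest URL is version-specific; this assumes ECK 1.3.x, which matches Elasticsearch 7.10, so check the linked guide for the current release), that step looks like:
kubectl apply -f https://download.elastic.co/downloads/eck/1.3.0/all-in-one.yaml
# wait for the operator to come up, then re-apply your manifest
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
kubectl apply -f es.yaml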

Related

Kubernetes fails to pull images from gitlab registry unknown-sha256: <4ca..252> unexpected commit digest precondition

I've been learning Kubernetes for the past several weeks. I recently built a bare-metal Kubernetes cluster with three master nodes and three worker nodes (containerd runtime), and installed a separate stand-alone bare-metal GitLab server with the container registry enabled.
I was successful in building a simple nginx container with a custom index.html using docker build and pushed it to the registry; up until this point everything works great.
Now I wanted to create a simple pod using the image built above.
So I did the following:
Created a deploy token with read_registry access
Created a secret in Kubernetes with the username and the token as the password (an example command is shown after the deployment file below)
Inserted imagePullSecrets to the deployment yaml file.
kubectl apply -f nginx.yaml.
Kubernetes pod status stays in ImagePullBackOff.
Failed to pull image "<gitlab-host>:5050/<user>/<project>/nginx:v1": rpc error: code = FailedPrecondition desc = failed to pull and unpack image
"<gitlab-host>:5050/<user>/<project>/nginx:v1": failed commit on ref "unknown-sha256:4ca40a571e91ac4c425500a504490a65852ce49c1f56d7e642c0ec44d13be252": unexpected commit digest sha256:0d899af03c0398a85e36d5cd7ee9a8828e5618db255770a4a96331785ff26d9c, expected sha256:4ca40a571e91ac4c425500a504490a65852ce49c1f56d7e642c0ec44d13be252: failed precondition.
Troubleshooting steps followed.
docker login from another server works.
docker pull works
In one of the worker nodes where kubernetes was scheduling the pod, I did ctr image pull which works
Did some googling but couldn't find any solutions. So, here I am as a last resort to figure this out.
Appreciate any help that I get.
My deployment file, nginx.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: <gitlab-host>:5050/<username>/<project>/nginx:v1
        imagePullPolicy: IfNotPresent
        name: nginx
      imagePullSecrets:
      - name: regcred
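For reference, the regcred secret from step 2 above is typically created with a command like this (server, username, and token values are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<gitlab-host>:5050 \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token>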
I found the problem. I made a silly mistake in /etc/containerd/config.toml: in the registry section I hadn't included the port number, <gitlab-host>:5050, in the endpoint.
Also, adding private registries to config.toml is not necessary unless you want to run ctr commands on the Kubernetes nodes.
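For illustration only (this assumes the older registry.mirrors layout of the containerd CRI plugin, not the newer config_path/hosts.toml layout), the relevant part of /etc/containerd/config.toml looks roughly like this, with the port included in both the mirror key and the endpoint:
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."<gitlab-host>:5050"]
      # the port must be part of the endpoint, not just the hostname
      endpoint = ["https://<gitlab-host>:5050"]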

Local kubernetes Hello World in nodejs with Docker

I've been following tutorial videos and trying to understand how to build a small, minimalistic application. The videos I followed pull containers from registries, while I'm trying to test, build, and deploy everything locally for now if possible. Here's my setup.
I have the latest Docker installed with Kubernetes enabled on macOS.
A hello-world Node.js application running with Docker and Docker Compose.
TODO: I'd like to be able to start my instances, let's say 3, in the Kubernetes cluster.
Dockerfile
FROM node:alpine
COPY package.json package.json
RUN npm install
COPY . .
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  user:
    container_name: users
    build:
      context: ./user
      dockerfile: Dockerfile
I'm creating a deployment file with the help of this tutorial, and it may have problems since I'm merging information from both YouTube and the linked page.
I'm keeping the YAML minimal just so I can get up and running; I'll study other aspects like readiness and liveness probes later.
apiVersion: v1
kind: Service
metadata:
  name: user
spec:
  selector:
    app: user
  ports:
  - port: 8080
  type: NodePort
Please review the above YAML for correctness; the question is, what do I do next?
The snippets you provide are regrettably insufficient but you have the basics.
I had a Google for you for a tutorial and -- unfortunately -- nothing obvious jumped out. That doesn't mean that there isn't one, just that I didn't find it.
You've got the right idea. There are quite a few layers of technology to understand, but I commend your approach and think we can get you there.
Let's start with a helloworld Node.JS tutorial
https://nodejs.org/en/docs/guides/getting-started-guide/
Then you want to containerize this
https://nodejs.org/de/docs/guides/nodejs-docker-webapp/
For step #3 below, the last step of that guide is:
docker build --tag=<your username>/node-web-app .
But, because you're using Kubernetes, you'll want to push this image to a public repo. This is so that, regardless of where your cluster runs, it will be able to access the container image.
Since the example uses DockerHub, let's continue using that:
docker push <your username>/node-web-app
NB There's an implicit https://docker.io/<your username>/node-web-app:latest here
Then you'll need a Kubernetes cluster into which you can deploy your app
I think microk8s is excellent
I'm a former Googler but Kubernetes Engine is the benchmark (requires $$$)
Big fan of DigitalOcean too and it has Kubernetes (also $$$)
My advice is (except microk8s and minikube) don't ever run your own Kubernetes clusters; leave it to a cloud provider.
Now that you have all the pieces, I recommend you just:
kubectl run yourapp \
--image=<your username>/node-web-app:latest \
--port=8080 \
--replicas=1
I believe kubectl run is deprecated but use it anyway. It will create a Kubernetes Deployment (!) for you with 1 Pod (==replica). Feel free to adjust that value (perhaps --replicas=2) if you wish.
Once you've created a Deployment, you'll want to create a Service to make your app accessible; off the top of my head, the command is:
kubectl expose deployment/yourapp --type=NodePort
Now you can query the service:
kubectl get services/yourapp
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
yourapp   NodePort   10.152.183.27   <none>        80:32261/TCP   7s
NB The NodePort that's been assigned (in this case!) is :32261 and so I can then interact with the app using curl http://localhost:32261 (localhost because I'm using microk8s).
kubectl is powerful. Another way to determine the NodePort is:
kubectl get service/yourapp \
--output=jsonpath="{.spec.ports[0].nodePort}"
The advantage of the approach of starting from kubectl run is you can then easily determine the Kubernetes configuration that is needed to recreate this Deployment|Service by:
kubectl get deployment/yourapp \
  --output=yaml \
  > ./yourapp.deployment.yaml
kubectl get service/yourapp \
  --output=yaml \
  > ./yourapp.service.yaml
These commands interrogate the cluster, retrieve the configuration for you and pump it into the files. They include some instance data too, but the gist shows you what you would need to recreate the Deployment and Service. You will need to edit these files.
But, you can test this by first deleting the deployment and the service and then recreating it from the configuration:
kubectl delete deployment/yourapp
kubectl delete service/yourapp
kubectl apply --filename=./yourapp.deployment.yaml
kubectl apply --filename=./yourapp.service.yaml
NB You'll often see multiple resource configurations merged into a single YAML file. This is perfectly valid YAML, though you'll most often see it used with Kubernetes. The format is:
some: yaml
---
some: yaml
Using this you could merge the yourapp.deployment.yaml and yourapp.service.yaml into a single Kubernetes configuration.
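For example, after cleaning up the exported files, a merged yourapp.yaml could look roughly like this (a sketch, reusing the image and port 8080 from above; adjust names to taste):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourapp
        image: <your username>/node-web-app:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
  - port: 80
    targetPort: 8080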

Kubernetes bare metal NFS PVs error with elasticsearch helm chart

I deployed Kubernetes on a bare metal dedicated server using conjure-up kubernetes on Ubuntu 18.04 LTS. This also means the nodes are LXD containers.
I need persistent volumes for Elasticsearch and MongoDB, and after some research I decided that the simplest way of getting that to work in my deployment was an NFS share.
I created an NFS share in the host OS, with the following configuration:
/srv/volumes 127.0.0.1(rw) 10.78.69.*(rw,no_root_squash)
10.78.69.* appears to be the bridge network used by Kubernetes, at least looking at ifconfig there's nothing else.
Then I proceeded to create two folders, /srv/volumes/1 and /srv/volumes/2
I created two PVs from these folders with this configuration for the first (the second is similar):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv1
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/volumes/1
    server: 10.78.69.1
Then I deploy the Elasticsearch helm chart (https://github.com/helm/charts/tree/master/incubator/elasticsearch) and it creates two claims which successfully bind to my PVs.
The issue is that afterwards the containers seem to encounter errors:
Error: failed to start container "sysctl": Error response from daemon: linux runtime spec devices: lstat /dev/.lxc/proc/17848/fdinfo/24: no such file or directory
Back-off restarting failed container
(Screenshots of the Pods and Persistent Volume Claims views omitted.)
I'm kinda stuck here. I've tried searching for the error but I haven't been able to find a solution to this issue.
Previously, before I set the allowed IP in /etc/exports to 10.78.69.*, Kubernetes would tell me it got "permission denied" from the NFS server while trying to mount, so I assume mounting now succeeds, since that error has disappeared.
EDIT:
I decided to purge the helm deployment and try again, this time with a different storage type, local-storage volumes. I created them following the guide from Canonical, and I know they work because I set up one for MongoDB this way and it works perfectly.
The configuration for the elasticsearch helm deployment changed, since I now have to set affinity for the nodes on which the persistent volumes were created:
values.yaml:
data:
  replicas: 1
  nodeSelector:
    elasticsearch: data
master:
  replicas: 1
  nodeSelector:
    elasticsearch: master
client:
  replicas: 1
cluster:
  env: {MINIMUM_MASTER_NODES: "1"}
I deployed using
helm install --name site-search -f values.yaml incubator/elasticsearch
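Note that the nodeSelector blocks in values.yaml only match nodes carrying those labels; for reference, they are set with kubectl label (node names here are placeholders):
kubectl label node <data-node-name> elasticsearch=data
kubectl label node <master-node-name> elasticsearch=master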
These are the only changes; however, Elasticsearch still presents the same issues.
Additional information:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The elasticsearch image is the default one in the helm chart:
docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.1
The various pods' (master, client, data) logs are empty.
The error is the same.
I was able to solve the issue by running sysctl -w vm.max_map_count=262144 myself on the host machine, and removing the "sysctl" init container, which was trying (and failing) to do this.
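To apply that kernel setting and make it persist across reboots on the host (standard sysctl usage, not specific to this chart), something like this works:
# apply immediately
sudo sysctl -w vm.max_map_count=262144
# persist across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system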
This looks like a common issue, observed in various environments and configurations, but it's quite unclear what exactly is causing it. Could you provide more details about your software versions, log fragments, etc.?

Is there any better way for changing the source code of a container instead of creating a new image?

What is the best way to change the source code of my application running as a Kubernetes pod without creating a new version of the image, so I can avoid the time taken to push and pull the image from the repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
Have the container pull from git on creation?
Set up CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
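If the folder you need isn't under one of those default mounts, minikube can also mount an arbitrary host directory into the VM (paths here are placeholders):
minikube mount /Users/<username>/<source_folder>:/mnt/<source_folder>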
Putting it all together, this is what an nginx deployment would look like on Kubernetes, mounting a local folder containing the web site being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: sources
          readOnly: true
      volumes:
      - name: sources
        hostPath:
          path: /Users/<username>/<source_folder>
          type: Directory
Finally, we resolved the issue: we changed our image repository from Docker Hub to AWS ECR in the same region where we run the Kubernetes cluster. Now pushing/pulling images takes much less time.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be treated as the standard process.
A few benefits:
Immutable images: no intervention in the running instance, which ensures the image runs the same in any environment
Rollback: if you encounter issues in the new version, roll back to the previous one
Dependencies: new versions may bring new dependencies

How to deploy Kubernetes nginx controller with kubeadm (k8s 1.4)?

AWS + Kubeadm (k8s 1.4)
I tried following the README at:
https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx
but that doesn't seem to work. I asked around in Slack, and it seems the YAMLs are outdated, so I had to modify them as follows.
First I deployed default-http-backend using the YAML found on git:
https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default-backend.yaml
Next, I had to modify the ingress RC:
https://gist.github.com/lilnate22/5188374
(note the change to the healthz path to reflect default-backend, as well as the port change to 10254, which is apparently needed according to Slack)
Everything is running fine:
with kubectl get pods I see the ingress controller
with kubectl get rc I see 1 1 1 for the ingress RC
I then deployed the simple echoheaders application (according to the git README):
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
Next I created a simple Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: echoheaders-x
    servicePort: 80
Both get ing and describe ing give me a good sign:
Name:            test-ingress
Namespace:       default
Address:         172.30.2.86   <--- this is my private IP
Default backend: echoheaders-x:80 (10.38.0.2:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     echoheaders-x:80 (10.38.0.2:8080)
But attempting to go to the node's public IP doesn't seem to work; I just get "unable to reach server".
Unfortunately, it seems that using ingress controllers with Kubernetes clusters set up using kubeadm is not supported at the moment.
The reason for this is that the ingress controllers specify a hostPort in order to become available on the public IP of the node, but the cluster created by kubeadm uses the CNI network plugin which does not support hostPort at the moment.
You may have better luck picking a different way to set up the cluster which does not use CNI.
Alternatively, you can edit your ingress-rc.yaml to declare "hostNetwork: true" under the "spec:" section. Specifying hostNetwork will cause the containers to run using the host's network namespace, giving them access to the network interfaces, routing tables and iptables rules of the host. Think of this as equivalent to "docker run" with the option --network="host".
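Roughly, that means adding the field to the pod template in ingress-rc.yaml (a sketch of only the relevant fragment, not a complete manifest):
spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        # image, args and ports unchanged from the original ingress-rc.yaml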
OK, for all those that came here wondering the same thing, here is how I solved it.
PRECURSOR: the documentation is ambiguous; reading it, I was under the impression that running through the README would allow me to visit http://{MY_MASTER_IP} and get to my services. This is not true.
In order to reach the ingress controller, I had to create a Service for it and expose that Service via NodePort. This allowed me to access the services (in the case of the README, echoheaders) via http://{MASTER_IP}:{NODEPORT}
There is an "issue" with NodePort in that you get a random port number, which somewhat defeats the purpose of ingress. To solve that, I did the following:
First, I needed to edit the kube-apiserver manifest to allow a lower NodePort range:
vi /etc/kubernetes/manifests/kube-apiserver.json
Then, in the kube-apiserver container's arguments section, add: "--service-node-port-range=80-32767",
This allows NodePorts in the range 80-32767.
**NOTE: I would probably not recommend this for production.**
Next, I did kubectl edit svc nginx-ingress-controller and manually set nodePort to 80.
This way, I can go to {MY_MASTER_IP} and get to echoheaders.
Now I am able to point different domains at {MY_MASTER_IP} and route based on host (similar to the README).
You can just use the image nginxdemos/nginx-ingress:0.3.1; you need not build it yourself.
@nate's answer is right.
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service
has a bit more detail.
They do not recommend changing the service's node port range, though.
This question is the first result on Google, so I will add my solution.
kubeadm v1.18.12
helm v3.4.1
The easiest way is to use Helm. I also use the standard ingress-nginx controller: https://github.com/kubernetes/ingress-nginx
Add the repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Install ingress
helm install ingress --namespace ingress --create-namespace --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true ingress-nginx/ingress-nginx
A DaemonSet makes the ingress controller available on every node in your cluster.
hostNetwork=true makes the controller use the node's public IP address.
After that, you need to configure the rules for ingress and set the necessary DNS records.
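For example, a minimal host-based rule might look like this (a sketch: my-ingress, my-service, and app.example.com are placeholders, and this uses the networking.k8s.io/v1beta1 API matching Kubernetes 1.18; on 1.19+ the networking.k8s.io/v1 schema differs slightly):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80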
