Unable to start pod under kubernetes namespace - docker

I have created a namespace on my physical K8s cluster.
Now I'm trying to spin up resources with a *dep.yaml file that has the namespace set in it.
I have also created secrets under the same namespace.
But the Pod status shows 'ContainerCreating'.
application-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-name
  namespace: namespace-service
  labels:
    module: testmodule
...
Note: It's working in the default namespace.

If a Pod is stuck in the ContainerCreating state, it is a good idea to run kubectl describe pod on it and check the Events at the bottom of the output.
There is usually an Event that explains why the Pod fails to reach the Running state.
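For example, with the namespace from the question (the pod name is a placeholder):

kubectl get pods -n namespace-service
kubectl describe pod <pod-name> -n namespace-service
kubectl get events -n namespace-service --sort-by=.metadata.creationTimestamp

A Pod stuck in ContainerCreating in a fresh namespace is often waiting on something that only exists in the default namespace, e.g. a Secret or ConfigMap referenced by a volume, or an image pull secret; the Events will name it.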

Can you share the rest of the deployment file?
If you are currently running a service in the default namespace that exposes a specific NodePort, you cannot expose the same node port again from a different namespace.
Also, as mentioned, you can always get more information using the describe command:
kubectl describe <resource> (can be a pod, deployment, service, etc.)

Related

How can I deploy Elasticsearch to Kubernetes?

I installed minikube on my Mac and I'd like to deploy Elasticsearch on this k8s cluster. I followed these instructions: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
The file I created is:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
When I run kubectl apply -f es.yaml, I get this error: error: unable to recognize "es.yaml": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"
It says the kind is not matched. I wonder how I can make it work. I searched the k8s docs and it seems kind can be Service, Pod or Deployment. So why do the instructions above use Elasticsearch as the kind? What value of kind should I specify?
I think you might have missed the step of installing the CRDs and the operator for Elasticsearch. Have you followed this step: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html?
Service, Pod, Deployment, etc. are Kubernetes-native resources. Kubernetes also provides a way to write custom resources, using CRDs. Elasticsearch is one such example, so you have to define the custom resource definition before using it, for Kubernetes to understand it.
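As a quick check before re-applying es.yaml (the CRD name and operator namespace below assume a standard ECK install, as described in the guide linked above):

# is the Elasticsearch CRD installed?
kubectl get crd elasticsearches.k8s.elastic.co

# is the ECK operator running? (the quickstart installs it into elastic-system)
kubectl get pods -n elastic-system

If the first command returns NotFound, install the ECK CRDs and operator from the k8s-deploy-eck.html page first, then apply es.yaml again.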

Kubernetes make changes to annotation to force update deployment

Hey, I have a wider problem: when I update Secrets in Kubernetes, the changes are not picked up by Pods unless they are upgraded/rescheduled or simply re-deployed. I saw the other Stack Overflow post about it, but none of the solutions fit me: Update kubernetes secrets doesn't update running container env vars
I also saw the in-app solution of running a Python script on the pod to update its secret automatically (https://medium.com/analytics-vidhya/updating-secrets-from-a-kubernetes-pod-f3c7df51770d), but it seems like a long shot. So I came up with the idea of adding an annotation to the deployment manifest, hoping it would reschedule the pods every time the Helm chart puts a new timestamp in it. It does put the timestamp in, but it doesn't reschedule. Any thoughts on how to force that behaviour?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    lastUpdate: {{ now }}
I also don't feel like adding this patch command to the CI/CD deployment, as it's arbitrary and doesn't feel like the right solution:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
Hasn't anyone else found a better solution to re-deploy pods on changed secrets?
Kubernetes by itself does not perform a rolling update of a Deployment automatically when a Secret is changed, so there needs to be a controller that does that for you. Take a look at Reloader, a controller that watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet or StatefulSet.
Add the reloader.stakater.com/auto annotation to the deployment named xxx and have a ConfigMap called xxx-configmap or a Secret called xxx-secret.
This will automatically discover the deployments/daemonsets/statefulsets where xxx-configmap or xxx-secret is used, either via an environment variable or a volume mount, and will perform a rolling upgrade on the related pods when xxx-configmap or xxx-secret is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    reloader.stakater.com/auto: "true"
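If you only want to react to one specific Secret rather than everything the Deployment consumes, the Reloader README also documents named annotations; a sketch, assuming your Secret is called xxx-secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  annotations:
    secret.reloader.stakater.com/reload: "xxx-secret"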

how to modify pod memory limit of a running pod

I am using YAML to create a Pod and have specified its resource request and limit. Now I am not sure how to modify the resource limits of a running Pod. For example:
memory-demo.yml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Then I run the command oc create -f memory-demo.yml and a Pod named memory-demo gets created.
My question is: what should I do to modify the memory limit from 200Mi to 600Mi? Do I need to delete the existing Pod and recreate it from the modified YAML file?
I am a total newbie. Need help.
First and foremost, it is very unlikely that you really want to be (re)creating Pods directly. Dig into what a Deployment is and how it works. Then you can simply apply the change to the Pod template in the Deployment spec, and Kubernetes will upgrade all the Pods in that Deployment to match the new spec in a hands-free rolling update.
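For example, a minimal Deployment wrapping the container from the question might look like this (the labels/selector are assumptions); change limits.memory here and re-apply, and Kubernetes rolls the Pods for you:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memory-demo
  template:
    metadata:
      labels:
        app: memory-demo
    spec:
      containers:
      - name: memory-demo-ctr
        image: polinux/stress
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
        resources:
          requests:
            memory: "100Mi"
          limits:
            memory: "600Mi"   # raised from 200Mi

Then apply it with oc apply -f <file> (the file name is up to you).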
A live change of the memory limit of a running container is certainly possible, but not by means of Kubernetes (and it will not be reflected in the kube state if you do it). Look at docker update.
You can use the replace command to modify an existing object based on the contents of the specified configuration file (documentation):
oc replace -f memory-demo.yml
EDIT: However, some spec fields cannot be updated this way. In that case the only way is to delete and re-create the Pod.
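Since a Pod's container resources are among those immutable fields, the delete-and-recreate route for the file from the question would be roughly (a sketch; replace --force does the delete and re-create in one step):

# edit memory-demo.yml first (limits.memory: "600Mi"), then either:
oc delete pod memory-demo -n mem-example
oc create -f memory-demo.yml

# or:
oc replace --force -f memory-demo.yml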

How to update Kubernetes secrets for all namespaces

I am using docker and kubernetes on Google Cloud Platform, with the Kubernetes Engine.
I have secrets configured in an app.yaml file like so:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: $CI_COMMIT_REF_SLUG
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: gcr.io/engagement-org/app:$CI_COMMIT_SHA
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        env:
        - name: MAILJET_APIKEY_PUBLIC
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_public
        - name: MAILJET_APIKEY_PRIVATE
          valueFrom:
            secretKeyRef:
              name: mailjet
              key: apikey_private
Each time I push to a new branch, a new namespace is created through a deploy job in my gitlab-ci file. Secrets are created like so:
- kubectl create secret generic mailjet --namespace=$CI_COMMIT_REF_SLUG --from-literal=apikey_public=$MAILJET_APIKEY_PUBLIC --from-literal=apikey_private=$MAILJET_APIKEY_PRIVATE || echo 'Secret already exist';
Now I have updated my Mailjet API keys and want to roll the change out to all namespaces.
I can edit the secret in each namespace by getting a shell on the pods and running kubectl edit secret mailjet --namespace=<namespace_name>.
What I want is for the new secret values to reach the new pods that will be created in the future. When I deploy a new one, it still uses the old values.
From what I understand, the gitlab-ci file uses the app.yaml file to replace the environment variables with values, but I don't understand where app.yaml finds the original values.
Thank you for your help.
In general, Kubernetes namespaces are designed to provide isolation for components running inside them. For this reason, the Kubernetes API is not really designed to perform update operations across namespaces, or make secrets usable across namespaces.
That being said, there are a few ways to solve this issue.
1. Use a single namespace & Helm releases instead of separate namespaces
From the looks of it, you are using Gitlab CI to deploy individual branches to review environments (presumably using Gitlab's Review App feature?). The same outcome can be achieved by deploying all Review Apps into the same namespace, and using Helm to manage multiple deployments ("releases" in Helm-speak) of the same application within a single namespace.
Within the gitlab-ci.yml, creating a Helm release for a new branch might look similar to this:
script:
- helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart
Of course, this requires that you have defined a Helm chart for your application (which, in essence is just a set of YAML templates with a set of default variables that can then be overridden for individual releases). Refer to the documentation (linked above) for more information on creating Helm charts.
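If the chart templates the mailjet Secret from values, the same deploy step can also pass in the current keys so every release picks up the latest values on deploy; a sketch (the value names mailjet.apikeyPublic/mailjet.apikeyPrivate are assumptions and depend on how your chart is written):

script:
- helm upgrade --namespace default --install review-$CI_COMMIT_REF_SLUG ./path/to/chart --set mailjet.apikeyPublic=$MAILJET_APIKEY_PUBLIC --set mailjet.apikeyPrivate=$MAILJET_APIKEY_PRIVATE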
2. Keep secrets in sync across namespaces
We have had a similar issue a while ago and resorted to writing a custom Kubernetes controller that keeps secrets in sync across namespaces. It's open source and you can find it on GitHub (use with caution, though). It is based on annotations and provides unidirectional propagation of changes from a single, authoritative parent secret:
apiVersion: v1
kind: Secret
metadata:
  name: mailjet
  namespace: some-kubernetes-namespace
  annotations:
    replicator.v1.mittwald.de/replicate-from: default/mailjet
With the secret replicator deployed in your cluster, using this annotation will propagate all changes made to the mailjet secret in the default namespace to every secret in any namespace annotated as shown above.
There is now also a way to share or sync a secret across namespaces, by using the ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
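If you just need a one-off push of the new keys into the namespaces that already exist, a small loop from the CI runner or your workstation also works; a sketch (it assumes your kubectl context can reach all the review namespaces, and you may want to filter out system namespaces like kube-system):

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl create secret generic mailjet \
    --namespace="$ns" \
    --from-literal=apikey_public="$MAILJET_APIKEY_PUBLIC" \
    --from-literal=apikey_private="$MAILJET_APIKEY_PRIVATE" \
    --dry-run=client -o yaml | kubectl apply -f -
done

Unlike the create ... || echo 'Secret already exist' step in the question, the dry-run/apply combination also updates a secret that already exists (on older kubectl, use --dry-run instead of --dry-run=client).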

how to deploy Kubernetes nginx controller with kubeadm (k8s 1.4)?

AWS + Kubeadm (k8s 1.4)
I tried following the README at:
https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx
but that doesn't seem to work. I asked around in Slack, and it seems the YAMLs are outdated, so I had to modify them as follows.
First I deployed default-http-backend using the YAML found on git:
https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/default-backend.yaml
Next, the ingress RC, which I had to modify:
https://gist.github.com/lilnate22/5188374
(note the change to the healthz path to reflect default-backend, as well as the port change to 10254, which is apparently needed according to Slack)
Everything is running fine:
kubectl get pods shows the ingress controller
kubectl get rc shows 1 1 1 for the ingress RC
I then deployed the simple echoheaders application (according to the git README):
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
Next I created a simple Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: echoheaders-x
    servicePort: 80
Both get ing and describe ing give me a good sign:
Name:             test-ingress
Namespace:        default
Address:          172.30.2.86   <--- this is my private IP
Default backend:  echoheaders-x:80 (10.38.0.2:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     echoheaders-x:80 (10.38.0.2:8080)
But attempting to go to the node's public IP doesn't seem to work, as I am getting "unable to reach server".
Unfortunately, it seems that using ingress controllers with Kubernetes clusters set up using kubeadm is not supported at the moment.
The reason for this is that the ingress controllers specify a hostPort in order to become available on the public IP of the node, but the cluster created by kubeadm uses the CNI network plugin which does not support hostPort at the moment.
You may have better luck picking a different way to set up the cluster which does not use CNI.
Alternatively, you can edit your ingress-rc.yaml to declare "hostNetwork: true" under the "spec:" section. Specifying hostNetwork will cause the containers to run using the host's network namespace, giving them access to the network interfaces, routing tables and iptables rules of the host. Think of this as equivalent to "docker run" with the option --network="host".
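For reference, the relevant fragment of the ingress RC with that change might look like this (only hostNetwork: true is the addition; the container name is whatever your ingress-rc.yaml already uses, and the rest of the pod template is unchanged and abbreviated here):

spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        # ...rest of the container spec as in the original ingress-rc.yaml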
OK, for all those that came here wondering the same thing, here is how I solved it.
PRECURSOR: the documentation is ambiguous; reading the docs, I was under the impression that running through the README would allow me to visit http://{MY_MASTER_IP} and get to my services. This is not true.
In order to reach the ingress controller, I had to create a Service for it and then expose that Service via a NodePort. This allowed me to access the services (in the case of the README, echoheaders) via http://{MASTER_IP}:{NODEPORT}.
There is an "issue" with NodePort in that you get a random port number, which somewhat defeats the purpose of Ingress. To solve that, I did the following:
First, I needed to edit kube-api to allow a lower NodePort range:
vi /etc/kubernetes/manifests/kube-apiserver.json
Then, in the kube-api container's arguments section, add: "--service-node-port-range=80-32767",
This will allow NodePorts in the range 80-32767.
** NOTE: I would probably not recommend this for production... **
Next, I ran kubectl edit svc nginx-ingress-controller and manually set the nodePort to 80.
This way, I can go to {MY_MASTER_IP} and get to echoheaders.
Now what I am able to do is point different domains at {MY_MASTER_IP} and route based on host (similar to the README).
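For reference, such a NodePort Service for the controller might look roughly like this (the selector labels are assumptions and must match the labels on your ingress-controller pods; nodePort: 80 only works after widening --service-node-port-range as above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: NodePort
  selector:
    app: nginx-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 80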
You can just use the image nginxdemos/nginx-ingress:0.3.1; you do not need to build it yourself.
#nate's answer is right
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service
has a bit more detail.
They do not recommend setting the service's node port range, though.
This question is the first in Google's search results, so I will add my solution.
kubeadm v1.18.12
helm v3.4.1
Yes, the easiest way is to use Helm. I also use the standard ingress controller https://github.com/kubernetes/ingress-nginx
Add the repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Install the ingress controller:
helm install ingress --namespace ingress --create-namespace --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true ingress-nginx/ingress-nginx
The DaemonSet makes ingress available on every node in your cluster.
hostNetwork=true means the controller uses the node's public IP address.
After that, you need to configure the Ingress rules and set the necessary DNS records.
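A host-based rule for that setup might look like this (the host is a placeholder and echoheaders-x reuses the service from earlier in this thread; on the kubeadm v1.18.12 cluster mentioned above the Ingress API is still networking.k8s.io/v1beta1, while 1.19+ uses networking.k8s.io/v1 with a different backend syntax):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echoheaders
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80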
