apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
  selector:
    matchLabels:
      client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
  template:
    metadata:
      labels:
        client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
    spec:
      containers:
      - name: xxx
        image: xxx
        env:
        - name: GUID
          valueFrom:
            fieldRef:
              fieldPath: spec.template.metadata.labels.client
I tried passing the existing value from the definition into the env variable using different expressions, and none of them worked:
error converting fieldPath: field label not supported: spec.template.metadata.labels.client
Update: I found what you can pass in, but it doesn't help...
I essentially have to repeat myself four times. Is there a way to repeat less in the pod definition to ease management? According to this you can pass something in, but it doesn't say what.
P.S. Do I really need the same GUID in spec.template and spec.selector? It doesn't work without that.
You don't necessarily need to use GUIDs here; those are just labels and names...
Secondly, they refer to different things (although some of them have to be the same in some cases):
metadata.name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.
labels and matchLabels need to be the same if you want them matched together, which in this case you do. Kubernetes is quite flexible when it comes to labeling, and different objects can carry multiple labels (say, a pod can have labels app: postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) you want to match and the label(s) being matched have to be identical in order to match.
As for automating the labeling in a Deployment to avoid repetition, maybe Helm charts or some other Kubernetes templating approach, depending on your actual need, would be a better fit? A sketch of that idea follows below.
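For illustration, here is a minimal Helm-style sketch (the chart layout, value name and file paths are my assumptions for this example, not something from the question), where the GUID is defined once in values.yaml and reused everywhere it is needed:
# values.yaml (assumed)
clientId: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0

# templates/deployment.yaml (assumed)
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-{{ .Values.clientId }}
spec:
  selector:
    matchLabels:
      client: {{ .Values.clientId }}
  template:
    metadata:
      labels:
        client: {{ .Values.clientId }}
    spec:
      containers:
      - name: app
        image: nginx
        env:
        - name: GUID
          value: {{ .Values.clientId | quote }}
Rendering the chart with helm template (or installing it) then stamps the single value into all four places, so the GUID only ever lives in one file.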
Additional note: for passing a label to an env variable, the following can be used starting from Kubernetes 1.9:
...
template:
  metadata:
    labels:
      label_name: label-value
...
env:
- name: ENV_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['label_name']
Below is full mock code to demonstrate this (client 1.9.3, server 1.9.0):
# cat d.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-guidhere
spec:
  selector:
    matchLabels:
      client: guidhere
  template:
    metadata:
      labels:
        client: guidhere
    spec:
      containers:
      - name: some-name
        image: nginx
        env:
        - name: GUIDENV
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['client']
# after: kubectl create -f d.yaml and connecting to container
# echo $GUIDENV responds with "guidhere"
And I've just tried this and it works correctly (mind the k8s versions).
Related
There is a Kubernetes cluster with 100 nodes. I have to clean up specific images manually; I know the kubelet garbage collection may help, but it doesn't apply in my case.
After browsing the internet, I found a possible solution to my problem: docker-in-docker.
I just want to remove the image on each node once. Is there any way to run a job on each node exactly once?
I checked Kubernetes labels and pod affinity, but still have no idea. Could anybody help?
Also, I tried to use a DaemonSet to solve the problem, but it turns out that it only removes the image on some of the nodes instead of all of them. I don't know what the problem might be...
Here is the DaemonSet example:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: test-ds
  labels:
    k8s-app: test
spec:
  selector:
    matchLabels:
      k8s-app: test
  template:
    metadata:
      labels:
        k8s-app: test
    spec:
      containers:
      - name: test
        env:
        - name: DELETE_IMAGE_NAME
          value: "nginx"
        image: busybox
        command: ['sh', '-c', 'curl --unix-socket /var/run/docker.sock -X DELETE http://localhost/v1.39/images/$(DELETE_IMAGE_NAME)']
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-sock-volume
        ports:
        - containerPort: 80
      volumes:
      - name: docker-sock-volume
        hostPath:
          # location on host
          path: /var/run/docker.sock
If you want to run your job on a single specific node, you can use nodeSelector in the Pod spec:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          nodeSelector:
            name: node3
A DaemonSet should ideally resolve your issue, as it creates a Pod on each available node in the cluster.
You can read more about affinity here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express. The key enhancements are:
The affinity/anti-affinity language is more expressive. The language offers more matching rules besides exact matches created with a logical AND operation.
You can use affinity in the Job YAML, something like:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
Update
Now, if you are having issues with the DaemonSet, a plain Job with affinity won't help either, as a Job creates a single Pod that gets scheduled onto a single node according to the affinity rules. You would either have to create 100 Jobs with different affinity rules, or use a Deployment plus anti-affinity to spread the replicas across different nodes.
We will create one Deployment with pod anti-affinity and make sure that multiple Pods of a single Deployment won't get scheduled onto the same node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 100
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: <Image>
        ports:
        - containerPort: 80
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - test
            topologyKey: "kubernetes.io/hostname"
Try using this Deployment template and replace the image with yours. You can reduce replicas to 10 first instead of 100 to check whether it spreads the Pods or not, for example with the commands below.
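To verify the spread, something along these lines should do (the app=test label comes from the example above; the exact column layout of -o wide may differ between kubectl versions):
# show every pod of the deployment together with the node it was scheduled on
kubectl get pods -l app=test -o wide
# rough count of pods per node (assumes NODE is the 7th column of -o wide output)
kubectl get pods -l app=test -o wide --no-headers | awk '{print $7}' | sort | uniq -c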
Read more at : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#an-example-of-a-pod-that-uses-pod-affinity
Extra:
You can also write and use a custom CRD: https://github.com/darkowlzz/daemonset-job, which behaves as a combination of a DaemonSet and a Job.
When I try to deploy a new image to Kubernetes, I get this error:
unable to decode "K8sDeploy.yaml": no kind "Deployment" is registered for version "apps/v1"
This error began when I updated the Kubernetes version. Here is my version info:
Client Version: v1.19.2
Server Version: v1.16.13
I also tried to build from my localhost and it does work, but through Jenkins it doesn't.
Does anybody know how to solve this?
To check which apiVersion supports a Deployment resource in your Kubernetes cluster, you may run:
$ kubectl explain deployment | head -2
and you can be almost sure that the result will be as follows:
KIND: Deployment
VERSION: apps/v1
All modern Kubernetes versions use apps/v1, which has been available since v1.9, so for quite a long time already. As you may see here, older API versions that were still served in Kubernetes 1.15 are no longer served as of 1.16.
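You can also ask the cluster directly which API groups/versions it serves; a quick check along these lines should confirm that only apps/v1 is available for Deployments on a 1.16 server:
# list the group/versions served by the API server, filtered to the apps group
kubectl api-versions | grep '^apps'
# show which API group the deployments resource is served from (column layout varies by kubectl version)
kubectl api-resources | grep -i '^deployments '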
Client Version: v1.19.2 Server Version: v1.16.13
As stated above, in Kubernetes 1.16 a Deployment must use apps/v1; there is no possibility of using older API versions like extensions/v1beta1, apps/v1beta1 or apps/v1beta2, which were still available in 1.15.
Your issue seems to me to be rather an error from Jenkins (possibly an old version of Jenkins itself or of some of its plugins, or perhaps something in its configuration) which is not able to recognize/parse the correct (and currently required) apiVersion for the Deployment resource.
For troubleshooting purposes you can try changing the apiVersion to one of those listed above. This should give you a different error (this time from the Kubernetes API server), as in 1.16 it won't be able to recognize it.
But at least it should give you a clue. If your Jenkins stops complaining with the older apiVersion, that would mean it is set up to work with older API versions and an update may help.
I see you filed an issue on the Kubernetes GitHub, so let's wait for what they say, but as I said before, to me it doesn't look like an issue with Kubernetes but rather with Jenkins' ability to parse a legitimate Deployment YAML.
I updated my Kubernetes version from 1.5 to 1.9. When I run the kubectl command directly it does work; only through Jenkins it doesn't. As you requested, here is my k8sdeploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cbbox
  labels:
    app: cbbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cbbox
  template:
    metadata:
      labels:
        app: cbbox
    spec:
      containers:
      - image: myregistryrepository
        name: cbbox
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: SECRET_DB_IP
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_IP
        - name: SECRET_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_PASSWORD
        - name: SECRET_DB_USER
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_DB_USER
        - name: SECRET_LDAP_DOMAIN
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_DOMAIN
        - name: SECRET_LDAP_URLS
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_URLS
        - name: SECRET_LDAP_BASE_DN
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_LDAP_BASE_DN
        - name: SECRET_TIME_ZONE
          valueFrom:
            secretKeyRef:
              name: cbboxsecret
              key: SECRET_TIME_ZONE
      imagePullSecrets:
      - name: acrcredentials
---
apiVersion: v1
kind: Service
metadata:
  name: cbbox
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: cbbox
Just check for case sensitivity in the YAML file.
In my case it was the 'kind' field:
kind: deployment
which I changed to:
kind: Deployment
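A quick way to catch this kind of typo before pushing anything through Jenkins is a client-side dry run (the flag spelling depends on your kubectl version):
# kubectl 1.18+ uses the --dry-run=client form
kubectl apply -f K8sDeploy.yaml --dry-run=client
# older clients only know the boolean flag
kubectl apply -f K8sDeploy.yaml --dry-run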
I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on DigitalOcean. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      Labels:
        app: testit
        version: v01
    spec:
      containers:
        -name: testit-container
        image: teejayfamo/testit
        ports:
          -containerPort: 80
I ran this command in the folder containing the file:
kubectl apply -f deployment.yaml --validate=false
Error from server (BadRequest): error when creating "deployment.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers":{"-name":"t|..., bigger context
...|:"testit","version":"v01"}},"spec":{"containers":{"-name":"testit-container","image":"teejayfamo/tes|...
I couldn't find any information on this in my searches, and I just can't get the deployment created. Please, can anyone who understands this walk me through it?
Since this is the top search result, I thought I should add another case where this error can occur. In my case, it happened because a numeric env var was not wrapped in double quotes. The log did provide a subtle hint, but it was not very helpful.
Log
..., bigger context ...|c-server-service"},{"name":"SERVER_PORT","value":80}]
Env variable: the value of SERVER_PORT needs to be in double quotes.
env:
- name: SERVER_HOST
  value: grpc-server-service
- name: SERVER_PORT
  value: "80"
The related Kubernetes issue is still open.
There are syntax errors in your YAML file.
This should work.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      labels:
        app: testit
        version: v01
    spec:
      containers:
      - name: testit-container
        image: teejayfamo/testit
        ports:
        - containerPort: 80
The problems were:
Labels should be labels.
The - name: and - containerPort: entries were not formatted properly as list items (a space is required after the dash) in the spec.containers section.
Hope this helps.
I'm using config maps to inject env variables into my containers. Some of the variables are created by concatenating variables, for example:
~/.env file
HELLO=hello
WORLD=world
HELLO_WORLD=${HELLO}_${WORLD}
I then create the config map
kubectl create configmap env-variables --from-env-file ~/.env
The deployment manifests reference the config map.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-image
        image: us.gcr.io/my-image
        envFrom:
        - configMapRef:
            name: env-variables
When I exec into my running pods, and execute the command
$ printenv HELLO_WORLD
I expect to see hello_world, but instead I see ${HELLO}_${WORLD}. The variables aren't expanded, and therefore my applications that refer to these variables will get the unexpanded value.
How do I ensure the variables get expanded?
If it matters, my images are using alpine.
I can't find any documentation on interpolating environment variables, but I was able to get this to work by removing the interpolated variable from the ConfigMap and listing it directly in the Deployment. It also works if all variables are listed directly in the Deployment. It looks like Kubernetes doesn't apply interpolation to variables loaded from ConfigMaps.
For instance, this will work:
Configmap
apiVersion: v1
data:
HELLO: hello
WORLD: world
kind: ConfigMap
metadata:
name: env-variables
namespace: default
Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-image
        image: us.gcr.io/my-image
        envFrom:
        - configMapRef:
            name: env-variables
        env:
        - name: HELLO_WORLD
          value: $(HELLO)_$(WORLD)
I'm thinking about just expanding the variables before creating the ConfigMap and uploading it to Kubernetes, for example along the lines of the sketch below.
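A minimal sketch of that idea, assuming the ~/.env file from the question: source it in a shell (so the shell performs the ${HELLO}_${WORLD} expansion) and build the ConfigMap from the already-expanded values:
# let the shell expand the ${HELLO}_${WORLD} reference while sourcing the file
set -a
. ~/.env          # HELLO_WORLD now holds "hello_world"
set +a

# build the ConfigMap from the expanded values
kubectl create configmap env-variables \
  --from-literal=HELLO="$HELLO" \
  --from-literal=WORLD="$WORLD" \
  --from-literal=HELLO_WORLD="$HELLO_WORLD"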
Another parallel approach would be to use kustomize:
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.
It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.
The sed part should be able to generate the right expanded value in your yaml file.
I have been using Kubernetes for a while now.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
I usually set up an Ingress, a Service and a ReplicationController for each project.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: portifolio
  name: portifolio-ingress
spec:
  rules:
  - host: www.cescoferraro.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: portifolio
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  selector:
    name: portifolio
  ports:
  - name: web
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    name: portifolio
  template:
    metadata:
      namespace: portifolio
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"
My "problem" is that for deploying my app I usually do:
kubectl delete -f kubernetes.yaml
kubectl create -f kubernetes.yaml
I wish I could use a single command to deploy, whether my app is up or down. Rolling updates do not work when I use the same image (I think it's a bug in my Kubernetes server version), but they also do not work when the app has never been deployed at all.
I have read about Deployments; I wonder how they would help me.
Goals
1. Deploy if the app is brand new
2. Replace existing pods with new ones using a new image from the Docker registry.
I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.
If all you want to do is roll out new pods, I would recommend replacing your ReplicationController with a Deployment. The manifests have almost exactly the same syntax, so it's easy to migrate from standard RCs, and you can perform a server-side rolling update with a single kubectl replace -f manifest.yml (see the sketch below).
Please note that even with a Deployment resource you can't trigger a redeployment if nothing changed in your manifest; kubectl replace would simply do nothing. Therefore you could, for example, increment or change a tag inside your manifest in order to force the deployment if needed (e.g. revision: 003).
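For reference, a minimal sketch of the ReplicationController from the question rewritten as a Deployment (on a 1.2/1.3 cluster Deployments live under extensions/v1beta1; the container section is unchanged and abbreviated here):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: portifolio
  name: portifolio
spec:
  replicas: 1
  selector:
    matchLabels:
      name: portifolio
  template:
    metadata:
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
After this one-time conversion, kubectl replace -f kubernetes.yaml rolls the pods whenever the manifest actually changes.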
As already written in the previous answer, it is recommended to use a Deployment instead of a ReplicationController for this.
Using imagePullPolicy: Always will only ensure that Kubernetes does a docker pull before starting new Pods. It does not force recreation of Pods when nothing in the Deployment resource changes.
I would suggest adding two things to your solution:
Add a label to the Deployment with the value CURRENT_DATE as a placeholder value
Add a simple shell script to your project which replaces the placeholder with the current date+time and then uses kubectl to apply the resources.
Example Bash script
#!/usr/bin/env bash
sed "s/CURRENT_DATE/$(date)/" kubernetes.yaml | kubectl apply -f -
Then use this script for redeployment instead of calling kubectl by yourself.
This is only meant as a very simple example. When it comes to creating/applying/patching resources in Kubernetes, things tend to get more and more complicated over time. If this happens, consider using more advanced templating solutions, e.g. Python with Jinja2.
You could use a Deployment for this. Create it the first time, and after that you only need to run kubectl set image deploy/my-app app=user/image:tag --record and you're good to go.
Doing that, you can also do cool things like kubectl rollout undo deploy/my-app, or check the rollout history and status.
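A short usage sketch of that flow (the deployment and container names here are placeholders, not taken from the question):
kubectl apply -f deployment.yaml                              # first deploy, creates the Deployment
kubectl set image deploy/my-app app=user/image:v2 --record    # roll out a new image
kubectl rollout status deploy/my-app                          # watch the rollout
kubectl rollout history deploy/my-app                         # inspect previous revisions
kubectl rollout undo deploy/my-app                            # roll back if something breaks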
You might consider using Argo.
Argo is an open-source workflow engine for Kubernetes. It allows you to define complex microservices-based application deployments using YAML in a source repo and automatically re-deploy the app on YAML changes (e.g. on every commit to the production branch).