oc/kubectl patch replaces whole line - docker

I am using oc patch with a replace op to change one string in a DeploymentConfig. This is the command:
oc patch dc abc --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "ab-repository/" },{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "bc-repository/" }]'
What it actually does is this:
Before: ab-repository/ab:1.0.0
After: bc-repository/
What I want is this:
Before: ab-repository/ab:1.0.0
After: bc-repository/ab:1.0.0
Please let me know what I am doing wrong here.
Below is the YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: ruleengine
  namespace: apps
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    name: ruleengine
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: ruleengine
    spec:
      containers:
      - image: ab-repository/ab:1.0.0 ### containers should be provided in the form of an array

The 'replace' operation replaces the entire value. Per RFC 6902:
This operation is functionally identical to a "remove" operation for a value, followed immediately by an "add" operation at the same location with the replacement value.
There is no JSON Patch operation that replaces part of a value (RFC 6902, RFC 7386).
You can read the current image like this:
oc get dc ruleengine -o=jsonpath='{..image}'
then manipulate the value with sed and feed the result to oc patch.
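Putting the two steps together, a minimal sketch (assuming the registry prefix is the only part you want to swap, and using the names from the question):
# read the current image of the first container
IMAGE=$(oc get dc ruleengine -o=jsonpath='{.spec.template.spec.containers[0].image}')
# swap the registry prefix with sed, keeping name and tag intact
NEW_IMAGE=$(echo "$IMAGE" | sed 's|^ab-repository/|bc-repository/|')
# patch with a single replace op carrying the full value
oc patch dc ruleengine --type='json' -p="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\": \"$NEW_IMAGE\"}]"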

Related

Getting errors when using "imagePullSecrets" in my kubernetes deployment

I have a kind: Deployment file that forces the image to be defined down in the initContainers section, and I can't get the image from my own registry to load. If I try to put
imagePullSecrets:
- name: regcred
inline with the image below, I get error converting YAML to JSON: yaml: found character that cannot start any token, and I get the same thing if I move it around to different spots. Any ideas how I can use imagePullSecrets here?
spec:
  template:
    metadata:
    spec:
      initContainers:
      - env:
        - name: "BOOTSTRAP_DIRECTORY"
          value: "/bootstrap-data"
        image: "my-custom-registry.com/my-image:1.6.24-SNAPSHOT"
        imagePullPolicy: "Always"
        name: "bootstrap"
Check whether you are using tabs for indentation; YAML doesn't allow tabs, it requires spaces (see the quick check after the example below).
Also, imagePullSecrets belongs under the pod spec, not under containers:
spec:
  template:
    metadata:
    spec:
      imagePullSecrets:
      - name: regcred
      initContainers:
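For the tab question, a quick way to spot offending lines (a sketch, assuming GNU grep; substitute your own file name):
# print any line of the manifest containing a tab character, with its line number
grep -nP '\t' deployment.yaml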

error when creating "deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment

I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on DigitalOcean. Creating the deployment keeps raising errors that I can't decode. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like the conventional examples I've found, but I can't get even this simple example to work. The deployment.yaml content is below.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      Labels:
        app: testit
        version: v01
    spec:
      containers:
        -name: testit-container
        image: teejayfamo/testit
        ports:
          -containerPort: 80
I ran this command from the folder containing the file:
kubectl apply -f deployment.yaml --validate=false
Error from server (BadRequest): error when creating "deployment.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers":{"-name":"t|..., bigger context
...|:"testit","version":"v01"}},"spec":{"containers":{"-name":"testit-container","image":"teejayfamo/tes|...
My searches didn't turn up anything on this error, and I just can't get the deployment created. Can anyone explain what's going on?
Since this is the top search result for the error, I should add another case in which it can occur: in my case it came from a numeric env var missing its double quotes. The log provided a subtle hint, but it was not very helpful.
Log:
..., bigger context ...|c-server-service"},{"name":"SERVER_PORT","value":80}]
The value of SERVER_PORT needs to be in double quotes:
env:
- name: SERVER_HOST
  value: grpc-server-service
- name: SERVER_PORT
  value: "80"
The related Kubernetes issue is still open.
There are syntax errors in your YAML file. This should work:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      labels:
        app: testit
        version: v01
    spec:
      containers:
      - name: testit-container
        image: teejayfamo/testit
        ports:
        - containerPort: 80
The problems were:
Labels should be labels.
- name: and - containerPort were missing the space after the dash in the spec.containers section.
Hope this helps.
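As an aside, the --validate=false flag in the original command suppresses exactly the schema check that would have flagged this. A client-side dry run (assuming a reasonably recent kubectl) catches this class of mistake before anything reaches the cluster:
kubectl apply -f deployment.yaml --dry-run=client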

How to avoid repeating GUID in deployment definition

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
  selector:
    matchLabels:
      client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
  template:
    metadata:
      labels:
        client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
    spec:
      containers:
      - name: xxx
        image: xxx
        env:
        - name: GUID
          valueFrom:
            fieldRef:
              fieldPath: spec.template.metadata.labels.client
I tried passing the existing value from the definition to the env variable using different expressions, and none of them worked:
error converting fieldPath: field label not supported: spec.template.metadata.labels.client
Update: I found what you can pass in; it doesn't help...
I essentially have to repeat myself 4 times. Is there a way to have less repetition in the pod definition to ease management? According to this you can pass something in; it doesn't say what, though.
P.S. Do I really need the same GUID in spec.template and spec.selector? It doesn't work without that.
You don't necessarily need to use GUIDs here; those are just labels and names.
Secondly, they refer to different things (although some of them have to be the same in some cases):
metadata.name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.
labels and matchLabels need to be the same if you want them matched together, which in this case you do. Kubernetes is quite flexible when it comes to labeling, and different assets can carry multiple labels (say a pod with labels app: postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) you want matched and the label(s) to be matched have to be identical in order to match.
As for automating the labeling in a Deployment to avoid repetition, maybe Helm charts or some other Kubernetes templating approach, depending on your actual needs, would serve better; see the sketch below.
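A minimal Helm sketch of that idea (the chart layout and value name are assumptions; the GUID is from the question). The GUID is then written exactly once:
# values.yaml
client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
# templates/deployment.yaml (excerpt)
metadata:
  name: app-{{ .Values.client }}
spec:
  selector:
    matchLabels:
      client: {{ .Values.client }}
  template:
    metadata:
      labels:
        client: {{ .Values.client }}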
Additional note: to pass a label into an env variable, the following can be used starting from Kubernetes 1.9:
...
template:
  metadata:
    labels:
      label_name: label-value
...
env:
- name: ENV_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['label_name']
Below is full mock code to demonstrate this (client 1.9.3, server 1.9.0):
# cat d.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-guidhere
spec:
  selector:
    matchLabels:
      client: guidhere
  template:
    metadata:
      labels:
        client: guidhere
    spec:
      containers:
      - name: some-name
        image: nginx
        env:
        - name: GUIDENV
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['client']
# after: kubectl create -f d.yaml and connecting to the container,
# echo $GUIDENV responds with "guidhere"
I've just tried this and it works correctly (mind the k8s versions).

How to define Kubernetes Job using a private docker registry?

I have a simple Kubernetes Job (based on the http://kubernetes.io/docs/user-guide/jobs/work-queue-2/ example) which uses a Docker image that I have placed as a public image on my Docker Hub account. It all looks like this:
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
Now I want to use a private Docker registry instead, one which requires authentication, as in:
docker login https://myregistry.com
But I can't find anything about how to add a username and password to my job.yaml file. How is it done?
You need to use imagePullSecrets.
Once you create a secret object, you can refer to it in your pod spec (the spec value that is the parent of containers):
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
Of course, you'll have to create the secret (as per the docs). This is what it will look like:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: mynamespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The value of .dockerconfigjson is a base64 encoding of the file .docker/config.json.
The key point: a Job spec contains a pod spec, so whatever knowledge you gain about pod specs can be applied to Jobs as well.
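Rather than base64-encoding the file by hand, kubectl can build the secret for you (a sketch, using the key name and namespace from the answer, and assuming docker login has already written your credentials to ~/.docker/config.json):
kubectl create secret generic myregistrykey \
  --namespace mynamespace \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json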

How to set secret data to kubernetes secrets by yaml?

I am using Kubernetes to deploy a Rails app to Google Container Engine.
Following the Kubernetes secrets document: http://kubernetes.io/v1.1/docs/user-guide/secrets.html
I created a web controller file:
# web-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project-id/myapp:v1
        ports:
        - containerPort: 3000
          name: http-server
        env:
          secret:
          - secretName: mysecret
And created a secret file:
# secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: production
When I run:
kubectl create -f web-controller.yml
It showed:
error: could not read an encoded object from web-controller.yml: unable to load "web-controller.yml": json: cannot unmarshal object into Go value of type []v1.EnvVar
error: no objects passed to create
Maybe the YAML format is wrong in the web-controller.yml file? How should it be written?
secret.yml:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  RAILS_ENV: production
stringData is the easy-mode version of what you're after. One caveat, though: the cleartext original YAML used to create the secret will be visible in the last-applied-configuration annotation (with the stringData method above that means a human-readable secret in the annotation; with the base64 method below, the base64'd secret), unless you follow up with the annotation-erase command like so:
kubectl apply -f secret.yml
kubectl annotate secret mysecret kubectl.kubernetes.io/last-applied-configuration-
(the - at the end is what says to erase it)
kubectl get secret mysecret -n=api -o yaml
(to confirm)
Alternatively you'd do:
Bash# echo -n production | base64
cHJvZHVjdGlvbg==
(the -n matters: without it the trailing newline gets encoded into the value)
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: cHJvZHVjdGlvbg==
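To check what was actually stored, read the value back and decode it (a sketch, assuming the secret above exists in the current namespace):
kubectl get secret mysecret -o jsonpath='{.data.RAILS_ENV}' | base64 --decode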
You need to base64-encode the value, and your key must be a valid DNS label; that is, replace RAILS_ENV with, for example, rails-env. See also the end-to-end example I put together for more details and concrete steps.
We do not currently support secrets exposed as env vars.
Let's say we are adding image pull secrets to a deployment. Follow these steps:
kubectl create secret docker-registry secret-name --docker-server=<registry-server-url> --docker-username=<Username> --docker-password=<password> --docker-email=<your-email>
Now refer to it in the deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  template:
    spec:
      containers:
      - name: test-app
        image: <Image-name-private>
      imagePullSecrets:
      - name: secret-name
OR
Let's say you have an API key for accessing the application:
kubectl create secret generic secret-name --from-literal=api-key="<your-api-key>"
Now refer to it in the deployment like this:
env:
- name: API_KEY
  valueFrom:
    secretKeyRef:
      name: secret-name
      key: api-key
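To confirm the variable actually landed in the container (a sketch, assuming the deployment above is running and a kubectl recent enough to exec by resource name):
kubectl exec deploy/test-deployment -- printenv API_KEY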
