Getting errors when using "imagePullSecrets" in my kubernetes deployment - docker

I have a kind: Deployment file, and it forces the image to be defined down in the "initContainers" section, but I can't get the image from my own registry to load. If I try to put
imagePullSecrets:
- name: regcred
in line with the "image" below, I get error converting YAML to JSON: yaml: found character that cannot start any token. And I get the same thing if I move it around to different spots. Any ideas how I can use imagePullSecrets here?
spec:
  template:
    metadata:
    spec:
      initContainers:
      - env:
        - name: "BOOTSTRAP_DIRECTORY"
          value: "/bootstrap-data"
        image: "my-custom-registry.com/my-image:1.6.24-SNAPSHOT"
        imagePullPolicy: "Always"
        name: "bootstrap"

Check whether you are using tabs for indentation; YAML doesn't allow tabs, it requires spaces.
Also, you should put imagePullSecrets under spec instead of under containers:
spec:
  template:
    metadata:
    spec:
      imagePullSecrets:
      - name: regcred
      initContainers:
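For reference, a minimal sketch of how the regcred secret referenced above can be created; the registry host matches the image in the question, and the credential values are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=my-custom-registry.com \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
The secret has to exist in the same namespace as the Deployment that references it.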


How to use Local docker image in kubernetes via kubectl

I created a custom Docker image and stored it on my local system. Now I want to use that Docker image via kubectl.
Docker image:
docker build -t backend:v1 .
Then the Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
          image: backend
          imagePullPolicy: Never
        name: backend
        resources: {}
      restartPolicy: Always
status: {}
Command to run kubectl: kubectl create -f backend-deployment.yaml
Getting error:
error: error validating "backend-deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "image" in io.k8s.api.core.v1.EnvVar, ValidationError(Deployment.spec.template.spec.containers[0].env[9]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.EnvVar]; if you choose to ignore these errors, turn validation off with --validate=false
Local Registry
Set up the local registry first using this command:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Image Tag
Given a Dockerfile, the image can be built and tagged this way:
docker build -t localhost:5000/my-image .
Image Pull Policy
The field imagePullPolicy should then be set to Never to get the right image from the right repo.
Given this sample pod template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: localhost:5000/my-image
    imagePullPolicy: Never
Deploy Pod
The pod can be deployed using:
kubectl create -f pod.yml
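Once created, the pod status and any image-related events can be checked with (my-pod being the metadata name from the template above):
kubectl get pod my-pod
kubectl describe pod my-pod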
Hope this comes in handy :)
As the error specifies, image and imagePullPolicy are unknown fields for an EnvVar: there is an indentation error in your Kubernetes deployment file, and those fields belong to the container, not to an env entry.
Make these changes in your YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: backend
  namespace: web-console
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend
        imagePullPolicy: Never
        env:
        - name: mail_auth_pass
        - name: mail_auth_user
        - name: mail_from
        - name: mail_greeting
        - name: mail_service
        - name: mail_sign
        - name: mongodb_url
          value: mongodb://mongodb.mongodb.svc.cluster.local/console
        - name: server_host
          value: "0.0.0.0"
        - name: server_port
          value: "3000"
        - name: server_sessionSecret
          value: "1234"
        resources: {}
      restartPolicy: Always
status: {}
Validate your Kubernetes YAML file online using https://kubeyaml.com/
Or with: kubectl apply --validate=true --dry-run=true -f deployment.yaml
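On newer kubectl versions the boolean form of --dry-run is deprecated; the equivalent client-side check would be:
kubectl apply --validate=true --dry-run=client -f deployment.yaml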
Hope this helps.

error when creating "deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment

I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on DigitalOcean. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like the conventional examples I've found, but I can't even get this simple example to work. You may find the deployment.yaml content below.
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      Labels:
        app: testit
        version: v01
    spec:
      containers:
        -name: testit-container
        image: teejayfamo/testit
        ports:
          -containerPort: 80
I ran this line in cmd from the folder containing the file:
kubectl apply -f deployment.yaml --validate=false
Error from server (BadRequest): error when creating "deployment.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers":{"-name":"t|..., bigger context
...|:"testit","version":"v01"}},"spec":{"containers":{"-name":"testit-container","image":"teejayfamo/tes|...
I couldn't find any information on this in my searches, and I just can't get the deployment created. Please, can anyone who understands this help me out?
Since this is the top search result, I thought I should add another case where this error can occur. In my case it came up because a numeric env var value was missing its double quotes. The log did provide a subtle hint, but it was not very helpful.
Log
..., bigger context ...|c-server-service"},{"name":"SERVER_PORT","value":80}]
Env variable - the value of SERVER_PORT needs to be in double quotes:
env:
- name: SERVER_HOST
  value: grpc-server-service
- name: SERVER_PORT
  value: "80"
The related Kubernetes issue is still open.
There are syntax errors in your YAML file. This should work:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: testit-01-deployment
spec:
  replicas: 4
  #number of replicas generated
  selector:
    #assigns labels to the pods for future selection
    matchLabels:
      app: testit
      version: v01
  template:
    metadata:
      labels:
        app: testit
        version: v01
    spec:
      containers:
      - name: testit-container
        image: teejayfamo/testit
        ports:
        - containerPort: 80
The problems were:
Labels should be labels.
The syntax of - name: and - containerPort was not formatted properly (a space is required after the dash) in the spec.containers section.
Hope this helps.
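As a general tip, the expected field names and nesting for a resource can be checked from the command line with kubectl explain, for example:
kubectl explain deployment.spec.template.spec.containers
This shows that containers is a list of Container objects, each of which requires a name field.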

AzureFunctions AppInsights Logging does not work in Azure AKS

I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
If I run it by publishing the function and using the following Dockerfile to run it locally on Docker, it works fine as well.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
However, if I go a step further and try to deploy it to Kubernetes (in my case Azure AKS) by using the following YAML files, the function starts fine and the logs show the Application Insights parameter being loaded. However, it does not log to Insights.
deployment.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          valueFrom:
            secretKeyRef:
              name: mytestfunction-secrets
              key: ApplicationInsights
      imagePullSecrets:
      - name: imagepullsecrets
However, when I altered the YAML to not store the key as a secret, it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
      - name: imagepullsecrets
I'm kind of surprised that the difference in notation causes Azure Functions to not log to Insights. My impression was that the running application does not care or know whether the value came from a secret or from a plain value in Kubernetes. Even though it might be debatable whether the instrumentation key is a secret or not, I would prefer to store it there. Does anyone have an idea why this might cause it?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights
# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
These are the versions I'm using:
Azure Functions Core Tools (2.4.419)
Function Runtime Version: 2.0.12332.0
Azure AKS: 1.12.x
Also, the instrumentation key is a fake one for sharing purposes, not an actual one.
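For reference, a secret with the same key can also be created directly with kubectl instead of hand-encoding the value (namespace and key name match the manifest above; the value is the placeholder key):
kubectl create secret generic mytestfunction-secrets \
  --namespace testfunction \
  --from-literal=ApplicationInsights=a9c908d8-192d-482c-906e-1269478fab6f
Created this way, kubectl does the base64 encoding itself, which avoids accidentally encoding a trailing newline the way echo ... | base64 does when -n is omitted.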

Kubernetes env variable to containers

I want to pass some values from the Kubernetes YAML file to the containers. These values will be read in my Java app using System.getenv("x_slave_host").
I have this Dockerfile:
FROM jetty:9.4
...
ARG slave_host
ENV x_slave_host $slave_host
...
$JETTY_HOME/start.jar -Djetty.port=9090
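For context, with a plain docker build the ARG above would be supplied at build time, roughly equivalent to what the docker-compose args shown later in this question do (the image tag here is taken from the manifest below):
docker build --build-arg slave_host=slavevalue -t xregistry.azurecr.io/Y:latest .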
The Kubernetes YAML file contains this part, where I added an env section:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: master
spec:
  template:
    metadata:
      labels:
        app: master
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: master
        image: xregistry.azurecr.io/Y:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: shared-data
          mountPath: ~/.X/experiment
      - env:
        - name: slave_host
          value: slavevalue
      - name: jupyter
        image: xregistry.azurecr.io/X:latest
        ports:
        - containerPort: 8000
        - containerPort: 8888
        volumeMounts:
        - name: shared-data
          mountPath: /var/folder/experiment
      imagePullSecrets:
      - name: acr-auth
Locally, when I did the same thing using docker-compose, it worked using args. This is a snippet:
master:
  image: master
  build:
    context: ./master
    args:
      - slave_host=slavevalue
  ports:
    - "9090:9090"
So now I am trying to do the same thing but in Kubernetes. However, I am getting the following error (deploying it on Azure):
error: error validating "D:\\a\\r1\\a\\_X\\deployment\\kub-deploy.yaml": error validating data: field spec.template.spec.containers[1].name for v1.Container is required; if you choose to ignore these errors, turn validation off with --validate=false
In other words, how do I rewrite my docker-compose file for Kubernetes and pass this argument?
Thanks!
The env section should be added under the container entry, like this:
containers:
- name: master
  env:
  - name: slave_host
    value: slavevalue
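Once the change is applied, the variable can be verified inside the running container (the pod name is a placeholder, to be taken from kubectl get pods):
kubectl exec -it <master-pod-name> -- printenv slave_host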
To elaborate on @Kun Li's answer: besides adding environment variables directly in the Deployment manifest, you can create a ConfigMap (or a Secret, depending on the data being stored) and reference it in your manifests. This is a good way of sharing the same environment variables across applications, compared to manually adding environment variables to several different applications.
Note that a ConfigMap can consist of one or more key: value pairs and is not limited to storing environment variables; that is just one of its use cases (see the volume-mount sketch at the end of this answer). And as I mentioned before, consider using a Secret if the data is sensitive.
Example of a ConfigMap manifest, in this case used for storing an environment variable:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-var
data:
  slave_host: slavevalue
To create the same ConfigMap holding one key=value pair using kubectl create:
kubectl create configmap my-env-var --from-literal=slave_host=slavevalue
To get hold of all environment variables configured in a ConfigMap use the following in your manifest:
containers:
  envFrom:
  - configMapRef:
      name: my-env-var
Or if you want to pick one specific environment variable from your ConfigMap containing several variables:
containers:
  env:
  - name: slave_host
    valueFrom:
      configMapKeyRef:
        name: my-env-var
        key: slave_host
See this page for more examples of using ConfigMaps in different situations.
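And, as mentioned above, a ConfigMap is not limited to environment variables. A minimal sketch of mounting the same ConfigMap as files inside a container (the mount path is arbitrary):
containers:
- name: master
  volumeMounts:
  - name: env-config
    mountPath: /etc/config
volumes:
- name: env-config
  configMap:
    name: my-env-var
Each key in the ConfigMap (here slave_host) becomes a file under /etc/config containing its value.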

How to avoid repeating GUID in deployment definition

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
  selector:
    matchLabels:
      client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
  template:
    metadata:
      labels:
        client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
    spec:
      containers:
      - name: xxx
        image: xxx
        env:
        - name: GUID
          valueFrom:
            fieldRef:
              fieldPath: spec.template.metadata.labels.client
I tried passing the existing value from the definition to the env variable using different expressions, and none of them worked:
error converting fieldPath: field label not supported: spec.template.metadata.labels.client
upd: found what you can pass in, doesn't help...
I essentially have to repeat myself 4 times. Is there a way to have less repetition in the pod definition to ease management? According to this you can pass in something, it doesn't say what though.
ps. Do I really need the same GUID in spec.template and spec.selector? It doesn't work without that.
You don't necessarily need to use GUIDs here; those are just labels and names...
Secondly, they refer to different things (although some of them have to be the same in some cases):
metadata name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.
labels and matchLabels need to be the same if you want them matched together, which in this case you do. Kubernetes is powerful and quite flexible when it comes to labeling, and different assets can have multiple labels on them (say a pod can have labels: app: Postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) you want matched and the label(s) to be matched have to be the same in order to be matched.
As for automating the labeling in the Deployment to avoid repetition, maybe Helm charts or some other 'automating Kubernetes' approach, depending on your actual need, would be better (see the sketch right below).
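For illustration, a minimal Helm-style sketch of templating out the GUID; the chart layout and the clientGuid value name are hypothetical, not something from the question:
# values.yaml
clientGuid: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0

# templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-{{ .Values.clientGuid }}
spec:
  selector:
    matchLabels:
      client: {{ .Values.clientGuid }}
  template:
    metadata:
      labels:
        client: {{ .Values.clientGuid }}
    spec:
      containers:
      - name: xxx
        image: xxx
With this, the GUID is written once in values.yaml and rendered into every place it is needed by helm template or helm install.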
Additional note: for passing a label to an env variable, the following can be used starting from Kubernetes 1.9:
...
template:
  metadata:
    labels:
      label_name: label-value
...
env:
- name: ENV_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['label_name']
Below is full mock code to demonstrate this (client 1.9.3, server 1.9.0):
# cat d.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-guidhere
spec:
  selector:
    matchLabels:
      client: guidhere
  template:
    metadata:
      labels:
        client: guidhere
    spec:
      containers:
      - name: some-name
        image: nginx
        env:
        - name: GUIDENV
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['client']

# after: kubectl create -f d.yaml and connecting to container
# echo $GUIDENV responds with "guidhere"
And I've just tried this and it works correctly (mind the k8s versions).
