On Minikube, using kubectl, I run an image built with Docker with the following command:
kubectl run my-service --image=my-service-image:latest --port=8080 --image-pull-policy Never
But on Minikube, a different configuration has to be applied to the application. I prepared some environment variables in a deployment file and want to apply them to the images on Minikube. Is there a way to tell kubectl to run those images using a given deployment file, or another way to provide the images with those values?
I tried the apply verb of kubectl, for example, but it tries to create the pod instead of applying the configuration to it.
In Minikube/Kubernetes you need to set the environment variables in the YAML file of your pod/deployment.
Here is an example of how you can configure environment variables in a Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Here you can find more information about environment variables.
In this case, if you want to change any value, you need to delete the pod and apply it again. But if you use a Deployment, all modifications can be done using the kubectl apply command.
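For reference, a minimal sketch of the same container wrapped in a Deployment (name, label, image and env values carried over from the Pod example above), so that later changes to the env section can be rolled out with kubectl apply -f deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envar-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      purpose: demonstrate-envars
  template:
    metadata:
      labels:
        purpose: demonstrate-envars
    spec:
      containers:
      - name: envar-demo-container
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"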
I am trying to avoid having to create three different images for separate deployment environments.
Some context on our current ci/cd pipeline:
For the CI portion, we build our app into a docker container and then submit that container to a security scan. Once the security scan is successful, the container gets put into a private container repository.
For the CD portion, using helm charts, we pull the container from the repository and then deploy to a company managed Kubernetes cluster.
There was a request, and the solution was to use a piece of software in the container. For some reason (I'm the DevOps person, not the software engineer) the software needs environment variables (specific to the deployment environment) passed to it when it starts. How can we start this software and pass environment variables to it at deployment time?
I could just create three different images with the environment variables but I feel like that is an anti-pattern. It takes away from the flexibility of having one image that can be deployed to different environments.
Can anyone point me to resources that explain how to start an application with specific environment variables using Helm? I've looked but didn't find a solution or anything pointing me in the right direction. As plan B, I'll just create three different images, but I want to make sure there isn't a better way.
Depending on the container orchestration, you can pass the environment variables in different ways:
Plain Docker:
docker run -e MY_VAR=MY_VAL <image>
Docker compose:
version: '3'
services:
  app:
    image: '<image>'
    environment:
      - MY_VAR=my-value
See the docker-compose docs.
Kubernetes:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: app
    image: <image>
    env:
    - name: MY_VAR
      value: "my value"
See the Kubernetes docs.
Helm:
Add the values in your values.yaml:
myKey: myValue
Then reference it in your helm template:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
spec:
  containers:
  - name: app
    image: <image>
    env:
    - name: MY_VAR
      value: {{ .Values.myKey }}
Check out the helm docs.
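To keep a single image across environments, the environment-specific values can live in separate values files that you pick at deploy time. A minimal sketch, assuming hypothetical files values-dev.yaml and values-prod.yaml next to the chart:
helm install my-app ./my-chart -f values-dev.yaml
# or override a single key directly on the command line
helm install my-app ./my-chart --set myKey=dev-value
This way the same image is deployed everywhere and only the values file (and therefore the rendered env section) changes per environment.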
I have a Dockerfile which I've written for a React application. This app takes a .json config file that it uses at run time. The file doesn't contain any secrets.
So I've built the image without the config file, and now I'm unsure how to transfer the JSON file when I run it up.
I'm looking at deploying this in production using a CI/CD process which would entail:
gitlab (actions) building the image
pushing this to a docker repository
Kubernetes picking this up and running/starting the container
I think it's at the last point that I want to add the JSON configuration.
My question is: how do I add the config file to the application when k8s starts it up?
If I understand correctly, k8s doesn't have any local storage from which to create a volume and copy the file in? Can I point docker run at a separate git repo where I keep the config files?
You should take a look at ConfigMaps.
From the k8s ConfigMap documentation:
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
In your case, you want to consume it as a volume so it shows up as a file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-app
data:
  config.json: |   # your file name as the key
    <file-content>
A ConfigMap can be created manually or generated from a file using:
Directly in the cluster: kubectl create configmap <name> --from-file <path-to-file>.
In a YAML file: kubectl create configmap <name> --from-file <path-to-file> --dry-run=client -o yaml > <file-name>.yaml.
Once you have your ConfigMap, you must modify your deployment/pod to add a volume and a volume mount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-name>
spec:
  ...
  template:
    metadata:
      ...
    spec:
      ...
      containers:
        - name: <container-name>
          ...
          volumeMounts:
            - mountPath: '<path>/config.json'
              name: config-volume
              readOnly: true
              subPath: config.json
      volumes:
        - name: config-volume
          configMap:
            name: <name-of-configmap>
To deploy to your cluster, you can use plain YAML, or I suggest you take a look at Kustomize or Helm charts.
They are both popular systems for deploying applications. With Kustomize, there is a configMapGenerator feature that fits your case (see the sketch below).
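As a sketch of that configMapGenerator approach (file and resource names here are illustrative, adjust to your layout), a kustomization.yaml next to your manifests could look like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # the Deployment that mounts the config-volume above
configMapGenerator:
  - name: your-app
    files:
      - config.json   # the runtime config file for the React app
Running kubectl apply -k . then generates the ConfigMap (with a content hash appended to its name) and rewrites the references to it in the Deployment, so pods roll out again whenever the file changes.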
Good luck :)
Hey, I have a wider problem: when I update Secrets in Kubernetes, they are not picked up by pods unless the pods are upgraded/rescheduled or just re-deployed. I saw the other Stack Overflow post about it, but none of the solutions fit me: Update kubernetes secrets doesn't update running container env vars
I also saw the in-app solution of running a Python script on the pod to update its secret automatically (https://medium.com/analytics-vidhya/updating-secrets-from-a-kubernetes-pod-f3c7df51770d), but it seems like a long shot. So I came up with the idea of adding an annotation to the deployment manifest, hoping it would reschedule the pods every time the Helm chart puts a new timestamp in it - it does put the timestamp in, but it doesn't reschedule. Any thoughts on how to force that behaviour?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    lastUpdate: {{ now }}
I also don't feel like adding this patch command to the CI/CD deployment, as it's arbitrary and doesn't feel like the right solution:
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
Hasn't anyone else found a better solution to re-deploy pods when secrets change?
Kubernetes by itself does not automatically do a rolling update of a deployment when a Secret is changed, so there needs to be a controller that does that for you. Take a look at Reloader, a controller that watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet and StatefulSet.
Add the reloader.stakater.com/auto annotation to the deployment named xxx and have a ConfigMap called xxx-configmap or a Secret called xxx-secret.
This will automatically discover the deployments/daemonsets/statefulsets where xxx-configmap or xxx-secret is used, either via an environment variable or a volume mount, and it will perform a rolling upgrade on the related pods when xxx-configmap or xxx-secret is updated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx
  namespace: xxx
  labels: xxx
  annotations:
    reloader.stakater.com/auto: "true"
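If Reloader isn't already running in the cluster, it can be installed with its Helm chart; a sketch assuming the Stakater chart repository (verify the repo URL and chart name against the current Reloader README):
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install reloader stakater/reloader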
I have successfully built Docker images and run them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME READY STATUS RESTARTS AGE
myapp-<a random alphanumeric string> 0/1 ImagePullBackoff 0 <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. It boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND: Deployment
VERSION: extensions/v1beta1
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, note the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you use the :latest tag, Kubernetes will try to find it in whatever remote registry you configured (Docker Hub by default) rather than using your local one. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
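If you want to double-check that the image really exists in the local Docker daemon that Docker Desktop's Kubernetes uses, a quick check (image name taken from the question):
docker images myimage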
I had this same ImagePullBackOff error while running a pod deployment with a YAML file, also on Docker Desktop.
For anyone else who finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (3 lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using the kubectl apply command: kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: node-web-app:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means the kubelet cannot pull the image because either:
You don't have networking set up in your Docker VM that can reach your Docker registry (Docker Hub?).
myimage:latest doesn't exist in your registry or is misspelled.
myimage:latest requires credentials (you are pulling from a private registry). You can take a look at this to configure container credentials in a Pod.
I am running the kubeadm alpha version to set up my Kubernetes cluster.
From Kubernetes, I am trying to pull Docker images which are hosted in a Nexus repository.
Whenever I try to create a pod, it gives "ImagePullBackOff" every time. Can anybody help me with this?
Details are in https://github.com/kubernetes/kubernetes/issues/41536
Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    name: test
spec:
  containers:
    - image: 123.456.789.0:9595/test
      name: test
      ports:
        - containerPort: 8443
  imagePullSecrets:
    - name: my-secret
You need to refer to the secret you have just created from the Pod definition.
When you create the secret with kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 ..., the server must exactly match what's in your Pod definition, including the port number (and if it's a secure registry, it must also match the docker command line in systemd).
Also, the secret must be in the same namespace where you are creating your Pod, but that seems to be in order.
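For reference, a sketch of creating such a secret for the registry used in the Pod definition above (the username, password and email values are placeholders for your Nexus credentials):
kubectl create secret docker-registry my-secret \
  --docker-server=123.456.789.0:9595 \
  --docker-username=<nexus-user> \
  --docker-password=<nexus-password> \
  --docker-email=<email>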
I received a similar error while launching containers from the Amazon ECR registry. The issue was that I hadn't specified the exact "Image URI" location in the deployment file.