Kubernetes: how to make a Deployment update its image - docker

I have a deployment with a single pod, using my custom docker image, like:
containers:
- name: mycontainer
  image: myimage:latest
During development I want to push a new latest version and have the Deployment pick it up.
I can't find how to do that without explicitly defining a tag/version, incrementing it for each build, and running
kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1

You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size), set imagePullPolicy: Always, and then run kubectl delete pod pod_name.
A new container will be created with the latest image pulled automatically, and the old container will be terminated.
Example:
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-container
    image: my-image:latest
    imagePullPolicy: Always
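To restart the pod without first looking up its generated name, you can also delete by label (a sketch, assuming your pod template carries a label like app: my-app):

kubectl delete pod -l app=my-app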
I'm currently using Jenkins for automated builds and image tagging and it looks something like this:
kubectl --user="kube-user" --server="https://kubemaster.example.com" --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
Another trick is to initially run:
kubectl set image deployment/my-deployment mycontainer=myimage:latest
and then:
kubectl set image deployment/my-deployment mycontainer=myimage
This will actually trigger the rolling update, but be sure you also have imagePullPolicy: Always set.
Update:
Another trick I found, where you don't have to change the image name, is to change the value of a field in the pod template that will trigger a rolling update, such as terminationGracePeriodSeconds. You can do this using kubectl edit deployment your_deployment, kubectl apply -f your_deployment.yaml, or using a patch like this:
kubectl patch deployment your_deployment -p \
'{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
Just make sure you always change the number value.
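If you'd rather not keep track of a number to change, a variation of the same trick (a sketch; the redeploy-timestamp annotation key is made up for illustration) is to patch a pod-template annotation with the current time, so the value always changes:

kubectl patch deployment your_deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-timestamp\":\"$(date +%s)\"}}}}}"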

UPDATE 2019-06-24
Based on @Jodiug's comment: if you are on version 1.15 or later, you can use the command:
kubectl rollout restart deployment/demo
Read more on the issue:
https://github.com/kubernetes/kubernetes/issues/13488
Well, there is an interesting discussion about this subject on the Kubernetes GitHub project. See the issue: https://github.com/kubernetes/kubernetes/issues/33664
From the solutions described there, I would suggest one of two.
First
1. Prepare the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'
2. Deploy:
sed -i "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yml
kubectl apply -f deployment.yml
Second (one-liner):
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Of course, imagePullPolicy: Always is required in both cases.

kubectl rollout restart deployment myapp
This is the current way to trigger a rolling update while leaving the old ReplicaSets in place for the other operations provided by kubectl rollout, such as rollbacks.
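Because the restart goes through the normal Deployment machinery, you can combine it with the other rollout subcommands, for example:

kubectl rollout restart deployment myapp   # trigger the rolling update
kubectl rollout status deployment myapp    # wait for it to finish
kubectl rollout undo deployment myapp      # roll back if needed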

I use GitLab CI to build the image and then deploy it directly to GKE. I use a neat little trick to achieve a rolling update without changing any real settings of the container: changing a label to the current short commit SHA.
My command looks like this:
kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
Where you can use any name and any value for the label as long as it changes with each build.
Have fun!

It seems that k8s expects us to provide a different image tag for every deployment. My default strategy would be to make the CI system generate and push the docker images, tagging them with the build number: xpmatteo/foobar:456.
For local development it can be convenient to use a script or a makefile, like this:
# create a unique tag
VERSION := $(shell date +%Y%m%d%H%M%S)
TAG = xpmatteo/foobar:$(VERSION)

deploy:
	npm run-script build
	docker build -t $(TAG) .
	docker push $(TAG)
	sed s%IMAGE_TAG_PLACEHOLDER%$(TAG)% foobar-deployment.yaml | kubectl apply -f - --record
The sed command replaces a placeholder in the deployment document with the actual generated image tag.
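For this to work, foobar-deployment.yaml just needs the placeholder where the image would normally go - a minimal sketch (container name assumed from the makefile above):

containers:
- name: foobar
  image: IMAGE_TAG_PLACEHOLDER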

We could update it using the following command:
kubectl set image deployment/<<deployment-name>> -n=<<namespace>> <<container_name>>=<<your_dockerhub_username>>/<<image_name you want to set now>>:<<tag_of_the_image_you_want>>
For example,
kubectl set image deployment/my-deployment -n=sample-namespace my-container=alex/my-sample-image-from-dockerhub:1.1
where:
kubectl set image deployment/my-deployment - we want to set the image of the deployment named my-deployment.
-n=sample-namespace - this deployment belongs to the namespace sample-namespace. If your deployment is in the default namespace, you can omit this part of the command.
my-container - the container name previously specified in your deployment's YAML file.
alex/my-sample-image-from-dockerhub:1.1 - the new image you want the deployment to run. Here alex is the Docker Hub username (if applicable), and my-sample-image-from-dockerhub:1.1 is the image and tag you want to use.

Another option, which is more suitable for debugging but worth mentioning, is to check the revision history of your rollout:
$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
To see the details of each revision, run:
kubectl rollout history deployment my-dep --revision=2
You can then return to a previous revision by running:
$ kubectl rollout undo deployment my-dep --to-revision=2
And afterwards go back to the new one again.
Like running ctrl+z -> ctrl+y (:
(*) The CHANGE-CAUSE is <none> because the updates were not run with the --record flag, which records the command as the change cause - like this:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
(**) There is an ongoing discussion about deprecating this flag.
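Since --record may go away, an alternative that produces the same CHANGE-CAUSE output is to set the kubernetes.io/change-cause annotation on the deployment yourself after each update:

kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"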

I am using Azure DevOps to deploy containerized applications, and I easily managed to overcome this problem by using the build ID.
Every time it builds, a new build ID is generated, and I use this build ID as the tag for the Docker image, for example:
imagename:buildID
Once the image is built successfully (CI), in the CD pipeline's deployment yml file I give the image name as:
imagename:env:buildID
where env:buildID is the Azure DevOps variable that holds the value of the build ID.
So now every time I have new changes, they get built (CI) and deployed (CD).
Please comment if you need the build definition for CI/CD.
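For illustration, the CD step boils down to something like this (a sketch; the registry and names are placeholders, and in Azure DevOps syntax the build ID is usually referenced as $(Build.BuildId)):

kubectl set image deployment/my-deployment my-container=myregistry.azurecr.io/imagename:$(Build.BuildId)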

Related

Can I push a local (CI) docker image to cluster using kubectl in CI?

I am using a CI and have built a Docker image. I want to pass the image built in the CI to kubectl so it is placed in the cluster specified by my kubeconfig. This is as opposed to having the cluster reach out to a registry like Docker Hub to retrieve the image. Is this possible?
So far I cannot get this to work, and I am thinking I will be forced to create a secret on my cluster just to use my private Docker repo. I would like to exhaust my options before resorting to any registry. Also, as an alternative, I already log in to Docker on my CI and would ideally like to only have to use those credentials once.
I thought setting the imagePullPolicy on my deployment might do it, but I think that refers to the cluster context. Which makes me wonder if there is some other way to add an image to my cluster, with something like a kubectl create image.
Maybe I am just doing something obviously wrong?
Here is my deploy script on my CI
docker build -t <DOCKERID>/<PROJECT>:latest -t <DOCKERID>/<PROJECT>:$GIT_SHA -f ./<DIR>/Dockerfile ./<DIR>
docker push <DOCKERID>/<PROJECT>:latest
docker push <DOCKERID>/<PROJECT>:$GIT_SHA
kubectl --kubeconfig=$HOME/.kube/config apply -f k8s
kubectl --kubeconfig=$HOME/.kube/config set image deployment/<DEPLOYMENT NAME> <CONTAINER NAME>=<DOCKERID>/<PROJECT>:$GIT_SHA
And this Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment name>
spec:
  replicas: 3
  selector:
    matchLabels:
      component: <CONTAINER NAME>
  template:
    metadata:
      labels:
        component: <CONTAINER NAME>
    spec:
      containers:
      - name: <CONTAINER NAME>
        image: <DOCKERID>/<PROJECT>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: <PORT>
I want to pass the image [...] to kubectl to take and place it in the cluster [...] as opposed to having the cluster reach out to a registry like dockerhub to retrieve the image. Is this possible?
No. Generally the only way an image gets into a cluster is by a node pulling an image named in a pod spec. And even there "in the cluster" is a little vague; each node will have a different collection of images, based on which pods have ever run there and what's historically been cleaned up.
There are limited exceptions for developer-oriented single-node environments (you can docker build an image directly in a minikube VM, then set a pod to run it with imagePullPolicy: Never) but this wouldn't apply to a typical CI system.
I would like to exhaust my options to not have to use any registry.
Kubernetes essentially requires a registry. If you're using a managed Kubernetes from a public-cloud provider (EKS/GKE/AKS/...), there is probably a matching image registry offering you can use (ECR/GCR/ACR/...).
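If you do end up pulling from your private Docker Hub repo, the secret mentioned in the question is straightforward to create (a sketch; regcred and the credentials are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<DOCKERID> \
  --docker-password=<PASSWORD>

You then reference it from the pod spec under imagePullSecrets with name: regcred.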
The preferred way in a k8s setting is to update your k8s definitions, Helm chart values, or Kustomize values (whichever you are using) with the image and the sha256 digest that needs to be deployed.
Using the sha256 digest is preferable to tags in production, since docker image tags are mutable.
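For example, you can resolve the digest of a freshly pushed image and pin the Deployment to it (a sketch; the image and deployment names are placeholders):

# RepoDigests holds the immutable name@sha256:... reference for a pushed image
DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' myimage:latest)
kubectl set image deployment/my-deployment mycontainer="$DIGEST"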
Next, instead of using kubectl from CI/CD, it is preferable to use a GitOps tool - either Argo CD or Flux - on the k8s side to pull the correct images based on the definitions in Git.
You then need some sort of system to route and manage images - which one should go to which environment. Here is my article with an example of how this can be achieved (it uses my tool): https://itnext.io/building-kubernetes-cicd-pipeline-with-github-actions-argocd-and-reliza-hub-e7120b9be870

Kubernetes pods with :latest image issue

I'm using Kubernetes for my production environment (I'm new to these kinds of configurations). This is an example of one of my deployment files (with changes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myProd
  labels:
    app: thisIsMyProd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thisIsMyProd
  template:
    metadata:
      labels:
        app: thisIsMyProd
    spec:
      containers:
      - name: myProd
        image: DockerUserName/MyProdProject # <==== Latest
        ports:
        - containerPort: 80
Now, I wanted to make it work with Travis CI, so I made something similar to this:
sudo: required
services:
  - docker
env:
  global:
    - LAST_COMMIT_SHA=$(git rev-parse HEAD)
    - SERVICE_NAME=myProd
    - DOCKER_FILE_PATH=.
    - DOCKER_CONTEXT=.
addons:
  apt:
    packages:
      - sshpass
before_script:
  - docker build -t $SERVICE_NAME:latest -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
script:
  # Mocking run test cases
deploy:
  - provider: script
    script: bash ./deployment/deploy-production.sh
    on:
      branch: master
And finally, here is the deploy-production.sh script:
#!/usr/bin/env bash
# Log in to the docker CLI
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
# Build images
docker build -t $DOCKER_USERNAME/$SERVICE_NAME:latest -t $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
# Take those images and push them to docker hub
docker push $DOCKER_USERNAME/$SERVICE_NAME:latest
docker push $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA
# Run deployment script in deployment machine
export SSHPASS=$DEPLOYMENT_HOST_PASSWORD
ssh-keyscan -H $DEPLOYMENT_HOST >> ~/.ssh/known_hosts
# Run Kubectl commands
kubectl apply -f someFolder
kubectl set image ... # instead of the `...`, the rest of the command that sets the image with the SHA on the deployments
Now here are my questions:
When Travis finishes its work, the deploy-production.sh script will run on merges to the master branch. Now I have a concern about the kubectl step: on the first deployment, when we apply the deployment it will pull the image from Docker Hub and try to run the pods, and after that the set image command will run, changing the image of those deployments. Will this make the deployment happen twice?
When I tried to deploy for the second time, I noticed the deployment used an old version of the latest image, because it found it locally. After searching, I found imagePullPolicy and set it to Always. But imagine I hadn't used that imagePullPolicy attribute - what would really happen in this case? I know the containers from the first apply command run old-version code, but wouldn't running set image fix that? To clarify my question: does Kubernetes use some random way to select the pods that are going to go down? It doesn't mark the pods with the order in which the commands ran, so will it detect that the set image pods should remain and the apply pods are the ones that need to be terminated?
Isn't pulling every time harmful? Should I always make the deployment image avoid latest, to remove that hassle?
Thanks
If the image tag is the same in both apply and set image, then only the apply action re-deploys the Deployment (in which case you do not need the set image command). If they refer to different image tags, then yes, the deployment will run twice.
If you use the latest tag, applying a manifest that uses it with no modification WILL NOT re-deploy the Deployment. You need to introduce a modification to the manifest file in order to force Kubernetes to re-deploy. In my case, I use the date command to generate a TIMESTAMP variable that is passed into the env spec of the pod container; my container does not use it in any way, it just forces a re-deploy of the Deployment. Or you can use kubectl rollout restart deployment/name if you are on Kubernetes 1.15 or later.
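A minimal sketch of that timestamp trick (the placeholder and variable names are mine):

# deployment.yaml carries an env var the app never reads:
#   env:
#   - name: FORCE_REDEPLOY
#     value: "TIMESTAMP_PLACEHOLDER"
sed "s/TIMESTAMP_PLACEHOLDER/$(date +%s)/" deployment.yaml | kubectl apply -f -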
Other than wasted bandwidth, or if you are being charged by how many times you pull a docker image (poor you), there is no harm in an additional image pull just to be sure you are using the latest image version. Even if you use a specific image tag with version numbers like 1.10.112-rc5, there will be cases where you or your fellow developers forget to update the version number when pushing a modified image. IMHO, imagePullPolicy: Always should be the default rather than something you have to set explicitly.

Is there a way to set "imagePullPolicy" for Cloud Run Service?

I would like to be able to automatically update my Google Cloud Run Services once my image has been updated on Google Container Registry.
I need to update multiple Cloud Run services based on the same image (which has a tag of :latest ), so I expected this to work.
# build & push the container image
- name: "gcr.io/kaniko-project/executor:latest"
  args: ["--cache=true", "--cache-ttl=48h", "--destination=gcr.io/project/titan:latest"]
Currently, my titan image gets updated but no new Revision is deployed to Cloud Run.
Google Cloud Run does not automatically deploy a revision when you push a new image to a tag reference. There are many good reasons it doesn’t.
When a Cloud Run revision is deployed, Cloud Run resolves the image reference to its sha256 digest.
Therefore, when you specify a container image with the :latest tag, Cloud Run uses that sha256 digest to deploy and scale out that revision of your service. When you later update the :latest tag to point to a new image, Cloud Run will still use the previous digest. It would be a dangerous and slippery slope otherwise.
If you need to auto-deploy new revisions to Cloud Run based on a new image push, I recommend one of two solutions:
1. Make the "gcloud beta run deploy" command a step in your Google Cloud Build process (easy): https://cloud.google.com/run/docs/continuous-deployment
2. Write a GCF/Run service that deploys your app to Cloud Run every time a new image is pushed, by subscribing to Google Cloud Build (or GCR) notifications through Pub/Sub (much harder).
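For the first option, the Cloud Build step essentially reduces to a single command (a sketch; the service name and region are assumptions):

gcloud beta run deploy titan --image gcr.io/project/titan:latest --region us-central1 --platform managed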
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
Note that you should avoid using :latest tag, see Best Practices for Configuration for more information.
For example, creating a YAML file dummy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: whatever
    image: index.docker.io/DOCKER_USER/PRIVATE_REPO_NAME:latest
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: myregistrykey
Then run:
kubectl create -f dummy.yaml

Docker for Windows Kubernetes pod gets ImagePullBackOff after creating a new deployment

I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME                                    READY   STATUS             RESTARTS   AGE
myapp-<a random alphanumeric string>   0/1     ImagePullBackOff   0          <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. Boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    imagePullPolicy <string>

DESCRIPTION:
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info:
     https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you use :latest, Kubernetes will try to find it in whatever remote repository you configured (by default Docker Hub) rather than using your local one. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
I had this same ImagePullBackOff error while running a pod deployment with a YAML file, also on Docker Desktop.
For anyone else who finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (3 lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using the kubectl apply command: kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node-web-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means that the kubelet cannot pull the image, because either:
You don't have networking set up in your Docker VM that can reach your Docker registry (Docker Hub?)
myimage:latest doesn't exist in your registry or is misspelled.
myimage:latest requires credentials (you are pulling from a private registry). You can take a look at this to configure container credentials in a Pod.
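Whichever of those it is, the Events section of kubectl describe shows the exact pull error:

kubectl describe pod <pod-name>
# look at the Events section at the bottom for the reason the pull failed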

Usage of Kubectl command and deployment of pods using Kubernetes and Jenkins

I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment, and I have some sample microservices. While exploring Kubernetes, I came across pods, services, replica sets/controllers, statefulsets, etc., and I now understand those Kubernetes concepts properly. I am planning to use Docker Hub as my image registry.
My Requirement
When a commit is made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a docker image, and push it to Docker Hub - as mentioned earlier. After that, it needs to deploy to my test environment by pulling from Docker Hub.
My Confusion
When I am creating services and pods, how can I define the docker image path within the pod/service/statefulset, given that it is pulled from Docker Hub for deployment?
Can I directly add kubectl commands to a scheduled Jenkins pipeline job? How can I use kubectl commands for Kubernetes deployments?
Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on the Jenkins host and provide it with a correct kubeconfig so it can access the cluster. If your host can use kubectl, you can have a look at the $HOME/.kube/config file.
Then, in your job, you can just use kubectl like you do from your host.
Regarding the images from Docker Hub:
Docker Hub is the default Docker registry anyway, so normally there is no need to change anything in your cluster - only if you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registries, because they are better integrated.
So this part of a deployment will pull nginx from Docker Hub; no need to specify anything special for it:
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
So ensure Jenkins can do the following things from the command line:
build Docker images
push Docker images (make sure you have called docker login on Jenkins)
access your cluster via kubectl get pods
So an easy pipeline simply needs to do these steps (a shell sketch follows after the next paragraph):
trigger on SVN change
checkout code
create a unique version (which could be the build number, SVN revision, or date)
build / test
build Docker image
tag Docker image with the unique version
push Docker image
change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using Jenkins Pipeline you can use readYaml and writeYaml to achieve this)
call kubectl apply -f deployment.yaml
Depending on your build system and the languages used, there are some useful tools which can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use Maven CI Friendly Versions with any Maven docker plugin, or jib.
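Stripped down to plain shell, the versioning, push, and deploy steps might look like this (a sketch; the registry, names, and file paths are assumptions):

VERSION="${BUILD_NUMBER}-$(svn info --show-item revision)"   # unique per build
docker build -t myregistry/myapp:"$VERSION" .
docker push myregistry/myapp:"$VERSION"
# point the manifest at the freshly pushed tag, then apply it
sed -i "s|image: myregistry/myapp:.*|image: myregistry/myapp:$VERSION|" deployment.yaml
kubectl apply -f deployment.yaml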
To create a deployment, you need to create a YAML file.
In that YAML file, the line:
image: oronboni/serviceb
points to the container image, which in this case is on Docker Hub:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceb
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceb
  template:
    metadata:
      labels:
        app: serviceb
    spec:
      containers:
      - name: serviceb
        image: oronboni/serviceb
        ports:
        - containerPort: 5002
I strongly suggest that you watch the Kubernetes deployment webinar at the link below:
https://m.youtube.com/watch?v=_vHTaIJm9uY
Good luck.
