Is there a way to set "imagePullPolicy" for Cloud Run Service? - google-cloud-run

I would like to be able to automatically update my Google Cloud Run Services once my image has been updated on Google Container Registry.
I need to update multiple Cloud Run services based on the same image (which has a tag of :latest ), so I expected this to work.
# build & push the container image
- name: "gcr.io/kaniko-project/executor:latest"
  args: ["--cache=true", "--cache-ttl=48h", "--destination=gcr.io/project/titan:latest"]
Currently, my titan image gets updated but no new Revision is deployed to Cloud Run.

Google Cloud Run does not automatically deploy a new revision when you push a new image to a tag reference, and there are good reasons it doesn't.
When a Cloud Run revision is deployed, the image reference is resolved to its sha256 digest.
Therefore, when you specify a container image with the :latest tag, Cloud Run uses that resolved sha256 digest to deploy and scale out that revision of your service. When you later update :latest to point to a new image, Cloud Run still uses the previously resolved digest. It would be a dangerous and slippery slope otherwise.
If you need to auto-deploy new revisions to Cloud Run based on a new image push, I recommend two solutions:
Make a “gcloud beta run deploy” command a step in your Google Cloud Build process. (easy; a minimal sketch follows after these two options) https://cloud.google.com/run/docs/continuous-deployment
Write a GCF/Run service that deploys your app to Cloud Run every time there’s a new image pushed by subscribing to Google Cloud Build (or GCR) notifications through PubSub. (much harder)
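For option 1, a minimal cloudbuild.yaml sketch could look like this (the service name titan-service and the region are placeholders, not taken from your setup, and the Cloud Build service account needs permission to deploy to Cloud Run):
steps:
# build & push the container image (as in the question)
- name: "gcr.io/kaniko-project/executor:latest"
  args: ["--cache=true", "--cache-ttl=48h", "--destination=gcr.io/project/titan:latest"]
# deploy the image that was just pushed as a new Cloud Run revision
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args: ["run", "deploy", "titan-service",
         "--image", "gcr.io/project/titan:latest",
         "--region", "us-central1",
         "--platform", "managed"]
Repeat the deploy step once per service if several services run from the same image.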

The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image if it already exists locally. If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
Note that you should avoid using :latest tag, see Best Practices for Configuration for more information.
For example, creating a YAML file dummy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: whatever
    image: index.docker.io/DOCKER_USER/PRIVATE_REPO_NAME:latest
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
  imagePullSecrets:
  - name: myregistrykey
Then run:
kubectl create -f dummy.yaml

Related

Why are microservices not restarted on GKE - "failed to copy: httpReaderSeeker: failed open: could not fetch content descriptor"

I'm running microservices on GKE and using skaffold for management.
Everything works fine for a week and suddenly all services are killed (not sure why).
Logging shows this same info for all services:
There is no indication that something is going wrong in the logs before all services fail. It looks like the pods are all killed at the same time by GKE for whatever reason.
What confuses me is why the services do not restart.
kubectl describe pod auth shows an "ImagePullBackOff" error.
When I simulate this situation on the test system (same setup) by deleting a pod manually, all services restart just fine.
To deploy the microservices, I use skaffold.
---deployment.yaml for one of the microservices---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: pubsub-key
      containers:
      - name: auth
        image: us.gcr.io/XXXX/auth
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: NATS_CLIENT_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NATS_URL
          value: 'http://nats-srv:4222'
        - name: NATS_CLUSTER_ID
          value: XXXX
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
Any idea why the microservices don't restart? Again, they run fine after deploying them with skaffold and also after simulating pod shutdown on the test system ... what changed here?
---- Update 2021.10.30 -------------
After some further digging in the cloud log explorer, I figured out that the pod is trying to pull the previously built image but fails. If I pull the image manually on the cloud console, using the image name plus tag as listed in the logs, it works just fine (so the image is there).
The log gives the following reason for the error:
Failed to pull image "us.gcr.io/scout-productive/client:v0.002-72-gaa98dde#sha256:383af5c5989a3e8a5608495e589c67f444d99e7b705cfff29e57f49b900cba33": rpc error: code = NotFound desc = failed to pull and unpack image "us.gcr.io/scout-productive/client#sha256:383af5c5989a3e8a5608495e589c67f444d99e7b705cfff29e57f49b900cba33": failed to copy: httpReaderSeeker: failed open: could not fetch content descriptor sha256:4e9f2cdf438714c2c4533e28c6c41a89cc6c1b46cf77e54c488db30ca4f5b6f3 (application/vnd.docker.image.rootfs.diff.tar.gzip) from remote: not found"
Where is that "sha256:4e9f2cdf438714c2c4533e28c6c41a89cc6c1b46cf77e54c488db30ca4f5b6f3" coming from that it cannot find according to the error message?
Any help/pointers are much appreciated!
Thanks
When Skaffold causes an image to be built, the image is pushed to a repository (us.gcr.io/scout-productive/client) using a tag generated with your specified tagging policy (v0.002-72-gaa98dde; this tag looks like the result of using Skaffold's default gitCommit tagging policy). The remote registry returns the image digest of the received image, a SHA256 value computed from the image metadata and image file contents (sha256:383af5c5989a3e8a5608495e589c67f444d99e7b705cfff29e57f49b900cba33). The image digest is unique like a fingerprint, whereas a tag is just a name-to-digest mapping that may be updated to point to a different image digest.
When deploying your application, Skaffold rewrites your manifests on-the-fly to reference the specific built container images by using both the generated tag and the image digest. Container runtimes ignore the image tag when an image digest is specified since the digest identifies a unique image.
So the fact that your specific image cannot be resolved means that the image has been deleted from your repository. Are you, or someone on your team, deleting images?
Some people run image garbage collectors to delete old images. Skaffold users do this to delete the interim images generated by skaffold dev between dev loops. But some of these collectors are indiscriminate, for example only keeping images tagged with latest. Since Skaffold tags images using a configured tagging policy, such collectors can delete your Skaffold-built images. To avoid problems, either tune your collector (e.g., only delete untagged images), or have Skaffold build your dev and production images to different repositories. For example, GCP's Artifact Registry allows having multiple independent repositories in the same project, and your GKE cluster can access all such repositories.
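As a rough sketch of that last suggestion (the repository names below are made up), skaffold.yaml already declares the image repository per artifact, and dev loops can be pointed at a separate repository with --default-repo:
apiVersion: skaffold/v2beta29
kind: Config
build:
  tagPolicy:
    gitCommit: {}            # the default tagger that produces tags like v0.002-72-gaa98dde
  artifacts:
  - image: us.gcr.io/scout-productive/client
# for dev loops, push somewhere your garbage collector does not touch, e.g.:
#   skaffold dev --default-repo=us.gcr.io/scout-productive-dev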
The ImagePullBackOff means that Kubernetes couldn't download the image from the registry: that could mean the image with that name/tag doesn't exist, or the credentials for the registry are wrong or expired.
From what I see in your deployment.yml, no registry credentials are provided at all. You can do that by providing imagePullSecrets. I have never used Skaffold, but my assumption is that you log in to the private registry in Skaffold and use that to deploy the images, so when Kubernetes later tries to re-download the image from the registry by itself, it fails for lack of authorization.
I can propose two solutions:
1. ImagePullSecret
You can create a secret resource which contains the credentials to the private registry and then reference that secret in the deployment.yml; that way Kubernetes will be able to authenticate to your private registry and pull the images (a sketch follows after the link below).
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
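As a sketch (the secret name gcr-pull-secret and the key.json path are placeholders I made up), the secret can be created from a GCP service account key and then referenced in the pod spec of the deployment:
# create the registry credentials once, e.g.:
#   kubectl create secret docker-registry gcr-pull-secret \
#     --docker-server=us.gcr.io \
#     --docker-username=_json_key \
#     --docker-password="$(cat key.json)" \
#     -n development
# then reference the secret in deployment.yaml:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: gcr-pull-secret
      containers:
      - name: auth
        image: us.gcr.io/XXXX/auth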
2. ImagePullPolicy
You can prevent re-downloading your images by setting imagePullPolicy: IfNotPresent in the deployment.yml; it will reuse the image that is already available locally (see the snippet after the link below). Using this method requires defining image tags properly. For example, using it with the :latest tag can lead to not pulling the "newest latest" image, because from the cluster's perspective it already has an image with that tag, so it won't download a new one.
https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
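For example, in the auth container of the deployment above it would sit next to the image field (the tag shown is a placeholder; this approach relies on using a concrete tag rather than :latest):
containers:
- name: auth
  image: us.gcr.io/XXXX/auth:some-fixed-tag   # a concrete tag, not :latest
  imagePullPolicy: IfNotPresent               # reuse the image if the node already has it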
To troubleshoot the pods being killed, can you share the Kubernetes events from the time when the pods are killed (kubectl -n NAMESPACE get events)? That might give more information.

Can I push a local (CI) docker image to cluster using kubectl in CI?

I am using a CI and have built a Docker Image. I want to pass the image built in the CI to kubectl to take and place it in the cluster I have specified by my kubeconfig. This is as opposed to having the cluster reach out to a registry like dockerhub to retrieve the image. Is this possible?
So far I cannot get this to work and I am thinking I will be forced to create a secret on my cluster to just use my private docker repo. I would like to exhaust my options to not have to use any registry. Also as an alternative I already login to docker on my CI and would like to ideally only have to use those credentials once.
I thought setting the ImagePullPolicy on my deployment might do it but I think it is referring to the cluster context. Which makes me wonder if there is some other way to add an image to my cluster with something like a kubectl create image.
Maybe I am just doing something obvious wrong?
Here is my deploy script on my CI
docker build -t <DOCKERID>/<PROJECT>:latest -t <DOCKERID>/<PROJECT>:$GIT_SHA -f ./<DIR>/Dockerfile ./<DIR>
docker push <DOCKERID>/<PROJECT>:latest
docker push <DOCKERID>/<PROJECT>:$GIT_SHA
kubectl --kubeconfig=$HOME/.kube/config apply -f k8s
kubectl --kubeconfig=$HOME/.kube/config set image deployment/<DEPLOYMENT NAME> <CONTAINER NAME>=<DOCKERID>/<PROJECT>:$SHA
And this deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment name>
spec:
  replicas: 3
  selector:
    matchLabels:
      component: <CONTAINER NAME>
  template:
    metadata:
      labels:
        component: <CONTAINER NAME>
    spec:
      containers:
      - name: <CONTAINER NAME>
        image: <DOCKERID>/<PROJECT>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: <PORT>
I want to pass the image [...] to kubectl to take and place it in the cluster [...] as opposed to having the cluster reach out to a registry like dockerhub to retrieve the image. Is this possible?
No. Generally the only way an image gets into a cluster is by a node pulling an image named in a pod spec. And even there "in the cluster" is a little vague; each node will have a different collection of images, based on which pods have ever run there and what's historically been cleaned up.
There are limited exceptions for developer-oriented single-node environments (you can docker build an image directly in a minikube VM, then set a pod to run it with imagePullPolicy: Never) but this wouldn't apply to a typical CI system.
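A minimal sketch of that minikube-only exception (the image name is made up): build against minikube's Docker daemon, then run the image with pulls disabled:
# first build inside the minikube VM, e.g.:
#   eval $(minikube docker-env)
#   docker build -t local/myapp:dev .
apiVersion: v1
kind: Pod
metadata:
  name: myapp-local
spec:
  containers:
  - name: myapp
    image: local/myapp:dev
    imagePullPolicy: Never   # never contact a registry; fails if the image is not already on the node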
I would like to exhaust my options to not have to use any registry.
Kubernetes essentially requires a registry. If you're using a managed Kubernetes from a public-cloud provider (EKS/GKE/AKS/...), there is probably a matching image registry offering you can use (ECR/GCR/ACR/...).
The preferred way in a k8s setting is to update your k8s definitions, helm chart values, or kustomize values (whichever you are using) with the image and its sha256 digest that needs to be deployed.
Using a sha256 digest is preferable to tags in production, since docker image tags are mutable.
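For example, the container image in the deployment manifest would be pinned like this instead of by tag:
containers:
- name: <CONTAINER NAME>
  image: <DOCKERID>/<PROJECT>@sha256:<digest>   # immutable digest reference instead of a mutable tag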
Next, instead of using kubectl from CI/CD it's preferred to use a GitOps tool - either Argo or Flux on the k8s side to pull the correct images based on definitions in Git.
There you need some sort of system to route and manage images - which one should go to which environment. Here is my article with an example how it can be achieved (this one is using my tool): https://itnext.io/building-kubernetes-cicd-pipeline-with-github-actions-argocd-and-reliza-hub-e7120b9be870

Kubernetes pods with :latest image issue

I'm using Kubernetes for the production environment (I'm new to these kinds of configuration). This is an example of one of my deployment files (with changes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myProd
  labels:
    app: thisIsMyProd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thisIsMyProd
  template:
    metadata:
      labels:
        app: thisIsMyProd
    spec:
      containers:
      - name: myProd
        image: DockerUserName/MyProdProject # <==== Latest
        ports:
        - containerPort: 80
Now, I wanted to make it work with Travis CI, so I made something similar to this:
sudo: required
services:
  - docker
env:
  global:
    - LAST_COMMIT_SHA=$(git rev-parse HEAD)
    - SERVICE_NAME=myProd
    - DOCKER_FILE_PATH=.
    - DOCKER_CONTEXT=.
addons:
  apt:
    packages:
      - sshpass
before_script:
  - docker build -t $SERVICE_NAME:latest -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
script:
  # Mocking run test cases
deploy:
  - provider: script
    script: bash ./deployment/deploy-production.sh
    on:
      branch: master
And finally here is the deploy-production.sh script:
#!/usr/bin/env bash
# Log in to the docker CLI
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
# Build images
docker build -t $DOCKER_USERNAME/$SERVICE_NAME:latest -t $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
# Take those images and push them to docker hub
docker push $DOCKER_USERNAME/$SERVICE_NAME:latest
docker push $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA
# Run deployment script in deployment machine
export SSHPASS=$DEPLOYMENT_HOST_PASSWORD
ssh-keyscan -H $DEPLOYMENT_HOST >> ~/.ssh/known_hosts
# Run Kubectl commands
kubectl apply -f someFolder
kubectl set image ... # instead of the `...`, the rest of the command that sets the image with the SHA on the deployments
Now here are my questions:
When Travis finishes its work, the deploy-production.sh script will run when merging to the master branch. Now I have a concern about the kubectl step: for the first-time deployment, when we apply the deployment it will pull the image from Docker Hub and try to run it; after that, the set image command will run, changing the image of these deployments. Will this make the deployment happen twice?
When I try to deploy for the second time, I noticed the deployment used an old version of the latest image because it found it locally. After searching I found imagePullPolicy and set it to Always. But imagine that I hadn't used the imagePullPolicy attribute: what would really happen in this case? I know that the first apply command runs containers with old-version code, but wouldn't running set image fix that? To clarify my question: does Kubernetes use some random way to select the pods that are going to go down? It doesn't seem to mark the pods with the order in which the commands ran, so how does it decide that the set image pods should remain and the apply pods are the ones that need to be terminated?
Isn't pulling every time harmful? Would it be better to make the deployment image not use latest at all, to avoid that hassle?
Thanks
If the image tag is the same in both apply and set image, then only the apply action re-deploys the Deployment (in which case you do not need the set image command). If they refer to different image tags, then yes, the deployment will run twice.
If you use the latest tag, applying a manifest that uses the latest tag with no modification WILL NOT re-deploy the Deployment. You need to introduce a modification to the manifest file in order to force Kubernetes to re-deploy. In my case, I use the date command to generate a TIMESTAMP variable that is passed in the env spec of the pod container; my container does not use it in any way, it is there just to force a re-deploy of the Deployment. Or you can use kubectl rollout restart deployment/name if you are using Kubernetes 1.15 or later.
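A minimal sketch of that trick (the env name and placeholder value are my own, not from your setup), using the deployment from your question:
spec:
  template:
    spec:
      containers:
      - name: myProd
        image: DockerUserName/MyProdProject:latest
        imagePullPolicy: Always
        env:
        - name: FORCE_REDEPLOY              # not used by the app; only changes the pod template
          value: "TIMESTAMP_PLACEHOLDER"    # CI replaces this, e.g. with $(date +%s), before kubectl apply
# or, on Kubernetes 1.15+:
#   kubectl rollout restart deployment/myProd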
Other than wasted bandwidth, or if you are being charged by how many times you pull a docker image (poor you), there is no harm in an additional image pull just to be sure you are using the latest image version. Even if you use a specific image tag with version numbers like 1.10.112-rc5, there will be cases where you or your fellow developers forget to update the version number when pushing a modified image version. IMHO, imagePullPolicy=Always should be the default rather than explicitly required.

Kubernetes how to make Deployment to update image

I have a deployment with a single pod, with my custom docker image, like:
containers:
  - name: mycontainer
    image: myimage:latest
During development I want to push a new latest version and have the Deployment updated.
I can't find how to do that without explicitly defining a tag/version, incrementing it for each build, and running
kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1
You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size), set imagePullPolicy: "Always", and then use kubectl delete pod pod_name.
A new container will be created and the latest image automatically downloaded, then the old container will be terminated.
Example:
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my_container
    image: my_image:latest
    imagePullPolicy: "Always"
I'm currently using Jenkins for automated builds and image tagging and it looks something like this:
kubectl --user="kube-user" --server="https://kubemaster.example.com" --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
Another trick is to initially run:
kubectl set image deployment/my-deployment mycontainer=myimage:latest
and then:
kubectl set image deployment/my-deployment mycontainer=myimage
It will actually trigger the rolling update, but be sure you also have imagePullPolicy: "Always" set.
Update:
another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, like terminationGracePeriodSeconds. You can do this using kubectl edit deployment your_deployment or kubectl apply -f your_deployment.yaml or using a patch like this:
kubectl patch deployment your_deployment -p \
'{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'
Just make sure you always change the number value.
UPDATE 2019-06-24
Based on the @Jodiug comment, if you have version 1.15 or later you can use the command:
kubectl rollout restart deployment/demo
Read more on the issue:
https://github.com/kubernetes/kubernetes/issues/13488
Well there is an interesting discussion about this subject on the kubernetes GitHub project. See the issue: https://github.com/kubernetes/kubernetes/issues/33664
From the solutions described there, I would suggest one of two.
First
1.Prepare deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'
2.Deploy
sed -ie "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yml
kubectl apply -f deployment.yml
Second (one liner):
kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
Of course, imagePullPolicy: Always is required in both cases.
kubectl rollout restart deployment myapp
This is the current way to trigger a rolling update and leave the old replica sets in place for other operations provided by kubectl rollout like rollbacks.
I use Gitlab-CI to build the image and then deploy it directly to GKE. I use a neat little trick to achieve a rolling update without changing any real settings of the container: changing a label to the current commit short SHA.
My command looks like this:
kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
Where you can use any name and any value for the label as long as it changes with each build.
Have fun!
It seems that k8s expects us to provide a different image tag for every deployment. My default strategy would be to make the CI system generate and push the docker images, tagging them with the build number: xpmatteo/foobar:456.
For local development it can be convenient to use a script or a makefile, like this:
# create a unique tag
VERSION:=$(shell date +%Y%m%d%H%M%S)
TAG=xpmatteo/foobar:$(VERSION)
deploy:
	npm run-script build
	docker build -t $(TAG) .
	docker push $(TAG)
	sed s%IMAGE_TAG_PLACEHOLDER%$(TAG)% foobar-deployment.yaml | kubectl apply -f - --record
The sed command replaces a placeholder in the deployment document with the actual generated image tag.
We could update it using the following command:
kubectl set image deployment/<<deployment-name>> -n=<<namespace>> <<container_name>>=<<your_dockerhub_username>>/<<image_name you want to set now>>:<<tag_of_the_image_you_want>>
For example,
kubectl set image deployment/my-deployment -n=sample-namespace my-container=alex/my-sample-image-from-dockerhub:1.1
where:
kubectl set image deployment/my-deployment - we want to set the image of the deployment named my-deployment
-n=sample-namespace - this deployment belongs to the namespace named sample-namespace. If your deployment belongs to the default namespace, there is no need to include this part in the command.
my-container is the container name previously specified in the YAML file of your deployment configuration.
alex/my-sample-image-from-dockerhub:1.1 is the new image you want to set for the deployment and run the container from. Here, alex is the Docker Hub username (if applicable), and my-sample-image-from-dockerhub:1.1 is the image and tag you want to use.
Another option which is more suitable for debugging but worth mentioning is to check in revision history of your rollout:
$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
To see the details of each revision, run:
kubectl rollout history deployment my-dep --revision=2
And then returning to the previous revision by running:
$ kubectl rollout undo deployment my-dep --to-revision=2
And then returning back to the new one.
Like running ctrl+z -> ctrl+y (:
(*) The CHANGE-CAUSE is <none> because you should run the updates with the --record flag - like mentioned here:
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
(**) There is a discussion regarding deprecating this flag.
I am using Azure DevOps to deploy containerized applications, and I easily managed to overcome this problem by using the build ID.
Every time it builds, Azure DevOps generates a new build ID, and I use this build ID as the tag for the docker image, for example:
imagename:buildID
Once your image is built (CI) successfully, in the CD pipeline's deployment yml file I give the image name as
imagename:env:buildID
where env:buildID is the Azure DevOps variable that holds the value of the build ID.
So now, every time I have new changes, I build (CI) and deploy (CD).
Please comment if you need the build definition for CI/CD.
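For reference, a rough sketch of such a CI step (the task inputs below are placeholders, not copied from my pipeline):
# azure-pipelines.yml (CI)
steps:
- task: Docker@2
  inputs:
    command: buildAndPush
    containerRegistry: my-registry-connection   # assumed service connection name
    repository: imagename
    tags: |
      $(Build.BuildId)                          # predefined Azure DevOps build ID variable
# the CD stage's deployment yml then references imagename:$(Build.BuildId)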

Kubernetes AWS deployment can not set docker credentials

I set up a Kubernetes cluster on AWS using the kube-up script, with one master and two minions. I want to create a pod that uses a private docker image, so I need to add my credentials to the docker daemons of each minion of the cluster. But I don't know how to log into the minions created by the AWS script. What is the recommended way to pass credentials to the docker daemons of each minion?
Probably the best method for you is ImagePullSecrets: you create a secret (docker config) which will be used for the image pull. Read more about the different ways of using a private registry here: http://kubernetes.io/docs/user-guide/images/#using-a-private-registry
Explained here: https://kubernetes.io/docs/concepts/containers/images/
There are 3 options for ImagePullPolicy: Always, IfNotPresent and Never
1) example of yaml:
...
spec:
  containers:
  - name: uses-private-image
    image: $PRIVATE_IMAGE_NAME
    imagePullPolicy: Always
    command: [ "echo", "SUCCESS" ]
2) By default, the kubelet will try to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.
This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.
All pods will have read access to any pre-pulled images.
