Docker Kubernetes (Mac) - Autoscaler unable to find metrics

I have installed a local instance of Kubernetes via Docker on my Mac.
While following the walkthrough on how to activate autoscaling on a deployment, I ran into an issue: the autoscaler can't read the metrics.
When I run kubectl describe hpa, the current CPU usage comes back as <unknown> / 50% with the warnings:
Warning FailedGetResourceMetric:
horizontal-pod-autoscaler unable to get metrics for resource cpu:
unable to fetch metrics from API: the server could not find the
requested resource (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas
horizontal-pod-autoscaler failed to get cpu utilization: unable to
get metrics for resource cpu: unable to fetch metrics from API: the
server could not find the requested resource (get pods.metrics.k8s.io)
I have installed the metrics-server via git clone https://github.com/kubernetes-incubator/metrics-server.git and installed it with kubectl create -f deploy/1.8+

I finally got it working..
Here are the full steps I took to get things working:
Have Kubernetes running within Docker
Delete any previous instance of metrics-server from your Kubernetes instance with kubectl delete -n kube-system deployments.apps metrics-server
Clone metrics-server with git clone https://github.com/kubernetes-incubator/metrics-server.git
Edit the file deploy/1.8+/metrics-server-deployment.yaml to override the default command by adding a command section that didn't exist before. The new section will instruct metrics-server to allow for an insecure communications session (don't verify the certs involved). Do this only for Docker, and not for production deployments of metrics-server:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.1
  command:
    - /metrics-server
    - --kubelet-insecure-tls
Add metrics-server to your Kubernetes instance with kubectl create -f deploy/1.8+ (if you get errors about the .yaml resources already existing, use kubectl apply -f deploy/1.8+ instead)
Remove and add the autoscaler to your deployment again. It should now show the current cpu usage.
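If the autoscaler was created with kubectl autoscale, re-creating it can look like the sketch below (the deployment name php-apache and the thresholds are the ones from the Kubernetes walkthrough, used here as placeholders):

# Remove the existing HPA (name is a placeholder)
kubectl delete hpa php-apache

# Re-create it against your deployment with your own CPU target and replica bounds
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# The current CPU usage should now be reported instead of <unknown>
kubectl describe hpa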
EDIT July 2020:
Most of the above steps hold true except the metrics-server has changed and that file does not exist anymore.
The repo now recommends installing it like this:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
So we can now download this file,
curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml
add --kubelet-insecure-tls under args (around line 88 of the file) in the metrics-server Deployment, and run
kubectl apply -f components.yaml
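For reference, the edited args section of the downloaded components.yaml ends up looking roughly like this (the existing flags are whatever the file ships with; only --kubelet-insecure-tls is the addition, and it should only be used for local/non-production clusters):

containers:
- name: metrics-server
  args:
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-insecure-tls   # added for a local Docker Desktop cluster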

For those who use an Internal-IP, this may work for you. Follow @Mr.Turtle's answer above up to step 4, then add one more flag:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.3
  command:
    - /metrics-server
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP
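If you are not sure whether your nodes expose an InternalIP, a quick check (not part of the original answer) is:

kubectl get nodes -o wide   # the INTERNAL-IP column shows the address that --kubelet-preferred-address-types=InternalIP will use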

We upgraded to AWS EKS version 1.13.7 and that's when we started having problems with HPA. It turned out that on my deployment I had to specify a value for resources.requests.cpu=200m, and the HPA started working for me.
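A minimal sketch of what that looks like in the container spec (the container name and image are placeholders, not from the original answer):

containers:
- name: my-app           # placeholder
  image: my-app:latest   # placeholder
  resources:
    requests:
      cpu: 200m          # the HPA needs a CPU request to compute utilization against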

Had the same issue while using my Kubernetes kubeadm lab; the updated procedure is here:
https://github.com/kubernetes-sigs/metrics-server
This solved the issue:
horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
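Concretely, installing from that repo is now a single apply of the released manifest:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml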

If someone still has problems fixing this issue, this helped me fix it on minikube:
I had 2 deployments with the same label, something like this:
kind: Deployment
metadata:
  name: webserver
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
---
kind: Deployment
metadata:
  name: database
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
I renamed the label and matchLabels of the database (e.g. to app: db), then deleted both deployments and applied the new config - et voilà, it worked (after hours of trying to solve the problem).
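As a sketch, the actual steps were roughly (deployment names taken from the manifests above; the file name is a placeholder):

# Delete both deployments that shared the label
kubectl delete deployment webserver database

# Re-apply the corrected manifests
kubectl apply -f deployments.yaml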
Further information on this issue: https://github.com/kubernetes/kubernetes/issues/79365

I have deployed on EKS and I was facing the same issue.
muhasan@admins-MacBook-Pro devops % kubectl get hpa
NAME                     REFERENCE                            TARGETS                         MINPODS   MAXPODS   REPLICAS   AGE
backend-iam-deployment   Deployment/backend-iam-deployment    36278272/100Mi, <unknown>/50%   1         10        1          10m
In my deployment I just specified resources, and that helped me get HPA running:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-iam-deployment
  labels:
    app: backend-iam-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-iam-deployment
  template:
    metadata:
      labels:
        app: backend-iam-deployment
    spec:
      containers:
        - name: backend-iam-deployment
          image: <imagename>
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
          startupProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 10
            failureThreshold: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 10
            failureThreshold: 5
      imagePullSecrets:
        - name: us-east-1-ecr-registry
After applying the resource requests and limits to my deployment, HPA started working for me.
muhasan@admins-MacBook-Pro fx.identitymanagement % kubectl get hpa
NAME                     REFERENCE                            TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
backend-iam-deployment   Deployment/backend-iam-deployment    23216128/100Mi, 1%/50%   1         10        1          24m
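For completeness, an HPA matching those targets could look roughly like the sketch below (this assumes the autoscaling/v2beta2 API; the thresholds mirror the 50% CPU / 100Mi memory targets in the output above):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-iam-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-iam-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50    # scale when average CPU exceeds 50% of the request
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 100Mi       # scale when average memory exceeds 100Mi per pod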

Related

How to make a deployment file for a kubernetes service that depends on images from Amazon ECR?

A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
    - port: 5000
      nodePort: 30002
      protocol: TCP
  selector:
    app: hello
On the master node, I have run this to ensure kubernetes knows about the deployment and the service.
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is an issue. This does not happen when using a public image from the public Docker Hub. Logically, this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster: execute the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup. Ensure the machine on which you execute this script has the aws CLI installed and AWS credentials configured. If you are using a Windows machine, execute the script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
TOKEN=`/usr/local/bin/aws ecr --region=$REGION --profile <AWS_PROFILE> get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`

kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section for the secret, which your pods will use when downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: SECRET_NAME
Create the pods and service.
If it succeeds, the secret will still expire in 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically, using the same script given above; a sketch follows below.
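A minimal sketch of such a cron entry (the script path is a placeholder; it assumes the refresh script above is saved as /opt/scripts/refresh-ecr-secret.sh and that the cron user has aws and kubectl configured):

# Refresh the ECR pull secret every 8 hours, before the 12-hour token expires
0 */8 * * * /opt/scripts/refresh-ecr-secret.sh >> /var/log/refresh-ecr-secret.log 2>&1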
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
If you are using a lower Kubernetes version where this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
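A rough sketch of what the provider config could look like, per the linked documentation (this assumes the ecr-credential-provider binary from cloud-provider-aws is installed on each node and that the kubelet is started with --image-credential-provider-config and --image-credential-provider-bin-dir pointing at it):

apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider          # binary name inside the configured bin dir
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"        # only ECR images go through this provider
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1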

k3s image pull from private registries

I've been looking at different references on how to enable k3s (running on my pi) to pull docker images from a private registry on my home network (server laptop on my network). If someone can please point my head in the right direction? This is my approach:
Created the docker registry on my server (and making accessible via port 10000):
docker run -d -p 10000:5000 --restart=always --name local-docker-registry registry:2
This worked, and I was able to push and pull images to it from the "server PC". I didn't add authentication, TLS, etc. yet...
(viewing the images via docker plugin on VS Code).
Added the inbound firewall rule on my laptop server, and tested that the registry can be 'seen' from my pi (so this also works):
$ curl -ks http://<server IP>:10000/v2/_catalog
{"repositories":["tcpserialpassthrough"]}
Added the registry link to k3s (k3s running on my pi) in registries.yaml file, and restarted k3s and the pi
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  pwlaptopregistry:
    endpoint:
      - "http://<host IP here>:10000"
Put the registry prefix on my image reference in the deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: pwlaptopregistry/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
However, when I check the deployment startup sequence, it's still not able to pull the image (and possibly also still referencing docker hub?):
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
8m24s Normal SuccessfulCreate replicaset/tcpserialpassthrough-88fb974d9 Created pod: tcpserialpassthrough-88fb974d9-b88fc
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m23s Warning FailedScheduling pod/tcpserialpassthrough-88fb974d9-b88fc 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
8m21s Normal Scheduled pod/tcpserialpassthrough-88fb974d9-b88fc Successfully assigned default/tcpserialpassthrough-88fb974d9-b88fc to raspberrypi
6m52s Normal Pulling pod/tcpserialpassthrough-88fb974d9-b88fc Pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ErrImagePull
6m50s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Failed to pull image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": failed to resolve reference "docker.io/pwlaptopregistry/tcpserialpassthrough:vers1.3-arm": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
6m3s Normal BackOff pod/tcpserialpassthrough-88fb974d9-b88fc Back-off pulling image "pwlaptopregistry/tcpserialpassthrough:vers1.3-arm"
3m15s Warning Failed pod/tcpserialpassthrough-88fb974d9-b88fc Error: ImagePullBackOff
I wondered if the issue is with authorization, and added basic auth following a YouTube guide, but the same issue persists.
Also noted that /etc/docker/daemon.json must be edited to allow unauthorized, non-TLS connections, via:
{
  "insecure-registries": [ "<host IP>:10000" ]
}
but it seemed that this needs to be done on the node side, whereas the nodes don't have the docker CLI installed??
... this is so stupid, I have no idea why a domain name and port need to be specified as the "name" of the referenced registry, but anyway this solved my issue (for reference):
$ cat /etc/rancher/k3s/registries.yaml
mirrors:
  "<host IP>:10000":
    endpoint:
      - "http://<host IP>:10000"
and restarting k3s:
systemctl restart k3s
Then in your deployment, refer to it in your image path like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcpserialpassthrough
spec:
  selector:
    matchLabels:
      app: tcpserialpassthrough
  replicas: 1
  template:
    metadata:
      labels:
        app: tcpserialpassthrough
    spec:
      containers:
        - name: tcpserialpassthrough
          image: <host IP>:10000/tcpserialpassthrough:vers1.3-arm
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8001
              hostPort: 8001
              protocol: TCP
          command: ["dotnet", "/app/TcpConnector.dll"]
      imagePullSecrets:
        - name: mydockercredentials
with the registry's basic auth details saved as a secret:
$ kubectl create secret docker-registry mydockercredentials --docker-server=<host IP>:10000 --docker-username=<username> --docker-password=<password>
You'll be able to verify the pull process via
$ kubectl get events -w

Pod is in Pending state on single node Kubernetes cluster

I am using the YAML file to deploy the container on Kubernetes with some replication factor on a hosted machine.
YAML File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mojo-deployment
  labels:
    app: mojo
spec:
  selector:
    matchLabels:
      app: mojo
  replicas: 3
  template:
    metadata:
      labels:
        app: mojo
    spec:
      containers:
        - name: mojo
          image: mojo:1.0.1
          ports:
            - containerPort: 9000
---
# Services Info
apiVersion: v1
kind: Service
metadata:
  name: mojo-services
spec:
  selector:
    app: mojo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
# Ingress Configuration
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mojo-ingress
  annotations:
    kubernetes.io/ingress.class: mojo
spec:
  backend:
    serviceName: mojo-services
    servicePort: 80
Steps:
Build the Docker image using docker build -t mojo:1.0 .
docker image ls shows me an image id.
I skipped running any docker command to deploy the image as a container. Do I need to do that, or will kubectl take care of it?
Run kubectl apply -f Prod.yaml. It shows
deployment.apps/mojo-deployment created
service/mojo-services created
ingress.networking.k8s.io/mojo-ingress created
kubectl get service, kubectl get pod and kubectl get deployment return output (screenshots omitted).
Questions?
Do I need to build the container before deploying the YAML file? I tried it, but Kubernetes is still not running.
Why are all pods showing Pending status?
The Deployment is also showing Pending status.
I am also trying to access the Ingress on port 80 and cannot access it.
Edit
pod description
Name:           mojo-deployment-6665bdc557-s57m7
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=mojo
                pod-template-hash=6665bdc557
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mojo-deployment-6665bdc557
Containers:
  mojo:
    Image:        mojo:1.0
    Port:         9000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tjx6p (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-tjx6p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tjx6p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  70s (x45 over 67m)  default-scheduler  0/1 nodes are available: 1 node(s) were unschedulable.
Edit 2
After removing the taint from the master node.
kubectl get node and kubectl get pod return output (screenshots omitted).
kubectl describe node : https://gist.github.com/amixpal/333bffd6ab91def749267f30d4ffb079
If you have only one node (master), then usually a Taint will be added to it which makes the master node unschedulable. Remove the taint from the master (and all other nodes, if there is more than one) using the command below.
kubectl taint nodes --all node-role.kubernetes.io/master-
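To confirm the taint is gone afterwards (a quick check, not part of the original answer; the node name is a placeholder):

kubectl describe node <master-node-name> | grep -i taints   # should report Taints: <none>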
Edit: Based on the node describe output, the CNI is not ready.
Please make sure all CNI-related Pods are running and healthy.
Your container manifest should reference a downloadable Docker image, or the k8s node should already contain the Docker image:
containers:
  - name: mojo
    image: mojo:1.0.1
    ports:
      - containerPort: 9000
Please answer: how does your mojo:1.0.1 Docker image appear on the Kubernetes nodes?
All pods wait for the image to be available.
The Deployment waits for all pods to reach status Running.
The K8s Service makes the Ingress available once the Deployment is ready.

kubectl top node `error: metrics not available yet`. Using metrics-server as Heapster is deprecated

Kubernetes is not able to find the metrics-server API. I am using Kubernetes with Docker on Mac. I was trying to set up HPA following the example in the docs. However, when I execute kubectl get hpa, my target still shows as unknown. Then I tried kubectl describe hpa, which gave me an error like below:
Name:                                                  php-apache
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Sun, 07 Oct 2018 12:36:31 -0700
Reference:                                             Deployment/php-apache
Metrics:                                               ( current / target )
  resource cpu on pods (as a percentage of request):   <unknown> / 5%
Min replicas:                                          1
Max replicas:                                          10
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Events:
  Type     Reason                        Age               From                       Message
  ----     ------                        ----              ----                       -------
  Warning  FailedComputeMetricsReplicas  1h (x34 over 5h)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       1m (x42 over 5h)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API
I am using metrics-server as suggested in Kubernetes documentation. I also tried doing same just using Minikube. But that also didn't work.
Running kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes outputs:
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": []
}
Use the official metrics-server - https://github.com/kubernetes-sigs/metrics-server
If you use one master node, run this command to create the metrics-server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
If you have HA (High availability) cluster, use this:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml
Then you can use kubectl top nodes or kubectl top pods -A and get something like:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
Solution (if using Minikube):
Changed context of Kubernetes to Minikube.
Enabled metrics-server and Disabled heapster in minikube.
minikube addons disable heapster
minikube addons enable metrics-server
Deploy metrics-server in your cluster using the following steps:
git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server
kubectl create -f deploy/1.7/ (if Kubernetes version 1.7)
OR
kubectl create -f deploy/1.8+/ (if Kubernetes version 1.8+)
Start minikube dashboard and minikube service [your service].
Try kubectl top node.
I found this (https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/) resource helpful.
Use the below command in your cluster to setup Metrics Server:
kubectl apply -f https://raw.githubusercontent.com/pythianarora/total-practice/master/sample-kubernetes-code/metrics-server.yaml
This should work just fine.
I turned off TLS in metrics-server and it started working after that, so I updated the YAML and reposted it.
I know this is a late answer, but I have had some issues with Docker Kubernetes and Autoscaler myself without finding a good answer on the internet. After many days of debugging, I found out that there were some connectivity issues with the metrics-server (couldn't connect to the pods).
I turned off TLS in metrics-server and it started working.. I answered my own post here if someone else is experiencing the same issue:
Docker Kubernetes (Mac) - Autoscaler unable to find metrics
Make sure that you are using the latest metrics-server on your Kubernetes cluster. The below command helped me.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
reference link: https://github.com/kubernetes-sigs/metrics-server
The problem is the certificate.
Update the Deployment section of the YAML you downloaded to install metrics-server:
Add --kubelet-insecure-tls to its args, like this:
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
  imagePullPolicy: IfNotPresent
  args:
    - --kubelet-insecure-tls
The complete Deployment will now look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
          imagePullPolicy: IfNotPresent
          args:
            - --kubelet-insecure-tls
            - --cert-dir=/tmp
            - --secure-port=4443
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
When using minikube 1.4.0, there is no need to do anything other than minikube start: kubectl top node/pod should work out of the box.

How to update a Kubernetes deployment on Google Container Engine?

I've followed a few guides, and I've got CI set up with Google Container Engine and Google Container Registry. The problem is my updates aren't being applied to the deployment.
So this is my deployment.yml which contains a Kubernetes Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: my_app
  labels:
    app: my_app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: my_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my_app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my_app
    spec:
      containers:
        - name: node
          image: gcr.io/me/my_app:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: 100
        - name: phantom
          image: docker.io/wernight/phantomjs:2.1.1
          command: ["phantomjs", "--webdriver=8910", "--web-security=no", "--load-images=false", "--local-to-remote-url-access=yes"]
          ports:
            - containerPort: 8910
          resources:
            requests:
              memory: 1000
As part of my CI process I run a script which updates the image in google cloud registry, then runs kubectl apply -f /deploy/deployment.yml. Both tasks succeed, and I'm notified the Deployment and Service has been updated:
2016-09-28T14:37:26.375Z  googleclouddeployment  service "my_app" configured
2016-09-28T14:37:27.370Z  googleclouddeployment  deployment "my_app" configured
Since I've included the :latest tag on my image, I thought the image would be downloaded each time the deployment is updated. According to the docs, a RollingUpdate should also be the default strategy.
However, when I run my CI script which updates the deployment - the updated image isn't downloaded and the changes aren't applied. What am I missing? I'm assuming that since nothing is changing in deployment.yml, no update is being applied. How do I get Kubernetes to download my updated image and use a RollingUpdate to deploy it?
You can force an update of a deployment by changing any field, such as a label. So in my case, I just added this at the end of my CI script:
kubectl patch deployment fb-video-extraction -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
We have recently published a technical overview of how the approach we call GitOps can be implemented in GKE.
All you need to do is configure the GCR builder to pick up code changes from GitHub and run builds; you then install the Weave Cloud agent in your cluster and connect it to a repo where YAML files are stored, and the agent will take care of updating the repo with new images and applying the changes to the cluster.
For a more high-level overview, see also:
The GitOps Pipeline
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
