Kubernetes Workflow - docker

I have been using Kubernetes for a while now.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
I usually set up an Ingress, a Service and a ReplicationController for each project.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: portifolio
  name: portifolio-ingress
spec:
  rules:
  - host: www.cescoferraro.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: portifolio
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  selector:
    name: portifolio
  ports:
  - name: web
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    name: portifolio
  template:
    metadata:
      namespace: portifolio
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"
My "problem" is that to deploy my app I usually do:
kubectl delete -f kubernetes.yaml
kubectl create -f kubernetes.yaml
I wish I could use a single command to deploy, whether my app is already up or not. Rolling updates do not work when I use the same image (I think it's a bug in my Kubernetes server version), but they also do not work when the app has never been deployed at all.
I have read about Deployments; I wonder how they would help me?
Goals
1. Deploy if the app is brand new
2. Replace existing pods with new ones using a new image from the Docker registry.

I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.
If all you want to do is roll out new pods, I would recommend replacing your ReplicationController with a Deployment. The manifests have almost exactly the same syntax, so it's easy to migrate from standard RCs, and you can perform a server-side rolling update with a single kubectl replace -f manifest.yml.
Please note that even with a Deployment resource you can't trigger a redeployment if nothing has changed in your manifest; kubectl replace would just do nothing. Therefore you could, for example, increment or change a tag inside your manifest to force the deployment if needed (e.g. revision: 003).
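A minimal sketch of that idea (the revision label and the REVISION_PLACEHOLDER token are hypothetical names, not part of the original manifests):
# Fill in the placeholder label with the current timestamp, then
# perform a server-side update of the Deployment in one step.
sed "s/REVISION_PLACEHOLDER/$(date +%s)/" manifest.yml | kubectl replace -f -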

As already written in the previous answer, it is recommended to use a Deployment instead of a ReplicationController for this.
Using imagePullPolicy: Always will only ensure that Kubernetes does a docker pull before starting new Pods. It does not force recreation of Pods when nothing in the Deployment resource changes.
I would suggest adding two things to your solution:
Add a label to the Deployment with CURRENT_DATE as a placeholder value.
Add a simple shell script to your project which replaces the placeholder with the current date and time and then uses kubectl to apply the resources.
Example Bash script
#!/usr/bin/env bash
sed "s/CURRENT_DATE/$(date)/" kubernetes.yaml | kubectl apply -f -
Then use this script for redeployment instead of calling kubectl by yourself.
This is only meant as a very simple example. When it comes to creating/applying/patching resources in Kubernetes, things tend to get more and more complicated by time. If this happens, consider using some more advanced templating solutions, e.g. by using Python and Jinja2.

You could use a deployment for this. Create it the first time, and after that you only need to do kubectl set image deploy/my-app app=user/image:tag --record and you're good to go.
Doing that, you can also do cool things like kubectl rollout undo deploy/my-app or get history and status.
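For illustration (assuming a Deployment named my-app with a container named app, as in the command above):
# Roll out a new image and record the change cause
kubectl set image deploy/my-app app=user/image:v2 --record
# Watch the rollout and inspect its history
kubectl rollout status deploy/my-app
kubectl rollout history deploy/my-app
# Roll back to the previous revision if something goes wrong
kubectl rollout undo deploy/my-app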

You might consider using Argo.
Argo is an open-source workflow engine for Kubernetes. It lets you define a complex microservices-based application deployment using YAML in a source repo and automatically redeploys the app on YAML changes (e.g. on every commit to the production branch).

Related

How to make a deployment file for a kubernetes service that depends on images from Amazon ECR?

A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30002
    protocol: TCP
  selector:
    app: hello
On the master node, I have run the following to ensure Kubernetes knows about the deployment and the service.
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from Docker Hub. Logically, this would be a permissions issue, but it seems it is not. I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How can I fix this so K8s can easily use my image from Amazon ECR?
You should create and use a secret for the ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster: execute the shell script below from a machine that can access the AWS account in which the ECR registry is hosted. Change the placeholders to match your setup, and make sure the machine has the AWS CLI installed and AWS credentials configured. If you are using a Windows machine, run the script in a Cygwin or Git Bash console.
#!/bin/bash
# Placeholders: fill in your AWS account ID, region, secret name, etc.
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>
# Fetch a short-lived ECR authorization token (valid for 12 hours).
TOKEN=`/usr/local/bin/aws ecr get-authorization-token --region=$REGION --profile <AWS_PROFILE> --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
# Recreate the docker-registry secret with the fresh token.
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
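After the script runs, you can confirm the secret was created (a quick sanity check, not part of the original answer):
# Replace the placeholder with the secret name you chose above.
kubectl get secret <SECRET_NAME>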
Change the deployment and add an imagePullSecrets section referencing the secret, which your pods will use when pulling the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: SECRET_NAME
Create the pods and service.
Even if this succeeds, the secret will still expire after 12 hours. To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically; for the cron job, use the same script given above.
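For example, a rough sketch of such a cron entry (assuming the script above is saved at /opt/scripts/refresh-ecr-secret.sh, a hypothetical path):
# Refresh the ECR pull secret every 8 hours, well before the 12-hour expiry.
0 */8 * * * /opt/scripts/refresh-ecr-secret.sh >> /var/log/refresh-ecr-secret.log 2>&1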
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
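A rough sketch of the kubelet flags involved (the config and plugin-directory paths below are placeholders, and the kubelet still needs its usual other flags):
# Enable the alpha feature gate and point the kubelet at the credential
# provider config and plugin directory (placeholder paths).
kubelet \
  --feature-gates=KubeletCredentialProviders=true \
  --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
  --image-credential-provider-bin-dir=/usr/local/bin/credential-providers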
If you are using a lower Kubernetes version and this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c

Minikube services work when run from command line, but applying through YAML doesn't work

Here's an image of my Kubernetes services.
todo-front-2 is a working instance of my app, which I deployed from the command line:
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never
kubectl expose deployment todo-front --type=NodePort --port=3000
And it's working great. Now I want to move on and use a todo-front.yaml file to deploy and expose my service. The todo-front service is my current attempt at this. My deployment file looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: todo-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-front
  template:
    metadata:
      labels:
        app: todo-front
    spec:
      containers:
      - name: todo-front
        image: todo-front:v7
        env:
        - name: REACT_APP_API_ROOT
          value: "http://localhost:12000"
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: todo-front
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: todo-front
I deploy it using:
kubectl apply -f deployment/todo-front.yaml
Here is the output
But when I run
minikube service todo-front
It redirects me to a URL saying "Site can't be reached".
I can't figure out what I'm doing wrong. The ports should be fine, and my cluster should be fine, since I can get it working using only the command line without external YAML files. Both deployments also use the same Docker image. I have also tried changing all the ports currently set to 3000 to something different, in case they clash with the existing todo-front-2 deployment, with no luck.
Here is also a screenshot of pods and their status:
Would anyone with more experience with Kube and Docker care to take a look? Thank you!
You can run the commands below to generate the YAML files without applying them to the cluster, then compare them with the YAMLs you created manually and see if there is a mismatch. Also, instead of writing the YAMLs by hand, you can simply apply the generated YAMLs themselves.
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never --dry-run -o yaml > todo-front-deployment.yaml
kubectl expose deployment todo-front --type=NodePort --port=3000 --dry-run -o yaml > todo-front-service.yaml
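For example, one way to spot a mismatch (assuming the hand-written manifest lives at deployment/todo-front.yaml, as in the question):
# Compare the generated Deployment with the hand-written manifest; any
# difference in selectors, labels or ports will show up in the diff.
diff todo-front-deployment.yaml deployment/todo-front.yaml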

Kubernetes: The code change does not appear, is there a way to sync?

In my Dockerfile I copy my code into the image like this:
COPY src/ /var/www/html/
But somehow my code changes don't appear the way they used to with plain Docker. Unless I remove the Pods, the changes do not appear. How do I sync them?
I am using minikube.
webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: php-apache
        image: learningk8s_website
        imagePullPolicy: Never
        ports:
        - containerPort: 80
When your container spec says:
image: learningk8s_website
imagePullPolicy: Never
The second time you kubectl apply it, Kubernetes determines that it's exactly the same as the Deployment spec you already have and does nothing. Even if it did generate new Pods, the server is highly likely to notice that it already has an image learningk8s_website:latest and won't pull a new one; indeed, you're explicitly telling Kubernetes not to.
The usual practice here is to include some unique identifier in the image name, such as a date stamp or commit hash.
IMAGE=$REGISTRY/name/learningk8s_website:$(git rev-parse --short HEAD)
docker build -t "$IMAGE" .
docker push "$IMAGE"
You then need to make the corresponding change in the Deployment spec and kubectl apply it. This will cause Kubernetes to notice that there is some change in the pod spec, create new pods with the new image, and destroy the old pods (in that order). You may find a templating engine like Helm to be useful to make it easier to inject this value into the YAML.
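A minimal sketch of that step (the Deployment name webserver and container name php-apache come from the manifest above; $IMAGE is the tag built in the previous snippet):
# Point the Deployment at the newly built image; Kubernetes then rolls out
# new pods with it and tears down the old ones.
kubectl set image deployment/webserver php-apache="$IMAGE"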

Editing nodeSelector doesn't rearrange pods in ReplicaSet

I have created the following ReplicaSet
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx-test
spec:
  replicas: 2
  template:
    metadata:
      name: nginx
      namespace: default
      labels:
        env: beta
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      nodeSelector:
        domain: cloud
This runs the pods on a node labeled domain=cloud. Now I change the nodeSelector using the command
kubectl edit rs/nginx-test
and change the nodeSelector to edge. However, the pods are not moved to the edge node. This works for a Deployment, but not for a ReplicaSet. Any ideas?
Here are my 2 nodes:
NAME STATUS AGE VERSION LABELS
x1 Ready 5d v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,domain=cloud,kubernetes.io/hostname=xxxx,node-role.kubernetes.io/master=
x2 Ready 5d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,domain=edge,kubernetes.io/hostname=xxxx
The official Kubernetes documentation recommends that you use a Deployment, which creates ReplicaSets, rather than using ReplicaSets directly. The behaviour you see is expected: a ReplicaSet's pod template is only used when new pods are created, so editing the nodeSelector does not touch the existing, healthy pods, whereas a Deployment notices the template change and rolls the pods for you.
"A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all."
https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
It is not unheard of to use a ReplicaSet by itself, but it is generally not recommended.
If a Deployment works for you, I recommend sticking with that, unless you are using ReplicaSets for some custom update orchestration that Deployments don't cover.
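If you do need to stay with the plain ReplicaSet, a rough workaround (a sketch, not from the original answer) is to delete the existing pods after editing, so the ReplicaSet recreates them from the updated template:
# env=beta is the label from the ReplicaSet's pod template above.
# Deleting the pods forces the ReplicaSet to recreate them, this time
# honouring the new nodeSelector.
kubectl delete pods -l env=beta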

How to update a Kubernetes deployment on Google Container Engine?

I've followed a few guides, and I've got CI set up with Google Container Engine and Google Container Registry. The problem is my updates aren't being applied to the deployment.
So this is my deployment.yml which contains a Kubernetes Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: my_app
  labels:
    app: my_app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: my_app
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my_app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my_app
    spec:
      containers:
      - name: node
        image: gcr.io/me/my_app:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 100
      - name: phantom
        image: docker.io/wernight/phantomjs:2.1.1
        command: ["phantomjs", "--webdriver=8910", "--web-security=no", "--load-images=false", "--local-to-remote-url-access=yes"]
        ports:
        - containerPort: 8910
        resources:
          requests:
            memory: 1000
As part of my CI process I run a script which updates the image in Google Container Registry, then runs kubectl apply -f /deploy/deployment.yml. Both tasks succeed, and I'm notified that the Deployment and Service have been updated:
2016-09-28T14:37:26.375Z googleclouddeployment service "my_app" configured
2016-09-28T14:37:27.370Z googleclouddeployment deployment "my_app" configured
Since I've included the :latest tag on my image, I thought the image would be downloaded each time the deployment is updated. According to the docs, a RollingUpdate should also be the default strategy.
However, when I run my CI script that updates the deployment, the updated image isn't downloaded and the changes aren't applied. What am I missing? I'm assuming that since nothing is changing in deployment.yml, no update is being applied. How do I get Kubernetes to download my updated image and use a RollingUpdate to deploy it?
You can force an update of a deployment by changing any field, such as a label. So in my case, I just added this at the end of my CI script:
kubectl patch deployment fb-video-extraction -p \
"{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
We have recently published a technical overview of how the approach we call GitOps can be implemented in GKE.
All you need to do is configure the GCR builder to pick up code changes from GitHub and run builds; you then install the Weave Cloud agent in your cluster and connect it to a repo where the YAML files are stored, and the agent will take care of updating the repo with new images and applying the changes to the cluster.
For a higher-level overview, see also:
The GitOps Pipeline
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
