I have a Maven project on my local machine and a Docker image in my repository, and I'm using GitLab and Jenkins to automate builds. With the current setup I now want to continuously deploy to Kubernetes. I have no idea how this is done. Any input will be appreciated.
My yaml file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
  - name: client
    image: <image>
    ports:
    - containerPort: 3000
The easiest way will be to set the new image name on your deployment. See here:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
You will need access to your cluster from GitLab/Jenkins.
Another option is to use a Kubernetes deployment tool such as Helm, or any other solution. This approach will help you in more complicated scenarios where you also want to update your configuration files (k8s yamls).
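For example, with Helm the deploy step from CI can be a single command. A minimal sketch, assuming your chart exposes image.repository and image.tag values (the release and chart names here are illustrative):
helm upgrade --install my-app ./charts/my-app \
  --set image.repository=repo-name/whatever-app \
  --set image.tag=<version>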
Once the image is built and pushed to the container registry, you just have to set the new image:
>>> docker build -t repo-name/whatever-app:<version> .
>>> docker push repo-name/whatever-app:<version>
>>> kubectl set image deployment/my-deployment mycontainer=repo-name/whatever-app:<version>
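Optionally, you can make the pipeline wait for the rollout to complete and fail the job if it doesn't (same deployment placeholder as above):
>>> kubectl rollout status deployment/my-deployment --timeout=120s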
You can use this example Jenkins pipeline to build and deploy your dockerized Maven app to Kubernetes with Helm. It consists of the following steps:
Git clone and setup
Build and local tests
Publish Docker and Helm
Deploy to dev and test
Deploy to staging and test
Optionally deploy to production and test
I think it's a nice starting point for realizing CI/CD with Jenkins & Kubernetes.
I am using a CI and have built a Docker Image. I want to pass the image built in the CI to kubectl to take and place it in the cluster I have specified by my kubeconfig. This is as opposed to having the cluster reach out to a registry like dockerhub to retrieve the image. Is this possible?
So far I cannot get this to work and I am thinking I will be forced to create a secret on my cluster to just use my private docker repo. I would like to exhaust my options to not have to use any registry. Also as an alternative I already login to docker on my CI and would like to ideally only have to use those credentials once.
I thought setting imagePullPolicy on my deployment might do it, but I think that refers to the cluster's context. Which makes me wonder if there is some other way to add an image to my cluster, with something like a kubectl create image.
Maybe I am just doing something obvious wrong?
Here is my deploy script on my CI
docker build -t <DOCKERID>/<PROJECT>:latest -t <DOCKERID>/<PROJECT>:$GIT_SHA -f ./<DIR>/Dockerfile ./<DIR>
docker push <DOCKERID>/<PROJECT>:latest
docker push <DOCKERID>/<PROJECT>:$GIT_SHA
kubectl --kubeconfig=$HOME/.kube/config apply -f k8s
kubectl --kubeconfig=$HOME/.kube/config set image deployment/<DEPLOYMENT NAME> <CONTAINER NAME>=<DOCKERID>/<PROJECT>:$GIT_SHA
And this deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment name>
spec:
  replicas: 3
  selector:
    matchLabels:
      component: <CONTAINER NAME>
  template:
    metadata:
      labels:
        component: <CONTAINER NAME>
    spec:
      containers:
      - name: <CONTAINER NAME>
        image: <DOCKERID>/<PROJECT>
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: <PORT>
I want to pass the image [...] to kubectl to take and place it in the cluster [...] as opposed to having the cluster reach out to a registry like dockerhub to retrieve the image. Is this possible?
No. Generally the only way an image gets into a cluster is by a node pulling an image named in a pod spec. And even there "in the cluster" is a little vague; each node will have a different collection of images, based on which pods have ever run there and what's historically been cleaned up.
There are limited exceptions for developer-oriented single-node environments (you can docker build an image directly in a minikube VM, then set a pod to run it with imagePullPolicy: Never) but this wouldn't apply to a typical CI system.
I would like to exhaust my options to not have to use any registry.
Kubernetes essentially requires a registry. If you're using a managed Kubernetes from a public-cloud provider (EKS/GKE/AKS/...), there is probably a matching image registry offering you can use (ECR/GCR/ACR/...).
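For example, with ECR the CI login step typically looks something like this (account ID and region are placeholders):
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
After that, docker push to the ECR repository URL works as usual.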
The preferred way in a k8s setting is to update the k8s definitions, Helm chart values, or Kustomize values (whichever you are using) with the image and the sha256 digest that needs to be deployed.
Using a sha256 digest is preferable to tags in production, since Docker image tags are mutable.
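As a sketch, you can resolve the digest right after pushing and pin the deployment to it (the deployment and container names here are illustrative):
DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' repo-name/whatever-app:<version>)
kubectl set image deployment/my-deployment mycontainer="$DIGEST"
Note that docker inspect only reports the repo@sha256:... form after the image has been pushed, since the digest is assigned by the registry.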
Next, instead of running kubectl from CI/CD, it's preferable to use a GitOps tool - either Argo or Flux on the k8s side - to pull the correct images based on definitions in Git.
You then need some sort of system to route and manage images - which one should go to which environment. Here is my article with an example of how this can be achieved (this one is using my tool): https://itnext.io/building-kubernetes-cicd-pipeline-with-github-actions-argocd-and-reliza-hub-e7120b9be870
Let's say I have a deployment that looks something like this:
apiVersion: v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  template:
    kind: Pod
    metadata: myapp-pod
      labels:
        apptype: front-end
    containers:
    - name: nginx
      containers: <-- what is supposed to go here? -->
How do I properly build a container using an existing Dockerfile without having to push a built image up to Docker Hub?
Kubernetes can't build images. You are all but required to use an image registry. This isn't necessarily Docker Hub: the various public-cloud providers (AWS, Google, Azure) all have their own registry offerings, there are some third-party ones out there, or you can run your own.
If you're using a cloud-hosted Kubernetes installation (EKS, GKE, ...) the "right" way to do this is to push your built image to the corresponding image registry (ECR, GCR, ...) before you run it.
docker build -t gcr.io/my/image:20201116 .
docker push gcr.io/my/image:20201116
containers:
- name: anything
  image: gcr.io/my/image:20201116
There are some limited exceptions to this in a very local development environment. For example, if you're using Minikube as a local Kubernetes installation, you can point docker commands at it, so that docker build builds an image inside the Kubernetes context.
eval $(minikube docker-env)
docker build -t my-image:20201116 .
containers:
- name: anything
  image: my-image:20201116  # matches `docker build -t` option
  imagePullPolicy: Never    # since you manually built it inside the minikube Docker
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment - check this out.
Make sure you give the documentation a good read :)
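The short version from that page: write a Deployment manifest like the nginx example there, then apply it and check the result:
kubectl apply -f deployment.yaml
kubectl get deployments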
I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. Here I have some sample microservices. While exploring Kubernetes, I came across pods, services, replica sets/controllers, statefulsets, etc., and I understand those Kubernetes concepts now. I am planning to use Docker Hub as my image registry.
My Requirement
When a commit is made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a Docker image, and push it to Docker Hub - as mentioned earlier. After that, it needs to deploy to my test environment by pulling from Docker Hub.
My Confusion
When I am creating services and pods, how can I define the Docker image path within the pod/service/statefulset, since it is pulled from Docker Hub for deployment?
Can I directly add kubectl commands within a scheduled Jenkins pipeline job? How can I use kubectl commands for a Kubernetes deployment?
Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on Jenkins and provide it with the correct kubeconfig so it can access the cluster. If your host can use kubectl, you can have a look at the $HOME/.kube/config file.
In your job you can then use kubectl just like you do from your host.
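If you'd rather not leave a kubeconfig lying on the agent, the Jenkins Kubernetes CLI plugin can inject it per step from stored credentials; a minimal sketch (the credential ID is illustrative):
withKubeConfig([credentialsId: 'my-kubeconfig']) {
    sh 'kubectl get pods'
}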
Regarding the images from Docker Hub:
Docker Hub is the default registry for Docker anyway, so normally there is no need to change anything in your cluster - only if you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registries because they are better integrated.
So this part of a deployment will pull nginx from Docker Hub; no need to specify anything special for it:
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
So ensure Jenkins can do the following things from the command line:
build Docker images
push Docker images (make sure you have called docker login on Jenkins)
access your cluster via kubectl get pods
So an easy pipeline simply needs to do these steps (a minimal Jenkinsfile sketch follows the list):
trigger on SVN change
checkout code
create a unique version, which could be the build number, SVN revision, or date
Build / Test
Build Docker Image
tag Docker Image with unique version
push Docker Image
change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using Jenkins Pipeline, you can use readYaml and writeYaml to achieve this)
call kubectl apply -f deployment.yaml
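Put together, a minimal declarative Jenkinsfile sketch of those steps could look like this (the image name, manifest path, and Maven goals are assumptions; readYaml/writeYaml come from the Pipeline Utility Steps plugin):
pipeline {
    agent any
    triggers { pollSCM('H/5 * * * *') }              // poll SVN for changes
    environment {
        VERSION = "${env.BUILD_NUMBER}"              // or SVN revision, or a date
        IMAGE   = "repo-name/my-app:${VERSION}"      // illustrative image name
    }
    stages {
        stage('Checkout & Build') {
            steps {
                checkout scm
                sh 'mvn clean verify'
            }
        }
        stage('Docker Build & Push') {
            steps {
                sh "docker build -t ${IMAGE} ."
                sh "docker push ${IMAGE}"            // assumes docker login was already done
            }
        }
        stage('Deploy') {
            steps {
                script {
                    // rewrite the image line, then apply the manifest
                    def manifest = readYaml file: 'deployment.yaml'
                    manifest.spec.template.spec.containers[0].image = env.IMAGE
                    writeYaml file: 'deployment.yaml', data: manifest, overwrite: true
                }
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}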
Depending on your build system and languages used, there are some useful tools which can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use Maven CI Friendly Versions with any Maven Docker plugin, or Jib.
To create a deployment you need to create a yaml file. In the yaml file, the row
image: oronboni/serviceb
points to the container image, which in this case is hosted on Docker Hub:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceb
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceb
  template:
    metadata:
      labels:
        app: serviceb
    spec:
      containers:
      - name: serviceb
        image: oronboni/serviceb
        ports:
        - containerPort: 5002
I strongly suggest that you watch the Kubernetes deployment webinar at the link below:
https://m.youtube.com/watch?v=_vHTaIJm9uY
Good luck.
What is the best way to change the source code of my application running as a Kubernetes pod without creating a new version of the image, so I can avoid the time taken for pushing and pulling the image from the repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
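Note that with Kubernetes you normally go through the API server rather than the Docker daemon directly, so the equivalent command is kubectl exec:
kubectl exec -it <pod-name> -- /bin/bash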
Have the container pull from git on creation?
Setup CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
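You can also mount an arbitrary host folder into the minikube VM yourself; a sketch (the target path inside the VM is illustrative):
minikube mount /Users/<username>/<source_folder>:/mnt/sources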
Putting it all together, this is what a nginx deployment would look like on kubernetes, mounting a local folder containing the web site being displayed:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: sources
          readOnly: true
      volumes:
      - name: sources
        hostPath:
          path: /Users/<username>/<source_folder>
          type: Directory
Finally we have resolved the issue. We changed our image repository from Docker Hub to AWS ECR in the same region where we are running our Kubernetes cluster. Now it takes very little time to push/pull images.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without a slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be taken as the standard process.
A few benefits:
immutable images: no intervention in the running instance; this ensures the image runs the same in any environment
rollback: if you encounter issues in the new version, roll back to the previous version
dependencies: new versions may have new dependencies
I'm intending to have a CD Pipeline with Jenkins which takes my application, publishes a docker image to my private docker repository. I think I know how to do that.
What I'm unsure about is the Kubernetes part. I want to take that image and deploy it to my private Kubernetes cluster (currently 1 Master & 1 Slave).
Question: Does that Jenkins Slave which has kubectl and docker installed need to be part of the Kubernetes cluster in order to trigger a deployment? How can I trigger that deployment?
Assuming that you have the following deployment in your cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foobar-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foobar-app
  template:
    metadata:
      labels:
        app: foobar-app
    spec:
      containers:
      - name: foobar
        image: foobar-image:v1
        ports:
        - containerPort: 80
You would have to somehow have Jenkins tell your Kubernetes master the following command:
kubectl set image deployment/foobar-deployment foobar=foobar-image:version
where version is the new version you just created with Jenkins. This will automatically trigger a redeploy with this version.
As long as you have access to the Kubernetes master that runs your cluster (via ssh or similar), you can just issue the above command. Don't forget to keep track of version when you run this command.
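Keeping track of versions also gives you a cheap rollback path; for example:
kubectl rollout status deployment/foobar-deployment   # watch the update complete
kubectl rollout undo deployment/foobar-deployment     # revert if the new version misbehaves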
You can trigger the deployment from Jenkins using the kubectl command. For a quick start, copy the Kubernetes cluster admin.conf or $HOME/.kube/config file to the Jenkins slave server. Then you can run kubectl like this:
kubectl --kubeconfig=admin.conf create -f <deployment.yml>
Note:
This gives full admin access to the cluster; in the long term you can create an account with a deployment role and use that account for deployments.
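A minimal sketch of such a limited account (all names and the namespace are illustrative) could look like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-role
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: jenkins-deployer
  namespace: default
roleRef:
  kind: Role
  name: deployment-role
  apiGroup: rbac.authorization.k8s.io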
I'm intending to have a CD Pipeline with Jenkins which takes my application, publishes a docker image to my private docker repository. I think I know how to do that.
That's right, this is the part we are all familiar with.
My advice is that you actually don't need to do much more in CI.
What I'm unsure about is the Kubernetes part. I want to take that image and deploy it to my private Kubernetes cluster (currently 1 Master & 1 Slave).
It's hard to use CI reliably as a source of truth where you can track what's deployed where. What you can do instead is store the app configuration (Deployment + Service YAML files) in a git repository and have a git reconciliation operator that connects that repository to the cluster; you can even have a multi-cluster setup this way.
Question: Does that Jenkins Slave which has kubectl and docker installed need to be part of the Kubernetes cluster in order to trigger a deployment? How can I trigger that deployment?
Some folks do run CI (such as Jenkins) in their Kubernetes clusters, and it is a legitimate approach; however, this means you have more things to run, and you cut yourself off from all the hosted CI options out there.
The approach that we've been practising for a while now is called GitOps, and we blogged about various benefits of this approach:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
See also:
Storing Secure Sealed Secrets using GitOps
Technical Overview
Weave Flux: The VCS Reconciliation Operator
Source Code on Github
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.