I want to set up GitLab CD to Kubernetes, and I read this article.
However, I am wondering: how would my K8s cluster be updated with my latest Docker images?
For example, in my .gitlab-ci.yaml file I will have build, test, and release stages that ultimately update my cloud Docker images. By setting up the deploy stage as instructed in the article:
deploy:
  stage: deploy
  image: redspreadapps/gitlabci
  script:
    - null-script
would Spread then know to "magically" update my K8s cluster (perhaps by re-pulling all images and performing rolling updates), as long as I set up my directory structure of K8s resources as specified by Spread?
I don't have a direct answer, but from looking at the Spread project it seems pretty dead: the last commit was in August last year, it has a bunch of open issues, and it doesn't support any of the newer Kubernetes constructs (e.g. Deployments).
The typical way to update images in Kubernetes nowadays is to run a command like kubectl set image deployment/<deployment-name> <container>=<image>. This in turn performs a rolling update on the deployment, shutting down one pod at a time and replacing it with the new image. See this doc.
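For example, a minimal sketch (the deployment, container, and image names here are placeholders):
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2   # triggers a rolling update
kubectl rollout status deployment/myapp                                  # watch the rollout complete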
Since Spread is from before that, I assume it must do a rolling update on a replication controller with a command like kubectl rolling-update NAME -f FILE, picking up the new image from the configuration file in its project folder (assuming it changed). See this doc.
Related
Consider this partial Kubernetes deployment manifest for a basic REST application called myapp:
spec:
  replicas: 1
  ...
  containers:
    - name: myapp
      image: us.gcr.io/my-org/myapp.3
      ...
      resources:
        limits:
          cpu: 1000m
          memory: 1.5Gi
We store incremental builds in Google Container Registry (GCR), e.g. myapp.1, myapp.2, myapp.3. In the current setup, our CI system (Jenkins) does the following:
Docker builds the new image myapp.4 and uploads it to GCR.
Runs kubectl set image with the new image myapp.4 to update the deployment.
This works well for most deploys, but what about changes to the deployment manifest itself? For example, if we changed resources > cpu to 1500m, we'd now have to manually run kubectl apply. This step needs to be automated, which brings me to my point: instead of using kubectl set image, couldn't the build system itself just run kubectl apply each time? When is it appropriate to use kubectl set image vs. kubectl apply in a CI/CD pipeline?
As long as the new image is provided, wouldn't kubectl apply handle both image updates AND other config changes? If so, what are the pros/cons versus just kubectl set image?
PS: our deploys are simple, mainly relying on a single replica, and 100% uptime is not necessarily required.
With kubectl set image, you only patch the image used by your deployment. To patch the other values (CPU, memory, replicas, ...) you have to use other commands, like kubectl patch, kubectl set resources, or kubectl scale.
The problem is that you lose consistency with your original YAML definition. If your cluster crashes and you want to recreate it, you won't have the exact same deployment, because your YAML file will be outdated (the "patches" won't be reflected in it).
With kubectl apply you overwrite the existing configuration in the control plane and set the exact content of your YAML. It's more consistent, and it's common practice when you work in a GitOps mode.
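To make the drift concrete, a hedged sketch with placeholder names:
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2   # live deployment now runs v2, but the file still says v1
kubectl apply -f deployment.yaml                                         # re-applies the file as-is, silently rolling the image back to v1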
Which one to use? It all depends on what you need and what you want to achieve. I prefer the kubectl apply mode for its consistency and replayability, but it's up to you!
Use whatever suits your case best. Just remember that the CI will be the "source of truth" for what gets applied: if you change something somewhere else and then run the CI job, it will overwrite it again.
Most CI engines have the ability to trigger a certain job only if a given file has changed.
That way, you could patch the image when the manifests have not changed and run a full apply when they have (roughly sketched below).
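In GitLab CI, for example, that could look roughly like this (a hedged sketch; the paths, job names, and deployment name are assumptions):
deploy_manifests:
  stage: deploy
  rules:
    - changes:
        - k8s/**/*            # run a full apply only when the manifests changed
  script:
    - kubectl apply -f k8s/

deploy_image_only:
  stage: deploy
  rules:
    - changes:
        - k8s/**/*
      when: never              # skip this job when the manifests changed
    - when: on_success         # otherwise just patch the image
  script:
    - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA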
Personally, I usually use the image-patch method, as I prefer to keep my Kube files separate from my source code, but as said, whatever fits your case best!
Each time you make changes to your application code or Kubernetes configuration, you have two options to update your cluster: kubectl apply or kubectl set image.
You can use kubectl set to make changes to an object's image, resources (compute resources such as CPU and memory), or selector fields.
You can use kubectl apply to update a resource by applying a new or updated configuration file.
I have a simple docker-compose.yml which builds 4 containers. The containers run on EC2.
The docker-compose.yml changes roughly twice a day on the master branch, and for each change we need to deploy the new containers to production.
This is what I'm doing:
docker-compose down --rmi all
git pull origin master
docker-compose up --build -d
I'm removing the images to avoid conflicts, so that when I start the service I have fresh images.
This process takes me around a minute.
What is the best practice to spin up docker-compose? Any suggestions to improve this?
You can do the set of commands you show natively in Docker, without using git or another source-control tool as part of the deployment process.
Whenever you have a change to your source tree, build a new Docker image and push it to a Docker repository. This can be Docker Hub, or if you're on AWS already, Amazon ECR. Each build should have a unique image tag, such as a source control commit ID or a time stamp. You can set up a continuous-integration tool to do all of this for you automatically.
Once you have this, your docker-compose.yml file needs to be updated with the version number to deploy. If you only have a single image you're deploying, you can straightforwardly use Compose variable substitution to fill it in:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:${TAG:-latest}
If you have multiple images you can set multiple environment variables or produce an updated docker-compose.yml file with the values filled in, but you will need to know all of the image versions together at deployment time.
Now when you go to deploy it you only need to run
TAG=20200317.0412 docker-compose up -d
to set the environment variable and trigger Compose. Compose will see that the image you're trying to run for that container is different from what's already running, pull the updated image, and replace the container for you. You don't need to manually remove the old containers or stop the entire stack.
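For the multiple-image case mentioned above, a minimal sketch of the docker-compose.yml (the service names, second image, and variable names are assumptions):
version: "3.8"
services:
  api:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:${API_TAG:-latest}
  worker:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:${WORKER_TAG:-latest}
which you would deploy with something like API_TAG=20200317.0412 WORKER_TAG=20200317.0412 docker-compose up -d, supplying all of the tags at once.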
If git is part of your workflow now, it's probably because you're mounting application code into your container. You will also need to delete any volumes: entries that overwrite the content in the image. Also make sure you make this change in your CI system (so you're testing the actual image you're deploying to production) and in development (similarly).
This particular task becomes slightly easier with a cluster-management system like Kubernetes (or Amazon EKS), though it brings many other complexities elsewhere. In Kubernetes you need to send an updated Deployment spec to the Kubernetes API server, but you can do this without direct ssh access to the target system and only needing to know the specific version of the one image you're updating, and with multiple replicas you can get a zero-downtime upgrade. Both using a Docker repository and using a unique image tag per build are basically required in this setup: images are the only way code gets into the cluster, and changing the image tag string is what triggers code to be redeployed.
If I have an ubuntu container, ssh into it, and create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the ubuntu image, which does not contain my changes.
My question is: what should I do to persist any changes?
I know it can be done, because some cloud providers do that.
For example:
ssh ubuntu@POD_IP
mkdir new_file
ls
new_file
reboot
After the reboot I have:
ssh ubuntu@POD_IP
ls
ls shows nothing
But I want it to save my current state.
And I want to do it automatically.
If I use docker commit I cannot manage my images, because it creates hundreds of them: I would have to create an image for every change.
If I want to use storage I would have to mount /, but Kubernetes does not allow me to mount / and gives me this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'
You can try to use docker commit, but you will need to ensure that your Kubernetes cluster picks up the latest image that you committed:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
This is going to create a new image out of your container which you can feed to Kubernetes.
Ref - https://docs.docker.com/engine/reference/commandline/commit/
Update 1 -
In case you want to do it automatically, you might need to store the changed state or files on a centralized file system like NFS and then mount it into all running containers whenever required, with the relevant permissions.
K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
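A minimal sketch of that pattern, assuming a PersistentVolumeClaim named data-pvc already exists (all names and paths here are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-with-data
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data        # files written under /data survive pod restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc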
Docker and Kubernetes don't work this way. Never run docker commit. Usually you have very little need for an ssh daemon in a container/pod and you need to do special work to make both the sshd and the main process both run (and extra work to make the sshd actually be secure); your containers will be simpler and safer if you just remove these.
The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change a recipe to build a container, and tell the cluster manager that you want an update, and it will tear down and rebuild everything from scratch. To make changes in an application running in a Kubernetes pod, you typically:
Make and test your code change, locally, with no Docker or Kubernetes involved at all.
docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
(optional but recommended) docker run that image locally and run integration tests.
docker push the image to a registry.
Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it.
Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.
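As a rough sketch of steps 2 through 5 (the registry, image, and file names are placeholders):
TAG=$(git rev-parse --short HEAD)                              # step 2: unique tag per build
docker build -t registry.example.com/myapp:$TAG .
docker run --rm -p 8080:8080 registry.example.com/myapp:$TAG   # step 3: optional local test run, stop with Ctrl-C
docker push registry.example.com/myapp:$TAG                    # step 4: publish the image
sed -i "s|image: .*|image: registry.example.com/myapp:$TAG|" k8s/deployment.yaml   # step 5: update the spec...
kubectl apply -f k8s/deployment.yaml                           # ...and apply it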
Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.
Your question hints at the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use this anywhere other than the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which will be applicable to any clustering system. (Skip all of the later tutorials that cover Docker Swarm, if you've already settled on Kubernetes as an orchestrator.)
I deployed a Node.js app using Docker, and I don't know how to update the deployment after my Node.js app is updated.
Currently, I have to remove the old Docker container and image every time I update the Node.js app.
I expect not to have to remove the old image and container when my Node.js app is updated.
You tagged this "production". The standard way I've done this is like so:
Develop locally without Docker. Make all of your unit tests pass. Build and run the container locally and run integration tests.
Build an "official" version of the container. Tag it with a time stamp, version number, or source control tag; but do not tag it with :latest or a branch name or anything else that would change over time.
docker push the built image to a registry.
On the production system, change your deployment configuration to reference the version tag you just built. In some order, docker run a container (or more) with the new image, and docker stop the container(s) with the old image.
When it all goes wrong, change your deployment configuration back to the previous version and redeploy. (...oops.) If the old versions of the images aren't still on the local system, they can be fetched from the registry.
As needed docker rm old containers and docker rmi old images.
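A bare-Docker sketch of steps 2 through 4 (the registry, names, tag, and port are placeholders):
docker build -t registry.example.com/myapp:20190214.1 .   # step 2: uniquely tagged build
docker push registry.example.com/myapp:20190214.1         # step 3: publish it
docker stop myapp && docker rm myapp                       # step 4: replace the running container...
docker run -d --name myapp -p 3000:3000 registry.example.com/myapp:20190214.1
# step 5 (rollback): rerun the stop/rm/run commands with the previous tag if needed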
Typically much of this can be automated. A continuous integration system can build software, run tests, and push built artifacts to a registry; cluster managers like Kubernetes or Docker Swarm are good at keeping some number of copies of some version of a container running somewhere and managing the version upgrade process for you. (Kubernetes Deployments in particular will start a copy of the new image before starting to shut down old ones; Kubernetes Services provide load balancers to make this work.)
None of this is at all specific to Node. As far as the deployment system is concerned there aren't any .js files anywhere, only Docker images. (You don't copy your source files around separately from the images, or bind-mount a source tree over the image contents, and you definitely don't try to live-patch a running container.) After your unfortunate revert in step 5, you can run exactly the failing configuration in a non-production environment to see what went wrong.
But yes, fundamentally, you need to delete the old container with the old image and start a new container with the new image.
Copy the new version to your container with docker cp, then restart it with docker restart <name>
I have a local docker image that was pushed to private Azure Container Registry. Then in Azure Kubernetes Service I have a cluster where I am using this image - from ACR.
Now I wanted to update the image (I realised that I needed to install zip and unzip). I started a local container, made changes, committed them, and pushed the new image to ACR. Unfortunately, that's not enough: my pods are still using the previous version of the image, without zip.
A bit more detail and what I tried:
Inside the Helm chart I am using the "latest" tag;
Compared the digest SHA of my local "latest" image and what I have in ACR - they are the same;
Started the "latest" container locally (docker run -it --rm -p 8080:80 MY-REPO.azurecr.io/MY-IMAGE:latest) - it has zip installed;
Deleted the existing pods in Kubernetes; the newly created ones are still missing zip;
Deleted the release and recreated it - still nothing.
I am pushing to ACR using docker push MY-REPO.azurecr.io/MY-IMAGE:latest
So my question is - what am I missing? How to properly update this setup?
You should be looking for a setup like this:
Your Docker images have some unique tag, not latest; a date stamp will generally work fine.
Your Helm chart should take the tag as a parameter in the values.yaml file.
You should use a Kubernetes Deployment (not a bare Pod); in its pod spec part specify the image as something like image: MY-REPO.azurecr.io/MY-IMAGE:{{ .Values.tag }}.
When you have a new build, you can run helm upgrade --set tag=20190214; this will push an updated Deployment spec to Kubernetes, and that will cause it to create new Pods with the new image and then destroy the old Pods with the old image (see the sketch below).
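Wired together, that might look roughly like this (a hedged sketch; the release name, chart path, and container name are assumptions):
# values.yaml
tag: "20190214"

# templates/deployment.yaml (excerpt)
    spec:
      containers:
        - name: myapp
          image: MY-REPO.azurecr.io/MY-IMAGE:{{ .Values.tag }}

# deploying a new build
helm upgrade my-release ./chart --set tag=20190215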
The essential problem you're running into is that some textual difference in the YAML file is important to make Kubernetes take some action. If it already has MY-IMAGE:latest, and you try to kubectl apply or equivalent the same pod or deployment spec with exactly the same image string, it will decide that nothing has changed and it doesn't need to do anything. Similarly, when you delete and recreate the pod, the node decides it already has a MY-IMAGE:latest image and doesn't need to go off and pull anything; it just reuses the same (outdated) image it already has.
Some best practices related to the workflow you describe:
Don't use a ...:latest image tag (or any other fixed string); instead, use some unique value like a timestamp, source control commit ID, or release version, where every time you do a deployment you'll have a different tag.
Don't use bare pods; use a higher-level controller instead, most often a Deployment.
Don't use docker commit ever. (If your image crashed in production, how would you explain "oh, I changed some stuff by hand, overwrote the image production is using, and forcibly restarted everything, but I have no record of what I actually did"?) Set up a Dockerfile, check it into source control, and use docker build to make images. (Better still, set up a CI system to build them for you whenever you check in.)