I am modifying the code in my project and deploying it to a Kubernetes pod. The pods are deployed successfully, but the latest changes I have made to my code are not reflected after deployment. Any idea why?
When you change the code of your application that gets built into your container image, you have to do the following:
Rebuild your image with the latest changes (e.g. via docker build ...)
Push the image to your registry (e.g. via docker push ...)
Update your manifests with the new container version (field spec.containers.image), or, if it keeps the same version/tag, make sure you have imagePullPolicy set to Always.
In case your manifests did not change (e.g. because the tag remains the same), you need to trigger a rollout of your Deployment or delete your Pods manually, to ensure the image is pulled again and the latest changes are picked up (see the command sketch after this list).
If you're running a web app, you might also need to clear your browser cache and reload all resources (Shift + F5)
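As a rough sketch of that cycle, assuming a hypothetical image registry.example.com/myapp and a Deployment named myapp, the commands could look like this:

# Rebuild and push the image with the latest code
docker build -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2

# Point the Deployment at the new tag (triggers a rolling update)
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2

# Or, if you keep the same tag and imagePullPolicy: Always is set,
# recreate the Pods so the image is pulled again
kubectl rollout restart deployment/myapp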
I've gone through multiple questions posted on the forum but didn't get clarity regarding my requirement.
I'm building a Docker image after every successful CI build; there will be hardly 1 to 2 lines of changes in the Dockerfile for every successful build.
Docker Build Command:
$(docker_registry)/$(Build.Repository.Name):azul
Docker Push Command:
$(docker_registry)/$(Build.Repository.Name):azul
I want to overwrite the current Docker image with the latest one (from the latest CI build changes) but retain the same tag, azul. Does Docker support this?
Yes, Docker supports it. Every instruction you execute results in a new layer in the image, containing the changes compared to the previous layer. After modifying the Dockerfile, new layers will be created and the unchanged preceding layers will be reused.
If you want to clean-build the whole image with no cached layers, you can use the --no-cache parameter.
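For example, a rebuild that keeps the azul tag but ignores the layer cache might look like this (the registry and image name are placeholders):

docker build --no-cache -t registry.example.com/myapp:azul .
docker push registry.example.com/myapp:azul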
Mechanically this works. The new image will replace the old one with that name. The old image will still be physically present on the build system but if you look at the docker images output it will say <none> for its name; commands like docker system prune can clean these up.
The problems with this approach are on the consumer end. If I docker run registry.example.com/image:azul, Docker will automatically pull the image only if it's not already present. This can result in you running an older version of the image that happens to be on a consumer's system. This is especially a problem in cluster environments like Kubernetes, where you need a change in the text of the image name in a Kubernetes deployment specification to trigger an update.
In a CI system especially, I'd recommend assigning some sort of unique tag to every build. This could be based on the source-control commit ID, or the branch name and build number, or the current date, or something else. You can create a fixed tag like this as a convenience to developers (an image is allowed to have multiple tags), but I'd plan not to use it for actual deployments.
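A minimal sketch of that tagging scheme, assuming the CI system exposes the commit ID in $GIT_COMMIT (registry and image name are placeholders):

# Unique, immutable tag per build, plus azul as a moving convenience tag
docker build -t registry.example.com/myapp:$GIT_COMMIT -t registry.example.com/myapp:azul .
docker push registry.example.com/myapp:$GIT_COMMIT
docker push registry.example.com/myapp:azul

Deployments would then reference the commit-ID tag, so updating the image name in the manifest is what triggers the rollout.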
I learnt the basics of GitHub and Docker and both work well in my environment. On my server, I have project directories, each with a docker-compose.yml to run the necessary containers. These project directories also contain the actual source files for that particular app, which are mapped to virtual locations inside the containers upon startup.
My question now is: how do I create a professional workflow to encapsulate all of this? Should the whole directory (including the docker-compose files) live on github? Thus each time changes are made I push the code to my remote, SSH to the server, pull the latest files and rebuild the container. This rebuilding of course means pulling the required images from dockerhub each time.
Should the whole directory (including the docker-compose files) live on github?
It is best practice to keep all source code, including Dockerfiles, configuration, etc., versioned. Thus you should put all the source code, the Dockerfile, and the docker-compose file in a Git repository. This is very common for projects on GitHub that have a Docker image.
Thus each time changes are made I push the code to my remote, SSH to the server, pull the latest files and rebuild the container
Ideally this process should be encapsulated in a CI workflow using a tool like Jenkins. You basically push the code to the Git repository, which triggers a Jenkins job that compiles the code, builds the image and pushes the image to a Docker registry.
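As a rough illustration, the job itself can be little more than a script like the following, run by the CI server on every push (registry and image names are placeholders):

#!/bin/sh
# Hypothetical CI build script: build and push an image tagged with the commit ID
set -e
docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
docker push registry.example.com/myapp:"$GIT_COMMIT"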
This rebuilding of course means pulling the required images from dockerhub each time.
Docker is smart enough to cache the base images that have been previously pulled. Thus it will only pull the base images once on the first build.
I am new to Kubernetes, so I'm wondering what the best practices are when it comes to putting your app's source code into a container run in Kubernetes or a similar environment.
My app is in PHP, so I have PHP (FPM) and Nginx containers (running on Google Container Engine).
At first, I had a git volume, but there was no way of changing app versions like this, so I switched to emptyDir, with my source code in a zip archive in one of the images that would unzip it into this volume upon start. Now I have the source code separately in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (less resources and configuration), the app's layer is reused in both images so there is no impact on space, and since it is git the "base" is built in, so I can simply adjust my Dockerfile command at the end and switch to a different branch or tag easily.
I wanted to download an archive with the source code directly from the repository by providing credentials as arguments during the build process, but that did not work because my repo host, Bitbucket, creates archives with the last commit ID appended to the directory name, so there was no way of knowing what unpacking the archive would result in. So I got stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous delivery patterns, which means using Travis CI, Bitbucket Pipelines or Jenkins to build the image on code change.
That is, every time your code changes, your automated build will get triggered and build a new Docker image, which will contain your source code. Then you can trigger a Deployment rolling update to update the Pods with the new image.
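One way to trigger that rolling update, assuming a Deployment and container both named myapp (placeholder names), is to point the Deployment at the freshly built tag and wait for the rollout:

kubectl set image deployment/myapp myapp=registry.example.com/myapp:NEW_TAG
kubectl rollout status deployment/myapp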
If you have dynamic content, you likely put this in persistent storage, which will be re-mounted on Pod update.
What we've done traditionally with PHP is an overlay at runtime. Basically the container will have a volume mounted to it with deploy keys to your git repo. This will allow you to perform git pull operations.
The more buttoned-up approach is to have custom, tagged images of your code, extended from fpm or whatever image you're using. That way you would run version 1.3 of YourImage, where YourImage would contain code version 1.3 of your application.
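A minimal sketch of that second approach, assuming the official php:fpm base image and placeholder paths, is a Dockerfile that bakes the code in and is tagged with the application version:

# Dockerfile: bake code version 1.3 of the application into the image
FROM php:fpm
COPY . /var/www/html

Build and tag it with, for example, docker build -t yourimage:1.3 . so the image version tracks the code version.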
Try to leverage continuous integration and continuous deployment. You can use Jenkins as a CI/CD server and create jobs for building the image, pushing the image and deploying the image.
I recommend putting your source code into the Docker image instead of a git repo. You can also extract configuration files from the Docker image. Kubernetes v1.2 introduced the ConfigMap feature, so we can put configuration files in a ConfigMap. When the Pod runs, the configuration files are mounted into it. It's very convenient.
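A small sketch of that idea, with placeholder names (a ConfigMap called app-config holding an app.ini file, mounted into the Pod at /etc/myapp):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.ini: |
    debug = false
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:1.3
      volumeMounts:
        - name: config
          mountPath: /etc/myapp
  volumes:
    - name: config
      configMap:
        name: app-config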
Why does RUN git checkout -b mybranch switch to the branch, but the content remains what was fetched from the master branch?
The whole point of Docker is that it only rebuilds the part of an image that has changed. It has no way of knowing that the content in the repo has changed; all it knows is that it already has a cached image "slice" for this step in the Dockerfile. So it uses the image it previously built.
As Mark notes, you can force a regeneration using --no-cache. Another option is to have a source-code container that is always built with --no-cache, to which you add volumes, and then use that code via those volumes in a different container (look at volumes_from in docker-compose). Then you always get the changes in the repo, as it is built every time from scratch. You may want to look into docker-compose for this sort of work (a sketch follows below).
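A rough docker-compose sketch of that pattern, using the Compose v2 volumes_from option and placeholder service and image names:

version: "2"
services:
  code:
    build: ./code        # image whose Dockerfile checks out the source; rebuild with --no-cache
    volumes:
      - /app             # anonymous volume holding the checked-out code
  web:
    image: php:fpm
    volumes_from:
      - code             # the web container sees the freshly checked-out /app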
When you run docker build look carefully at the output. When it has a cached version of a step it will say as much. When it has to build it, it will also note that.
Have you tried running your build with the "--no-cache" option?
I'd also recommend reading the following:
https://ryanfb.github.io/etc/2015/07/29/git_strategies_for_docker.html
Finally, do you really need to run git within your build? Personally, I use Jenkins, which runs my build within a workspace that is checked out from git separately.
I have an automated build on Docker Hub for an image based on Ubuntu with some custom configurations, to reuse as a base image in other Dockerfiles for particular projects. This works okay.
I made a change to it and committed to GitHub, which then started and ran the automated build on Docker Hub.
From one of these other projects, I'm calling FROM myuser/myimage at the beginning of the Dockerfile, but it's not getting the latest image with the changes; it keeps using the cached old one.
Shouldn't this happen automatically?
You need to docker pull the latest version. Docker looks for the FROM image locally; it doesn't notice if that tag has been updated in the registry it came from. I have a script that runs docker pull before building images.
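A minimal sketch of such a script, with placeholder image names:

#!/bin/sh
# Refresh the base image so FROM picks up the latest tag, then build
set -e
docker pull myuser/myimage:latest
docker build -t myuser/myproject:latest .

Alternatively, docker build --pull forces the base image to be re-pulled as part of the build itself.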