Docker RUN git checkout doesn't fetch changes in the branch?

Why does RUN git checkout -b mybranch switch to the branch while the content remains what was fetched from the master branch?

The whole point of Docker's build cache is that it only rebuilds the parts of an image that have changed. It has no way of knowing that the content in the repo has changed; all it knows is that it already has a cached image "slice" (layer) for this step in the Dockerfile, so it uses the image it previously built.
As Mark notes, you can force a regeneration using --no-cache. Another option is to have a source-code container that is always built with --no-cache, to which you add volumes, and then consume that code via those volumes in a different container (look at volumes_from in docker-compose). Then you always get the changes in the repo, since the source container is rebuilt from scratch every time. You may want to look into docker-compose for this sort of work.
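A rough sketch of that pattern, assuming docker-compose's v1 file format and invented service names:

    # docker-compose.yml (v1 format; service and image names are placeholders)
    source:
      build: ./source        # this Dockerfile contains the git clone step
    app:
      image: myapp
      volumes_from:
        - source             # the app sees the freshly cloned code

Building the source service with docker-compose build --no-cache source ensures its clone step is never served from cache.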
When you run docker build, look carefully at the output. When it uses a cached version of a step it will say as much, and when it has to build a step it will note that too.

Have you tried running your build with the "--no-cache" option?
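For example (myimage is a placeholder name):

    # ignore every cached layer, so the git steps run again
    docker build --no-cache -t myimage .

    # a common lighter-weight trick: bust the cache only from one step
    # onward by declaring ARG CACHEBUST=1 in the Dockerfile just before
    # the git step (CACHEBUST is an invented name), then:
    docker build --build-arg CACHEBUST=$(date +%s) -t myimage .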
I'd also recommend reading the following:
https://ryanfb.github.io/etc/2015/07/29/git_strategies_for_docker.html
Finally, do you really need to run git within your build? Personally, I use Jenkins, which runs my build inside a workspace that is checked out from git separately.
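That approach keeps git out of the Dockerfile entirely. A minimal sketch, with invented names:

    # CI checks out the code first, outside the image build
    git clone https://example.com/myapp.git
    cd myapp
    docker build -t myapp .

    # Dockerfile: copy the already checked-out workspace instead of cloning
    FROM alpine
    COPY . /app

Because COPY hashes the file contents into the cache key, changed source automatically invalidates the cache, which also sidesteps the original problem.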

Related

Rebuild docker image by reusing the same tag?

I've gone through multiple questions posted on the forum but didn't get clarity regarding my requirement.
I'm building a Docker image after every successful CI build; there are rarely more than one or two lines changed in the Dockerfile for each build.
Docker build command:
    $(docker_registry)/$(Build.Repository.Name):azul
Docker push command:
    $(docker_registry)/$(Build.Repository.Name):azul
I want to overwrite the current Docker image with the latest one (from the latest CI build changes) but retain the same tag, azul. Does Docker support this?
Yes, Docker supports it. Every instruction in the Dockerfile results in a new layer in the image, containing the changes relative to the previous layer. After modifying the Dockerfile, new layers will be created and the unchanged preceding layers will be reused.
If you want to build the whole image cleanly, with no cached layers, you can use the --no-cache parameter.
Mechanically this works. The new image will replace the old one with that name. The old image will still be physically present on the build system but if you look at the docker images output it will say <none> for its name; commands like docker system prune can clean these up.
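Concretely, reusing the question's tag (the registry name is a placeholder):

    docker build -t registry.example.com/image:azul .
    docker push registry.example.com/image:azul
    # the previous build is now a dangling image, listed as <none>
    docker images
    docker system prune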
The problems with this approach are on the consumer end. If I docker run registry.example.com/image:azul, Docker will automatically pull the image only if it's not already present. This can result in you running an older version of the image that happens to be on a consumer's system. This is especially a problem in cluster environments like Kubernetes, where you need a change in the text of the image name in a Kubernetes deployment specification to trigger an update.
In a CI system especially, I'd recommend assigning some sort of unique tag to every build. This could be based on the source-control commit ID, the branch name and build number, the current date, or something else. You can keep a fixed tag like this as a convenience for developers (an image is allowed to have multiple tags), but I'd plan not to use it for actual deployments.
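A sketch of such a CI step (registry and image names are placeholders):

    IMAGE=registry.example.com/image
    COMMIT=$(git rev-parse --short HEAD)
    docker build -t $IMAGE:$COMMIT -t $IMAGE:azul .
    docker push $IMAGE:$COMMIT   # unique tag, use this in deployments
    docker push $IMAGE:azul      # floating convenience tag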

Is "differentially" updating a local Docker image with the latest changes possible?

This is my first time using Docker, so excuse my potentially improper terminology. Our team uses a Docker image that contains the entire development environment, minus the git repo, which needs to be cloned manually within a persistent container based on this image. The Docker image hosted on our repository is frequently updated, e.g. to include new third-party dependencies and other files, making my local environment, which includes a directory with the git repo where I do my work, out of date.
Is there a way to update my local environment with the latest image changes without having to pull the entire image with docker pull image:latest and start from scratch? Assuming there are no conflicts, I would like to preserve my local changes (git repo clone, local filesystem modifications etc.); so I'm looking for something like git pull for Docker, if that makes sense.
I must have missed something, but a search on this issue didn't yield any viable solutions.

How to stop TeamCity from rebuilding docker dependencies every time?

I have a TeamCity build project that parameterizes a docker-compose.yml template with the build versions of a dozen Docker containers, so in order to get the build_counter from each container, I have them set as snapshot dependencies in the docker-compose build job. Each container's Dockerfile and other files are in their own BitBucket repo, and they have triggers for the appropriate files. In the snapshot dependencies in the docker-compose build I have them set to "Do not run new build if there is a suitable one" but it still tries to run all of the dependent builds even though there aren't any changes in their respective repos.
This makes what should be a very simple and quick build into a very long one. Oftentimes one of the dependent builds will fail with "could not collect changes: connection refused", and I suspect it has to do with TC trying to hit all of these different repos at once.
Is there something I can do to not trigger a build of every dependency every time the docker-compose build is run?
Edit:
Here's an example of what our docker-compose.yml.j2 looks like: http://termbin.com/b2xy
Obviously, I've sanitized it for sharing, and our real docker-compose template has about a dozen services listed.
Here is an example Dockerfile for one of the services: http://termbin.com/upins
Rather than changing the source code of your build (the parameterized docker-compose.yml) and brute-forcing your build every time, you could consider building the containers independently, tagging them with a version increment and labels, and storing the images in a local registry after each build. Then use docker-compose to suit your runtime needs: docker-compose can combine multiple YAML files, so if you need other images for a particular build, just pull in the ones you need, and for production use another YAML file that composes the system to run. Add LABEL instructions to your Dockerfile; see http://label-schema.org//rc1/ for a set of labels that may suit your needs.
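Sketches of both ideas (file names and label values are illustrative):

    # compose merges multiple files; later files override earlier ones
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

    # in a Dockerfile, labels loosely following label-schema
    LABEL org.label-schema.version="1.3.0" \
          org.label-schema.vcs-ref="abc1234"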
I know this is an old question, but I came across this issue and found that you can't do what sounds reasonable, i.e. reuse recent green builds without rebuilding. This is partly down to what JetBrains designed snapshot dependencies to do.
The basic idea is that snapshot dependencies are for synchronized revisions of code: if you build Compose at a certain point in time, it needs to use not just its own source code from that point but also the code of all its dependencies from that same point, regardless of whether anything significant has changed.
In my example, there were always changes because the same repo was used for lots of projects and had unrelated changes that would not trigger a build but would make the project appear behind and cause a build.
If your dependencies have NOT changed and show no pending changes, they shouldn't build; in that case, tick "Do not run new build if there is a suitable one". "Enforce Revisions Synchronization" is slightly confusing: if ticked, it will find older builds that match the first build after your build was triggered; if unticked, it can use newer builds.

How do you put your source code into Kubernetes?

I am new to Kubernetes, so I'm wondering what the best practices are when it comes to putting your app's source code into a container run in Kubernetes or a similar environment.
My app is PHP, so I have PHP (FPM) and Nginx containers (running on Google Container Engine).
At first I had a git volume, but there was no way of changing app versions that way, so I switched to an emptyDir volume, keeping my source code in a zip archive in one of the images, which would unzip it into the volume on start. Now I instead keep the source code in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (fewer resources and less configuration), the app's layer is reused in both images so there is no impact on space, and since it is git, the "base" is built in, so I can simply adjust the command at the end of my Dockerfile and easily switch to a different branch or tag.
I wanted to download an archive with the source code directly from the repository, providing credentials as arguments during the build process, but that did not work because my repository host, Bitbucket, creates archives with the last commit ID appended to the directory name, so there was no way of knowing what unpacking the archive would produce. So I got stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous-delivery patterns, which means using Travis CI, Bitbucket Pipelines, or Jenkins to build the image on every code change.
That is, every time your code changes, your automated build gets triggered and builds a new Docker image containing your source code. You can then trigger a rolling update of your Deployment to replace the Pods with the new image.
If you have dynamic content, you would likely put it on persistent storage, which is re-mounted on Pod update.
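A minimal sketch of such an update, assuming a Deployment and container both named myapp (all names are placeholders):

    docker build -t registry.example.com/myapp:v1.3 .
    docker push registry.example.com/myapp:v1.3
    # point the Deployment at the new image; Kubernetes rolls the Pods over
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:v1.3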
What we've done traditionally with PHP is an overlay at runtime. Basically, the container has a volume mounted to it containing deploy keys for your git repo, which allows you to perform git pull operations.
The more buttoned-up approach is to have custom, tagged images of your code extended from fpm or whatever image you're using. That way you would run version 1.3 of YourImage, where YourImage contains code version 1.3 of your application.
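A minimal sketch of such an image, assuming the official php:fpm base and its conventional docroot (both are assumptions):

    FROM php:fpm
    # bake code version 1.3 into the image
    COPY . /var/www/html
    # build and tag to match the code version:
    #   docker build -t yourimage:1.3 .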
Try to leverage continuous integration and continuous deployment. You can use Jenkins as your CI/CD server and create jobs for building, pushing, and deploying the image.
I recommend putting your source code into the Docker image instead of a git repo, and extracting configuration files from the image. Kubernetes v1.2 introduced the ConfigMap feature, so you can put configuration files in a ConfigMap and have them mounted into the Pod when it runs. It's very convenient.
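An illustrative ConfigMap plus the Pod-spec fragment that mounts it (names and paths are invented):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      app.ini: |
        debug = false

    # fragment of the Pod spec that mounts the files under /etc/app:
    #   volumes:
    #   - name: config
    #     configMap:
    #       name: app-config
    #   ...with a matching volumeMount in the container:
    #   - name: config
    #     mountPath: /etc/app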

How to install Dockerfile from GitLab to allow pull and commit

Is there a way to clone a Dockerfile from GitLab with the docker command?
I want to use the feature that allows pull and commit.
I am not sure if I have understood correctly, but do pull and commit update the Dockerfile in the git repository? Or only locally, in the next images?
If not, is there a way to get all the changes you made since the previous image built by the Dockerfile into another Dockerfile?
I know you can clone with Git directly, but with npm, for example, you can also use Git URLs like git+https:// or git+ssh://.
The pull/commit commands affect the related image and operate directly against your configured registry, which is the official Docker Hub Registry unless configured otherwise. Perhaps some confusion may arise from the registry's support for Automated Builds, where the registry is directly bound to a repository and rebuilds the image every time the targeted repository branch changes.
If you wish to reuse someone's Docker image, the best approach is to simply reference it via the FROM instruction in your Dockerfile and effectively fork the image. While it's certainly possible to clone the original source repository and continue editing the Dockerfile contained therein, you usually do not want to go down that path.
So if there exists such a foo/bar image you want to continue building upon, the best, most direct approach to do so is to create your own Dockerfile, inherit the image by setting it as a base for your succeeding instructions via FROM foo/bar and possibly pushing your baz/bar image back into the registry if you want it to be publicly available for others to re-base upon.
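A minimal sketch of that fork-by-inheritance pattern, reusing the foo/bar and baz/bar names from above (the COPY line is an invented example change):

    FROM foo/bar
    # layer your own changes on top of the inherited image
    COPY my-changes/ /opt/app/
    # then publish your variant:
    #   docker build -t baz/bar .
    #   docker push baz/bar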
