In a GitHub (Actions) CI world of building container images, is there any simple way of doing the following: when a base image is updated, rebuild my own image so it gets the latest security patches? Or even better: is there an existing action with the right parameters?
I have found roughly ten issues on this subject across various repositories on GitHub, but no solution that works with GitHub-only tooling. Some mention Azure, some suggest having the image builds managed by Docker, but that is not possible in a GitHub-only world.
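To make it concrete, the kind of thing I am imagining is a scheduled job that compares the base image digest and rebuilds when it changes; something roughly like this shell sketch (the image names and the digest file location are placeholders, not a known action):

    # Rough sketch of what I'd like a scheduled CI job to do.
    # Image names, registry and digest file location are placeholders.
    set -euo pipefail

    BASE_IMAGE="debian:bookworm-slim"     # whatever the Dockerfile's FROM line uses
    DIGEST_FILE=".last-base-digest"       # persisted between runs (cache, artifact, ...)

    docker pull "$BASE_IMAGE" > /dev/null
    current=$(docker inspect --format '{{index .RepoDigests 0}}' "$BASE_IMAGE")
    previous=$(cat "$DIGEST_FILE" 2>/dev/null || true)

    if [ "$current" != "$previous" ]; then
        # base image changed: rebuild and push my own image, remember the new digest
        docker build -t myorg/myapp:latest .
        docker push myorg/myapp:latest
        echo "$current" > "$DIGEST_FILE"
    fi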
I have a git repo for a project which contains both the Dockerfile (in the root directory) and the helm chart (under chart/). The docker image and the helm chart are versioned separately, with the following requirements:
if the application code is changed, a new (semver) version of the docker image must be built and pushed, and a new version of the helm chart (version in Chart.yaml), referencing the new image tag (appVersion in Chart.yaml), must be packaged and pushed
if the helm chart itself is changed (e.g. changes in templates), a new version of the chart (version in Chart.yaml) must also be packaged and pushed.
Changes might happen directly in the master branch, or in a branch that is subsequently merged into master, so I imagine the above actions have to happen either upon push to master or upon merges into master (in both cases there might be a testing phase prior to the build, but I think this doesn't change the problem).
First of all, is the above setup the common practice?
Second, I'm not sure where to save the versions. I've read that saving versions in the repo (e.g. with a VERSIONS file or something) is bad practice, so I was thinking of using git tags, but how? We'd need two series of tags, one for the image and the other for the chart; is that how people do it? Or how would one go about this? Also, where should the version bump happen? Is it normal to have the CI pipeline itself do the bump and commit? Or should it come from the committer?
Thinking out loud, I would like to avoid having the CI do commits and pushes, because I fear that people would forget to pull and would make local commits first, which would in turn pollute the git history with merges.
So one idea, if going with the automatic version bumping, could be to store the versions in a database or external storage and have the CI pipeline retrieve them, do the necessary bumping and then store them again. In this case I suppose I would have to establish a rule for bumping, for example always bump the patch version. Is this common practice, at least in some scenarios?
If instead we let the developers choose when and what to bump, how do they communicate that? (Even in this case there is still an automated part: when the docker image version is bumped, the helm chart version should be bumped automatically.) I've read about conventional commits, but they still don't store the actual versions. Where do I store the image and chart versions? Should I make the developers use tags? (But then, two separate series? How?)
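To make the tag idea concrete, what I am imagining is two prefixed tag series living in the same repo; the prefixes below are just my guess at a convention:

    # two independent tag series in the same repo (prefix names are just a guess)
    git tag image-v1.4.0      # bumped when the application code changes
    git tag chart-v0.7.2      # bumped when the chart (or the image) changes
    git push origin image-v1.4.0 chart-v0.7.2

    # CI could then read the latest of each series independently
    git describe --tags --abbrev=0 --match 'image-v*'
    git describe --tags --abbrev=0 --match 'chart-v*'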
Sorry if this all sounds confused, but that's because I am too.
This is my first try at setting up CI for a repo and I'm struggling to find examples, so I'm thinking that I'm either using a wrong approach or missing something obvious.
Thanks for any help!
I want to display non-code differences between current build and the latest known successful build on Jenkins.
By non-code differences I mean things like:
Environment variables, including Jenkins parameters (set), maybe with some filtering
Version of system tool packages (rpm -qa | sort)
Versions of python packages installed (pip freeze)
While I know how to save and archive these files as part of the build, the only part that is not clear is how to generate the diff/change report showing the differences between the current build and the last successful build.
Please note that I am looking for a pipeline compatible solution and ideally I would prefer to make this report easily accessible on Jenkins UI, like we currently have with SCM changelogs.
Or, to rephrase this: how do I create a build manifest and diff it against the last known successful one? If anyone knows a standard manifest format that can easily be used to combine all this information, that would be great.
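For reference, the capture part I already have looks roughly like this; the open question is really only the diff/report step at the end (the paths and the way the previous manifest is fetched are placeholders):

    # capturing the manifest (this part I already have)
    mkdir -p manifest
    env | sort        > manifest/environment.txt   # or "set", filtered as needed
    rpm -qa | sort    > manifest/rpms.txt
    pip freeze        > manifest/python.txt

    # what I am after: assuming the last successful build's manifest has been
    # copied into last-successful/ (e.g. from archived artifacts), something like
    diff -ru last-successful/ manifest/ > manifest-diff.txt || true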
you always ask the most baller questions, nice work. :)
we always try to push as many things into code as possible because of the same sort of lack of traceability you're describing with non-code configuration. we start with using Jenkinsfiles, so we capture a lot of the build configuration there (in a way that still shows changes in source control). for system tool packages, we get that into the app by using docker and by inheriting from a specific tag of the docker base image. so even if we want to change system packages or even the python version, for example, that would manifest as an update of the FROM line in the app's Dockerfile. Even environment variables can be micromanaged by docker, to address your other example. There's more detail about how we try to sidestep your question at https://jenkins.io/blog/2017/07/13/speaker-blog-rosetta-stone/.
there will always be things that are hard to capture as code, and builds will therefore still fail and be hard to debug occasionally, so i hope someone pipes up with a clean solution to your question.
I have multiple environments. They are debug, dev, and prod. I'd like to refer to an image by the latest dev (latest) or dev (version 1.1) or prod (latest). How would I go about tagging builds and pushes?
My first thought was to create separate repositories for each environment: debug, dev, and prod. But I am starting to wonder if I can do this with just one repository. If it's possible to do with one repository, what would the syntax be when building and pushing?
This is what has worked best for me and my team and I recommend it:
I recommend having a single repo per project for all environments; it is easier to manage, especially if your project is composed of multiple microservices. Managing one repo per environment per project is a pain.
For example, I have a users api.
The docker repo is users. And this repo is used by alpha, dev and beta.
We create an env variable called $DOCKER_TAG in our CI/CD service and set it at the time the build is created, like this:
DOCKER_TAG: $(date +%Y%m%d).$BUILD_NUMBER => This is in bash.
where $BUILD_NUMBER has already been set for the build being run when the CI/CD run is triggered. For example, when we merge a PR, a build is triggered as build no. 1, so $BUILD_NUMBER is 1.
The resulting tag looks like this when used: 20170612.1
so our docker image is: users:20170612.1
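Put together, the build step is along these lines (the registry name here is just a placeholder):

    # sketch of the CI build step; registry.example.com is a placeholder,
    # $BUILD_NUMBER is provided by the CI/CD service
    DOCKER_TAG="$(date +%Y%m%d).${BUILD_NUMBER}"

    docker build -t "users:${DOCKER_TAG}" .
    docker tag  "users:${DOCKER_TAG}" "registry.example.com/users:${DOCKER_TAG}"
    docker push "registry.example.com/users:${DOCKER_TAG}"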
Why this format?
It allows us to deploy the same tag to different environments with a run task.
It helps us keep track of when an image was created and which build it belongs to.
Through the build number, we can find the commit information and map everything together as needed, which is nice for troubleshooting.
It allows us to use the same docker repo per project.
It is nice to be able to tell from the tag itself when the image was created.
So, when we merge, we create a single build. Then that build is deployed as needed to the different environments. We don't create an independent build per environment. And we keep track of what's deployed where.
If there's a bug in an environment with a certain tag, we pull that tag, build, and troubleshoot to reproduce the issue under the same conditions. If we find an issue, we have the build number in the tag (20170612.1), so we know build no. 1 has the issue. We check our CI/CD service, which tells us which commit that build came from. We check out that commit hash from git, debug and fix the issue, and then deploy it as a hotfix, for example.
If you don't have a CI/CD yet and you are doing this manually, just set the tag in that format by hand (pretty much type the full string as is), and instead of a build number use a short git commit hash (if you are using git):
20170612.ed73d4f
That way you know which commit the image was built from, so you can troubleshoot issues with a specific image and map back to the code to create fixes as needed.
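In shell that manual variant boils down to something like this (the image name is a placeholder):

    # manual tagging without a CI/CD, using the short git hash instead of a build number
    TAG="$(date +%Y%m%d).$(git rev-parse --short HEAD)"   # e.g. 20170612.ed73d4f
    docker build -t "users:${TAG}" .
    docker push "users:${TAG}"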
You can also define any other suffix for your tag that maps to the code version, so you can easily troubleshoot (e.g. map to git tags if you are using those).
Try it, adjust it as needed and do what works best for you and your team. There are many ways to go about tagging. We tried many and this one is our favorite so far.
Hope this is of help.
There are two schools of thought: stable tagging, where you update a single tag, and unique tagging. Each has its pros and cons. Stable tags can create instability when deploying to self-healing clusters, as a new node might pull a new version while the rest of the cluster is running a slightly older one. Unique tagging is a best practice for deployment. However, to manage base image updates for OS and framework patching, you'll want to build upon stable tags in your Dockerfile and enable automatic container builds. For a more detailed walkthrough, with visuals, here's a post:
https://blogs.msdn.microsoft.com/stevelasker/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/
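For illustration, a build can push both kinds of tags: a unique one that you deploy, and a stable one that gets updated in place so downstream Dockerfiles can build FROM it (all names below are placeholders):

    # unique tag: what you actually deploy
    docker build -t myapp:20180301.5 .
    docker push myapp:20180301.5

    # stable tag: moved to the new build, used as a base for automated rebuilds
    docker tag myapp:20180301.5 myapp:1
    docker push myapp:1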
I think "lastest" is for the last productive image. It is what I expect in the docker hub, although there are no images in development.
On the other hand you can use tags like for example 0.0.1-dev. When this image is finished you can do the tag again and push, and then the repository will detect that the layers are already in the repository.
Now when you are about a candidate version to come out to production, you have to only have the semantic version despite not being in an environment pruduccion. That's what I would do.
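Concretely, the retag-and-push step mentioned above is just something like this (names are placeholders):

    docker tag myapp:0.0.1-dev myapp:0.0.1   # same layers, new name
    docker push myapp:0.0.1                  # already-uploaded layers are not re-sent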
I'm developing a private webapp in JSF which is available over the internet, and I have now reached a stage where I want to introduce CI (which I'm fairly new to) into the whole process. My current project setup looks like this:
myApp-persistence: maven project that handles DB access (DAOs and hibernate stuff)
myApp-core: maven project, that includes all the Java code (Beans and Utils). It has a dependency on myApp-persistence.jar
myApp-a: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-b: maven project just with frontend code (xhtml, css, JS). Has a dependency on myApp-core.jar
myApp-a and myApp-b are independent from each other, they are just different instances of the core for two different platforms and only display certain components differently or call different bean-methods.
Currently I'm deploying manually, i.e. I use the Eclipse built-in "export as WAR" function and then manually upload the WAR to the deployments dir of my WildFly server on prod. I'm using BitBucket for version control and just recently discovered pipelines in BitBucket, so I implemented one for each repository (every project is a separate repo). Now myApp-persistence builds perfectly fine, because all its dependencies are available from the public Maven repo, but myApp-core (and hence myApp-a and myApp-b, too) of course fails, because myApp-persistence isn't published on the central Maven repo.
Is it possible to tell BitBucket somehow to use the myApp-persistence.jar in the corresponding repo on BitBucket?
If yes, how? And can I also tell BitBucket to deploy directly to prod in case the build including tests ran fine?
If no, what would be a best practice for doing that? I was thinking of using a second dev server (already available, so no big deal) as a CI server, but then I would still need some advice or recommendations on which tools (Jenkins, Artifactory, etc.) to use.
One important note, maybe: I'm the only person working on this project, so this might seem like overkill, but for me the process of setting this up is quite valuable experience. That said, I'm not necessarily looking for the quickest solution but for the most professional and convenient one.
From my point of view, you can find the solution in this post: https://christiangalsterer.wordpress.com/2015/04/23/continuous-integration-for-pull-requests-with-jenkins-and-stash/. It guides you step by step through setting everything up. The post is from 2015, but the process and the idea are still the same. Hope it helps.
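Whatever tooling ends up driving the CI, the underlying idea for the dependency problem is usually the same: get the sibling jar into the local Maven repository before building the modules that need it. A rough sketch, with the repo URL as a placeholder:

    # make the sibling artifact available locally before building the dependents
    git clone git@bitbucket.org:myteam/myApp-persistence.git
    (cd myApp-persistence && mvn -B install)   # installs the jar into ~/.m2

    # now myApp-core (and myApp-a / myApp-b) can resolve it from the local repository
    (cd myApp-core && mvn -B verify)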
I'm working on creating some docker images to be used for testing on dev machines. I plan to build one for our main app as well as one for each of our external dependencies (postgres, elasticsearch, etc). For the main app, I'm struggling with the decision of writing a Dockerfile or compiling an image to be hosted.
On one hand, a Dockerfile is easy to share and modify over time. On the other hand, I expect that advanced configuration (customizing application property files) will be much easier to do in vim before simply committing a new image.
I understand that I can get to the same result either way, but I'm looking for PROS, CONS, and gotchas with either direction.
As a side note, I plan on wrapping this all together using Fig. My initial impression of this tool has been very positive.
Thanks!
Using a Dockerfile:
You have an 'audit log' that describes how the image is built. For me this is fundamental if it is going to be used in a production pipeline where more people are working and maintainability should be a priority.
You can automate the building process of your image, which is an easy way of keeping the container up to date with system updates, or of making it part of a continuous delivery pipeline.
It is a cleaner way of creating the layers of your image (each Dockerfile command is a different layer).
Changing a container and committing the changes is great for testing purposes and for quickly prototyping a concept. But if you plan to use the resulting image for some time, I would definitely use Dockerfiles.
Apart from this, if you have to modify a file and doing it with shell tools (awk, sed...) turns out to be very tedious, you can add any file you wish from the outside during the build process.
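For reference, the two routes being compared boil down to roughly this (image and container names are placeholders):

    # Dockerfile route: reproducible, every step is recorded in the file
    docker build -t myapp-test:1 .

    # commit route: quick to iterate on, but the steps only live in your shell history
    docker run -it --name scratchpad some-base-image /bin/bash   # tweak config files in vim, etc.
    docker commit scratchpad myapp-test:1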
I totally agree with Javier, but you need to understand that an image created from a Dockerfile can differ from an image built from the same version of the Dockerfile one day later.
Maybe your build process automatically retrieves the latest updates of an app, or of the OS, etc.
In that case, if you need to reproduce a crash or whatever, you can't rely on the Dockerfile alone.