Trigger automatic build on Docker Hub when a package is updated in the official repository

If I have an automated build set up on Docker Hub based, for instance, on an ubuntu:yy_mm image, and in its Dockerfile I install some package foo-bar-ng through apt-get, how can I set up the image to be automatically rebuilt when the package is updated in the Ubuntu repository?

Right now the only approach I see is to develop and spin up a separate private service for myself which will monitor the package version in the official Ubuntu repository and trigger the rebuild via the "Build triggers" Docker Hub feature that is available in the automated build settings:
Trigger your Automated Build by sending a POST to a specific endpoint.
For instance, here is a question about how new packages can be monitored in a specific Ubuntu repo.
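A minimal sketch of such a watcher, assuming the package name from the question and a placeholder trigger URL (copy the real one from your repository's Build triggers page), to be run periodically from cron:

#!/usr/bin/env bash
# Poll the Ubuntu archive for a new version of the package; if it changed,
# POST to the Docker Hub build trigger endpoint.
set -euo pipefail

PACKAGE="foo-bar-ng"                                # package from the question
TRIGGER_URL="https://registry.hub.docker.com/..."   # placeholder: copy from Build triggers
STATE_FILE="/var/tmp/${PACKAGE}.last-version"

# Candidate version as apt sees it (requires Ubuntu apt sources on this host).
current=$(apt-cache policy "$PACKAGE" | awk '/Candidate:/ {print $2}')
previous=$(cat "$STATE_FILE" 2>/dev/null || echo "none")

if [ "$current" != "$previous" ]; then
    echo "$PACKAGE: $previous -> $current, triggering rebuild"
    curl -fsS -X POST "$TRIGGER_URL"
    echo "$current" > "$STATE_FILE"
fi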
(Made this an answer - let the community vote on it and, especially, provide a better answer if there is one)

Related

Build a production custom Docker image of Grafana from its GitHub repo

I need to make some changes to the Grafana code and then compile it.
I have downloaded the GitHub repo, made the changes and ran
docker build -t custom-grafana -f Dockerfile .
as shown on many sites over the internet.
The problem is that compiling this way produces a build of the v7.5.0-dev version of Grafana, and I need to use the latest version...
I can't find a way to compile custom Grafana code using the latest version of the Grafana code.
I need help...
Thank you!
If you have cloned the Grafana repository from GitHub, then you must check out the branch/tag that you want to compile.
The latest version at the moment is 7.5.6; you can just check it out before building the Docker container.
git checkout v7.5.6
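Putting it together with the build command from the question, the whole flow would look roughly like this:

git clone https://github.com/grafana/grafana.git
cd grafana
git checkout v7.5.6          # the latest stable tag at the time of writing
# re-apply your changes here, then build:
docker build -t custom-grafana -f Dockerfile .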

Do I need to share the Docker image if I can just share the Dockerfile along with the source code?

I am just starting to learn about Docker. Is a Docker repository (like Docker Hub) useful? I see the Docker image as a package of source code and environment configuration (Dockerfile) for deploying my application. Well, if it's just a package, why can't I just share my source code with the Dockerfile (via GitHub, for example)? Then the user just downloads it all and runs docker build and docker run, and there is no need to push the Docker image to a repository.
There are two good reasons to prefer pushing an image somewhere:
As a downstream user, you can just docker run an image from a repository, without additional steps of checking it out or building it.
If you're using a compiled language (C, Java, Go, Rust, Haskell, ...) then the image will just contain the compiled artifacts and not the source code.
Think of this like any other software: for most open-source things you can download its source from the Internet and compile it yourself, or you can apt-get install or brew install a built package using a package manager.
By the same analogy, many open-source things are distributed primarily as source code, and people who aren't the primary developer package and redistribute binaries. In this context, that's the same as adding a Dockerfile to the root of your application's GitHub repository, but not publishing an image yourself. If you don't want to set up a Docker Hub account or CI automation to push built images, but still want to have your source code and instructions to build the image be public, that's a reasonable decision.
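To make the trade-off concrete, here is a sketch of the two consumer paths (the image and repository names are hypothetical):

# Path A: an image has been pushed to a registry - one step for the consumer.
docker run --rm example/app:1.0

# Path B: only the Dockerfile and source are shared - clone and build first.
git clone https://github.com/example/app.git
cd app
docker build -t app:local .
docker run --rm app:local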
That is how it works. You need to put the configuration files in your code, i.e., the Dockerfile and docker-compose.yml.
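Assuming both files are committed at the root of the repository (the repo URL is hypothetical), a consumer would then run:

git clone https://github.com/example/app.git
cd app
docker-compose up --build    # builds the image from the Dockerfile, then starts the services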

Docker Hub: What is the best approach to handle versioning of third party tools in automated builds from GitHub?

I have an app on GitHub which uses a third party open source tool as a dependency. I want to containerize my app, so I've added a Dockerfile to my repo that triggers automated builds on Docker Hub. That Docker image compiles the third party tool and builds my app.
On Docker Hub I've configured the rules to handle the versioning of my app based on new commits (source branch, i.e. docker-repo/myapp:latest) and releases (source tags, i.e. docker-repo/myapp:v1.0). However, I've statically pointed the Dockerfile to the latest version of the third party tool, so my app is always built with the latest version of its dependency.
Now, here is my question: what is the best approach to handle the versions of that third party tool with Docker Hub? I would like to handle the versioning of my app but also the versioning of its dependency. Should I create as many Dockerfiles as versions of the dependency I want to build?
I don't think there's a single best practice for this. Some images publish a tag for every version of the upstream tools; e.g. the official Python image has a tag for every version of Alpine and Debian it builds on. So it's not a matter of whether you should; it depends on the clients of your image. In all likelihood you can simply provide a latest image mapped to the latest version of your upstream dependency.
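If you do want to publish images for several dependency versions without maintaining one Dockerfile per version, a build argument is a common alternative. A minimal sketch, where third-party-tool and its download URL are placeholders:

# One Dockerfile, many dependency versions, selected at build time.
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
ARG TOOL_VERSION=1.0.0
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSL "https://example.com/third-party-tool-${TOOL_VERSION}.tar.gz" \
        -o /tmp/tool.tar.gz
# ... unpack and compile the tool here, then build the app ...
EOF

# One image tag per dependency version, all from the same Dockerfile:
docker build --build-arg TOOL_VERSION=1.0.0 -t docker-repo/myapp:v1.0-tool1.0.0 .
docker build --build-arg TOOL_VERSION=2.0.0 -t docker-repo/myapp:v1.0-tool2.0.0 .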

How to tell the software version under a tag on Docker Hub

I am quite a newbie in Docker, and I am trying to find a way to tell the version of a tagged Docker Hub image.
For instance, the jenkins/jenkins:lts-latest image, listed here https://hub.docker.com/r/jenkins/jenkins/tags/ - what image version does it actually alias? And how can I infer the corresponding Dockerfile/branch in the Jenkins repo?
I tried with docker search but I couldn't. I also tried to find a clue in the official Jenkins GitHub Dockerfile repo: https://github.com/jenkinsci/docker, but I don't see any binding tag or anything that gives me a hint about the source of the image.
Another example: I have a Kubernetes cluster, and when I check my Nexus pod, I see likewise that the image is defined as sonatype/nexus3:latest.
In this case at least I have the image ID: docker-pullable://sonatype/nexus3@sha256:434a2564aa64646464afaf.. but once again I don't know how to map it to the actual version of the software.
For the repos you asked about, the answer is no.
When setting up a repo on Docker Hub, there are two kinds of options for the user to choose from:
1) Create Repository:
In this case, Docker Hub just creates a repo for the user; the user needs to build his own image on a local server, tag it, and push it to Docker Hub.
When the user pushes his image to Docker Hub, no additional information about the source version is attached, so you can't get any source mapping from Docker Hub.
jenkins/jenkins is this kind of repo.
2) Create Automated Build:
In this case, Docker Hub fetches the code from GitHub or Bitbucket and builds the image on its own cloud infrastructure, so it knows exactly which source commit produced the current Docker image.
jenkins/jnlp-slave is this kind of repo.
You can click its Build Details on the web page, then click into one link, e.g. 3.26-1-alpine, and you will see the log mention that 0a0239228bf2fd26d2458a91dd507e3c564bc7d0 is the source commit.
To sum up: the repos you mentioned in the question are not Automated Builds, so you cannot get the mapping between image and source code; but if you later find a repo on Docker Hub which is an Automated Build and want to know the mapping, you can.
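Independent of how a repo was built, the image metadata itself often reveals the software version. A sketch, assuming the image records its version in an environment variable or label (many official images do, but verify for your image):

docker pull jenkins/jenkins:lts-latest
# Many images set a version environment variable at build time:
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' \
    jenkins/jenkins:lts-latest | grep -i version
# Image labels are another common place for version metadata:
docker inspect --format '{{json .Config.Labels}}' sonatype/nexus3:latest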
As far as I understand your question, you are trying to tag the Docker image with exactly the same version as your software. For that I create the image tag like this:
$ export VERSION="2.31-b19"
$ docker tag "<user>/<image>:${VERSION}" "<docker_hub_user>/<repo>:latest"
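For completeness, and assuming the same placeholders, the full build-tag-push flow might look like:

$ export VERSION="2.31-b19"
$ docker build -t "<user>/<image>:${VERSION}" .
$ docker tag "<user>/<image>:${VERSION}" "<docker_hub_user>/<repo>:latest"
$ docker push "<docker_hub_user>/<repo>:latest"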
If this is not the case, please explain your use case a bit more so that we can provide a better workaround.

Build chain in the cloud?

(I understand this question is somewhat out of scope for Stack Overflow, because it contains several problems and is somewhat vague. Suggestions on the proper place to ask it are welcome.)
I have some open source projects depending on each other.
The code resides on GitHub, the builds happen on Shippable, using Docker images which in turn are built on Docker Hub.
I have set up an artifact repo and a Debian repository; Shippable builds put the packages there, and the Docker builds use them.
The build chain looks like this in terms of deliverables:
pre-zenta docker image
zenta docker image (two steps of docker build because it would time out otherwise)
zenta debian package
zenta-tools docker image
zenta-tools debian package
xslt docker image
adadocs artifacts
Currently I am triggering the builds by pushing to GitHub, and sometimes rerunning failed builds on Shippable after the Docker build has run.
I am looking for solutions to the following problems:
Where to put the Dockerfiles? Now they are in the repo of the package that needs the resulting Docker image for its build. This way all the information needed to build the package is in one place, but sometimes I have to trigger an extra build to have the package actually built.
How to trigger builds automatically?
..., in a way that supports git-flow? For example, if I change the code in the zenta develop branch, I want to make sure that zenta-tools will build and test with the development version of it before merging to master.
Is there a tool with which I can get an overview of the health of the whole build chain?
Since your question is related to Shippable, I've created a support issue for you here: https://github.com/Shippable/support/issues/2662. If you are interested in discussing the best way to handle your scenario, you can also send me an email at support@shippable.com. You can set up your entire flow, including building the Docker images, using Shippable.
