docker hub webhook for _failed_ build?

I have several automated builds set up on docker hub. I see that I can set up a webhook to POST to a URL on a successful build, but it seems like it's more useful to be notified of a failed build. Is there any way to do that?
I tried adding a webhook and then pushing a deliberately bad RUN instruction to my Dockerfile. The automated build failed as expected but nothing was sent to my webhook.
Some of my builds are triggered not by git pushes but by cron jobs, so even if I tested the build before every commit, it wouldn't catch this situation. Builds that are successful one day could fail the next due to changing contents of URLs downloaded via ADD.
So...is there a way to get a notification of a failed automated build? If not, consider this a feature request.

You can turn on email notifications for failed builds by going to your user settings, clicking on Notifications, and checking the appropriate box. (Thanks to the Docker support Twitter account; this wasn't obvious to me either! https://twitter.com/DockerSupport/status/555912171792527360 )
As you have observed, a webhook for a POST event is not available for failed builds. I imagine the idea is that these are more for triggering some follow-up event such as telling a machine to pull the new image, while an email notification makes more sense for a failed build.

One option is to poll the Docker Hub v2 API and emulate the missing notifications whenever the build history reports a failure (-1) or a build remains queued for too long. The solution is described here: Docker on-failure Webhook. It is based on the Axibase Time Series Database sandbox image.
docker run -d -p 8443:8443 -p 9443:9443 \
--name=atsd-sandbox \
--env NAMESPACE='google' \
--env NOTIFY_URL='https://webhook.site/71fd9feb-8751-4afd-9e13-16072a34b259' \
--env ATSD_IMPORT_PATH='https://raw.githubusercontent.com/axibase/atsd-use-cases/master/how-to/docker/resources/notify.xml,https://raw.githubusercontent.com/axibase/atsd-use-cases/master/how-to/docker/resources/rule.xml' \
--env COLLECTOR_IMPORT_PATH='https://raw.githubusercontent.com/axibase/atsd-use-cases/master/how-to/docker/resources/job.xml' \
axibase/atsd-sandbox:latest
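If you'd rather not run the full sandbox, the same idea can be approximated with a small polling script run from cron. A minimal sketch, assuming the v2 build-history endpoint and its numeric status field (-1 = error) behave as described above; the endpoint, field names, and webhook URL are all assumptions to verify against the current API:
#!/bin/sh
# Hedged sketch: emulate an on-failure webhook by polling build history.
# Endpoint and status field (-1 = error) are assumptions; verify them first.
REPO=mynamespace/myrepo                    # placeholder repo
HOOK=https://example.com/my-failure-hook   # placeholder webhook URL
STATUS=$(curl -s "https://hub.docker.com/v2/repositories/$REPO/buildhistory/?page_size=1" \
  | jq '.results[0].status')
if [ "$STATUS" = "-1" ]; then
  # Re-create the missing notification: POST a small JSON payload to the hook.
  curl -s -X POST -H "Content-Type: application/json" \
    -d "{\"repo\": \"$REPO\", \"status\": $STATUS}" "$HOOK"
fi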
If the build fails intermittently, you can even program the rule to 'retaliate' against Docker Hub by initiating a retry using remote triggers.
Disclaimer: I work for Axibase.

Related

How to find the status of automated builds in Docker Hub

How to find the status of automated builds which get triggered either manually or via the Trigger URL for a Docker Hub project:
https://hub.docker.com/repository/docker/company/webapp/builds
I would like to understand if it's running, and if so, its current status (i.e. still running or build completed):
List of past builds
Status of all builds (time initiated, completed, build status)
Try with this:
curl -s "https://hub.docker.com/api/build/v1/source/?image={namespace}/{repo}" | jq '.objects[0].state'
Add a bearer token to your request if it is a private repo:
-H "Authorization: Bearer {Token}"
docker ps --no-trunc
will display the list of running containers with full, untruncated IDs, commands, and other details.

How to implement build verification process for GPL compliance

We provide our customers an open source library with our fixes, in binary and source code form. According to the GPL, we should provide our compilation scripts and modifications to the source code.
Build script logic: install required packages, clone the git repository, apply patches, and build.
The customer should be able to do the same on clean Ubuntu image.
How do we implement a verification process which takes our scripts/sources and runs them?
Should I use a VM and revert its state each time I verify the build?
Or should I use some Docker image, or something else?
If you have docker available, you can use a linux image and a build script. Assuming you already have a working base image, you can run your build script with docker run, for example something like
docker run -i --rm --entrypoint /bin/sh mycontainer < myscript
where mycontainer is your image name, myscript is the path to your build script (which installs dependencies and builds your application), and --rm is specified to clean up the container after exit. The script is provided via stdin in this example, but you could include it in your image and run it directly.
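For illustration, myscript could be as simple as the following sketch (the package list, repository URL, and patch location are placeholders for your project's actual requirements):
#!/bin/sh
set -e                                     # stop at the first failing step
apt-get update
apt-get install -y build-essential git     # hypothetical build dependencies
git clone https://example.com/our-library.git /src    # placeholder repo URL
cd /src
git apply /patches/*.patch                 # the GPL-required modifications
make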
If you use GitHub or GitLab CI, you can add the build to a pipeline job so it runs automatically on git commits (for example, every time the master branch is updated). If you already have Docker images configured, you only need to add a job to the CI system.

How do I check for build status, running Jenkins jobs from a BitBucket pipeline?

We're using BitBucket to host our Git repositories.
We have defined build jobs in a locally-hosted Jenkins server.
We are wondering whether we could use BitBucket pipelines to trigger builds in Jenkins, after pull request approvals, etc.
Triggering jobs in Jenkins through its REST API is fairly straightforward.
1: curl --request POST --user $username:$api_token --head http://jenkins.mydomain/job/myjob/build
This returns a Location response header. By doing a GET on that URL, we can obtain information about the queued item:
2: curl --user $username:$api_token http://jenkins.mydomain/queue/item/<item#>/api/json
This returns JSON describing the queued item, indicating whether the item is blocked, and why. If it's not, it includes the URL for the build. With that, we can check the status of the build itself:
3: curl --user $username:$api_token http://jenkins.mydomain/job/myjob/<build#>/api/json
This returns yet more JSON, indicating whether the job is currently building and, if it has completed, whether the build succeeded.
Now BitBucket pipeline steps run in Docker containers, and have to run on Linux. Our Jenkins build jobs run on a number of platforms, not all of which are Linux. But BitBucket shouldn't care: making the necessary REST API calls can be done from Linux, as I do in the examples above.
But how do we script this?
Do we create a single step that runs a shell script that runs command #1, then repeatedly calls command #2 until the build is started, then repeatedly calls command #3 until the build is done?
Or do we create three steps, one for each? Do BitBucket pipelines provide for looping on steps? Calling a step, waiting for a bit, then calling it again until it succeeds?
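For reference, the single-step variant (one script chaining the three calls) might look like the following sketch. The executable.url, building, and result fields come from Jenkins' standard JSON API; the host, job name, and sleep intervals are placeholders:
#!/bin/sh
AUTH="$username:$api_token"
BASE="http://jenkins.mydomain"
# 1. Trigger the job and capture the queue item URL from the Location header.
QUEUE_URL=$(curl -s -D - -o /dev/null -X POST --user "$AUTH" "$BASE/job/myjob/build" \
  | awk -F': ' 'tolower($1) == "location" {print $2}' | tr -d '\r')
# 2. Poll the queue item until the build leaves the queue and gets a build URL.
BUILD_URL=""
while [ -z "$BUILD_URL" ]; do
  sleep 5
  BUILD_URL=$(curl -s --user "$AUTH" "${QUEUE_URL}api/json" | jq -r '.executable.url // empty')
done
# 3. Poll the build itself until .result is set (SUCCESS, FAILURE, ...).
RESULT=null
while [ "$RESULT" = "null" ]; do
  sleep 10
  RESULT=$(curl -s --user "$AUTH" "${BUILD_URL}api/json" | jq -r '.result')
done
echo "Build finished: $RESULT"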
I think you should use either a Bitbucket pipeline or a Jenkins pipeline. Using both will give you too many options and make the project more complex than it should be.

Docker trigger jenkins job when image is pushed

I am trying to trigger a Jenkins job (via its remote build trigger) whenever a Docker image is built, but all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically as a base image of your own containers). Here's how. "Polling" here does not imply crudely pulling the whole image every 5 minutes or so; Watchtower periodically checks for changes in the image, downloading only the checksum (SHA digest) most of the time (whenever there are no changes relative to the locally cached image).
Install the Build Token Root Plugin in your Jenkins server and set it up to receive Slack-formatted notifications secured with a token, to trigger builds remotely or - safer - locally (those triggers will be coming from the Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want. Here's how.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build some safety checks on top of Watchtower to "watch the watchman".
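A condensed sketch of steps 1 and 3 above. The Watchtower image name and notification variables follow its documentation at the time of writing, and the endpoint format is the Build Token Root plugin's standard one; the Jenkins host, job name, token, and container name are placeholders:
# Watch a running container's image and, on change, hit Jenkins' Build Token
# Root endpoint via Watchtower's Slack-style notification hook.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="http://jenkins.mydomain/buildByToken/build?job=myjob&token=MYTOKEN" \
  containrrr/watchtower --interval 300 my-app-container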
You can try the following plugin, which claims to do what you're looking for: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks can only be set on repos you own, so targeting your Jenkins server endpoint requires making periodic copies of the image to another repo that you own [see my other answer with the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job that periodically polls (docker pull) the source repo for its 'latest' tag and, if a change is detected, re-tags the image as your own and does a docker push to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
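A minimal sketch of that cron job (the source image, destination repo, and the change check via image IDs are placeholders and assumptions to adapt):
#!/bin/sh
# Pull the third-party image; if its ID changed, re-tag and push to our own
# repo, whose Docker Hub webhook then notifies Jenkins.
SRC=library/ubuntu:latest               # placeholder: third-party image
DEST=mynamespace/ubuntu-mirror:latest   # placeholder: repo you own
BEFORE=$(docker image inspect --format '{{.Id}}' "$SRC" 2>/dev/null)
docker pull "$SRC" >/dev/null
AFTER=$(docker image inspect --format '{{.Id}}' "$SRC")
if [ "$BEFORE" != "$AFTER" ]; then
  docker tag "$SRC" "$DEST"
  docker push "$DEST"                   # fires the webhook on the repo you own
fi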
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic, thanks to the local cache of docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. In the case where the GitHub repo makes releases, you can get push notifications of them using GitHub's Watch > Releases Only feature, provided the project has CI docker builds. Docker images will usually be available with a delay after code releases, even with complete automation, so image polling is more reliable.
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job with a periodic docker search to list all tags of the docker image of interest (here's the script). Note that this script requires substituting the jannis/jq image with an existing one (e.g. docker run --rm -i imega/jq).
Save the resulting tags list to a file and monitor it for changes (e.g. with inotifywait).
Fire a POST request with curl to your Jenkins server's endpoint, using the Generic Webhook Trigger plugin. (A combined sketch follows the cautions below.)
Cautions:
for efficiency reasons this tags-listing script should be limited to a few (say, 3) top pages or to simple repos with only a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, like, say, Ubuntu tags (e.g. trusty-20190515 was updated in late November without any change to its mid-May tag).
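A combined sketch of the three steps (repo name, state file, Jenkins host, and token are placeholders; the tags endpoint is Docker Hub's v2 repositories API):
#!/bin/sh
# List tags, compare with the previous run, and notify Jenkins on any change.
REPO=library/ubuntu                      # placeholder: image to watch
STATE=/var/tmp/watched-image.tags
curl -s "https://hub.docker.com/v2/repositories/$REPO/tags/?page_size=100" \
  | jq -r '.results[].name' | sort > "$STATE.new"
if ! cmp -s "$STATE" "$STATE.new"; then  # also fires on the very first run
  # Generic Webhook Trigger plugin endpoint (token is a placeholder).
  curl -s -X POST "http://jenkins.mydomain/generic-webhook-trigger/invoke?token=MYTOKEN"
fi
mv "$STATE.new" "$STATE"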

Can a Dockerfile push itself to a registry?

For the use case where a Dockerfile needs to be built for each platform it runs on (a bit niche, I know), is there any way for it to push itself to the registry, i.e. calling docker push from within the Dockerfile?
Currently, this is done:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
Could the login and push steps be directly or cleverly built into the Dockerfile being built or with a combination of others?
Note: This would operate in a secure environment of trustworthy users (so all users being able to push to the registry is fine).
Note: This is an irregular use of Docker, not a good idea for building/packaging software in general, rather I am using Docker to share environments between developers.
I am wondering why you can't have a wrapper script (say, shell or bat) around the Dockerfile to do these steps:
docker build -t my-registry/<username>/<image>:<version> .
docker login my-registry
docker push my-registry/<username>/<image>:<version>
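As a sketch, such a wrapper could also parameterize the version (the registry, username, and image name are placeholders):
#!/bin/sh
# build-and-push.sh -- hypothetical wrapper bundling the three steps above.
set -e
VERSION=${1:-latest}                     # version passed as the first argument
IMAGE="my-registry/myuser/myimage:$VERSION"
docker build -t "$IMAGE" .
docker login my-registry
docker push "$IMAGE"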
What is so specific about the Dockerfile? I know this is not addressing the question that you asked, and I might have totally misunderstood your use case, but I am just curious.
As others pointed out, this can easily be achieved using a CD system like Drone.io, Travis, or Jenkins.
At first this sounds to me like the widely-circulated "NASA space pen" myth. But as I said earlier, you may have a proper, valid use case which I am not aware of yet.
docker build creates an image using the recipe provided in the Dockerfile. Each instruction in the Dockerfile creates a new temporary filesystem image with a different checksum. The image produced by the last instruction of the Dockerfile is the final image of the build process and is tagged with the provided name.
So it is not possible to put a docker push command inside the Dockerfile, because the image's creation is not finished yet.
Having a Dockerfile push its own image will never work.
To explain a bit more:
What happens when you build an image: Docker spawns a container and does everything the Dockerfile specifies. You can even see this by running docker ps during the build. If the exit status of the container is 0 (no errors), an image is created from the container.
We don't really have much control over this process other than the build parameters. It's definitely a chicken and egg problem.
Build systems should do this stuff
It's even fairly easy to do this in Jenkins. The Jenkins setup I have uses a docker plugin and executes build commands on remote Docker hosts, so the Jenkins nodes only pull the repo, run a build, then push the image, properly tagged, to a private repo (and then delete the local image). You can also run unit tests in Docker by making a separate Dockerfile (it gets a bit more complicated when you need external mock services).
Builds per branch/architecture are not too hard to set up. With remote hosts doing the build work, we can boost the job limit in Jenkins fairly high, and it can run on cheap hardware.
You can also run Jenkins in Docker and make it build images in the Docker engine it runs in. I do that through TLS, though the old trick of mapping the socket file into the Jenkins container might still work.
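For the socket-mapping trick, a minimal sketch (the official jenkins/jenkins image is assumed; a docker CLI must additionally be available inside the container for builds to work):
# Run Jenkins in a container while letting it drive the host's Docker engine
# through the mapped socket.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts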
I think I started with the CloudBees Docker Build and Publish plugin, and it was fairly easy to use, but now I use a custom plugin, so I have no idea about the alternatives.
