Docker: updating image and registry

What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push new image to Docker Hub (or AWS ECR)
I make some changes in source code
Push changes to GitHub
And what I should do now to update registry (Docker Hub)?
A) Should I run again "docker build" and then push new image (with new tag) to registry?
B) Should I somehow commit changes to existing image and update existing image on Docker Hub?

This will depend on what you will use your Docker image for and what "releasing" policy you adopt.
My recommendation is that you sync the tags you keep on Docker Hub with the releases/tags you have on GitHub, and automate as much of the process as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes:
You make your code modifications and integrate them in GitHub, ideally using a pull request scheme. This means your code will be merged into your master branch.
Your Jenkins is configured so that when master changes it builds against your Dockerfile and pushes the image to Docker Hub. This overwrites your "latest" tag and makes sure the latest tag on Docker Hub is always in sync with your master release on GitHub.
If you need to keep additional tags, typically for different branches or releases of your software, do the same as above with the tag hooked up through Jenkins and GitHub webhooks to a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
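For concreteness, the Jenkins job wired to master could boil down to a shell step along these lines (a minimal sketch; the image name and the branch-to-tag mapping are placeholders, not something prescribed by the question):

#!/usr/bin/env bash
# Hypothetical CI build step: build once, tag with the git revision and "latest", push both.
set -euo pipefail
IMAGE="myorg/myapp"                      # placeholder Docker Hub repository
GIT_REF="$(git rev-parse --short HEAD)"  # or a GitHub release tag

docker build -t "${IMAGE}:${GIT_REF}" .
docker tag "${IMAGE}:${GIT_REF}" "${IMAGE}:latest"   # keep latest in sync with master
docker push "${IMAGE}:${GIT_REF}"
docker push "${IMAGE}:latest"

In terms of the original question, this is option A: you rebuild and push a new image/tag rather than committing changes into the existing image.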

Related

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a Maven build produces a WAR file and SQL files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the WAR and a slim MySQL image with the SQL data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development. But it forces users to build images on their computer. I want the users to be able to only download images. The setup should also be using dockerhub's automated build to keep images up to date.
How can I set it up to achieve this? What command(s) or file(s) do I give the users to allow them to download images and run containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the Dockerfile? publish intermediate images to Docker Hub and ensure a correct build order? not use Docker Hub's automated builds?)
To use Docker Hub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the GitHub org/user name as the Docker image user name and the GitHub repo name as the Docker image name). Multi-stage builds work in automated builds, but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI, or even on your local machine, and then push to Docker Hub. You'd just need to have an account on Docker Hub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos, but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's OK if they are built under a different name, as you can rename by retagging before pushing. This way you can also build the images however you like.
When you have images on Docker Hub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user runs docker-compose up.
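As a sketch of the manual route (the user name and image names are placeholders; the lutece-mysql target name is borrowed from the example later in this thread):

#!/usr/bin/env bash
# Build one stage of the multi-stage Dockerfile under a local name,
# then retag it with the <dockerhub_user>/ prefix and push it to Docker Hub.
docker build --target lutece-mysql -t local-mysql .
docker tag local-mysql mydockeruser/lutece-mysql:latest
docker login                                   # log in to your Docker Hub account once
docker push mydockeruser/lutece-mysql:latest

The docker-compose.yml you hand to users then replaces the build/target entries with image: mydockeruser/lutece-mysql:latest, so docker-compose up only pulls.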
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your GitHub repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container. Containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct layer and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
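A hedged sketch of what that could look like, reusing the image names from the example above and assuming the $IMAGE_NAME variable is available in the automated-build hook environment:

#!/usr/bin/env bash
# hooks/build: build and push the extra target first, then build the final
# stage last so the automated build publishes it under $IMAGE_NAME.
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
docker build -t "$IMAGE_NAME" .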
Recommendations:
If you need multiple tags for an image use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
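For example (a sketch; the label keys are a common convention rather than anything Docker Hub requires, and it assumes the hook runs inside the git checkout with $IMAGE_NAME set by the build environment):

#!/usr/bin/env bash
# hooks/build: stamp the image with the build date and git commit as labels.
docker build \
  --label "org.label-schema.build-date=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --label "org.label-schema.vcs-ref=$(git rev-parse --short HEAD)" \
  -t "$IMAGE_NAME" .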
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
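A plausible end-user flow, assuming the compose file is published somewhere they can fetch it (the URL below is a placeholder):

#!/usr/bin/env bash
# Fetch only the compose file (no source checkout or build needed), pull images, start the stack.
curl -fsSLO https://raw.githubusercontent.com/<org>/<repo>/master/docker-compose.yml  # placeholder URL
docker-compose pull     # pulls the published images from Docker Hub
docker-compose up -d    # starts the containers in the background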

Docker hub/store doesn't show build information

I'm having problems with docker continuous integration.
I set up automated builds on cloud.docker.com for my project, but there is no information at all, either on the websites (Hub/Store) or in their API, that shows my build is automated.
Docker Cloud shows the automated build as expected, but in the registry (Hub/Store) there is no "Builds" section, unlike what I see on other members' projects.
Also, as I said, the endpoint https://registry.hub.docker.com/v2/repositories/{user}/{project}/ shows me "automated build: false".
I just realized that, in some way, there is no link between the Docker Cloud automated builds and the Docker Hub ones.
If you create an automated build in Docker Hub, everything works. I don't understand the logic of this, because if you create a repo in either Docker Cloud or Docker Hub they are synchronized as one, but automated builds created in Docker Cloud don't show up correctly in Docker Hub/Store.
Both the Docker Hub and Docker Store pages will be updated whenever you push to your repo or a new build is sent with docker push, but the information about the automated build will only be shown in Docker Cloud if you created it there.
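If you want to check the flag from a script, the endpoint mentioned in the question can be queried directly. A sketch, using the imega/jq image mentioned elsewhere in this thread and assuming the field is named is_automated in the API response (library/postgres is just an example repository):

#!/usr/bin/env bash
# Query the Docker Hub API for a repository and print its automated-build flag.
curl -fsS "https://registry.hub.docker.com/v2/repositories/library/postgres/" \
  | docker run --rm -i imega/jq '.is_automated'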

Docker trigger jenkins job when image is pushed

I am trying to trigger a Jenkins job remotely ("trigger builds remotely") when a Docker image is built. All I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the Jenkins URL.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower Docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically a base image of your own containers). "Polling" here does not mean crudely pulling the whole image every 5 minutes or so; Watchtower periodically checks for changes in the image, downloading only the checksum (SHA digest) most of the time (when there are no changes to the locally cached image).
Install the Build Token Root Plugin on your Jenkins server and set it up to receive Slack-formatted notifications secured with a token, to trigger builds remotely or, safer, locally (those triggers will be coming from the Watchtower container, not from Slack).
Set up Watchtower to post Slack-formatted messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want.
Optionally, if your scale is so large that you could end up overloading and bringing down the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), build in some safety checks on top of Watchtower to "watch the watchman".
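A sketch of steps 1 and 3 in a single command (the notification variable names follow Watchtower's documented Slack settings but may differ between Watchtower versions; the hook URL is a placeholder for your Jenkins endpoint):

#!/usr/bin/env bash
# Run Watchtower against the local Docker daemon, checking for new image digests
# every 5 minutes and posting a Slack-formatted notification on changes.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://jenkins.example.com/hook/with-token" \
  containrrr/watchtower --interval 300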
You can try the following plugin, which claims to do what you're looking for: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks targeting your Jenkins server endpoint only fire on repos you own, so for a third-party image this requires making periodic copies of the image to another repo that you own [see my other answer with the Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job that periodically polls (docker pull) the source repo for its latest tag and, if a change is detected, re-tags the image as your own and pushes (docker push) it to a repo you own (e.g. a "clone" of the source Docker Hub repo) on which you have set up a webhook targeting your Jenkins build endpoint.
Then, and only then (in a repo you own), will Jenkins plugins such as Docker Hub Notification Trigger work for you.
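A minimal sketch of such a cron job (the image names are placeholders; the webhook on the destination repo is configured separately in Docker Hub):

#!/usr/bin/env bash
# Poll an upstream image; if its ID changed, retag it into a repo you own and push,
# so the Docker Hub webhook on that repo fires and triggers Jenkins.
SRC="postgres:latest"                 # upstream image to watch (placeholder)
DEST="myuser/postgres-mirror:latest"  # repo you own, with the webhook (placeholder)

before="$(docker image inspect --format '{{.Id}}' "$SRC" 2>/dev/null || true)"
docker pull "$SRC"
after="$(docker image inspect --format '{{.Id}}' "$SRC")"

if [ "$before" != "$after" ]; then
  docker tag "$SRC" "$DEST"
  docker push "$DEST"
fi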
Polling for Dockerfile / release changes
As a substitute for polling the registry for image changes (which need not generate much network traffic, thanks to the local cache of Docker images), you can also poll the source Dockerfile on GitHub using wget; for instance, the Dockerfiles of the official Docker Hub images live in the docker-library organization on GitHub. If the GitHub repo makes releases, you can also get push notifications for them using GitHub's Watch > Releases Only feature, provided the project has CI Docker builds. Docker images usually become available only with a delay after code releases, even with complete automation, so image polling is more reliable.
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (including, apparently, Google), but sadly it was not taken up by participants.
Run a cron job that periodically lists all tags of the Docker image of interest (here's the script). Note that this script requires substituting an existing jq image (e.g. docker run --rm -i imega/jq) for the jannis/jq image.
Save the resulting tags list to a file and monitor it for changes (e.g. with inotifywait).
Fire a POST request using curl at your Jenkins server's endpoint, using the Generic Webhook Trigger plugin (see the sketch after the cautions below).
Cautions:
for efficiency reasons, this tags listing script should be limited to a few (say, 3) top pages, or to simple repos with only a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, as with some Ubuntu tags (e.g. trusty-20190515 was updated in late November without any change to its mid-May datestamp).
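Putting the three steps together, a minimal sketch (the repository, token and Jenkins URL are placeholders; the invoke URL shape is the one the Generic Webhook Trigger plugin exposes):

#!/usr/bin/env bash
# List the tags of an image via the Docker Hub API; if the list changed since the
# last run, fire a POST at the Jenkins Generic Webhook Trigger endpoint.
REPO="library/ubuntu"                                                            # image to watch (placeholder)
HOOK="https://jenkins.example.com/generic-webhook-trigger/invoke?token=SECRET"   # placeholder

curl -fsS "https://registry.hub.docker.com/v2/repositories/${REPO}/tags/?page_size=25" \
  | docker run --rm -i imega/jq -r '.results[].name' | sort > /tmp/tags.new

if ! cmp -s /tmp/tags.new /tmp/tags.old; then
  curl -fsS -X POST "$HOOK"      # notify Jenkins of the change
  mv /tmp/tags.new /tmp/tags.old
fi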

Git to docker export

I'd like some opinions on this workflow. The intention is to semi-automate and revision control the creation/export of docker containers.
I have some docker directories with a Dockerfile etc. inside (enough to build a Docker image from). At the moment I've set up a process where such a directory becomes a local git repo, and I set up a bare repo on a remote server. Then I add an 'update' hook to the remote repo that takes the name of the repo and calls a script that clones that repo, builds the Docker image, starts a container, exports the container, and deletes the repo. I end up with a .tar of my Docker container every time I push an update to that repo (a sketch of such a hook appears below).
The only issue is that I have to manually copy the hook to each remote repo I set up (since .git/hooks doesn't get pushed from local).
So I'm looking for some feedback on whether this whole process has any intelligence to it or if I am going about it the completely wrong way.
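For reference, a minimal sketch of the kind of hook described in the question (paths and names are placeholders; note that an 'update' hook runs before the ref is updated, so a post-receive hook is usually the better place for this):

#!/usr/bin/env bash
# Server-side git hook: clone the just-pushed repo, build the image,
# export a container created from it as a tarball, then clean up.
set -e
unset GIT_DIR                             # avoid git confusion when working outside the bare repo
REPO_DIR="$(pwd)"                         # the bare repo the hook runs in
NAME="$(basename "$REPO_DIR" .git)"       # derive an image name from the repo name
WORK="$(mktemp -d)"

git clone "$REPO_DIR" "$WORK/$NAME"
docker build -t "$NAME" "$WORK/$NAME"
CID="$(docker create "$NAME")"
docker export "$CID" > "/var/exports/$NAME.tar"   # placeholder output path
docker rm "$CID"
rm -rf "$WORK"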
What you are looking for is called "Continuous Integration".
There are multiple ways to achieve it, but here's how I do it:
Set up a Jenkins server
Put all docker files into one git repo, as modules if necessary
Have Jenkins check for changes in the repo every few minutes
Have Jenkins build the docker images after pulling in the changes

Deploying an existing docker image with Deis

I already have a build server that I use to generate a Docker image for an application, which I then put into cloud storage. This is not an image that can be publicly shared on the Docker index.
How can I run this application docker image in deis?
Deis is designed to build your Docker image from your git repo via a buildpack or Dockerfile (although I can't find instructions on how to use a Dockerfile instead of a buildpack). This could be considered a legacy integration issue. However, the current setup of running the build service on the application cluster is not good for me, because I want my build server to be a lot more powerful than my application server. Ideally my build server would spin up on demand, although I won't bother with that right now.
We are hoping to resolve this feature request with https://github.com/deis/deis/issues/533.
Ideally we see it as "build your image with - insert CI product here - then run deis push --app=appname to deploy your docker image as an application". After that, it would be treated the same as any other application deployed to deis. Basically, deis push is to pushing docker images as git push is to pushing repositories.
In regards to documentation for deploying an application with a Dockerfile, the docs are at http://docs.deis.io/en/latest/developer/dockerfile/, though this workflow will change back to a more sane deployment workflow once https://github.com/deis/deis/pull/967 is merged. There was some technical debt from v0.8.0, and Dockerfile deployments was one of them.
Deis is designed to build your docker image from your git repo via a buildpack or Dockerfile
The quote is not quite right. Deis is actually designed to build the Docker image from its own git repo. When you create a Deis application using deis create, Deis creates a new git remote named deis; that's why you run git push deis master to build your application.
So you don't need to push your image to a public repository in order to deploy to Deis. All you need is a Dockerfile. Just put your Dockerfile in the root directory of your application and make sure to commit that file; Deis will build the application using the Dockerfile instead of a buildpack.
Hope this will help!
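A minimal sketch of that flow, assuming a working Deis cluster and client (the application name is a placeholder):

#!/usr/bin/env bash
# Deploy an application from a Dockerfile kept in the app's own repo.
cd my-app                      # repo root containing the Dockerfile (placeholder)
deis create my-app             # creates the app and adds the "deis" git remote
git add Dockerfile
git commit -m "Add Dockerfile for Deis builds"
git push deis master           # Deis builds from the Dockerfile and deploys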

Resources