Dockerhub Automated Builds tagging - docker

I have created an automated build on Docker Hub, but unfortunately I couldn't find any proper documentation for these builds.
Basically, I am creating a system where a Docker automated build is triggered as soon as there is a commit in the GitHub repository. So how can I keep the Docker tag a variable, such that whenever there is a commit, the image being built is tagged with the latest commit's SHA-1?
I can put a regex in the branch name or tag name, so can't I put a regex in the Docker tag name as well? I would like to trigger the build with a curl request that specifies the Docker tag name.

I don't think what you want is possible. The only variable you can use in the docker tag name is {sourceref}, which expands to the branch or tag name.
I presume this was deliberate -- you'd vastly increase the number of images that Docker Hub had to store if each commit was given a different docker tag.
You could try using a continuous integration/deployment service to build the images outside of Docker Hub. There are many to choose from, but Travis and Circle are popular ones that should be able to do what you want.
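For example, a CI step could tag each build with the commit SHA and push it to Docker Hub (a minimal sketch; the repository name and credential variables are placeholders):
#!/usr/bin/env bash
# Build the image, tag it with the current commit's short SHA, and push it to Docker Hub.
set -euo pipefail

IMAGE="mydockerhubuser/myimage"        # placeholder Docker Hub repository
COMMIT_SHA="$(git rev-parse --short HEAD)"

docker build -t "${IMAGE}:${COMMIT_SHA}" .
echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USER}" --password-stdin
docker push "${IMAGE}:${COMMIT_SHA}"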

Related

Deploy through ArgoCD with same image name and tag (image:latest)

I have started to learn GitOps with Argo CD, and I have one basic doubt. I am unable to test Argo CD because I do not have a cluster, so it would be very kind of you to clear my doubts.
As an example, currently I am running my deployment using the test:1 Docker image. Then, using Jenkins, I push test:2 and put test:2 in place of test:1; Argo CD detects the change and applies the new image in the cluster.
But what if I had instead used test:latest, and then with Jenkins I pushed a new image with the same test:latest name? What will happen now? Will Argo CD deploy the image, given that the name and tag of the new and previous image are the same?
If you need automation, you can consider Argo CD Image Updater, whose update strategies include:
latest/newest-build - Update to the most recently built image found in a registry
It is important to understand, that this strategy will consider the build date of the image, and not the date of when the image was tagged or pushed to the registry.
If you are tagging the same image with multiple tags, these tags will have the same build date.
In this case, Argo CD Image Updater will sort the tag names lexically descending and pick the last tag name of that list.
For example, consider an image that was tagged with the f33bacd, dev and latest tags.
You might want to have the f33bacd tag set for your application, but Image Updater will pick the latest tag name.
argocd-image-updater.argoproj.io/image-list: myimage=some/image
argocd-image-updater.argoproj.io/myimage.update-strategy: latest
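For context, those annotations go on the Argo CD Application resource; a minimal sketch (the Application name, repo URL, and paths are placeholders) might look like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                         # hypothetical Application name
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: myimage=some/image
    argocd-image-updater.argoproj.io/myimage.update-strategy: latest
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests   # hypothetical manifests repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default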

Can we compute docker image digest before building the image to check against docker registry?

So we have a git repository that contains a number of Docker images (multiple Dockerfiles, each of which is used for a different type of application build in our Jenkins).
Now if someone makes a change to one Dockerfile, the Jenkins job will build and push all the other Dockerfiles in the repository as well. I was wondering whether we could calculate the digest (SHA-256) beforehand and compare it with our Docker registry; if the image is already there, we could skip docker build and docker push.
I couldn't find any such command in the Docker user guide, but with this post I also wanted to ask whether we can raise a ticket for this as a new feature, if this approach works and there is indeed no way of calculating this identifier.
Any other suggestion is greatly appreciated
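To make the intent concrete, here is a hedged sketch of the kind of check we would like to run before each build. Since the image digest only exists after a build, this uses a content hash of the Dockerfile as the tag instead; the paths and image names are placeholders:
#!/usr/bin/env bash
# Sketch: tag each image with the git blob hash of its Dockerfile and skip the
# build/push when that tag already exists in the registry.
set -euo pipefail

DOCKERFILE="services/app/Dockerfile"           # placeholder path
IMAGE="registry.example.com/app"               # placeholder image name
CONTENT_HASH="$(git hash-object "${DOCKERFILE}")"

if docker manifest inspect "${IMAGE}:${CONTENT_HASH}" > /dev/null 2>&1; then
  echo "Image for this Dockerfile already exists; skipping build and push."
else
  docker build -f "${DOCKERFILE}" -t "${IMAGE}:${CONTENT_HASH}" .
  docker push "${IMAGE}:${CONTENT_HASH}"
fi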

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile which builds multiple images meant to be used together (a Maven build produces a war file and sql files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the war and a slim MySQL image with the sql data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development. But it forces users to build images on their computer. I want the users to be able to only download images. The setup should also be using dockerhub's automated build to keep images up to date.
How can I set it up to achieve this? What command(s) or file(s) do I give the users to allow them to download images and run containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the Dockerfile? publish intermediate images to Docker Hub and ensure a correct build order? not use Docker Hub's automated builds?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's ok if they are built under a different name as you could rename by retagging before pushing. This way you can also build the images however you like.
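For example (the local and Docker Hub image names are placeholders):
# Log in to Docker Hub, retag the locally built image under your Docker Hub user, and push it.
docker login
docker tag local-build/myapp:latest mydockerhubuser/myapp:latest
docker push mydockerhubuser/myapp:latest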
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
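For example, a docker-compose.yml fragment might look like this (the repository names and tags are assumptions):
services:
  tomcat:
    image: mydockerhubuser/tomcat:latest   # hypothetical Docker Hub repository
  mysql:
    image: mydockerhubuser/mysql:latest    # hypothetical Docker Hub repository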
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your GitHub repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container. Containers should be ephemeral
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct layer and push the tag to Docker Hub
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
Recommendations:
If you need multiple tags for an image use a hook at hooks/post_push to add additional tags. That way each tag should link users to the same image. e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally you can use build hooks to label your image with things like build date and git commit.
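For example, a hooks/build script might add those as image labels (a sketch; SOURCE_COMMIT, DOCKERFILE_PATH, and IMAGE_NAME are environment variables Docker Hub is documented to provide to build hooks, but treat the exact names as an assumption):
#!/usr/bin/env bash
# hooks/build: replace the default build so the image carries build metadata as labels.
docker build \
  --label "org.opencontainers.image.revision=${SOURCE_COMMIT}" \
  --label "org.opencontainers.image.created=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -f "${DOCKERFILE_PATH}" -t "${IMAGE_NAME}" .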
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
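In practice that could be as little as:
docker-compose pull   # optional: fetch the published images up front
docker-compose up     # start the containers, pulling any missing images from Docker Hub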

Docker: updating image and registry

What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push new image to Docker Hub (or AWS ECR)
I make some changes in source code
Push changes to GitHub
And what I should do now to update registry (Docker Hub)?
A) Should I run "docker build" again and then push the new image (with a new tag) to the registry?
B) Should I somehow commit the changes to the existing image and update the existing image on Docker Hub?
This will depend on what for you will use your docker image and what "releasing" policy you adopt.
My recommendation is that you keep the tags on Docker Hub in sync with the releases or tags you have on GitHub, and automate as much of your production flow as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes :
You make your code modifications and integrate them in GitHub, ideally using a pull request scheme. This means your code will be merged into your master branch.
Your Jenkins is configured so that when master changes it builds against your Dockerfile and pushes the result to Docker Hub. This overwrites your "latest" tag and makes sure the latest tag on Docker Hub is always in sync with your master release on GitHub.
If you need to keep additional tags, typically for different branches or releases of your software, you do the same as above with the tag hooked up through Jenkins and GitHub webhooks on a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
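As a sketch, the Jenkins job for such a branch or release might run something along these lines (the image name and tagging scheme are assumptions):
#!/usr/bin/env bash
# Hypothetical Jenkins build step: tag the image after the git branch or release tag
# that triggered the build, so Docker Hub tags stay in sync with GitHub.
set -euo pipefail

GIT_REF="$(git describe --tags --always)"   # e.g. a release tag, or a short SHA as fallback
IMAGE="mydockerhubuser/myapp"               # placeholder Docker Hub repository

docker build -t "${IMAGE}:${GIT_REF}" .
docker push "${IMAGE}:${GIT_REF}"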

How can I edit my image tags on docker hub?

I have a public docker hub repository, automated build linked to a github repo.
I found I misnamed the tag of my last build.
Is it possible to manually edit the image tag after the build process without affecting the image?
For the Automated builds, manually pulling, re-tagging and pushing won't work.
First, even if you pull and re-tag your image, you cannot push manually to an Automated Build. You will end up getting Error pushing to registry: Authentication is required.
The true solution is to go to your Build Details page, click Settings -> Automated Build, edit the tag name under Docker Tag Name, and hit Save and trigger build. This will create a new tag and trigger the build.
Secondly, you cannot delete the tags (for Automated Builds) on your own. Please contact support@docker.com asking them to delete the tag.
Also, you should refrain from using HTTP DELETE requests against Docker Hub. Those API endpoints are only meant for a private registry, not for Docker Hub, to date. Docker is planning to release the V2 registry endpoint soon, after which you can safely use the API calls to delete/manipulate tags and images. Until then, do not use the V1/V2 endpoints for deleting tags.
