So I have an application called SongKong, and I wanted to build a Docker image for it. On hub.docker.com you can link to a repository containing a Dockerfile and build from it. I did not want to call this repo songkong because I already have a songkong repo for the actual application code, so I called it songkongdocker. But now my hub.docker.com repo is also called songkongdocker, and hub.docker.com repos usually seem to be named after the application, so ideally it should just be songkong.
So, what is the correct way to name it, and can my Bitbucket code repository have a different name than the hub.docker.com repository?
You can change the build/repo name when you are creating an Automated Build on hub.docker.com. The Bitbucket/GitHub repo name is used as the default, but you can still edit it.
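The same decoupling applies if you build and push manually: the name you push under is whatever you tag the image with, independent of the git repo name. A minimal sketch with a hypothetical Docker Hub account name:

```shell
# The git repo may be called "songkongdocker", but the pushed image can
# simply be named "songkong". "myuser" is a placeholder account name.
DOCKERHUB_USER="myuser"
IMAGE_REF="${DOCKERHUB_USER}/songkong:latest"
echo "${IMAGE_REF}"
# With Docker installed, you would then build and push under that name:
# docker build -t "${IMAGE_REF}" .
# docker push "${IMAGE_REF}"
```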
I am quite new to Docker, and I am trying to find a way to tell the version of a tagged Docker Hub image.
For instance, the jenkins/jenkins:lts-latest image, listed here https://hub.docker.com/r/jenkins/jenkins/tags/ — which image version does it actually alias? And how can I infer the corresponding Dockerfile/branch in the Jenkins repo?
I tried with docker search but couldn't. I also tried to find a clue in the official Jenkins GitHub Dockerfile repo: https://github.com/jenkinsci/docker, but I don't see any binding tag or anything that gives me a hint about the source of the image.
Another example: I have a Kubernetes cluster, and when I check my Nexus pod, I see likewise that the image is defined as sonatype/nexus3:latest.
In this case at least I have the image ID: docker-pullable://sonatype/nexus3@sha256:434a2564aa64646464afaf.. but once again I don't know how to map it to the actual version of the software.
For the repos you asked about, the answer is no.
When setting up a repo on Docker Hub, there are two options for the user to choose from:
1) Create Repository:
In this case, Docker Hub just creates a repo for the user, and the user needs to build his own image on a local machine, tag it, and push it to Docker Hub.
When the user pushes his image to Docker Hub, no additional information about the source version is attached, so you can't get any source mapping from Docker Hub.
jenkins/jenkins is this kind of repo.
2) Create Automated Build:
In this case, Docker Hub fetches the code from GitHub or Bitbucket and builds the image on its own infrastructure, so it knows exactly which source commit corresponds to the current Docker image.
jenkins/jnlp-slave is this kind of repo.
You can click its Build Details on the web page, then click into one of the links, e.g. 3.26-1-alpine, and you will see the log mention that 0a0239228bf2fd26d2458a91dd507e3c564bc7d0 is the source commit.
To sum up: the repos you mentioned in the question are not Automated Builds, so you cannot get the mapping between the image and the source code. But if you later come across a Docker Hub repo that is an Automated Build and want to know the mapping, you can.
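Even without an Automated Build, the digest in an image reference at least pins the exact build. A small sketch of how such a reference splits into its parts (the digest below is the truncated one from the question; the separator is `@`):

```shell
# Split an image reference of the form name@sha256:digest into its parts.
# A digest pins one exact image build, while a tag like lts-latest is a
# moving pointer -- which is why a tag alone cannot identify the version.
ref="sonatype/nexus3@sha256:434a2564aa64646464afaf"   # truncated, as in the question
name="${ref%%@*}"       # everything before the '@'
digest="${ref#*@}"      # everything after the '@'
echo "name=${name}"
echo "digest=${digest}"
```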
As far as I understand your question, you are trying to tag the Docker image with the same version as your software. For that I create the image tag like this:
$ export VERSION="2.31-b19"
$ docker tag "<user>/<image>:latest" "<docker_hub_user>/<repo>:${VERSION}"
If this is not the case, please explain your use case a bit more so that we can provide a better workaround.
I want to keep the images that I used in my Docker Hub account while maintaining a reference to the pulled image, something like when you fork a project on GitHub.
Currently I have tried the jwilder/nginx-proxy image. Now that I am satisfied with it, I committed the working container to a username/nginx-proxy image and pushed it.
The problem with this approach is that it is like a fresh image: it doesn't show the layers from jwilder/nginx-proxy, and there is no documentation or even a Dockerfile.
If you push the image, there is no reference to the original; that behavior is normal. You can add that reference or link in your repo info.
The Dockerfile is only shown if you did an Automated Build, linking your GitHub or Bitbucket account, so that the push is done automatically based on the Dockerfile of your project.
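To keep that provenance, the usual alternative to docker commit is a small Dockerfile that builds on the original image. A minimal sketch (the COPY line and the config file name are hypothetical):

```dockerfile
FROM jwilder/nginx-proxy
# Your customisations go here instead of inside a committed container, e.g.:
COPY my-nginx.conf /etc/nginx/conf.d/my.conf
```

Building and pushing from this Dockerfile (ideally via an Automated Build) preserves both the reference to jwilder/nginx-proxy and the documentation of what you changed.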
I know that Docker Hub is there, but it allows only one private repository. Can I put these images on GitHub/Bitbucket?
In general you don't want to use version control on large binaries (like videos or compiled files), as Git and the like were intended for 'source control', with the emphasis on source. Technically, there's nothing preventing you from putting Docker image files into Git (within the limits of the service you're using).
One major issue is that GitHub/Bitbucket have no integration with Docker, as neither provides the Docker Registry API needed for a Docker host to pull down images as needed. This means you'll need to manually pull the files out of the version control system holding them whenever you want to use them.
If you're going to do that, why not just use S3 or something like that?
If you really want 'version control' on your images (which Docker Hub does not provide), you'd need to look at something like: https://about.gitlab.com/2015/02/17/gitlab-annex-solves-the-problem-of-versioning-large-binaries-with-git/
Finally, Docker Hub only allows one free private repo; you can pay for more.
So the way to go is:
Create a repository on Github or Bitbucket
Commit and push your Dockerfile (with config files if necessary)
Create an automated build on Docker Hub which uses the GitHub/Bitbucket repo as its source.
In case you need everything private, you can self-host a Git service like GitLab or Gogs, and of course you can also self-host a Docker registry for the images.
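The steps above can be sketched as follows; all names and the base image are placeholders, and the image itself is built by Docker Hub rather than pushed by you:

```shell
# Steps 1-2: a repo that holds only the build recipe (the Dockerfile).
mkdir -p songkong-build && cd songkong-build
printf 'FROM eclipse-temurin:17-jre\nCOPY app/ /opt/app/\n' > Dockerfile
git init -q
git add Dockerfile
# Inline identity config so the commit works on a fresh machine:
git -c user.name=me -c user.email=me@example.com commit -qm "Add Dockerfile"
git rev-list --count HEAD
```

Step 3 is then done in the Docker Hub web UI by linking this repository as the Automated Build source.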
Yes, since Sept. 2020.
See "Introducing GitHub Container Registry" from Kayla Ngan:
Since releasing GitHub Packages last year (May 2019), hundreds of millions of packages have been downloaded from GitHub, with Docker as the second most popular ecosystem in Packages behind npm.
Available today as a public beta, GitHub Container Registry improves how we handle containers within GitHub Packages.
With the new capabilities introduced today, you can better enforce access policies, encourage usage of a standard base image, and promote innersourcing through easier sharing across the organization.
Our users have asked for anonymous access for public container images, similar to how we enable anonymous access to public repositories of source code today.
Anonymous access is available with GitHub Container Registry today, and we’ve gotten things started today by publishing a public image of our own super-linter.
GitHub Container Registry is free for public images.
With GitHub Actions, publishing to GitHub Container Registry is easy. Actions automatically suggests workflows for you based on your work, and we’ve updated the “Publish Docker Container” workflow template to make publishing straightforward.
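Pushing to the new registry is the usual tag-and-push flow. A hedged sketch with placeholder names (the ghcr.io host and the write:packages token scope are from GitHub's documentation):

```shell
OWNER="my-github-user"                    # placeholder GitHub account name
IMAGE_REF="ghcr.io/${OWNER}/myapp:1.0.0"  # images live under ghcr.io/<owner>/<name>
echo "${IMAGE_REF}"
# With Docker installed and a personal access token with write:packages scope:
# echo "$CR_PAT" | docker login ghcr.io -u "$OWNER" --password-stdin
# docker tag myapp:1.0.0 "${IMAGE_REF}"
# docker push "${IMAGE_REF}"
```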
GitHub is in the process of releasing something similar to ECR or Docker Hub. At the time of writing, it is in the alpha phase and you can request access.
From GitHub:
"GitHub Package Registry is a software package hosting service, similar to npmjs.org, rubygems.org, or hub.docker.com, that allows you to host your packages and code in one place. You can host software packages privately or publicly and use them as dependencies in your projects."
https://help.github.com/en/articles/about-github-package-registry
I guess you are talking about Docker images. You can set up your own private registry to hold them. If you are interested in pushing whole images rather than just Dockerfiles, then pushing them to GitHub is a very bad idea. Consider a 600 MB Docker image: pushing it to GitHub means putting 600 MB of data into a GitHub repo, and if you keep pushing more images there, it will get terribly bloated.
Also, a Docker registry does intelligent deduplication, storing only a single copy of each layer (a layer can be referenced by multiple images). If you use GitHub, you lose that: you will end up storing multiple copies of large files, which is really, really bad.
I would definitely suggest going with a private Docker registry rather than GitHub.
If there is a real need to put a Docker image on GitHub/Bitbucket, you can save it into an archive (using https://docs.docker.com/engine/reference/commandline/save/) and commit/push that to your repository.
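A sketch of that save/load round trip, with placeholder names; the docker commands are shown but not run here, since they require a Docker daemon:

```shell
IMAGE="myuser/myapp:1.0"        # placeholder image name
ARCHIVE="myapp_1.0.tar.gz"      # the file you would commit to the repo
echo "docker save ${IMAGE} | gzip > ${ARCHIVE}"
# ...commit/push the archive, then on the other machine restore it with:
# gunzip -c myapp_1.0.tar.gz | docker load
```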
I have set up an automated build from a Github repository for an application in Docker hub, but as part of the build process, I'd like to be able to write the current tag name to a local file.
Does Docker hub provide any environment variable with the name of the tag being built, which I can echo into a local VERSION file?
I'd like to avoid installing git inside the container just to run a git describe, as well as including the entire (currently dockerignored) .git folder to retrieve this info.
Thank you!
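One documented route (assuming an Automated Build) is a custom build hook: when a hooks/build script exists in the repo, Docker Hub runs it in place of the default build and sets environment variables such as DOCKER_TAG, SOURCE_COMMIT and IMAGE_NAME. A hedged sketch that forwards the tag into the build:

```shell
#!/bin/sh
# hooks/build -- run by Docker Hub instead of the default `docker build`.
# DOCKER_TAG is set by Docker Hub; the fallback here is only for local testing.
DOCKER_TAG="${DOCKER_TAG:-local-dev}"
echo "building with VERSION=${DOCKER_TAG}"
# docker build --build-arg VERSION="${DOCKER_TAG}" -t "${IMAGE_NAME}" .
# In the Dockerfile: declare ARG VERSION and, e.g., RUN echo "$VERSION" > /VERSION
```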
Is there a way to clone a Dockerfile from GitLab with the docker command?
I want to use the features that allow pull and commit.
I am not sure I have understood this well, but do pull and commit update the Dockerfile in the Git repository? Or only locally, in the next images?
If not, is there a way to get all the changes you made since the previous image built by the Dockerfile into another Dockerfile?
I know you can clone with Git directly, but, as with npm, can you also use Git URLs like git+https:// or git+ssh://?
The pull/commit commands affect the related image and operate directly against your configured registry, which is the official Docker Hub Registry unless configured otherwise. Perhaps some confusion may arise from the registry's support for Automated Builds, where the registry is directly bound to a repository and rebuilds the image every time the targeted repository branch changes.
If you wish to reuse someone's Docker image, the best approach is to simply reference it via the FROM instruction in your Dockerfile and effectively fork the image. While it's certainly possible to clone the original source repository and continue editing the Dockerfile contained therein, you usually do not want to go down that path.
So if there exists such a foo/bar image you want to continue building upon, the best, most direct approach to do so is to create your own Dockerfile, inherit the image by setting it as a base for your succeeding instructions via FROM foo/bar and possibly pushing your baz/bar image back into the registry if you want it to be publicly available for others to re-base upon.
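As a concrete sketch of that last step (foo/bar and baz/bar are the answer's placeholder names):

```shell
# Write a Dockerfile that bases the new image on the existing one.
printf 'FROM foo/bar\n# additional instructions for your baz/bar image\n' > Dockerfile
head -n 1 Dockerfile
# With Docker installed, build and optionally publish the derived image:
# docker build -t baz/bar .
# docker push baz/bar
```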