Is there a Java library for the Docker repository (registry)? I couldn't find any. I know there are a few Docker API client libraries, but I am looking for one for the repository itself.
My use case includes pushing images to a local repo and removing them from it.
I am talking about this: https://docs.docker.com/registry/spec/api/
Actually, there are official APIs and SDKs only for Python and Go; see "Develop with Docker Engine SDKs and API" in the Docker documentation.
You can find other unofficial APIs for Java and C, for example, but I haven't tested how well they work.
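Since the Registry HTTP API v2 linked above is plain REST, any HTTP client (including Java's) can drive it directly if no library fits. Here is a minimal sketch against a local registry at localhost:5000; the image name myapp is a placeholder, and deleting manifests requires the registry to be started with deletion enabled:

```sh
# List repositories and tags in a local registry (Registry HTTP API v2)
curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/myapp/tags/list

# Resolve the manifest digest for a tag, then delete that manifest
DIGEST=$(curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://localhost:5000/v2/myapp/manifests/latest \
  | awk -F': ' 'tolower($1)=="docker-content-digest" {print $2}' | tr -d '\r')
curl -X DELETE "http://localhost:5000/v2/myapp/manifests/$DIGEST"
```

The same GET/DELETE requests can be issued from any Java HTTP client; the digest comes back in the Docker-Content-Digest response header.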
I am just starting to learn about Docker. Is a Docker repository (like Docker Hub) useful? I see a Docker image as a package of source code and environment configuration (Dockerfile) for deploying my application. Well, if it's just a package, why can't I simply share my source code together with the Dockerfile (via GitHub, for example)? Then the user just downloads it all and runs docker build and docker run, and there is no need to push the Docker image to a repository.
There are two good reasons to prefer pushing an image somewhere:
As a downstream user, you can just docker run an image from a repository, without additional steps of checking it out or building it.
If you're using a compiled language (C, Java, Go, Rust, Haskell, ...) then the image will just contain the compiled artifacts and not the source code.
Think of this like any other software: for most open-source projects you can download the source from the Internet and compile it yourself, or you can apt-get install or brew install a built package using a package manager.
By the same analogy, many open-source things are distributed primarily as source code, and people who aren't the primary developer package and redistribute binaries. In this context, that's the same as adding a Dockerfile to the root of your application's GitHub repository, but not publishing an image yourself. If you don't want to set up a Docker Hub account or CI automation to push built images, but still want to have your source code and instructions to build the image be public, that's a reasonable decision.
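To make the first reason concrete, here is what the two consumption paths look like for a downstream user (a sketch; the repository and image names are hypothetical):

```sh
# Path 1: image published to a registry; one command for the consumer
docker run --rm example-org/example-app:1.0

# Path 2: only source + Dockerfile published; the consumer builds first
git clone https://github.com/example-org/example-app.git
cd example-app
docker build -t example-app .
docker run --rm example-app
```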
That is how it works. You need to put the configuration files in your code, i.e., the Dockerfile and docker-compose.yml.
I have an app on GitHub which uses a third-party open-source tool as a dependency. I want to containerize my app, so I've added a Dockerfile to my repo that triggers automatic builds on Docker Hub. That Docker build compiles the third-party tool and builds my app.
On Docker Hub I've configured rules to handle the versioning of my app based on new commits (source branch, e.g. docker-repo/myapp:latest) and releases (source tags, e.g. docker-repo/myapp:v1.0). However, I've statically pointed the Dockerfile at the latest version of the third-party tool, so my app is always built with the latest version of its dependency.
Now, here is my question: what is the best approach to handling the versions of that third-party tool with Docker Hub? I would like to handle the versioning of my app but also the versioning of its dependency. Should I create as many Dockerfiles as there are versions of the dependency I want to build?
I don't think there's a single best practice for this. Some images publish a tag for every version of their upstream dependencies; e.g., the official Python image has a tag for every supported version of Alpine and Debian. So it's not a matter of "should I"; it's something you may or may not want to do depending on the consumers of your image. In all likelihood, you might simply want to provide a latest image mapped to the latest version of your upstream dependency.
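If you do want per-version images without duplicating Dockerfiles, one common pattern (a sketch, assuming your Dockerfile declares something like ARG TOOL_VERSION and uses it when fetching the dependency) is to keep a single Dockerfile and vary the version at build time:

```sh
# Build one image per dependency version from a single Dockerfile
# (tag scheme and version numbers are hypothetical)
docker build --build-arg TOOL_VERSION=2.3.1 -t docker-repo/myapp:v1.0-tool2.3.1 .
docker build --build-arg TOOL_VERSION=2.4.0 -t docker-repo/myapp:v1.0-tool2.4.0 .
```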
I develop mostly desktop apps and class libraries, and I am struggling to find a way to host them using pipeline automation.
I know I can push them to a UNC share, but then people need to know that path to find them. It works, but it is not very user-friendly.
What I would like is a way to host them on Azure DevOps Server, like GitHub does: on GitHub there is a Releases section where you can go and download the binaries of a project. I know Azure DevOps is geared toward web apps, but has anyone found a way to use build/release pipelines to automate the hosting of binary files?
I think what you are looking for is Artifacts, which sits below Test Plans in your project's sidebar in Azure DevOps.
You can publish and download your binaries very easily there. Create a feed and push any kind of package to it, including built-in formats like NuGet and Maven; for arbitrary files there are Universal Packages.
You can find more useful information in the Azure Artifacts documentation, which explains what Azure Artifacts is and how you can publish and download your binaries via the CLI tool.
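For example, publishing arbitrary binaries as a Universal Package from the CLI looks roughly like this (a sketch; it assumes the Azure DevOps CLI extension is installed via `az extension add --name azure-devops`, and the organization, feed, and package names are placeholders):

```sh
# Publish a folder of build output as a Universal Package
az artifacts universal publish \
  --organization https://dev.azure.com/yourorg \
  --feed your-feed \
  --name your-app \
  --version 1.0.0 \
  --description "Desktop app binaries" \
  --path ./bin/Release

# Download it again later
az artifacts universal download \
  --organization https://dev.azure.com/yourorg \
  --feed your-feed \
  --name your-app \
  --version 1.0.0 \
  --path ./download
```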
I know that Docker Hub is there, but it allows only one private repository. Can I put these images on GitHub/Bitbucket?
In general you don't want to put large binary images (like videos or compiled files) under version control, as Git and the like were intended for 'source control', with the emphasis on source. Technically, there's nothing preventing you from putting Docker image files into Git (apart from the size limits of the service you're using).
One major issue is that GitHub/Bitbucket have no integration with Docker: neither provides the Docker Registry API that a Docker host needs to pull images on demand. This means you'll have to manually fetch the image files out of the version control system whenever you want to use them.
If you're going to do that, why not just use S3 or something like that?
If you really want 'version control' of your images (which Docker Hub does not offer) you'd need to look at something like: https://about.gitlab.com/2015/02/17/gitlab-annex-solves-the-problem-of-versioning-large-binaries-with-git/
Finally, Docker Hub only allows one free private repo; you can pay for more.
So the way to go is:
Create a repository on GitHub or Bitbucket
Commit and push your Dockerfile (with config files if necessary)
Create an automated build on Docker Hub which uses the GitHub/Bitbucket repo as its source.
In case you need it all to be private, you can self-host a Git service like GitLab or Gogs, and of course you can also self-host a Docker registry service for the images.
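Running your own registry is a one-liner with the official registry image; tagging and pushing to it then looks like this (a sketch; myapp is a placeholder, and the registry here listens on plain HTTP on localhost):

```sh
# Start a self-hosted registry, then tag and push an image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest
```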
Yes, since Sept. 2020.
See "Introducing GitHub Container Registry" from Kayla Ngan:
Since releasing GitHub Packages last year (May 2019), hundreds of millions of packages have been downloaded from GitHub, with Docker as the second most popular ecosystem in Packages behind npm.
Available today as a public beta, GitHub Container Registry improves how we handle containers within GitHub Packages.
With the new capabilities introduced today, you can better enforce access policies, encourage usage of a standard base image, and promote innersourcing through easier sharing across the organization.
Our users have asked for anonymous access for public container images, similar to how we enable anonymous access to public repositories of source code today.
Anonymous access is available with GitHub Container Registry today, and we’ve gotten things started today by publishing a public image of our own super-linter.
GitHub Container Registry is free for public images.
With GitHub Actions, publishing to GitHub Container Registry is easy. Actions automatically suggests workflows for you based on your work, and we’ve updated the “Publish Docker Container” workflow template to make publishing straightforward.
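In practice, pushing an image to GitHub Container Registry looks like this (a sketch; USERNAME and myapp are placeholders, and the personal access token needs the appropriate packages scopes):

```sh
# Log in to GitHub Container Registry with a personal access token
echo "$GITHUB_TOKEN" | docker login ghcr.io -u USERNAME --password-stdin

# Tag and push a local image (hypothetical names)
docker tag myapp:latest ghcr.io/username/myapp:latest
docker push ghcr.io/username/myapp:latest
```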
GitHub is in the process of releasing something similar to ECR or Docker Hub. At the time of writing, it's in the alpha phase and you can request access.
From GitHub:
"GitHub Package Registry is a software package hosting service, similar to npmjs.org, rubygems.org, or hub.docker.com, that allows you to host your packages and code in one place. You can host software packages privately or publicly and use them as dependencies in your projects."
https://help.github.com/en/articles/about-github-package-registry
I guess you are asking about Docker images. You can set up your own private registry to hold them. If you are not pushing only Dockerfiles but want to push whole images, then pushing images to GitHub is a very bad idea: consider a 600 MB Docker image; pushing it means putting 600 MB of binary data into a GitHub repo, and if you keep pushing more images there, it gets terribly bad.
Also, a Docker registry is smart about storage: it keeps only a single copy of each layer, and that layer can be referenced by multiple images. If you use GitHub, you lose this behavior; you will end up storing multiple copies of large files, which is really, really bad.
I would definitely suggest going with a private Docker registry rather than GitHub.
If there is a real need to put a Docker image on GitHub/Bitbucket, you can save it into an archive (using docker save, https://docs.docker.com/engine/reference/commandline/save/) and commit/push that archive to your repository.
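A minimal sketch of that workflow (image and file names are placeholders; for files this large you would probably want Git LFS):

```sh
# Export the image to a tar archive
docker save -o myapp.tar myapp:latest

# Commit the archive (Git LFS recommended for large binaries)
git add myapp.tar
git commit -m "Add myapp image archive"
git push

# On the consuming machine, after cloning/pulling:
docker load -i myapp.tar
```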
Can someone tell me how exactly developers can access Nexus OSS? Do I have to install it on a server separate from Jenkins, for example in a VM, and after that what do I have to do?
Thanks.
Nexus Repository Manager is a separate application from Jenkins. You can run it in the same VM or a different one. After installation, you just have to configure the tools you use to connect to it for downloads and publishing, and potentially configure repositories.
In terms of tools, the documentation has, for example, a full chapter about configuring Apache Maven and other clients to work with Maven repositories, along with example projects, and similar material for other formats.
All that and a lot more is covered in the documentation.
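As a quick illustration of the Maven side, once Nexus is running you can deploy to a hosted repository from the command line (a sketch; the URL and the server id `nexus` are assumptions, with credentials for that id kept in ~/.m2/settings.xml; the `id::layout::url` form matches maven-deploy-plugin 2.x):

```sh
# Deploy build artifacts to a hosted Nexus Maven repository
mvn deploy -DaltDeploymentRepository=nexus::default::http://nexus.example.com:8081/repository/maven-releases/
```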