Pull Docker from my private docker-registry without specifying the host - docker

I am using docker-registry to pull my own Docker images, but I want to do so without having to specify the host. Meaning:
instead of writing:
docker pull <host>:<port>/<dockerImage>
I want to write:
docker pull <dockerImage>
and have it first try to pull the image from my private registry before falling back to the public Docker registry.
Is it possible?
I tried changing DOCKER_INDEX_URL to [my_docker_registry_host]:[port], but it doesn't work.

You can modify or add the following to /etc/sysconfig/docker:
ADD_REGISTRY='--add-registry 192.168.0.169:5000'
INSECURE_REGISTRY='--insecure-registry 192.168.0.169:5000'
then modify /etc/systemd/system/docker.service or /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --registry-mirror=http://192.168.0.169:5000
When you pull an image, Docker will pull it from your private registry first, and then from Docker Hub if it is not found in your private registry. I am working on CentOS 7 with Docker 1.12.
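On newer Docker versions the same settings can usually be expressed in /etc/docker/daemon.json instead of the sysconfig and systemd flags; a minimal sketch, assuming the same 192.168.0.169:5000 registry running as a pull-through mirror:
{
  "registry-mirrors": ["http://192.168.0.169:5000"],
  "insecure-registries": ["192.168.0.169:5000"]
}
After editing the file, restart the daemon with sudo systemctl restart docker; a plain docker pull ubuntu should then check the mirror before Docker Hub.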

No, I think it is not supported yet (1.1.2 as of writing). I guess the main reasons are:
The local private registry is not a mirror of the public registry, so the logic is not "if it can't be found locally, go to the public one". They are totally different.
Therefore, if we set up our own private Docker repository but keep the same naming, it will get confusing.
When you run docker images and you see ubuntu, how do you know whether it came from your local private registry or the public one?
UPDATE: add one sample case
Also, suppose we have a Dockerfile that uses tomcat7 as its base:
FROM tomcat7
How do you know where this base image comes from?
If we want a strict process or control over the mapping between the private repo and the public repo, it will get complicated.
Technically it is possible, but you gain little, and it loses the power of Docker (the community).
It is a similar case for other package systems that demand a unique name for each package.
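As a side note, the ambiguity in the FROM tomcat7 example can be avoided by fully qualifying private images with their registry host, so the origin is always explicit; a small illustration, assuming a hypothetical registry at myregistry.example.com:5000:
# Ambiguous: is this the public image or a same-named private one?
FROM tomcat7
# Explicit: the registry host is part of the image reference
FROM myregistry.example.com:5000/tomcat7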

Related

How to get transferable docker compose stack without dockerhub

I have a few Docker images composed together in a stack using docker-compose.yml.
Now I want to transfer the whole Docker Compose stack to another host machine without uploading it to Docker Hub,
and deploy it on Docker Swarm.
I saw there is a thing called docker compose bundle; would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images is on every machine for consistency. Using a registry will be vastly easier.
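If you do go the manual route anyway, a minimal sketch of the save/load approach, assuming SSH access to the other host and an image named myapp:1.0:
# Stream the image straight to the other machine
docker save myapp:1.0 | ssh user@other-host 'docker load'
# Or go through an intermediate tarball
docker save -o myapp.tar myapp:1.0
scp myapp.tar user@other-host:
ssh user@other-host 'docker load -i myapp.tar'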
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that you can only have one private repository for free; the rest must be public or paid.
Thankfully, there are other (free) alternatives:
Deploy your own private registry. Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it as a "simple" Docker registry and push and pull your Docker images there.

Is it possible to store the public docker images in private docker registry

Due to company rules, our VMs cannot access the internet (and cannot use an HTTP proxy either). I installed a Kubernetes cluster by downloading RPM packages and Docker images as below:
k8s.gcr.io/kube-apiserver-amd64:v1.11.0
k8s.gcr.io/kube-controller-manager-amd64:v1.11.0
k8s.gcr.io/kube-scheduler-amd64:v1.11.0
k8s.gcr.io/kube-proxy-amd64:v1.11.0
k8s.gcr.io/pause-amd64:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3
quay.io/coreos/flannel:v0.10.0-amd64
Then I installed the RPM packages and loaded these Docker images into all the VMs. This successfully installs Kubernetes, although it is hard work.
My question is: can I use a private Docker registry to store these k8s.gcr.io, quay.io, and other public registries' images, so that each VM's docker.service can pull them like my private images?
There are several solutions:
Since your machines don't have an internet connection and you want to use the same image names, you need to provide internet access to them. It could be done with any proxy server, like Squid or something else. In this case, you'll need to reconfigure Docker to work behind the proxy (see the sketch at the end of this answer).
Deploy any local registry solution (e.g. Artifactory) and then use it as a mirror for Docker.
P.S: I am not insisting on using Artifactory, but it could be very convenient. Artifactory provides the ability to create a virtual registry: you can aggregate other registries (k8s.gcr.io, quay.io, whatever) "under" this virtual one and then use it as a Docker mirror.
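For the proxy option mentioned above, Docker is usually pointed at the proxy through a systemd drop-in; a rough sketch, assuming a hypothetical proxy at http://proxy.example.com:3128:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart docker.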
Yes, you should be able to, as long as you have a machine connected to both the public repo and your private repo. You pull the image down from the public registry, tag it, and push it to your repo with docker push. Here is an example with ubuntu from https://blog.docker.com/2013/07/how-to-use-your-own-registry/:
# First, make sure you have the "ubuntu" repository:
docker pull ubuntu
# Then, find the image id that corresponds to the ubuntu repository
docker images | grep ubuntu | grep latest
ubuntu latest 8dbd9e392a96 12 weeks ago 263 MB (virtual 263 MB)
# Almost there!
# Tag to create a repository with the full registry location.
# The location becomes a permanent part of the repository name.
docker tag 8dbd9e392a96 localhost.localdomain:5000/ubuntu
# Finally, push the new repository to its home location.
docker push localhost.localdomain:5000/ubuntu
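The same pull/tag/push flow applies to the Kubernetes images from the question, using the newer docker tag name:tag syntax; a sketch, assuming a private registry reachable at registry.local:5000:
# On a machine with internet access
docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.0
docker tag k8s.gcr.io/kube-apiserver-amd64:v1.11.0 registry.local:5000/kube-apiserver-amd64:v1.11.0
docker push registry.local:5000/kube-apiserver-amd64:v1.11.0
# On the isolated VMs
docker pull registry.local:5000/kube-apiserver-amd64:v1.11.0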

docker difference between private registry and the local image registry?

I have something on my mind that is bugging me. When running docker images I see a list of the local images in my Docker environment. When pulling images I pull them from a registry, and more specifically pull a specified tag managed by the repository.
so there is the registry as the big hub that stores all image repositories,
and the repository stores commits/tagged versions of a specific image
But what is docker images then? It's a registry as well, isn't it? It holds all the images that I've built locally or pulled.
If my claim is valid:
How does it comply with running a private registry (mentioned here https://docs.docker.com/registry/deploying/)
Running this docker run -d -p 5000:5000 --restart=always --name registry registry:2
Would deploy this new registry into my docker images...
So now I have a registry within my registry... registception?
What is the difference besides the custom registry is deployable?
It's not a local image registry, as other answers have pointed out; it is an image cache. The purpose of the image cache is to avoid having to download the same image every time you do a docker run.
docker images simply lists all the cached images on the machine. Whenever there is a newer image on the registry, the image (some layers) is downloaded and cached when doing docker pull .... Also, when a layer exists in the local cache, docker tells you so, for example:
Step 2/2 : CMD /bin/bash
---> Using cache
On the other hand, a docker registry is a central repository to store images. It provides a remote API to pull and push images. The local image cache does not have this feature. Images in the local cache are read and stored using local docker commands that simply read files under /var/lib/docker/...
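You can see the difference directly: the registry exposes an HTTP API, while the local cache is only reachable through local docker commands; a quick sketch, assuming a registry running at localhost:5000:
# Remote API of a registry (Docker Registry HTTP API v2)
curl http://localhost:5000/v2/_catalog
# e.g. {"repositories":["my-image"]}
# The local cache has no API of its own; you inspect it with the CLI
docker images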
To make things clear, think of Docker remote registries (such as Docker Hub) as remote Git repositories. You pull the Docker images (like git repositories) that you need and you play with them.
Like remote Git repositories on GitHub or Bitbucket, Docker registries can be public or private. Public registries are for public usage and open-source projects; examples include Docker Hub. Private registries are for organizational or personal use; examples include Azure Container Registry, EC2 Container Registry, etc.
The official Docker Registry image is just a Docker registry for your own system; you can't share its images with others unless you have a server or a public internet IP address. Think of it as Bonobo Private Git Server for Windows.
Your local image registry, as you mentioned, consists of all the images you have built locally or pulled from a registry, public or private. You can see it as a local cache of images that you can reuse without downloading or rebuilding them each time.
What running the registry actually does is spin up a server that implements the Docker Registry API, which allows users to push, pull, and delete images and handles the storage of these images and their layers. See it as a central repository, like npm or Nexus.
For example, if you run the registry at your.registry.com:5000
You can do things like
docker build -t your.registry.com:5000/my-image:tag .
docker push your.registry.com:5000/my-image:tag
So others who have access to your server can pull it:
docker pull your.registry.com:5000/my-image:tag

Is docker phasing out some sites and services or something?

Aim: to deploy a private registry
Discussion
private repository
I have read multiple posts and now I am confused. I have tried to run a Docker container that should serve a private Docker registry, but it returns an empty UI. Some posts indicate that it has been deprecated, but others do not.
images
I used to navigate to dockerhub, but now there is https://store.docker.com?
Questions
Has docker registry been phased out?
Should one now use https://store.docker.com instead of Docker Hub?
Docker Hub still exists and will remain for open source projects as it always has.
Docker Store is a new offering for commercial images.
The standalone registry does not have a UI; I don't believe it ever has. It's intended to be accessed with the docker push and docker pull commands.

Building a docker container: will it be public?

I want to build a Docker container from a Dockerfile. It contains a private project.
My question is quite simple, and I can't find a clear answer: after having built my image, will it automatically be sent as a public image to a Docker repository?
I want to build this container for private use and it's not intended to be retrieved on the public Internet.
docker build -f Dockerfile is a local operation.
You can docker push an image onto Docker Hub, but that is a separate operation and requires a Docker Hub account.
No, it will not automatically be published anywhere.
If you push the image onto Docker Hub, then it would be public unless you marked that repository private. (You get one private repository for free, though you can have more than one if you pay for an account.)
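A minimal sketch of that split between the local build and the explicit, optional push, assuming a hypothetical Docker Hub repository yourname/private-project marked private:
# The build stays entirely on your machine
docker build -t yourname/private-project:1.0 .
# Nothing is published unless you explicitly log in and push
docker login
docker push yourname/private-project:1.0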
