I have created a local Docker registry. These are the steps I followed.
Creating the certificate files:
mkdir -p /etc/docker/certs.d/123.456.78.9:5000
cp domain.crt /etc/docker/certs.d/123.456.78.9:5000/ca.crt
cp domain.crt /usr/local/share/ca-certificates/ca.crt
update-ca-certificates
Installed the Docker registry, as given in the official guide:
docker run -d -p 5000:5000 --restart=always --name registry \
  -v $PWD/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
Pulling and pushing Docker images:
docker pull ubuntu:16.04
docker tag ubuntu:16.04 mydocker_registry/my_ubuntu
docker push mydocker_registry/my_ubuntu
My image push tries to access docker.io, so the error is obvious:
The push refers to repository [docker.io/mydocker_registry/my_ubuntu]
03901b4a2ea8: Preparing
denied: requested access to the resource is denied
My /etc/hosts file looks like this:
123.456.78.9 mydocker_registry
I feel I have missed some small step here, but I cannot figure out what it is.
Thanks in advance.
Try adding your registry as an insecure registry.
If you are using Linux, edit your daemon.json under /etc/docker.
Add
{
"insecure-registries" : ["registry-ip:registry-port"]
}
And run in terminal
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
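For example, a minimal sketch using the registry address from the question (123.456.78.9:5000 is an assumption; substitute your own host and port):
# Write the daemon config (note: this overwrites any existing /etc/docker/daemon.json):
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "insecure-registries" : ["123.456.78.9:5000"]
}
EOF
sudo systemctl restart docker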
You need to include your registry URL in the image tag. If the local registry URL is not part of your Docker image tag, Docker will push to the official registry by default.
That is why you see this in the push log:
The push refers to a repository [docker.io/mydocker_registry/my_ubuntu]
So all you need to do is add the full path of your Docker registry:
docker tag ubuntu:16.04 123.456.78.9:5000/mydocker_registry/my_ubuntu
docker push 123.456.78.9:5000/mydocker_registry/my_ubuntu
Here 123.456.78.9 refers to your local registry. If the registry is running locally, just replace 123.456.78.9 with localhost.
You can verify registry access in a browser; if the registry is accessible, you will be able to push:
https://myregistry.com/v2/_catalog
or
http://localhost:5000/v2/_catalog
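The same check works from the command line (a sketch; the host and the my_ubuntu repository name are taken from the question and may differ in your setup):
# List all repositories in the registry:
curl http://localhost:5000/v2/_catalog
# List the tags of one repository you have pushed:
curl http://localhost:5000/v2/my_ubuntu/tags/list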
OK, after days of reading and trying, I have fixed my problem, thanks to the help given by /r/docker redditors :-)
Please note that this works for your local domain only.
Creating certificate files for your domain.
Here my domain is registry.myregistry.com.
openssl req -newkey rsa:4096 -nodes -sha256 -keyout registry.myregistry.com.key -x509 -days 365 -out registry.myregistry.com.crt
mkdir -p /etc/docker/certs.d/registry.myregistry.com:443
Copy certificates files to appropriate locations.
cp registry.myregistry.com.crt /etc/docker/certs.d/registry.myregistry.com:443/ca.crt
cp registry.myregistry.com.crt /usr/local/share/ca-certificates/ca.crt
update-ca-certificates
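One note that is not in the original steps: newer Docker and Go releases validate the certificate's subjectAltName rather than its CN, so you may need to generate the cert with a SAN. A hedged variant of the openssl command above (-addext requires OpenSSL 1.1.1+):
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout registry.myregistry.com.key -x509 -days 365 \
  -out registry.myregistry.com.crt \
  -subj "/CN=registry.myregistry.com" \
  -addext "subjectAltName=DNS:registry.myregistry.com"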
Docker registry initialization
docker run -d -p 443:443 --restart=always --name registry \
  -v $PWD/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.myregistry.com.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.myregistry.com.key \
  registry:2
Pulling and pushing Docker images to the registry:
docker pull alpine:latest
docker tag alpine:latest registry.myregistry.com:443/myalpine
docker push registry.myregistry.com:443/myalpine
No errors; it pushes successfully.
Still to be done: allowing pulls from other users on the same network.
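For that remaining step, a sketch of what each client machine on the network would need (the paths and the registry IP are assumptions based on the setup above):
# Trust the self-signed cert for this registry on the client:
mkdir -p /etc/docker/certs.d/registry.myregistry.com:443
scp user@registry-host:/path/to/registry.myregistry.com.crt \
    /etc/docker/certs.d/registry.myregistry.com:443/ca.crt
# Make the registry name resolvable, then pull:
echo "123.456.78.9 registry.myregistry.com" >> /etc/hosts
docker pull registry.myregistry.com:443/myalpine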
Related
I have got Verdaccio running with Docker on my system using:
docker pull verdaccio/verdaccio
docker run -it --rm --name verdaccio -p 4873:4873 verdaccio/verdaccio
I can then see my Verdaccio at localhost:4873.
I add a user:
npm adduser --registry http://localhost:4873/
I go to a simple package repo and publish it:
npm publish --registry http://localhost:4873/
I then go to the browser and see that my package is in the list on localhost.
I then rename my Docker container:
docker rename 4d2b---3692 my-container
On docker hub I have a repo called myuser/my-container
I then commit my container
docker commit 93ba9d----e myuser/my-container
Then I push it to Docker Hub (I did log in already)
docker push myuser/my-container
I see that it is updated on my Docker Hub.
I remove everything related to Docker on my computer
docker rmi --force 93ba9d----e
I then see nothing when I run these commands
docker ps -a
docker images
Then I try and pull my container
docker pull myuser/my-container
I then run my container
docker run -it --rm --name verdaccio -p 4873:4873 myuser/my-container
I can then see the Verdaccio page on localhost:4873; however, I cannot see the published package.
Please let me know what I am missing, thanks.
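One thing worth checking here (an assumption about the stock verdaccio/verdaccio image, not something stated above): docker commit only snapshots the container filesystem, not data stored in volumes, so if the image declares its storage directory as a volume, the published packages would not survive the commit/push round trip. You can inspect that like this:
# Does the image declare any volumes?
docker inspect --format '{{json .Config.Volumes}}' verdaccio/verdaccio
# If the storage path (e.g. /verdaccio/storage) is listed, the published
# package data lived in a volume and was not captured by docker commit.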
How do you set up a private, secure docker-registry?
I have installed it via Helm.
Now how can I make it secure (TLS certs), so that I can push to and pull from the registry from Docker and from a Kubernetes deployment?
I can see that there is a Helm configuration:
tlsSecretName: Name of the secret for TLS certs
Update - current status:
I was able to get cert-manager working and install with TLS:
helm install stable/docker-registry --set tlsSecretName=example-com-tls
I am not strong in certificates - but I am unclear about the following:
1.
Can I now create an Ingress (with a secret for the cert) that will only accept incoming requests with that certificate? I will look at the suggested link from @xzesstence tomorrow.
2.
I guess I need to tell docker push where to find the certificate?
Maybe this (I will try it tomorrow): https://docs.docker.com/engine/security/certificates/
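For point 2, a minimal sketch of what that page describes (registry.example.com and the local ca.crt filename are assumptions):
# Make the Docker engine trust the registry's CA for this hostname:
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp ca.crt /etc/docker/certs.d/registry.example.com/ca.crt
# After that, docker push/pull against registry.example.com should verify TLS.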
Check out the official Docker Tutorials
https://docs.docker.com/registry/deploying/
and especially the point
Get a certificate
So, in short, you need to get a certificate and place it in /certs (or change the folder mount, -v, in the docker run command below).
Also check the certificate name: either rename it to domain.crt, or change the filename in the docker run command.
Then run:
docker run -d \
--restart=always \
--name registry \
-v "$(pwd)"/certs:/certs \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2
If you don't have a certificate, you can use Let's Encrypt:
https://letsencrypt.org/
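For instance, a hedged sketch with certbot (the domain and standalone mode are assumptions; standalone needs port 80 to be free):
sudo certbot certonly --standalone -d registry.example.com
# certbot's default output paths, usable as domain.crt / domain.key above:
#   /etc/letsencrypt/live/registry.example.com/fullchain.pem
#   /etc/letsencrypt/live/registry.example.com/privkey.pem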
Maybe you want to check out this start script with Let's Encrypt certs (untested on my side).
The advantage of this is that the integrated Let's Encrypt service can renew the certificate automatically:
https://gist.github.com/PieterScheffers/63e4c2fd5553af8a35101b5e868a811e
Edit:
Since you are using Docker on a Kubernetes cluster, check out this great tutorial:
https://medium.com/@jmarhee/in-cluster-docker-registry-with-tls-on-kubernetes-758eecfe8254
Maybe I missed something, but I made a local Docker image. I have a 3-node swarm up and running: two workers and one manager. I use labels as a constraint. When I launch a service to one of the workers via the constraint, it works perfectly if that image is public.
That is, if I do:
docker service create --name redis --network my-network --constraint node.labels.myconstraint==true redis:3.0.7-alpine
Then the redis service is sent to one of the worker nodes and is fully functional. Likewise, if I run my locally built image WITHOUT the constraint, it gets scheduled to the manager (which is also a worker) and runs perfectly well. However, when I add the constraint, it fails on the worker node; from docker service ps 2l30ib72y65h I see:
... Shutdown Rejected 14 seconds ago "No such image: my-customized-image"
Is there a way to make the workers have access to the local images on the manager node of the swarm? Does it use a specific port that might not be open? If not, what am I supposed to do - run a local repository?
The manager node doesn't share out its local images to the rest of the swarm. You need to spin up a registry server (or use hub.docker.com). The effort needed for that isn't very significant:
# first create a user, updating $user for your environment:
if [ ! -d "auth" ]; then
  mkdir -p auth
fi
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
-v `pwd`/auth:/auth \
--entrypoint htpasswd registry:2 -B /auth/htpasswd $user
chmod 444 auth/htpasswd
# then spin up the registry service listening on port 5000
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Local Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
# then push your image
docker login localhost:5000
docker tag my-customized-image localhost:5000/my-customized-image
docker push localhost:5000/my-customized-image
# then spin up the service with the new image name
# replace registryhost with ip/hostname of your registry Docker host
docker service create --name custom --network my-network \
--constraint node.labels.myconstraint==true --with-registry-auth \
registryhost:5000/my-customized-image
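To sanity-check that the push landed, you can query the catalog (a quick sketch; $user is the account created above and you will be prompted for its password):
curl -u "$user" http://localhost:5000/v2/_catalog
# Expected output, roughly: {"repositories":["my-customized-image"]}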
For me, this step-by-step guide worked. However, it is insecure:
# Start your registry
$ docker run -d -p 5000:5000 --name registry registry:2
# Tag the image so that it points to your registry
$ docker tag my_existing_image localhost:5000/myfirstimage
# Push it to local registry/repo
$ docker push localhost:5000/myfirstimage
# For verification you can use this command:
$ curl -X GET http://localhost:5000/v2/_catalog
# It will print out all images on repo.
# On the private registry machine, add additional parameters to the dockerd service file to enable the insecure repo:
ExecStart=/usr/bin/dockerd --insecure-registry IP_OF_CURRENT_MACHINE:5000
# Flush changes and restart Docker:
$ systemctl daemon-reload
$ systemctl restart docker.service
# On the client machine we should tell Docker that this private repo is insecure, so create or modify the file '/etc/docker/daemon.json':
{ "insecure-registries":["hostname:5000"] }
# Restart docker:
$ systemctl restart docker.service
# In swarm mode, you need to point to that registry, so use the hostname instead, for example: hostname:5000/myfirstimage
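A hedged aside on the ExecStart line above: rather than editing the packaged unit file directly, a systemd drop-in keeps the change upgrade-safe (an alternative, not part of the original steps):
sudo systemctl edit docker
# In the editor, add the following (the empty ExecStart= clears the packaged one):
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/dockerd --insecure-registry IP_OF_CURRENT_MACHINE:5000
sudo systemctl daemon-reload
sudo systemctl restart docker.service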
Images have to be downloaded to the local cache on each node. The reason is that if you store all of your images on one node only and that node goes down, swarm would have no way to spawn new tasks (containers) on the other nodes.
I personally just pull a copy of all the images onto each node before starting the services. That can be done in a bash script or a Makefile (e.g. below):
pull:
	@for node in $$NODE_LIST; do \
		OPTS=$$(docker-machine config $$node); \
		set -x; \
		docker $$OPTS pull postgres:9.5.2; \
		docker $$OPTS pull elasticsearch:2.3.3; \
		docker $$OPTS pull schickling/beanstalkd; \
		docker $$OPTS pull gliderlabs/logspout; \
		# etc ... \
		set +x; \
	done
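Usage might then look like this (the node names are hypothetical; each one must be known to docker-machine):
NODE_LIST="manager1 worker1 worker2" make pull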
I have learned from that tutorial, and I have created a Docker mirror with this command:
docker run -d -p 5555:5000 -e STORAGE_PATH=/mirror -e STANDALONE=false -e MIRROR_SOURCE=https://registry-1.docker.io -e MIRROR_SOURCE_INDEX=https://index.docker.io -v /Users/v11/Documents/docker-mirror:/mirror --restart=always --name mirror registry
And it succeeded. Then I started my Docker daemon using this command:
docker --insecure-registry 192.168.59.103:5555 --registry-mirror=http://192.168.59.103:5555 -d
Then I used this command to pull an image:
docker pull hello-world
Then it threw an error in the log; the details are:
ERRO[0012] Unable to create endpoint for http://192.168.59.103:5555/:
invalid registry endpoint https://192.168.59.103:5555/v0/: unable to
ping registry endpoint https://192.168.59.103:5555/v0/ v2 ping attempt
failed with error: Get https://192.168.59.103:5555/v2/: EOF v1 ping
attempt failed with error: Get https://192.168.59.103:5555/v1/_ping:
EOF. If this private registry supports only HTTP or HTTPS with an
unknown CA certificate, please add --insecure-registry
192.168.59.103:5555 to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the
flag; simply place the CA certificate at
/etc/docker/certs.d/192.168.59.103:5555/ca.crt
As you can see, it tells me to add '--insecure-registry 192.168.59.103:5555', but I already added that when starting the Docker daemon. Does anyone have an idea about this?
You're probably using boot2docker?
Could you try this:
$ boot2docker init
$ boot2docker up
$ boot2docker ssh "echo $'EXTRA_ARGS=\"--insecure-registry <YOUR INSECURE HOST>\"' | sudo tee -a /var/lib/boot2docker/profile && sudo /etc/init.d/docker restart"
Taken from here
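To verify the flag actually reached the daemon inside the boot2docker VM, a quick check (hedged; command availability depends on your boot2docker version):
boot2docker ssh "cat /var/lib/boot2docker/profile"
boot2docker ssh "ps aux | grep -i docker"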
Does the google/docker-registry container exist solely to push/pull images from Google Cloud Storage? I am currently following their instructions on Git and have the docker-registry container running, but can't seem to pull from my bucket.
I started it with:
sudo docker run -d -e GCS_BUCKET=mybucket -p 5000:5000 google/docker-registry
I have a .tar Docker image stored in Cloud Storage, at mybucket/imagename.tar. However, when I execute:
sudo docker pull localhost:5000/imagename.tar
It results in:
2014/07/10 19:15:50 HTTP code: 404
Am I doing this wrong?
You need to docker push to the registry instead of copying your image tar manually.
From where your image is:
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
Then from the place you want to run the image from (ex: GCE):
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
When using google/docker-registry, it is preconfigured to use Google buckets.
It should work with any storage (if the configuration is overridden), but its purpose is to be used with the Google infrastructure.
The tar file of an exported image is only meant for manually moving images between Docker hosts when there is no registry available.
You should not upload tar files to the bucket.
To upload images, push to the docker-registry container; it will then save the image in the bucket.
The Google Compute Engine instance running the docker-registry container must be configured with read/write access to the bucket.
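If you already have the .tar export mentioned in the question, a sketch of the path back into the registry (the filenames are assumptions; use docker load for docker save output, or docker import for docker export output):
# Bring the tar into the local image cache, then push through the registry container:
docker load -i imagename.tar
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename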