Docker: google/docker-registry container usage

Does the google/docker-registry container exist solely to push/pull images from Google Cloud Storage? I am currently following the instructions in their GitHub repository and have the docker-registry container running, but can't seem to pull from my bucket.
I started it with:
sudo docker run -d -e GCS_BUCKET=mybucket -p 5000:5000 google/docker-registry
I have a .tar Docker image stored in Cloud Storage, at mybucket/imagename.tar. However, when I execute:
sudo docker pull localhost:5000/imagename.tar
It results in:
2014/07/10 19:15:50 HTTP code: 404
Am I doing this wrong?

You need to docker push to the registry instead of copying your image tar manually.
From the machine where your image is:
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
Then, from the machine where you want to run the image (e.g. a GCE instance):
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
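If you want to confirm that the pushed image actually ended up in Cloud Storage, one way (a sketch, assuming the Cloud SDK with gsutil is installed locally and bucketname is the bucket used above) is to list the bucket contents:
# list the registry data the push created in the bucket
gsutil ls -r gs://bucketname/ | head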

The google/docker-registry container is preconfigured to use Google Cloud Storage buckets.
It should work with any storage backend if the configuration is overridden, but its purpose is to be used with the Google infrastructure.
A tar file of an exported image is for manually moving images between Docker hosts when no registry is available; you should not upload tar files to the bucket.
To upload images, push them to the docker-registry container, and it will save them in the bucket.
The Google Compute Engine instance that is running the docker-registry container must be configured with read/write access to the bucket.
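As a sketch of that last point, the instance can be created with a Cloud Storage read/write scope up front (the instance name and zone below are placeholders):
# create the GCE instance that will run the registry with read/write access to Cloud Storage
gcloud compute instances create registry-host \
    --zone us-central1-a \
    --scopes storage-rw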

Related

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand its configuration, etc. After a while I decided to remove it safely, following this thread, in order to use it in Docker.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
and I saw nginx's homepage at localhost.
Then I deleted all nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
But now I've restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I completely remove it?
Note: I'm new to Docker and might have missed a lot of things. Let me know which extra Docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc.) are under the folder /myhost/nginx/html
your nginx configuration is at /myhost/nginx/nginx.conf
Solution
Map your files (as a volume) on the fly, from outside the Docker image, via the Docker CLI.
This is the command:
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Or, copy your files into the Docker image by building your own image with a Dockerfile.
This is your Dockerfile under /myhost/nginx:
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
These are the commands to build your Docker image:
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your Docker image:
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
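With either approach you can then check that your own content, not the stock nginx page, is being served on the published port, for example:
# fetch the page through the published port; it should return your index.html
curl http://localhost:8080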

Docker container to use same Nexus volume?

I run the following:
mkdir /some/dir/nexus-data && chown -R 200 /some/dir/nexus-data
chown -R 200 /Users/user.name/dockerVolume/nexus
docker run -d -p 8081:8081 --name nexus -v /some/dir/nexus-data:/nexus-data sonatype/nexus3
Now let's say I upload an artifact to Nexus, and then stop the nexus container.
If I want to open another Nexus container on port 8082, what Docker command do I run so that it uses the same volume as the one on port 8081 (so that when I run this container, it already contains the artifact that I uploaded before)?
Basically, I want both Nexus containers to use the same storage, so that if I upload an artifact to one port, the other port will also have it.
I ran this command, but it didn't seem to work:
docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
A bind mount, which is what you're using as a "volume", has limited functionality compared to an explicit Docker volume.
I believe the --volumes-from flag only works with volumes managed by Docker.
In order to share the volume between containers with this flag, you can have Docker create a named volume for you in your run command.
Example:
$ docker run -d -p 8081:8081 --name nexus -v nexus-volume:/nexus-data sonatype/nexus3
The above command will create a Docker managed volume for you with the name nexus-volume. You can view the details of the created volume with the command $ docker volume inspect nexus-volume.
Now when you want to run a second container with the same volume, you can use the --volumes-from flag as you desire.
So doing:
$ docker run --name=nexus2 -p 8082:8081 --volumes-from nexus sonatype/nexus3
Should give you your desired behaviour.
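As an alternative to --volumes-from, here is a sketch of mounting the same named volume directly in the second container (as with --volumes-from, avoid running both containers against the same data directory at the same time):
# reuse the Docker-managed nexus-volume by name instead of referencing the first container
docker run -d --name=nexus2 -p 8082:8081 -v nexus-volume:/nexus-data sonatype/nexus3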

Docker: Swarm worker nodes not finding locally built image

Maybe I missed something, but I made a local docker image. I have a 3 node swarm up and running. Two workers and one manager. I use labels as a constraint. When I launch a service to one of the workers via the constraint it works perfectly if that image is public.
That is, if I do:
docker service create --name redis --network my-network --constraint node.labels.myconstraint==true redis:3.0.7-alpine
Then the redis service is sent to one of the worker nodes and is fully functional. Likewise, if I run my locally built image WITHOUT the constraint, since my manager is also a worker, it gets scheduled to the manager and runs perfectly well. However, when I add the constraint it fails on the worker node; from docker service ps 2l30ib72y65h I see:
... Shutdown Rejected 14 seconds ago "No such image: my-customized-image"
Is there a way to make the workers have access to the local images on the manager node of the swarm? Does it use a specific port that might not be open? If not, what am I supposed to do - run a local repository?
The manager node doesn't share out its local images to the other nodes. You need to spin up a registry server (or use hub.docker.com). The effort needed for that isn't very significant:
# first create a user, updating $user for your environment:
if [ ! -d "auth" ]; then
mkdir -p auth
fi
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
-v `pwd`/auth:/auth \
--entrypoint htpasswd registry:2 -B /auth/htpasswd $user
chmod 444 auth/htpasswd
# then spin up the registry service listening on port 5000
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Local Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
# then push your image
docker login localhost:5000
docker tag my-customized-image localhost:5000/my-customized-image
docker push localhost:5000/my-customized-image
# then spin up the service with the new image name
# replace registryhost with ip/hostname of your registry Docker host
docker service create --name custom --network my-network \
--constraint node.labels.myconstraint==true --with-registry-auth \
registryhost:5000/my-customized-image
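After creating the service, you can check that its tasks were scheduled on the constrained worker and that the image pull succeeded:
# inspect task placement and state for the service created above
docker service ps custom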
For me, this step-by-step guide worked. However, it is insecure:
# Start your registry
$ docker run -d -p 5000:5000 --name registry registry:2
# Tag the image so that it points to your registry
$ docker tag my_existing_image localhost:5000/myfirstimage
# Push it to local registry/repo
$ docker push localhost:5000/myfirstimage
# For verification you can use this command:
$ curl -X GET http://localhost:5000/v2/_catalog
# It will print out all images on repo.
# On private registry machine add additional parameters to enable insecure repo:
ExecStart=/usr/bin/dockerd --insecure-registry IP_OF_CURRENT_MACHINE:5000
# Flush changes and restart Docker:
$ systemctl daemon-reload
$ systemctl restart docker.service
# On the client machine we should tell Docker that this private repo is insecure, so create or modify the file '/etc/docker/daemon.json':
{ "insecure-registries":["hostname:5000"] }
# Restart docker:
$ systemctl restart docker.service
# In swarm mode, you need to point to that registry, so use the host name instead, for example: hostname:5000/myfirstimage
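A quick sanity check, using the same hostname:5000 prefix as above, is to pull the image manually on each worker before creating the service:
# run on a worker node; a successful pull confirms the insecure-registry setting took effect
docker pull hostname:5000/myfirstimage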
Images have to be downloaded to the local cache on each node. The reason is that if you store all of your images on one node only and that node goes down, swarm would have no way to spawn new tasks (containers) on the other nodes.
I personally just pull a copy of all the images on each node before starting the services. That can be done in a bash script or Makefile (e.g. the one below).
pull:
	@for node in $$NODE_LIST; do \
	OPTS=$$(docker-machine config $$node); \
	set -x; \
	docker $$OPTS pull postgres:9.5.2; \
	docker $$OPTS pull elasticsearch:2.3.3; \
	docker $$OPTS pull schickling/beanstalkd; \
	docker $$OPTS pull gliderlabs/logspout; \
	# etc ... \
	set +x; \
	done
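A possible invocation, assuming NODE_LIST holds your docker-machine node names (the names below are placeholders):
# pre-pull the images on every node before deploying the services
NODE_LIST="manager worker1 worker2" make pull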

Docker Private registry - access registry images after container is removed

I created a private docker registry and successfully pushed and pulled some images.
The thing is, the private registry runs in a container and when I remove the container using
sudo docker rm [container name]
I realize that all my pushed images are lost after a new container has been created.
Is there a way to keep the images in a private registry even when that private registry container is removed and another created?
If you start your registry using the dev or local config (assuming the default sample config), you can map the storage_path directory to a directory on the host machine using a volume mount -v.
Using a modified version of the quick start example:
docker run \
-d --name registry \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-v /tmp/registry:/registry \
-p 5000:5000 \
registry
This will store the registry's data on the host machine in /tmp/registry, so it persists even when the container is stopped or removed.
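For comparison, the same idea applies with the newer registry:2 image, which stores its data under /var/lib/registry by default; a rough equivalent would be:
# persist registry:2 data on the host so it survives container removal
docker run -d --name registry \
    -v /tmp/registry:/var/lib/registry \
    -p 5000:5000 \
    registry:2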

Boot2Docker to Google Compute Engine VM: saving Docker container

I am running Boot2Docker v1.0.1 on Windows, and wish to fire up a Docker container I have created on a Google Compute Engine VM.
In order to do so, I need to save the container and upload it to Google Cloud Storage.
I issue the following command:
docker save --output=mycontainer.tar mycontainer:latest
The command completes without error. However, I cannot find the mycontainer.tar file anywhere on my hard drive.
Does anyone have any experience with this? If not, is there a better way to run containers on GCE VMs?
You can run google/docker-registry locally to push your container images to GCS.
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
And then run it on GCE to pull your containers from GCS.
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
I understand that you are using Boot2Docker on Windows.
On a similar setup, using OS X and boot2docker 1.1.0, the following works:
docker save --output mycontainer.tar mycontainer:latest
As does redirecting standard output:
docker save mycontainer:latest > mycontainer.tar
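Once the tar file exists, one option that skips Cloud Storage entirely is to copy it to the GCE VM and load it there; a sketch, with user@gce-vm as a placeholder for your instance:
# copy the exported image to the VM, load it into its Docker daemon, and start it
scp mycontainer.tar user@gce-vm:/tmp/
ssh user@gce-vm "docker load -i /tmp/mycontainer.tar && docker run -d mycontainer:latest"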
GCE now allows you to store Docker images for your projects using the gcloud command.
You can now run: $ gcloud preview docker push gcr.io/YOUR-PROJECT/IMAGE-NAME
Source: https://cloud.google.com/tools/container-registry/#pushing_to_the_registry
