pushing docker image to a docker registry on a different host machine - docker

I have two computers, both with Docker installed. I want to copy a Docker image I built to the other host, which has no internet access but is on the local LAN.
So on my machine (using the hello-world image as an example):
macHost:~ ciasto$ docker tag hello-world 192.168.0.6:5000/hello-world
then I try docker push 192.168.0.6:5000/hello-world
but this throws error:
The push refers to a repository [192.168.0.6:5000/hello-world]
Get https://192.168.0.6:5000/v2/: dial tcp 192.168.0.6:5000: getsockopt: connection refused
So I tried without the 5000 port:
$ docker push 192.168.0.6/hello-world-2
but that too throws the same error:
The push refers to a repository [192.168.0.6/hello-world-2]
Get https://192.168.0.6/v2/: dial tcp 192.168.0.6:443: getsockopt: connection refused
What am I doing wrong?

The Docker Registry is a specific piece of software; you can't directly docker push an image to another system.
The best workflow is almost certainly to write a Dockerfile that describes how to build your image. This is a simple text file, not totally unlike a shell script, that you'd typically add to your source code repository. Then on the other system you could check out the repository and run docker build and get a functionally equivalent image.
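For instance, a minimal Dockerfile might look something like this (the base image and file names here are only placeholders for whatever your application actually needs):
FROM alpine:3.19
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
Then on the other system, after checking out the repository, you'd run docker build -t me/imagename . and get an equivalent image.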
If you have a semi-isolated network you can always run your own registry. Say you set up your local DNS such that the host name my-registry.local resolves to 192.168.0.123; then you can docker tag your local images as my-registry.local/me/imagename, docker push them from one system, and docker pull them from the other.
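As a rough sketch (the host name, image names, and port are assumptions, not anything from the question), you could run Docker's own registry image on 192.168.0.123 and then tag, push, and pull against it:
# on the registry host (192.168.0.123)
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# on the build machine
docker tag hello-world my-registry.local:5000/me/hello-world
docker push my-registry.local:5000/me/hello-world
# on the other machine
docker pull my-registry.local:5000/me/hello-world
Note that unless the registry is served over TLS with a trusted certificate, both Docker daemons will also need it listed as an insecure registry before push and pull will work.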
The lowest-setup-effort, least-reproducible, highest-long-term-effort path is to docker save the image on the first system, scp or otherwise transfer it to the second system, and then docker load it there. If you're motivated, you can even do it in one step:
docker save me/imagename | ssh elsewhere docker load
You're forced to do this if the "elsewhere" system is actually disconnected from the network and the "copy it to the other system" step involves copying the image file on to removable media. If you're doing this at all regularly, though, or have more than one target system, you'll probably find setting up a local registry to be a good investment.
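If removable media is involved, the same round trip with an intermediate tar file might look like this (the file and image names are just illustrative):
docker save -o imagename.tar me/imagename
# copy imagename.tar onto the removable media, carry it over, then on the target system:
docker load -i imagename.tar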

Related

How to deploy a LOCAL docker image on a REMOTE docker host machine?

Assume that there are two docker host machines A and B and there is a docker image, xyz on machine A. I would like to deploy it to machine B from machine A. How can I do it?
I know that it is possible to operate on containers remotely by setting DOCKER_HOST as below, but this seems to require that the docker image xyz already exists on machine B. Also, creating a tar file from the image xyz, copying it to machine B, and then running it on machine B is another way to do it. I wonder if there is a way in which I can do it directly from machine A? Thanks!
docker -H "ssh://my-user@machine-b" run -it xyz
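Setting the DOCKER_HOST environment variable directly is equivalent (the user and host names are just placeholders):
export DOCKER_HOST=ssh://my-user@machine-b
docker ps    # subsequent docker commands now talk to machine B's daemon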
As David mentions, the standard way to move images is with a registry. You can self host (e.g. Docker's registry), use a SaaS offering like Hub, or one of the cloud vendors (e.g. ECR, GCR). The advantage of a registry over sending exports is layers are deduplicated, saving you bandwidth when your image only has minor changes between versions. This functionality is also built into the docker engine with the push and pull commands.
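For example, with Docker Hub (the account and tag names below are placeholders), the whole flow from machine A to machine B is just:
# on machine A
docker login
docker tag xyz my-dockerhub-user/xyz:1.0
docker push my-dockerhub-user/xyz:1.0
# on machine B
docker pull my-dockerhub-user/xyz:1.0
docker run -it my-dockerhub-user/xyz:1.0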
With the save and load method, you could do this in two commands, piping the output, but as mentioned before this will transfer all the layers even if you only have minor changes:
docker save ${image} | docker -H ssh://${user}@${host} load
I use Portainer Agent for this. Set up Portainer locally on Machine A on either port 9000 or 9443, then add the Machine B environment using the Portainer Agent on port 8000. Open the Machine B environment; you'll then be able to create containers, pull images, deploy stacks, whatever you need, from Machine A to Machine B. It's a great way to migrate containers.

What is the best way to deliver docker containers to remote host?

I'm new to docker and docker-compose. I'm using a docker-compose file with several services. I have containers and images on the local machine from working with docker-compose, and my task is to deliver them to a remote host.
I found several solutions:
1. I could build my images, push them to some registry, and pull them on the production server. But for this option I need a private registry, and as I see it a registry is an unnecessary element; I want to run containers directly.
2. Save the docker image to a tar file and load it on the remote host. I saw the post "Moving docker-compose containersets around between hosts", but in this case I need shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I lose the benefits of docker-compose.
3. Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can only deploy from one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). And again, it isn't a simple solution for me.
4. Use docker-compose with the host parameter (-H). But with this option I need to build images on the remote host. Is it possible to build an image on the local machine and push it to the remote host?
5. I could use docker-compose push (https://docs.docker.com/compose/reference/push/) to the remote host, but for this I need to create a registry on the remote host, and I need to add and pass the hostname as a parameter to docker-compose every time.
What is the best practice to deliver docker containers to a remote host?
Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here. There is no way to transfer these within the pure Docker ecosystem.
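A minimal sketch of that last step (the host name and paths are placeholders):
scp docker-compose.yml user@remote-host:/srv/myapp/
rsync -av ./config/ user@remote-host:/srv/myapp/config/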

SSH Between Docker Instances between Hosts

I have a setup that looks like this:
Essentially, two physical machines exist on the same local network and each machine is running the same docker image. I have exposed a range of ports on both physical machines (2000-3000). The docker image used has an SSH client and an OpenSSH server installed, and when run, port 22 is mapped to 2222. What I would like to be able to do is SSH from the Docker container on Machine-01 to the Docker container on Machine-02.
I realize that docker attach, etc. exist; however, I do have a specific use case for my application.
I know that my ports are open, as I can have netcat listening on one machine and then use nc -zv machine-02 2000 and get a response. Where I am stuck is getting the connection between the two docker containers. It should be noted that I can SSH into the docker container locally (machine-01 can reach its own container, but machine-02 cannot access it).
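For reference, the setup boils down to this on each physical machine (the image name and user are placeholders), and the connection I'm trying to make from inside the container on machine-01 is the last line:
docker run -d -p 2222:22 --name ssh-node my-ssh-image
ssh -p 2222 user@machine-02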
What is the best way of proceeding with this?

Use Docker Compose offline by using local images and not pulling images

I want to issue the docker-compose command to bring up all the dependent service containers that I have previously pulled down while inside the company network. I am outside the company network so when I try to start my environment the first thing it does is try to call out to the company network and then fails with:
ERROR: Error while pulling image: Get http://myartifactory.service.dev:5000/v1/repositories/my_service/images: dial tcp 127.0.53.53:5000: getsockopt: connection refused
How can I force docker-compose to use the local images and not try to pull down the latest?
You can force docker-compose to use local images by first running:
docker-compose pull --ignore-pull-failures
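A typical sequence, assuming the images are already present in the local cache from an earlier pull inside the company network:
docker-compose pull --ignore-pull-failures
docker-compose up -d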

In virtual-machine Docker push to private registry failed under proxy

I want to push a Docker image to a private registry on the local machine.
Docker is running in a CentOS 7 virtual machine and I'm working in a network behind a proxy.
What I did is tag my local Docker image "test_bench_image", built from a Dockerfile:
docker tag test_bench_image localhost:5000/test_bench_image
and then I tried to push it:
docker push localhost:5000/test_bench_image
What I get is:
The push refers to a repository [localhost:5000/test_bench_image]
Put http://localhost:5000/v1/repositories/test_bench_image/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
I understood that /etc/sysconfig/docker should include the variable no_proxy to allow pushing to a private Docker registry from behind a proxy. So I added to the file:
...
http_proxy="http://myproxy.es:80"
https_proxy="http://myproxy.es:80"
no_proxy="127.0.0.1:5000"
But I get the same error message after reloading the daemon and restarting the docker service.
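For reference, the reload and restart step on CentOS 7 is:
sudo systemctl daemon-reload
sudo systemctl restart docker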
Any help will be really welcome.
Note: My original plan was to use the Docker local image in Jenkins. But the Docker plugin cannot pull the local image since it is not publicly available. So I tried to create a private registry and force Jenkins to pull it from there.
Thanks.
I ran into a similar issue and I had to additionally uncomment and add my private registry's host IP in the INSECURE_REGISTRY='XX.XXX.XXX.XXX:5000' setting of the /etc/sysconfig/docker file.
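On installs configured through /etc/docker/daemon.json rather than /etc/sysconfig/docker, the equivalent setting is insecure-registries; a minimal sketch, assuming the registry listens on localhost:5000:
{
  "insecure-registries": ["localhost:5000"]
}
followed by a restart of the docker daemon.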
