I have two local Docker images, container_a and container_b.
When starting container_b, it runs docker pull container_a. However, this fails because container_a is a local image and is not hosted on a Docker registry such as Docker Hub.
Is it possible to let container_b use the locally available images when calling docker pull, without changing container_b and without uploading container_a to a Docker registry?
Mount the local docker.sock into container_b, the container that pulls the image:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
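A minimal sketch of the full invocation, assuming container_b is also the name of the image the container is started from (names taken from the question):
docker run -v /var/run/docker.sock:/var/run/docker.sock container_b
The docker client inside the container then talks to the host's daemon, which already has container_a locally, so container_a can be used without pulling it from a registry.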
I have a local Docker registry on my Mac and I want to be able to run this type of command:
docker run quay.io/skopeo/stable inspect
I tried to use
docker run quay.io/skopeo/stable inspect docker-daemon:<image>
(docker-daemon should use the images stored by the local Docker daemon)
Error loading image from docker engine: Cannot connect to the Docker
daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I guess the error is coming because the Docker container I am running doesn't have access to the local Docker daemon.
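For reference, the kind of invocation I expect would be needed, mounting the host's Docker socket so the container can reach the daemon (<image> is left as a placeholder):
docker run -v /var/run/docker.sock:/var/run/docker.sock quay.io/skopeo/stable inspect docker-daemon:<image>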
Thanks!
I have a Kafka cluster installed on my local Windows machine, and I would like to access this cluster from my Spring Boot application, which is deployed as a container in Docker Toolbox. Here is my application.properties file:
kafka.bootstrapAddress = 127.0.0.1:9092
And when I launch the container I use the host network, but it doesn't work:
docker run spring-app:latest --network host
So how can I access this cluster?
Thank you in advance.
From the docker run reference, the docker run command usage is like this:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You are not providing the --network option correctly. The option must come before the image name; whatever comes after the image name is passed to the created container as its command and arguments.
Here is how you should invoke the command to correct your issue:
$ docker run --network host spring-app:latest
I'm running a gitlab/gitlab-ce container on Docker. Inside it, I want to run a gitlab-runner service that uses Docker as its executor. Every single command I run (e.g. docker ps, docker container ...) gives this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
P.S.: I've tried service docker restart and reinstalling docker and gitlab-runner.
By default it is not possible to run docker-in-docker (as a security measure).
You can run your GitLab container in privileged mode, mount the socket (-v /var/run/docker.sock:/var/run/docker.sock) and try again.
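A sketch of what that could look like, assuming the stock gitlab/gitlab-ce image and the default socket path on the host:
docker run --privileged -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-ce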
Also, there is a dedicated docker-in-docker image. You can read up on it here and create your own custom gitlab/gitlab-ce image based on it.
In both cases the end result will be the same, as docker-in-docker isn't really docker-in-docker but rather lets you manage the host's docker engine from within a docker container. So just running the gitlab-ci-runner docker image on the same host has the same result and is a lot easier.
By default the docker container running GitLab does not have access to the docker daemon on your host. The docker client uses a socket connection to communicate with the docker daemon, and this socket is not available in your container.
You can use a docker volume to make the socket of your host available in the container:
docker run -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-ce
Afterwards you will be able to use the docker client in your container to communicate with the docker daemon on the host.
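For example, a quick check from the host (assuming the docker CLI is installed inside the container, as in the question, and that the container is named gitlab just for illustration):
docker exec -it gitlab docker ps
If the socket mount works, this lists the containers running on the host, including the GitLab container itself.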
I want to set up my own private Docker registry from which I can pull Docker images onto Docker clients.
Taking this link as a reference, I executed the following commands on one machine:
docker pull registry
docker run -d -p 5000:5000 --name localregistry registry
docker ps
docker pull alpine
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
Now I want to pull this image on some other machine that is reachable to/from this machine:
$ docker pull <ip_of_machine>:5000/alpine
Using default tag: latest
Error response from daemon: Get https://<ip_of_machine>:5000/v1/_ping: http: server gave HTTP response to HTTPS client
Is it possible to pull a docker image from one machine, which acts as a registry, to another machine that can reach it?
Adding the line below to the docker client machine's /etc/sysconfig/docker file resolved the issue:
INSECURE_REGISTRY='--insecure-registry <ip>:5000'
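On systems where the daemon is configured through /etc/docker/daemon.json rather than /etc/sysconfig/docker, the equivalent should be an "insecure-registries" entry in that file, followed by a restart of the docker daemon (e.g. sudo systemctl restart docker):
{
  "insecure-registries": ["<ip>:5000"]
}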
Assuming by the tags that you are using boot2docker or Docker Toolbox:
You must open VirtualBox Manager
Select the default machine
Network
NAT
Port forwarding
Add an entry for the 5000 port (the VBoxManage command-line equivalent is sketched below)
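If you prefer the command line, the same forwarding rule can presumably be added with VBoxManage while the VM is stopped (the rule name "registry" is arbitrary):
VBoxManage modifyvm "default" --natpf1 "registry,tcp,,5000,,5000"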
Regards
the scenario: I have a host that has a running docker daemon and a working docker client and socket. I have one docker container that was started from the host and has the docker socket mounted within it. It also has the host's docker client mounted into it, so I'm able to issue docker commands at will from within this docker container using the aforementioned mechanism.
the need: I want to start another docker container from within this docker container; in other words, I want to start a sibling docker container from another sibling docker container.
the problem: A problem arises when I want to mount files that live on the host filesystem into the sibling container that I want to spin up from the other sibling container. It is a problem because when issuing docker run, the docker daemon behind the socket mounted inside the docker container is really watching the host filesystem. So I need access to the host filesystem from within the docker container that is trying to start another sibling.
In other words, I need something along the lines of:
# running from within another docker container:
docker run --name another_sibling \
-v {DockerGetHostPath: path_to_host_file}:path_inside_the_sibling \
bash -c 'some_exciting_command'
Is there a way to achieve that? Thanks in advance.
Paths are always on the host; it doesn't matter that you are running the client remotely (or in a container).
Remember: the docker client is just a REST client, and "-v" always refers to the daemon's file system.
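So, reusing the hypothetical names from the question, you can simply pass the path as it exists on the host and the daemon will resolve it there:
# run from within the sibling container; /path/on/host is resolved by the host's daemon
docker run --name another_sibling \
-v /path/on/host:/path_inside_the_sibling \
bash -c 'some_exciting_command'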
There are multiple ways to achieve this.
You can always make sure that each container mounts the correct host directory
You can use --volumes-from, i.e.:
docker run -it --volumes-from=keen_sanderson --entrypoint=/bin/bash debian
--volumes-from Mount volumes from the specified container(s)
You can use named volumes, as sketched below
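A minimal sketch of the named-volume approach (the volume name shared_data and the alpine image are just examples):
# create a named volume; it is managed by the host's daemon
docker volume create shared_data
# both siblings mount the same volume and can exchange files through it
docker run -v shared_data:/data --name sibling_one alpine sh -c 'echo hello > /data/file.txt'
docker run -v shared_data:/data --name another_sibling alpine sh -c 'cat /data/file.txt'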