This link says:
In Docker 17.06 and higher, you can also use a host network for a swarm service, by passing --network host to the docker service create command.
But I'm using Docker 17.03, which I cannot upgrade. Is there a workaround?
I want the containers created using docker stack to have access to the host's network.
I'm using Docker 19.03.12 on CentOS 7 and I've noticed that a container doesn't have network access to the outside world unless I use the --network host option when I start the container. Is there a way to configure Docker to automatically use that as a default when I start a container?
I am trying to find a list of all running Docker Swarm Mode services that have a specific tag - but from inside a container.
I know about the hack to mount docker.sock into the container, but I'm looking for a more elegant/secure way to retrieve this information.
Essentially, I want my app to "auto-discover" other available services. Since the docker swarm manager node already has this information, this would eliminate the need for a dedicated service registry like Consul.
You can query the Docker REST API from within the container.
For example, run this on the host (macOS in this case) to list Docker images:
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/images/json
To run the same query from inside a container, first install socat on the host.
Then use socat to relay between the host's Unix socket /var/run/docker.sock and the host's TCP port 2375:
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock
Then query the host's port 2375 from within a container:
curl http://host.docker.internal:2375/v1.40/images/json
You should see the same result.
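If you'd rather use the Python SDK than raw curl, here is a minimal sketch of the same query, assuming the socat relay above is running and the docker package is installed inside the container:

import docker

# Talk to the host's daemon through the socat relay on port 2375.
client = docker.DockerClient(base_url='tcp://host.docker.internal:2375')

# Same as the curl call above: list images known to the host daemon.
for image in client.images.list():
    print(image.tags)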
Notes:
I don't have a Docker swarm initialized, so the examples list Docker images. Refer to the Docker docs for the service-listing API (a sketch follows these notes).
You can find the API version in the output of docker version.
Refer to "What is the Linux equivalent of host.docker.internal" if you don't use macOS. Recent Docker versions on Linux also support host.docker.internal.
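For the original question (finding swarm services with a specific tag), a sketch along the same lines, assuming the relay points at a manager node and the services carry a hypothetical label my-tag:

import docker

client = docker.DockerClient(base_url='tcp://host.docker.internal:2375')

# List swarm services that carry a given label (hypothetical label 'my-tag').
# This call only succeeds against a swarm manager.
for service in client.services.list(filters={'label': 'my-tag'}):
    print(service.name)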
I have a basic question about Docker that is probably due to lack of knowledge on my part about networking. The Docker container networking documentation states:
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
It sounds like, when you run a container on your computer without mapping any ports from the container to the host machine, the container should not be able to access the internet. However, for example, I pull the Ubuntu image with:
docker pull ubuntu
Then I start a container and enter its command line with:
docker run -ti ubuntu bash
At that point, I can run apt-get update and the container starts pulling information from the internet without mapping any ports (e.g. -p 80:80). How is this possible?
Publishing a port allows machines external to the Docker host to access the container: inbound connectivity. By default, containers already have outbound connectivity to the network.
To prevent a container from accessing the network, you can either run the container with no network (note: this still creates a loopback interface, and you can later connect it to another network):
docker run --net none ...
Or you can create a network with the --internal option and run containers on that network:
docker network create --internal internal
docker run --net internal ...
The internal network is created without a gateway interface on the bridge network.
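A quick way to see the difference, sketched with the docker-py SDK (the alpine image and the ping target are just examples):

import docker

client = docker.from_env()

# An internal network has no gateway, so attached containers
# get no outbound connectivity.
client.networks.create('internal', driver='bridge', internal=True)

try:
    # Should fail: there is no route out of the internal network.
    client.containers.run('alpine', 'ping -c 1 8.8.8.8',
                          network='internal', remove=True)
except docker.errors.ContainerError:
    print('no outbound connectivity, as expected')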
When the docs talk about publishing ports, they mean inbound ports. Outbound connections work by default, depending on your network type; see https://docs.docker.com/network/ for more.
Is it possible to spawn Docker Services within a container running on Docker swarm? This would allow containers to dynamically maintain the components running in the swarm.
Currently I am able to launch containers from within other containers on the host machine by mounting /var/run/docker.sock into the container and using the docker-py SDK.
docker run -v /var/run/docker.sock:/var/run/docker.sock master
Inside the container I have a Python script that runs the following:
container = docker.from_env().containers.run('worker', detach=True, tty=True, volumes=volumes, network='backend-network', mem_limit=worker.memory_limit)
Is something similar to this possible in Docker Swarm, not just vanilla Docker?
You can mount the Docker socket and use the docker module as you're doing now, but create a service instead of a container, assuming you're on a manager node.
some_service = docker.from_env().services.create(…)
https://docker-py.readthedocs.io/en/stable/services.html
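A fuller sketch, mirroring the containers.run call from the question (the image worker, the network backend-network, and the memory limit come from the question; the service name is made up):

import docker
from docker.types import Resources

client = docker.from_env()

# Create a swarm service rather than a plain container.
# The daemon this runs against must be a swarm manager.
service = client.services.create(
    'worker',                               # image, as in the question
    name='worker-service',                  # hypothetical service name
    networks=['backend-network'],
    resources=Resources(mem_limit=512 * 1024 * 1024),  # limit in bytes
)
print(service.id)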
I want to be able to access a Docker container via its IP, e.g. the one I can see when I run docker container inspect foo.
The reason is that I am using ZooKeeper inside a Docker container to manage two other Docker containers running Solr. My code (not in Docker, and at this stage I don't want it to be) calls ZooKeeper to get the URLs of the Solr servers, which ZooKeeper reports as the containers' IPs. My code then falls over, because calling a container's IP from the host fails; it should be calling localhost instead.
So how can I make calls to a Docker container's IP from the host route correctly? (I am using Docker native for Mac.)
I'm not using Docker for Mac, so I'm not sure whether the newest version of Docker for Mac is still based on docker-machine (which is based on VirtualBox) or not.
If you can confirm that your Docker for Mac is based on VirtualBox, then you can probably get the inet IP of the vboxnet0 network interface via the ifconfig command. This IP should be used as your calling IP.
Besides, you should know the port number of your ZooKeeper container. Normally the published port of a container is configured in the docker run command, for example:
docker run -p 5000:5001 -i -t ubuntu /bin/bash
where -p maps host port 5000 to container port 5001.
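If your Docker for Mac turns out not to be VirtualBox-based, you generally cannot route to container IPs from the macOS host at all, and instead you talk to localhost on the published port. A sketch for reading a container's port bindings with the docker-py SDK (container name foo from the question):

import docker

client = docker.from_env()

# Inspect the container's published ports, e.g.
# {'5001/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '5000'}]}
container = client.containers.get('foo')
print(container.attrs['NetworkSettings']['Ports'])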