I'm using Docker 19.03.12 on CentOS 7 and I've noticed that a container doesn't have network access to the outside world unless I use the --network host option when I start the container. Is there a way to configure Docker to automatically use that as a default when I start a container?
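For reference, here is the difference I'm seeing, expressed via docker-py (a sketch; the image name is a placeholder):

import docker

client = docker.from_env()

# Works: the equivalent of `docker run --network host ...`
client.containers.run("some-image", network_mode="host", detach=True)

# No outside connectivity on my machine: default bridge networking
client.containers.run("some-image", detach=True)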
Given a service running in a Docker container, I am trying to connect to a service running outside it (say, for example, on port 8545).
For testing, the outside service runs on the Docker host.
If I use
http://host.docker.internal:8545
inside the Docker container, everything works nicely.
Since I would like to run the "outside service" on a different machine later, I tried to use
http://otherservice:8545
inside the Docker container, and then run it via
docker run --add-host otherservice:192.168.1.42 ...
where 192.168.1.42 is the IP of the Docker host in my private network.
However, when using the second method, I get a ConnectionException.
Is it possible to switch between an outside machine and the Docker host machine via --add-host?
(Note: I am running Docker on a Mac.)
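For concreteness, the failing invocation corresponds to the extra_hosts option in docker-py (a sketch; the image name is a placeholder):

import docker

client = docker.from_env()

# Equivalent of `docker run --add-host otherservice:192.168.1.42 ...`:
# inside the container, "otherservice" resolves to 192.168.1.42.
client.containers.run(
    "my-client-image",  # placeholder image name
    extra_hosts={"otherservice": "192.168.1.42"},
    detach=True,
)

Switching between the Docker host and another machine would then just mean changing the IP in that mapping.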
Is it possible to spawn Docker services from within a container running on a Docker swarm? This would allow containers to dynamically maintain the components running in the swarm.
Currently I am able to run containers within other containers on the host machine by mounting the /var/run/docker.sock into the container while using the docker-py SDK.
docker run -v /var/run/docker.sock:/var/run/docker.sock master
Inside the container I have a python script that runs the following:
container = docker.from_env().containers.run(
    'worker',
    detach=True,
    tty=True,
    volumes=volumes,
    network='backend-network',
    mem_limit=worker.memory_limit,
)
Is something similar to this possible in Docker Swarm, not just vanilla Docker?
You can mount the Docker socket and use the docker module as you're doing now, but create a service instead, assuming you're on a manager node.
some_service = docker.from_env().services.create(…)
https://docker-py.readthedocs.io/en/stable/services.html
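A minimal sketch of what that could look like, mirroring the containers.run() call from the question (the service name and memory limit are placeholder values):

import docker

client = docker.from_env()  # needs /var/run/docker.sock mounted, as above

# Swarm-service counterpart of the containers.run() call in the question;
# this only succeeds when executed against a manager node.
service = client.services.create(
    "worker",                             # image
    name="worker-service",                # placeholder service name
    networks=["backend-network"],
    resources=docker.types.Resources(mem_limit=512 * 1024 * 1024),
)

The swarm scheduler then decides which node each task lands on, which is the main behavioural difference from containers.run().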
I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root@host $ docker container ls
123456789 app_mage .... app
I would like to create a container on host from within app:
root@123456789 $ docker run --name app2 ...
root@host $ docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to run on Docker and to launch arbitrary applications in an isolated environment (but I'd rather avoid Docker-in-Docker).
Much of the Docker community veers away from designs like this; however, it is very doable.
Similar to Starting and stopping docker container from other container, you can simply mount the docker.sock file from the host machine into the container, giving it the privileges needed to access the Docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container; the SDK in turn talks to the Docker daemon on the host machine that hosts the container you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    # Talks to the host's daemon through the mounted /var/run/docker.sock,
    # so test2 is created as a sibling container, not a child.
    docker.from_env().containers.run("image2", name="test2")
This example starts container test1; calling that function inside the newly created container then creates a second container, test2, running on the same host as test1.
I have a docker swarm, and a container inside an overlay network on that swarm. The container has an app written in Go that interacts with the Docker API by creating a container and starting it. When I run my Go app on the host machine, everything runs perfectly and it creates and runs the container without issue. However, when I put the app into the container and run it in my overlay network, it can no longer reach the API.
I'm assuming this has something to do with networking, but the idea of my project is that there are multiple services, each with their own networks, that can create, launch, and remove containers as they see fit. Unfortunately, running the SDK in an app outside of the overlay networks is not an option at this time.
Error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Your app is trying to access the Docker socket, but this is not accessible by default in the container.
You can mount it as a volume from the host:
-v /var/run/docker.sock:/var/run/docker.sock
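Once the socket is mounted, a quick way to verify from inside the container that the daemon is reachable (a docker-py sketch; a Go client pointed at the same socket behaves equivalently):

import docker

# Run inside the container after starting it with the -v flag above.
client = docker.from_env()          # connects to unix:///var/run/docker.sock
client.ping()                       # raises an error if the daemon is unreachable
print(client.version()["Version"])

Keep in mind that in a swarm, the mounted socket always points at the daemon of the node the task happens to be scheduled on.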
This link says:
In Docker 17.06 and higher, you can also use a host network for a swarm service, by passing --network host to the docker service create command.
But I'm using Docker version 17.03, which I cannot upgrade. Is there a workaround?
I want the containers created using docker stack to have access to the host's network.