I am new to the Docker environment and trying to figure out how to make two containers communicate with each other.
I have two running containers. Container 1 runs an inference engine that performs inference on the images it receives, and it is listening on port 9001. Container 2 has the image and wants to send it to Container 1, but the request fails saying
port 9001 is already binded to some service
PS: When I send the image from the host to Container 1, it works fine, but I cannot understand how to achieve the same from another container. Any help would be greatly appreciated. Thanks.
You can use docker-compose. It will create a bridge network for you when you run docker-compose up, and each service defined in the Compose file is launched on this network automatically.
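For example, a minimal docker-compose.yml along these lines (the service and image names here are just placeholders, not taken from your setup) would let Container 2 reach Container 1 at http://inference:9001:

version: "3"
services:
  inference:
    image: inference_image    # the container listening on port 9001
  client:
    image: client_image       # can call http://inference:9001 by service name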
If you are not using Docker Compose and are running individual containers, then publish both services' ports to the host:
docker run -p 9001:9001 image_1
docker run -p host_port:container_port image_2
Then the containers can communicate using the host's IP, like:
http://hostip:port
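For instance (the host IP address, image names, and URL path below are assumptions for illustration only):

docker run -d -p 9001:9001 inference_image
docker run -d client_image
# then, from inside the client container, call the service through the host's IP:
curl http://192.168.1.10:9001/infer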
I have 2 containers on a Docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container runs a server that is listening on port 8081. I have verified that both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2
How I create the Docker network
docker network create -d bridge jakeypoo
How I start the containers
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Remappings with docker run -p options are ignored, and you don't need a -p option to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
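A minimal sketch of such a Compose file for this setup (image names are taken from the question, everything else is assumed; the ports: entry is only there so the proxy stays reachable from the host browser) might look like:

services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"
  geoserver:
    image: geoserver:1.1.0

Apache inside idpproxy can then keep proxying to http://geoserver:8080/ by service name.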
This is most probably the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can in your context. Refer to this link:
https://docs.docker.com/compose/
I am trying to connect and run a device (LiDAR) through a Docker container, since it needs Ubuntu 16 while my computer runs Ubuntu 20.
I can ping the device from inside the Docker container, but it is not recognised when I try to use it.
What I did:
Made a Dockerfile with the requirements (added EXPOSE to expose all ports)
Built the Docker image using:
docker build -t testLidar .
I then made a container using
docker run -d -P --name test_Lidar (imagename)
Then
docker exec -t test_Lidar ping (device_ip) works
I am able to ping my LiDAR IP inside the container, but when I do ip a I cannot see the interfaces connected to my machine.
Been stuck on this for 3 days, any suggestions?
Note: I have done the exact same steps on an Ubuntu 16 machine. The only change was that the docker run command had --net host instead of the -P flag, and my device worked perfectly. I feel like this is the root of my problem.
Use the --net host flag with docker run to attach the container to your host's networking stack and make it available to other hosts in your network.
When you use --net host, you actually attach the container to your host's networking stack. By default, containers are attached to the default network of type bridge and can communicate with each other. You can then reach them only from your host, using their IP addresses, typically in the subnet 172.17.0.0/16.
Using -P actually binds ports exposed by a container to randomly selected free ports on your host. It should be used for exposing network services (e.g. a web server on port 80), but not for ICMP ping.
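A sketch of what the suggested run command could look like in your case (the container name and the (imagename) placeholder are taken from your question, the rest is assumed):

# attach the container to the host's network stack; -p/-P are not needed
# (and have no effect) together with --net host
docker run -d --net host --name test_Lidar (imagename)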
I want to run nginx inside a Docker container as a reverse proxy to talk to an instance of gunicorn running on the host machine (possibly inside another container, but not necessarily). I've seen solutions involving Docker Compose, but I'd like to learn how to do it "manually" first, without learning a new tool right now.
The simplified version of the problem is this:
1. Say I run a Docker container on my machine.
2. Outside the container, I run gunicorn on port 5000.
3. From within the container, I want to run ping ??? and have it reach the gunicorn instance started in step 2.
How can I do this in a simple, portable way?
The easiest way is to run gunicorn in its own container and expose port 5000 (not map it, just expose it).
It is important to create a network first and run both your containers on the same network so that they see each other: docker network create xxxx
Then when you run your 2 containers, attach them to this network: docker run ... --network xxxx ...
Give names to your containers, it is a good practice (e.g.: docker run ... --name gunicorn ...).
Now from your container you can ping your gunicorn container: ping gunicorn, or even telnet to it on port 5000.
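Put together, the whole sequence might look like this (the image names are just examples, not real images):

docker network create xxxx
docker run -d --network xxxx --name gunicorn my_gunicorn_image      # exposes port 5000
docker run -d --network xxxx --name nginx -p 80:80 my_nginx_image
# from inside the nginx container:
ping gunicorn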
If you need more information just drop me a comment.
I'm using a dedicated container for running generic project-related shell scripts in order to avoid having to test scripts on multiple environments (mac, win, ubuntu, debian...) and to minimize software requirements on the host OS. Even docker-compose commands are run from the console container. /var/run/docker.sock is bind mounted from the host.
Everything else seems to be working fine, but for example if I run docker-compose up traefik inside the console container, traefik starts normally but it's unreachable both on the host and even from another container in the same network. If docker-compose up traefik is run from the host OS (Windows 10), traefik becomes reachable as expected. I suspect this has something to do with how Docker or docker-compose handle networking, but I'm not completely sure. I did check that, regardless of how I start the traefik container, the same ports appear instantly in NirSoft CurrPorts (a sort of GUI for netstat).
Is there any way (and how) to fix this?
EDIT
I realised that this must somehow be an error on my part, since dockerized Docker GUIs exist and they presumably don't have any problems bringing up containers that are accessible from the host and the outside world.
Now I'm wondering if this might be a simple configuration error, either in my docker(-compose) settings or somewhere else on my host machine, or whether GUIs like Portainer go through some extra steps in order to expose the started containers to the host?
For development purposes we usually map Traefik's port to 80, so I will assume the same in your case as well. Let's say you are running the Traefik container on port 80, which is mapped to port 80 of its host. But from the Traefik container's point of view, the host machine is nothing but the container used for running the scripts. And port 80 of that shell script container is not mapped to the actual host machine. I hope you can now see where things get lost between the port mappings and the containers.
Let me describe your situation: to make your setup work, you should deploy your containers with the port mappings shown below.
To simplify the answer,
docker run -t -d -p 80:80 shellScriptImage
docker run -t -d -p 80:80 traefik (run inside the shell script container)
By doing this you can access the traefik container from the outside.
I have:
a Windows server on bare metal with Hyper-V
Ubuntu server running in Hyper-V
a Docker container with an NGINX web application running in Ubuntu server
Every time I run a Docker image it gets a new IP address on the docker0 network interface. For production, I don't know how to make the Docker container visible to the external network. I also don't know how to handle the fact that the IP address changes every time the image is run.
What's the correct way to:
make a Docker container visible to the external network?
handle Docker container IP addresses in a repeatable way in production?
When you run your Docker container with docker run, you should use the -p switch to forward ports, for example:
docker run -p 80:80 nginx
This would route port 80 from the Ubuntu server to port 80 within the Nginx container.
You should check the Docker documentation on this at https://docs.docker.com/reference/run/#expose-incoming-ports.
When you have multiple containers and links, you should use EXPOSE in the Dockerfile as documented here: https://docs.docker.com/reference/builder/#expose.
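For example, an nginx-based Dockerfile might simply document the listening port like this (a sketch; note that EXPOSE by itself does not publish the port, you still need -p or -P when running the container):

FROM nginx
EXPOSE 80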