By default, can a docker container call host's localhost UDP? - docker

I have a Docker container, and a daemon installed on the host (a VM) listening for UDP on port 8125. The container sends data over UDP to this port 8125.
I tried to open the port by starting the container with -p 8125:8125/udp, but I'm getting the following error:
Error starting userland proxy: listen udp 0.0.0.0:8125: bind: address already in use
Which makes sense because the daemon is already listening on this port.
So how can I configure Docker so that the container can send UDP payloads to the external daemon?

Opening ports is only needed when you want to listen for requests, not when sending them. By default, Docker provides the necessary network namespace for your container to communicate with the host or the outside world.
So, you can do it in one of two ways:
Use --net=host in your docker run command and send requests to localhost:8125. In this case your containerized app effectively shares the host's network stack, so localhost points to the daemon that's already running on your host.
Talk to the container network's gateway (which is usually 172.17.0.1) or your host's hostname from inside your container. Then you are able to send packets to the daemon on your host.
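The gateway approach can be sketched in Python; the address 172.17.0.1 and port 8125 are the defaults from the question, and send_metric is just an illustrative name:

```python
import socket

def send_metric(payload: bytes, host: str = "172.17.0.1", port: int = 8125) -> None:
    """Send a single UDP datagram to the daemon listening on the host.

    From inside a container on the default bridge network, the host is
    reachable at the bridge gateway (usually 172.17.0.1). With
    --net=host you would use 127.0.0.1 instead.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Example: a StatsD-style counter increment
# send_metric(b"requests:1|c")
```

Because UDP is connectionless, sendto succeeds whether or not anything is listening, so failures here are silent by design.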

Related

Bidirectional socket communication between docker container and host

I want to establish TCP-socket-based communication between a client and a server, hosted in a Docker container and on the host respectively.
I am trying to run a GCC-based socket agent in an Ubuntu container running on Docker Desktop installed on a Windows 10 host. I have done port mapping (-p) to a port where a server runs on Windows 10.
docker run -it --name ubuntu1 -p 5997:5997 ubuntu /bin/bash
Now when I try to run a Java socket server on the Windows 10 host, it shows an error that the port is already bound. But I have checked that no other application is bound to port 5997.
I found that -p already binds the host port, so another service cannot bind to it. If I run the socket server on the host first, then starting the container fails:
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:5997: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Error: failed to start containers: ubuntu1
What is the correct way to establish bidirectional socket communication between a container and the host, where the socket client runs in the container and the socket server on the host?
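One hedged sketch of the client side: on Docker Desktop the special DNS name host.docker.internal resolves to the host machine, so the container can dial out to the host's server directly, with no -p mapping at all (mapping is only needed for inbound connections). talk_to_host_server is an illustrative name:

```python
import socket

def talk_to_host_server(message: bytes,
                        host: str = "host.docker.internal",
                        port: int = 5997) -> bytes:
    """Connect from the container to a server on the host, send one
    message, and read one reply over the same TCP connection
    (bidirectional by nature of TCP)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message)
        return sock.recv(1024)
```

The Java server on Windows then simply listens on 5997, and the container run command drops the -p 5997:5997 flag entirely.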

How tcp works in container, is port hijack possible

I need to understand how TCP uses ephemeral ports in a container. I understand the network is namespaced, and a TCP port in a container would be NAT'd to a host port. Does that mean that, for two containers running on the same host, if one container binds to 64,000 ports (using up the ~64k ports available inside the container with TCP bind(), without binding to host ports), the other container won't be able to use any port because all ports on the host system are used up?
Assuming one IP per host, of course.
Hi TheJoker, if you use a local Docker engine and run two containers on, let's say, port 80 with a simple nginx server, you can run them without any problem as long as you are not binding them to a host port. If you bind port 80 of a container to port 80 of the host, you can obviously do that for only one container.
If you run this command twice:
docker run -d -p 80:80 nginx
you'll receive a message similar to this one:
docker: Error response from daemon: driver failed programming external connectivity on endpoint angry_mclean (d8bbf5af6503b4d54d234f1bf69ee372a8ada6ef07a5ebd138479691d5679994): Bind for 0.0.0.0:80 failed: port is already allocated.
To sum up: you can run as many containers as you want with exposed ports, but you can bind only one of them to a given host port.
If you run a container with 64,000 ports bound to your host (the -P option binds all exposed ports), then your container occupies all those host ports (not actually possible, since your host system uses some ports, but theoretically).
UPDATE:
For more information please see:
https://docs.docker.com/engine/reference/builder/#expose
https://docs.docker.com/network/iptables/
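The "port is already allocated" failure above is plain socket semantics, not anything Docker-specific; a minimal Python illustration (try_double_bind is an illustrative name):

```python
import socket

def try_double_bind() -> str:
    """Bind a TCP port, then attempt to bind it again, mirroring what
    happens when two containers map the same host port."""
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 0))          # kernel picks a free port
    port = first.getsockname()[1]
    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", port))  # same address:port -> EADDRINUSE
        return "bound twice"
    except OSError:
        return "address already in use"
    finally:
        second.close()
        first.close()
```

Inside separate network namespaces, by contrast, each container gets its own copy of the port space, which is why two unmapped nginx containers can both bind "their" port 80.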

Spin off another container on host machine from an existing container

I am currently using Docker Desktop for Mac.
My requirement is to spin off a container from another container.
Situation:
Container A has a service running which, upon request, looks for a swarm manager and spins off another container B. I have started a single-node swarm manager on my machine. I cannot use host network_mode because Docker for Mac exposes a lightweight Linux VM as the host, not my actual localhost. I have also tried this: https://forums.docker.com/t/access-host-not-vm-from-inside-container/11747/7
Any possible solution?
The idea is that your container can access your host. So, use the Engine API provided by Docker:
POST /containers/create
You will have to post json that contains the details of the new container.
Engine API v1.24
The daemon listens on unix:///var/run/docker.sock, but you can bind Docker to another host/port or a different Unix socket.
You can listen on port 2375 on all network interfaces with -H tcp://0.0.0.0:2375, or on a particular network interface using its IP address: -H tcp://192.168.59.103:2375. It is conventional to use port 2375 for unencrypted communication with the daemon, and port 2376 for encrypted.
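A minimal sketch of driving POST /containers/create over the default Unix socket with nothing but the standard library. The body fields (Image, Cmd) come from the Engine API; build_create_request and create_container are illustrative names, and container A would need /var/run/docker.sock mounted into it for the call to reach the daemon:

```python
import json
import socket

def build_create_request(image: str, cmd: list) -> bytes:
    """Assemble the raw HTTP/1.1 request for POST /containers/create."""
    body = json.dumps({"Image": image, "Cmd": cmd}).encode()
    headers = (
        b"POST /containers/create HTTP/1.1\r\n"
        b"Host: docker\r\n"
        b"Content-Type: application/json\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n"
    )
    return headers + body

def create_container(image: str, cmd: list) -> bytes:
    """Send the request over the daemon's Unix socket and return the
    raw HTTP response (only works where docker.sock is available)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/run/docker.sock")
        sock.sendall(build_create_request(image, cmd))
        return sock.recv(65536)
```

In practice you would mount the socket with -v /var/run/docker.sock:/var/run/docker.sock when starting container A, or point at a TCP endpoint configured with -H as described above.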

How to access a Process running on docker on a host from a remote host

How can I access or connect to a process running in Docker on host A from a remote host B?
Consider a host A with IP 192.168.0.3 which is running an application in Docker on port 3999. I want to access that application from a remote machine with IP 192.168.0.4 in the same subnet.
To be precise, I am running a Kafka producer on the server and am trying to receive messages using kafka-console-consumer.
Use --net=host to run your container and it'll use the host's network stack; then you can connect to the application running inside the container as if it were running directly on the host.
Port mapping: use the -p option to map a port inside your container to a port on your host, e.g. docker run -d -p <host port>:<container port> <image>; then you can connect to <host>:<host port> to reach your application inside the container.
Docker's built-in multi-host networking. In early releases the network driver was separate from Docker's core and you had to use third-party tools like Flannel or Weave for multi-host connections, but since release 1.9 it has been merged into Docker. You can follow its guide to set it up.
Hope this is helpful :-)
First you need to bind the Docker container's port to host A:
docker run -d -p 3999:3999 kafka-producer
Then you need to access host A from host B using IP:port:
192.168.0.3:3999
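From host B, you can check whether the published port actually accepts connections with a short Python sketch (is_reachable is an illustrative name; 192.168.0.3:3999 are the addresses from the question):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, e.g. to
    verify that the port published with -p is visible from host B."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is_reachable("192.168.0.3", 3999)
```

If this returns False from host B but True from host A itself, the usual suspects are a host firewall or a -p binding restricted to 127.0.0.1.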

port linking from docker container to host

I have the following situation. I have a service that listens on 127.0.0.1, port 1234 (this cannot be changed for security reasons). On the same machine runs a Docker container. I need to somehow connect to the service on the host from within the container. Because the service only accepts requests from 127.0.0.1, I need to somehow link the port from the container to the host port, but in reverse, so that when I connect from within the container to 127.0.0.1:1234, the service on the host receives the data. Is this possible?
Thanks.
With the default bridged network, you won't be able to connect from the container to a service on the host listening on 127.0.0.1. But you can use --net=host when running a container to use the host's network stack directly in the container. It removes some of the isolation, but then lets the container talk directly to 127.0.0.1 and to the services running on the host.
Question
How do I bind a Dockerized service to localhost:port?
Answer
Use -p like this: docker run -p 127.0.0.1:1234:1234 <other options> <image> <command>.
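That -p form makes the published port reachable only via the host's loopback interface; the host-side effect is the same as a plain socket bound to 127.0.0.1, sketched here (bind_loopback_only is an illustrative name):

```python
import socket

def bind_loopback_only(port: int = 0) -> socket.socket:
    """Bind a TCP listener to 127.0.0.1 only, mirroring what
    docker run -p 127.0.0.1:1234:1234 does on the host side:
    local processes can connect, other machines cannot."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))  # loopback only, not 0.0.0.0
    srv.listen(1)
    return srv
```

Binding to 0.0.0.0 instead (the default for a bare -p 1234:1234) would expose the port on every interface.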
