Easiest way to connect Docker container to local host - docker

I am wondering if it is possible to connect to an app on localhost from a Docker container.
I run two Docker containers which are connected to each other via the link option. But how can I connect one of the containers to the local host?

Yes, use docker run --network=container:<container-id>
--network=container:<name|id>: reuse another container's network stack
This lets you run a container that shares the same network interfaces (and therefore localhost) as another container.
Alternatively, you can use host mode to give your container the same network interfaces (including localhost) that the host has: docker run --network=host
--network=host: use the Docker host's network stack
Docs: https://docs.docker.com/engine/reference/run/#name-name
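A minimal sketch of both modes (the image name my-web-image and port 8080 are assumptions, not from the question):
# share another container's network stack, so localhost is shared between them
docker run -d --name web my-web-image
docker run --rm --network=container:web busybox wget -qO- http://localhost:8080
# or share the host's network stack instead
docker run --rm --network=host busybox wget -qO- http://localhost:8080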

I think it is possible.
Try communicating with the host's <ip>:<port>
ip: use ip addr or something similar to get the IP of eth0, not the one of docker0
port: the one you assigned to the app
To make the process easier, perhaps turn SELinux and the firewall off while you try.
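For example (a hedged sketch; the address 192.168.1.50 and port 8000 are placeholders for your host's eth0 IP and your app's port):
# on the host: find the LAN IP on eth0 (not docker0)
ip addr show eth0
# from inside the container: reach the app listening on the host
curl http://192.168.1.50:8000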

Related

How to access docker container in a custom network from another docker container running with host network

My program consists of a network of ROS1 and ROS2 nodes, which communicate via a publish/subscribe model.
Assume there are 4 nodes running inside a custom network: onboard_network.
Those 4 nodes (ROS1) can only communicate with each other, therefore we have a bridge node (ROS1 & ROS2) that needs to sit on the edge of both onboard_network and the host network. The reason why we need the host network is that the host is inside a VPN (Zerotier). Inside the VPN we also have our server (ROS2).
We also need the bridge node on the host network because ROS2 relies on multicast, which only works in host mode.
So basically, I want a docker compose file running 4 containers inside onboard_network and a container running on the host network. The last container needs to be reachable from the containers in onboard_network and be able to reach them too. How could I do it? Is it even possible?
If you're running a container on the host network, its network setup is identical to a non-container process running on the host.
A container can't be set to use both host networking and Docker networking.
That means, for your network_mode: host container, it can call the other containers using localhost as a hostname and their published ports: (because its network is the host's network). For your bridge-network containers, they can call the host-network container using the special hostname host.docker.internal on macOS or Windows hosts; on Linux they need to find some reachable IP address (this is discussed further in From inside of a Docker container, how do I connect to the localhost of the machine?).
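A rough sketch of the idea using plain docker run (the names ros-node-image, ros-bridge-image and the published port 11311 are assumptions; the same layout can be expressed in a compose file):
# bridge-network containers publish the ports the host-network container must reach
docker network create onboard_network
docker run -d --name ros-node-1 --network onboard_network -p 11311:11311 ros-node-image
# the bridge node shares the host's network stack, so it reaches the others via localhost:11311
docker run -d --name ros-bridge --network host ros-bridge-image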

How to expose the docker container ip to the external network?

I want to expose the container IP to the external network where the host is running, so that I can directly ping the Docker container IP from an external machine.
If I ping the Docker container IP from an external machine (where the machine hosting Docker and the machine I am pinging from are in the same network), I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the web server in this container (depending on firewall settings etc., of course...)
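For instance (hedged; 192.168.1.50 is a placeholder for the Docker host's LAN IP):
# on the Docker host
docker run -d -p 80:80 httpd:alpine
# from another machine on the same LAN, use the host's IP, not the container's
curl http://192.168.1.50/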
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that’s able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable via the published port via the host’s IP address or DNS name. In a Kubernetes context, you need to set up a Service that’s able to route traffic to the Pod (or Pods) that are running your container, and you ultimately reach containers via that Service.
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very-low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes. I suspect you aren’t actually. Don’t worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
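To make that concrete (a hedged sketch; the deployment name web and the ports are assumptions):
# plain Docker: publish a port, then reach the container via the host
docker run -d -p 8080:80 --name web httpd:alpine
curl http://<docker-host-ip>:8080/
# Kubernetes: expose the Pods behind a Service, then reach them via that Service
kubectl expose deployment web --port=80 --type=NodePort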
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static entry for the 172.17.0.0/16 network, with your Docker physical LAN IP as the gateway. You might need to also allow this forwarding in your Docker OS firewall configuration.
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped-port), you can remove port mapping from the container entirely.
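A hedged example of such a static route (192.168.1.50 stands in for the Docker host's LAN IP, and the curl assumes something in the container listens on port 80):
# on the external Linux machine, or as a static route on the LAN's router
sudo ip route add 172.17.0.0/16 via 192.168.1.50
# then the bridge address should be reachable directly
curl http://172.17.0.2/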
You need to create a new bridge Docker network and attach the container to this network. You should be able to connect this way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10/16 my-new-bridge-network container-name
If the problem persists, reload the Docker daemon and restart the service; this is a known issue.
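If it helps, a quick way to verify the attachment and to restart the daemon (systemd assumed):
docker network inspect my-new-bridge-network
sudo systemctl restart docker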

Access devices on local network when running Docker for Mac

I have some smart wifi devices on my network that I can see from a script on my Mac. But running the same script from within a Docker container, those devices are not visible.
I assume this is related to Docker for Mac's inability to connect to the host's network using --network host or network_mode: host. I also assume this issue wouldn't exist on a Linux machine but I don't have one to test on.
What is the workaround?
Edit:
Confirmed this works fine when running inside an Ubuntu VirtualBox VM, but I'd really rather not have to develop inside it.
If you start the container with the network option set to host, the container will share the network stack of the host. Thus any device reachable from your host should be reachable by the container.
docker run --network host ...
Adding containers to a network allows them to communicate with each other, but if you want to access other services running on the host, use host.docker.internal (available from Docker 18.03+). I had to do the same on a Mac mini setup to access an external service.
https://docs.docker.com/docker-for-mac/networking/
If you have to access a service on another host, you can set up an nginx server on the Docker host with a proxy_pass rule to direct traffic to the remote service.
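For example (hedged; port 8123 is a placeholder for whatever service your script talks to on the Mac):
# from inside a container on Docker for Mac, reach a service listening on the Mac itself
curl http://host.docker.internal:8123/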

Is there a way to add a hostname to an EXISTING docker container?

I have some containers that communicate with each other via their IPs on the Docker network.
I can use the -h or --hostname option when running a new container, but I want to set the hostname of an existing container.
Is it possible?
One way is to create a network and add the containers to it.
When adding a container to the network, you can use the --alias option of docker network connect. Like this:
Create a network:
docker network create <my-network-name>
Add containers to the network:
docker network connect --alias <hostname-container-1> <my-network-name> <container-1>
docker network connect --alias <hostname-container-2> <my-network-name> <container-2>
docker network connect --alias <hostname-container-3> <my-network-name> <container-3>
Each container can then see the other containers by their alias (the alias is used as a hostname); a quick check is shown below.
Enjoy.
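To verify (a hedged check; the placeholder names match the commands above, and it assumes ping is available in the image):
docker exec <container-1> ping -c 1 <hostname-container-2>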
Generally, you would need to stop/restart a container, in order to run it again with -h (--hostname) (unless you used --net=host)
If you cannot stop the container, you can try and (in an attached bash session) edit its /etc/hostname.
The hostname is immutable once the container is created (although technically you can modify /etc/hostname).
As suggested in another answer, you cannot change the hostname by stopping or restarting the container. There are no Docker engine client parameters for the start command that affect the hostname. That wouldn't make sense anyway, as starting a container simply launches the ENTRYPOINT process in a container filesystem that has already been created (i.e. /etc/hostname has already been written).
It is possible to synchronize the container hostname with the host by using the --uts=host parameter when the container is created. This shares the UTS namespace. I would not recommend --net=host unless you also want to share the host network devices (i.e. bypass the Docker bridge).
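A hedged illustration of the UTS-namespace sharing (alpine is just a convenient small image):
# prints the host's hostname, because the UTS namespace is shared with the host
docker run --rm --uts=host alpine hostname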

docker containers static IP to communicate two different hosts

Is it possible to change the IP of docker0 or to give Docker containers a static IP? By default the containers get IPs in the 172.17.0.2/16 range, but my network is 192.168.X.X/24. On the server where the containers run they can all communicate with each other, but connecting to them from another server fails.
How do you set up your cluster? Do you use Swarm? If so, you need to use a k/v storage backend to enable communication between two containers hosted on different hosts. Is this what you aim to do, or do you want the host to communicate with the container on the other host?
Anyway, the solution is similar.
I am rewriting a tutorial for Docker Swarm to pull-request into their Swarm docs; you may want to take a look: https://www.auzias.net/en/docker-network-multihost/
Have a nice day!
The problem can be fixed by using --network=host.
This will allow your container to use the host machine's network. For direct access to your container you can change the container's SSH port and reach it with that specific port number.
I answered a similar question here
https://stackoverflow.com/a/35359185/4094678
The difference in your case would be to create a network with the subnet 192.168.X.X/24 and then assign the desired IP address to the container with --ip
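Roughly like this (hedged; the subnet 192.168.2.0/24, the address, and the image name are placeholders, and the subnet must not collide with an existing LAN range):
docker network create --driver=bridge --subnet=192.168.2.0/24 static-net
docker run -d --network static-net --ip 192.168.2.10 my-image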
You can't change the docker0 IP address here, but you do have the option to create multiple networks.
Solution 1:
Start the container with the host network: --network=host
Solution 2:
Start the container exposing the port the cluster requires, so another node can communicate with it:
-p hostport:serviceport
Or, Solution 3:
Deploy the cluster on Docker Swarm.
