Docker container port unreachable after Docker host static IP changed

The host is CentOS with a static IP.
The container runs Debian on the bridge network, with ports 80 and 9101 mapped to the host.
After changing the host's static IP on eth0, the container ports can no longer be reached remotely, even if the container is recreated. Calls from the host itself still work.
I have to reboot the host machine to fix it.
Meanwhile, containers running in --net=host mode are not affected.

First of all, if you use --net=host, your port mapping with docker -p is not going to work any more, because you are exposing your host's network to your container.
You can't reach your container's port any more because, with --net=host, the container's application is actually bound to the host IP address at start-up, so when the IP gets changed it's not going to work until you restart your application. Then it starts to work again.
If you want to solve this issue temporarily, either don't use --net=host or simply create an iptables rule yourself to redirect traffic to your app.
If your app is listening on port 8080 and you want to expose it as 80, for example, this should work, I guess:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to 127.0.0.1:8080
This assumes your app is listening on localhost as well.
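One extra note: by default the kernel refuses to route external packets to 127.0.0.1, so for the DNAT rule above to work for remote clients you would also need to enable route_localnet on the incoming interface (eth0 here is just an assumed name):
sysctl -w net.ipv4.conf.eth0.route_localnet=1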

Related

How do I disable all the docker rules that are added to iptables for public access?

I am using docker-compose and just found out that all the ports exposed in my docker-compose.yml are actually added to iptables to allow world access. I had no idea, and this leaves me with a huge security hole.
The Docker page says to run: iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP
but that does nothing for me. I can still access my db server remotely without tunneling.
I'm not sure what my local IP address would be. I still want to allow internal connections on the host OS to those ports, just not from the world.
If you don't want the published ports to be publicly available, there are easier solutions than mucking about with Docker's iptables rules.
Just don't publish them.
You don't need to publish ports just to access a service. You can access any open container port simply by connecting to the container's IP address.
Publish them only on localhost.
Instead of writing -p 8080:8080, which publishes container port 8080 on host port 8080 on all interfaces, write -p 127.0.0.1:8080:8080, which will publish the port only on the loopback address. Now you can reach it on your host at localhost:8080, but it won't be available to anyone else.
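Since the question uses docker-compose, the same loopback-only binding works there too; a minimal sketch, with the service name, image, and port as placeholders:
services:
  db:
    image: postgres
    ports:
      - "127.0.0.1:5432:5432"   # published on loopback only, not world-reachable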

Can't access jupyterhub in a docker container with network_mode: host

I have JupyterHub running in a container with network_mode: host due to some requirement.
However, after setting network_mode to host in my docker-compose file, I can't access JupyterHub from an external host using the host IP on port 8000.
My understanding from the docs is:
If you use the host network mode for a container, that container's network stack is not isolated from the Docker host (the container shares the host's networking namespace), and the container does not get its own IP address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container's application is available on port 80 on the host's IP address.
Is there anything I am missing?
EDIT:
To simplify, I followed the instructions here:
docker run --rm -d --network host --name my_nginx nginx
I can access the nginx welcome page with
$ curl localhost:80
but if I try to curl from another host I get
$ curl 10.230.0.123:80
curl: (7) Failed connect to 10.230.0.123:80; No route to host
This issue can happen when the firewall on your system is active and is blocking access to the port. You can allow access to the port as shown below:
# in centos7, by updating iptables rules
iptables -I INPUT 5 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# in ubuntu
sudo ufw allow 80/tcp
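Note that on CentOS 7 the default firewall front-end is actually firewalld, so if that is what is managing your rules, the equivalent (and persistent) change would be:
# in centos7, with firewalld
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload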

Docker containers in a user-defined docker network - access only from the host

I have an application that creates a few containers in a user-defined docker network.
Currently I have forwarded (mapped) a few ports from some of the containers in that network to the host machine so that I can access them from the host. The interaction between the containers (container to container) happens via aliases defined in the network.
Unfortunately, the mapped ports are publicly exposed on my host machine. Is there a way to make these mapped ports accessible only from the localhost of my host machine?
If you are using docker run -p [port-number]:[port-number] to forward your ports, you can use:
docker run -p 127.0.0.1:80:80 container
instead of:
docker run -p 80:80 container
By default, Docker exposes your ports on all available interfaces.
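If you want loopback-only publishing to be the default for all containers, the daemon's default bind address can be changed as well; a sketch of /etc/docker/daemon.json (restart the docker daemon afterwards):
{
  "ip": "127.0.0.1"
}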
If you are on Linux you can use iptables for that.
iptables -A INPUT -p tcp -s localhost --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
Just change 8080 to the port you want, and run the pair once for each port you are exposing.
The first command says "allow anything coming from localhost to port 8080" and the second "drop anything else coming in to port 8080". Note that traffic to ports published by Docker is DNAT-ed in PREROUTING and flows through the FORWARD chain rather than INPUT, so for those ports the rules may need to go in the DOCKER-USER chain instead.
This change is not permanent; it will reset after you reboot, but you can save it with:
iptables-save > /etc/iptables.conf
And restore it with:
iptables-restore < /etc/iptables.conf

Set docker container IP to a global IP

I am currently trying to assign an IP to a container so that it can be accessed by any other machine on the network. The network my docker host machine sits in is 9.158.143.0/24 with gateway 9.158.143.254.
I tried setting the IP (any free one) within the subnet 9.158.143.0/24 with the network type as bridge. It did not work (I was not even able to ping the container from the docker host machine).
Then I created a user-defined network with subnet 9.10.10.0/24 and the bridge driver. The container created this way is pingable, but only from the docker host machine.
Is there any way this container can be made accessible from all machines on the network (not just the docker host)?
PS: I do not want to expose the port. (Is changing routes helpful? I know very little networking.)
I also tried a user-defined network with the macvlan driver. In that case I am able to ping the container from other machines, but not from the docker host machine where the container is present.
To do this, you need to:
1- dedicate a new address for this container, on your local network;
2- configure this address as a secondary address on your docker engine host;
3- enable routing between local interfaces on your docker engine host;
4- use iptables to redirect the connections to this new address to your container.
Here is an example:
We suppose:
your container instance is named mycontainer
your LAN network interface is named eth0
you have chosen the available network address 192.168.1.178 on your LAN
So, you need to do the following:
start your container (in this example, we start a web server):
docker run --rm -t -i --name=mycontainer httpd:2.2-alpine
enable local IP routing:
sysctl -w net.ipv4.ip_forward=1
add a secondary address for your container:
ip addr add 192.168.1.178/24 dev eth0
redirect connections for this address to your container:
First, find your container's local address:
% docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
172.17.0.2
Now, use this local address with iptables:
iptables -t nat -A PREROUTING -i eth0 -d 192.168.1.178/32 -j DNAT --to 172.17.0.2
Finally, you can check it works by accessing http://192.168.1.178/ from any other host on your LAN.

docker: mutual access of container and host ports

From my docker container I want to access the MySQL server running on my host at 127.0.0.1, and I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
093695f9bc58        f29963c3b74f        "/bin/sh -c '/root/br"   4 minutes ago       Up 4 minutes                            elated_volhard
$
$ docker port 093695f9bc58
If I don't have --net='host', the ports are exposed, and I can access the web server on the container.
How can the host and container mutually access each others ports?
When you use --expose, per the docs:
The port number inside the container (where the service listens) does not need to match the port number exposed on the outside of the container (where clients connect). For example, inside the container an HTTP service is listening on port 80 (and so the image developer specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be bound to 42800 on the host. To find the mapping between the host ports and the exposed ports, use docker port.
With --net=host:
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here you have nothing under "PORTS" because all ports are open on the host.
If you don't want to use the host network, you can access host ports from the docker container via the docker bridge interface; see these questions, and the sketch below:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?
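On Docker 20.10+ there is also the host-gateway alias; a sketch, with the image name as a placeholder (mysql must be listening on an interface reachable from the bridge, see below):
docker run --add-host=host.docker.internal:host-gateway -it myimage
# inside the container, connect to host.docker.internal:3306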
When you want to access the container from the host, you need to publish its ports to a host interface.
The -P option publishes all the ports to the host interfaces. Docker binds each exposed port to a random port on the host. The range of ports is within an ephemeral port range defined by /proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly map a single port or range of ports.
In short, when you define just --expose 8000, the port is not published on 8000 but (with -P) on some random host port. When you want to make port 8000 visible to the host, you need to publish it explicitly with -p 8000:8000.
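To make the difference concrete, a quick sketch (the image name is a placeholder):
docker run -d -P --expose 8000 --name web1 myimage   # published on a random host port
docker port web1                                     # e.g. 8000/tcp -> 0.0.0.0:49153
docker run -d -p 8000:8000 --name web2 myimage       # published on host port 8000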
Docker's network model is to create a new network namespace for your container. That means the container gets its own 127.0.0.1. If you want a container to reach a mysql service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it effectively turns off all of the other network features that docker has: you don't get isolation, you don't get port expose/publishing, etc.
The best solution is probably to make your mysql server listen on an interface that is routable from the docker containers.
If you don't want to make mysql listen on your public interface, you can create a bridge interface, give it a spare IP (make sure you don't have any conflicts), connect it to nothing, and configure mysql to listen only on that IP and 127.0.0.1. For example:
sudo brctl addbr myownbridge
sudo ifconfig myownbridge 10.255.255.1 netmask 255.255.255.255
sudo docker run --rm -it alpine ping -c 1 10.255.255.1
That IP address will be routable from both your host and any container running on that host.
Another approach would be to containerize your mysql server. You could put it on the same network as your other containers and get to it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
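A sketch of that last approach, with the network name and password as placeholders:
docker network create mynet
docker run -d --name mysql --network mynet -p 127.0.0.1:3306:3306 -e MYSQL_ROOT_PASSWORD=changeme mysql:8.0
# other containers on mynet reach it at mysql:3306; the host at 127.0.0.1:3306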
