Docker not binding to specified IP - docker

I've made the following edit to /etc/docker/daemon.json:
{
  "ip": "[ip address]"
}
To confirm, I inspect the network with docker network inspect [id]
"Options": {
"com.docker.network.bridge.host_binding_ipv4": "[ip address]",
"com.docker.network.bridge.name": "docker0",
},
Yet somehow all containers are still responding on the server's other IPs as well, not just the given IP.
How can I restrict Docker to a specific IP?
NOTE:
https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
Or if you always want Docker port forwards to bind to one specific IP address, you can edit your system-wide Docker server settings and add the option --ip=IP_ADDRESS. Remember to restart your Docker server after editing this setting.

The config file /etc/docker/daemon.json affects the Docker daemon, not the container instances.
You have to configure each container to bind only to the IP you want.
For example, with a direct run:
docker run --rm -p 127.0.0.1:80:80 nginx
This will make the container bind port 80 only on the IP 127.0.0.1.
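To get the binding you were after, substitute your server's address for 127.0.0.1 in the -p mapping (the placeholder below stands for the address from your daemon.json):
docker run --rm -p [ip address]:80:80 nginx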

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a docker bridge network. One of them runs an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container runs a server that is listening on port 8081. I have verified that both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number. Port remappings from docker run -p options are ignored for container-to-container traffic, and you don't need a -p option at all to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
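As a quick check (the container names come from your docker run commands; this assumes curl is installed in the proxy image), you can verify name-based resolution from inside the proxy container:
docker exec -it idpproxy curl -s http://geoserver:8080/   # assumes curl exists in the idpproxy image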
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
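A minimal sketch of such a Compose file, assuming only the image names from your docker run commands, could look like:
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"   # only needed to reach the proxy from outside Docker
  geoserver:
    image: geoserver:1.1.0
    # no ports: needed; idpproxy reaches this service at http://geoserver:8080/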
Most probably this is the reason:
when you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. See this link:
https://docs.docker.com/compose/

docker run: how to avoid overlap between Docker and external host on 172.18.x.x IP range

I'm using docker run to test my container locally. I found that it is unable to connect to a certain host on my company's network, failing with "no route to host". It turns out this host has an IP address of 172.18.x.x, which overlaps with Docker's networking.
So, is there a way to change the docker run configuration so that it doesn't claim this particular IP range? I've already tried changing the bip and default-address-pools options in the Docker daemon configuration file, but that didn't solve the problem.
Create a Docker config file at /etc/docker/daemon.json if one doesn't exist.
Add an entry to daemon.json with the subnet the Docker bridge (docker0) should run in (you can add your desired subnet here), under the "bip" key, e.g. "bip": "192.168.1.5/24"
Restart the Docker service: sudo systemctl restart docker
NOTE: Beware not to use the subnet's network address, i.e. an IP ending in 0 such as 192.168.1.0/24; "bip" must be a host address inside the subnet. The docker0 subnet should have enough addresses for all the containers on the machine that use the default network. skyformation does NOT use the default network.
Verify it worked by running ifconfig docker0 | grep -Po '(?<=inet )[\d.]+' . It should print the IP address specified under "bip", e.g. 192.168.1.5 for a daemon.json like this:
{ "bip": "192.168.1.5/24" }
Now, in your docker-compose file, add a networks section like this:
networks:
  isolated_nw:
    driver: bridge
    ipam:
      config:
        - subnet: 192.167.0.0/16
Now restart your container and verify by checking the container's IP address. It should work as expected.
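For example (the container name below is a placeholder), you can read the address straight from docker inspect:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-name>   # prints the container's current IP(s)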
The 172.18.x.x network (which ifconfig showed under a name like br############) seemed to be created when I started docker-compose.
And this network is deleted when I run docker-compose down.
Until it is deleted and recreated, its IP address range will not change.
Once it is recreated, the default-address-pools option takes effect and the new IP range is used.
My workaround solution
Step 1:
Add the following to /etc/docker/daemon.json.
{
  // ...
  "default-address-pools": [
    {
      "base": "172.31.0.0/16",
      "size": 24
    }
  ]
  // ...
}
Step 2:
Restart the Docker service.
sudo systemctl restart docker
Step 3:
Recreate the docker-compose stack.
!!DANGER!! Before you do this, make sure that removing and recreating the containers is acceptable.
docker-compose down
docker-compose up -d
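To confirm the new pool is in use (the network name below is an assumption; Compose usually names it <project>_default), you can check the recreated network's subnet:
docker network inspect <project>_default --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # should now print a 172.31.x.x/24 subnet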

Rancher container taking over host IP

I have 2 IP addresses in my rancher host (centos): 1.1.1.1 and 2.2.2.2
1.1.1.1 is the IP address I want to use to access the rancher UI and SSH into the host.
I want to use 2.2.2.2 for accessing containers for an application. I have 2 containers, one nginx and one ssh. I configured the containers so that container port 80 maps to 2.2.2.2:80 and container port 22 maps to 2.2.2.2:22.
I have also changed the default run command for the Rancher container so it listens on ports 80 and 443 of IP 1.1.1.1.
If I go to my browser and access 1.1.1.1 I see rancher as expected, and if I access 2.2.2.2 I see my container app as expected.
However, if I try accessing 1.1.1.1:22 I end up connecting to the container ssh, which should only be listening on 2.2.2.2:22.
Am I missing something here? Is this a configuration issue on the host or the container? Can the container get access to something that it shouldn't even be aware of?
UPDATE
Let me try to clarify the setup:
Rancher is running on a host with 2 IP addresses. When I run Rancher, I execute the following commands, so it becomes attached to the first IP address:
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 1.1.1.1:80:80 -p 1.1.1.1:443:443 rancher/rancher
docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.7 --server https://rancher1.my.tld --token [token] --ca-checksum [checksum] --etcd --controlplane --worker
I have 4 containers configured in the rancher UI, which I want pointing to 2.2.2.2:22 and 2.2.2.2:80, 2.2.2.2:2222 and 2.2.2.2:8080
These are 2 environments for an application: 22 and 80 are the nginx and ssh containers for the LIVE environment (sharing a data volume between them), and the same goes for 2222 and 8080, these being for the QA environment. I use the ssh container to upload content to the nginx container through the shared data volume.
I don't see a problem with this configuration, except that when I configure the ssh container to use port 22 and then try connecting to the host's ssh, I get connected to the container's ssh instead.
UPDATE 2
Here is a screenshot from the port mapping settings in the container: https://snag.gy/idTjoV.jpg
Container port 22 mapped to IP 2.2.2.2:222
If I set that to 2.2.2.2:22, SSH to the host stops working, and ssh connections are established to the container instead.

How to modify docker image config shown by the inspect command

I created a Docker image for OpenVPN. But when I use the docker inspect command to get the config of this image, I always see this setting in ContainerConfig:
"ContainerConfig": {
"Hostname": "cfd8618fa650",
"ExposedPorts": {
"11194/tcp": {}
},
This is not good, because every time I run this image, it will expose port 11194 automatically even if I don't want it to. Does anyone know how to remove this config?
Note that 11194 is the OpenVPN port this image uses, so it's quite normal that it's exposed by Docker.
Anyway, if you have the Dockerfile, you can obviously build a new image after removing the EXPOSE 11194 line from the Dockerfile itself.
But if you run an image pulled directly from a repo, or you can't rebuild it, the port will still be exposed, but you can bind it to a specific IP.
Because the port mapping -p format can be
ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
you can bind the host port to a single address (e.g. localhost) instead of to the whole world, for example:
docker run -p 127.0.0.1:11194:11194 ...
So port 11194 (or whatever port number you assign locally) will be reachable from localhost only.
Otherwise you can close the port with iptables or another firewall.
The article Docker and IPtables explains Docker port binding, iptables forwarding rules, etc. in detail.
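As a rough sketch of the firewall approach (DOCKER-USER is the chain Docker leaves for user-defined rules; eth0 is an assumed external interface, adjust it to your setup):
# Drop outside traffic to the published OpenVPN port; eth0 is an assumption
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 11194 -j DROP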

In Docker, do I need to publish ports if I set network to host?

I was running into an issue today where I have a Dockerfile that EXPOSEs several ports and I wanted to run it with the --net=host flag.
However, all connections to the ports that the container was supposed to be listening on were refused.
Running docker inspect on the container I noticed this:
"Ports": {
"8000/tcp": {},
}
Growing exasperated, I deleted the --net flag altogether and went back to the default bridge network. Surprise, it works!
"Ports": {
"8000/tcp": null,
}
Except now it has this strange null setting. What is the difference here? Also, plot twist: I'm running inside of a VM trying to communicate with another VM. Probably a million reasons this won't work.
Question
Is the publish option needed when the network mode is host?
Answer
No, the host network stack is directly used by the container:
'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
Proof
Start a container with netcat:
user@host:~$ docker run -it --rm --net host nc:1.10-41
root@container:/# nc -l -p 9999
Back into the host:
user@host:~$ nc 127.0.0.1 9999
Sending a message for test <enter>
The message will be displayed by the netcat command executed within the container.
Monitoring
A netstat from the host will show the established connection:
user@host:~$ netstat -latuep | grep 9999
tcp 0 0 localhost:38600 localhost:9999 ESTABLISHED
tcp 0 0 localhost:9999 localhost:38600 ESTABLISHED
As for your issue
The error may stem from another configuration or network environment. Can the VMs ping each other? Do they share the same LAN? Is there a firewall in place?
