Make docker container only accessible from a certain IP - docker

Right now, when I bind a docker container port to a port on my computer, it can be accessed through every IP address belonging to my computer.
I know this because I tried connecting to the port from another computer using my Docker host's static LAN IP address.
I want to restrict that specific container so it is accessible only from my Docker host (127.0.0.1 or localhost). When I change my web server's listen address to localhost, it becomes inaccessible even from my Docker host (probably because that makes it local to the container, not the host).
How can I make a docker container local to the host?

If you run the container like this, it will be accessible only from 127.0.0.1:
docker run --rm -it -p 127.0.0.1:3333:80 httpd
--rm: I use it for testing; it removes the container after it exits.
-it: interactive TTY.
-p: port mapping; map port 3333 on the host to port 80 in the container, and restrict access to localhost only.
The docker-compose equivalent would be:
services:
  db:
    ports:
      - "127.0.0.1:80:80"

Related

Docker networks: How to get container1 to communicate with server in container2

I have 2 containers on a Docker bridge network. One of them has an Apache server that I am using as a reverse proxy to forward users to the server in the other container. The other container contains a server that is listening on port 8081. I have verified both containers are on the same network, and when I log into an interactive shell on each container I can successfully ping the other container.
The problem is that when I am logged into the container with the Apache server, I am not able to ping the actual server in the other container.
The IP address of the container with the server is 172.17.0.2.
How I create the docker network:
docker network create -d bridge jakeypoo
How I start the containers:
docker container run -p 8080:8080 --network="jakeypoo" --name="idpproxy" idpproxy:latest
docker run -p 8081:8080 --name geoserver --network="jakeypoo" geoserver:1.1.0
Wouldn't the URI to reach the server be
http://172.17.0.2:8081/
?
PS: I am sure more information will be needed; I am new to Stack Overflow and will happily answer any other questions I can.
Since you started the two containers on the same --network, you can use their --name as hostnames to talk to each other. If the service inside the second container is listening on port 8080, use that port number; any remapping from docker run -p options is ignored for container-to-container traffic, and you don't need a -p option to communicate between containers.
In your Apache config, you'd set up something like
ProxyPass "/" "http://geoserver:8080/"
ProxyPassReverse "/" "http://geoserver:8080/"
It's not usually useful to look up the container-private IP addresses: they will change whenever you recreate the container, and in most environments they can't be used outside of Docker (and inside of Docker the name-based lookup is easier).
(Were you to run this under Docker Compose, it automatically creates a network for you, and each service is accessible under its Compose service name. You do not need to manually set networks: or container_name: options, and like the docker run -p option, Compose ports: are not required and are ignored if present. Networking in Compose in the Docker documentation describes this further.)
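A minimal Compose sketch of that setup could look like the following (image names are taken from the question; Apache reaches the other service as http://geoserver:8080/, and only the proxy needs a ports: entry so it is reachable from outside):
services:
  idpproxy:
    image: idpproxy:latest
    ports:
      - "8080:8080"
  geoserver:
    image: geoserver:1.1.0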
Most probably this is the reason:
When you log into one of the containers, that container does not know anything about the other container's network. When you ping, that container thinks you are trying to ping a service inside itself.
Try to use Docker Compose if you can use it in your context. Refer to this link:
https://docs.docker.com/compose/

How to communicate with a running Docker container in a Host X from another Host Y(not from a container in Host Y)

I am experimenting with Docker networking, and I set up the following scenario:
I installed Docker on host-X, which is connected to a network (host-X IP: 60.0.0.28), and ran a basic Ubuntu container (the container is connected to the default Docker bridge network only, i.e. 172.17.0.0/16, and 172.17.0.2 is the container IP). Now I am trying to reach that running container from another host-Y on the same network (host-Y IP: 60.0.0.40), on which no Docker is installed.
I added a basic route on host-Y: "ip route add 172.17.0.0/16 via 60.0.0.28 dev ens3".
From the container I am able to ping host-Y. In the reverse case, from host-Y I am only able to ping the Docker gateway 172.17.0.1, but I am not able to reach the container.
There are a wide variety of situations where the Docker-internal IP addresses just aren't useful; calling from a different host is one of them. You should totally ignore those as an implementation detail.
If you take Docker out of the picture, and run the process directly on the host, this should be straightforward: from host Y, you can call the process on host X given its DNS name and the port the server is running on.
hostY$ curl http://hostX:12345/
If the process is actually running in a Docker container, you need to make sure you've started the container with a published port. This doesn't necessarily need to match the port the process is listening on.
hostX$ docker run -p 12345:12345 imagename
Once you've done this, the process can be reached via the host's DNS name or IP address, and the published port, the same way as with a non-container server.
In normal circumstances you should not need to think about the Docker-internal IP addresses; you do not need manual ip route setup commands like the one you show, and you shouldn't need docker inspect or docker run --ip to find or set this detail.
Let’s assume you want to start Dockerized nginx on host X.
You’d run:
docker run --detach -p 8080:80 nginx
Then you could access your nginx instance using http://60.0.0.28:8080.
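From host-Y (using the addresses in the question) that is simply:
hostY$ curl http://60.0.0.28:8080/
No ip route entry for 172.17.0.0/16 is needed for this to work.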

docker container is not accessible from other machines on host's network

I was doing some devops and writing a script to turn my current host/nginx and server/nginx setup into a host/docker/nginx and server/docker/nginx setup, so I can keep directories etc. the same between them.
The problem is that any ports I expose on a docker container are only accessible on the host and not from any other machines on the host's network.
When typing 192.168.0.2 from a machine such as 192.168.0.3, it just says "took too long to respond", but typing 192.168.0.2 from 192.168.0.2 brings up the "Welcome to nginx" page?! The interesting part is that I did a Wireshark analysis on en0 on port 80 and there are actually some packets coming through.
See pastebins of packet inspections:
LAN to docker: https://pastebin.com/4qR2d1GV
Host to docker: https://pastebin.com/Wbng9nDB
I've tried using docker run -p 80:80 nginx/nginx, docker run -p 192.168.0.2:80:80 nginx/nginx and docker run -p 127.0.0.1:80:80 nginx/nginx, but none of these seem to fix anything.
I should see "Welcome to nginx" when connecting from 192.168.0.3 to 192.168.0.2.
This is in my dev environment, which is an OS X 10.13.5 system.
When I push this to my Ubuntu 16.04 server it works just fine, with the containerized nginx accessible from the WWW, and when I run nginx on my host without Docker I can connect from external machines on the network too.
Your description is a bit confusing: the 127.0.0.1 within the port line will bind it to localhost only, so you won't be able to access the container from another machine. Remove the IP address and you should be able to access the container from outside localhost.
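As a quick check (a sketch, not tied to your image; web is just an example name), the PORTS column of docker ps shows which host address a published port is bound to:
$ docker run -d --name web -p 80:80 nginx
$ docker ps --filter name=web --format '{{.Ports}}'
0.0.0.0:80->80/tcp
A 0.0.0.0 binding is reachable from other machines such as 192.168.0.3; a 127.0.0.1:80->80/tcp entry would be reachable from the host only.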

Rancher container taking over host IP

I have 2 IP addresses in my rancher host (centos): 1.1.1.1 and 2.2.2.2
1.1.1.1 is the IP address I want to use to access the rancher UI and SSH into the host.
I want to use 2.2.2.2 for accessing containers for an application. I have 2 containers, one nginx and one SSH. I configured the containers to map container port 80 to host port 2.2.2.2:80 and container port 22 to host port 2.2.2.2:22.
I have also changed the default run command for the rancher container to listen on port 80 and 443 of IP 1.1.1.1
If I go to my browser and access 1.1.1.1 I see rancher as expected, and if I access 2.2.2.2 I see my container app as expected.
However, if I try accessing 1.1.1.1:22, I end up connecting to the container's SSH, which should only be listening on 2.2.2.2:22.
Am I missing something here? Is this a configuration issue on the host or the container? Can the container get access to something that it shouldn't even be aware of?
UPDATE
Let me try to clarify the setup:
Rancher is running in a host with 2 IP addresses. When I run rancher, I execute the following command, so it becomes attached to the first IP address:
docker run -d --volumes-from rancher-data --restart=unless-stopped -p 1.1.1.1:80:80 -p 1.1.1.1:443:443 rancher/rancher
docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.7 --server https://rancher1.my.tld --token [token] --ca-checksum [checksum] --etcd --controlplane --worker
I have 4 containers configured in the rancher UI, which I want pointing to 2.2.2.2:22 and 2.2.2.2:80, 2.2.2.2:2222 and 2.2.2.2:8080
These are 2 environments for an application: 22 and 80 are the nginx and SSH containers for the LIVE environment (sharing a data volume between them), and the same goes for 2222 and 8080, which are for the QA environment. I use the SSH container to upload content to the nginx container through the shared data volume.
I don't see a problem with this configuration, except that once I configure the SSH container to use port 22, trying to connect to the host's SSH gets me connected to the container's SSH instead.
UPDATE 2
Here is a screenshot from the port mapping settings in the container: https://snag.gy/idTjoV.jpg
Container port 22 mapped to IP 2.2.2.2:222
If I set that to 2.2.2.2:22, SSH to the host stops working, and SSH connections are established to the container instead.

docker: mutual access of container and host ports

From my docker container I want to access the MySQL server running on my host at 127.0.0.1. I want to access the web server running in my container from the host. I tried this:
docker run -it --expose 8000 --expose 8001 --net='host' -P f29963c3b74f
But none of the ports show up as exposed:
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS   NAMES
093695f9bc58   f29963c3b74f   "/bin/sh -c '/root/br"   4 minutes ago   Up 4 minutes           elated_volhard
$
$ docker port 093695f9bc58
If I don't have --net='host', the ports are exposed, and I can access the web server on the container.
How can the host and container mutually access each others ports?
When you use --expose, the documentation says:
The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container
an HTTP service is listening on port 80 (and so the image developer
specifies EXPOSE 80 in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use docker port.
With --net=host
--network="host" gives the container full access to local system services such as D-bus and is therefore considered insecure.
Here you have nothing listed under PORTS because with --net=host there is no port mapping: the container shares the host's network stack and its ports are open on the host directly.
If you don't want to use the host network, you can access a host port from the docker container via the docker bridge interface, as sketched after these links:
- How to access host port from docker container
- From inside of a Docker container, how do I connect to the localhost of the machine?.
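A rough sketch of that idea on Linux (172.17.0.1 is the usual default address of the docker0 bridge, but it may differ on your system, and MySQL must be configured to listen on that interface):
$ docker run --rm alpine ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2
From inside the container, 172.17.0.1 is the host side of the bridge, so a MySQL server listening on it would be reachable as 172.17.0.1:3306.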
When you want to access the container from the host, you need to publish its ports to a host interface.
The -P option publishes all the ports to the host interfaces. Docker
binds each exposed port to a random port on the host. The range of
ports are within an ephemeral port range defined by
/proc/sys/net/ipv4/ip_local_port_range. Use the -p flag to explicitly
map a single port or range of ports.
In short, when you define just --expose 8000 and publish with -P, the port is not published on the host as 8000 but as some random port. When you want port 8000 to be visible on the host as 8000, you need to map it explicitly with -p 8000:8000.
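For example, a sketch using the image ID from the question (the container ID passed to docker port is a placeholder):
$ docker run -d -p 8000:8000 -p 8001:8001 f29963c3b74f
$ docker port <container-id>
8000/tcp -> 0.0.0.0:8000
8001/tcp -> 0.0.0.0:8001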
Docker's network model is to create a new network namespace for your container. That means that container gets its own 127.0.0.1. If you want a container to reach a mysql service that is only listening on 127.0.0.1 on the host, you won't be able to reach it.
--net=host will put your container into the same network namespace as the host, but this is not advisable since it effectively turns off all of the other network features that Docker has: you don't get isolation, you don't get port expose/publishing, etc.
The best solution will probably be to make your MySQL server listen on an interface that is routable from the docker containers.
If you don't want to make MySQL listen on your public interface, you can create a bridge interface, give it a random IP (make sure you don't have any conflicts), connect it to nothing, and configure MySQL to listen only on that IP and 127.0.0.1. For example:
sudo brctl addbr myownbridge
sudo ifconfig myownbridge 10.255.255.255
sudo docker run --rm -it alpine ping -c 1 10.255.255.255
That IP address will be routable from both your host and any container running on that host.
Another approach would be to containerize your mysql server. You could put it on the same network as your other containers and get to it that way. You can even publish its port 3306 to the host's 127.0.0.1 interface.
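A minimal sketch of that approach (the network name appnet and the MYSQL_ROOT_PASSWORD value are placeholders for illustration):
$ docker network create appnet
$ docker run -d --name mysql --network appnet -e MYSQL_ROOT_PASSWORD=changeme -p 127.0.0.1:3306:3306 mysql
$ docker run -d --name web --network appnet -p 8000:8000 f29963c3b74f
Other containers on appnet reach the database as mysql:3306, while the host reaches it as 127.0.0.1:3306.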
