Hostname not accessible from outside the container in Docker

I have created the following docker compose file:
version: '2'
services:
  testApp:
    image: nginx
    hostname: myHost
    ports:
      - "8080:80"
    networks:
      - test
networks:
  test:
    driver: bridge
From outside the container, I can open the web page with localhost:8080. But if I try to open the web page via the defined hostname, it doesn't work.
Can anyone tell me how I can fix this issue?
Is there also a way to connect to the container's IP from the outside?
Thanks in advance.

Other containers on the test network would be able to reference it by that hostname, but not your host machine. You are binding port 8080 on your machine to port 80 on the container, so any external system that you would want to access the website would need to connect to your host machine on 8080 (as you did with localhost:8080).
How you do that depends on your networking, but for example, if you know the IP or hostname of your local machine, you can (probably) connect from another device on the same home network (your phone, or another computer) using http://{ip-of-your-host}:8080. Exposing that to the internet from within a home network typically requires port forwarding on your router, and optionally a domain name.
The main point though is that the hostname in your compose is only relevant to other containers connecting to the same docker network (test). Outside of that, systems would need to make a connection to 8080 on your host machine.
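To see the hostname in action inside the network, here is a minimal sketch (the curlimages/curl image and the client service are my additions, not part of your setup) in which a second container on the same test network resolves myHost:

version: '2'
services:
  testApp:
    image: nginx
    hostname: myHost
    ports:
      - "8080:80"
    networks:
      - test
  client:
    image: curlimages/curl   # hypothetical helper container
    command: curl -s http://myHost   # resolved by Docker's embedded DNS on the shared network
    networks:
      - test
networks:
  test:
    driver: bridge

From the host machine, by contrast, only localhost:8080 (or the machine's LAN IP on port 8080) works.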

Related

Docker container networking - internal ports open to everyone

I am new to Docker and am having trouble setting up the network between the containers so that it does not allow unnecessary connections from outside.
I have Docker running on a VPS with three containers, on the remote IP 123.xxx.xxx.xxx:

container name   published ports   IP address
sqldb            3306:3306         172.xxx.xxx.4
applet1          80:3306           172.xxx.xxx.5
applet2          4444:4444         172.xxx.xxx.3
One is a database and two are Java apps. The trouble I am having right now is that when I create the containers, their ports become exposed to the global internet, so my database sqldb is exposed at 123.xxx.xxx.xxx:3306.
Right now my Java apps connect through JDBC like so: jdbc:mysql://172.xxx.xxx.4:3306/db.
I am trying to accomplish the following:
port 80 on the host, so that 123.xxx.xxx.xxx connects to the Java app applet1.
The goal is to give applet1 the ability to connect to sqldb and also to applet2, but I don't want unnecessary ports to be exposed to the whole internet. Preferably the internal URIs would be left as they are, but connections from outside (apart from SSH on port 22 and TCP on port 80) would be forbidden for ports 4444 and 3306. Also, I don't yet know how to use docker-compose, so if possible, how can I solve this without it?
*I have heard you can connect to containers by container name, like jdbc:mysql://sqldb/db, but I have not had success with that yet.
If all your containers are running on the same docker bridge network, you don't need to expose any ports for them to communicate with each other.
Docker Compose is a particularly good tool for organising several containers like this, as it automatically configures a network for you:
# docker-compose.yaml
version: '3.9'
services:
  sqldb:
    image: sqldb
  applet1:
    image: applet1
    ports:
      - '80:3306' # you sure about this container port?
    depends_on:
      - sqldb
  applet2:
    image: applet2
    depends_on:
      - sqldb
Now only your applet1 container will have a host port mapping. Both applets will be able to connect to any other service within the network on their container ports.
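This also addresses the footnote in the question: within the Compose network, each service is reachable by its service name through Docker's embedded DNS. Assuming MySQL listens on its default port 3306 inside the sqldb container, the JDBC URL would look like:

jdbc:mysql://sqldb:3306/db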

docker-compose: open port in container but not bind it from host

(Note: the whole problem is because I misread the IP address of the Docker network. The my-network subnet is 172.22.0.0/16, not 127.22.0.0/16. I slightly modified the OP to reflect the original problem I encountered.)
I created a service (a small web server) using docker-compose. The network part is defined as
services:
  service:
    image: ... # this image uses port 9000
    ports:
      - 9000:9000
networks:
  default:
    name: my-network
After docker-compose up, I observe:
- the host gets the IP address 172.22.0.1 and the container gets 172.22.0.2
- I can successfully ping the container from the host: ping 127.22.0.2
From the host machine, the web server can be reached using:
- 127.22.0.1:9000
- 127.22.0.2:9000
- localhost:9000
- 192.168.0.10:9000 (this is the host's IP address in the LAN)
Now I want to restrict access from the host to 172.22.0.2:9000 only. I feel this should be possible if I don't bind the container's port 9000 to the host's port 9000, so I deleted the ports: 9000:9000 part from the docker-compose.yml. Now I observe:
- all four methods above no longer work, including 127.22.0.2:9000
- the container can still be pinged from the host using 127.22.0.2
I think: since the host and the container are both in the bridge network my-network and have obtained their IP addresses, the web server should still be reachable at 127.22.0.2:9000. But this is not the case.
My questions:
- Why does it work like this? Shouldn't the host and container in the same subnet 127.22.0.0/16 be able to talk to each other freely?
- How do I achieve what I want: not forward port 9000 from the host to the container, and only allow access to the container via its subnet IP address?
Your understanding of the networking is correct. Removing the port binding from the docker-compose.yml will remove the exposed port from the host. Since the host is also part of the virtual network my-network with an IP in the same subnet as the container, your service should be reachable from the host using the container IP directly.
But I think this is actually a simple typo: instead of 127.22.0.0/16 you actually have 172.22.0.0/16 as the subnet for my-network! This is a typical subnet used by Docker in the default configuration, while 127.0.0.0/8 is always bound to the loopback device!
So connecting to 127.22.0.2 will actually connect you to localhost, which is consistent with the symptoms you encountered:
- connecting to 127.22.0.2:9000 works only if the port is exposed on the host
- you can always ping 127.22.0.2, since it is a loopback address
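You can verify this on the host with two quick checks (a sketch; the network name my-network is taken from the compose file, and the printed subnet will depend on your setup):

docker network inspect my-network --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'   # prints the real subnet, e.g. 172.22.0.0/16
ip route get 127.22.0.2   # shows the address is routed to the loopback device lo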

After `docker-compose.yml` uses an IPv4 network identical to the WiFi's subnet, why are some websites not accessible?

Context: I am using docker-compose.yml to set up a container for MongoDB, where the network is set up as follows:
...
services:
  mongo:
    networks:
      mongodb_net:
        ipv4_address: 192.168.178.23
networks:
  mongodb_net:
    ipam:
      config:
        - subnet: 192.168.178.0/24
...
which is exactly the same as the IP address of my WiFi connection.
Question:
After the setup above, why are some websites no longer accessible in my browser (e.g. ping doesn't return any packets)?
I tried changing the YAML file to another IP address, and the problem went away. But I want to understand the reason. Is it because the Docker network occupies the same subnet as the WiFi, thereby interrupting normal internet access?
Docker defines its own network setup. You can see some details of this on Linux by running ifconfig and looking at iptables output. If you manually configure a Docker network to have the same CIDR block as your external network, you can wind up in a sequence where:

1. I want to call 8.8.8.8.
2. It's not on any of my local networks, so I'll route to the default gateway 192.168.178.1.
3. That address is on the docker1 network 192.168.178.0/24.

...and the outbound packets never actually leave your host.
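You can watch this conflict from the Linux host with a few standard commands (just the commands; interface and network names vary per machine):

ip route                 # lists routes, including the bridge Docker created for 192.168.178.0/24
iptables -t nat -L -n    # shows Docker's NAT rules
docker network ls        # lists the networks Compose created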
You should almost never need to manually configure IP addresses or networks in Docker. It has its own internal network setup and handles this for you. In a Compose context, Compose will also do some additional setup that you generally need, like creating a default network; Networking in Compose has more details.
To get access to a container from outside of Docker space, you need to publish ports: out of that container, and then it will be reachable on your host's IP address at the published port.
services:
  mongo:
    ports: ['27017:27017']
    # no networks: or manual IP configuration; just use the `default` network
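A client outside of Docker then connects through the published port on the host's address; for example, assuming the mongosh shell is installed on the host:

mongosh mongodb://localhost:27017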

Access Docker Container Port in a Ubuntu VM

Given an Ubuntu VMware machine (IP: 192.168.10.35) that runs a Docker image inside (IP: 172.0.18.2), and given this docker-compose.yml, how would I access the Docker image from my local machine?
version: '3'
services:
  sc2:
    build: .
    ports:
      - 127.0.0.1:4620:80
    restart: always
    networks:
      - default
    volumes:
      - ./sc2ai:/sc2ai
      - ./apache/000-default.conf:/etc/apache2/sites-available/000-default.conf
networks:
  default:
I tried to access 192.168.10.35:4620, but the connection failed. What am I missing? Is there an option missing in the docker-compose file, or do I need to forward ports from inside the VM to the Docker image?
PS: If I start the image in docker-for-windows on my local machine I can access it via http://localhost:4620.
You can't, because you've explicitly declared that the container (not the image) is only reachable from the VM itself. The declaration
ports:
  - 127.0.0.1:4620:80
forwards inbound connections on port 4620 on the host to port 80 in the container, but only on the interface bound to 127.0.0.1, which is the dedicated loopback interface (often named lo). When you try to contact the VM from your local machine, the connection arrives on the VM's external IP 192.168.10.35, where nothing is listening.
If you remove the explicit port binding, Docker will listen on all interfaces, which is usually what you want, and then you should be able to reach the container via the VM's external IP address.
ports:
  - '4620:80'
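If you would rather keep an explicit bind address instead of listening on all interfaces, the same hostIP:hostPort:containerPort syntax lets you bind to one specific address; a sketch using the VM's LAN IP from the question:

ports:
  - '192.168.10.35:4620:80'   # reachable from the LAN, but no longer via 127.0.0.1 inside the VM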
(Terminology: an image is a set of static filesystem content; you launch containers from an image and make network connections to the running containers. You can't directly see what's inside an image, an image doesn't have any running processes, and you can't connect to an image on its own.)

Route to host machine instead of particular container

I have a simple docker-compose file:
version: '3'
services:
  app:
    container_name: app
    ports:
      - 8081:8081
  db:
    container_name: db
    ports:
      - 5432:5432
By default, these containers are created on the default (bridge) network.
The app has the DB connection property jdbc:postgresql://db/some_db, and everything works perfectly. But from time to time I want the app to connect to another DB that runs on my host machine, not in a Docker container.
The main problem is that I cannot change my connection properties. And, ideally, I do not want to run a new container with additional options every time I want to switch the DB host (but a restart is ok).
Hence my question: what is the best way to achieve this? Is it possible to set up an additional route for the containers' host resolution? For example: if the db container is unreachable, then route to the host.
You can access host services from your container.
See "How to access host port from docker container":

# On Linux: the host is reachable at the docker0 bridge address
ip addr show docker0

# On Docker for Mac (17.06+, June 2017): use the special DNS name
docker.for.mac.localhost
"if db container is unreachable, then route to host"
That is a job for an orchestrator.
For instance, with Kubernetes, you can associate an external load balancer, which could be tuned to redirect traffic to your pod, or elsewhere when the pod is not accessible.
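Within plain Docker, on engines that support the host-gateway alias (20.10+), one way to keep the unchanged jdbc:postgresql://db/some_db URL while pointing db at the host is a Compose override file; this is a sketch (the file name is my own), relying on extra_hosts writing to the container's /etc/hosts, which the resolver consults before Docker's DNS:

# docker-compose.host-db.yml (hypothetical override file)
# run with: docker compose -f docker-compose.yml -f docker-compose.host-db.yml up
services:
  app:
    extra_hosts:
      - 'db:host-gateway'   # "db" now resolves to the host instead of the db container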
