Docker container networking - internal ports open to everyone

I am new to Docker and am having trouble setting up the network between the containers so that unnecessary connections from outside are not allowed.
I have Docker running on a VPS with three containers, at remote IP 123.xxx.xxx.xxx:
container name    published ports    IP address
sqldb             3306:3306          172.xxx.xxx.4
applet1           80:3306            172.xxx.xxx.5
applet2           4444:4444          172.xxx.xxx.3
One is a database and two are Java apps. The trouble I am having right now is that when I create the containers, their ports become exposed to the global internet, so my database sqldb is reachable at 123.xxx.xxx.xxx:3306.
Right now my Java apps connect through JDBC like so: jdbc:mysql://172.xxx.xxx.4:3306/db.
I am trying to accomplish the following:
Port 80 on the host, i.e. 123.xxx.xxx.xxx:80, connects to the Java app applet1.
The goal is to give applet1 the ability to connect to sqldb and also to applet2, but I don't want unnecessary ports to be exposed to the whole internet. Preferably the internal URIs would stay as they are, but connections from outside (apart from SSH on port 22 and TCP on port 80) would be forbidden for ports 4444 and 3306. Also, I don't yet know how to use docker-compose, so if possible, how can I solve this without it?
*I have heard you can connect to containers by container name, like jdbc:mysql://sqldb/db, but I have not had success with that yet.

If all your containers are running on the same docker bridge network, you don't need to expose any ports for them to communicate with each other.
Docker Compose is a particularly good tool for organising several containers like this, as it automatically configures a network for you:
# docker-compose.yaml
version: '3.9'
services:
  sqldb:
    image: sqldb
  applet1:
    image: applet1
    ports:
      - '80:3306' # you sure about this container port?
    depends_on:
      - sqldb
  applet2:
    image: applet2
    depends_on:
      - sqldb
Now only your applet1 container will have a host port mapping. Both applets will be able to connect to any other service within the network on their container ports.
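If you would rather not use docker-compose yet, the same layout can be built with plain docker commands: create a user-defined bridge network, attach all three containers to it, and only publish the one port you want reachable from the internet. A minimal sketch, assuming the image names from the question and assuming applet1 listens on port 8080 inside its container (adjust to whatever it really uses):

docker network create appnet

# no -p flag: sqldb is reachable only from other containers on appnet
docker run -d --name sqldb --network appnet sqldb

# only applet1 publishes a port on the host (80 outside -> 8080 inside, an assumption)
docker run -d --name applet1 --network appnet -p 80:8080 applet1

docker run -d --name applet2 --network appnet applet2

applet1 can then reach the database as jdbc:mysql://sqldb:3306/db, because user-defined networks resolve container names via Docker's built-in DNS.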

Related

Docker Compose: exposing ports only to other containers

I'm trying to figure out best practices for Docker Compose, which I'm pretty new to.
Scenario:
I have one container with a Flask app, served through gunicorn (which binds on port 8080).
I have an nginx container that acts as a reverse proxy for TLS termination, request routing to other containers, etc. It needs to be able to forward requests to gunicorn/Flask in the first container.
I'm using Docker Compose to do a quick deployment of these containers. Since the gunicorn/Flask container should only accept traffic from nginx, and not any direct requests, I'd like to configure it such that the gunicorn/Flask port (8080 in the container) is only exposed to the nginx container, and not to anything else on the host.
Is this possible?
Just don't publish ports: for the thing you don't want accessible from outside Docker space.
version: '3.8'
services:
  backend:
    build: .
    # no ports:
  proxy:
    image: nginx
    volumes: ['./nginx.conf:/etc/nginx/nginx.conf']
    ports:
      - '80:80'
Your Nginx configuration can still proxy_pass http://backend:8080, but the Flask application won't be directly reachable from outside of Docker.
ports: specifically allows connections from outside Docker space into the container. It's not a required option. Connections between containers don't require ports:, and if ports: remaps one of the application ports, container-to-container connections ignore that remapping.
Technically this setup only allows connections from within the same Docker network, so if you had other services in this setup they could also reach the Flask container. On a native-Linux host there are tricks to directly contact the container, but they don't work on other host OSes or from another system. IME usually these limitations are acceptable.
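For reference, a minimal sketch of what the mounted nginx.conf could look like; only the proxy_pass http://backend:8080 part comes from the answer above, the rest is an assumption about your setup:

events {}
http {
  server {
    listen 80;
    location / {
      # "backend" is resolved by Docker's internal DNS to the backend service
      proxy_pass http://backend:8080;
    }
  }
}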
Yes, this is one of the main use cases for docker-compose. You basically only need to publish the nginx port in your docker-compose.yaml and not publish any ports for the Flask app to the world outside the container network.
From the docker-compose docs: By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
For example, imagine the Flask app service inside your docker-compose.yml is called my-flask-app and the app inside the container is running on port 8080. Then you can access the endpoint from within the nginx container by the service name, e.g. http://my-flask-app:8080. You can try this by using curl from inside the nginx container.
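A quick way to test this, assuming your proxy service is named nginx in the compose file (and that curl is available in that image):

# run curl inside the nginx container, calling the Flask service by name
docker-compose exec nginx curl -s http://my-flask-app:8080/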

Hostname not accessible from outside the container in Docker

I have created the following docker compose file:
version: '2'
services:
  testApp:
    image: nginx
    hostname: myHost
    ports:
      - "8080:80"
    networks:
      - test
networks:
  test:
    driver: bridge
From outside the container, I can open the web page with localhost:8080. But if I try to open the web page via the defined hostname, it doesn't work.
Can anyone tell me how I can fix this issue?
Is there also a way to connect to the container's IP from the outside?
Thanks in advance.
Other containers on the test network would be able to reference it by that hostname, but not your host machine. You are binding port 8080 on your machine to port 80 on the container, so any external system that you would want to access the website would need to connect to your host machine on 8080 (as you did with localhost:8080).
How you do that depends on your networking, but for example if you know the ip or hostname of your local machine you can (probably) connect from another device on the same home network (your phone? Another computer?) using http://{ip-of-your-host}:8080. Exposing that to the internet from within a home network typically requires port forwarding on your router, and optionally a domain name.
The main point though is that the hostname in your compose is only relevant to other containers connecting to the same docker network (test). Outside of that, systems would need to make a connection to 8080 on your host machine.
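For example, to reach it from another device (the interface names and addresses here are just placeholders for whatever your network uses):

# on the Docker host: find its LAN address, e.g. 192.168.1.50
ip addr

# from your phone or another computer on the same network
curl http://192.168.1.50:8080/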

Failing consul health check on local machine

I have Consul running via docker using docker-compose
version: '3'
services:
  consul:
    image: unifio/consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./config:/config
      - ./.data/consul:/data
    command: agent -server -data-dir=/data -ui -bind 0.0.0.0 -client 0.0.0.0 -bootstrap-expect=1
  mongo:
    image: mongo
    ports:
      - "27017:27017"
      - "27018:27018"
    volumes:
      - ./.data/mongodb:/data/db
    command: mongod --bind_ip_all
and have a nodejs service running on port 6001 which exposes a /health endpoint for health checks.
I am able to register the service via this consul package.
However, visiting the consul UI I can see that the service has a status of failing because the health check is not working.
The UI shows this message:
Get http://127.0.0.1:6001/health: dial tcp 127.0.0.1:6001: getsockopt: connection refused
I'm not sure exactly why it is not working, but I have a sense that I may have misconfigured Consul.
Any help would be great.
Consul is running in your docker container. When you use 127.0.0.1 in this container, it refers to itself, not to your host.
You need to use a host IP that is known to your container (and of course make sure your service is reachable and listening on this particular IP).
In most cases, you should be able to contact your host from a container through the default docker0 bridge ip that you can get with ip addr show dev docker0 from your host as outlined in this other answer.
The best solution IMO is to discover the gateway that your container is using which will point to the particular bridge IP on your host (i.e. the bridge created for your docker-compose project when starting it). There are several methods you can use to discover this ip from the container depending on the installed tooling and your linux flavor.
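A sketch of that approach (the addresses shown are only examples, and which ip/route tooling is available inside the container depends on the image):

# on the host: the docker0 bridge address (commonly 172.17.0.1)
ip addr show dev docker0

# inside the consul container: the gateway of the compose network points back at the host
ip route | awk '/default/ {print $3}'

# register the health check against that address instead of 127.0.0.1, e.g.
#   http://172.18.0.1:6001/health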
While Zeitounator's answer is perfectly fine and answers your direct question,
the "indirect" solution to your problem would be to manage the nodejs service
through docker-compose.
IMHO it's a good idea to manage all services involved using the same tool,
as then their lifecycles are aligned and also it's easy to configure them to talk
to each other (at least that's the case for docker-compose).
Moreover, letting containers access services on the host is risky in terms of security.
In production environments you usually want to shield host services from containers,
as otherwise the containers lose their "containment" role.
So, in your case you would need to add the nodejs service to docker-compose.yml:
services:
  (...)
  nodejs-service:
    image: nodejs-service-image
    ports:
      - "6001:6001"  # only required if you need to expose the port on the host
    command: nodejs service.js
And then your Consul service would be able to access nodejs-service
through http://nodejs-service:6001/health.
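Since the compose file already mounts ./config into the Consul container, one way to register the service and its check is a JSON definition in that directory. This is only a sketch, and it assumes Consul is started with -config-dir=/config (add that flag to the command if it is not already set):

{
  "service": {
    "name": "nodejs-service",
    "port": 6001,
    "check": {
      "http": "http://nodejs-service:6001/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}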

How to host multiple environments for a project using docker in the same machine?

I have a typical web stack that consists of nginx + django + database components.
I have set them up in different docker containers with docker-compose and it's running fine.
services:
  billing_app_dev:
    image: jameslin/billing_app:latest
    expose:
      - 8000
  billing_postgres:
    image: postgres:10.5
    restart: always
    volumes:
      - pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  billing_nginx:
    image: jameslin/billing_nginx:${TAG}
    volumes:
      - app_files:/static
    links:
      - 'billing_app'
    ports:
      - 80:80
Now I am wondering how I can set up DEV and QA environments on a single machine. I can change the Django and database containers to listen on different ports, but it looks like I cannot run the nginx containers individually, since port 80 can only be bound by one container.
I would have to share the nginx container between those two environments, which doesn't seem very clean.
Are there any better ideas if running 2 VMs is not possible?
I have 3 Apache containers and 1 nginx container running on the same server, so I'm pretty sure this is not an issue.
For each stack of web server + database I have a different docker-compose file. This way Docker creates a separate network for each stack, avoiding possible port conflicts, and you only have to bind each nginx to a different port on your server, because you can only bind one service to a given host port. Still, each container is a separate "machine", so even on the same network they can use the same container port.
If you really need to run all your services on ports 80 and 443 of your server, you may need to put an nginx on those ports acting as a reverse proxy, forwarding to those services over the internal Docker network. It is an option, but I have never tried it over the Docker internal network before (see the sketch below for one way to keep each stack on its own host port instead).
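One hedged way to follow the different-host-ports advice with a single compose file is to parameterise the published nginx port and the compose project name; the variable name and port numbers below are assumptions:

# in docker-compose.yml, publish nginx on a configurable host port:
#   ports:
#     - "${NGINX_PORT:-80}:80"

# DEV stack
NGINX_PORT=8080 docker-compose -p billing_dev up -d

# QA stack: its own network and containers, different host port
NGINX_PORT=8081 docker-compose -p billing_qa up -d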
I think what you need is a virtual IP, also called IP aliasing. Even if you just have one network card, you can still set 2 IPs on it.
Then you can run 2 different nginx containers on the host, using different IPs but the same port 80.
Something like the following:
cd /etc/sysconfig/network-scripts/
cp ifcfg-eth0 ifcfg-eth0:1
vi ifcfg-eth0:1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0:1               # sub network card
HWADDR=00:0C:29:45:62:3B
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.109.108      # configure a new, different IP
NETMASK=255.255.255.0
For details, refer to Create Multiple IP Addresses to One Single Network Interface.
For nginx, per the nginx guide, you have to change your nginx config so that listen 80 becomes listen your_ip:80; then it will not listen on all IP addresses.
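If those nginx instances are Docker containers like the ones above, the same effect can also be had at the port-mapping level by binding each published port to one of the host IPs; 192.168.109.107 below stands in for the original eth0 address and is only an assumption:

# DEV stack's nginx
ports:
  - "192.168.109.107:80:80"

# QA stack's nginx
ports:
  - "192.168.109.108:80:80"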

How can I make docker-compose bind the containers only on defined network instead of 0.0.0.0?

In recent versions docker-compose automatically creates a new network for the services it creates. Basically, every docker-compose setup is getting its own IP range, so that in theory I could call my services on the network's IP address with the predefined ports. This is great when developing multiple projects at the same time, since there is then no need to change the ports in docker-compose.yml (i.e. I can run multiple nginx projects at the same time on port 8080 on different interfaces)
However, this does not work as intended: every exposed port is still exposed on 0.0.0.0 and thus there are port conflicts with multiple projects. It is possible to put the bind IP into docker-compose.yml, however this is a killer for portability -- not every developer on the team uses the same OS or works on the same projects, therefore it's not clear which IP to configure.
It'd be great to define the IP to bind the containers to in terms of the network created for this particular project. docker-compose should both know which network it created as well as its IP, so this shouldn't be a problem, however I couldn't find an easy way to do it. Is there a way or is this something yet to be implemented?
EDIT: An example of a port conflict: imagine two projects, each with an application server running on port 8080 and a MySQL database running on port 3306, both respectively exposed as "8080:8080" and "3306:3306". Running the first one with docker-compose creates a network called something like app1_network with an IP range of 172.18.0.0/16. Every exposed port is exposed on 0.0.0.0, i.e. on 127.0.0.1, on the WAN address, on the default bridge (172.17.0.0/16) and also on the 172.18.0.0/16. In this case I can reach my application server on all of 127.0.0.1:8080, 172.17.0.1:8080, 172.18.0.1:8080 and also on $WAN_IP:8080. If I start the second application now, it starts a second network app2_network 172.19.0.0/16, but still tries to bind every exposed port on all interfaces. Those ports are of course already taken (except for 172.19.0.1). If there had been a possibility to restrict each application to its network, application 1 would have been available at 172.18.0.1:8080 and the second at 172.19.0.1:8080, and I wouldn't need to change port mappings to 8081 and 3307 respectively to run both applications at the same time.
In your service configuration, in docker-compose.yml:
ports:
  - "127.0.0.1:8001:8001"
Reference: https://github.com/compose-spec/compose-spec/blob/master/spec.md#ports
You can publish a port to a single IP address on the host by including the IP before the ports:
docker run -p 127.0.0.1:80:80 -d nginx
The above runs nginx on the loopback interface. You can use a similar port mapping inside of a docker-compose.yml file. e.g.:
ports:
  - "127.0.0.1:80:80"
docker-compose doesn't have any special abilities to infer which network interface to use based on the docker network. You'd need to specify the unique IP address to use in each compose file, and that IP needs to be for a network interface on your host. For a developer machine, that IP may change as DHCP gives the laptop/workstation new addresses.
Because of the difficulty implementing your goal, most would either map different ports on the host to different containers, so 13307:3307 for container a, 23307:3307 for container b, 33307:3307 for container c, or whatever number scheme makes sense for you. And when dealing with HTTP traffic, then using a reverse proxy like traefik often makes the most sense.
It can be achieved by configuring the network in the docker-compose file.
Please consider the two docker-compose files below. There is still the drawback of needing to pick a subnet that is unique across all projects you work on at the same time. On the other hand, you need to know which service you are connecting to, which is why it cannot be assigned dynamically.
my-project.yaml:
services:
  nginx:
    networks:
      - my-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.20.0.1"
    ipam:
      config:
        - subnet: "172.20.0.0/16"
my-other-project.yaml:
services:
  nginx:
    networks:
      - my-other-project-network
    image: nginx
    ports:
      - 80:80
networks:
  my-other-project-network:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "172.21.0.1"
    ipam:
      config:
        - subnet: "172.21.0.0/16"
Note that if you have another service binding to *:80, for instance Apache running on the host, it will also bind on the docker-compose networks' interfaces and you will not be able to use this port.
To run above two services:
docker-compose -f my-project.yaml up -d
docker-compose -f my-other-project.yaml up -d
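On the Linux host itself you can then check that each stack answers on its own bridge address (a sketch, assuming the subnets configured above):

curl -s http://172.20.0.1/   # nginx from my-project
curl -s http://172.21.0.1/   # nginx from my-other-project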
