docker-compose microservice inter-container api communication with nginx proxy - docker

I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating inside the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port mappings to each API call throughout the various code bases just so my services can talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to call each other in Docker without needing a proxy port mapping?
version: '3'
services:
  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy
  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test
  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test
# networks
networks:
  test_network:

http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
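As a sketch of that approach (the variable name SITE2_URL here is a hypothetical choice, not something from the question), the Compose file could inject each peer's URL so the application code never hard-codes a host or port:

```yaml
# Hypothetical excerpt: site1's code reads SITE2_URL from its environment.
services:
  site1.test:
    environment:
      - "VIRTUAL_HOST=site1.test"
      # Inside this Compose setup, reach the peer by service name and
      # container port; a non-Docker dev run might instead set
      # SITE2_URL=http://localhost:9002
      - "SITE2_URL=http://site2.test:9000"
```

The same image then runs unchanged in every environment; only the injected URL differs.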

Related

Expose one container port but keep the other unreachable from the host machine

I do not know how to achieve that. Now all the ports are exposed to the host machine but I just want to expose one container port (80), not the other (8080). Here is the docker-compose file:
---
version: "3.9"
services:
  app:
    image: sandbox/app
    container_name: app
    volumes:
      - ./src/app:/app/
    expose:
      - "8080"
    restart: unless-stopped
    networks:
      custom-net:
        ipv4_address: 10.0.0.7
  web_server:
    image: nginx:latest
    container_name: proxy
    ports:
      - "80:80"
    networks:
      custom-net:
        ipv4_address: 10.0.0.6
networks:
  custom-net:
    name: custom-net
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.0.0.0/8
If I run nmap 10.0.0.6 from the local machine, it shows port 80 as open. This container exposure is the desired one. But when I run nmap 10.0.0.7, it also shows port 8080 as open; how can that be? Checking some Stack Overflow threads, ports is defined like this:
Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).
and expose:
Expose ports without publishing them to the host machine - they’ll only be accessible to linked services. Only the internal port can be specified.
Am I missing some networking concept, or is my docker-compose file wrong?
You must be on a native-Linux host. If you happen to know the Docker-internal IP addresses, and you're on a native-Linux host, then you can always connect to a container using those addresses; you can't prevent this (without iptables magic), but it's also not usually harmful. This trick doesn't work in other environments (on macOS or Windows hosts, or if Docker is in a Linux VM, or from a different host from the container), and it's much more portable to connect only to containers' published ports:.
You should be able to use a much simpler Compose file. Delete all of the networks: blocks and the expose: blocks. You also do not need container_name:, and you should not need to inject code using volumes:. Trimming out all of the unnecessary options leaves you with
version: '3.8' # last version supported by the standalone docker-compose tool
services:
  app:
    image: sandbox/app # may want `build: .` _instead of_ this line
    restart: unless-stopped
  web_server:
    image: nginx:latest # needs some custom configuration?
    ports:
      - "80:80"
That should literally be the entire file.
From outside Docker but on the same machine, http://localhost:80 reaches the first (host) port in the web_server container's ports: mapping, which forwards to the second (container) port, on which the Nginx server is listening. The Nginx configuration should include a line like proxy_pass http://app:8080, which will forward to the application container.
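That proxy_pass line would live in the Nginx configuration baked into (or mounted into) the web_server image. A minimal sketch of such a config (the location path and file name are assumptions, not part of the question):

```nginx
# default.conf (e.g. copied over /etc/nginx/conf.d/default.conf) -- sketch only
server {
    listen 80;
    location / {
        # "app" is the Compose service name; Docker's internal DNS resolves
        # it to the app container's address on the Compose default network.
        proxy_pass http://app:8080;
    }
}
```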
Compared to your original file:
expose: is an artifact of first-generation Docker networking. In a Compose file it does absolutely nothing at all and it's always safe to delete it.
Connections between containers (where web_server uses app as a host name) connect directly to the specified port; they do not use or require expose: or ports: settings, and they ignore ports: if they're present.
Compose assigns container names on its own, and there are docker-compose CLI equivalents to almost all Docker commands that can figure out the right mapping. You don't need to manually specify container_name:.
Docker automatically assigns IP addresses to containers. These are usually an internal implementation detail; it's useful to know that containers do have their own IP addresses (and so you can have multiple containers that internally listen on the same port) but you never need to know these addresses, look them up, or manually specify them.
Compose automatically creates a network named default for you and attaches containers to it, so in most common cases you don't need networks: at all.
Networking in Compose in the Docker documentation describes how to make connections between containers (again, you do not need to know the container-private IP addresses). Container networking discusses these concepts separately from Compose.

Dockerizing 2 separate dependant services

I currently have a VM running 2 services: a frontend httpd/apache2 service that proxies all requests to my backend service.
My backend service only listens on 127.0.0.1:7878, which means it is only accessible via localhost. That's the reason I'm using a frontend: so I can use it to proxy my requests to 127.0.0.1:7878.
So my apache2 config on the VM looks like:
root@vm:/etc/apache2/sites-enabled# cat backend.conf
<VirtualHost *:443>
    ServerName my.domain.com
    ProxyPass / http://localhost:7878/
    ProxyPassReverse / http://localhost:7878/
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/ssl_cert.crt
    SSLCertificateKeyFile /etc/apache2/ssl/ssl_cert.key
</VirtualHost>
Now I want to dockerize both services and deploy them using docker-compose.
I have set up my backend service like this:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
And my backend/ folder has all the required files for my backend service, including the Dockerfile. I am able to build my docker image and run it successfully. When I exec into the container, I can successfully run curl commands against 127.0.0.1:7878/some-end-point
Now I need to dockerize the frontend service too. It could be apache or it could even be nginx. But I'm not sure how both containers will interact with each other, given that my backend service ONLY listens on 127.0.0.1
If I extend my docker-compose file like this:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
  frontend:
    build: frontend/.
    ports:
      - "80:80"
      - "443:443"
I believe it will spin up its own network, and my backend service won't be accessible using 127.0.0.1:7878
So in this case, what's the best approach? How do I use docker-compose to spin up different containers on the SAME network so that they share 127.0.0.1?
You can't do that as you describe: the IPv4 address 127.0.0.1 is a magic address that always means "me", and in a Docker context it will mean "this container".
It's easy enough to set up a private Docker-internal network for your containers; in fact, Docker Compose will do this automatically for you. Your backend service must be listening on 0.0.0.0 to be accessible from other containers. You're not required to set externally published ports: on your container (or use the docker run -p option), though. If you don't, then your container will only be reachable from other containers on the same Docker-internal network, using the service name in the docker-compose.yml file as a DNS name, on whatever port the process inside the container happens to be listening on.
A minimal example of this could look like:
version: '3'
services:
  proxy:
    image: 'my/proxy:20181220.01'
    environment:
      BACKEND_URL: 'http://backend'
      BIND_ADDRESS: '0.0.0.0:80'
    ports:
      - '8080:80'
  backend:
    image: 'my/backend:20181220.01'
    environment:
      BIND_ADDRESS: '0.0.0.0:80'
From outside Docker, you can reach the proxy at http://server-hostname.example.com:8080. From inside Docker, the two hostnames proxy and backend will resolve to Docker-internal addresses, and we've set both services (via a hypothetical environment variable setup) to listen on the ordinary HTTP port 80.
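The BIND_ADDRESS convention above is hypothetical; as a sketch, the application side of it could be as simple as reading and splitting the variable at startup (the function name here is illustrative):

```python
import os

def parse_bind_address(default="0.0.0.0:80"):
    """Split BIND_ADDRESS (e.g. "0.0.0.0:80") into (host, port).

    Reading the bind address from the environment means the same image
    can listen on 127.0.0.1 in dev and 0.0.0.0 inside a container,
    without code changes.
    """
    addr = os.environ.get("BIND_ADDRESS", default)
    host, _, port = addr.rpartition(":")
    return host, int(port)

host, port = parse_bind_address()
print(host, port)  # → 0.0.0.0 80 when BIND_ADDRESS is unset
```

The server framework of your choice would then be told to bind to (host, port).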

Why would I use docker links when I still need to hardcode the address?

Hello, I have not understood the following. In the Docker world, from what I understand, there are:
A port that the application exposes
A port that the container exposes for the application
A port that the host maps the container port to
So, given these facts, in a configuration of 2 containers within docker-compose:
If:
app  | Host Port | Container Port | App Port
app1 | 8300      | 8200           | 8200
app2 | 9300      | 9200           | 9200
If app2 needs to communicate with app1 directly through the docker host, why would I use links, since I still have to somehow hard-code into app2's environment the hostname and port of app1 (the container_name of app1 and the container port of app1)? (In our example: port=8200 and host=app1Inst.)
app1:
  image: app1img
  container_name: app1Inst
  ports:
    - 8300:8200  # application code exposes port 8200 - e.g. sends to socket on 8200
  networks:
    - ret-net
app2:
  image: app2img
  container_name: app2Inst
  ports:
    - 9300:9200
  depends_on:
    - app1
  networks:
    - ret-net
  links:
    - app1
  # i still need to say here:
  # environment:
  #   - host=app1Inst
  #   - port=8200  -- what do i gain using links?
networks:
  ret-net:
You do not need to use links on modern Docker. But you definitely should not hard-code host names or ports anywhere. (See, for example, every SO question that notes that you can interact with services as localhost when running directly on a developer system but need some other host name when running in Docker.) The docker-compose.yml file is deploy-time configuration, and that is a good place to set environment variables that point from one service to another.
As you note in your proposed docker-compose.yml file, Docker networks and the associated DNS service basically completely replace links. Links existed first but aren’t as useful any more.
Also note that Docker Compose will create a default network for you, and that the service block names in the docker-compose.yml file are valid as host names. You could reduce that file to:
version: '3'
services:
  app1:
    image: app1img
    ports:
      - '8300:8200'
  app2:
    image: app2img
    ports:
      - '9300:9200'
    environment:
      APP1_URL: 'http://app1:8200'
    depends_on:
      - app1
Short answer: no, you don't need links; they're also now deprecated in Docker and not recommended.
https://docs.docker.com/network/links/
Having said that, since both your containers are on the same network ret-net, they will be able to discover and communicate freely with each other on all ports, even without the ports setting.
The ports setting comes into play for external access to the container, e.g. from the host machine.
The environment setting just sets environment variables within the container, so the app knows how to find app1Inst and the right port, 8200.

Why does my Nginx proxy manage to find my node webserver, given that my docker-compose doesn't expose any webserver port on the network?

My node webserver uses express and listen on port 5500.
My docker-compose file doesn't expose any port of my node webserver (named webserver), as follows:
version: "3"
services:
  webserver:
    build: ./server
  form:
    build: ./ui
    ports:
      - "6800:80"
    networks:
      - backend  # I kept the backend network just for legacy; in fact webserver isn't on this network
    command: [nginx-debug, '-g', 'daemon off;']
networks:
  backend:
My Nginx reverse proxy is configured as follows:
location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: my request should fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the Nginx / Docker flow here. Would someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
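Put differently, a Compose file as small as this sketch of the question's services already provides the name-based connectivity being observed:

```yaml
version: "3"
services:
  webserver:
    build: ./server   # the Express app listens on 5500 inside the container
  form:
    build: ./ui
    ports:
      - "6800:80"
# No networks: block at all. Compose attaches both services to its
# automatically created "default" network, where "webserver" and "form"
# resolve as host names, so proxy_pass http://webserver:5500 works
# from inside the form container without any ports being exposed.
```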

Traefik as a proxy for Docker container with host machines network

I would like to set up the following scenario:
One physical machine with Docker containers
traefik in a container with network backend
another container which is using the host machine's network (network_mode: host)
Traefik successfully finds the container and adds it with the IP address 127.0.0.1, which is obviously not accessible from the traefik container (different network/bridge).
docker-compose.yml:
version: '3'
services:
  traefik:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/etc/traefik/traefik.toml
    networks:
      - backend
  app:
    image: my_app
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:myapp.example"
      - "traefik.port=8080"
    network_mode: host
networks:
  backend:
    driver: bridge
The app container is added with:
Server     | URL                   | Weight
server-app | http://127.0.0.1:8080 | 0
Load Balancer: wrr
Of course I can access app with http://127.0.0.1:8080 on the host machine or with http://$HOST_IP:8080 from the traefik container.
Can I somehow convince traefik to use another IP for the container?
Thanks!
Without a common docker network, traefik won't be able to route to your container. Since you're using host networking, there's little need for traefik to proxy the container; just access it directly. Or, if you need to access it only through the proxy, place it on the backend network. If you need some ports published on the host and others proxied through traefik, place it on the backend network and publish the ports you need to publish, rather than using the host network directly.
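One way to apply that advice, as a sketch (reusing the Traefik v1 labels from the question), is to drop network_mode: host and put the app on the shared backend network, publishing only the ports that must bypass the proxy:

```yaml
version: '3'
services:
  traefik:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/etc/traefik/traefik.toml
    networks:
      - backend
  app:
    image: my_app
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:myapp.example"
      - "traefik.port=8080"
    networks:
      - backend        # shared network, so traefik can reach the container
    # ports:
    #   - "9000:9000"  # publish only the ports that must bypass the proxy
networks:
  backend:
    driver: bridge
```

With both containers on backend, Traefik's Docker provider will route to the app container's internal address instead of 127.0.0.1.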
