Docker-Compose randomly assigned ports

Probably a stupid simple question, but I'm having trouble with docker-compose randomly assigning a port to a container.
I have all my services behind an API gateway, and that gateway is the only service which should be mapped to the outside.
The compose file looks like this:
services:
  api.gateway:
    build:
      context: .
      dockerfile: API.Gateway/Dockerfile
    ports:
      - "80:80"
  api.products:
    build:
      context: .
      dockerfile: API.Products/Dockerfile
    ports:
      - "80"
When started, 80 maps to 80 just fine for the gateway, but the products service is randomly assigned an external port, which I don't want. I want all communication with that service to go through the API gateway.

Since you don't want to access the products API from your host, you do not need any ports published on your host for it.
The API Gateway is the service exposed to your host; it then forwards requests to the other container over the Docker network, without going through your host's network interface.
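A minimal sketch of that setup, reusing the names from the question (the internal port 80 is assumed from the original ports: - "80" line; adjust it to whatever the products service actually listens on):

services:
  api.gateway:
    build:
      context: .
      dockerfile: API.Gateway/Dockerfile
    ports:
      - "80:80"            # only the gateway is published to the host
  api.products:
    build:
      context: .
      dockerfile: API.Products/Dockerfile
    # no ports: block at all; the gateway reaches this service as
    # http://api.products:80 over the Compose-created network

The gateway's upstream/route configuration would then point at http://api.products (or http://api.products:80), because Compose registers each service name in Docker's internal DNS.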


Docker Compose: expose with scaling

I scale a container:
ports:
  - "8086-8090:8085"
But what if I need it only inside my bridge network? In other words, does something like this exist?
expose:
  - "8086-8090:8085"
UPDATED:
I have a master container that:
is exposed to the host network
acts as a load balancer
I want to have N slaves of another container, exposed on assigned ports inside the Docker network (but not visible on the host network).
Connections between containers (over the Docker-internal bridge network) don't need ports: at all, and you can just remove that block. You only need ports: to accept connections from outside of Docker. If the process inside the container is listening on port 8085, then connections between containers will always use port 8085, regardless of what ports: mappings you have, or whether you have any at all.
expose: in a Compose file does almost nothing at all. You never need to include it, and it's always safe to delete it.
(This wasn't the case in first-generation Docker networking. However, Compose files v2 and v3 always provide what the Docker documentation otherwise calls a "user-defined bridge network", that doesn't use "exposed ports" in any way. I'm not totally clear why the archaic expose: and links: options were kept.)
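For the updated master/slave scenario, a rough sketch (image names are placeholders, and the replicas are assumed to listen on 8085 as in the question):

version: "3.8"
services:
  master:
    image: my-load-balancer      # placeholder image for the load balancer
    ports:
      - "8080:8080"              # only the master is published to the host
  worker:
    image: my-worker             # placeholder image for the scaled service
    # no ports: and no expose:; the master reaches the replicas at
    # http://worker:8085 inside the Compose network

Starting it with docker-compose up -d --scale worker=5 gives five replicas that all answer to the worker name; Docker's embedded DNS returns their addresses in rotation (a basic DNS round robin), and none of them is reachable from the host.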
No extra changes needed!
Because of Docker's internal DNS, scaled instances are 'hidden' behind the same port:
version: "3.8"
services:
  web:
    image: "nginx:latest"
    ports:
      - "8080:8080"
then
docker-compose up -d --scale web=3
calling localhost:8080 will proxy requests to all instances using Round Robin!

Docker web app can't communicate with API app

I have 2 .NET Core apps running in Docker (one is a web API, the other is a web app consuming the web API):
I can't seem to communicate with the API from the web app, but I can access the API by going directly to it in my browser at http://localhost:44389.
I have an environment variable in my web app with that same address, but the web app can't get to it.
If I specify the deployed version of my API on Azure instead, the web app is able to communicate with that address. So the problem seems to be the containers talking to each other.
I read that creating a bridge should fix that problem, but it doesn't seem to. What am I doing wrong?
Here is my docker compose file:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://localhost:44389
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
docker-compose automatically creates a network between your containers. As your containers are in the same network, they can reach each other using aliases: docker-compose registers an alias for each container name that resolves to the container's IP. So in your case the docker-compose file should look like this:
version: '3.4'
services:
  rc.api:
    image: ${DOCKER_REGISTRY}rcapi
    build:
      context: .
      dockerfile: rc.Api/Dockerfile
    ports:
      - "44389:80"
  rc.web:
    image: ${DOCKER_REGISTRY}rcweb
    build:
      context: .
      dockerfile: rc.Web/Dockerfile
    environment:
      - api_endpoint=http://rc.api
    depends_on:
      - rc.api
networks:
  my-net:
    driver: bridge
Since rc.api listens on port 80 inside its container, rc.web can reach it at http://rc.api:80, or simply http://rc.api (no port needed, since 80 is the HTTP default).
You need to call http://rc.api because you have two containers, and the API container's localhost is not the same as the web app container's localhost.
The convention is that each service can be resolved by the name given to it in docker-compose.yml.
Thus you can call the API on its internal port 80 instead of going through the port it is published on.
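As a quick sanity check (assuming curl is available inside the rc.web image, which it may not be in a slim .NET base image), you can verify container-to-container connectivity by name:

docker-compose exec rc.web curl -v http://rc.api/

If that works, the remaining step is making sure the web app actually builds its requests from the api_endpoint environment variable rather than a hard-coded localhost address.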

docker-compose microservice inter-container api communication with nginx proxy

I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating within the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port id mappings to each API call throughout the various code bases in order for my services to talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to successfully call each other in Docker without the need of a proxy port map?
version: '3'
services:
  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy
  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test
  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test
# networks
networks:
  test_network:
http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
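A sketch of that last suggestion as Compose fragments, with hypothetical SITE1_URL/SITE2_URL variable names (and assuming the services are built from local contexts; the build: alpine:latest lines in the question would need to become either an image: or a real build context):

  site1.test:
    build: ./site1
    networks:
      - test_network
    environment:
      # container-to-container traffic uses the service name and the internal port
      - "SITE2_URL=http://site2.test:9000"
  site2.test:
    build: ./site2
    networks:
      - test_network
    environment:
      - "SITE1_URL=http://site1.test:9000"

In another environment the same variables could point at http://localhost:9002, another host's name, or a Kubernetes service address, without touching the application code.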

Why would I use Docker links when I still need to hardcode the address?

Hello, I have not understood the following.
In the Docker world, from what I understand, we have:
A port that the application exposes
A port that the container exposes for the application
A port that the host maps to the container port
So, given these facts, consider a configuration of 2 containers within docker-compose:
app  | Host Port | Container Port | App Port
app1 | 8300      | 8200           | 8200
app2 | 9300      | 9200           | 9200
If app2 needs to communicate with app1 directly through the Docker host, why would I use links, since I still have to somehow hardcode into app2's environment the hostname and port of app1 (the container_name of app1 and the container port of app1)? (In our example: port=8200 and host=app1Inst.)
app1:
  image: app1img
  container_name: app1Inst
  ports:
    - 8300:8200    # application code exposes port 8200 - e.g. sends to socket on 8200
  networks:
    - ret-net
app2:
  image: app2img
  container_name: app2Inst
  ports:
    - 9300:9200
  depends_on:
    - app1
  networks:
    - ret-net
  links:
    - app1
  # I still need to say here:
  # environment:
  #   - host=app1Inst
  #   - port=8200      <-- what do I gain by using links?
networks:
  ret-net:
You do not need to use links on modern Docker. But you definitely should not hard-code host names or ports anywhere. (See for example every SO question noting that you can interact with services as localhost when running directly on a developer system but need some other host name when running in Docker.) The docker-compose.yml file is deploy-time configuration, and that is a good place to set environment variables that point from one service to another.
As you note in your proposed docker-compose.yml file, Docker networks and the associated DNS service basically completely replace links. Links existed first but aren’t as useful any more.
Also note that Docker Compose will create a default network for you, and that the service block names in the docker-compose.yml file are valid as host names. You could reduce that file to:
version: '3'
services:
  app1:
    image: app1img
    ports:
      - '8300:8200'
  app2:
    image: app2img
    ports:
      - '9300:9200'
    environment:
      APP1_URL: 'http://app1:8200'
    depends_on:
      - app1
Short answer: no, you don't need links; they are now deprecated in Docker and not recommended.
https://docs.docker.com/network/links/
Having said that, since both your containers are on the same network ret-net, they will be able to discover and communicate freely with each other on all ports, even without the ports setting.
The ports setting comes into play for external access to the container, e.g. from the host machine.
The environment setting just sets environment variables within the container, so the app knows how to find app1Inst and the right port, 8200.
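Following that, the environment block from the question could simply point at the service name instead of the container name (a sketch using the host/port variable names from the question):

    environment:
      - host=app1        # the Compose service name resolves via Docker's DNS
      - port=8200        # the port the app listens on inside its container

Since container_name: app1Inst is also set, both app1 and app1Inst resolve to the same container, so either value works.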

Why does my Nginx proxy succeed in finding my Node webserver when my docker-compose doesn't expose any webserver port on the network?

My Node webserver uses Express and listens on port 5500.
My docker-compose file doesn't expose any port of my Node webserver (named webserver), as follows:
version: "3"
services:
webserver:
build: ./server
form:
build: ./ui
ports:
- "6800:80"
networks:
- backend // i let the backend network just for legacy but in fact webserver isn't in this networks
command: [nginx-debug, '-g', 'daemon off;']
networks:
backend:
My Nginx reverse proxy is configured as follows:
location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: my request should fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the Nginx / Docker flow here. Would someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
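The point is easiest to see with a reduced version of the file: with no networks: block at all, both services sit on the Compose default network and resolve each other by service name (assuming the Express app really listens on 5500):

version: "3"
services:
  webserver:
    build: ./server      # Express app listening on 5500; no ports: needed for
                         # container-to-container traffic
  form:
    build: ./ui
    ports:
      - "6800:80"        # only the nginx front end is published to the host

Inside the form container, proxy_pass http://webserver:5500/request; works because Docker's embedded DNS resolves webserver to that container's IP on the shared default network; no published port is involved.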
