How to use service name as URL in Docker? - docker

What I want to do:
My docker-compose file contains several services, each with a service name. All services use the exact same image. I want them to be reachable via their service name (like curl service-one, curl service-two) from the host. The use case is that these are microservices that should be reachable from a host system.
services:
  service-one:
    container_name: container-service-one
    image: customimage1
  service-two:
    container_name: container-service-two
    image: customimage1
What's the problem
Lots of tutorials present this as the way to build microservices, but they usually distinguish the services by port; I need service names instead of ports.
What I've tried
There are lots of very old answers (5-6 years old), but not a single one gives a working solution. The suggestions include parsing each container's IP and using that, using hostnames only internally between containers, or complex third-party tools like running your own DNS.
It feels weird that I'd be the only one who needs several APIs reachable from a host system; this feels like a standard use case, so I think I'm missing something here.
Can somebody tell me where to go from here?

I'll start from basic to advanced, as far as I know.
For starters, every service that's part of the same Docker network (by default, every service in the compose file) can already reach the others by service name, so container-to-container name resolution is there "for free".
If you want to use the service names from the host itself, you can set up a reverse proxy like nginx and route by server name (which in your case would equal the service name) to the appropriate service on the host running the containers.
The basic idea is to intercept all traffic to port 80 on the host and route it according to the incoming DNS name.
Here's an example:
compose file:
version: "3.9"
services:
  nginx-router:
    image: "nginx:latest"
    volumes:
      - type: bind
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
    ports:
      - "80:80"
  service1:
    image: "nginx:latest"
    ports:
      - "8080:80"
  service2:
    image: "httpd:latest"
    ports:
      - "8081:80"
nginx.conf
worker_processes auto;
pid /tmp/nginx.pid;
events {
    worker_connections 8000;
    multi_accept on;
}
http {
    server {
        listen 80;
        server_name 127.0.0.1;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service1:80;
        }
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service2:80;
        }
    }
}
In this example, if the server name is localhost, I'm routing to service2, which runs the httpd image (Apache HTTP Server), and we get the default Apache "It works!" HTML page.
And when we access through the 127.0.0.1 server name, we should see nginx, and indeed that's what we get.
In your case you'd use the service names instead, after setting them up as DNS records and using those names to route to the appropriate service.
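For the original question (reaching containers from the host as curl service-one), one way to make those names resolve without real DNS is to point each service name at the host in /etc/hosts and add one server block per name to the nginx-router config. This is my assumption about the wiring, not part of the answer above:

```nginx
# Assumed /etc/hosts entries on the host machine (not in the original answer):
#   127.0.0.1 service-one
#   127.0.0.1 service-two
server {
    listen 80;
    server_name service-one;
    location / {
        proxy_set_header Host $host;
        # Docker's embedded DNS resolves the compose service name here
        proxy_pass http://service-one:80;
    }
}
server {
    listen 80;
    server_name service-two;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://service-two:80;
    }
}
```

With that in place, curl http://service-one from the host hits nginx on port 80, which routes by the Host header to the right container.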

Related

How to set environment variables on docker compose for nginx container?

My project is using CI/CD for deployment and I have one docker-compose file for each application stage (dev, staging, release).
Depending on the stage the application is in, I want Nginx to redirect users of my API to a different IP/port.
In my default.conf file I want to write something like this:
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/server/cert.pem;
    ssl_certificate_key /etc/ssl/server/privkey.pem;
    location / {
        proxy_pass https://api:$API_PORT;
        proxy_set_header Host $host;
        ...
where api is a reference to my service's IP as defined in my docker-compose file, and I want ${API_PORT} to reference the environment variable that is defined inside docker-compose.
My docker-compose file looks like this.
version: "3"
services:
  api:
    ...
    ports:
      - 4000:4000
  nginx:
    ...
    environment:
      - API_PORT=4000
    ports:
      - 5180:80
      - 5181:443
How could I achieve that?
Note: If I have a static port, for example 4000, when I up both stage and release versions I will have conflicts on port 4000.
In your Nginx configuration, you don't need to do anything; use the fixed port 4000.
proxy_pass https://api:4000;
Since this is a connection from the Nginx container to the API container, it stays within the Docker network environment. This connection doesn't pay any attention to what you might have set as ports:, it connects to the server process listening on port 4000 in the API container.
When you start the API container, the server process inside the container should use that same fixed port 4000. If you need to make the API container externally visible, you may choose a different number for the first port in the ports: block, but the second port needs to be 4000.
services:
  api:
    ports: ['4001:4000']
  nginx:
    ports: ['5180:80', '5181:443']
If you need to launch multiple copies of this stack, you need to change the first port number in all of the ports: blocks, but leave the second numbers unchanged.
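If you'd rather not edit the file for every copy, one option (my assumption, not part of the answer above) is to parameterize only the host-side port numbers with environment variables, e.g. via a per-stage .env file:

```yaml
# a second copy's .env might set API_HOST_PORT=4002 and NGINX_HTTPS_PORT=5281
services:
  api:
    ports: ['${API_HOST_PORT:-4001}:4000']   # container side stays 4000
  nginx:
    ports: ['${NGINX_HTTPS_PORT:-5181}:443']
```

Running each copy under its own project name (docker-compose -p staging up) also keeps the container names from clashing.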
If all access to the API container is through this Nginx proxy, you may not need the api: { ports: [] } block at all, and you can safely delete it; again, it's not used for connections between containers.
To accomplish that you will need to adjust your Dockerfile and rename your .conf file so that Nginx understands what you want to do.
First, Nginx by itself doesn't support environment variables in config files, so you will need to use templates for that.
By default, if you place your config files inside /etc/nginx/templates and the filename ends with .template, the Nginx image's entrypoint will use envsubst to substitute the environment variable references in your .conf file with the values that you define in your docker-compose file.
So let's have an example.
You have default.conf.template (don't forget to rename your .conf files) file with your Nginx settings:
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/server/cert.pem;
    ssl_certificate_key /etc/ssl/server/privkey.pem;
    location / {
        proxy_pass https://api:$API_PORT;
        proxy_set_header Host $host;
        ...
Your Dockerfile will copy your default.conf.template file into /etc/nginx/templates:
...
COPY /your/nginx/settings/folder/default.conf.template /etc/nginx/templates
...
With that done, when the Nginx container starts it will search the templates folder for *.template files; when it finds your default.conf.template file it will replace the environment variable references with the actual values and move the resulting file to the /etc/nginx/conf.d folder.
So if your docker-compose file looks like this:
version: "3"
services:
  api:
    ...
    ports:
      - 4000:4000
  nginx:
    ...
    environment:
      - API_PORT=4000
your default.conf.template file (mentioned above) will be renamed to default.conf, moved to /etc/nginx/conf.d/ and will look like this:
location / {
    proxy_pass https://api:4000;
    ...
So Nginx replaces the references with the values and moves the .conf files to the right place.
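If you'd rather not build a custom image, recent official nginx images (1.19+) can also pick the templates up from a bind mount instead of a Dockerfile COPY. A sketch, with the host path being an assumption:

```yaml
services:
  nginx:
    image: nginx:latest
    environment:
      - API_PORT=4000
    volumes:
      # the image's entrypoint runs envsubst on *.template files found here
      # and writes the results into /etc/nginx/conf.d/
      - ./nginx/templates:/etc/nginx/templates:ro
```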

How to fix 502 Bad Gateway error in nginx?

I have a server running docker-compose. Docker-compose has 2 services: nginx (as a reverse proxy) and back (as an api that handles 2 requests). In addition, there is a database that is not located on the server, but separately (database as a service).
Requests processed by back:
get('/api') - the back service simply replies "API is running" to it
get('/db') - the back service sends a simple query to an external database ('SELECT random() as random, current_database() as db')
request 1 - works fine, request 2 - the back service crashes, nginx continues to work and a 502 Bad Gateway error appears in the console.
An error occurs in the nginx service logs: "upstream prematurely closed connection while reading response header from upstream".
The back service logs show: "connection terminated due to connection timeout".
These are both rather vague errors, and I don't know how else to approach them, given that the same code outside a container, without Nginx and with the same database, works as it should.
What I have tried:
increased the number of cores and RAM (now 2 cores and 4 GB of RAM);
added/removed/changed the proxy_read_timeout, proxy_send_timeout and proxy_connect_timeout parameters;
tested the www.test.com/db request via Postman and curl (fails with the same error);
ran the code on my local machine without a container or compose, connecting to the same database using the same pool and the same IP (everything is OK, both requests work and return what they should);
changed the worker_processes parameter (tested with values 1 and auto);
added/removed the proxy_set_header Host $http_host directive, replaced $http_host with "www.test.com".
Question:
What else can I try to fix the error and make the db request work?
My nginx.conf:
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    upstream back-stream {
        server back:8080;
    }
    server {
        listen 80;
        listen [::]:80;
        server_name test.com www.test.com;
        location / {
            root /usr/share/nginx/html;
            resolver 121.0.0.11;
            proxy_pass http://back-stream;
        }
    }
}
My docker-compose.yml:
version: '3.9'
services:
  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - network
  back:
    image: "mycustomimage"
    container_name: back
    restart: unless-stopped
    ports:
      - '81:8080'
    networks:
      - network
networks:
  network:
    driver: bridge
I can upload other files if needed. Given that the code only misbehaves in the container, the problem is likely in the container setup.
I will be grateful for any help.
Code of the back: here
The reason for the error is this: I forgot to add my server's ip to the list of allowed addresses in the database cluster.

Nginx API Gateway in Docker Compose

(Disclaimer: I've seen a lot of version of this question asked on here but none seem to really answer my question.)
I want to use NGINX as an API Gateway to route requests to microservice APIs in docker-compose.
For my sample app, I have two microservice APIs (A and B). Any request endpoint that starts with /a should go to API-A and any request endpoint that starts with /b should go to API-B.
Some issues I've had are:
I want paths like /a/foo/bar to match API-A but not /ab/foo
I want routing to work regardless of whether or not the path ends in a / (aka both /a/foo and /a/foo/ work)
My docker-compose file looks like this:
version: "3.8"
services:
  gateway:
    build:
      context: ./api-gw
    ports:
      - 8000:80
  apia:
    build:
      context: ./api-a
    ports:
      - 8000
  apib:
    build:
      context: ./api-b
    ports:
      - 8000
and my sample NGINX config file looks like this:
server {
    listen 80;
    server_name localhost;
    location ^~ /a {
        proxy_pass http://apia:8000/;
    }
    location ^~ /b {
        proxy_pass http://apib:8000/;
    }
}
How can I setup my NGINX config to properly route my requests?
Thanks for your help!
You need to change your Nginx location matches to regexes that anchor on a path-segment boundary, so that /ab/foo doesn't match the /a route and the trailing slash stays optional:
match for API-A:
^/a(/|$)
match for API-B:
^/b(/|$)
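A sketch of how such regex locations could look in the gateway config. Note that with regex locations, proxy_pass may not carry a URI part, so the /a prefix has to be stripped explicitly if the upstream APIs expect paths without it (the rewrite lines are my assumption about the intended behavior):

```nginx
server {
    listen 80;
    server_name localhost;
    location ~ ^/a(/|$) {
        # /a, /a/foo and /a/foo/ match; /ab/foo does not
        rewrite ^/a/?(.*)$ /$1 break;   # strip the /a prefix before proxying
        proxy_pass http://apia:8000;
    }
    location ~ ^/b(/|$) {
        rewrite ^/b/?(.*)$ /$1 break;
        proxy_pass http://apib:8000;
    }
}
```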

Nginx proxy for multibranch environment

I am using nginx as a simple proxy service for my multiple dockerized containers (one of the images includes an nginx layer as well). I am trying to create a vhost for each branch, and this is causing a lot of trouble. What I want to achieve is:
An nginx proxy service should proxy to containers on paths:
[branch_name].a.xyz.com (frontend container)
some-jenkins.xyz.com (another container)
some other containers not existing yet
nginx.conf inside proxy container:
upstream frontend-branch {
    server frontend:80;
}
server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;
    location / {
        proxy_pass http://frontend-branch;
    }
}
nginx.conf inside frontend container:
server {
    listen 80;
    location / {
        root /www/html/branches/some_default_branch;
    }
}
server {
    listen 80;
    location ~^/(?<branch>.*)$ {
        root /www/html/branches/$branch;
    }
}
docker-compose for proxy:
version: "2.0"
services:
  proxy:
    build: .
    ports:
      - "80:80"
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
Inside the frontend project it looks pretty much the same, except for the service name and of course the ports (81:80).
Is there any way to "pass" the branch as a path to the frontend container (e.g. frontend:80/$branch)?
Is it even possible to create that kind of proxy? I don't want to use the same nginx-based image as both the proxy and the 'frontend keeper', because in the future I will want to use the proxy for more than one container, so having the whole site-proxy configuration inside the frontend project would be weird.
Cheers
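As a sketch of the "pass the branch as a path" idea (my assumption, untested): nginx can interpolate the captured subdomain into proxy_pass, which maps the branch onto the path scheme the frontend container already serves:

```nginx
upstream frontend-branch {
    server frontend:80;
}
server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com$;
    location / {
        # feature-x.a.xyz.com/foo is forwarded as /feature-x/foo, which the
        # frontend's "location ~^/(?<branch>.*)$" block maps to
        # /www/html/branches/feature-x
        proxy_pass http://frontend-branch/$branch$request_uri;
    }
}
```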

Linking a docker container to a subdirectory of another one

I am trying to set up multiple docker containers that can be accessed through one main container.
For example:
http://localhost:80 is the main container
http://localhost:80/site1 is a separate container
http://localhost:80/site2 is a separate container again
I know that the --link flag has been deprecated and the new way of doing things is by using the --network flag.
When I use --link (for testing) I see an entry for the container I am linking to in the hosts file. That is where I am stuck.
So I would like to set the above scenario up using the docker --network option.
Usage case: /site1 might be the admin or members area of a website, but I would like to have them in separate containers so I can maintain them more easily.
The containers are apache2-based, but if possible I would like to refrain from editing any config files (though I can if I need to).
How would I go about that?
As far as I know, there is no way for Docker itself to route HTTP requests to one container or the other; you can only map a host port to one container.
What you need is to run a reverse proxy (e.g. nginx) as your main container that then routes each request to the appropriate container.
Here is an example of how to set it up.
site1/Dockerfile
FROM node:6.11
WORKDIR /site1
COPY site1.js .
CMD node site1.js
EXPOSE 80
site1/site1.js
var http = require("http");
http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World 1\n');
}).listen(80);
site2/Dockerfile
FROM node:6.11
WORKDIR /site2
COPY site2.js .
CMD node site2.js
EXPOSE 80
site2/site2.js
var http = require("http");
http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World 2\n');
}).listen(80);
node-proxy/default.conf
server {
    listen 80;
    # ~* makes the /site1 match case insensitive
    location ~* /site1 {
        # Nginx can access the container by the service name
        # provided in the docker-compose.yml file.
        proxy_pass http://node-site1;
    }
    location ~* /site2 {
        proxy_pass http://node-site2;
    }
    # Anything that didn't match the patterns above goes here
    location / {
        # proxy_pass http://some other container
        return 500;
    }
}
docker-compose.yml
version: "3"
services:
  # reverse proxy
  node-proxy:
    image: nginx
    restart: always
    # maps the config file into the proxy container
    volumes:
      - ./node-proxy/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80
    links:
      - node-site1
      - node-site2
  # first site
  node-site1:
    build: ./site1
    restart: always
  # second site
  node-site2:
    build: ./site2
    restart: always
To start the reverse proxy and both sites, run docker-compose up -d in the root of this folder and check with docker ps -a that all containers are running.
Afterwards you can access these two sites at http://localhost/site1 and http://localhost/site2.
Explanation
The folders site1 and site2 each contain a small webserver built with Node.js. Both listen on port 80. node-proxy contains the configuration file that tells nginx when to return which site.
Here are some links
docker-compose: https://docs.docker.com/compose/overview/
nginx reverse proxy: https://www.nginx.com/resources/admin-guide/reverse-proxy/
You should use volumes, which you would set up in docker-compose.yml like this:
version: '3.2'
services:
  container1:
    volumes:
      - shared-files:/directory/files
  container2:
    volumes:
      - shared-files:/directory/files
  container3:
    volumes:
      - shared-files:/directory/files
volumes:
  shared-files:
