Linking a docker container to a subdirectory of another one - docker

I am trying to set up multiple docker containers that can be accessed through one main container.
For example:
http://localhost:80 is the main container
http://localhost:80/site1 is a separate container
http://localhost:80/site2 is a separate container again
I know that the --link flag has been deprecated and the new way of doing things is by using the --network flag.
When I use --link (for testing) I see an entry for the linked container in the hosts file. That is where I am stuck.
So I would like to set the above scenario up using the docker --network option.
Use case: /site1 might be the admin or member area of a website, but I would like to keep them in separate containers so they are easier to maintain.
The containers are Apache2-based, but if possible I would like to refrain from editing any config files (though I can if I need to).
How would I go about that?

As far as I know, Docker itself cannot route HTTP requests to one container or the other; it can only map a port from your host to a single container.
What you need is to run a reverse proxy (e.g. nginx) as your main container, which then routes each request to the appropriate container.
Here is an example of how to set it up:
site1/Dockerfile
FROM node:6.11
WORKDIR /site1
COPY site1.js .
EXPOSE 80
CMD node site1.js
site1/site1.js
var http = require("http");

http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World 1\n');
}).listen(80);
site2/Dockerfile
FROM node:6.11
WORKDIR /site2
COPY site2.js .
EXPOSE 80
CMD node site2.js
site2/site2.js
var http = require("http");

http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World 2\n');
}).listen(80);
node-proxy/default.conf
server {
    listen 80;

    # ~* makes the /site1 match case-insensitive
    location ~* /site1 {
        # Nginx can reach the container by the service name
        # provided in the docker-compose.yml file.
        proxy_pass http://node-site1;
    }

    location ~* /site2 {
        proxy_pass http://node-site2;
    }

    # Anything that didn't match the patterns above goes here
    location / {
        # proxy_pass http://some other container
        return 500;
    }
}
docker-compose.yml
version: "3"
services:
# reverse proxy
node-proxy:
image: nginx
restart : always
# maps config file into the proxy container
volumes:
- ./node-proxy/default.conf:/etc/nginx/conf.d/default.conf
ports:
- 80:80
links:
- node-site1
- node-site2
# first site
node-site1:
build: ./site1
restart: always
# second site
node-site2:
build: ./site2
restart: always
To start the reverse proxy and both sites, run docker-compose up -d in the root of this folder and check with docker ps -a that all containers are running.
Afterwards you can access the two sites at http://localhost/site1 and http://localhost/site2.
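As a quick sanity check (a minimal sketch, assuming all three containers are up), each site should answer with its own greeting:
curl http://localhost/site1   # -> Hello World 1
curl http://localhost/site2   # -> Hello World 2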
Explanation
The folders site1 and site2 each contain a small web server built with Node.js. Both listen on port 80. "node-proxy" contains the nginx configuration file that tells nginx which site to return for which path.
Here are some links
docker-compose: https://docs.docker.com/compose/overview/
nginx reverse proxy: https://www.nginx.com/resources/admin-guide/reverse-proxy/

You should use volumes; you can set them up in docker-compose.yml like this:
version: '3.2'
services:
  container1:
    volumes:
      - shared-files:/directory/files
  container2:
    volumes:
      - shared-files:/directory/files
  container3:
    volumes:
      - shared-files:/directory/files
volumes:
  shared-files:
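A minimal sketch of what the shared named volume gives you (service names and the /directory/files path are taken from the compose file above): a file written by one service is immediately visible to the others:
# write a file from container1 ...
docker-compose exec container1 sh -c 'echo hello > /directory/files/test.txt'
# ... and read it back from container2
docker-compose exec container2 cat /directory/files/test.txt   # prints: hello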

Related

How to set environment variables on docker compose for nginx container?

My project is using CI/CD for deployment and I have one docker-compose file for each application stage (dev, staging, release).
Depending on what stage the application is in, I want Nginx to redirect users of my API to a different IP/port.
In my default.conf file I want to write something like this:
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/server/cert.pem;
    ssl_certificate_key /etc/ssl/server/privkey.pem;

    location / {
        proxy_pass https://api:$API_PORT;
        proxy_set_header Host $host;
        ...
where api is a reference to my service's IP, defined in my docker-compose file, and I want ${API_PORT} to reference the environment variable that is defined inside docker-compose.
My docker-compose file looks like this:
version: "3"
services:
  api:
    ...
    ports:
      - 4000:4000
  nginx:
    ...
    environment:
      - API_PORT=4000
    ports:
      - 5180:80
      - 5181:443
How could I achieve that?
Note: if I use a static port, for example 4000, then when I bring up both the stage and release versions I will have a conflict on port 4000.
In your Nginx configuration, you don't need to do anything; use the fixed port 4000.
proxy_pass https://api:4000;
Since this is a connection from the Nginx container to the API container, it stays within the Docker network. This connection pays no attention to anything you might have set in ports:; it connects directly to the server process listening on port 4000 in the API container.
When you start the API container, the server process inside the container should use that same fixed port 4000. If you need to make the API container externally visible, you may choose a different number for the first port in the ports: block, but the second port needs to be 4000.
services:
  api:
    ports: ['4001:4000']
  nginx:
    ports: ['5180:80', '5181:443']
If you need to launch multiple copies of this stack, you need to change the first port number in all of the ports: blocks, but leave the second numbers unchanged.
If all access to the API container is through this Nginx proxy, you may not need the api: { ports: [] } block at all, and you can safely delete it; again, it's not used for connections between containers.
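For example, here is a sketch of running a second copy of the stack side by side (docker-compose.staging.yml is a hypothetical override file; only the host-side port numbers change):
# docker-compose.staging.yml (hypothetical override)
services:
  api:
    ports: ['4101:4000']             # new host port, container port stays 4000
  nginx:
    ports: ['5280:80', '5281:443']
Started under a separate project name so the two stacks don't collide:
docker-compose -p staging -f docker-compose.yml -f docker-compose.staging.yml up -d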
To accomplish that you will need to adjust your Dockerfile and rename your .conf file so that Nginx understands what you want to do.
Nginx by itself does not substitute environment variables in config files, so you will need to use templates for that.
By default, if you place your config files inside /etc/nginx/templates and the filename ends with .template, the official Nginx image will run envsubst to replace the environment variable references in your .conf file with the values you define in your docker-compose file.
So let's look at an example.
You have a default.conf.template file (don't forget to rename your .conf files) with your Nginx settings:
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/server/cert.pem;
    ssl_certificate_key /etc/ssl/server/privkey.pem;

    location / {
        proxy_pass https://api:$API_PORT;
        proxy_set_header Host $host;
        ...
Your Dockerfile will copy your default.conf.template file into /etc/nginx/templates:
...
COPY /your/nginx/settings/folder/default.conf.template /etc/nginx/templates
...
With that done, when Nginx starts it searches the templates folder for *.template files; when it finds your default.conf.template file, it replaces the environment variable references with their actual values and moves the resulting file into the /etc/nginx/conf.d folder.
So if your docker-compose file looks like this:
version: "3"
services:
api:
...
ports:
- 4000:4000
nginx:
...
environment:
- API_PORT=4000
your default.conf.template file (mentioned above) will be renamed to default.conf, moved to /etc/nginx/conf.d/ and will look like this:
location / {
    proxy_pass https://api:4000;
    ...
So Nginx substitutes the values for the references and moves the .conf files to the right place.
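As a possible shortcut (relying on the template mechanism that ships with the official nginx image from version 1.19 onwards), you can skip the custom Dockerfile entirely and mount the template in docker-compose instead:
services:
  nginx:
    image: nginx:1.19
    volumes:
      - ./default.conf.template:/etc/nginx/templates/default.conf.template:ro
    environment:
      - API_PORT=4000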

How to use service name as URL in Docker?

What I want to do:
My docker-compose file contains several services, each with a service name. All services use the exact same image. I want them to be reachable via their service names (like curl service-one, curl service-two) from the host. The use-case is microservices that should be reachable from a host system.
services:
  service-one:
    container_name: container-service-one
    image: customimage1
  service-two:
    container_name: container-service-two
    image: customimage1
What's the problem
Lots of tutorials say that this is the way to build microservices, but it's usually done with ports, and I need service names instead of ports.
What I've tried
There are lots of very old answers (5-6 years old), but not a single one gives a working solution. There are ideas like parsing the IP of each container and then using that, using hostnames only internally between docker containers, or complex third-party tools like building your own DNS.
It feels weird that I'm the only one who needs several APIs reachable from a host system; this seems like a standard use-case, so I think I'm missing something here.
Can somebody tell me where to go from here?
I'll start from basic to advanced as far as I know.
For starters, every service that's part of the Docker network (by default, every service in the compose file) can already reach the others by service name, so that part comes "for free".
If you want to use the service names from the host itself, you can set up a reverse proxy like nginx and route by server name (in your case equal to the service name) to the appropriate port on the host running the docker containers.
The basic idea is to intercept all communication on port 80 on the server and route it by the incoming DNS name.
Here's an example:
compose file:
version: "3.9"
services:
nginx-router:
image: "nginx:latest"
volumes:
- type: bind
source: ./nginx.conf
target: /nginx/nginx.conf
ports:
- "80:80"
service1:
image: "nginx:latest"
ports:
- "8080:80"
service2:
image: "httpd:latest"
ports:
- "8081:80"
nginx.conf
worker_processes auto;
pid /tmp/nginx.pid;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://service1:80;
        }
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://service2:80;
        }
    }
}
In this example, when the server name is localhost, requests are routed to service2, which runs the httpd (Apache HTTP Server) image, and we get the default Apache welcome page.
When we access it through the 127.0.0.1 server name, we get the default nginx page instead.
In your case you'd use the service names instead, after setting them up as DNS records pointing at the Docker host and using those names to route to the appropriate service.
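For local development, a minimal sketch of that last step (assuming the compose file above) is a hosts-file entry instead of a real DNS record, so that curl service1 and curl service2 from the host hit nginx-router, which then routes by server name:
# /etc/hosts on the host machine
127.0.0.1   service1 service2
The two server blocks would then use server_name service1; and server_name service2; in place of 127.0.0.1 and localhost.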

Nginx API Gateway in Docker Compose

(Disclaimer: I've seen a lot of versions of this question asked on here, but none seem to really answer my question.)
I want to use NGINX as an API Gateway to route requests to microservice APIs in docker-compose.
For my sample app, I have two microservice APIs (A and B). Any request endpoint that starts with /a should go to API-A and any request endpoint that starts with /b should go to API-B.
Some issues I've had are:
I want paths like /a/foo/bar to match API-A but not /ab/foo
I want routing to work regardless of whether or not the path ends in a / (aka both /a/foo and /a/foo/ work)
My docker-compose file looks like this:
version: "3.8"
services:
gateway:
build:
context: ./api-gw
ports:
- 8000:80
apia:
build:
context: ./api-a
ports:
- 8000
apib:
build:
context: ./api-b
ports:
- 8000
and my sample NGINX config file looks like this:
server {
    listen 80;
    server_name localhost;

    location ^~ /a {
        proxy_pass http://apia:8000/;
    }

    location ^~ /b {
        proxy_pass http://apib:8000/;
    }
}
How can I setup my NGINX config to properly route my requests?
Thanks for your help!
You need to change your Nginx location rules to anchored regex matches like these:
Match for API-A:
location ~ ^/a(/.*)?$
Match for API-B:
location ~ ^/b(/.*)?$
These match /a, /a/foo and /a/foo/, but not /ab/foo.
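Putting it together, here is a sketch of the full gateway config. One assumption on my part: the APIs expect the /a and /b prefixes stripped, so a rewrite does the stripping, because proxy_pass cannot carry a URI part inside a regex location:
server {
    listen 80;
    server_name localhost;

    # Matches /a, /a/foo and /a/foo/, but not /ab/foo.
    location ~ ^/a(/.*)?$ {
        # Strip the prefix: /a -> /, /a/foo -> /foo
        rewrite ^/a(?:/(.*))?$ /$1 break;
        proxy_pass http://apia:8000;
    }

    location ~ ^/b(/.*)?$ {
        rewrite ^/b(?:/(.*))?$ /$1 break;
        proxy_pass http://apib:8000;
    }
}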

serving static files from jwilder/nginx-proxy

I have a web app (django served by uwsgi) and I am using nginx for proxying requests to specific containers.
Here is a relevant snippet from my default.conf.
upstream web.ubuntu.com {
    server 172.18.0.9:8080;
}

server {
    server_name web.ubuntu.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi://web.ubuntu.com;
    }
}
Now I want the static files to be served from nginx rather than uwsgi workers.
So basically I want to add something like:
location /static/ {
    autoindex on;
    alias /staticfiles/;
}
to the automatically generated server block for the container.
I believe this should make nginx serve all requests to web.ubuntu.com/static/* from /staticfiles folder.
But since the configuration (default.conf) is generated automatically, I don't know how to add the above location to the server block dynamically :(
A location block can't be outside a server block, right? And there can be only one server block per server?
So I don't know how to add the location block there, unless I add it to default.conf dynamically after nginx comes up and then reload it, I guess.
I did go through https://github.com/jwilder/nginx-proxy and I only see examples of changing the location settings per-host and by default, but nothing about adding a new location altogether.
I already posted this in Q&A for jwilder/nginx-proxy and didn't get a response.
Please help me if there is a way to achieve this.
This answer is based on this comment from the #553 issue discussion on the official nginx-proxy repo. First, you have to create the default_location file with the static location:
location /static/ {
    alias /var/www/html/static/;
}
and save it, for example, into an nginx-proxy folder in your project's root directory. Then you have to add this file to the /etc/nginx/vhost.d folder of the jwilder/nginx-proxy container. You can build a new image based on jwilder/nginx-proxy with this file copied in, or you can mount it using the volumes section. You also have to share the static files between your webapp and nginx-proxy containers using a shared volume. As a result, your docker-compose.yml file will look something like this:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx-proxy/default_location:/etc/nginx/vhost.d/default_location
- static:/var/www/html/static
webapp:
build: ./webapp
expose:
- 8080
volumes:
- static:/path/to/webapp/static
environment:
- VIRTUAL_HOST=webapp.docker.localhost
- VIRTUAL_PORT=8080
- VIRTUAL_PROTO=uwsgi
volumes:
static:
Now, the server block in /etc/nginx/conf.d/default.conf will always include the static location:
server {
    server_name webapp.docker.localhost;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi://webapp.docker.localhost;
        include /etc/nginx/vhost.d/default_location;
    }
}
which will make Nginx serve static files for you.
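A quick way to verify from the host without touching DNS (the file name static/style.css is an assumption, standing in for whatever your Django collectstatic step produces):
curl -H 'Host: webapp.docker.localhost' http://localhost/static/style.css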

Nginx proxy for multibranch environment

I am using nginx as a simple proxy service for my multiple dockerized containers (including an image with an nginx layer as well). I am trying to create a vhost for each branch, and this is causing a lot of trouble. What I want to achieve is:
An nginx proxy service that routes to containers by hostname:
[branch_name].a.xyz.com (frontend container)
some-jenkins.xyz.com (another container)
some other containers not existing yet
nginx.conf inside proxy container:
upstream frontend-branch {
    server frontend:80;
}

server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;

    location / {
        proxy_pass http://frontend-branch;
    }
}
nginx.conf inside frontend container:
server {
    listen 80;

    location / {
        root /www/html/branches/some_default_branch;
    }
}

server {
    listen 80;

    location ~ ^/(?<branch>.*)$ {
        root /www/html/branches/$branch;
    }
}
docker-compose for proxy:
version: "2.0"
services:
proxy:
build: .
ports:
- "80:80"
restart: always
networks:
default:
external:
name: nginx-proxy
Inside the frontend project it looks pretty much the same, except for the service name and of course the ports (81:80).
Is there any way to "pass" the branch as a path to the frontend container (e.g. something like frontend:80/$branch)?
Is it even possible to create that kind of proxy? I don't want to use the same nginx-based image as both the proxy and the 'frontend keeper', because in the future I will want to use the proxy for more than one container, so having the configuration for the whole site proxy inside the frontend project would be weird.
Cheers
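One possible sketch for the proxy (an untested assumption, building on the configs above): nginx allows variables in a proxy_pass target, and a hostname containing variables is still matched against defined upstream groups, so the proxy can splice the captured $branch into the path it sends to the frontend container:
server {
    listen 80;
    server_name ~^(?<branch>.*)\.a\.xyz\.com;

    location / {
        # foo.a.xyz.com/index.html -> frontend-branch/foo/index.html
        proxy_pass http://frontend-branch/$branch$request_uri;
    }
}
The frontend container could then serve everything from a single location / { root /www/html/branches; } block, since the branch is already part of the path.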
