I currently have a VM running 2 services: a frontend httpd/apache2 service that proxies all requests to my backend service.
My backend service only listens on 127.0.0.1:7878, which means it is only accessible via localhost. That's the reason I'm using a frontend: so that I can use it to proxy my requests to 127.0.0.1:7878.
So my apache2 config on the VM looks like this:
root@vm:/etc/apache2/sites-enabled# cat backend.conf
<VirtualHost *:443>
    ServerName my.domain.com
    ProxyPass / http://localhost:7878/
    ProxyPassReverse / http://localhost:7878/
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/ssl_cert.crt
    SSLCertificateKeyFile /etc/apache2/ssl/ssl_cert.key
</VirtualHost>
Now I want to dockerize both services and deploy them using docker-compose.
I have set up my backend service like this:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
My backend/ folder has all the required files for my backend service, including the Dockerfile. I am able to build my Docker image and run it successfully. When I exec into the container, I can successfully run curl commands against 127.0.0.1:7878/some-end-point.
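For example, a quick sanity check (a sketch, assuming curl is available inside the image):

# exec into the running backend container and hit the endpoint on localhost
docker-compose exec backend curl http://127.0.0.1:7878/some-end-point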
Now I need to dockerize the frontend service too. It could be apache, or it could even be nginx, but I'm not sure how both containers will interact with each other given that my backend service ONLY listens on 127.0.0.1.
If I extend my docker-compose file like this:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
  frontend:
    build: frontend/.
    ports:
      - "80:80"
      - "443:443"
I believe it will spin up its own network, and my backend service won't be accessible via 127.0.0.1:7878.
So in this case, what's the best approach? How do I use docker-compose to spin up different containers on the SAME network so that they share 127.0.0.1?
You can't do that as you describe: the IPv4 address 127.0.0.1 is a magic address that always means "me", and in a Docker context it will mean "this container".
It's easy enough to set up a private Docker-internal network for your containers; in fact, Docker Compose will do this automatically for you. Your backend service must be listening on 0.0.0.0 to be accessible from other containers. You're not required to set externally published ports: on your container (or use the docker run -p option), though. If you don't, then your container will only be reachable from other containers on the same Docker-internal network, using the service name in the docker-compose.yml file as a DNS name, on whatever port the process inside the container happens to be listening on.
A minimal example of this could look like:
version: '3'
services:
  proxy:
    image: 'my/proxy:20181220.01'
    environment:
      BACKEND_URL: 'http://backend'
      BIND_ADDRESS: '0.0.0.0:80'
    ports:
      - '8080:80'
  backend:
    image: 'my/backend:20181220.01'
    environment:
      BIND_ADDRESS: '0.0.0.0:80'
From outside Docker, you can reach the proxy at http://server-hostname.example.com:8080. From inside Docker, the two hostnames proxy and backend will resolve to Docker-internal addresses, and we've set both services (via a hypothetical environment variable setup) to listen on the ordinary HTTP port 80.
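Applied to the Apache setup from the question, the containerized frontend's config could look like this (a sketch, assuming the backend process is changed to bind to 0.0.0.0:7878, the Compose service is named backend, and the certificates are copied into the frontend image):

<VirtualHost *:443>
    ServerName my.domain.com
    # "backend" resolves via Docker's internal DNS to the backend container
    ProxyPass / http://backend:7878/
    ProxyPassReverse / http://backend:7878/
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/ssl_cert.crt
    SSLCertificateKeyFile /etc/apache2/ssl/ssl_cert.key
</VirtualHost>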
Related
I have tried reading through the other Stack Overflow questions here, but I am either missing something or none of them are working for me.
Context
I have two docker containers setup on a DigitalOcean server running Ubuntu.
root_frontend_1 running on ports 0.0.0.0:3000->3000/tcp
root_nginxcustom_1 running on ports 0.0.0.0:80->80/tcp
If I connect to http://127.0.0.1, I get the default Nginx index.html homepage. If I visit http://127.0.0.1:3000, I get my React app.
What I am trying to accomplish is to get my React app when I visit http://127.0.0.1. Following the documentation and suggestions here on Stack Overflow, I have the following:
docker-compose.yml in the root of my DigitalOcean server:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./nginx.conf:/root/nginxcustom/conf/custom.conf
tty: true
backend:
build: https://github.com/Twitter-Clone/twitter-clone-api.git
ports:
- "8000:8000"
tty: true
frontend:
build: https://github.com/dougmellon/react-api.git
ports:
- "3000:3000"
stdin_open: true
tty: true
nginxcustom/conf/custom.conf:
server {
    listen 80;
    server_name http://127.0.0.1;

    location / {
        proxy_pass http://root_frontend_1:3000; # this one here
        proxy_redirect off;
    }
}
When I run docker-compose up, it builds, but when I visit the IP of my server it's still showing the default nginx HTML file.
Question
What am I doing wrong here and how can I get it so the main URL points to my react container?
Thank you for your time, and if there is anything I can add for clarity, please don't hesitate to ask.
TL;DR
The nginx service should proxy_pass to the service name (frontend), not the container name (root_frontend_1), and the nginx config should be mounted to the correct location inside the container.
Tip: the container name can be set for a service in the docker-compose.yml with the container_name setting; however, beware that you cannot --scale services with a fixed container_name.
Tip: the container name (root_frontend_1) is generated using the Compose project name, which defaults to the current directory name if not set.
Tip: the nginx images are packaged with a default /etc/nginx/nginx.conf that will include the default server config from /etc/nginx/conf.d/default.conf. You can docker cp the default configuration files out of a container if you'd like to inspect them or use them as a base for your own configuration:
docker create --name nginx nginx
docker cp nginx:/etc/nginx/conf.d/default.conf default.conf
docker cp nginx:/etc/nginx/nginx.conf nginx.conf
docker container rm nginx
With nginx proxying connections for the frontend service, we don't need to bind the host's port to the container; the service's ports definition can be replaced with an expose definition to prevent direct connections to http://159.89.135.61:3000 (depending on the backend, you might want to prevent direct connections to it as well):
version: "3"
services:
...
frontend:
build: https://github.com/dougmellon/react-api.git
expose:
- "3000"
stdin_open: true
tty: true
Taking it a step further, we can configure an upstream for the frontend service, then configure the proxy_pass for the upstream:
upstream frontend {
    server frontend:3000 max_fails=3;
}

server {
    listen 80;
    server_name 159.89.135.61;

    location / {
        proxy_pass http://frontend/;
    }
}
... then bind-mount the custom default.conf on top of the default.conf inside the container:
version: "3"
services:
nginxcustom:
image: nginx
hostname: nginxcustom
ports:
- "80:80"
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
tty: true
... and finally --scale our frontend service (bounce the services, removing the containers, to make sure the config changes take effect):
docker-compose stop nginxcustom \
&& docker-compose rm -f \
&& docker-compose up -d --scale frontend=3
Docker will resolve the service name to the IPs of the 3 frontend containers, and nginx will proxy connections to them in a (by default) round-robin manner.
Tip: you cannot --scale a service that has port mappings; only a single container can bind to the host port.
Tip: if you've updated the config and can connect to your load-balanced service, then you're all set to create a DNS record resolving a hostname to your public IP address, and then update your default.conf's server_name.
Tip: for security, I maintain specs for building an nginx Docker image with ModSecurity and ModSecurity-nginx pre-baked with the OWASP Core Rule Set.
In Docker, when multiple services need to communicate with each other, you can use the service name (as set in the docker-compose.yml) in the URL instead of the IP address (which is assigned from the network's available pool, on the default network by default); it will automatically be resolved to the right container IP by Docker's network management.
For you it would be http://frontend:3000
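In the nginx config, that would look something like (a sketch):

server {
    listen 80;

    location / {
        # "frontend" is the Compose service name, resolved by Docker's DNS
        proxy_pass http://frontend:3000;
    }
}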
I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating within the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port mappings to each API call throughout the various code bases in order for my services to talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to successfully call each other in Docker without the need of a proxy port map?
version: '3'
services:
  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy
  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test
  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test
# networks
networks:
  test_network:
http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
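For example, an inter-container call targets the port the other service listens on inside its container, not the published host port (a sketch; the /health path is hypothetical):

# run from inside the site1.test container; 9000 is the container's
# listening port, 9002 is only the port published on the host
curl http://site2.test:9000/health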
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
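A sketch of that pattern in the compose file (the variable names are hypothetical):

version: '3'
services:
  site1.test:
    environment:
      # each service learns its peers' URLs from the environment
      - "SITE2_URL=http://site2.test:9000"
  site2.test:
    environment:
      - "SITE1_URL=http://site1.test:9000"

Each service's code then reads its peers' URLs from the environment instead of hard-coding a hostname and port.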
I am new to Docker.
And I had a task: create a Docker container with nginx that will send PHP (dynamic) requests to the Apache Docker container.
I solved my problem, but I spent a lot of time on it, so I hope this will help other people.
There are many articles on how to build nginx + apache...
But they don't work in Docker.
My problem was solved by changing the nginx configuration file (my.conf | default ...)
from:
upstream backend {
    server 127.0.0.1:8080;
}
to:
upstream backend {
    server apache2php:8080;
}
where apache2php is the service name in docker-compose.yml
like this:
version: "3"
services:
apache2php:
image: apache2php
ports:
- "8080:8080"
volumes:
- "/var/www/html:/var/www/html"
mynginx:
image: mynginx
ports:
- "80:80"
volumes:
- "/var/www/html:/var/www/html"
When I checked the logs (/var/log/nginx/error.log) in the nginx container with my bad settings, I found error 111 (Connection refused while connecting to upstream).
There was also not my local IP (127.0.0.1) in the Host field but another one (like 10.5.100.2, maybe different).
I think Docker uses its own IP addresses in Docker's network, and those IP addresses are used by the Docker containers (nginx uses 10.5.100.2:8080 when it needs to forward a PHP file to Apache).
But when we go to 127.0.0.1:80 from the outer network (like when we type the IP in the browser), Docker translates the inner IP (nginx, 10.5.100.2:80) into the outer IP we typed (127.0.0.1:80).
Am I right?
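One way to check this is to inspect the Compose network (a sketch; the actual network name depends on your Compose project/directory name):

# list the networks Compose created, then inspect one to see the
# IP addresses Docker assigned to each container on its internal network
docker network ls
docker network inspect <project>_default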
My Node web server uses Express and listens on port 5500.
My docker-compose file doesn't expose any port of my Node web server (named webserver), as follows:
version: "3"
services:
webserver:
build: ./server
form:
build: ./ui
ports:
- "6800:80"
networks:
- backend // i let the backend network just for legacy but in fact webserver isn't in this networks
command: [nginx-debug, '-g', 'daemon off;']
networks:
backend:
My nginx reverse proxy is configured as follows:
location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: my request should fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request > succeeds
I can't understand the nginx / Docker flow here. Would someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
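For example, the same file with the legacy networks: block removed behaves identically (a sketch):

version: "3"
services:
  webserver:
    build: ./server
  form:
    build: ./ui
    ports:
      - "6800:80"

Both services join the default network Compose creates, which is why proxy_pass http://webserver:5500/request resolves and succeeds even though you never declared a shared network yourself.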
I want to have an Nginx reverse proxy in front of another container. Both these containers will be on the same Docker network, created using:
docker network create my-net
Now, normally I can inspect the container to see what IP it is on and then use this in the Nginx config file. How do I do this so it is seamless, so that I can use a single docker-compose file and have both containers on the same network with Nginx configured correctly?
Thanks
If you use docker-compose you don't need to create your own network; it will create a private network on which each service can reach all the other services in the same compose file. For example:
docker-compose.yml
version: '2'
services:
  php:
    image: php5.6
    ...<snip>...
  nginx:
    image: nginx:stable-alpine
    ports:
      - "443:443"
    links:
      - php
    ...<snip>...
then in nginx config:
proxy_pass http://php;
You shouldn't need to specify links for networking in the nginx compose block (it should be aware of all the service names); however, it will help to define the start order of the containers.
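If you'd rather not rely on the legacy links, a depends_on entry expresses the same start-order hint (a sketch):

services:
  nginx:
    image: nginx:stable-alpine
    depends_on:
      - php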
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
https://docs.docker.com/compose/networking/
I recommend using the Automated Nginx reverse proxy for Docker containers (nginx-proxy).
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
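A minimal setup could look like this (a sketch following the project's documented usage; whoami.example is a placeholder hostname, and the image has also been published as jwilder/nginx-proxy):

# run the proxy with the Docker socket mounted so docker-gen can watch containers
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
# containers started with VIRTUAL_HOST set are picked up and proxied automatically
docker run -d -e VIRTUAL_HOST=whoami.example nginx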