nginx proxy + apache in Docker

I am new to Docker.
My task was to create a Docker container with nginx that forwards PHP (dynamic) requests to an Apache container.
I solved the problem, but it took a lot of time, so I hope this helps other people.
There are many articles on how to build nginx + apache...
But they don't work in Docker.
My problem was solved by changing the nginx configuration file (my.conf | default ...)
from:
upstream backend {
    server 127.0.0.1:8080;
}
to:
upstream backend {
    server apache2php:8080;
}
where apache2php is the service name in docker-compose.yml, like this:
version: "3"
services:
apache2php:
image: apache2php
ports:
- "8080:8080"
volumes:
- "/var/www/html:/var/www/html"
mynginx:
image: mynginx
ports:
- "80:80"
volumes:
- "/var/www/html:/var/www/html"
When I checked the logs (/var/log/nginx/error.log) in the nginx container with my bad settings, I found error 111 (Connection refused while connecting to upstream).
Also, the Host field did not contain my local IP (127.0.0.1) but another address (something like 10.5.100.2).
I think Docker uses its own IP addresses on the Docker network, and those addresses are what the containers use (nginx talks to 10.5.100.2:8080 when it needs to forward a PHP request to Apache).
But when we go to 127.0.0.1:80 from the outside (for example by typing the IP in the browser), Docker translates the inner address (nginx, 10.5.100.2:80) into the outer address we typed (127.0.0.1:80).
Am I right?
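One way to check is to inspect the network that Compose created; the network and container names below are placeholders, since they depend on the project name:

# list the networks created by Docker / Compose
docker network ls

# show which containers are attached to the project network and their internal IPs
docker network inspect <project>_default

# show how a container's internal port is published on the host
docker port <nginx-container-name>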

Related

nginx docker does not redirect gogs docker container

I'm new to Docker networking and nginx, but I'm trying to "dockerize" everything on a local dev server. As a test I have a Docker image with nginx which should proxy another container (gogs) from port 3000 to a specific URL on port 80. I also want to keep the reverse proxy config and the Docker images "separated", with its own docker-compose file for each "app".
So with http://app.test.local I should reach the gogs installation.
BUT: with http://app.test.local I only reach a bad gateway from nginx, and with http://app.test.local:3000 I do reach the gogs installation...
I tried many tutorials, but somewhere there has to be an error that slips in every time.
So what I did:
$ docker network create testserver-network
created
docker-compose for nginx:
version: '3'
services:
  proxy:
    container_name: proxy
    hostname: proxy
    image: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - /docker/proxy/config:/etc/nginx
      - /docker/proxy/certs:/etc/ssl/private
    networks:
      - testserver-network
networks:
  testserver-network:
and one for gogs:
version: '3'
services:
  gogs:
    container_name: gogs
    hostname: gogs
    image: gogs/gogs
    ports:
      - 3000:3000
      - "10022:22"
    volumes:
      - /docker/gogs/data:/var/gogs/data
    networks:
      - testserver-network
networks:
  testserver-network:
(mapped directories work)
configured default.conf of nginx:
# upstream gogs {
#     server 0.0.0.0:10880;
# }
server {
    listen 80;
    server_name app.test.local;

    location / {
        proxy_pass http://localhost:3000;
    }
}
and added this to the hosts file on the client:
app.test.local <server ip>
docker exec proxy nginx -t and docker exec proxy nginx -s reload say everything is fine...
Answer
You should connect both containers to the same docker network and then proxy to http://gogs:3000 instead. You also shouldn't need to expose port 3000 on your localhost unless you want http://app.test.local:3000 to work. I think ideally you should remove that, so http://app.test.local should proxy to your gogs server, and http://app.test.local:3000 should error out.
Explanation
gogs is exposed on port 3000 inside its container, which is then further exposed on port 3000 on your host machine. The nginx container does not have access to port 3000 on your host, so when it tries to proxy to http://localhost:3000 it is proxying to port 3000 inside the nginx container (which is hosting nothing).
After you have joined the containers to the same network, you should be able to reference the gogs container from the nginx container by its hostname (which you've set to gogs). Now nginx will proxy through the docker network. So you should be able to perform the proxy without needing to expose 3000 on your local machine.
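As a sketch of what that could look like: the proxy_pass points at the gogs service name, and both compose files join the network created earlier with docker network create by marking it external (this last detail is an assumption about the setup; without external: true each Compose project creates its own separate network):

# default.conf in the proxy container
server {
    listen 80;
    server_name app.test.local;

    location / {
        # "gogs" resolves through Docker's DNS on the shared network
        proxy_pass http://gogs:3000;
        proxy_set_header Host $host;
    }
}

# in both docker-compose files
networks:
  testserver-network:
    external: true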

Dockerizing 2 separate dependant services

I currently have a VM running 2 services: a frontend httpd/apache2 service that proxies all requests to my backend service.
My backend service only listens on 127.0.0.1:7878, which means it is only accessible via localhost. That's the reason why I'm using a frontend: so that I can proxy my requests to 127.0.0.1:7878.
So my apache2 config on the VM looks like:
root@vm:/etc/apache2/sites-enabled# cat backend.conf
<VirtualHost *:443>
    ServerName my.domain.com
    ProxyPass / http://localhost:7878/
    ProxyPassReverse / http://localhost:7878/
    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/ssl_cert.crt
    SSLCertificateKeyFile /etc/apache2/ssl/ssl_cert.key
</VirtualHost>
Now I want to dockerize both services and deploy them using docker-compose.
I have set up my backend service like:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
My backend/ folder has all the files required for my backend service, including the Dockerfile. I am able to build the Docker image and run it successfully. When I exec into the container, I can successfully run curl commands against 127.0.0.1:7878/some-end-point.
Now I need to dockerize the frontend service too. It could be Apache or it could even be nginx. But I'm not sure how both containers will interact with each other, given that my backend service ONLY listens on 127.0.0.1.
If I extend my docker-compose file like:
version: '3'
services:
  backend:
    build: backend/.
    ports:
      - "7878:7878"
  frontend:
    build: frontend/.
    ports:
      - "80:80"
      - "443:443"
I believe it will spin up its own network, and my backend service won't be accessible using 127.0.0.1:7878.
So in this case, what's the best approach? How do I use docker-compose to spin up different containers on the SAME network so that they share 127.0.0.1?
You can't do that as you describe: the IPv4 address 127.0.0.1 is a magic address that always means "me", and in a Docker context it will mean "this container".
It's easy enough to set up a private Docker-internal network for your containers; in fact, Docker Compose will do this automatically for you. Your backend service must be listening on 0.0.0.0 to be accessible from other containers. You're not required to set externally published ports: on your container (or use the docker run -p option), though. If you don't, then your container will only be reachable from other containers on the same Docker-internal network, using the service name in the docker-compose.yml file as a DNS name, on whatever port the process inside the container happens to be listening on.
A minimal example of this could look like:
version: '3'
services:
  proxy:
    image: 'my/proxy:20181220.01'
    environment:
      BACKEND_URL: 'http://backend'
      BIND_ADDRESS: '0.0.0.0:80'
    ports:
      - '8080:80'
  backend:
    image: 'my/backend:20181220.01'
    environment:
      BIND_ADDRESS: '0.0.0.0:80'
From outside Docker, you can reach the proxy at http://server-hostname.example.com:8080. From inside Docker, the two hostnames proxy and backend will resolve to Docker-internal addresses, and we've set both services (via a hypothetical environment variable setup) to listen on the ordinary HTTP port 80.
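If the proxy were nginx, for example, the configuration it derives from that hypothetical BACKEND_URL would boil down to a proxy_pass on the Compose service name (a sketch, not part of the original answer):

server {
    listen 80;

    location / {
        # "backend" resolves through Docker's internal DNS to the backend container
        proxy_pass http://backend:80;
    }
}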

docker-compose microservice inter-container api communication with nginx proxy

I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating within the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port id mappings to each API call throughout the various code bases in order for my services to talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to successfully call each other in Docker without the need of a proxy port map?
version: '3'
services:
  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy
  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test
  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test
# networks
networks:
  test_network:
http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
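A hedged sketch of that last suggestion, with made-up variable names and build paths, could look like:

version: '3'
services:
  site1.test:
    build: ./site1
    networks:
      - test_network
    environment:
      # the URL site1 uses to call site2 is injected rather than hard-coded in the app;
      # inside Docker it points at the service name and the container port
      - "SITE2_URL=http://site2.test:9000"
  site2.test:
    build: ./site2
    networks:
      - test_network
networks:
  test_network: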

Why does my Nginx proxy succeed in finding my node webserver when my docker-compose doesn't expose any webserver port on the network?

My node webserver uses express and listens on port 5500.
My docker-compose file doesn't expose any port of my node webserver (named webserver), as follows:
version: "3"
services:
webserver:
build: ./server
form:
build: ./ui
ports:
- "6800:80"
networks:
- backend // i let the backend network just for legacy but in fact webserver isn't in this networks
command: [nginx-debug, '-g', 'daemon off;']
networks:
backend:
My Nginx reverse proxy config is as follows:

location /request {
    proxy_pass http://webserver:5500/request;
}
Expectation: my request has to fail because of the absence of a shared network between the two services.
Result: the request succeeds.
I can't understand why. Maybe the default network between the containers does the job?
More info: the request fails when the reverse proxy redirects to a bad port, but succeeds if the domain name is wrong and the port is good, as follows:
proxy_pass http://webver:5500/request  -> succeeds
I can't understand the Nginx / Docker flow here. Would someone please explain what happens here?
More recent versions of Docker Compose create a Docker network automatically. Once that network exists, Docker provides its own DNS system so that containers can reach each other by name or network alias; Compose registers each service under its name in the YAML file, so within this set of containers, webserver and form would both be resolvable host names.
(The corollary to this is that you don't usually need to include a networks: block in the YAML file at all, and there's not much benefit to explicitly specifying a container_name: or manually setting container network settings.)
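A minimal sketch of that behaviour (service names taken from the compose file above; the nginx config fragment is an assumption): with no networks: block at all, both services land on the project's default network, and the proxy can reach the Express app by its service name:

version: "3"
services:
  webserver:
    build: ./server        # Express app listening on 5500 inside its container
  form:
    build: ./ui            # nginx image that proxies to the webserver
    ports:
      - "6800:80"

and inside the nginx config shipped in the form image:

location /request {
    # "webserver" is resolved by Docker's embedded DNS on the default network
    proxy_pass http://webserver:5500/request;
}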

How to access guest IP in docker

I am taking my first steps with docker-compose. I created a very basic docker-compose.yml file with this content:
version: '2'
services:
  webserver:
    build: ./docker/webserver
    image: runwaytest_web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /myhome/Docker/simple-docker/www:/var/www/html
      - /myhome/Docker/simple-docker/symfony3:/var/www/symfony3
    links:
      - mysql
  mysql:
    # mysql stuff
I also have a very basic Dockerfile in ./docker/webserver. The containers are created correctly. If I ssh into the webserver container, apache is running and the config file is correct.
When I inspect my container from the host, its IP is 172.18.0.3, but I can't ping it, and the virtual host for symfony3 does not work (actually I can't even reach the base document root in /var/www).
I am using Docker for Mac.
What am I doing wrong?
See https://stackoverflow.com/a/24149795/99189 for why you can't ping your container. In general, don't expect to be able to do that. The only network access you have to the container is through the ports that you expose, 80 and 443 in this case.
From the perspective of running this in a docker container and using virtual hosts, you'll need your http client to send a Host: header when making requests to localhost:80/localhost:443.
Assuming you are testing with a browser, and that your vhost is user3174311.com, try the following:
add the line 127.0.0.1 user3174311.com to your /etc/hosts
visit user3174311.com in your browser
This is what should be happening:
browser looks up user3174311.com in /etc/hosts and resolves it to 127.0.0.1
browser sends an http request with a Host: user3174311.com header to 127.0.0.1:80
docker is listening on this address and forwards the connection to port 80 in your container
apache sees the request, looks at the Host: header and determines the correct virtual host to use
After that, it depends on your apache/symfony3 configuration. You'll have to post more details if it's not working.
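For reference, the kind of virtual host Apache would need in order to match that Host: header might look roughly like this; the server name and paths are assumptions based on the question, not taken from it:

<VirtualHost *:80>
    ServerName user3174311.com
    # document root inside the container, mounted from the host in docker-compose.yml
    DocumentRoot /var/www/symfony3/web

    <Directory /var/www/symfony3/web>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>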
