Bind multiple ports to multiple URLs with nginx and Docker

I'm a newbie with nginx. I'm using Docker to deploy some apps on different ports, and deployment itself works fine. I can set a hostname for one app, but the problem comes when I try to set a hostname for the other app. For example:
App_1:
hostname: http://app.1.com
Ports to expose: 8080, 5000
App_2: (problem here)
hostname: http://app.2.com
Ports to expose: 8081, 5001
Accessing http://app.1.com works fine, but http://app.2.com throws an error. If I access it by IP and port (http://192.168.1.x:8081), something does show up.
Everything runs locally, and I've read that nginx can do this, but searching Google I couldn't find anything clear. I hope you can help me.
Thanks!!
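In case it helps others: a common way to do this with nginx is one server block per hostname, each proxying to that app's published port. A minimal sketch, assuming both apps run on the same machine and both hostnames resolve to it (e.g. via /etc/hosts):

```nginx
server {
    listen 80;
    server_name app.1.com;
    location / {
        proxy_pass http://127.0.0.1:8080;  # App_1's published port
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app.2.com;
    location / {
        proxy_pass http://127.0.0.1:8081;  # App_2's published port
        proxy_set_header Host $host;
    }
}
```

nginx then routes by the Host header, so both sites can share port 80.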

Related

Nginx Proxy Manager can't connect to Docker containers

My setup: I have a Raspberry Pi at home, connected to my Fritzbox 6660 Cable over LAN. The Pi is running Docker with Portainer. While playing around and learning, I was able to deploy numerous containers with different programs. Now I would like to connect to those containers from outside my home network. In this example I will describe my problem with my Grafana container (but I tried other containers as well).
Currently running are Grafana, InfluxDB (which feeds Grafana), and Nginx Proxy Manager.
I set up Nginx with the Docker Compose file from Nginx Proxy Manager's quick-start page:
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
Once Nginx was running, I made sure that Grafana and Nginx are on the same Docker network (nginx_default in this case).
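For reference, attaching another compose-managed service to that same network might look like this (the service definition is a sketch; only the networks section matters here). Once the container is on nginx_default, Nginx Proxy Manager can reach it by service name:

```yaml
services:
  grafana:
    image: grafana/grafana
    networks:
      - nginx_default

networks:
  nginx_default:
    external: true   # reuse the network the NPM compose stack created
```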
For my custom domain I signed up for a DuckDNS account and created the domain "http://example.duckdns.org".
I used DuckDNS's install instructions to configure the DynDNS settings in my Fritzbox:
Update-URL: http://www.duckdns.org/update?domains=example&token=xxxxxxx-680f-4c66-a982-60d7e2f56911&ip=
Domain name: example.duckdns.org
Username: none (as stated on the DuckDNS install page)
Password: xxxxxxxx-680f-4c66-a982-60d7e2f56911
Don't worry, the "xxxxxx" is actually different in my case.
Further, I enabled port forwarding to the static IP address of my Raspberry Pi on ports 80 and 443, since those are the ones Nginx needs.
Then I went to the Nginx Proxy Manager web page on port 81 and set up a proxy host like so:
Domain names: grafana.example.duckdns.org (I also tried without the grafana prefix, same result)
Scheme: http
Forward Hostname: the Raspberry Pi's IP
Forward Port: 3000, because that's where I can reach Grafana
I also enabled Block Common Exploits and Websockets Support. I know I should enable SSL, but I won't for this example.
Nginx Proxy Manager now says this proxy host is online, but I still can't connect; the browser reports a timeout.
I have had this Raspberry Pi for two weeks now and have spent more than a week just trying to figure out how to reach it over the web. I even tried Traefik at some point, but with no success.
I have watched dozens of tutorials and reconstructed more than one documentation example, but every one of those tutorials reports success when they show their container's web page from outside the home network. My browsers just give me "ERR_CONNECTION_TIMED_OUT".
I also tried No-IP and ddnss.
So if anyone has suggestions, I would highly appreciate it.
I am curious whether you solved this problem, because I get a similar error and I have tried every possible IP combination in Nginx. I can reach the "Congratulations! You've successfully started the Nginx Proxy Manager." page from outside, but the redirection to the Docker container does not work.
Regards

Cannot run GitLab Docker image: ports already in use

I'm trying to run a GitLab Docker image, but I'm running into trouble with ports already in use.
ERROR: for gitlab_web_1 Cannot start service web: driver failed
programming external connectivity on endpoint gitlab_web_1
(a22b149b76f705ec3e00c7ec4f6bcad8f0e1b575aba1dbf621c4edcc4d4e5508):
Error starting userland proxy: listen tcp 0.0.0.0:22: bind: address
already in use
Here is my docker-compose.yml:
web:
  image: 'gitlab/gitlab-ee:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      # Add any other gitlab.rb configuration here, each on its own line
  ports:
    - '80:80'
    - '443:443'
    - '22:22'
  volumes:
    - '$GITLAB_HOME/config:/etc/gitlab'
    - '$GITLAB_HOME/logs:/var/log/gitlab'
    - '$GITLAB_HOME/data:/var/opt/gitlab'
I previously had the same error message for ports 80 and 443.
To fix it, I removed Apache from my server.
But I need port 22 for SSH connections, so I don't know how to work around this one...
Is it possible to have Apache and a Docker container running on the same ports?
Why does gitlab/gitlab-ee need port 22?
A friend told me about Traefik, which should answer my needs: https://docs.traefik.io/.
Another solution would be to create as many VirtualHosts as needed in Apache and reroute them to the local Docker ports.
GitLab needs port 22 because it's the default port for SSH connections, which are used to push and pull repos.
Because there are two different protocols in this one question, they have very different solutions.
SSH ports
To get around this, I followed the steps here, which explain how to update the /etc/gitlab/gitlab.rb file to change the default listening port to something of your choosing (2289 in the example).
Notice that once the change is applied, the "Clone with SSH" string shown when you clone a repo changes to include this custom port.
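For reference, the two pieces involved might look like this (2289 is just the example port). The gitlab.rb setting changes the port GitLab advertises in clone URLs; the actual remapping happens in the Compose port binding:

```ruby
# /etc/gitlab/gitlab.rb (inside the container)
gitlab_rails['gitlab_shell_ssh_port'] = 2289
```

```yaml
# docker-compose.yml — map host port 2289 to the container's sshd on 22
ports:
  - '2289:22'
```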
Apache ports
AFAIK it's not possible to have two processes listening on the same port. Because of this, I publish different ports for the container (e.g. 8080 and 8443) and use Apache with a virtual host and a proxy to make it behave the way users expect. This does assume you have control over your DNS.
This allows me to have several containers, all publishing different ports, while Apache listens on ports 80/443 and acts as a proxy for those containers.
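A minimal sketch of one such virtual host, assuming mod_proxy and mod_proxy_http are enabled and the container publishes local port 8080 (the hostname is an example):

```apache
<VirtualHost *:80>
    ServerName gitlab.example.com
    ProxyPreserveHost On
    # Forward everything to the container published on local port 8080
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

One virtual host like this per container lets Apache keep ports 80/443 while each container publishes its own high port.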

Docker tutorials all bind to port 80, and fail on local and remote servers because port 80 is already in use

I'm trying to wrap my head around all these Docker tutorials, and there is really no explanation of what port 80 is all about. Just, "bind to port 80".
This is the third Docker tutorial I've followed that gives the same error after running the sample Dockerfile:
Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address
already in use
So, I've understood that port 80 is basically the default HTTP port, which allows my app to run at example.com instead of, say, example.com:80. My web server and local machine complain that this port is in use. Of course it is; it's in use by default.
So, why are all these Docker tutorials binding to port 80? I bet they are doing it right and I am missing something... but I cannot find a clear solution or description.
Here is the tutorial I'm doing, DigitalOcean's Install WordPress with Docker: https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-with-docker-compose
Sure enough, port 80 fails for me:
webserver:
  depends_on:
    - wordpress
  image: nginx:1.15.12-alpine
  container_name: webserver
  restart: unless-stopped
  ports:
    - "80:80"
  volumes:
    - wordpress:/var/www/html
    - ./nginx-conf:/etc/nginx/conf.d
    - certbot-etc:/etc/letsencrypt
  networks:
    - app-network
Changing the mapping to the following throws no error, but it means the site only resolves at http://example.com:90:
ports:
  - "90:80"
What am I missing here? Why are all of these definitions of port 80 failing locally on my Mac and on a remote DigitalOcean Ubuntu server?
Do you have something else running on port 80? Try curl localhost:80 or lsof -i :80; you might have Apache or something else running there by default that you'd need to kill.
If you're using a Mac like me, sudo apachectl stop resolved this. Macs have a built-in web server, and mine was running by default, maybe due to some web-sharing feature on the MacBook Pro.
example.com and example.com:80 are the same thing, by the way. Here, some application on your host is already listening on port 80; it has nothing to do with the container. Possibly you are running an nginx server on the host as well. Are you?

Sporadic 503s from specified ports

I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers from the same port, with one Kibana 3 container exposed on port 80.
I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
    bind *:5603
    mode http
    default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
    mode http
    timeout check 2000
    option httpchk GET /status HTTP/1.1
    server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
    server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
    server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are the three Kibana containers. The first server has a health check on it, but the others do not (kibana3/kibana4.1 don't have a status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, but all three are listed. I assume this is partly behind the sporadic 503s; removing the extra entries manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums, as suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancers rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant:
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ copied from the Rancher GitHub issue; not my workaround)
I'm going to see how easy it would be to route via port, and will raise a PR/GitHub issue, as I think it's a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed on the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port, or route it through a proxy like nginx.

Docker compose not exposing port for application container

I have exposed port 80 in my application container's Dockerfile and mapped "80:80" in my docker-compose.yml, but I only get "Connection refused" after I run docker-compose up and try to do an HTTP GET on port 80 at my docker-machine's IP address. My Docker Hub-provided RethinkDB instance's admin panel gets mapped just fine through that same Dockerfile ("EXPOSE 8080") and docker-compose.yml (ports "8080:8080"), and when I start the application on my local development machine, port 80 is exposed as expected.
What could be going wrong here? I would be very grateful for a quick insight from anyone with more Docker experience!
In my case, my service containers were binding to localhost (127.0.0.1), so the exposed ports were never picked up by my docker-compose port mapping. I configured my services to bind to 0.0.0.0 instead, and now they work flawlessly. Thank you @creack for pointing me in the right direction.
In my case I was using:
docker-compose run app
Apparently, the docker-compose run command does not create any of the ports specified in the service configuration. See https://docs.docker.com/compose/reference/run/
I started using:
docker-compose create app
docker-compose start app
and the problem was solved.
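The linked reference also documents a --service-ports flag, which tells run to create the service's mapped ports instead of discarding them:

```
# Create and map the ports declared in the service configuration,
# which a plain `docker-compose run` would otherwise skip
docker-compose run --service-ports app
```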
In my case, I found that the service I was trying to set up had all its networks set to internal: true. Strangely, docker stack deploy didn't report any issue with this.
I have opened https://github.com/docker/compose/issues/6534 to ask for a proper error message, so it will be obvious to other people.
If you are using the same Dockerfile, make sure you also expose port 80 (EXPOSE 80); otherwise, your Compose mapping 80:80 will not work.
Also make sure that your HTTP server listens on 0.0.0.0:80, not on localhost or a different port.
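Putting those two checks together, a minimal sketch (the nginx image here is just a stand-in for any app that listens on 0.0.0.0:80 inside its container):

```yaml
# docker-compose.yml
services:
  app:
    image: nginx:alpine   # nginx listens on 0.0.0.0:80 inside the container
    ports:
      - "80:80"           # host:container; fails if the app binds only 127.0.0.1
```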
