Accessing my container via ip:port not working anymore - Docker

I use docker/docker-compose and nginx on my own server.
I was able to access my container via its external port,
like my_adress:8080.
Then I made a redirect via nginx:
server {
    listen 80;
    server_name my_adress;
    return 301 https://my_adress:8080;
}
Then I removed that nginx conf and restarted the nginx service,
but now I can't access http://my_adress:8080 anymore:
there is an automatic 301 redirect to https://my_adress, without port 8080.
I searched online for how to clear the nginx cache or something similar, but didn't find anything.
I also looked at https://serverfault.com/questions/825331/nginx-still-redirects-even-though-i-removed-the-rule-from-the-conf
but didn't find a solution there.
When I run service docker status,
I get in CGroup: /system.slice/docker.service
/usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 8080 -container-ip 172.19.0.3 -container-port 80
Any ideas why it doesn't work anymore?
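One thing worth ruling out first (my own suggestion, not from the original post): browsers cache 301 redirects aggressively, so a redirect can appear to outlive a removed nginx rule. curl does not cache redirects, so it gives a clean answer:

# If this shows no Location header, the leftover 301 lives in the browser cache
curl -I http://my_adress:8080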

I found where the problem was.
I was using the image https://hub.docker.com/r/onlyoffice/documentserver
and I had set up the HTTPS config;
see "Running ONLYOFFICE Docs using HTTPS" on this page:
https://helpcenter.onlyoffice.com/installation/docs-community-install-docker.aspx
With that config, the image itself automatically redirects HTTP to HTTPS, so the redirect was not coming from the nginx conf on my server; it lived entirely behind the docker-proxy.
So the two solutions I found:
remove the HTTPS configuration, so the server is available over plain HTTP again,
or bind host port 443 (HTTPS) to port 443 of the ONLYOFFICE
container, so the redirect actually lands somewhere.
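A minimal sketch of the second option, assuming the stock image name from Docker Hub and the data path from the ONLYOFFICE install guide (adjust both to your setup):

# Publish both 80 and 443 so the image's internal HTTP->HTTPS redirect has a target
docker run -d -p 80:80 -p 443:443 \
    -v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
    onlyoffice/documentserver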

Related

Gitlab-ce docker container inaccessible over https

I am having an issue accessing a local GitLab over HTTPS.
I installed it on Ubuntu and RedHat 8 with the same result: port 443 is not reachable.
Under /etc/gitlab/ssl/ I have created a self-signed certificate and key:
xxx.crt
xxx.key
Then I configured it following the instructions given here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#manually-configuring-https. The issue I am having is that I am not able to connect to GitLab over HTTPS; HTTP works just fine.
Odd behavior:
Per the documentation, "By default, when you specify an external_url starting with 'https', NGINX will no longer listen for unencrypted HTTP traffic on port 80." But that's not the case here, even with external_url set.
So I checked /var/opt/gitlab/nginx/conf/gitlab-http.conf after configuring it, and the server was still listening on *:80. I changed it to 443 and stopped/started the GitLab container, and it broke with "unreachable".
To get it back working I reverted the change and did the following:
gitlab-ctl reconfigure
sudo docker restart gitlab
Now it's back to responding on port 80 and not 443.
You said "http works just fine". HTTP should not work here; see "Redirect HTTP requests to HTTPS":
By default, when you specify an external_url starting with 'https', NGINX will
no longer listen for unencrypted HTTP traffic on port 80.
If you want to redirect all HTTP traffic to HTTPS, you can use the redirect_http_to_https setting:
external_url "https://gitlab.example.com"
nginx['redirect_http_to_https'] = true
So double-check your gitlab.rb, then run sudo gitlab-ctl reconfigure.
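One more thing to verify, though this goes beyond the original answer and the names below are assumptions: the container must actually publish port 443 to the host, otherwise nginx inside can listen on 443 and the host will still refuse connections. A typical omnibus-style run:

# The -p 443:443 mapping is the point here; names and paths are hypothetical
sudo docker run -d --name gitlab \
    -p 80:80 -p 443:443 -p 2222:22 \
    -v /srv/gitlab/config:/etc/gitlab \
    -v /srv/gitlab/logs:/var/log/gitlab \
    -v /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest

If 443 is missing from the container's docker ps output, no amount of gitlab.rb tuning will make HTTPS reachable.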

How to configure nginx to listen on port 80 and redirect to my website running on port 80?

I want my nginx to listen on port 80 and forward to my website in a Tomcat Docker container, also on port 80.
I tried changing Tomcat's port to another one, but then the port number shows up in the URL bar when users visit my website. So I want to use the default port 80.
The problem is that I cannot run nginx and the Tomcat Docker container on port 80 at the same time.
Correct; you should:
-- run nginx on port 80
-- set Tomcat to run on another port (e.g. 8080)
-- use the proxy_pass option in nginx, pointing at http://localhost:8080
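A minimal sketch of such a vhost, assuming nginx runs on the host and the Tomcat container publishes 8080 (the server_name is a placeholder):

server {
    listen 80;
    server_name example.com;

    location / {
        # Users see port 80; nginx quietly forwards to Tomcat on 8080
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}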

Docker refusing connection on port 443

I'm setting up my AWS EC2 instance. I wanted to make that instance accessible via HTTPS, but I get a connection refused error.
This is what I tried:
Run docker pull abiosoft/caddy
Put the Caddyfile in my home folder
Run mkdir -p $HOME/caddycerts; chmod ugo+rwx $HOME/caddycerts;
Run docker run -d -e "CADDYPATH=/etc/caddycerts" -v $HOME/Caddyfile:/etc/Caddyfile -v $HOME/caddycerts:/etc/caddycerts -p 443:443 abiosoft/caddy
Run docker restart *dockerName*
My Caddyfile looks like this:
some-domain-name.com {
    tls myemail
    proxy / 172.17.0.1:9001 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        header_upstream X-Forwarded-Proto {scheme}
    }
}
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
The EC2 instance's security group has HTTPS enabled on port 443.
When you use AWS, make sure that the port you are using is allowed in your security group and that you have the rights to use it.
An AWS security group or ACL doesn't give "connection refused"; they silently drop the packet. Given the "connection refused" message, it seems the service isn't running or the server isn't listening on port 443.
Have you tried to telnet it locally? Does that work?
telnet localhost 443
Error: curl: (7) Failed to connect to some-domain-name.com port 443: Connection refused
The above error message means that your web server is not running on the specified port 443. You can validate that with a simple telnet (as in James's answer above).
Your Caddyfile proxies upstream to port 9001, and the first line of a Caddyfile is always the address of the site to serve.
Without seeing the Dockerfile it's hard to pinpoint, but I'd say there's nothing configured to run on 443 in your application.
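A few standard checks (my own addition, not from the original answers) that usually narrow this down, run on the EC2 host itself:

# Is any process bound to 443 on the host?
sudo ss -tlnp | grep ':443'
# Is the caddy container actually publishing 443?
docker ps --format '{{.Names}}\t{{.Ports}}'
# Test locally, bypassing DNS and security groups entirely
curl -vk https://localhost/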

nginx + docker: http to https redirection

I have nginx inside a docker container, and I want to force SSL on all requests. Because I need to expose the webserver on a different port outside the container, I'm not using standard ports when accessing the server.
I mapped the SSL port 443 inside my container to 8888 outside, so when I open the URL https://myserver:8888, HTTPS works fine.
What happens when I don't use the https prefix? Port 443 is still listening, but since the request isn't HTTPS I get the following error:
400 Bad Request
The plain HTTP request was sent to HTTPS port
Redirecting port 80 to 443 is not enough: because I access the server on port 8888, all requests come in on port 443, and I cannot guarantee that the scheme used is HTTPS.
I mean, the following block has no effect, because I'm not exposing port 80 outside, only 8888, which maps directly to 443:
server {
    server_name myserver;
    listen 80;
    return 301 https://$host:$server_port$request_uri;
}
How can I force it to work even when the user puts http in the URL?
Thank you
You have to use two different ports: for example 8888 and 8889.
Bind the first port to the container's 80, and the second to its 443.
If a client wants to contact your container over HTTP, it will have to use 8888 (-> 80). If the vhost is configured correctly, nginx will serve a 301 or 302 HTTP return code (a redirect) to the HTTPS port (8889 -> 443).
So your return might look like
return 301 https://$host:8889$request_uri;
and the client will start a TLS connection on the right port (see the sketch below).
Due to technical limitations it's pretty hard to use the same port for both clear traffic and TLS-encrypted traffic, so most webservers listen on two distinct ports: 80 for plain text and 443 for TLS.
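A minimal sketch of that layout, assuming the container is started with -p 8888:80 -p 8889:443 (the ports come from the answer; everything else is a placeholder):

server {
    listen 80;
    server_name myserver;
    # Clients reach this block via host port 8888; send them to the
    # host port that maps to 443, not to 443 itself
    return 301 https://$host:8889$request_uri;
}

server {
    listen 443 ssl;
    server_name myserver;
    ssl_certificate     /etc/nginx/certs/myserver.crt;  # hypothetical paths
    ssl_certificate_key /etc/nginx/certs/myserver.key;

    location / {
        root /usr/share/nginx/html;
    }
}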

Docker Nginx-Proxy Container used for Port 80 Forwarding to other container based on Domain

I am trying to set up a Docker Nginx proxy server to forward incoming requests to their corresponding Docker container on 192.168.1.120, or to the router's web admin at 192.168.1.1.
So right now I am in a bit of a pickle, but I need to set this up regardless. My current setup:
Router 192.168.1.1 (Web Admin + Port Forwarding)
Server1 LAMP - (Router Forwards -> port 80 for LAMP Server)
Server2 Docker - (Router Forwards -> 20 SSH, 8080, 9000 Docker Admin)
I have to configure port forwarding through my router's web interface, which is accessible on port 8080. The issue is that I have since moved to Florida, and I had stupidly added a port-forwarding rule on 8080 that points to the Shipyard Docker manager, intending to eventually install an Nginx-Proxy forwarding Docker container. I never got the forwarding container working, and I eventually switched to Portainer on port 9000, which I used because it was the only other port I had forwarded before losing access to my router's web interface - and with it, the ability to forward ports.
The downside is that I cannot access my router's web interface. The upside is that I still have to implement an Nginx-Proxy port-forwarding container anyway, to set up dynamic port 80 forwarding to different Docker containers based on the URL.
So I want to move my LAMP server into a new Docker container, and then I will also have a few other Rails Docker containers - but I need to configure a Docker container that forwards requests to different servers based on the port. I assume I need to have 2 proxies running - one for port 80 forwarding, and one for port 8080 forwarding - this is not a problem.
I have not been able to configure nginx so that an incoming request to the domain name I have pointed at my server (my.domain.com below) gets forwarded to my router at 192.168.1.1. Any help / suggestions on how to configure my Nginx-Proxy Docker container to forward this correctly, or on what I should set up here to forward incoming requests to a web server dynamically based on the URL? I can install any Docker containers I need for this.
My current config, /etc/nginx/nginx.conf, running in an Nginx-Proxy Docker container on port 8080 (Google for the nginx-proxy Docker image):
# My Nginx config to forward my.domain.com
http {
    resolver 127.0.0.1;
    access_log /var/logs/nginx/access.log;

    server {
        listen 8080;
        server_name my.domain.com;
        return 301 http://192.168.1.1:8080/$request_uri;
    }
}
I get these errors:
[error] 55#55: *2274 datacenter.URL.com could not be resolved (110: Operation timed out), client: 166.172.189.185, server: datacenter.URL.com, request: "GET / HTTP/1.1", host: "datacenter.URL.com:8080"
[error] 55#55: recv() failed (111: Connection refused) while resolving, resolver: 192.168.1.1:8080
EDIT: I just noticed that I can only have one Docker container bound to each host port at a time. So I need to figure out how to forward requests to different servers and ports based on the domain name: each URL forwarding rule needs to be able to target a different server running on a different port.
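What the EDIT describes is name-based virtual hosting: one nginx on the public port routes by Host header. A hedged sketch (all hostnames and upstream ports below are placeholders, and it uses proxy_pass instead of the 301 from the config above so the public URL is preserved):

http {
    server {
        listen 80;
        server_name lamp.my.domain.com;
        location / {
            proxy_pass http://192.168.1.120:8081;  # hypothetical LAMP container port
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name router.my.domain.com;
        location / {
            proxy_pass http://192.168.1.1:8080;    # router web admin, per the question
            proxy_set_header Host $host;
        }
    }
}

With literal upstream IPs in proxy_pass, the resolver directive from the original config is no longer needed, which also removes the "could not be resolved" errors.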
