For security reasons I am running my own website on an unusual port: HTTPS on port 11223 instead of 443.
The website lets users log in with a Google account, implemented using the Google OAuth API.
At the last step of authentication (the redirect from Google OAuth back to my system), a network timeout occurs.
On the other hand, if my server runs HTTPS on the default port 443 instead of 11223, everything works fine.
I have configured the Google OAuth client settings (redirect URIs, home page URL, JavaScript origins) to use the special port 112233, but without success.
It may be relevant that the server is behind a firewall with NAT: the firewall accepts HTTPS connections on port 11223 and forwards them to the internal web server, which runs HTTPS only on port 11223. But I don't think that is the issue.
What could be the reason that port 443 works but port 11223 doesn't?
My guess is that Google OAuth does not support web servers running on an unusual port!?
A port number is 16 bits and thus cannot exceed 65535, so 112233 (as written above) is not a valid port; make sure the client settings really say 11223.
Could it be a proxy configuration issue? I recommend configuring your firewall to return a 404 on port 11223 and seeing what happens.
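As far as I know, Google OAuth does allow non-default ports in registered redirect URIs as long as they match exactly, so a timeout on the final redirect usually points at reachability: the redirect is followed by the user's browser, so the public hostname must be reachable on port 11223 from wherever that browser sits. A minimal reachability sketch in Python, to be run from outside the LAN; the hostname and port are placeholders for your public endpoint:

    # Quick check: can an outside client complete a TLS handshake with the
    # server on the non-standard port? Run this from outside your LAN
    # (e.g. a phone hotspot or a VPS). HOST and PORT are placeholders.
    import socket
    import ssl

    HOST = "example.com"   # placeholder: your public hostname
    PORT = 11223           # placeholder: the non-standard HTTPS port

    context = ssl.create_default_context()
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print("TLS handshake OK, negotiated:", tls.version())
    except (OSError, ssl.SSLError) as exc:
        print("Connection failed:", exc)

If this times out from outside but works from inside the LAN, the NAT/firewall rule (or an ISP blocking the odd port) is a more likely culprit than the OAuth configuration itself.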
Related
I know there are lots of similar questions about this, but none of them helped me.
I have a Linux server running an Nginx reverse proxy in Docker, a DuckDNS domain set up, and ports 80 and 443 opened on my router. I can't access the site from outside using my domain name or public IP; it seems like my router refuses external requests. I have tried lots of configurations and followed lots of guides on the web... this problem is driving me crazy.
I think the problem is before Nginx, so I haven't posted my Nginx config. If it would help, I will post it.
I hope someone can help me. Thank you so much.
There are several things that could be causing the issue with your router refusing external requests. Here are a few things to check:
Make sure that your router's firewall is configured to allow incoming connections on ports 80 and 443. Some routers have a built-in firewall that needs to be configured to allow traffic through specific ports.
Confirm that your router is properly forwarding incoming requests to the correct IP address and port on your network. This is typically done through a feature called port forwarding.
Check your router's security settings to ensure that it is not blocking incoming requests based on the source IP address or domain name. Some routers have the option to block incoming requests from specific IP addresses or domain names.
Confirm that your Linux server is properly configured to handle incoming requests. This includes checking that your Nginx reverse proxy is running and properly configured to forward requests to the correct IP address and port.
Verify that your DNS record is pointing to the right IP address; you can use an online tool like https://www.whatsmydns.net/ to check this (there is also a small script sketched after this list).
Check whether your router has any VPN or proxy service enabled, which could be affecting incoming requests.
Check whether your ISP is blocking incoming connections to your public IP address.
It's also possible that there is a problem with your router's firmware or hardware; in that case, you may need to contact the manufacturer for further assistance.
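As a rough way to automate the DNS and port checks above from outside your network (a sketch; the DuckDNS name is a placeholder):

    # Does the DuckDNS name resolve, and do ports 80/443 accept connections
    # from the internet? Run from outside your own network (e.g. a phone on
    # mobile data or a cheap VPS). DOMAIN is a placeholder.
    import socket

    DOMAIN = "myname.duckdns.org"   # placeholder for your DuckDNS domain

    resolved = sorted({info[4][0] for info in socket.getaddrinfo(DOMAIN, None)})
    print("DNS resolves to:", resolved)

    for port in (80, 443):
        try:
            with socket.create_connection((DOMAIN, port), timeout=5):
                print(f"port {port}: reachable")
        except OSError as exc:
            print(f"port {port}: NOT reachable ({exc})")

If DNS resolves to your public IP but the ports are not reachable, the problem is in the router/ISP path; if the ports are reachable, look at the Nginx/Docker side instead.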
I'm trying to make my Moodle site public, but without success.
First, I tried port forwarding on my router, but it doesn't connect to the server, so I imagine something on the router is blocking it (I also opened the port in the Windows firewall, and internally private_ip:8080 works).
So I tried ngrok, but when I click the link from Safari/Chrome on my phone (on 4G), the connection gives me a 303 See Other error and redirects me to localhost:8080, which of course my phone does not recognize.
Has anyone managed to expose a Moodle site successfully?
Thank you
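For the ngrok case, one way to narrow it down is to fetch the ngrok URL without following redirects and look at the Location header of the 303; if it points at localhost:8080, the redirect is most likely coming from Moodle's configured wwwroot rather than from ngrok itself. A rough sketch in Python (the ngrok URL is a placeholder):

    # Fetch the URL without following redirects and print where the 303 points.
    import urllib.error
    import urllib.request

    URL = "https://something.ngrok.io"   # placeholder for the ngrok link

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None   # do not follow redirects, just report them

    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(URL, timeout=10)
        print("Status:", resp.status)
    except urllib.error.HTTPError as err:
        # 3xx responses end up here once redirects are not followed
        print("Status:", err.code, "-> Location:", err.headers.get("Location"))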
I need URL logs on my network using Squid and MikroTik. I am able to get HTTP traffic, but I am not getting HTTPS traffic. How can I get HTTPS traffic using Squid and MikroTik? Another way is also fine.
I run a DNS server and, with that, I log the DNS requests on the MikroTik:
username || DNS/URL (website.com)
A quick way to test:
Download and install Pi-hole on a Raspberry Pi, make that the DNS server of your MikroTik, and then in Pi-hole you will see the DNS queries for each client on the MikroTik. You can then use the actual log files on the Raspberry Pi to build an API yourself, or use the built-in APIs in Pi-hole.
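As a rough sketch of that log-based approach (assuming Pi-hole's dnsmasq-style query log, commonly at /var/log/pihole.log; the path can vary by version), this turns the log into "client || domain" lines close to the format above. Mapping the client IP to a username would still have to come from your DHCP leases or hotspot accounting:

    # Turn Pi-hole's dnsmasq-style query log into "client || domain" lines.
    # Example log line:
    #   Jan  1 10:00:00 dnsmasq[123]: query[A] website.com from 192.168.88.10
    import re

    LOG_PATH = "/var/log/pihole.log"   # common default; may differ by version
    QUERY_RE = re.compile(r"query\[[^\]]+\]\s+(\S+)\s+from\s+(\S+)")

    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = QUERY_RE.search(line)
            if match:
                domain, client = match.groups()
                print(f"{client} || {domain}")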
I'm not sure how to do this via Squid or a web proxy.
I have hosted my website http://www.example.com, and it works fine.
When I try to access it via https://www.example.com, my browser says it is unable to connect.
Is this normal? (Is it a DNS issue or a Rails app issue?)
This probably isn't a Rails issue, but it's hard to say without more information. The most likely explanation is that your server isn't configured to have port 443 open, which is the default port for HTTPS connections.
If you are on Amazon EC2, you'll need to manually open port 443 in the EC2 security group configuration.
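If you prefer to script it, the same change can be made with boto3; a minimal sketch (the region and security group ID are placeholders, and it assumes your AWS credentials are already set up):

    # Open inbound TCP 443 on an EC2 security group.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",                  # placeholder group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS"}],
        }],
    )
    print("Inbound 443 allowed on the security group.")

Note that opening the port is only half of it: something also has to be listening on 443 with an SSL certificate (for a Rails app, typically Nginx or Apache terminating TLS, or a load balancer in front).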
I have an application using Spring Security's OpenID implementation. The app server sits behind a proxy, namely Apache httpd with mod_proxy. If the proxy connects to the app server via HTTP, the application tells the OpenID authenticator to redirect back via HTTP rather than HTTPS, which is what I would prefer. It seems to determine the protocol dynamically and only sees HTTP. If I configure the proxy to use HTTPS, I run into this problem. So is there a way to operate Spring Security behind a proxy that connects to it over HTTP?
A little extra mod_proxy and GlassFish configuration solved this problem for me:
https://serverfault.com/questions/496888/ssl-issue-with-mod-proxy
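In general, the usual approach behind an HTTP-only proxy hop is to have the proxy pass the original scheme along (for example in an X-Forwarded-Proto header set via mod_headers) and have the application trust that header when it builds absolute URLs such as the OpenID return-to URL. A minimal, non-Spring illustration of the idea in Python/WSGI (hostnames are placeholders):

    # Reconstruct the client-facing base URL behind a reverse proxy (WSGI-style
    # environ). The app trusts X-Forwarded-Proto/-Host set by the proxy instead
    # of the scheme of the proxy-to-app connection.
    def external_base_url(environ):
        scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                             environ.get("wsgi.url_scheme", "http"))
        host = environ.get("HTTP_X_FORWARDED_HOST",
                           environ.get("HTTP_HOST", "localhost"))
        return f"{scheme}://{host}"

    # What the app sees when Apache terminates TLS and proxies over plain HTTP:
    environ = {
        "wsgi.url_scheme": "http",            # proxy-to-app hop is plain HTTP
        "HTTP_HOST": "app-internal:8080",
        "HTTP_X_FORWARDED_PROTO": "https",    # added by the proxy
        "HTTP_X_FORWARDED_HOST": "www.example.com",
    }
    print(external_base_url(environ))   # -> https://www.example.com

In the Spring/GlassFish case the equivalent is usually done in proxy and connector configuration rather than in application code.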