HTTP website behind HTTPS Let's Encrypt NGINX Route - docker

I can't seem to figure out if this is common practice or not, but I want to create a website (Running on a container) and then have traffic forwarded to the website from a wildcard on my domain, I want to secure it and use Nginx Proxy Manager and Let's Encrypt to manage the certificate.
Do I keep the website running on my internal server as just HTTP:80 and redirect traffic to it via Nginx? My current site is just a server-side Blazor web app.
I've seen other people do this, but it makes me wonder if that is indeed secure, at some point between Nginx and the internal server it is not encrypted. Is my understanding correct?
I imagine it looks something like this:
Client connects securely to Nginx Proxy Manager (HTTPS)
Nginx Proxy Manager then decrypts and forwards to the Internal Website (HTTP)
Is my understanding correct?
Is this common practice, or is there a better way to achieve what I want?
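The layout described in the question, as an added nginx site configuration (not code from the question or any answer): nginx terminates TLS with the Let's Encrypt certificate and forwards plain HTTP to the Blazor container over the Docker network. The container name `blazor`, the domain `example.com` and the certificate paths are assumptions; the paths follow certbot's default layout.

```nginx
# Needed for WebSocket pass-through; lives in the http context, so a file under
# conf.d/ is the right place for it.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Port 80 exists only for the ACME HTTP-01 challenge and the redirect to HTTPS.
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;            # assumed certbot webroot
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    add_header Strict-Transport-Security "max-age=31536000" always;

    location / {
        # This hop is plain HTTP. It is acceptable only because the container
        # publishes no port on the host and is reachable solely on the private
        # Docker network it shares with nginx.
        proxy_pass http://blazor:80;

        # Tell the app what the client actually used, so it builds https:// URLs
        # and sees the real client address instead of the proxy's.
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Blazor Server keeps a SignalR WebSocket open; without these the
        # circuit falls back to long polling or fails outright.
        proxy_http_version 1.1;
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 100s;
    }
}
```

The unencrypted segment is the one between nginx and `blazor:80`; keeping the container off any host-published port and on a network only nginx can reach is what makes that acceptable. On the application side, ASP.NET Core's forwarded-headers middleware should be configured to trust `X-Forwarded-Proto` only from the proxy's address, since any client that can reach the app directly could otherwise set that header itself.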

Related

How to properly setup Keycloak redirects behind reverse proxy with SpringSecurity

I have a Spring based application, which uses keycloak-spring-security-adapter to handle the Keycloak specific stuff. This server is deployed on same machine as the Keycloak server, and both of them are running behind Nginx reverse proxy.
The Spring app has in its keycloak.json configuration the correct proxy-url. The Keycloak server has the frontendUrl set to the correct proxy-url. When testing on localhost without the reverse proxy everything works as expected.
The issue is when deployed with the reverse proxy in front. The Spring application runs the OIDC service discovery during startup. But to do this, it uses the public URL. This fails, because on the backend side, the reverse proxy is not in the DNS record.
How to set up the keycloak-spring-security-adapter in such a way that for the backend requests it uses the local URL, but for the logins that are done through the JSP pages in the browser, it uses the proxied URL?

How to expose a service from minikube to be able to access it from another device in the same network?

I've created a service inside minikube (expressjs API) running on my local machine,
so when I launch the service using minikube service wedeliverapi --url I can access it from my browser with localhost:port/api
But I also want to access that service from another device so I can use my API from a flutter mobile application. How can I achieve this goal?
Due to the small amount of information and to clarify everything, I am posting a general Community wiki answer.
The solution to this problem was to use a reverse proxy server. In this documentation is the definition of what exactly a reverse proxy server is.
A proxy server is a go‑between or intermediary server that forwards requests for content from multiple clients to different servers across the Internet. A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers
Common uses for a reverse proxy server include:
Load balancing
Web acceleration
Security and anonymity
This is the guide where one can find a basic configuration of a proxy server.
See also this article.

Serving dockerized microservices over HTTPS

I'm currently struggling with docker and SSL. Let me give you an overview on what I'm trying to do.
I built a microservice-based architecture which is composed of a React web application and some "backend" services written in Python and exposed with gunicorn on Docker containers. I need to serve it over SSL because of Auth0, which needs HTTPS communication. So, I built the server, bought a domain and got the SSL certificate for the domain with Let's Encrypt.
Now, here are the troubles, since my services communicate with each other over a Docker network, say services-network. For this reason they refer to each other with the URL `service:port/example`.
At the moment I'm able to successfully connect to my web app with HTTPS, but whenever this tries to contact the "backend" services the connection is refused because it came from a non-secure resource (I used http://service:port/endpoint).
I tried to use the let's encrypt certificate generated for the webapp but the communication is blocked with message requests.exceptions.SSLError: HTTPSConnectionPool(host='service', port=8081): Max retries exceeded with url: /endpoint (Caused by SSLError(CertificateError("hostname 'service' doesn't match 'domain.com'",),))
I understand that a possible workaround for this error is to make the services communicate with each other without using the Docker network but the external one. Anyway I think that is not a good practice and that the communication among containers needs to be done through the Docker network.
Finally, my question is: which is the best way to make the containers communicate through https over the docker network?
I personally like to use nginx as a reverse proxy. You would configure it normally and set it to proxy_pass <dockerIp:port>.
Many people like to use traefik.io which has many features including Let's Encrypt integration.
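The hostname mismatch in the question comes from reusing the public certificate for a name (`service`) it was never issued for. An added example (not from the question or either answer) of the nginx-as-reverse-proxy approach with the internal hop re-encrypted and properly verified: the backend container gets a certificate from an internal CA you operate, issued with subjectAltName `DNS:service`, and the edge nginx verifies it against that CA instead of switching verification off. The upstream name `service`, port 8081, `domain.com` and all file paths are assumptions.

```nginx
upstream backend {
    server service:8081;        # container name on the Docker network
}

server {
    listen 443 ssl;
    server_name domain.com;

    ssl_certificate     /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location /api/ {
        # Re-encrypt towards the container so the Docker-network hop is TLS too.
        proxy_pass https://backend;

        # Verify the container's certificate against the internal CA. The cert
        # carries SAN DNS:service, so the name check that failed in the question
        # now succeeds because we verify against the name we actually connect to.
        proxy_ssl_verify              on;
        proxy_ssl_verify_depth        2;
        proxy_ssl_trusted_certificate /etc/nginx/internal-ca.pem;   # assumed path
        proxy_ssl_server_name         on;
        proxy_ssl_name                service;
        proxy_ssl_protocols           TLSv1.2 TLSv1.3;

        # Optional mutual TLS: present a client certificate so the backend can
        # reject any caller that is not this proxy.
        proxy_ssl_certificate         /etc/nginx/edge-client.pem;   # assumed path
        proxy_ssl_certificate_key     /etc/nginx/edge-client.key;   # assumed path

        proxy_set_header Host              service;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
}
```

The same principle applies to the Python services calling each other directly with `requests`: point `verify=` at the internal CA bundle and make sure the certificate each service presents lists the container name it is reached by in its SAN. Disabling verification (`verify=False`) would silence the error but leaves the traffic open to anything else on that network.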

Why do I need Nginx with Puma?

I'm deploying a Rails app to production. It seems that Puma is fast and handles many of the things I want in a web server.
I'm wondering if I even need to bother with Nginx, and what I'd be missing out on if I just used Puma?
Nginx is a web server and puma is an application server.
Both have their advantages, and you need both.
Some examples:
Static redirects: you could set up your nginx to redirect all HTTP traffic to the same URL with HTTPS. This way such trivial requests will never hit your app server.
Multipart upload: Nginx is better suited to handle multipart uploads. Nginx will combine all the requests and send it as a single file to Puma.
Serving static assets: it is recommended to serve static assets (those in the /public/ endpoint in Rails) via a web server without loading your app server.
There are some basic DDoS protections built-in in nginx.
There's a significant difference between a web server and an application server.
Nginx (Web Server) and Puma (App Server) will handle requests in your application simultaneously.
Whenever there's a request coming from a client, it will be received by the nginx and then it will be forwarded to the application server which is Puma over here.
Having nginx as a web server will help you in handling multiple requests much more efficiently. Being a multi-threaded server it will distribute requests into multiple threads, making your application faster.
As mentioned by vendant, you can serve static pages using a web server, as it will be a better approach.
If you're going to include a certificate in your web application then you can provide redirects from HTTP to HTTPS over here, which will hit the app server only after redirecting to HTTPS.
If you're going to use Puma then you have to make sure that the server is using resources efficiently, but if you use nginx then it's going to take care of it by itself.
You can get more info here.
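An added nginx configuration (not from the question or either answer) that puts the points from both answers into practice in front of Puma: the HTTP-to-HTTPS redirect is answered without touching Puma, files under Rails' public/ are served from disk, request bodies are buffered by nginx before Puma sees them, and a per-client rate limit gives the basic flood protection mentioned above. The socket path, app directory, domain and certificate paths are assumptions.

```nginx
# Per-client-IP rate limit: 10 requests/second sustained, bursts of 20.
# Must sit in the http context, which a conf.d/ file does.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

upstream puma {
    # Puma listening on a Unix socket, reachable only by nginx on this host.
    server unix:/srv/app/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # the redirect never reaches Puma
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /srv/app/current/public;        # the Rails public/ directory
    client_max_body_size 20m;            # uploads are buffered here (default
                                         # proxy_request_buffering on), so Puma
                                         # receives the body in one go

    # Precompiled assets straight from disk, with long cache lifetimes.
    location ^~ /assets/ {
        gzip_static on;
        expires 1y;
        add_header Cache-Control public;
    }

    location / {
        # Serve a static file if one exists; otherwise hand off to Puma.
        try_files $uri @puma;
    }

    location @puma {
        limit_req zone=perip burst=20 nodelay;   # 503 for clients beyond the burst
        proxy_pass http://puma;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }
}
```

Because Puma only listens on a local socket, nothing can bypass the rate limit or the HTTPS redirect by addressing the app server directly; if Puma were instead bound to a TCP port, that port would need to be firewalled to nginx's address for the same guarantee.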

Spring Security, OpenID, and mod_proxy

I have an application using spring-security's OpenID implementation. The app server sits behind a proxy. The proxy is apache httpd with mod_proxy. If the proxy connects to the app server via HTTP, the application will tell the OpenID authenticator to redirect back via HTTP rather than HTTPS like I would prefer. It seems to pull the protocol dynamically and only sees HTTP. If I configure the proxy to use HTTPS, I run into this problem. So is there a way to operate spring security behind a proxy which uses HTTP?
A little extra mod_proxy and Glassfish configuration solved this problem for me:
https://serverfault.com/questions/496888/ssl-issue-with-mod-proxy
