Graph of network structure
I am following the auth flow from these Ben Awad videos: https://www.youtube.com/watch?v=iD49_NIQ-R4 https://www.youtube.com/watch?v=25GS0MLT8JU.
The general pattern is: access token in memory, refresh token as an httpOnly cookie. This seems pretty secure and dev-friendly.
However, both my Node frontend and my API backend are dockerized, and during SSR I want to reach the backend over the local Docker connection rather than going out through DNS. By default this is a bridge network, and that comes with a problem: the internal URI of the backend is http://backend, not http://localhost:8000 (or the DNS name in production), so the cookie does not apply to that domain, even though it is really the same app that set the cookie.
So: what is the best solution, and how do I implement it?
Ideas for solutions:
Don't use the local connection; let the frontend container use the host network
"Rename" the local connection from http://backend to http://localhost
Somehow set two cookies, one for http://backend and one for localhost
Store the refresh token somewhere that's not a cookie
You can use Nginx to solve this problem. Since requests sent from your frontend to your backend server will carry the verification cookie in their headers, you can bind each server to a different port on the host machine, then add a CNAME record in your domain control panel so that requests to (let's say) api.mydomain.com are served by the same host as mydomain.com. Then, in your Nginx config, you can do something like this:
Nginx Config
server {
server_name mydomain.com;
listen 80;
location / {
proxy_pass http://localhost:8000/;
}
}
server {
listen 80;
server_name api.mydomain.com;
location / {
proxy_pass http://localhost:7000/;
}
}
Then you can use SvelteKit's externalFetch hook to change the path on the server side, so that when the request hits the server, instead of fetching the public URL, you override it with the localhost URL, like this:
src/hooks.ts
export async function externalFetch(request) {
  if (request.url.includes('api.mydomain.com')) {
    // request.url is a string, so parse it before taking the path
    const url = new URL(request.url);
    const localPath = new URL(url.pathname + url.search, 'http://localhost:7000');
    // clone the original request (method, headers, body), but point it at the internal address
    request = new Request(localPath.href, request);
  }
  // the hook has to return the response
  return fetch(request);
}
Related
Does anyone know how this interaction works in Nginx?
I currently have a subdomain, let's call it subdomain1, and I want to change it to subdomain2.
To be more specific:
I run everything in a Docker container, and my certificate will be for subdomain2. There will be no more servers for subdomain1.
I want to keep the traffic coming from Google for subdomain1, but the name is not appropriate anymore and needs to be changed to subdomain2.
Does something like this work? Will there be any issues?
server {
server_name subdomain1.mydomain.com;
return 301 http://www.subdomain2.mydomain.com/$request_uri;
}
Something like this could do it:
server {
listen 8066;
server_name localhost;
location / {
rewrite (.*)$ http://www.google.com$1 redirect;
}
}
Port 8066 is just for my test purposes, redirecting to google.com.
If I try localhost:8066/foo, I end up at https://www.google.com/foo.
Note that the redirect keyword makes it a temporary (302) redirect. For a permanent (301) redirect, use permanent instead.
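For example, the same test block with a permanent redirect would look like this:
server {
    listen 8066;
    server_name localhost;
    location / {
        # permanent issues a 301 instead of a 302
        rewrite (.*)$ http://www.google.com$1 permanent;
    }
}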
Yes, your approach will work. The following points might be helpful:
Since you no longer want any server for subdomain1, for this redirection to work you need to make sure that subdomain1 still points (in DNS) to the same server where subdomain2 is hosted.
Use $scheme so the redirect keeps http/https:
server {
server_name subdomain1.mydomain.com;
return 301 $scheme://subdomain2.mydomain.com$request_uri;
}
Generally, people avoid using www in front of subdomain.domain.com (you may want to consider this as well).
The server section in nginx has two required directives, listen and server_name. Add listen to your config and it will work.
Manual for the server directive: https://nginx.org/en/docs/http/ngx_http_core_module.html#server
Example
server {
listen 8080;
server_name _;
return 301 http://www.google.com$request_uri;
}
As stated in the title, I would like to change (internally) my URL from:
https://subdomain.example.tech:8081/
to something like:
https://subdomain.example.tech/something/
Is it actually possible? (It's running in Docker.)
Thanks for your answers.
You have to use a web server / load balancer that handles the requests.
So you should handle this in its configuration, not in Docker.
For example, if you use nginx:
server {
listen 8081;
server_name subdomain.example.tech;
return 301 https://subdomain.example.tech$request_uri;
}
I'm a noob to docker, Nginx, and devops, so go easy on me.
I've followed a few tutorials that show how to host multiple web apps in Docker containers using Nginx and subdomains. I cannot create a new A record for this domain, so I can't use subdomains; it has to be a path on the existing domain. If I could create a new A record, there are a million tutorials that show how to host a project on ProjectA.example.com, but since I don't have access to create one, I need to find a way to host it on something like example.com/ProjectA. Another obstacle is that only port 80 is open to the outside, so all traffic must come in through port 80 and be reverse-proxied to whatever port the Docker container is forwarding.
So far I have an Nginx configuration that looks something like this
server {
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
listen 80;
server_name _;
location / {
try_files $uri $uri/ =404;
}
location /projectA {
proxy_pass http://127.0.0.1:9001/;
}
location /projectB {
proxy_pass http://127.0.0.1:9002/;
}
}
This works for getting me to the homepage of the project, but the CSS of the website doesn't load, and whenever I click a link it sends me to something like example.com/signup instead of example.com/projectA/signup. I tried making a wildcard location (location ~ /projectA.*) but Nginx didn't like that. I was thinking there's probably a way to say: if the referring URI contains projectA, send the client to example.com/projectA$uri, but I couldn't find the documentation for the syntax.
Basically the question is: is this a good way to tackle the problem, and does anyone have a link to a tutorial or some documentation on how to do this?
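For what it's worth, the referer-based idea described in the question could be sketched roughly like this (a hypothetical, untested snippet; the Referer header is only a heuristic, and the trailing-slash approach in the answer below is the cleaner fix):
location / {
    # If a request for an absolute path (e.g. /signup or a stylesheet) comes
    # from a page under /projectA, bounce it back under the /projectA prefix.
    if ($http_referer ~ "projectA") {
        return 302 /projectA$request_uri;
    }
    try_files $uri $uri/ =404;
}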
Using a trailing slash after the location path should do it:
location /projectA/ {
proxy_pass http://127.0.0.1:9001/;
}
This maps /projectA/whatever to http://127.0.0.1:9001/whatever.
If you want to use a regex to rewrite, it's something like this:
location ~ ^/projectA/(.*)$ {
proxy_pass http://127.0.0.1:9001/$1;
}
or
location /projectA/ {
rewrite ^/projectA/whatever/(.*)$ /whatever.php?path=$1 break;
proxy_pass http://127.0.0.1:9001/;
}
This maps /projectA/whatever/foo to http://127.0.0.1:9001/whatever.php?path=foo.
I'm trying to make an API call to an internal Docker container, but for every request URL I have to add a proxy_pass in the Nginx config. I've read articles saying that a trailing slash should be enough to pass everything after the given URL on to the proxy_pass target.
Read here (redirect table)
Example
www.example.com/api -> redirects to correct endpoint
www.example.com/api/2020 -> this doesn't redirect to http://api/2020
Configuration
location = /api/ {
proxy_pass http://api/;
}
So why doesn't this configuration pass the 2020 'parameter' to the api endpoint? It works when I make a configuration like this:
location = /api/2020 {
proxy_pass http://api/2020;
}
But the problem is that it's a parameter, so it can be any number. How do I solve this?
I've read other posts, but I'm asking this question again to get a broader understanding of the options for passing parameters. Is it really necessary to use a regex for this?
Remove the exact matching and just use
location /api/ {
proxy_pass http://api/;
}
without any regexes.
You are using the "=" modifier, which is an exact match, so it only matches that exact URL. Please see the code below and change your configuration.
location ~ ^/(api)/ {
proxy_pass http://api;
}
After the above change, restart your nginx server; you don't need to write a separate block for every API path.
I hope it resolves your problem.
This is very easy to solve:
location / {
proxy_pass http://internal_addr:port$request_uri;
}
Example for an internal server at IP/port 172.168.1.1:3000:
location / {
proxy_pass http://172.168.1.1:3000$request_uri;
}
By doing this, everything the external client requests from nginx after the / (routes, parameters, etc.) will be forwarded to the internal server in exactly the same form.
If you have more than one internal server, you can use something like this:
For server1 (IP/port 172.168.1.1:3333):
location /app1 {
proxy_pass http://172.168.1.1:3333$request_uri;
}
That is:
Client request to Nginx: example.com/app1/login.php?x=y
Nginx will send the request to server1 as /app1/login.php?x=y
For server2 (IP/port 172.168.1.2:4444):
location /app2 {
proxy_pass http://172.168.1.2:4444$request_uri;
}
That is:
Client request to Nginx: example.com/app2/login.php?x=y
Nginx will send the request to server2 as /app2/login.php?x=y
I have a Rails app running on an AWS OpsWorks Nginx/Unicorn Rails Layer. I want my app to only process requests to api.mydomain.com and have my web server directly return a 404 if any request is made using the server's IP address.
I've implemented a custom cookbook that overrides unicorn/templates/default/nginx_unicorn_web_app.erb (from the opsworks-cookbooks repo: https://github.com/aws/opsworks-cookbooks). I copied the template file that exists in this repository and added a new server block at the top of the template:
server {
listen 80;
server_name <%= @instance[:ip] %>;
return 404;
}
I stopped and started my server to ensure that the customized template file gets used, but when I issue a request using the server's IP address it still gets routed to my Rails app.
Is this <%= @instance[:ip] %> not correct? Is there a way to log from within this template file so that I can debug what is going wrong more easily? I tried using Chef::Log.info, but my message didn't seem to get logged.
Thanks!
Edit: For anyone else having this issue... The answer below about setting up a default server block fixed one of my issues. My other issue was that my cookbook updates were not even making their way to my instance, and I needed to manually refresh the cookbook cache: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustom-enable-update.html
EC2 instances have a private (typically RFC 1918) IP address; the Internet Gateway translates traffic between that address and the public address. If that private address is what <%= @instance[:ip] %> returns, then obviously this configuration isn't going to do what you want.
Even if not, this isn't the correct approach.
Instead, you should define the default behavior of Nginx -- which is the first server block -- to throw the error, and later in the config, declare a server block with the api DNS hostname and the behavior you want for normal operation.
See Why is nginx responding to any domain name?.
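A minimal sketch of that layout might look like this (hostname and upstream port are assumptions, not the values from the OpsWorks template):
# default server: catches requests made by IP address or any unknown Host header
server {
    listen 80 default_server;
    server_name _;
    return 404;
}

# normal operation: only answers for the API hostname
server {
    listen 80;
    server_name api.mydomain.com;
    location / {
        # assumed upstream; the real template proxies to the Unicorn socket/port
        proxy_pass http://127.0.0.1:8080;
    }
}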
Try adding a location block around the return statement; "location /" refers to the root path.
server {
listen 80;
server_name <%= @instance[:ip] %>;
location / {
return 404;
}
}