Does anyone know how this kind of redirect works in Nginx?
I currently have a subdomain, let's call it subdomain1, and I want to change it to subdomain2.
To be more specific: I run everything in a Docker container, my certificate will be for subdomain2, and there will be no more servers using subdomain1.
I want to keep the Google traffic that still points at subdomain1, but the name is no longer appropriate and needs to be changed to subdomain2.
Does something like this work? Will there be any issues?
server {
server_name subdomain1.mydomain.com;
return 301 http://www.subdomain2.mydomain.com/$request_uri;
}
Something like this should work:
server {
listen 8066;
server_name localhost;
location / {
rewrite (.*)$ http://www.google.com$1 redirect;
}
}
Port 8066 is just for my test purposes, redirecting to google.com.
If I try localhost:8066/foo, I end up at https://www.google.com/foo.
Note that the redirect keyword makes it a temporary (302) redirect. For a permanent (301) redirect, use permanent instead.
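For instance, the same test block with a permanent redirect would look like this (still using the made-up 8066 test port):

```nginx
server {
    listen 8066;
    server_name localhost;

    location / {
        # 'permanent' answers with a 301 instead of the 302 that 'redirect' sends
        rewrite (.*)$ http://www.google.com$1 permanent;
    }
}
```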
Yes, your approach will work. The following points might be helpful:
Since you don't want any server for subdomain1 anymore, you still need to make sure DNS for subdomain1 points at the same server where subdomain2 is hosted, otherwise the redirect will never be reached.
Use $scheme so the redirect preserves the original protocol:
server {
    server_name subdomain1.mydomain.com;
    return 301 $scheme://subdomain2.mydomain.com$request_uri;
}
Generally people avoid using www in front of sub-domain.domain.com (you may want to drop it as well).
The server section in nginx needs its two key parameters, listen and server_name, to match requests. Add listen to your config and it will work.
Documentation for server: https://nginx.org/en/docs/http/ngx_http_core_module.html#server
Example
server {
listen 8080;
server_name _;
return 301 http://www.google.com$request_uri;
}
[Image: graph of the network structure]
I am following the flow from these Ben Awad videos: https://www.youtube.com/watch?v=iD49_NIQ-R4 https://www.youtube.com/watch?v=25GS0MLT8JU.
The general pattern is: access token in memory, refresh token as an httpOnly cookie. This seems pretty secure and dev friendly.
However, since both my Node frontend and my API backend are dockerized, during SSR I want to use the local connection to the backend, not go through DNS. By default this is a bridge network, which comes with a problem: the internal URI of the backend is http://backend, not http://localhost:8000 (or the DNS name in production), so the cookie does not apply to that domain, even though it is really the same app that set the cookie.
So: what is the best solution, and how do I implement it?
Ideas for solutions:
To not use the local connection and let the frontend container use the host network
To "rename" the local connection from http://backend to http://localhost
To somehow set two cookies, one for http://backend and one for localhost
Store the refresh token somewhere that's not a cookie
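For what it's worth, idea 2 could be sketched with a Docker Compose network alias, so that inside the bridge network the backend also answers to the public hostname the cookie was set for (service and domain names below are assumptions):

```yaml
# docker-compose.yml (sketch)
services:
  backend:
    networks:
      default:
        aliases:
          # SSR fetches to the public hostname now resolve to this
          # container, so the cookie's domain keeps matching.
          - www.mydomain.com
  frontend:
    depends_on:
      - backend
```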
You can use Nginx to solve this problem. Since the request your frontend sends to your backend includes the verification cookie in its headers, you can bind each server to a different port on the host machine, then add a CNAME record in your domain control panel so that all requests to (let's say) api.mydomain.com are served by the same machine as mydomain.com. Then, in your Nginx config, you can do something like this:
Nginx Config
server {
    server_name mydomain.com;
    listen 80;

    location / {
        proxy_pass http://localhost:8000/;
    }
}
server {
    listen 80;
    server_name api.mydomain.com;

    location / {
        proxy_pass http://localhost:7000/;
    }
}
Then you can use SvelteKit's externalFetch hook to change the path on the server side, so that when the request hits the server, instead of fetching the specified URL you override it with the localhost URL, like this:
src/hooks.ts
export async function externalFetch(request) {
    if (request.url.includes('api.mydomain.com')) {
        // request.url is a string, so parse it before taking the pathname
        const url = new URL(request.url);
        const localPath = new URL(url.pathname + url.search, 'http://localhost:7000');
        request = new Request(localPath.href, request);
    }
    // the hook must return the response of the (possibly rewritten) fetch
    return fetch(request);
}
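The URL-rewriting step can be exercised in isolation. This hypothetical helper mirrors what the hook does; the api.mydomain.com host and port 7000 are assumptions carried over from the config above:

```javascript
// Map any URL on the public API host to the container-local backend address.
function toLocalUrl(publicUrl, localBase = 'http://localhost:7000') {
  const url = new URL(publicUrl);
  if (url.hostname === 'api.mydomain.com') {
    // Keep path and query string, swap scheme/host/port
    return new URL(url.pathname + url.search, localBase).href;
  }
  return publicUrl; // non-API URLs pass through untouched
}

console.log(toLocalUrl('https://api.mydomain.com/users?page=2'));
// → http://localhost:7000/users?page=2
```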
I'm a noob to docker, Nginx, and devops, so go easy on me.
I've followed a few tutorials that show how to host multiple web apps in Docker containers using Nginx and subdomains. However, I cannot create a new A record for this domain, so I can't use subdomains; it has to be a path. If I could create a new A record, I've found a million tutorials that show how to host a project on ProjectA.example.com, but since I can't, I need a way to host it on something like example.com/ProjectA. Another obstacle: only port 80 is open to the outside, so all traffic must come in on port 80 and be reverse proxied to whatever port the Docker container forwards from.
So far I have an Nginx configuration that looks something like this
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    listen 80;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /projectA {
        proxy_pass http://127.0.0.1:9001/;
    }

    location /projectB {
        proxy_pass http://127.0.0.1:9002/;
    }
}
This works for getting me to the homepage of the project, but the CSS of the website doesn't load, and whenever I click a link it sends me to something like example.com/signup instead of example.com/projectA/signup. I tried a wildcard location (location ~ /projectA.*) but Nginx didn't like that. I was thinking there's probably a way to say: if the referring URI contains projectA, send the request to example.com/projectA$uri, but I couldn't find documentation on the syntax.
Basically the question is, is this a good way to tackle the problem, and does anyone have a link to a tutorial or some documentation on how to do this?
Using a trailing slash in the location (and in proxy_pass) should do it:
location /projectA/ {
    proxy_pass http://127.0.0.1:9001/;
}
This sends /projectA/whatever to http://127.0.0.1:9001/whatever.
If you want to use a regex rewrite, it's something like this:
location ~ ^/projectA/(.*)$ {
    proxy_pass http://127.0.0.1:9001/$1;
}
or
location /projectA/ {
    rewrite ^/projectA/whatever/(.*)$ /whatever.php?path=$1 break;
    proxy_pass http://127.0.0.1:9001/;
}
which sends /projectA/whatever/foo to http://127.0.0.1:9001/whatever.php?path=foo.
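A note on the asker's broken CSS and /signup links: those usually come from the app emitting absolute paths. If the app can't be configured with a base path, one possible workaround (a sketch, not guaranteed to fit every app) is nginx's sub_filter to rewrite absolute links in the proxied HTML:

```nginx
location /projectA/ {
    proxy_pass http://127.0.0.1:9001/;

    # sub_filter needs an uncompressed upstream response
    proxy_set_header Accept-Encoding "";

    # Rewrite absolute links so they stay under /projectA/
    # (requires nginx built with ngx_http_sub_module)
    sub_filter 'href="/' 'href="/projectA/';
    sub_filter 'src="/'  'src="/projectA/';
    sub_filter_once off;
}
```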
I can't really find a way to generate a secure URL from a route name.
To get a full URL, I use
echo route('my_route_name');
But what do I do if I want a URL with https?
UPDATE: As pointed out in the comments, a simpler way of doing this would be adding URL::forceSchema('https'); for Laravel version between 4.2-5.3 or URL::forceScheme('https'); for version 5.4+ in the boot method of your AppServiceProvider file.
Old answer:
It's actually entirely possible and there's only one line of code needed to accomplish that.
Laravel doesn't check for the presence of SSL by itself; it relies on Symfony. And that is our key to making it believe the current request is secure.
The thing is, we have to set the HTTPS server parameter to true, and the easiest way is to paste the following code into the boot method of your AppServiceProvider:
$this->app['request']->server->set('HTTPS', true);
In my case, I only need to force SSL in production; the local environment should still work over http. This is how I force SSL only in production:
$this->app['request']->server->set('HTTPS', $this->app->environment() != 'local');
By the way, keep those environment names in mind; you may need them in the future.
Laravel 8
I recently resolved this by modifying this file:
app/Providers/AppServiceProvider.php
in the method boot() add the following:
URL::forceScheme('https');
Add the use statement at the top:
use Illuminate\Support\Facades\URL;
To keep your local environment working over http, you can do it like this:
public function boot()
{
if(env('APP_ENV') !== 'local') {
URL::forceScheme('https');
}
}
Note: don't forget to set the APP_ENV variable to prod in your production .env file:
APP_ENV=prod
Actually, it turns out that Laravel doesn't care whether a URL is secure or not, because it generates URLs based on the current one. If you're on an https page, route() will return a secure URL; if on http, an http:// URL.
The problem was that Laravel didn't detect that https was enabled, which was due to a faulty server configuration.
You can check if Laravel sees the current connection as https by calling Request::isSecure();
As I mentioned in a relevant question, I found 5 ways of generating secure URLs.
1. Configure your web server to redirect all non-secure requests to https. Example nginx config:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}
2. Set your environment variable APP_URL using https:
APP_URL=https://example.com
3. Use the helper secure_url() (Laravel 5.6).
4. Add the following line to the AppServiceProvider::boot() method (for version 5.4+):
\Illuminate\Support\Facades\URL::forceScheme('https');
5. Implicitly set the scheme for a route group (Laravel 5.6):
Route::group(['scheme' => 'https'], function () {
    // Route::get(...)->name(...);
});
At the moment this last way is not documented, but it works well.
I think there is only one way to do this.
To generate a secure URL for your named routes, pass the route into the secure_url helper function:
secure_url(URL::route('your_route_name', [], false));
You can't really use the route helper function by itself, because it generates an absolute URL (with http://) by default, and it's the https version that you want.
Laravel 5.x will generate a secure URL via the route() helper if it detects that the incoming connection is secure. The problem usually happens when the app sits behind a load balancer or proxy (e.g. Cloudflare), since the connection between the app server and the load balancer/proxy might not be secure.
I am using Laravel Forge + Cloudflare, and this is the easiest way I could find to make the app think the incoming connection is secure (not sure about other proxies):
1. Generate a self-signed certificate (see https://www.digitalocean.com/community/tutorials/openssl-essentials-working-with-ssl-certificates-private-keys-and-csrs or http://www.selfsignedcertificate.com/)
2. In the Forge panel, insert your private key and cert via Sites > your-site > SSL Certificates > Install Existing Certificate.
3. Activate it.
4. In the Cloudflare panel, under Crypto > SSL, choose "Full" (not Strict).
5. Done (it will take a few minutes for the change to propagate).
In short, the connection between the client and Cloudflare is secured by Cloudflare's own SSL, and the connection between the app server and Cloudflare is protected by your generated cert (thus the app sees the connection as secure).
You can apply the same principle with other stacks.
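Step 1 above, the self-signed origin certificate, can be generated with a single openssl command (the filenames and CN here are just examples):

```shell
# Create a private key and a self-signed certificate valid for one year
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout origin.key -out origin.crt \
  -days 365 -subj "/CN=example.com"
```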
Use secure_url:
secure_url(URL::route('your_route_name', [], false));
You need to pass false as the third argument to URL::route so that it doesn't return a full URL. The secure_url function then generates a fully qualified HTTPS URL to the given path.
From the UrlGenerator interface you can use URL::route
string route(string $name, mixed $parameters = array(), bool $absolute = true)
Get the URL to a named route.
Parameters
string $name
mixed $parameters
bool $absolute
Return Value
string
https://laravel.com/api/5.4/Illuminate/Contracts/Routing/UrlGenerator.html
In most cases routes should be generated with the same scheme your site was loaded with. Laravel automatically detects whether the request has an X-Forwarded-Proto header and uses it to decide which scheme to use in generated route URLs. If your site is behind a reverse proxy, you should add the reverse proxy's IP address to the list of trusted proxies. The https://github.com/fideloper/TrustedProxy package helps with this; it's included in Laravel 5.5. For example, my config/trustedproxy.php looks like:
<?php
return [
    'proxies' => '*',
    'headers' => [],
];
I use it with nginx reverse proxy that has the following configuration:
server {
listen 80;
server_name example.com;
access_log /var/log/nginx/example.com_access.log;
error_log /var/log/nginx/example.com_error.log;
client_max_body_size 50m;
location / {
proxy_pass http://localhost:8002;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
}
Replace example.com with your domain. The SSL certificates were provided by Let's Encrypt with certbot.
On Laravel 5.5.*
You only need to use https in APP_URL in your .env file,
as your AppServiceProvider (below) already has a function that checks whether APP_URL (app.url in your config) contains https:
class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        \URL::forceRootUrl(\Config::get('app.url'));
        if (str_contains(\Config::get('app.url'), 'https://')) {
            \URL::forceScheme('https');
        }
    }
}
This is certainly old, but someone like me will stumble over here one day.
In your .env file, define APP_URL with https instead of http, because all Laravel URLs are generated based on this variable:
APP_URL=https://example.com
and wherever you want you can just say
{{ URL::route('my.route', params) }}
Or
{{ route('my.route', params) }}
To make sure all routes are generated with the secure protocol, add this to the boot method of the AppServiceProvider class:
<?php

namespace App\Providers;

use Illuminate\Routing\UrlGenerator;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot(UrlGenerator $url)
    {
        if (config('app.production')) {
            $url->forceScheme('https');
        }
    }
}
Just add your application domain with the https protocol in the APP_URL of your .env file.
APP_URL=https://example.com
Then run php artisan route:cache to rebuild the route cache.
For reference of future visitors:
The secure_url function doesn't correctly handle GET parameters. So, for example, if you want to convert the URL the user has visited into a secure URL while retaining the GET fields, you need to use this:
secure_url(Request::path()).'?'.http_build_query(Input::all());
Particularly note the use of path() rather than url(): if you give it a full URL, it doesn't replace the http at the start, making it effectively useless.
I came across this issue while trying to generate a route as a form action in Blade using Laravel 5.4.
Then I hit upon secure_url(), so I tried
{{ secure_url(route('routename', $args)) }}
This still returned a non-secure URL. :-(
After digging through the code and adding some debug logs, I finally figured out that secure_url does not change the incoming URL argument if it's already an absolute URL (including the scheme).
Fortunately, route() has an $absolute flag as its third argument and returns a relative URL if $absolute is passed as false.
Assuming /a/{id}/b is a named route "a.b"
route('a.b', 1) : will return http://[domain]/a/1/b
route('a.b', 1, false) : will return /a/1/b
Combining the two, I arrived at:
{{ secure_url(route('routename', $args, false)) }}
As expected it generated https://[domain]/routeXXX
:-)
I had a problem with redirects on trailing slashes; after 2 hours of looking for the bug, the fix was just to change
.htaccess
<IfModule mod_rewrite.c>
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>
RewriteEngine On
# Redirect Trailing Slashes If Not A Folder...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
# Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
to
<IfModule mod_rewrite.c>
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>
RewriteEngine On
# Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
If you are using a load balancer, Laravel never sees the actual scheme.
So use https://stackoverflow.com/a/65691937/6489768. Works with Laravel 9.x.
Place this in your filters.php file and every request will be forced to https, while retaining the URL parameters:
// force ssl
App::before(function () {
    if (!Request::secure() && App::environment() != 'local') {
        $baseHost = Request::getHttpHost();
        $requestUri = Request::getRequestUri();
        $newLink = 'https://'.$baseHost.$requestUri;
        return Redirect::to($newLink);
    }
});
According to the Laravel documentation on the url() helper method:
If no path is provided, an Illuminate\Routing\UrlGenerator instance is returned.
So you can use the secure method of the UrlGenerator class in the following way:
echo url()->secure('my_route_name');
To generate a secure (https) route, use the following built-in 'before' filter called 'auth':
For example:
Route::get('your-route', ['before' => 'auth', 'uses' => 'YourController@yourAction']);
Now when you output your link it will be prepended with 'https'.
I'm using rails 4 and I'm proxying a GET request to another server like this:
def proxy_video(path)
self.status = 200
response.headers["X-Accel-Redirect"] = "/proxy/#{path}"
render text: 'ok'
end
In my nginx config, I have this:
location ~* ^/proxy/(.*?)/(.*) {
internal;
resolver 127.0.0.1;
# Compose download url
set $download_host $1;
set $download_url http://$download_host/$2;
# Set download request headers
proxy_set_header Host $download_host;
# Do not touch local disks when proxying content to clients
proxy_max_temp_file_size 0;
# Stream the file back to the browser
proxy_pass $download_url?$args;
}
This works fine for proxying GET requests like:
proxy_video('http://10.10.0.7:80/download?path=/20140407_120500_to_120559.mp4')
However, I want to proxy a request that passes a list of files, which will not fit in a GET request, so I need to pass what currently goes in $args as POST data.
How would I proxy this POST data? Do I need to do something like response.method = :post? And where would I provide the parameters of what I'm POSTing?
I'm pretty sure you can't do this out of the box with nginx. This feature is really designed for accelerating file downloads, so it's focused on GET requests.
That said, you might be able to do something fancy with the Lua module. After you've compiled a version of nginx that includes it, something like the following might work.
Ruby code:
def proxy_video(path)
self.status = 200
response.headers["X-Accel-Redirect"] = "/proxy/#{path}"
response.headers["X-Accel-Post-Body"] = "var1=val1&var2=val2"
render text: 'ok'
end
Nginx config:
location ~* ^/proxy/(.*?)/(.*) {
internal;
resolver 127.0.0.1;
# Compose download url
set $download_host $1;
set $download_url http://$download_host/$2;
rewrite_by_lua '
    ngx.req.set_method(ngx.HTTP_POST)
    -- read the header set by the Rails response that triggered this
    -- internal redirect; ngx.header would only expose the response
    -- headers nginx is about to send, not the upstream ones
    ngx.req.set_body_data(ngx.var.upstream_http_x_accel_post_body)
';
# Set download request headers
proxy_set_header Host $download_host;
# Do not touch local disks when proxying content to clients
proxy_max_temp_file_size 0;
# Stream the file back to the browser
proxy_pass $download_url?$args;
}