Original Host request through HAProxy - OAuth

I'm using HAProxy to load balance an API that uses OAuth. Part of OAuth is a hash that incorporates the requested URL. In the API code, the URL as it arrives at the server from the LB contains the port, so the hashes don't match: the hash sent by the client was computed without the port, but the server-side hash includes it.
Is there a way to send the originally requested host in the X-Forwarded-Host header via an option, the way option forwardfor does for X-Forwarded-For? Or do I need to alter the header via reqadd in the backend? And if so, is there a way to get the host without having to hard-code it?
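Something like the following is what I'm after, as a minimal sketch, assuming a recent HAProxy where http-request set-header is available (the backend name api_servers is just a placeholder):

backend api_servers
    # placeholder backend name; pass the client's original Host value along to the API
    option forwardfor
    http-request set-header X-Forwarded-Host %[req.hdr(host)]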

Related

How can I prevent Electron's Chromium from forcing HTTPS on fetch requests?

From the Electron renderer, I am accessing a local GraphQL endpoint served by a Django instance on my computer, which I'd like to do over HTTP, not HTTPS. But Electron's Chromium seems to intercept my fetch request and preemptively return a 307 redirect.
So if my fetch request is a POST to http://local.myapp.com:3000/v1/graphql, then Chromium returns a 307 and forces a redirect to https://local.myapp.com:3000/v1/graphql, which fails because my server is only listening for plain HTTP on port 3000, and for my use case I can't set up a local cert for local.myapp.com.
Theoretically the first insecure request should be hitting an nginx Docker container listening on port 3000 without any SSL requirement, and nginx proxies the request on to a Hasura container. But I'm not even seeing the requests in the nginx access logs, so I'm pretty sure the request is being intercepted by Chromium.
I believe this StackOverflow comment summarizes well why this is happening: https://stackoverflow.com/a/34213531
Although I don't recall ever returning a Strict-Transport-Security header from my GraphQL endpoint or Django server.
I have tried the following code without success to turn off this Chromium behavior within my Electron app:
import { app } from 'electron'
app.commandLine.appendSwitch('ignore-certificate-errors')
app.commandLine.appendSwitch('allow-insecure-localhost')
app.commandLine.appendSwitch('ignore-urlfetcher-cert-requests')
app.commandLine.appendSwitch('allow-running-insecure-content')
I have also tried setting the fetch options to include {redirect: 'manual'} and {redirect: 'error'}. I can prevent the redirect but that doesn't do me any good because I need to make a successful request to the endpoint to get my data.
I tried replacing the native fetch with electron-fetch (link) and cross-fetch (link) but there seems to be no change in behavior when I swap either of those out.
Edit: Also, making the request to my GraphQL outside of Electron with the exact same header and body info works fine (via Insomnia).
So I have a couple of questions:
Is there a way to programmatically view/clear the list of HSTS domains that is being used by Chromium within Electron?
Is there a better way to accomplish what I'm trying to do?
I think the issue might be on the server side. Many servers don't allow plain HTTP at all: they drop the connection and redirect you to HTTPS, and there is a clear reason why they do that.
Imagine you have an app that connects over HTTPS to send your API key in exchange for some data. If someone simply changed the https:// to http://, the data would be sent unencrypted and your API key would be exposed no matter what else you did with it. That's why such servers never accept an HTTP request, not even a single byte of data.
I can think of two likely causes, plus a workaround:
Chromium may not be the reason for the redirect; your Django instance might be configured for production or with HTTPS listeners.
nginx might be the one doing the redirecting (some SSL directive in its configuration).
Last but not least, you can just generate a certificate with OpenSSL for local.myapp.com and use it on your Django instance (the certificate is issued for the hostname; the port doesn't appear in it). You can trust the certificate locally so that it works everywhere on your computer.
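If you go the certificate route, a rough OpenSSL sketch might look like this (assuming OpenSSL 1.1.1+ for -addext; the file names are placeholders):

# hypothetical file names; creates a self-signed cert for local.myapp.com
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=local.myapp.com" \
  -addext "subjectAltName=DNS:local.myapp.com" \
  -keyout local.myapp.com.key -out local.myapp.com.crt
# then trust local.myapp.com.crt in your OS keychain and point nginx (or Django's dev server) at the key/cert pair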

Swagger proxied by HAProxy can't execute requests

I have Swagger working behind HAProxy. I use the built-in Swagger support in WebSphere Liberty Profile (the apiDiscovery feature):
Browser --(swagger.mydomain.com)--> HAProxy --(swagger.intranet)--> IBM Liberty server with Swagger
The first Swagger page is generated and shown correctly in the browser. But because the Liberty server gets the requests from HAProxy, not from my browser, and they are addressed to the intranet name/IP (swagger.intranet), the Swagger code for executing GETs, POSTs, etc. is generated with that intranet hostname. So when I try any of the methods, they don't work, since they reference this internal name from a browser outside that zone.
Can I configure HAProxy to pass a header so that the code is generated with the original server name (swagger.mydomain.com) used in the request? (That is the one that should appear in the generated HTML/JavaScript code.)
Thanks.
Liberty trusts the Host: header and uses it to assemble self-referential links.
Where you define the backend, try adding http-request set-header Host swagger.mydomain.com so the Host matches what the client is using, or remove a similar stanza if you are already rewriting it to swagger.intranet.
(sorry, I'm not an HAProxy user. This is based on searching for 'HAProxy equivalent of ProxyPreserveHost')
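A rough sketch of what that stanza could look like (the backend name and server port are placeholders, and this assumes a recent HAProxy with http-request set-header):

backend liberty_swagger
    # placeholder backend name; make Liberty see the public hostname
    http-request set-header Host swagger.mydomain.com
    # placeholder server line; 9080 assumed as Liberty's default HTTP port
    server liberty1 swagger.intranet:9080 check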

Route 53 - Special domain for a single page on existing server

I have a complex web app at example-app.com, hosted fully on AWS using ELB and Route 53 for DNS. It's a Rails app.
I'm running an experiment inside the Rails app, at example-app.com/test. I want to set up new-domain-app.com to point at example-app.com/test, with the URL cloaked so it always shows new-domain-app.com. It's a single-page site, so it shouldn't require any navigation.
I'm having a lot of trouble figuring out how to set up my DNS on Route 53 to accomplish this. Does anyone have good ideas on what this Route 53 configuration should look like?
AWS offers a very simple way to implement this -- with CloudFront. Forget about the fact that it's marketed as a CDN. It's also a reverse proxy that can prepend a fixed value onto the path, and send a different hostname to the back-end server than the one typed into the browser, which sounds like what you need.
Create a CloudFront web distribution.
Configure the new domain name as an alternate domain name for the distribution.
For the origin server, put your existing hostname.
For the origin path, put /test -- or whatever string you want prefixed onto the path sent by the browser.
Configure the cache behavior as needed -- enable forwarding of the query string or cookies if needed and any headers your app wants to see, but not Host.
Point your new domain name at CloudFront... But before you do that, note that your CloudFront distribution has a dxxxexample.cloudfront.net hostname. After the distribution finishes setting up (the "In Progress" status goes away, usually in 5 to 20 minutes) your site should be accessible at the cloudfront.net hostname.
How this works: When you type http://example.com into the browser, CloudFront will add the origin path onto the path the browser sends, so GET / HTTP/1.1 becomes GET /test/ HTTP/1.1. This configuration just prefixes every request's path with the string you specified as the origin path, and sends it on to the server. The browser address bar does not change, because this is not a redirect. The host header sent by the browser is replaced with the hostname of the origin server when the request is sent to the origin.
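For the final step, pointing the domain at the distribution is a Route 53 alias record, which could be created roughly like this with the AWS CLI (the hosted zone ID is a placeholder and the DNSName is your distribution's cloudfront.net hostname; Z2FDTNDATAQYW2 is the fixed zone ID Route 53 uses for CloudFront aliases):

# hypothetical hosted zone ID; replace the DNSName with your distribution's hostname
aws route53 change-resource-record-sets --hosted-zone-id ZONEID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "new-domain-app.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dxxxexample.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'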
What you are trying to do is not possible. Route 53 is a DNS system, and you cannot configure a hostname (e.g. new-domain-app.com) to point to a URL (e.g. http://example-app.com/test) using DNS.
However, you are probably using the wrong tool for the job. If example-app.com/test is indeed a simple, static, single-page site, then you do not need to host it inside the Rails app. Instead, you can host it in an AWS S3 bucket, and then point new-domain-app.com at that bucket using Route 53.
See the following for details:
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
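If you go the S3 route, a rough sketch with the AWS CLI (the bucket name is assumed to match the domain and the site to be a single index.html; see the docs above for the Route 53 alias step):

# assumes the bucket name matches the domain and that the bucket allows public ACLs
aws s3 mb s3://new-domain-app.com
aws s3 website s3://new-domain-app.com --index-document index.html
aws s3 cp index.html s3://new-domain-app.com/ --acl public-read
# then create a Route 53 alias pointing new-domain-app.com at the bucket's website endpoint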
DNS knows about domains, not URLs; it simply converts names to IP addresses.
You can't do what you are asking for using just DNS and ELB. What you can do, however, is have a separate vhost for new-domain-app.com that points to your example-app.com site and accomplishes what you want with some sort of redirection rule that only fires for new-domain-app.com.
I'm not sure that this qualifies as an SO question, and more likely is a serverfault question. Specifics about your webserver and OS platform would be helpful in getting more specific advice.
So here are some details:
You already have example-app.com set up and working
You create a CNAME entry pointing new-domain-app.com to example-app.com or you can make an A record pointing to the same IP. If you already have example-app.com pointing to a different IP address, then use a subdomain (test.example-app.com) to isolate it.
Setup a new vhost on your server that basically duplicates the existing vhost for new-domain-app.com. The only thing you need to change is the server name configuration.
Why does this work? Because HTTP/1.1 introduced the Host header, which browsers send along and which web servers use in vhosting to determine which virtual host should handle an incoming request. When the server sees which hostname the client browser asked for, it routes the request to the appropriate vhost.
Rather than having to do some fancy proxying, which certainly could be used to get a similar result, you can just add a redirection rule that looks for requests with the host new-domain-app.com and sends those to the /test content. In Apache that uses mod_rewrite, which people often configure via the ubiquitous .htaccess file, but the same can be done in nginx and other common web servers; the specifics differ slightly for each.
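For example, a minimal .htaccess sketch for Apache mod_rewrite (assuming the new domain should serve the /test page; the pattern is a placeholder to adjust):

RewriteEngine On
# only fire for the new domain, and internally rewrite the root to the /test page
RewriteCond %{HTTP_HOST} ^new-domain-app\.com$ [NC]
RewriteRule ^$ /test [L]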

Why is the IP not pointing to the Joomla main page?

Given the following URL: http://domain/index.php, where index.php is the main web page on a Joomla server, I want to reach the same page using the IP form, http://IP/index.php. I've tried that with several Joomla servers without success. Why is that happening?
I will try to keep this answer simple, yet understandable.
The relation between Internet domains and IP address is not necessarily one-to-one.
In shared hosting, a single IP address may be used by several domains (or hostnames).
A Host header, which is a part of the HTTP standard, is sent with the HTTP request. This allows the server to determine which site to serve.
When you try to access a domain whose IP you don't know, a DNS lookup is performed, which provides the requested IP address.
An HTTP request is then sent to that IP with a Host header carrying the hostname (which contains the domain name).
If you access the IP directly, for example by typing it into a web browser's address bar, the value of the Host header will be the IP itself, and the server has no indication of which domain you actually want.
It is possible to set up a default behavior for cases where the IP address is directly accessed, but it is highly likely that a shared host will not allow you to set it yourself.
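You can see this for yourself by supplying the Host header explicitly while connecting to the IP, for example with curl (the IP below is just a documentation placeholder):

# connect to the server's IP but present the site's hostname in the Host header
curl -H "Host: domain" http://203.0.113.10/index.php
# without -H, the Host header is the bare IP and the server can't pick the right site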

Is the subdomain part of an HTTPS URL secure?

If we have something like this url:
https://www.example.com/Some/Page/index.html?id=15
I know that example.com will be sent as plain text, but /Some/Page/index.html?id=15 is sent securely.
Now, my question is, if we have something like this:
https://somesubdomain.example.com/Some/Page/index.html?id=15
Can attackers tell that I'm visiting somesubdomain.example.com, or can they only tell that I'm visiting example.com?
In other words, is the subdomain part of the URL sent securely?
If the client is using Server Name Indication (most modern web browsers/platforms do), the hostname (though not the rest of the URL) will be visible in the clear in the handshake, in the Server Name Indication extension, so both www.example.com and somesubdomain.example.com would be visible.
If the client isn't using SNI, an eavesdropper would still see the server certificates and the target IP address(es). Some certificates can be valid for multiple host names, so there may be some ambiguity, but this should give a fairly strong clue to the eavesdropper.
In addition, the same eavesdropper might be in a position to see the DNS requests (unless you've configured the hosts explicitly in your hosts file perhaps).
In general, you shouldn't assume that the host name you're trying to contact is going to be hidden. Whether it's a subdomain isn't relevant, it's the full host name as it's requested by the client that matters.
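If you want to see it for yourself, openssl s_client sends the hostname in the SNI extension during the handshake (the hostname below is the one from the question):

# the -servername value is sent unencrypted in the TLS ClientHello (SNI)
openssl s_client -connect somesubdomain.example.com:443 -servername somesubdomain.example.com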
When using HTTPS, all traffic between the HTTP client and server is encrypted. That does not mean it is safe, but it is encrypted in the sense you refer to here. One thing a network sniffer can still see is the IP address you communicate with, regardless of which hostname was resolved to that address.
Simply try it yourself with a network sniffer; I recommend Wireshark.
