Guacamole 1.2.0
I have a running guac server, and I want to create a client connection from another web app on a different domain (for testing), then move to the same domain once production-ready. I've tried the HTTPTunnel and the WebSocketTunnel, both failing for different reasons: WebSocket seems to only want to connect to itself on localhost, even if I supply the full URL of the guac server, and with the HTTP tunnel I'm getting a CORS error about InvalidAllowCredentials. The following are two variations I've tried; the rest of the HTML is from the example on the guac doc site.
var guac = new Guacamole.Client(
    new Guacamole.HTTPTunnel("https://example.com/tunnel", true)
    // new Guacamole.WebSocketTunnel("https://example.com/websocket-tunnel", true)
);
Even if this connect call were otherwise correct, I believe I'm still missing configuration and authentication information.
The guac documentation seems to be lacking and I'm not sure what I should be doing.
What needs to be done to get connected to an existing machine?
Example:
1. API call: authenticate the user and get a token.
2. API call: search for the connection.
3. Create the tunnel, passing in the connection somehow.
4. Client connect.
This should pop up the display of the remote machine.
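For the first two steps, I imagine the REST calls look roughly like this (the "mysql" data source name is a placeholder that depends on the auth backend):
# 1. Authenticate and get a token (the JSON response carries an authToken field):
curl -X POST 'https://example.com/guacamole/api/tokens' --data 'username=guacadmin&password=somePassword'
# 2. List the connections visible to that user:
curl 'https://example.com/guacamole/api/session/data/mysql/connections?token=<authToken>'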
Headers added in the nginx config under the location / section:
add_header 'Access-Control-Allow-Origin' 'http://127.0.0.1:8000' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
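I assume I also need to answer the OPTIONS preflight in the same location block; something like this common nginx pattern:
if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' 'http://127.0.0.1:8000' always;
    add_header 'Access-Control-Allow-Credentials' 'true' always;
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
    add_header 'Access-Control-Max-Age' 1728000;
    return 204;
}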
You may check how a connection is made in the Guacamole client application, in the file ManagedClient.js. Search for the functions getInstance() and getConnectString().
In principle, you are correct about the token, you can get one with this API call:
curl 'https://guacamole.example.com/guacamole/api/tokens' -H 'Content-Type: application/x-www-form-urlencoded' --data-raw 'username=guacadmin&password=somePassword'
The username/password are for Guacamole itself, not for the remote machine.
Once you have the token, you can create the HTTPTunnel and Client, populate the parameters as shown in getConnectString(), and initiate connect(). Keep in mind that the credentials for the remote machine are set up on the Guacamole side; in getConnectString(), you are only selecting one of the connections defined on the Guacamole server.
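A minimal sketch of that sequence, assuming the default webapp URL layout; the token comes from the /api/tokens call above, and the data source ("mysql") and connection id ("3") are placeholders you'd get from the REST API:
// Parameter names follow getConnectString() in ManagedClient.js (Guacamole 1.2.0).
var tunnel = new Guacamole.HTTPTunnel("https://guacamole.example.com/guacamole/tunnel", true);
var guac = new Guacamole.Client(tunnel);
document.body.appendChild(guac.getDisplay().getElement());
guac.connect(
    "token=" + encodeURIComponent(token) +            // from POST /api/tokens
    "&GUAC_DATA_SOURCE=mysql" +                       // auth provider that owns the connection
    "&GUAC_ID=3" +                                    // connection id from the REST API
    "&GUAC_TYPE=c" +                                  // "c" = connection, "g" = balancing group
    "&GUAC_WIDTH=" + Math.floor(window.innerWidth) +
    "&GUAC_HEIGHT=" + Math.floor(window.innerHeight) +
    "&GUAC_DPI=96" +
    "&GUAC_IMAGE=image/png"
);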
Note that depending on the actual plugin used for authentication, different data may be sent; you can also make your own auth plugin with completely different authentication, or a plugin where the remote machine credentials are sent directly from the JS code. There are several example auth plugins in the extensions directory; the above assumes you are using the default database auth plugin or the file auth plugin.
Related
From the Electron renderer, I am accessing a local GraphQL endpoint served by a Django instance on my computer, which I'd like to do over HTTP, not HTTPS. But Electron's Chromium seems to intercept my fetch request and preemptively return a 307 redirect.
So if my fetch request is a POST to http://local.myapp.com:3000/v1/graphql, then Chromium returns a 307 and forces a redirect to https://local.myapp.com:3000/v1/graphql, which fails because my server is listening for plain HTTP on port 3000, and for my use case I can't set up a local cert for local.myapp.com.
Theoretically the first insecure request should be hitting an nginx docker container listening on port 3000 without any SSL requirement. And nginx is proxying the request to a Hasura container. But I'm not even seeing the requests in the nginx access logs, so I'm pretty sure the request is being intercepted by Chromium.
I believe this StackOverflow comment summarizes well why this is happening: https://stackoverflow.com/a/34213531
Although I don't recall ever returning a Strict-Transport-Security header from my GraphQL endpoint or Django server.
I have tried the following code, without success, to turn off this Chromium behavior within my Electron app:
import { app } from 'electron'
// None of these switches stopped the forced HTTPS redirect:
app.commandLine.appendSwitch('ignore-certificate-errors')
app.commandLine.appendSwitch('allow-insecure-localhost')
app.commandLine.appendSwitch('ignore-urlfetcher-cert-requests')
app.commandLine.appendSwitch('allow-running-insecure-content')
I have also tried setting the fetch options to include {redirect: 'manual'} and {redirect: 'error'}. I can prevent the redirect but that doesn't do me any good because I need to make a successful request to the endpoint to get my data.
I tried replacing the native fetch with electron-fetch (link) and cross-fetch (link) but there seems to be no change in behavior when I swap either of those out.
Edit: Also, making the request to my GraphQL endpoint outside of Electron with the exact same header and body info works fine (via Insomnia).
So I have a couple of questions:
Is there a way to programmatically view/clear the list of HSTS domains that is being used by Chromium within Electron?
Is there a better way to accomplish what I'm trying to do?
I think the issue might be on the server side: many production servers don't allow plain HTTP at all; they drop the transfer and redirect you to HTTPS, and there's a clear reason why they do that.
Imagine you have an app that connects over HTTPS, sending your API key in return for some data. If someone just changed the https:// to http://, the data would travel unencrypted, and no matter what you do with your API key it would be exposed. That's why such servers never allow an HTTP request; they don't accept even a single bit of data.
I can think of a few possibilities:
Chromium is not necessarily the reason for the redirect; your Django instance might be configured for production or with HTTPS listeners. (Django's SECURE_SSL_REDIRECT setting issues exactly this kind of redirect, and SECURE_HSTS_SECONDS makes it send the Strict-Transport-Security header that Chromium then remembers.)
Nginx might be the one doing the redirect (a bit of SSL configuration left over in its config).
Last but not least, you could just generate a cert with OpenSSL for local.myapp.com and use that on your nginx or Django side; certificates are tied to the host name, not the port. You can then trust the certificate so that it works everywhere on your computer.
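A minimal sketch of generating such a cert (file names are placeholders; -addext needs OpenSSL 1.1.1 or newer):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout local.myapp.com.key -out local.myapp.com.crt \
    -subj "/CN=local.myapp.com" \
    -addext "subjectAltName=DNS:local.myapp.com"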
In my Grails app (2.3.11), my login page sends an Ajax request to:
https://myurl/my-app/j_spring_security_check
but spring-security redirects to:
https://myurl:80/my-app/login/ajaxSuccess
This results in a timeout error (because port 80 is added on the URL).
This problem only occurs when my client accesses the application through their traffic manager (BIG-IP); if they access the application directly through the server IP, it works correctly.
Is there any configuration I can do in Grails to fix this problem? I'm not sure if this problem is related to the application or Big-IP.
These are my configs (Config.groovy) related to spring-security plugin:
grails.plugins.springsecurity.successHandler.defaultTargetUrl = '/login/authSucccessExtJs'
grails.plugins.springsecurity.successHandler.alwaysUseDefault = true
grails.plugins.springsecurity.failureHandler.defaultFailureUrl = '/login/authFailExtJs?login_error=1'
grails.plugins.springsecurity.password.algorithm = 'MD5'
The problem is that your application is receiving HTTP traffic because you are offloading SSL at the BIG-IP, so it returns http links to your client. There are a few potential solutions:
1. Configure Grails to set all URLs to https, even though the requests it sees are http (see the sketch below).
2. Insert the header X-Forwarded-Proto: https (if Grails honors it) at the BIG-IP via a local traffic policy or an iRule, also sketched below. (You can test this with curl by inserting the header yourself to see if it helps.)
3. Rewrite https to http URLs on the BIG-IP in response traffic via a stream profile or an iRule. This can be very problematic with AJAX but otherwise works; however, option 1 or 2 would be far more efficient and less maintenance.
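Sketches of options 1 and 2 (the Grails serverURL setting is standard; the iRule and host names are untested placeholders):
// Option 1, in Config.groovy: make Grails build absolute links as https
grails.serverURL = 'https://myurl/my-app'
# Option 2, BIG-IP iRule: tell the app the original scheme was https
when HTTP_REQUEST {
    HTTP::header replace X-Forwarded-Proto "https"
}
To test option 2 with curl before touching the BIG-IP, send the header yourself against the app server directly:
curl -H 'X-Forwarded-Proto: https' --data 'j_username=...&j_password=...' 'http://<app-server-ip>/my-app/j_spring_security_check'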
I have Swagger working behind HAProxy. I use the built-in Swagger support in WebSphere Liberty Profile (the apiDiscovery feature):
Browser -swagger.mydomain.com-> haproxy -swagger.intranet-> IBM Liberty server with Swagger
The first Swagger page is generated and shown correctly in the browser. But since the Liberty server gets the requests from HAProxy, not from my browser, and gets them addressed to the intranet name/IP (swagger.intranet), the Swagger code for executing GETs, POSTs, etc. is generated with that intranet host name. So when I try any of the methods, they won't work, as they reference this internal host name from a browser outside that zone.
Can I configure HAProxy with some header to inform Liberty that it should generate the code with the original server name used in the request (swagger.mydomain.com)? That is the one to be used in the generated HTML/JavaScript code.
Thanks.
Liberty trusts the Host: header and uses it to assemble self-referential links.
Where you define the backend, try setting the Host header to what the client will be using, e.g. http-request set-header Host swagger.mydomain.com, or removing a similar stanza if you are already setting it to swagger.intranet.
(Sorry, I'm not an HAProxy user; this is based on searching for the HAProxy equivalent of ProxyPreserveHost.)
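A sketch of where that would sit, with placeholder server names and port:
backend liberty
    # Present the public host name so Liberty's self-referential links
    # use swagger.mydomain.com instead of swagger.intranet:
    http-request set-header Host swagger.mydomain.com
    server liberty1 swagger.intranet:9080 check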
Hi, I have a website that I will be developing in the future.
Looking at the current website, I noticed something weird that I have never seen before; I also Googled it and found nothing.
If you go to: http://www.smartrainer.com.au you get the normal site
But, if you go to: https://www.smartrainer.com.au you get redirected to another website and are also given an SSL warning beforehand (in Chrome)
The site is hosted on a UNIX / PHP server and the .htaccess file currently has nothing that would suggest that it's redirecting to this other website.
Any help or insight would be appreciated, because I've never heard of or seen this before. The client also has no idea why it would be redirecting to that company, which we've never heard of.
Thanks!
It sounds like you're using a shared hosting server.
In plain HTTP, the server can know which host the client is requesting using the Host header in the request (this is based on the URL). Apache Httpd supports this with what it calls Name-based virtual hosts.
The HTTPS configuration is separate from the HTTP configuration in Apache Httpd (and presumably a number of other servers). Having virtual hosts (typically on a shared host) for the HTTP configuration doesn't mean that the same configuration is replicated for HTTPS.
HTTPS presents another problem: choosing which certificate to send before being able to see the Host header. Indeed, the server needs to send the client a certificate with the correct name during the SSL/TLS handshake, which happens before any HTTP traffic is sent (so before the Host header can be read). To overcome this problem, some hosts will set up a certificate valid for multiple host names (typically multiple Subject Alternative Names, or sometimes wildcards), others will use Server Name Indication (which isn't supported by all clients).
To get your server to host your site for HTTPS, you'd need:
(a) the certificate it serves to be valid for your host name (otherwise, there will be a warning message);
(b) the virtual hosts (or equivalent) it serves to be configured for your host too.
In your case it seems that (a) your server is serving a single certificate that is not valid for your host and (b) your host isn't configured for HTTPS anyway, since you're falling back to what's probably the default host.
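For illustration, a name-based HTTPS virtual host in Apache Httpd might look like this (paths are placeholders; with SNI, several such blocks can share one IP address):
<VirtualHost *:443>
    ServerName www.smartrainer.com.au
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/smartrainer.crt
    SSLCertificateKeyFile /etc/ssl/private/smartrainer.key
    DocumentRoot /var/www/smartrainer
</VirtualHost>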
You may work around this issue by redirecting the HTTPS URL to the HTTP URL from your .htaccess. This error might be because of shared hosting. If you cannot solve the issue from your .htaccess, you may also contact your hosting provider about it.
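A sketch of such a redirect in .htaccess (mod_rewrite syntax; note the browser's certificate warning will still appear, because the TLS handshake happens before any redirect can be sent):
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]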
I'm using HAProxy to load balance an API which uses OAuth. As part of OAuth there is a hash that incorporates the requested URL. In the API code, the URL as seen by the server behind the LB contains the port, so the hashes don't match: the hash the client sent does not contain the port, while the server-side one does.
Is there a way to send the requested host in the X-Forwarded-Host header via an option, the way option forwardfor does for X-Forwarded-For? Or do I need to add the header via reqadd in the backend? And if so, is there a way to get the host without having to hard-code it?
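Something like this in the backend is what I'm imagining; set-header with a fetch expression should avoid hard-coding the host, but I haven't verified it:
backend api
    # Copy the Host header the client sent into X-Forwarded-Host:
    http-request set-header X-Forwarded-Host %[req.hdr(host)]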