GenericOAuthenticator authentication fails following OAuth2 authorisation and produces an uncontextualised, unhandled HTTP 500 error.
Context
The hub and the authentication server run in separate Docker containers. Both are served through Nginx, which itself lives in a container independent of the two servers.
The hub runs on a subdomain, and all proxying is handled by Nginx.
Authorisation is handled through Spring OAuth.
The authentication server has been tested separately and is fully operational.
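For reference, the Nginx server block that proxies the hub's subdomain looks roughly like this (a reconstructed sketch rather than my exact config; the upstream container name "jupyterhub" and port 8000 are assumptions):
server {
    listen 80;
    server_name lab.localhost;

    location / {
        proxy_pass http://jupyterhub:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket support for the single-user servers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}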
Hub server logs
[JupyterHub] [INFO] [302 GET /hub/oauth_login?next= -> http://localhost/oauth2/authorize/?redirect_uri=http%3A%2F%2Flab.localhost%2Fhub%2Foauth_callback&client_id=[secret]&response_type=code&state=[secret] (#XXX.XXX.XXX.XXX) 1.87ms]
[JupyterHub] [ERROR] [Uncaught exception GET /hub/oauth_callback?code=[secret]&state=[secret] (XXX.XXX.XXX.XXX)
HTTPServerRequest(protocol='http', host='lab.localhost', method='GET', uri='/hub/oauth_callback?code=[secret]&state=[secret]', version='HTTP/1.1', remote_ip='XXX.XXX.XXX.XXX')]
Traceback (most recent call last):
File "[...]/tornado/web.py", line 1543, in _execute
result = yield result
File "[...]/oauthenticator/oauth2.py", line 182, in get
user = yield self.login_user()
File "[...]/jupyterhub/handlers/base.py", line 473, in login_user
authenticated = await self.authenticate(data)
File "[...]/jupyterhub/auth.py", line 257, in get_authenticated_user
authenticated = await maybe_future(self.authenticate(handler, data))
File "[...]/oauthenticator/generic.py", line 116, in authenticate
resp = yield http_client.fetch(req)
tornado.curl_httpclient.CurlError: HTTP 599: Failed to connect to localhost port 80: Connection refused
[JupyterHub] [DEBUG] [No template for 500]
[JupyterHub] [ERROR] [{
"X-Forwarded-Host": "lab.localhost",
"X-Forwarded-Proto": "http",
"X-Forwarded-Port": "80",
"Cookie": "oauthenticator-state=[secret]:oauthenticator-state|120:[secret]"",
"Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
"Accept-Encoding": "gzip, deflate, br",
"Referer": "http://localhost/oauth2/authorize/?redirect_uri=http%3A%2F%2Flab.localhost%2Fhub%2Foauth_callback&client_id=[secret]&response_type=code&state=[secret]",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36 OPR/55.0.2994.37",
"Upgrade-Insecure-Requests": "1",
"Cache-Control": "max-age=0",
"Connection": "close",
"X-Nginx-Proxy": "true",
"X-Forwarded-For": "XXX.XXX.XXX.XXX,::XXXX:XXXX.XXXX.XXX.XXX",
"X-Real-Ip": "XXX.XXX.XXX.XXX.1",
"Host": "lab.localhost"
}]
[JupyterHub] [ERROR] [500 GET /hub/oauth_callback?code=[secret]&state=[secret] (#XXX.XXX.XXX.XXX)
The hub server never gets past authorisation, so it never reaches the token and user-data stage. The error occurs while the hub is handling the callback from the authentication server, not while requesting the token; the failing GET is the one that immediately follows authorisation on the auth server. That is what makes this so bizarre.
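The HTTP 599 in the traceback shows the hub itself trying to reach http://localhost:80 from inside its container, where nothing is listening. For context, here is a minimal sketch of the relevant jupyterhub_config.py, assuming the auth server's container is reachable from the hub container under a hostname such as auth-server (the hostnames, ports and paths below are illustrative assumptions, not my actual values; depending on the oauthenticator version, the URLs may instead be supplied via the OAUTH2_AUTHORIZE_URL and OAUTH2_TOKEN_URL environment variables):
# jupyterhub_config.py (sketch)
from oauthenticator.generic import GenericOAuthenticator

c.JupyterHub.authenticator_class = GenericOAuthenticator

# Browser-facing URL: the user's browser follows this redirect, so a host that
# only Nginx resolves is fine here.
c.GenericOAuthenticator.authorize_url = 'http://localhost/oauth2/authorize/'

# Server-to-server URLs: fetched by the hub process inside its container, so they
# must use a hostname the hub container can resolve, not localhost.
c.GenericOAuthenticator.token_url = 'http://auth-server:8080/oauth2/token'        # assumed
c.GenericOAuthenticator.userdata_url = 'http://auth-server:8080/oauth2/userinfo'  # assumed

c.GenericOAuthenticator.client_id = 'my-client-id'          # assumed
c.GenericOAuthenticator.client_secret = 'my-client-secret'  # assumed
c.GenericOAuthenticator.oauth_callback_url = 'http://lab.localhost/hub/oauth_callback'
The point of the sketch is the split: the authorize URL is only ever followed by the browser, while the token and userdata URLs are fetched by the hub process itself, which is where a localhost address fails with connection refused.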
Related
I have a Deluge client (in a Docker container - that's likely irrelevant).
I want to be able to connect to the daemon from the outside world while keeping it behind a reverse proxy.
I don't necessarily need TLS, but I suspect HTTP/2 may require it.
What works:
Connecting to the Deluge RPC locally on the network with the Deluge desktop, Android and WebUI clients works well.
Sending requests to the nginx server is OK (I can see log entries as I hit nginx).
All the surrounding networking (firewalls, port forwarding, DNS) is fine.
What doesn't work:
The Deluge client can't connect to the HTTP server.
nginx config:
server {
    server_name deluge.example.com;
    listen 58850;

    location / {
        proxy_pass grpc://localhost:58846;
    }

    ssl_certificate /etc/ssl/nginx/example.com.pem;
    ssl_certificate_key /etc/ssl/nginx/example.com.key;
    proxy_request_buffering off;
    gzip off;
    charset utf-8;
    error_log /var/log/nginx/nginx_deluge.log debug;
}
Major edit:
As it turns out, I had assumed JSON-RPC and gRPC were more similar than just the "RPC" in the name. Hence my "original" issue, "nginx deluge rpc doesn't work", is no longer relevant.
Unfortunately, the "same" issue persists: I still can't connect through the proxy even with a regular HTTP proxy block, although I can make HTTP requests locally.
I will post an update or even an answer should I figure it out in the next few days...
When I try to connect with the Deluge client, I get this error message in the log file:
2022/06/14 16:59:55 [info] 1332115#1332115: *7 client sent invalid method while reading client request line, client: <REDACTED IPv4>, server: deluge.example.com, request: " Fu�Uq���U����a(wU=��_`. a��¹�(���O����f�"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http finalize request: 400, "?" a:1, c:1
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 event timer del: 17: 243303738
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http special response: 400, "?"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http set discard body
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 HTTP/1.1 400 Bad Request
Server: nginx/1.22.0
Date: Tue, 14 Jun 2022 16:59:55 GMT
Content-Type: text/html
Content-Length: 157
Connection: close
When I change the line listen 58850; to listen 58850 http2;, as I probably should, I get the following error instead (log verbosity set to "debug"):
2022/06/14 15:04:00 [info] 1007882#1007882: *3654 client sent invalid method while reading
client request line, client: <REDACTED IPv4>,
server: deluge.example.com, request: "x�;x��;%157?O/3/-�#�D��"
The gibberish is seemingly identical when connecting from a different device on a different network. Once it was Dx�;x��;%157?O/3/-�#�E� (there is a D as the first character now), but all other attempts are again without the leading D.
or this error (log verbosity set to "info"):
2022/06/14 17:09:13 [info] 1348282#1348282: *14 invalid connection preface while processing HTTP/2 connection, client: <REDACTED IPv4>, server: 0.0.0.0:58850
I tried decoding the gibberish with various encodings, hoping it was just a badly encoded version of a more useful error message, or at least a lead towards a solution.
I also looked through the first two pages of Google results, hoping the error messages would point me to a solution someone else had already found for this problem.
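One thing the gibberish request lines suggest is that the Deluge client is not speaking HTTP at all to nginx's http listener (the daemon protocol is its own binary protocol over TLS), so the next thing I plan to try is a plain TCP pass-through with nginx's stream module. A sketch, assuming nginx is built with the stream module and deluged listens on 58846; this goes at the top level of nginx.conf, outside the http block:
stream {
    server {
        listen 58850;
        # raw TCP proxying; deluged does its own TLS on the daemon protocol
        proxy_pass 127.0.0.1:58846;
    }
}
With a stream block there is no HTTP parsing involved, which should at least avoid the "invalid method" errors above.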
environment:
Docker version 20.10.17, build 100c70180f
nginx version: nginx/1.22.0
deluged 2.0.5
libtorrent: 2.0.6.0
I'm loading JupyterHub within an iframe. Both the parent page and JupyterHub use the same authentication service (Keycloak). I first log in to my parent page with my username (rabraham), then I open an iframe, start JupyterHub and log in to it. The login succeeds, but it fails at the next step, apparently while spawning with DockerSpawner, giving:
500 : Internal Server Error
Error in Authenticator.pre_spawn_start: APIError 400 Client Error: Bad Request ("invalid tag format")
You can try restarting your server from the home page.
Here are my logs if that helps:
[I 2020-02-07 19:41:50.050 JupyterHub proxy:320] Checking routes
[I 2020-02-07 19:44:04.222 JupyterHub log:174] 302 GET / -> /hub/ (#::ffff:192.0.161.155) 1.87ms
[I 2020-02-07 19:44:04.305 JupyterHub log:174] 302 GET /hub/ -> /hub/login (#::ffff:192.0.161.155) 1.30ms
[I 2020-02-07 19:44:04.387 JupyterHub log:174] 200 GET /hub/login (#::ffff:192.0.161.155) 3.06ms
[I 2020-02-07 19:44:05.988 JupyterHub oauth2:103] OAuth redirect: 'http://35.225.100.133:30100/hub/oauth_callback'
[I 2020-02-07 19:44:05.990 JupyterHub log:174] 302 GET /hub/oauth_login?next= -> http://35.225.100.133:30080/auth/realms/master/protocol/openid-connect/auth?response_type=code&redirect_uri=http%3A%2F%2F35.225.100.133%3A30100%2Fhub%2Foauth_callback&client_id=jupyterhub&state=[secret] (#::ffff:192.0.161.155) 2.57ms
[I 2020-02-07 19:44:06.206 JupyterHub base:707] User logged in: rabraham
[I 2020-02-07 19:44:06.214 JupyterHub log:174] 302 GET /hub/oauth_callback?state=[secret]&session_state=[secret]&code=[secret] -> /hub/spawn (#::ffff:192.0.161.155) 84.66ms
[I 2020-02-07 19:44:06.440 JupyterHub dockerspawner:930] pulling image localhost:5000/fifteenrock/fifteenrock-jupyterhub:0.1
[E 2020-02-07 19:44:06.446 JupyterHub user:640] Unhandled error starting rabraham's server: 400 Client Error: Bad Request ("invalid tag format")
[I 2020-02-07 19:44:06.449 JupyterHub dockerspawner:784] Container 'jupyter-rabraham' is gone
[W 2020-02-07 19:44:06.450 JupyterHub dockerspawner:757] Container not found: jupyter-rabraham
[W 2020-02-07 19:44:06.467 JupyterHub web:1782] 500 GET /hub/spawn (::ffff:192.0.161.155): Error in Authenticator.pre_spawn_start: APIError 400 Client Error: Bad Request ("invalid tag format")
[E 2020-02-07 19:44:06.497 JupyterHub log:166] {
"X-Forwarded-Host": "35.225.100.133:30100",
"X-Forwarded-Proto": "http",
"X-Forwarded-Port": "30100",
"X-Forwarded-For": "::ffff:192.0.161.155",
"Cookie": "jupyterhub-hub-login=[secret]; session=[secret]; oidc_id_token=[secret]; jupyterhub-session-id=[secret]",
"Accept-Language": "en-CA,en-GB;q=0.9,en-US;q=0.8,en;q=0.7",
"Accept-Encoding": "gzip, deflate",
"Referer": "http://35.225.100.133:30100/hub/login",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36",
"Upgrade-Insecure-Requests": "1",
"Connection": "close",
"Host": "35.225.100.133:30100"
}
[E 2020-02-07 19:44:06.497 JupyterHub log:174] 500 GET /hub/spawn (rabraham#::ffff:192.0.161.155) 199.18ms
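For reference, the spawner is configured roughly like this in jupyterhub_config.py (a sketch; the image name is the one that appears in the logs above, the rest is generic DockerSpawner boilerplate rather than my exact settings):
# jupyterhub_config.py (sketch)
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
# The image lives in a private registry on localhost:5000, as in the logs above.
c.DockerSpawner.image = 'localhost:5000/fifteenrock/fifteenrock-jupyterhub:0.1'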
The error comes from your local Docker registry, so you should probably check the logs of the private registry at localhost:5000.
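If it helps to narrow things down, one way to reproduce the pull outside of JupyterHub is the Docker SDK for Python, which is what DockerSpawner uses under the hood. A sketch using the image reference from the log (requires the docker package, pip install docker):
# reproduce_pull.py (sketch)
import docker

client = docker.from_env()
# If this raises the same APIError ("invalid tag format"), the problem is on the
# Docker/registry side rather than in JupyterHub's handling of it.
image = client.images.pull("localhost:5000/fifteenrock/fifteenrock-jupyterhub", tag="0.1")
print(image.tags)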
EDIT: As shown at the end, I found that the upgrade headers were actually being created.
I'm working from the action-cable-example codebase, trying to build a WebSocket app. The "Chatty" application, which depends on the browser client provided in the app, works fine. However, I am not going to use that client, as I need an external IoT connection. So I am trying to expose the ws/wss WebSocket protocols to external non-browser devices, and my mount in routes.rb is:
mount ActionCable.server => '/cable'
I've tried several external clients, such as the Chrome Simple WebSocket Client extension and gem websocket-client-simple using sample/client.rb. In both cases, ActionCable returns no upgrade headers. The Chrome Extension complains as follows:
WebSocket connection to 'ws://127.0.0.1:3000/cable' failed: Error during WebSocket handshake: 'Upgrade' header is missing
The actual handshake shows that to be true, as in:
**General**
Request URL:ws://127.0.0.1:3000/cable
Request Method:GET
Status Code:101 Switching Protocols
**Response Headers**
Connection:keep-alive
Server:thin
**Request Headers**
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:Upgrade
Cookie:PPA_ID=<redacted>
DNT:1
Host:127.0.0.1:3000
Origin:chrome-extension://pfdhoblngboilpfeibdedpjgfnlcodoo
Pragma:no-cache
Sec-WebSocket-Extensions:permessage-deflate; client_max_window_bits
Sec-WebSocket-Key:1vokmzewcWf9e2RwMth0Lw==
Sec-WebSocket-Version:13
Upgrade:websocket
User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36
Per the standard (RFC 6455), the response headers should be:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat
The Sec-WebSocket-Accept header is particularly important, as it is calculated from the request's Sec-WebSocket-Key to confirm that ws/wss is understood and that the switch of protocols should occur.
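For completeness, the accept value is just a Base64-encoded SHA-1 of the client's key concatenated with a fixed GUID defined in RFC 6455; a quick Ruby check using the Sec-WebSocket-Key from the request above:
require 'digest/sha1'
require 'base64'

GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11' # fixed GUID from RFC 6455
key  = '1vokmzewcWf9e2RwMth0Lw=='             # Sec-WebSocket-Key from the request above

# A compliant server must return exactly this value in Sec-WebSocket-Accept.
puts Base64.strict_encode64(Digest::SHA1.digest(key + GUID))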
During all of this, the server seems perfectly happy, until the client gives up and closes the connection:
Started GET "/cable" for 127.0.0.1 at 2016-06-16 19:19:17 -0400
ActiveRecord::SchemaMigration Load (1.0ms) SELECT "schema_migrations".* FROM "schema_migrations"
Started GET "/cable/" [WebSocket] for 127.0.0.1 at 2016-06-16 19:19:17 -0400
Successfully upgraded to WebSocket (REQUEST_METHOD: GET, HTTP_CONNECTION: Upgrade, HTTP_UPGRADE: websocket)
Finished "/cable/" [WebSocket] for 127.0.0.1 at 2016-06-16 19:19:18 -0400
Looking at websocket-client-simple, I inspected the WebSocket object returned to client.rb, and it also shows empty headers. Here is the code, followed by the inspected object:
url = ARGV.shift || 'ws://localhost:3000/cable'
ws = WebSocket::Client::Simple.connect url
#<WebSocket::Client::Simple::Client:0x2cdaf68
#url="ws://localhost:3000/cable",
#socket=#<TCPSocket:fd 3>,
#handshake=<WebSocket::Handshake::Client:0x013231c8
#url="ws://localhost:3000/cable",
#headers={},
#state=:new,
#handler=#<WebSocket::Handshake::Handler::Client11:0x2e88400
#handshake=<WebSocket::Handshake::Client:0x013231c8
#url="ws://localhost:3000/cable",
#headers={},
#state=:new,
#handler=#<WebSocket::Handshake::Handler::Client11:0x2e88400 ...>,
#data="",
#secure=false,
#host="localhost",
#port=3000,
#path="/cable",
#query=nil,
#version=13>,
#key="KUJ0/C0rvoCMruW8STp0Sw==">,
#data="",
#secure=false,
#host="localhost",
#port=3000,
#path="/cable",
#query=nil,
#version=13>,
#handshaked=false,
#pipe_broken=false,
#closed=false,
#__events=[{:type=>:__close, :listener=>#<Proc:0x2d10ae8#D:/Bitnami/rubystack-2.2.5-3/projects/websocket-client-simple/lib/websocket-client-simple/client.rb:37>, :params=>{:once=>true}, :id=>0}],
#thread=#<Thread:0x2d10a70#D:/Bitnami/rubystack-2.2.5-3/projects/websocket-client-simple/lib/websocket-client-simple/client.rb:42 sleep>
>;
In this output, I noted that the instance variable @handshaked is false. That may be relevant, but I haven't yet found where it is set or referenced within the code.
UPDATE:
Found that WebSocket::Driver's start method actually creates the upgrade headers, and @socket.write(response) should send them out through EventMachine.
Code:
def start
  return false unless @ready_state == 0
  response = handshake_response
  return false unless response
  @socket.write(response)
  open unless @stage == -1
  true
end
handshake_response is:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: iJVnsG1ApNMFzABXGDSHN1V0i/s=
The problem was that I was using the Thin server in development. It would run, but it was transmitting its own response headers during processing, such as these:
Response Headers
Connection:keep-alive
Server:thin
ActionCable was actually sending the appropriate upgrade headers, but only after Thin had already sent out its own headers, so the client didn't recognize them.
After switching back to Puma, I receive the expected headers:
Response Headers
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: XJOmp1e2IwQIMk5n0JV/RZZSIhs=
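For anyone else who lands here: the only change needed on my side was making sure Puma, not Thin, is the server booted in development (newer Rails apps include Puma by default). A sketch of the Gemfile change, nothing more exotic:
# Gemfile: remove gem 'thin' (if present) and make sure Puma is available
gem 'puma'
Then start the app with rails server again (or rails server puma to be explicit); with Thin out of the way, the upgrade headers arrive as shown above.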
On a fresh install of neo4j-community-2.2.4 I get the message "Invalid username or password." when submitting the login form at localhost:7474/browser with the default username neo4j and password neo4j.
I did set org.neo4j.server.webserver.address=0.0.0.0 and dbms.security.auth_enabled=true, followed by a server stop and start and a browser shift-reload.
I then changed the property org.neo4j.server.webserver.address=127.0.0.1 to match my /etc/hosts and tried on 127.0.0.1:7474/browser but got the same message.
Here is the browser console output:
Remote Address:127.0.0.1:7474
Request URL:http://localhost:7474/db/data/
Request Method:GET
Status Code:401 Unauthorized
Request Headers
Accept:application/json, text/plain, */*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,fr;q=0.6,ru;q=0.4,es;q=0.2,sv;q=0.2,nb;q=0.2,et;q=0.2
Authorization:Basic bmVvNGo6bmVvNGo=
Cache-Control:no-cache
Connection:keep-alive
Cookie:languageCodeAdmin=en; PHPSESSID=vcbvfkvj3shajhlh5u2pue9i70; admin_template_phone_client=0; admin_template_touch_client=0; admin_template_model=1
Host:localhost:7474
If-Modified-Since:Wed, 11 Dec 2013 08:00:00 GMT
Pragma:no-cache
Referer:http://localhost:7474/browser/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/37.0.2062.120 Chrome/37.0.2062.120 Safari/537.36
X-stream:true
Response Headers
Content-Length:135
Content-Type:application/json; charset=UTF-8
Date:Sat, 19 Sep 2015 09:14:03 GMT
Server:Jetty(9.2.4.v20141103)
WWW-Authenticate:None
{
"errors" : [ {
"message" : "Invalid username or password.",
"code" : "Neo.ClientError.Security.AuthorizationFailed"
} ]
}
So I then tried to change the default password.
To do that, I first disabled authentication by setting dbms.security.auth_enabled=false and stopping and starting the server.
I then tried the following curl request:
curl -H "Accept:application/json; charset=UTF-8" -H "Content-Type: application/json" "http://localhost:7474/user/neo4j/password" -X POST -d "{ \"password\" : \"neo4j\" }" -i
But it gives me the response:
HTTP/1.1 404 Not Found
Date: Sat, 19 Sep 2015 09:25:38 GMT
Access-Control-Allow-Origin: *
Content-Length: 0
Server: Jetty(9.2.4.v20141103)
Since you get an authentication error on the browser request, which uses the default credentials neo4j:neo4j, your database already has a different password configured. To reset it to the default, stop Neo4j, delete data/dbms/auth and start it again. Then you can use the default password.
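In shell terms, that is roughly (a sketch; adjust the path to your install, the auth file lives under the Neo4j home directory):
cd /path/to/neo4j-community-2.2.4   # your Neo4j home directory
bin/neo4j stop
rm data/dbms/auth                   # removes the stored credentials
bin/neo4j start
# now log in at http://localhost:7474/browser with neo4j / neo4j again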
In my case the Neo4j Windows service was not installed. I just followed the steps below:
neo4j.bat install-service
neo4j.bat status
neo4j.bat start
It worked, cheers.
My app generates some image data on the fly and sends it back to the browser with send_data some_huge_blob, :type => 'image/png'. This works well enough in development mode, but in production, with nginx/Passenger in the mix, it appears that Passenger sometimes just crashes. Here is the debug output from my nginx log:
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/Pool.h:1162 time=2011-07-25 23:15:14.965 ]: Exception occurred while connecting to checked out process 1428: Cannot connect to Unix socket '/tmp/passenger.1.0.589/generation-0/backends/ruby.kJRjXYuZteKoogZIufN8a2cDPdpbIlYmIr1hh3G9UV7GhKDB4pqZ5y0jR': Connection refused (111)
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/Pool.h:685 time=2011-07-25 23:15:14.965 ]: Detaching process 1428
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/../Process.h:138 time=2011-07-25 23:15:14.969 ]: Application process 1428 (0x2676ee0): destroyed.
[ pid=1405 thr=70178806733240 file=abstract_request_handler.rb:466 time=2011-07-25 23:15:14.982 ]: Accepting new request on main socket
2011/07/25 23:15:16 [error] 642#0: *96 upstream prematurely closed connection while reading response header from upstream, client: 173.8.216.57, server: app.somedomain.com, request: "GET /projects/4e2dee4c106a821bf2000008/revisions/1/assets/Layout2.psd/preview HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "app.somedomain.com"
Note that there is nothing in my production.log file that indicates the request even makes it to the app!
Any ideas? Or ideas as to how to debug this further? The connection refused bit is interesting...
For what it's worth, this is an Ubuntu image on a micro instance in AWS.
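For completeness, the controller action looks roughly like this (a sketch; the controller, action, helper and parameter names are made up, the send_data call is as described above):
# app/controllers/previews_controller.rb (sketch)
class PreviewsController < ApplicationController
  def show
    # Builds a potentially very large PNG entirely in memory.
    blob = generate_preview_png(params[:id])   # hypothetical helper
    send_data blob, :type => 'image/png', :disposition => 'inline'
  end
end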