Deluge client not passing through nginx grpc - docker

I have a Deluge client (in a docker container - that's likely irrelevant).
I want to be able to connect to the daemon from the outside world while having it behind a reverse proxy.
I don't necessarily need TLS, but I suspect http2 may require it.
What works:
Connecting locally on the network to the Deluge RPC with the Deluge desktop, Android, and WebUI clients works well.
Sending requests to the nginx server is OK (I can see log entries as I hit nginx).
All the surrounding networking (firewalls, port forwarding, DNS) is fine.
What doesn't work:
The Deluge client can't connect to the HTTP server.
nginx config:
server {
    server_name deluge.example.com;
    listen 58850;

    location / {
        proxy_pass grpc://localhost:58846;
    }

    ssl_certificate /etc/ssl/nginx/example.com.pem;
    ssl_certificate_key /etc/ssl/nginx/example.com.key;

    proxy_request_buffering off;
    gzip off;
    charset utf-8;
    error_log /var/log/nginx/nginx_deluge.log debug;
}
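(As an aside, nginx proxies gRPC with the dedicated grpc_pass directive from ngx_http_grpc_module, not with a grpc:// scheme on proxy_pass. A minimal example, kept only for reference since, as the edit below explains, Deluge's RPC turned out not to be gRPC at all:)

    location / {
        # gRPC proxying uses grpc_pass, not proxy_pass
        grpc_pass grpc://localhost:58846;
    }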
Major edit:
As it turns out, I believed JSON-RPC and gRPC were more similar than just the "RPC" in their names. Hence my "original" issue, "nginx deluge rpc doesn't work", is no longer relevant.
Unfortunately, the "same" issue still persists: I still can't connect through the proxy even when using a regular HTTP proxy, while I can make HTTP requests locally.
I will surely post an update, or even an answer, should I figure it out in the next few days...
When I try to connect with the Deluge client, I get this error message in the log file:
2022/06/14 16:59:55 [info] 1332115#1332115: *7 client sent invalid method while reading client request line, client: <REDACTED IPv4>, server: deluge.example.com, request: " Fu�Uq���U����a(wU=��_`. a��¹�(���O����f�"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http finalize request: 400, "?" a:1, c:1
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 event timer del: 17: 243303738
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http special response: 400, "?"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http set discard body
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 HTTP/1.1 400 Bad Request
Server: nginx/1.22.0
Date: Tue, 14 Jun 2022 16:59:55 GMT
Content-Type: text/html
Content-Length: 157
Connection: close
When I change the line listen 58850; to listen 58850 http2;, as I probably should, I get the following error (log verbosity set to "debug"):
2022/06/14 15:04:00 [info] 1007882#1007882: *3654 client sent invalid method while reading
client request line, client: <REDACTED IPv4>,
server: deluge.example.com, request: "x�;x��;%157?O/3/-�#�D��"
The gibberish there is seemingly identical when trying to connect from a different device on a different network. Once it was Dx�;x��;%157?O/3/-�#�E� (note the leading D), but all other attempts again lack the leading D.
Or this error (log verbosity set to "info"):
2022/06/14 17:09:13 [info] 1348282#1348282: *14 invalid connection preface while processing HTTP/2 connection, client: <REDACTED IPv4>, server: 0.0.0.0:58850
I tried decoding the gibberish with various encodings, hoping it was just a badly encoded but more meaningful error message, or at least a lead to a solution.
I looked through the first two pages of Google results, hoping the error messages would point me to a solution someone else had found for the same problem.
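For what it's worth, the gibberish in those request lines is consistent with the client speaking something other than plain HTTP at nginx's HTTP port, e.g. a raw TLS handshake: Deluge's daemon RPC is its own TLS-wrapped protocol, not HTTP, so an http-level proxy_pass can never parse it. A hedged, untested sketch of what might work instead, passing the raw TCP bytes through with nginx's stream module (same ports as above; this would replace the http server block on port 58850):

    stream {
        server {
            listen 58850;
            # forward the raw TCP/TLS stream straight to deluged,
            # letting the Deluge client and daemon handle their own protocol
            proxy_pass localhost:58846;
        }
    }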
environment:
Docker version 20.10.17, build 100c70180f
nginx version: nginx/1.22.0
deluged 2.0.5
libtorrent: 2.0.6.0

Related

Artifactory Docker 404 after upgrade to 7.4.1

After an Artifactory upgrade to 7.4.1 from 6.10.4, I've made the necessary port changes and the UI works fine, but I'm seeing the following in the artifactory-service log when attempting to use docker login via the subdomain method:
Request /v2/ should be a repo request and does not match any repo key
The docker login command prompts for authentication but then returns:
Error response from daemon: login attempt to http://<local-docker-repo>.<artifactory-url>.com/v2/ failed with status: 404 Not Found
Artifactory is running in a Kubernetes cluster behind an nginx ingress controller, which has an ingress set up specifically to serve https://<local-docker-repo>.<artifactory-url>.com via the same backend as the Artifactory UI. It seems like some URL rewrite functionality is not working; I'm just not sure how I've misconfigured it, as I had no problems in the previous version.
curl results are as follows:
curl -i -L -k http://docker-local.<artifactory-url>.com/v2/
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.9
Date: Mon, 21 Sep 2020 00:25:32 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://docker-local.<artifactory-url>.com/v2/
X-JFrog-Override-Base-Url: ://docker-local.<artifactory-url>.com:80
X-Forwarded-Port: 80
Host: docker-local.artifactory.<artifactory-url>.com
X-Forwarded-For: 10.60.1.1
HTTP/2 401
server: nginx/1.15.9
date: Mon, 21 Sep 2020 00:25:32 GMT
content-type: application/json;charset=ISO-8859-1
content-length: 91
www-authenticate: Basic realm="Artifactory Realm"
x-artifactory-id: ea0c76c54c1ef5de:45761df0:174ad9a6887:-8000
x-artifactory-node-id: artifactory-0
x-jfrog-override-base-url: ://docker-local.<artifactory-url>.com:443
x-forwarded-port: 443
host: docker-local.<artifactory-url>.com
x-forwarded-for: 10.60.x.x
strict-transport-security: max-age=15724800; includeSubDomains
{
  "errors" : [ {
    "status" : 401,
    "message" : "Authentication is required"
  } ]
}
Any help would be greatly appreciated!
Edit: As a workaround I've enabled Repository Path as the Docker access method, which works fine -- still not sure where the subdomain method is going wrong.
The issue was that the $repo variable in the nginx rewrite rules provided by Artifactory was not getting populated for some reason. Since we only have a single registry used via the subdomain method, I updated the rewrite rule to hard-code the repo name, which resolved the issue.
To illustrate:
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
was changed to:
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/docker-local/$1/$2;
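For anyone hitting the same thing: in the Artifactory-provided configs, $repo is normally populated by a regex capture on the server name, which is presumably where the value was getting lost here. A hypothetical illustration of that mechanism (the domain is a placeholder):

    server {
        listen 443 ssl;
        # named capture: a request to docker-local.example.com sets $repo = "docker-local"
        server_name ~^(?<repo>.+)\.example\.com$;

        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    }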

Is it possible to use a self-signed cert with an EC2 instance that requires a client cert from API Gateway?

Here's my situation:
I'm using Elastic Beanstalk to spin up a single EC2 instance without an ELB. I want to have the instance only accessible through the API Gateway. So, I went the route of using client-side certificates for authentication, like what's described here.
My EC2 instance has Nginx serving a Rails application. I generated a self-signed certificate on my machine and configured Nginx to use that to serve stuff over https.
Everything seems fine, but when I try to invoke my proxy endpoint from the API Gateway console, I get a 500 error like below:
...
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request URI: https://xxxxxxxxx.xxxxxxxxx.us-east-1.elasticbeanstalk.com/health
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request headers: {x-amzn-apigateway-api-id=xxxxxxxxx, User-Agent=AmazonAPIGateway_xxxxxxxx, Accept-Encoding=identity}
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request body after transformations:
Thu Sep 14 02:27:05 UTC 2017 : Sending request to https://xxxxxxxxx.xxxxxxxx.us-east-1.elasticbeanstalk.com/health
Thu Sep 14 02:27:05 UTC 2017 : Execution failed due to configuration error: General SSLEngine problem
Thu Sep 14 02:27:05 UTC 2017 : Method completed with status: 500
I'm thinking that it has something to do with the fact that I'm using a self-signed certificate on the backend. But do I really have to purchase a legitimate certificate in order to complete my setup? Are there any other solutions that would allow my EC2 instance to accept requests only through the API Gateway?
I looked at the Lambda method that is described here, but I didn't want to add any more complexity or latency to the requests.
Here's my Nginx configuration for completeness:
server {
    listen 443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_prefer_server_ciphers on;

    ssl_client_certificate /etc/pki/tls/certs/api_gateway.cer;
    ssl_verify_client on;

    if ($ssl_protocol = "") {
        return 444;
    }
}
See my answer here: AWS API Gateway - Use Client-Side SSL Certificates. Not sure what the incompatibility with NGINX is supposed to be; I managed to create a PoC and validate the client-SSL authentication behavior.
It appears, at the time of this writing, that API Gateway has a known incompatibility with NGINX around client certificates.
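As a quick sanity check independent of API Gateway, an openssl probe can confirm whether Nginx is actually demanding a client certificate during the handshake (the hostname is a placeholder):

    # with ssl_verify_client on, the handshake output should include an
    # "Acceptable client certificate CA names" section naming the configured CA
    openssl s_client -connect example.elasticbeanstalk.com:443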

How to parse this URL

I'm trying to get the source code of a URL, but it gives me an error. Could you help me, please?
curl -v http://www.segundamano.es/anuncios-madrid/ -m 10
* About to connect() to www.segundamano.es port 80 (#0)
* Trying 195.77.179.69...
* Connected to www.segundamano.es (195.77.179.69) port 80 (#0)
> GET /anuncios-madrid/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: www.segundamano.es
> Accept: */*
>
* Empty reply from server
* Connection #0 to host www.segundamano.es left intact
curl: (52) Empty reply from server
Many thanks, and sorry for my English!
It looks like this domain is actively blocking curl (and wget) requests. If you pass a browser's user agent, it appears that you can get around this (curl and wget use the same long command-line option for the user agent). For example:
This doesn't work:
C:\>wget http://www.segundamano.es/anuncios-madrid/
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
--2013-10-16 10:06:13-- http://www.segundamano.es/anuncios-madrid/
Resolving www.segundamano.es... 195.77.179.69, 213.4.96.70
Connecting to www.segundamano.es|195.77.179.69|:80... connected.
HTTP request sent, awaiting response... 502 Bad Gateway
2013-10-16 10:06:15 ERROR 502: Bad Gateway.
But this does:
C:\>wget --user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" http://www.segundamano.es/anuncios-madrid/
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
--2013-10-16 10:06:29-- http://www.segundamano.es/anuncios-madrid/
Resolving www.segundamano.es... 195.77.179.69, 213.4.96.70
Connecting to www.segundamano.es|195.77.179.69|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html'
[<=>] 178,588 267K/s in 0.7s
2013-10-16 10:06:33 (267 KB/s) - `index.html' saved [178588]
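The equivalent with curl, since the question started there (same spoofed browser user-agent string):

    curl -v -m 10 --user-agent "Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" http://www.segundamano.es/anuncios-madrid/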

Why am I getting a segmentation fault in Typhoeus when I perform a POST request?

Something really tricky is happening in our production environment (Red Hat Enterprise Linux Server release 5.4 (Tikanga), curl 7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5): POST requests silently turn into GET requests. Below is the relevant log provided by nginx.
cache: [GET /login] miss
url->http://localhost:8080/login
About to connect() to localhost port 8080
Expire at 1339680839 / 265363 (300000ms)
Trying 127.0.0.1... * connected
Connected to localhost (127.0.0.1) port 8080 /usr/local/lib/ruby/gems/1.8/gems/typhoeus-0.4.0/lib/typhoeus/multi.rb:141: [BUG] Segmentation fault ruby 1.8.7 (2012-02-08 patchlevel 358) [x86_64-linux]
2012/06/14 21:28:59 [error] 29829#0: *6031 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "POST /login HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "127.0.0.1:8081", referrer: "http://127.0.0.1:8081/login"
In fact, we send a POST request to log in, but it is actually converted to a GET.
In our development environment (Ubuntu 12.04, curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3), everything works fine.
Can anyone explain this?
After I upgraded curl from 7.15 to 7.22, the problem was solved.
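If you need to confirm which libcurl the gem is actually linked against on a given box (Typhoeus 0.4.x builds a native extension against libcurl), something along these lines may help; the gem path is illustrative:

    # locate the Typhoeus native extension and inspect its libcurl dependency
    find /usr/local/lib/ruby/gems -name '*.so' -path '*typhoeus*' -exec ldd {} \; | grep -i curl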

Rails app may be crashing Passenger, not sure how to debug?

My app generates some image data on the fly and sends it back to the browser with send_data some_huge_blob, :type => 'image/png'. This works well enough in development mode, but in production, with nginx/Passenger in the mix, it appears that Passenger sometimes just crashes. Here is the debug output in my nginx log:
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/Pool.h:1162 time=2011-07-25 23:15:14.965 ]: Exception occurred while connecting to checked out process 1428: Cannot connect to Unix socket '/tmp/passenger.1.0.589/generation-0/backends/ruby.kJRjXYuZteKoogZIufN8a2cDPdpbIlYmIr1hh3G9UV7GhKDB4pqZ5y0jR': Connection refused (111)
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/Pool.h:685 time=2011-07-25 23:15:14.965 ]: Detaching process 1428
[ pid=596 thr=140172782794496 file=ext/common/ApplicationPool/../Process.h:138 time=2011-07-25 23:15:14.969 ]: Application process 1428 (0x2676ee0): destroyed.
[ pid=1405 thr=70178806733240 file=abstract_request_handler.rb:466 time=2011-07-25 23:15:14.982 ]: Accepting new request on main socket
2011/07/25 23:15:16 [error] 642#0: *96 upstream prematurely closed connection while reading response header from upstream, client: 173.8.216.57, server: app.somedomain.com, request: "GET /projects/4e2dee4c106a821bf2000008/revisions/1/assets/Layout2.psd/preview HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "app.somedomain.com"
Note that there is nothing in my production.log file that indicates the request even makes it to the app!
Any ideas? Or ideas as to how to debug this further? The connection refused bit is interesting...
For what it's worth, this is an Ubuntu image on a micro instance in AWS.
