Is it possible to use a self-signed cert with an EC2 instance that requires a client cert from API Gateway - ruby-on-rails

Here's my situation:
I'm using Elastic Beanstalk to spin up a single EC2 instance without an ELB. I want to have the instance only accessible through the API Gateway. So, I went the route of using client-side certificates for authentication, like what's described here.
My EC2 instance has Nginx serving a Rails application. I generated a self-signed certificate on my machine and configured Nginx to use that to serve stuff over https.
Everything seems fine, but when I try to invoke my proxy endpoint from the API Gateway console, I get a 500 error like below:
...
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request URI: https://xxxxxxxxx.xxxxxxxxx.us-east-1.elasticbeanstalk.com/health
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request headers: {x-amzn-apigateway-api-id=xxxxxxxxx, User-Agent=AmazonAPIGateway_xxxxxxxx, Accept-Encoding=identity}
Thu Sep 14 02:27:05 UTC 2017 : Endpoint request body after transformations:
Thu Sep 14 02:27:05 UTC 2017 : Sending request to https://xxxxxxxxx.xxxxxxxx.us-east-1.elasticbeanstalk.com/health
Thu Sep 14 02:27:05 UTC 2017 : Execution failed due to configuration error: General SSLEngine problem
Thu Sep 14 02:27:05 UTC 2017 : Method completed with status: 500
I'm thinking that it has something to do with the fact that I'm using a self-signed certificate on the backend. But do I really have to purchase a legitimate certificate in order to complete my setup? Are there any other solutions that would allow me to accept requests to my EC2 instance only through the API Gateway?
I looked at the Lambda method that is described here, but I didn't want to add any more complexity or latency to the requests.
Here's my Nginx configuration for completeness:
server {
    listen 443;
    server_name localhost;

    ssl on;
    ssl_certificate /etc/pki/tls/certs/server.crt;
    ssl_certificate_key /etc/pki/tls/certs/server.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_prefer_server_ciphers on;

    # Require the client certificate that API Gateway presents
    ssl_client_certificate /etc/pki/tls/certs/api_gateway.cer;
    ssl_verify_client on;

    # Reject plain-HTTP requests that hit the TLS port
    if ($ssl_protocol = "") {
        return 444;
    }
}
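For context, a hedged sketch of the two certificates involved here (the AWS CLI is assumed to be configured, and the paths and hostname placeholders are illustrative): the client certificate that API Gateway presents to the backend, which nginx checks via ssl_client_certificate/ssl_verify_client above, and the server certificate that nginx presents back, which API Gateway has to be able to validate against a CA it trusts and against the hostname - which is where a self-signed certificate typically falls down.

# Generate the client certificate API Gateway will present to the backend and save its
# PEM for nginx's ssl_client_certificate directive (the returned ID is then attached
# to the API's stage settings in the console or via the CLI).
aws apigateway generate-client-certificate \
    --description "gateway-to-backend" \
    --query pemEncodedCertificate --output text > /etc/pki/tls/certs/api_gateway.cer

# Inspect what certificate the backend actually presents during the TLS handshake.
openssl s_client -connect xxxxxxxxx.xxxxxxxxx.us-east-1.elasticbeanstalk.com:443 \
    -servername xxxxxxxxx.xxxxxxxxx.us-east-1.elasticbeanstalk.com </dev/null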

See my answer here: AWS API Gateway - Use Client-Side SSL Certificates. Not sure what the supposed incompatibility with NGINX is - I managed to create a PoC and validate the client-SSL authentication behavior.
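One hedged way to double-check that the nginx side really is enforcing the client-certificate requirement (hostname as in the question): without a client certificate, nginx with ssl_verify_client on should reject the request at the HTTP level, typically with a 400 and a "No required SSL certificate was sent" page, rather than failing the handshake itself.

# Expect an HTTP 400 from nginx when no client certificate is supplied
curl -kv https://xxxxxxxxx.xxxxxxxxx.us-east-1.elasticbeanstalk.com/health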

At the time of this writing, API Gateway appears to have a known incompatibility with NGINX around client certificates.

Related

Deluge client not passing through nginx grpc

I have a Deluge client (in a docker container - that's likely irrelevant).
I want to be able to connect to the daemon from the outside world while having it behind a reverse proxy.
I don't necessarily need TLS, but I suspect http2 may require it.
What works:
connecting locally on the network to the Deluge RPC with a Deluge desktop, Android and WebUI client works well.
sending requests to the nginx server is OK (I can see logs as I hit nginx)
All the networking around it (firewalls, port forwarding, DNS) is fine
What doesn't work:
Deluge client can't connect to the http server
nginx config:
server {
    server_name deluge.example.com;
    listen 58850;

    location / {
        proxy_pass grpc://localhost:58846;
    }

    ssl_certificate /etc/ssl/nginx/example.com.pem;
    ssl_certificate_key /etc/ssl/nginx/example.com.key;
    proxy_request_buffering off;
    gzip off;
    charset utf-8;
    error_log /var/log/nginx/nginx_deluge.log debug;
}
Major edit:
As it turns out, I had wrongly assumed that JSON-RPC and gRPC are more similar than just the "RPC" in the name. Hence my "original" issue, "nginx deluge rpc doesn't work", is no longer relevant.
Unfortunately, the "same" issue still persists. I still can't connect through the proxy even when using a regular HTTP proxy configuration, while I can make HTTP requests locally.
I will surely post an update or even an answer should I figure it out in the next few days...
When I try to connect with the Deluge client, I get this error message in the log file:
2022/06/14 16:59:55 [info] 1332115#1332115: *7 client sent invalid method while reading client request line, client: <REDACTED IPv4>, server: deluge.example.com, request: " Fu�Uq���U����a(wU=��_`. a��¹�(���O����f�"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http finalize request: 400, "?" a:1, c:1
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 event timer del: 17: 243303738
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http special response: 400, "?"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http set discard body
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 HTTP/1.1 400 Bad Request
Server: nginx/1.22.0
Date: Tue, 14 Jun 2022 16:59:55 GMT
Content-Type: text/html
Content-Length: 157
Connection: close
When I change the line listen 58850; to listen 58850 http2;, as I probably should, I get the following error: (log verbosity set to "debug")
2022/06/14 15:04:00 [info] 1007882#1007882: *3654 client sent invalid method while reading
client request line, client: <REDACTED IPv4>,
server: deluge.example.com, request: "x�;x��;%157?O/3/-�#�D��"
The gibberish there is seemingly identical when trying to connect from a different network from a different device. It was Dx�;x��;%157?O/3/-�#�E�, (there is a D as first character now) but all other attempts are again without the leading D.
or this error: (log verbosity set to "info")
2022/06/14 17:09:13 [info] 1348282#1348282: *14 invalid connection preface while processing HTTP/2 connection, client: <REDACTED IPv4>, server: 0.0.0.0:58850
I tried decoding the gibberish with various encodings, hoping it would turn out to be just a badly encoded but more helpful error message, or at least a lead to a solution.
I looked through the first two pages of Google results hoping the error messages would point me to a solution someone else had already found for my problem.
environment:
Docker version 20.10.17, build 100c70180f
nginx version: nginx/1.22.0
deluged 2.0.5
libtorrent: 2.0.6.0
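For reference, a hedged diagnostic sketch for pinning down which side speaks which protocol (hostnames and ports are the ones from the config above). The binary "request" in the nginx log is what a non-HTTP byte stream - for example a TLS handshake, or Deluge's own RPC protocol - looks like when nginx tries to parse it as an HTTP request line:

# What does deluged itself speak on the RPC port? By default it is Deluge's own RPC
# protocol over TLS, i.e. neither HTTP nor gRPC.
openssl s_client -connect localhost:58846 </dev/null

# What does nginx answer on the public port? With "listen 58850;" (no "ssl" flag) nginx
# expects plain HTTP there, even though ssl_certificate is configured.
openssl s_client -connect deluge.example.com:58850 </dev/null
curl -v http://deluge.example.com:58850/

If the goal is to expose the daemon RPC itself rather than the Web UI, that would point towards nginx's stream (TCP) proxying rather than an http location block.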

Digital Ocean managed deployed Rails app not responding to HTTP (returns 404)

I followed this 3-part tutorial and successfully deployed a Rails app as a managed Digital Ocean App.
Locally, I can use httpie to GET resources and POST to create new users such as:
http :8080/signup name=test email=test#email.com password=foobar password_confirmation=foobar
But once deployed on Digital Ocean at this URL, with a valid TCP health check, I try to create a user (with http/Postman):
http mtserver-igkkx.ondigitalocean.app:8080/signup name=test email=test#email.com password=foobar password_confirmation=foobar
and end up with:
HTTP/1.1 301 Moved Permanently
CF-RAY: 6d851c0f7951408d-CDG
Cache-Control: max-age=3600
Connection: keep-alive
Date: Fri, 04 Feb 2022 16:00:02 GMT
Expires: Fri, 04 Feb 2022 17:00:02 GMT
Location: https://mtserver-igkkx.ondigitalocean.app/signup
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
Assuming I need to prefix the URL with https, I try again with:
http https://mtserver-igkkx.ondigitalocean.app:8080/signup name=test email=test#email.com password=foobar password_confirmation=foobar
And end up with:
http: error: SSLError: HTTPSConnectionPool(host='mtserver-igkkx.ondigitalocean.app', port=8080): Max retries exceeded with url: /signup (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:997)'))) while doing a POST request to URL: https://mtserver-igkkx.ondigitalocean.app:8080/signup
If I try a POST with Postman:
POST > mtserver-igkkx.ondigitalocean.app:8080/signup?name=test&email=test#mail.com&password=foobar&password_confirmation=foobar
it returns:
{
  "status": 404,
  "error": "Not Found"
}
Visiting the server URL at '/' returns a 404, but I assume that's normal since the app only runs in API mode and no route currently handles /.
I'm looking to understand how to handle Digital Ocean in production so that I can create users on this API through HTTP requests.
For the record, SSL was the problem.
You need to go through the process of adding an SSL certificate through Digital Ocean so that HTTPS (and therefore your web app) works when you query the https-prefixed URL.
This unanswered SO post helped me a lot to figure out the right process.
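As a hedged illustration of what the working request might look like once the certificate is in place - the key changes from the attempts above are using https and dropping the :8080 port, since the Digital Ocean edge terminates TLS on the default port 443 (the WRONG_VERSION_NUMBER error above is typical of speaking TLS to a port that answers in plain HTTP):

# httpie infers POST from the body fields; the hostname is the one from the question
http https://mtserver-igkkx.ondigitalocean.app/signup \
    name=test email=test#email.com password=foobar password_confirmation=foobar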

Artifactory Docker 404 after upgrade to 7.4.1

After an Artifactory upgrade to 7.4.1 from 6.10.4, I've made the necessary port changes and the UI works fine, but I'm seeing the following in the artifactory-service log when attempting to use docker login via the subdomain method:
Request /v2/ should be a repo request and does not match any repo key
The docker login command prompts for authentication but then returns:
Error response from daemon: login attempt to http://<local-docker-repo>.<artifactory-url>.com/v2/ failed with status: 404 Not Found
Artifactory is running in a Kubernetes cluster behind an nginx ingress controller, which has an ingress set up specifically to serve https://<local-docker-repo>.<artifactory-url>.com via the same backend as the Artifactory UI. It seems like some URL rewrite functionality is not working; I'm just not sure how I've misconfigured it, as I had no problems in the previous version.
Curl results as follows:
curl -i -L -k http://docker-local.<artifactory-url>.com/v2/
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.9
Date: Mon, 21 Sep 2020 00:25:32 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://docker-local.<artifactory-url>.com/v2/
X-JFrog-Override-Base-Url: ://docker-local.<artifactory-url>.com:80
X-Forwarded-Port: 80
Host: docker-local.artifactory.<artifactory-url>.com
X-Forwarded-For: 10.60.1.1
HTTP/2 401
server: nginx/1.15.9
date: Mon, 21 Sep 2020 00:25:32 GMT
content-type: application/json;charset=ISO-8859-1
content-length: 91
www-authenticate: Basic realm="Artifactory Realm"
x-artifactory-id: ea0c76c54c1ef5de:45761df0:174ad9a6887:-8000
x-artifactory-node-id: artifactory-0
x-jfrog-override-base-url: ://docker-local.<artifactory-url>.com:443
x-forwarded-port: 443
host: docker-local.<artifactory-url>.com
x-forwarded-for: 10.60.x.x
strict-transport-security: max-age=15724800; includeSubDomains
{
  "errors" : [ {
    "status" : 401,
    "message" : "Authentication is required"
  } ]
}
Any help would be greatly appreciated!
Edit: As a workaround I've enabled Repository Path as the Docker access method, which works fine -- still not sure where subdomain is going wrong.
The issue was that the $repo variable in the nginx rewrite rules provided by Artifactory was not getting populated for some reason. Since we only have a single registry being used with the subdomain method, I updated the rewrite rule to hard-code the repo name, which resolved the issue.
To illustrate:
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
was changed to:
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/docker-local/$1/$2;
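A hedged way to sanity-check the change once nginx has reloaded the config (placeholders as in the question; the exact reload step depends on how the ingress controller picks up configuration changes):

# The registry ping endpoint should now route to the repo instead of returning 404
curl -u <user>:<password> https://docker-local.<artifactory-url>.com/v2/

# And docker login against the subdomain should succeed
docker login docker-local.<artifactory-url>.com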

Unknown SSL protocol error in connection with rails app on Heroku

I upgraded my Plan on Heroku to be able to use Heroku SSL, which includes Automated Certificate Management (ACM).
Hence when I run heroku certs:info I get:
Certificate details:
Common Name(s): www.myapp.fr
Expires At: 2018-04-29 10:10 UTC
Issuer: /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Starts At: 2018-01-29 10:10 UTC
Subject: /CN=www.myapp.fr
SSL certificate is verified by a root authority.
or heroku certs:
Name Common Name(s) Expires Trusted Type
────────────────── ──────────────── ──────────────────── ─────── ────
tyrannosaurs-12099 www.myapp.fr 2018-04-29 10:10 UTC True ACM
However, my app still appears as being unsecured (no https) and when I run curl -kvI https://www.myapp.fr, here is what I get:
* Rebuilt URL to: https://www.myapp.fr/
* Trying 79.125.111.38...
* Connected to www.myapp.fr (79.125.111.38) port 443 (#0)
* Unknown SSL protocol error in connection to www.myapp.fr:-9838
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to www.myapp.fr:-9838
Any idea on how I can get HTTPS working?
I think I solved it at the time by doing this: in order to force all clients to use HTTPS, you will need to update your application to check for this. In Rails this is usually done by setting
config.force_ssl = true
in config/environments/production.rb.
Then wait a few minutes and it should be OK.
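A hedged way to re-check things after the change (the Heroku CLI is assumed to be installed, and the app name placeholder is illustrative): heroku domains shows the DNS target the www CNAME should point at for ACM-managed certificates, and curl confirms both the handshake and the new redirect.

# Confirm the custom domain is configured and note its DNS target
heroku domains -a <your-app>

# Re-test the TLS handshake
curl -vI https://www.myapp.fr

# With config.force_ssl = true, plain HTTP should now answer with a 301 to https
curl -sI http://www.myapp.fr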

Heroku SSL DNS Endpoint not resolving

I'm having an issue where my CNAME that points to a herokussl.com SSL endpoint refuses to resolve, or seemingly even be propagated by the DNS servers, yet the regular herokuapp.com name works fine. I bought the domain name from whois.com, and their support claims the error is on Heroku's end or in how I entered the URL, but I'm not so sure. My certificate seems fine. Tech details below - thanks for any and all help!
Details:
CNAMES:
deez.chrtwt.org -> sleepy-garden-8448.herokuapp.com Active **Works**
www.chrtwt.org -> gifu-3664.herokussl.com Active **Does not resolve**
pmarx$ heroku certs
Endpoint Common Name(s) Expires Trusted
----------------------- -------------- -------------------- -------
gifu-3664.herokussl.com www.chrtwt.org 2015-01-27 23:59 UTC True
pmarx$ heroku certs:info
Fetching SSL Endpoint gifu-3664.herokussl.com info for sleepy-garden-8448... done
Certificate details:
Common Name(s): www.chrtwt.org
Expires At: 2015-01-27 23:59 UTC
Issuer: /C=US/ST=Nevada/L=Las Vegas/O=Charitweet LLC/CN=www.chrtwt.org
Starts At: 2014-01-27 00:00 UTC
Subject: /C=US/ST=Nevada/L=Las Vegas/O=Charitweet LLC/CN=www.chrtwt.org
SSL certificate is verified by a root authority.
pmarx$ heroku domains
=== sleepy-garden-8448 Domain Names
deez.chrtwt.org
sleepy-garden-8448.herokuapp.com
www.chrtwt.org
www.pbridge.org
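A hedged sketch of how one might check where the resolution actually breaks down (dig is assumed to be available; the hostnames are the ones listed above):

# Is the CNAME for www actually published?
dig +short www.chrtwt.org CNAME

# Does the SSL endpoint itself resolve to anything?
dig +short gifu-3664.herokussl.com

# Compare with the record that is known to work
dig +short deez.chrtwt.org CNAME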
