Update Amazon RDS SSL/TLS Certificates - Elastic Beanstalk - ruby-on-rails

AWS recently announced the need to:
Update Your Amazon RDS SSL/TLS Certificates by October 31, 2019
I have a Rails application hosted with a classic Elastic Beanstalk load balancer, which connects to a Postgres DB using RDS.
The required steps according to Amazon are:
Download the new SSL/TLS certificate from Using SSL/TLS to Encrypt a Connection to a DB Instance.
Update your database applications to use the new SSL/TLS certificate.
Modify the DB instance to change the CA from rds-ca-2015 to rds-ca-2019.
(https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html)
Since my load balancer is set up to connect to my EC2 instances via HTTP on port 80 (not SSL), does this mean I don't need to follow steps 1 and 2, and only need to follow step 3?
Or do I have to download the updated certificates and install/add them to my load balancer or EC2 instances manually? I'm not sure how to do that.

Steps 1 and 2 are only required if your application's connection to the database is TLS encrypted.
Do not change the load balancer's TLS settings; that can break your application. Load balancer TLS and RDS TLS are two separate things.
If your application makes a plain (unencrypted) database connection, you can safely go straight to step 3:
Modify the DB instance to change the CA from rds-ca-2015 to
rds-ca-2019.
Normal practice is to keep the DB in a private subnet, not accessible from the public internet. TLS helps when the connection between your database and backend travels over the internet, not when it stays within a VPC.
With an unencrypted connection between the MySQL client and the
server, someone with access to the network could watch all your
traffic and inspect the data being sent or received between client and
server.
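If the connection is indeed plain, step 3 is a one-line change. A sketch with the AWS CLI (the instance identifier is a placeholder; without --apply-immediately the change waits for the next maintenance window, and note the instance may restart when the certificate is rotated):

aws rds modify-db-instance \
    --db-instance-identifier my-rds-instance \
    --ca-certificate-identifier rds-ca-2019 \
    --apply-immediately

The same switch is available in the RDS console via Modify on the DB instance.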

There is a much easier answer to the question:
You do not need to install anything in your Beanstalk environment if
you upgrade the CA Certificate used by the RDS attached to it.
https://stackoverflow.com/a/59742149/7051819
Just follow point 3 and ignore 1 and 2.
(Yes, I wrote that answer myself.)

Related

Jenkins pointing server to domain created

Good Morning
I have created a Jenkins server in AWS, and I am able to access the platform using the server's IP;
however, I want to access it more securely.
I have set up a subdomain on my hosting service and set the server's IP as an A record.
I have also defined this in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing,
but if I add :8080 at the end it takes me to the Jenkins platform.
What am I missing here?
Thanks
Your browser uses port 443 for https:// (and 80 for plain http://), but Jenkins listens on port 8080 by default, so nothing answers on the standard ports. I recommend using an AWS Application Load Balancer to access your Jenkins web server.
It will host the HTTPS certificate (if you are using AWS Certificate Manager), and you can then point your domain's DNS at the ALB's DNS name.
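A sketch of that setup with the AWS CLI; every name and ARN below is a placeholder:

aws elbv2 create-target-group \
    --name jenkins-tg --protocol HTTP --port 8080 \
    --vpc-id vpc-0123456789abcdef0

aws elbv2 register-targets \
    --target-group-arn <jenkins-target-group-arn> \
    --targets Id=i-0123456789abcdef0

aws elbv2 create-listener \
    --load-balancer-arn <jenkins-alb-arn> \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<jenkins-target-group-arn>

Then replace the subdomain's A record with a CNAME pointing at the ALB's DNS name, and https://domainname will reach Jenkins without the :8080.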

AWS/SSL certificate(s) for Nginx setup inside docker container

I have a dockerized Django app (cookiecutter) and I want to configure nginx inside a Docker container so I can deploy it to an EC2 instance. For that I need SSL certificates.
Getting an SSL certificate with Let's Encrypt, as recommended everywhere, seems to be a complicated task when you combine Docker, nginx, and EC2. I tried it and can't get past the error I'm linking below.
So I was wondering if there is a way to configure nginx with an AWS certificate. I read that AWS certificates are free but can't be downloaded (https://serverfault.com/questions/822035/). So my question is threefold:
a) Can I configure nginx without HTTPS, get a free certificate for my AWS EC2 instance, and then run my app on that server with HTTPS?
b) If the answer is yes, how would I configure my nginx server to serve only HTTP in that setup?
c) If I buy a certificate from a CA, can I use it to configure nginx, and will it be portable if I move my app (to DigitalOcean, Azure, or somewhere else)?
I am by no means an expert in most of these technologies and am fighting my way through a jungle here. Very grateful for help, hints, tips, suggestions, and guidance. Thanks very much in advance. I'll happily provide more code if needed.
The tutorial I tried (I can't get past its error):
https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
The tutorial for nginx with Docker and Let's Encrypt that I wanted to follow if there is no easier, quicker solution: https://www.humankode.com/ssl/how-to-set-up-free-ssl-certificates-from-lets-encrypt-using-docker-and-nginx
Error with Let's Encrypt:
Timeout during connect (likely firewall problem) To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.
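On (a) and (b): an ACM certificate cannot be exported to nginx, but it can sit on a load balancer or CloudFront distribution in front of the instance, with TLS terminating there; nginx inside the container then only needs to speak plain HTTP. A minimal sketch of such an HTTP-only server block, assuming a compose service named django listening on port 8000 (both names are assumptions):

server {
    listen 80;
    server_name example.com;  # placeholder; TLS terminates at the AWS load balancer

    location / {
        proxy_pass http://django:8000;  # assumed compose service name and port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;  # scheme seen by the LB
    }
}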

Using mruby, ngx_mruby and redis - Applying on current production server

I am very afraid of making modifications on the server, because the server is working fine with the current settings.
Let me explain: the server is an Amazon EC2 instance. On this instance I have:
ruby -v: ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
rails -v: Rails 4.2.3
nginx -v: nginx/1.8.0
passenger -v: Phusion Passenger version 5.0.10
I have 1 Load Balancer, which has a listener:
Load Balancer Protocol: HTTPS
Load Balancer Port: 443
Instance Protocol: HTTP
Instance Port: 80
SSL Certificate: Using a certificate issued by Amazon Certificate Manager. It covers the domain and all sub-domains (wildcard).
These settings allow me to:
Have the main domain for the app:
www.testname.com and testname.com to use as institutional pages (About, Price, Terms, etc.);
app.testname.com for users to use the system;
Have as many subdomains as I want, because EACH USER has a specific page:
user1.testname.com
user2.testname.com
user3.testname.com
etc.
All of this is dynamic. The user registers on the app and gets a subdomain, which they can access via https://. It works fine.
Users WANT to use their own domains, of course. This part is easily resolved: I create a CNAME record on the custom domain, pointing to our subdomain, like this:
usercustomname.com CNAME TO user1.testname.com
It works fine. BUT the big problem is: https:// does not work on the custom domain, obviously, since our certificate only covers testname.com and its subdomains.
With Amazon Certificate Manager I can import custom certificates. And then, using the awesome rails-letsencrypt gem, I can generate Let's Encrypt certificates for the custom domain names.
But the https listener on the Classic Load Balancer allows only ONE certificate! This is very bad, because I can have a lot of certificates but use only one on the whole server.
Recently, Amazon released support for multiple certificates on the Application Load Balancer using SNI. I could migrate my Classic Load Balancer to an Application Load Balancer, but this does not solve the problem, because the limit is 25 certificates per load balancer. That is very low.
The solution I found is to create an Amazon ElastiCache instance running a Redis server and then use ngx_mruby to fetch the certificate. I plan it like this:
Change the https listener like this:
Instance Protocol: HTTPS
Instance Port: 443
Remove the certificate issued in Amazon Certificate Manager
Install mruby
Install ngx_mruby
Using the rails-letsencrypt gem, create 1 certificate for each institutional subdomain (app, www, empty subdomain) AND 1 certificate for each user subdomain.
When a certificate is created, the rails-letsencrypt gem can save it in Redis.
With ngx_mruby listening on port 443, the certificate for the requested domain is picked up from Redis.
Apparently, this will work. The logic seems right, but I don't know how it behaves in practice.
My questions are:
1) To install mruby, I will follow these steps. After the install, will it impact the current Ruby installation? Will I need to change already-developed system code as a result of installing mruby?
2) Will using Redis affect anything on the current server? Aside from the extra $12/month on the Amazon bill, I believe Redis will not influence the current server at all.
3) Do you think what I planned will work around the Amazon certificate limit?
Sorry for the big text. I'm not a server specialist. This is the only server I have, AND it has no backup. I'm afraid of breaking the server with no way to fix it.
Thanks, and I appreciate any help :)
EDIT 1
Using ngx_mruby and Redis with an Amazon Classic Load Balancer will not work, because the https listener requires a single certificate. So even if I generate the certificates and connect ngx_mruby with Redis, the Load Balancer answers first, with the default domain's certificate.
But I found a way (it works):
All customer URLs have this structure:
customer1.myapp.com
customer2.myapp.com
customer3.myapp.com
All requests use the https listener via the Load Balancer, and there is no way to use multiple SSL certificates on a Classic Load Balancer. So I did this:
Register another domain, like myapp.net
Using Amazon Route 53, I created another hosted zone and pointed the domain's DNS records to it
In Amazon Route 53, I created these records:
A Type A record pointing to the instance IP
A Type CNAME record with name * and value myapp.net
I set up my Rails app to recognize the myapp.net domain. With this, accessing customer1.myapp.com AND customer1.myapp.net calls the same resource, BUT customer1.myapp.com uses the https listener on the load balancer and customer1.myapp.net does not.
I just save the SSL certificate generated by the gem in the folder /etc/nginx/ssl/ and then create a virtual host in nginx. After that, it FINALLY WORKS!
Now I have to discover HOW TO SAVE the certificate into that folder and HOW TO CREATE a virtual host in nginx from Rails. The manual process is described in my other question.
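For reference, the manually created virtual host might look like this sketch (domain and file paths are placeholders following the layout above, and Passenger is assumed since it is already in the stack):

server {
    listen 443 ssl;
    server_name customer1.myapp.net;

    ssl_certificate     /etc/nginx/ssl/customer1.myapp.net.crt;
    ssl_certificate_key /etc/nginx/ssl/customer1.myapp.net.key;

    # Hand requests to the same Rails app served by Passenger
    root /var/www/app/public;   # placeholder path
    passenger_enabled on;
}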
You need to recompile nginx to support ngx_mruby; we don't suggest using the dynamic module feature for now, because there is no real use case for it among people using ngx_mruby. Your system's existing Ruby version will not be changed.
If you enable Redis as a cache for your Rails app itself, it may influence your website. But if you only create a new ElastiCache instance for the certificates, there is no other side effect for you. And I think using ElastiCache will be better optimized than hosting Redis yourself.
I haven't tried it, but it may work. Maybe others can answer that part of your question.
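For what it's worth, the dynamic lookup the question plans is close to the SNI example in the ngx_mruby README; a sketch, assuming ngx_mruby is built with the mruby-redis mrbgem and the gem stores keys as <hostname>.crt / <hostname>.key (the Redis endpoint is a placeholder):

server {
    listen 443 ssl;
    server_name _;

    # Dummy pair; the handler below swaps in the real certificate per SNI hostname
    ssl_certificate     /etc/nginx/ssl/dummy.crt;
    ssl_certificate_key /etc/nginx/ssl/dummy.key;

    mruby_ssl_handshake_handler_code '
      ssl = Nginx::SSL.new
      host = ssl.servername                  # SNI name sent by the client
      redis = Redis.new "127.0.0.1", 6379    # placeholder ElastiCache endpoint
      ssl.certificate_data = redis["#{host}.crt"]
      ssl.certificate_key_data = redis["#{host}.key"]
    ';
}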

AWS Certificate Manager - SSL says in use but HTTPS does not work

So here is my issue. I have a Rails 5 application deployed to AWS using Elastic Beanstalk. I purchased the domain name (eightysixpad.me) from Bluehost.com and updated the DNS records to point to the IP address of the EC2 instance that was created.
I used AWS Certificate Manager to create an SSL certificate for eightysixpad.me and www.eightysixpad.me, and verified both through email. I created a Load Balancer under the Elastic Beanstalk environment and applied the SSL certificate to it. The AWS Certificate Manager console says the certificate is in use; however, when I go to https://eightysixpad.me, it says the site cannot be reached. http://eightysixpad.me works fine but is marked as not secure.
I am not sure what I am doing wrong! Any help would be greatly appreciated, and I would be more than happy to provide more information if necessary!
Thank you all in advance!
Update the DNS entry to a CNAME and make it point to the DNS name of the ELB to which you added the EC2 instance(s).
For example, create a new CNAME entry pointing to the ELB DNS name, e.g. name-of-elb-unique.ap-southeast-1.elb.amazonaws.com
ELBs in AWS do not have fixed IP addresses (always DNS names), as the IP addresses keep changing. (For the bare apex domain eightysixpad.me, which cannot carry a CNAME, use a Route 53 ALIAS record to the ELB instead.)
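In zone-file terms, the record looks something like this (the ELB name here is a made-up example; the real one is shown in the EC2 console under Load Balancers):

www.eightysixpad.me.  300  IN  CNAME  awseb-e-abcd1234-AWSEBLoa-0123456789.us-east-1.elb.amazonaws.com.

You can then check it with dig www.eightysixpad.me CNAME +short before retrying https://.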

Use a trusted CA signed certificate on a local rails server

This might sound a little stupid, but I am trying to test iOS device enrollment, and I want to use a trusted-CA (e.g. Verisign, Comodo) signed certificate on my localhost Rails WEBrick server. I do not want to use a self-signed certificate because I need to test a very particular scenario. Is there a way to do this? I know domain-control validation will fail if I try to create a CA-signed certificate on a site like Comodo, and I can't use the certificate I already have for my production server since it's bound to that domain. Is there a workaround to create a production-level SSL certificate and use it on a development server?
You can use your existing production certificate for your local setup, and use a local DNS server (such as BIND) to resolve the domain name to your local IP address instead of your production server's IP address.
Update:
Install BIND (or whatever DNS server software you like) on some computer on your network, let us say 192.168.100.10.
Add a record so that www.myprodserver.com resolves to 192.168.100.100 (your local machine).
Now, on your local machine (assume it's a MacBook), go to your network settings and add 192.168.100.10 as the only DNS server.
Run ping www.myprodserver.com and make sure it resolves to 192.168.100.100.
This is almost (but not exactly) equivalent to using the /etc/hosts file to resolve domain names to IP addresses.
(All IP addresses and domain names above are just examples.)
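The quick single-machine alternative is one line in /etc/hosts (same example addresses as above):

192.168.100.100   www.myprodserver.com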
Also, I think you will need something better than WEBrick to handle SSL certificates. You can use nginx to terminate SSL and proxy to WEBrick.
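A minimal sketch of that offload, assuming Rails/WEBrick runs on port 3000 and the production certificate files were copied locally (all paths are placeholders):

server {
    listen 443 ssl;
    server_name www.myprodserver.com;

    ssl_certificate     /etc/nginx/ssl/myprodserver.crt;  # production cert copied locally
    ssl_certificate_key /etc/nginx/ssl/myprodserver.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # WEBrick keeps serving plain HTTP
        proxy_set_header Host $host;
    }
}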
