Good Morning
I have created a Jenkins server in AWS, and I am able to access the platform using the IP of the server;
however, I want to access it more securely.
I have set up a subdomain on my hosting service and pointed it at the server's IP with an A record.
I have also set this URL in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing,
but if I add :8080 at the end of it, it takes me to the Jenkins platform.
what am I missing here?
Thanks
I recommend using an AWS Application Load Balancer to access your Jenkins web server.
It will host the HTTPS certificate (if you are using AWS Certificate Manager), and you will be able to configure DNS to point your domain at the ALB's name.
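If a load balancer is overkill, another common approach (not from the answer above, just a sketch) is to run nginx on the instance itself, terminating HTTPS on port 443 and proxying to Jenkins on 8080. The server name and certificate paths below are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name jenkins.example.com;   # placeholder: your subdomain

    # placeholder paths: point these at your own certificate and key
    ssl_certificate     /etc/nginx/ssl/jenkins.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/jenkins.example.com.key;

    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto https;
    }
}
```

With this in place, https://domainname works without appending :8080, because nginx listens on the default HTTPS port and forwards to Jenkins internally.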
I have a multi-tenant Heroku app running Ruby on Rails with a wildcard SSL certificate on *.xyz.com, which serves https://customer1-app.xyz.com securely and as expected. The problem is that I can't add a GoDaddy SSL certificate to a custom domain (customer1.com), also at GoDaddy, pointing to one of the subdomains (customer1-app.xyz.com).
The approach I'm trying is fully described here: https://help.heroku.com/8P5TVA4T/how-can-i-configure-multiple-ssl-certificates-for-a-single-app
Simply put:
I created a shell application customer1-endpoint on Heroku
I added the SSL Endpoint add-on and installed the certificate bought from GoDaddy on the shell application
I copied the endpoint (DNS target - abc.ssl.herokudns.com) from the Heroku CLI to the CNAME record at GoDaddy
I added the custom domain (customer1.com, www.customer1.com) to my main production (xyz.com) Heroku app
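As a sketch, the steps above expressed as Heroku CLI commands (app names, domains, and certificate file names are placeholders, and SSL Endpoint has since been deprecated):

```
heroku create customer1-endpoint                           # the shell app
heroku addons:create ssl:endpoint --app customer1-endpoint
heroku certs:add server.crt server.key --app customer1-endpoint
heroku certs --app customer1-endpoint                      # note the DNS target, e.g. abc.ssl.herokudns.com
heroku domains:add customer1.com --app my-production-app
heroku domains:add www.customer1.com --app my-production-app
# then at GoDaddy: CNAME www.customer1.com -> the DNS target above
```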
When I try to access https://www.customer1.com/ I receive a "No such app" message on Heroku.
Any ideas what's going wrong?
For anyone coming here through Google: my issue was that my SSL endpoint was in a different region than my main app. This approach works; just make sure both apps are in the same region.
SSL Endpoint is deprecated as of July 31st, 2021.
I have a development server that I run locally on my Mac, which is named mark-mb12. It has a number of docker containers and exposes a few subdomains (api, www, admin) over https. Because I use https, I've created a local Certificate Authority and certs using my local machine name, so they are for mark-mb12.local, api.mark-mb12.local, etc. I modified my /etc/hosts file to define the subdomains, mapping them to 127.0.0.1.
Everything works just fine on my machine. I can access the www and admin subdomains in the browser, and my iOS app works against the api subdomain when running in the simulator.
From my iPhone, I have it working so that I can access https://mark-mb12.local in Safari, so I have the CA installed and trusted.
What I can't seem to figure out is how to access the subdomains from my device or a browser running on a different machine.
Any ideas as to what I need to do?
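An /etc/hosts file only affects the machine it lives on, so other devices never see those subdomains. One option (an assumption, not something from the question) is to run a small DNS server such as dnsmasq on the Mac and point the iPhone's Wi-Fi DNS at the Mac. A minimal dnsmasq.conf fragment might look like this, where 192.168.1.20 is a placeholder for the Mac's LAN IP:

```
# answer every query under mark-mb12.local (including api., www., admin.)
# with the Mac's LAN address
address=/mark-mb12.local/192.168.1.20
```

One caveat: the .local suffix is also claimed by Bonjour/mDNS on Apple devices, which can interfere with plain DNS lookups, so a suffix like .test may behave more predictably.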
I am very afraid of making modifications on the server, because the server is working fine with the current settings.
Let me explain: the server is an Amazon EC2 instance. On this instance I have:
ruby -v: ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-linux]
rails -v: Rails 4.2.3
nginx -v: nginx/1.8.0
passenger -v: Phusion Passenger version 5.0.10
I have 1 Load Balancer, which has a listener:
Load Balancer Protocol: HTTPS
Load Balancer Port: 443
Instance Protocol: HTTP
Instance Port: 80
SSL Certificate: Using a certificate issued on Amazon Certificate Manager. I have the domain and all sub-domains (wildcard).
These settings allow me to:
Have the main domain for the app:
www.testname.com and testname.com, used as institutional pages (About, Price, Terms, etc.);
app.testname.com for users to use the system;
Have as many subdomains as I want, because EACH USER has a specific page:
user1.testname.com
user2.testname.com
user3.testname.com
etc.
All of this is dynamic. The user registers on the app and gets a subdomain, which they can access via https://. It works fine.
Users WANT to use their own domains, of course. This part is easily resolved: I create a CNAME record on the custom domain, pointing to our subdomain, like this:
usercustomname.com CNAME TO user1.testname.com
It works fine. BUT the big problem is: 'https://' does not work on the custom domain, obviously, since our certificate only covers the domain testname.com and its subdomains.
With Amazon Certificate Manager I can import custom certificates, and then, using the awesome rails-letsencrypt gem, I can generate Let's Encrypt certificates for the custom domain names.
But the HTTPS listener of the Amazon Load Balancer allows only 1 certificate! This is very bad, because I can have many certificates but use only one on the whole server.
Recently, Amazon released support for multiple certificates on the Application Load Balancer using SNI. I could migrate my Classic Load Balancer to an Application Load Balancer, but this does not solve the problem, because the limit is 25 certificates per Load Balancer. That is very low.
The solution I found is to create an Amazon ElastiCache instance running a Redis server, and then use ngx_mruby to fetch the certificate. I plan it like this:
Change the HTTPS listener like this:
Instance Protocol: HTTPS
Instance Port: 443
Remove the certificate issued in Amazon Certificate Manager
Install mruby
Install ngx_mruby
Using the rails-letsencrypt gem, create 1 certificate for each institutional subdomain (app, www, bare domain) AND 1 certificate for each user subdomain.
When a certificate is created, the rails-letsencrypt gem can save it in Redis.
With ngx_mruby listening on port 443, the certificate for the requested domain is fetched from Redis.
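The ngx_mruby side of that plan might look roughly like the sketch below, based on ngx_mruby's documented dynamic-certificate feature. The Redis host, the key naming scheme (domain.crt / domain.key), and the dummy certificate paths are all assumptions, and directive names may vary by ngx_mruby version:

```nginx
server {
    listen 443 ssl;
    server_name _;

    # a dummy certificate is still needed so nginx can start;
    # it is replaced per-handshake by the handler below
    ssl_certificate     /etc/nginx/ssl/dummy.crt;
    ssl_certificate_key /etc/nginx/ssl/dummy.key;

    mruby_ssl_handshake_handler_code '
      ssl    = Nginx::SSL.new
      domain = ssl.servername          # SNI hostname sent by the client

      redis = Redis.new "my-elasticache-endpoint", 6379
      crt   = redis.get "#{domain}.crt"
      key   = redis.get "#{domain}.key"

      ssl.certificate_data     = crt
      ssl.certificate_key_data = key
    ';
}
```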
Apparently this will work. The logic seems right, but I don't know how it behaves in practice.
My questions are:
1) To install mruby, I will follow these steps. After the installation, will it impact the current Ruby installation? Will I need to change already-developed system code as a result of installing mruby?
2) Will using Redis affect anything on the current server? Despite the $12/month increase in my Amazon bill, I believe that using Redis will not influence the current server at all.
3) Do you think that what I planned will work around the Amazon certificate limit?
Sorry for the big text. I'm not a server specialist. This is the only server I have, AND it has no backup, so I'm afraid of breaking it with no way to fix it.
Thanks, and I appreciate any help :)
EDIT 1
Using ngx_mruby and Redis with the Amazon Classic Load Balancer will not work, because the HTTPS listener requires one certificate. So even if I generate the certificates and connect ngx_mruby to Redis, the Load Balancer will already have responded with the default domain's certificate.
But, I found a way (it works):
All customer URLs have this structure:
customer1.myapp.com
customer2.myapp.com
customer3.myapp.com
All requests use the HTTPS listener via the Load Balancer, and there is no way to use multiple SSL certificates on a Classic Load Balancer. So I did this:
Registered another domain, like myapp.net
Using Amazon Route 53, created another hosted zone and pointed the domain's DNS records at it
In Amazon Route 53, created these records:
A Type A record pointing to the instance IP
A Type CNAME record with name * and value myapp.net
I set up my Rails app to recognize the domain myapp.net. With this, accessing customer1.myapp.com AND customer1.myapp.net calls the same resource, BUT customer1.myapp.com uses the HTTPS listener on the load balancer and customer1.myapp.net does not.
I just save the SSL CERTIFICATE generated by the gem in the folder /etc/nginx/ssl/ and then create a virtual host in NGINX. After that, it FINALLY WORKS!
Now I have to discover HOW TO SAVE the certificate in that folder and HOW TO CREATE a virtual host in NGINX, from RAILS. The manual process is described in my other question.
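A hedged sketch of what that Rails-side automation might look like (this is not the gem's API; the directories, file names, and proxied port are assumptions):

```ruby
# Sketch: persist an issued certificate to /etc/nginx/ssl and generate a
# matching nginx virtual host. Paths and the proxied port are assumptions.
require "fileutils"

SSL_DIR   = "/etc/nginx/ssl"
VHOST_DIR = "/etc/nginx/sites-enabled"

# Build an nginx server block for one custom domain.
def vhost_config(domain)
  <<~NGINX
    server {
      listen 443 ssl;
      server_name #{domain};

      ssl_certificate     #{SSL_DIR}/#{domain}.crt;
      ssl_certificate_key #{SSL_DIR}/#{domain}.key;

      location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
      }
    }
  NGINX
end

# Write the PEM data and the vhost file. nginx must then be reloaded
# (e.g. `sudo nginx -s reload`) for the new host to take effect.
def install_certificate(domain, certificate_pem, key_pem)
  FileUtils.mkdir_p(SSL_DIR)
  FileUtils.mkdir_p(VHOST_DIR)
  File.write(File.join(SSL_DIR, "#{domain}.crt"), certificate_pem)
  File.write(File.join(SSL_DIR, "#{domain}.key"), key_pem)
  File.write(File.join(VHOST_DIR, "#{domain}.conf"), vhost_config(domain))
end
```

The Rails process would need write permission on those directories (or a small privileged helper), which is worth considering before wiring this into the certificate-issuing flow.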
You need to recompile your Nginx to support ngx_mruby; we don't recommend using the dynamic module feature for now, because there is no use case for it among people using ngx_mruby. The Ruby version on your system will not be changed.
If you enable Redis as a cache for your Rails app, it may influence your website. But if you only create a new ElastiCache instance, there is no other side effect for you. And I think you will get better performance from ElastiCache than from hosting Redis yourself.
I haven't tried it, but it may work. Maybe others can answer this question.
I would like to test my Rails app on my local machine and also have it functional on Heroku. However, if I specify my IP address in the "Website" field of the Facebook app settings, then my Heroku app breaks, and vice versa. Is there any way to have them both work using the same API key?
If not, how do I tell Omniauth to use one api key for the development environment and another for the production environment? Thanks!
Use a separate FB app for dev (local) and for production (Heroku).
Read the keys out of the environment, like:
ENV["FACEBOOK_APP_ID"]
ENV["FACEBOOK_SECRET"]
Then set the keys/credentials in your config on Heroku using heroku config:add.
Locally, use foreman to run your app and set the dev keys/credentials in a .env file. http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
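Put together, the environment-variable approach might look like this sketch (the variable names are the ones suggested above; the initializer fragment assumes the omniauth-facebook gem):

```ruby
# Sketch: read per-environment Facebook credentials from ENV.
# ENV.fetch raises early if a key is missing, instead of silently
# passing nil to the OmniAuth provider.
def facebook_credentials
  {
    app_id: ENV.fetch("FACEBOOK_APP_ID"),
    secret: ENV.fetch("FACEBOOK_SECRET"),
  }
end

# In config/initializers/omniauth.rb this would feed the middleware,
# roughly:
#   Rails.application.config.middleware.use OmniAuth::Builder do
#     creds = facebook_credentials
#     provider :facebook, creds[:app_id], creds[:secret]
#   end
```

Because the values come from the environment, the same code runs unchanged against the dev Facebook app locally (via the .env file) and the production app on Heroku (via config vars).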
Keep in mind that FB requires you to use SSL, so you'll need to set up something locally that can handle SSL requests.
You will either have to create another Facebook application for development, which is what I do, or create an entry in your /etc/hosts file that points the hostname of your Heroku app to your local machine.
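For the /etc/hosts route, the entry might look like this (the hostname is a placeholder for your app's domain):

```
# resolve the production hostname to the local machine while developing
127.0.0.1    myapp.herokuapp.com
```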