How to set a firewall for two servers hosted on Heroku - ruby-on-rails

I have two Heroku apps accessing the same Redis database, and I need to make sure that only these two servers can access it.
Normally I would do this by setting a firewall rule based on IP address. However, Heroku uses a dyno system and does not assign fixed IPs to servers.
I found the Proximo add-on, which can be used to set a static IP for each of my apps, but I would like to know if there is a simpler solution to this issue.

You don't have any control over, or guarantees about, the servers running your application on Heroku or their IP addresses.
You should instead use a secondary authentication mechanism, like Redis's built-in AUTH scheme, to authenticate incoming connections.
This is the mechanism most of the hosted Redis providers on Heroku use (RedisToGo, OpenRedis, etc.).
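For example, with the requirepass directive set on the Redis server, every client must authenticate before issuing commands. A minimal sketch using the redis gem (REDIS_URL is an assumed config var shared by both apps; host and password are placeholders):

require "redis" # gem "redis"

# The AUTH password travels in the connection URL,
# e.g. redis://:s3cret@redis.example.com:6379/0 (placeholder values).
redis = Redis.new(url: ENV.fetch("REDIS_URL"))

redis.set("greeting", "hello") # raises an auth error if the password is wrong
puts redis.get("greeting")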

Related

Azure Cloud Service microservice to K8s Migration

I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading and DNS (hosted in Azure) A records pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that the developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) in Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are understandably for Linux containers, but adapting the various docker-compose networking settings to their Windows-container equivalents has not yet yielded any success. The development environments need to stay aligned with the real systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, so it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put in to produce the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (the real DNS A records pointing to 127.0.0.1, the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (the app must still call the API at https://api.mydomain.dev), which would cause the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?

Connect to rails server remotely from raspberry pi

I have SSH'd into my Raspberry Pi and built a Rails application.
Now how do I load the rails app from another machine?
I have tried IP:port in a web browser, but this fails.
Can I use ssh from a web browser to load the rails server process?
Are there gems I need to install to do this?
Is there any good documentation that I have missed?
SOLUTION
Use ngrok to tunnel: https://medium.com/@karimbutt/using-ngrok-to-create-a-publicly-accessible-web-facing-raspberry-pi-server-35deef8c816a#.sraso7zar
Maybe the problem is with the IP address you're trying to use. Servers don't necessarily forward their public IP traffic to localhost automatically.
Perhaps you could configure the IP address somehow, I don't know (others might?). Alternatively, you can use a "local tunnel" service like ngrok or localtunnel. What these do is create a public URL for your localhost (i.e. your "loopback" address) so anyone can access it.
I spoke with an ngrok author via email. He assured me that I shouldn't expect any downtime from the service or have to restart it manually. Keep in mind, though, that if you're on the free plan, you're going to get a different URL whenever you restart ngrok. He also described it as kind of like a "souped-up SSH -R".
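If the app is reachable on the Pi itself but not from other machines, the cause may simply be that the server is bound to the loopback interface only. A minimal sketch, assuming Puma as the app server, that binds to all interfaces so the app is reachable at IP:port from the LAN:

# config/puma.rb
# Bind to all interfaces instead of the default loopback-only address;
# the port can still be overridden via the PORT environment variable.
bind "tcp://0.0.0.0:#{ENV.fetch('PORT', 3000)}"

(`rails server` accepts the same binding directly via its -b flag.)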

Connect to remote database from Heroku with static IP (Since database server will only allow whitelisted IPs)

I am running a Ruby on Rails application on Heroku, and my database is hosted somewhere else where it can only be accessed from certain whitelisted IPs. Since Heroku doesn't provide static IPs, I thought of using Proximo.
How can I connect to the remote database from Heroku through Proximo?
We had a difficult time achieving this (we ended up whitelisting every domain).
IPs
The problem is that dynos are hosted on AWS's EC2 cloud, meaning they aren't actually Heroku's own servers. This causes a lot of problems, as the IPs are all shrouded and change:
Because the Heroku dyno grid is dynamic in nature, the IP address that a given dyno will be assigned over time will be both dynamic and unpredictable. This dynamic sourcing of outbound traffic can make it difficult to integrate with APIs or make connections through firewalls that require IP-based whitelisting.
Looking at the Proximo add-on, you may be able to achieve what you need using a static IP.
Proximo
According to the Proximo tutorial on Heroku's site, you should be able to install the add-on and receive your outbound IP relatively simply:
$ heroku addons:add proximo:development
Adding proximo to sharp-mountain-4005… done, v18 ($5/mo)
Your static IP address is 127.0.0.1
You should then be able to use this on your DB host to allow the IP.
No Ruby database adapters natively support proxy connections, so for database access you need to proxy your calls via a SOCKS proxy. A SOCKS wrapper script to do this is available as part of our QuotaGuard Static Heroku add-on.
You configure this by prepending the wrapper script to the command in your Procfile, so it should work with minimal integration:
web: bin/qgsocksify bundle exec unicorn -p $PORT -c ./config/unicorn.rb
By default this wrapper routes all outbound TCP traffic via the proxy, but there is additional configuration available to limit this to just your database traffic.
A workaround is to whitelist all IP addresses from your SQL database provider's admin interface:
You can do this by whitelisting 0.0.0.0/0. (In Google Cloud SQL, you can do this under "authorized networks")
If you do so, it is highly recommended to configure your connection to use SSL and to only allow SSL connections to your database.
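On the client side this can be enforced in the connection URL. A sketch for PostgreSQL with ActiveRecord (host, database name, and credentials are placeholders; the server should also be configured to reject non-SSL connections):

require "active_record"

# sslmode=require makes the client refuse to connect without SSL.
# Host, database name, and credentials below are placeholders.
ActiveRecord::Base.establish_connection(
  "postgres://app_user:s3cret@db.example.com:5432/app_production?sslmode=require"
)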
You can configure NGINX as a reverse proxy so that your Heroku app connects to the NGINX server's IP address (which is whitelisted), and the reverse proxy connects to the DB.
https://stackoverflow.com/a/27874505/1345865
http://blog.talenox.com/post/107675614745/how-to-setup-static-ip-on-heroku

AWS Allowing inbound access from Heroku for forward proxy to external API

I have a Rails 3.2 app running on Heroku which needs to proxy requests to an external API from a static IP address. Since Heroku doesn't offer elastic IPs, and Proximo is too expensive and limiting for the number of requests I need to make, I set up a simple forward proxy on an AWS EC2 micro instance in us-east-1 using mod_proxy.
I can proxy requests from my app's local environment just fine. However, requests from Heroku time out. My thinking is that, since I can proxy from my local environment, the point of failure must be the connection between Heroku and my proxy box. I've tried the answer given here: Security settings between ec2 and heroku, but it didn't work. I've even tried allowing all inbound access on port 80 (even though that's terrible for the internet).
So, my question is, what are the security settings that I should enable for my ec2 instance in order to allow Heroku to proxy through it?
Heroku dynos all run on machines within Amazon's EC2 us-east-1 data center. They do not have any restrictions/firewalls on outgoing connections.
As long as you have the proper Security Group settings to allow the connections from your dynos to your own EC2 instance, you should be good.
It sounds like you haven't correctly opened up access from within us-east-1 to your instance. Double check your security group.
Information on how to edit the correct security group:
Check which security group you are using for your instance: see the value of the Security Groups column in your instance's row. This is important; I had changed rules for the default group, but my instance was under the quickstart-1 group when I had a similar issue.
Go to the Security Groups tab, then the Inbound tab, select HTTP in the "Create a new rule" combo box, leave 0.0.0.0/0 in the source field, click Add Rule, then Apply rule changes.
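The same rule can also be added programmatically. A sketch using the aws-sdk-ec2 gem (the group name quickstart-1 is taken from the steps above; region and AWS credentials are assumed to be configured):

require "aws-sdk-ec2" # gem "aws-sdk-ec2"

ec2 = Aws::EC2::Client.new(region: "us-east-1")

# Open port 80 to all sources, mirroring the console steps above.
ec2.authorize_security_group_ingress(
  group_name: "quickstart-1",
  ip_permissions: [{
    ip_protocol: "tcp",
    from_port:   80,
    to_port:     80,
    ip_ranges:   [{ cidr_ip: "0.0.0.0/0" }]
  }]
)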

Can I use Amazon Elasticache on Heroku?

I am currently using Heroku's Memcached in a Rails 3 app and would like to move over to Elasticache because the pricing is much more favorable. Is this possible? Is the configuration relatively straightforward? Is there anything that I should be aware of as regards the performance?
No, it isn't recommended that you use ElastiCache, as it has no authentication mechanism. As such, anyone can access your cache! This is normally fine, since you would use AWS security rules to restrict which machines can access it to yours. However, this obviously doesn't work with Heroku, since your app runs on a randomly chosen machine of Heroku's.
You could deploy memcached yourself with SASL authentication on an EC2 machine. ElastiCache doesn't really give you anything more than an EC2 machine with memcached pre-installed anyway.
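A sketch of the client side of that setup with the dalli gem (hostname and credentials are placeholders; the memcached server must be started with SASL support enabled, e.g. its -S option):

require "dalli" # gem "dalli"

# Dalli sends SASL credentials over memcached's binary protocol.
cache = Dalli::Client.new("mc.example.com:11211",
                          username: "app",
                          password: "s3cret")

cache.set("foo", "bar")
puts cache.get("foo")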
There is another option: MemCachier.
(Full disclaimer: I work for MemCachier.)
There is another memcache provider on Heroku that is significantly cheaper than the membase-provided one. It's called MemCachier; the add-on home page is here.
It's comparable in price to ElastiCache depending on your cache size and whether you use reserved instances or not (at very large cache sizes ElastiCache is cheaper).
Update (June 2013): The membase memcache add-on has shut down, so MemCachier is now the only provider of memcache on Heroku.
Please reach out to me if you need any help even if you go with ElastiCache.
DANGER: I do NOT recommend using this solution for production use. While it does work, @btucker pointed out that it allows any Heroku-hosted app to access your ElastiCache cluster.
Yes you can. The setup is similar to the guide Heroku has on Amazon RDS. The steps that differ go like this:
Follow the "Get Started with Amazon ElastiCache" guide to create a cache cluster and node
Install the ElastiCache Command Line Toolkit
Allow Heroku's servers ingress to your ElastiCache cluster like the RDS guide explains but replace the rds- commands with elasticache- ones:
elasticache-authorize-cache-security-group-ingress \
  --cache-security-group-name default \
  --ec2-security-group-name default \
  --ec2-security-group-owner-id 098166147350 \
  --aws-credential-file ../credential-file-path.template
# The --aws-credential-file option is not necessary if your
# AWS_CREDENTIAL_FILE environment setting is configured.
Set a Heroku config value for your production app with your cluster's hostname:
heroku config:set MEMCACHE_SERVERS=elasticachehostname.amazonaws.com
After that, follow the Memcache Rails setup, and you're set.
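The Rails side of that setup amounts to pointing the cache store at the config var set above. A sketch assuming the dalli gem, which is what the Memcache Rails setup uses:

# Gemfile: gem "dalli"
# config/environments/production.rb
# MEMCACHE_SERVERS is the config var set with `heroku config:set` above.
config.cache_store = :dalli_store, ENV["MEMCACHE_SERVERS"]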
It's worth noting that while @ssorallen's answer above will work as described, it also allows ANY Heroku-deployed app to access your memcached server. So if you store anything at all confidential, or you're concerned about other people making use of your ElastiCache cluster, don't do it. In the context of RDS you have access control built into the database, but memcached has no such authentication supported by ElastiCache. So opening up the security group to all of Heroku is a pretty big risk.
There are several Heroku addons that will kinda solve this problem. They provide a SOCKS5 proxy with a static IP address that you can whitelist.
https://elements.heroku.com/addons/proximo
https://elements.heroku.com/addons/quotaguardstatic
https://elements.heroku.com/addons/fixie-socks
You can also do this yourself by setting up your own SOCKS5 proxy on ec2.
Note the caveats here though:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Access.Outside.html
It's slower, unencrypted, and some NAT monkey business will be required to get it working.
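Client-side, the socksify gem can route a Ruby app's TCP traffic through such a proxy. A sketch using Fixie Socks (the FIXIE_SOCKS_HOST config var and its user:pass@host:port format are assumptions based on that add-on's conventions; the cluster endpoint is a placeholder):

require "socksify" # gem "socksify"
require "dalli"

# Parse the proxy credentials and endpoint from the add-on's config var.
creds, endpoint = ENV.fetch("FIXIE_SOCKS_HOST").split("@")
proxy_user, proxy_pass = creds.split(":")
proxy_host, proxy_port = endpoint.split(":")

# Route all TCPSocket connections through the SOCKS5 proxy.
TCPSocket.socks_server   = proxy_host
TCPSocket.socks_port     = proxy_port.to_i
TCPSocket.socks_username = proxy_user
TCPSocket.socks_password = proxy_pass

cache = Dalli::Client.new("my-cluster.abc123.cfg.use1.cache.amazonaws.com:11211")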
If you are using Heroku Private Spaces, then it should be possible using VPC peering. Follow the instructions here so that your AWS VPC and Heroku VPC can access each other's resources:
https://devcenter.heroku.com/articles/private-space-peering
Once you have the above setup working, just create an ElastiCache cluster in the AWS VPC and allow access from the dyno CIDR ranges in the AWS security group, or from the complete Heroku VPC CIDR, and your dynos will be able to access the ElastiCache URLs. I was able to get a working setup for Redis, and it should work for any other resource in the AWS VPC.
