I have an AngularJS front-end running on an Nginx server that sends requests to a Rails API backend running on a Puma application server. The application runs on an Amazon AWS EC2 instance.
The Rails API is listening on port 8081.
Given this architecture, I had to open HTTP port 8081 in AWS so that the backend could receive requests from the front-end.
I have a domain, so all requests are supposed to come from www.domain.com. However, I have noticed that if I use my EC2 instance's public hostname, for example http://ec2-<ip>.eu-west-1.compute.amazonaws.com:8081/users, the Rails API serves up all of my users' information.
How can I avoid this security hole? Where should I block this: in the AWS configuration, in my Rails API CORS configuration, or somewhere else?
This seems to be an authorization bug in your Rails API.
Which controller answers the route /users?
Let's say it is UsersController: in that case, you could have an action like
def index
  @users = User.all
end
or something similar that returns the information you see.
It is difficult to give you a solution without knowing whether you need this action at all (maybe it is just auto-generated boilerplate code) or whether you simply want to hide it from anyone who is not an administrator...
Whoever wrote the back-end API should fix this for you based on your specifications.
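For example, a minimal sketch of locking the action down, assuming some authentication layer is already in place; authenticate_user!, current_user and admin? are placeholders for whatever your app actually uses:

class UsersController < ApplicationController
  # Hypothetical filters: swap in your real authentication/authorization
  # mechanism (Devise, token auth, Pundit, etc.)
  before_action :authenticate_user!
  before_action :require_admin, only: [:index]

  def index
    @users = User.all
    render json: @users
  end

  private

  def require_admin
    head :forbidden unless current_user.admin?
  end
end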
I made a Docker container with a pretty simple web app: https://github.com/liquidcarbon/dockerflask2fa
The whole thing behaves well locally and when accessed via the ELB endpoint:
http://dockerflask2faloadbalancer-f10e5f558aaa921f.elb.us-east-1.amazonaws.com:5000
But when I use my CloudFront distribution that lives on my domain, logging in does not work: I get a "CSRF tokens do not match" message when registering a new user or when logging in as an existing user.
https://flask.albond.xyz
The CloudFront cache policy was set to CachingDisabled.
I'm new to web security, and I'd appreciate your help.
Looks like the caching & cookie behavior needs to be tweaked in CloudFront: https://github.com/liquidcarbon/dockerflask2fa
I would like to integrate my cloud service into Heroku as an add-on. I read the available tutorials on how to do this, but it is still not clear to me: https://devcenter.heroku.com/articles/building-a-heroku-add-on#provisioning
I couldn't understand the role of the application that we create from a template (Sinatra for example) using kensa.
Is it an intermediate between Heroku and the cloud service?
Thanks in advance.
Actually, Heroku needs two things:
An addon-manifest.json file that describes all the information Heroku needs. This JSON file contains two important URLs:
'base_url'
'sso_url'
An application that serves the Heroku-specific API and responds with the corresponding JSON to provisioning/deprovisioning/plan-change requests. These requests point to 'base_url'.
So, if you own your cloud service's code and can add new API endpoints, then you don't need any application based on the kensa template: add the necessary API controllers directly to the service.
But if you can't change the cloud service, then you're right: the kensa template is a ready-to-use intermediary between Heroku and your service.
In the case of the Sinatra template, you just need to put the necessary API calls to your cloud service in the "provision" method of the app.rb file, deploy the app somewhere, and run 'kensa push' for your addon-manifest.json (don't forget to update base_url to yours).
Good luck!
Bare-minimum API routes for a Heroku add-on backed by your cloud service (a sketch follows the list):
POST request to '/heroku/resources' - for provisioning
DELETE request to '/heroku/resources/:id' - for deprovisioning
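A rough Sinatra sketch of those two routes, assuming a hypothetical MyCloudService client for your own service; check the exact payload and response formats against Heroku's add-on docs:

require 'sinatra'
require 'json'

# POST /heroku/resources - called when a user installs the add-on
post '/heroku/resources' do
  payload = JSON.parse(request.body.read)                    # Heroku sends a JSON body
  resource = MyCloudService.create(plan: payload['plan'])    # hypothetical helper
  content_type :json
  # The id returned here is what Heroku passes back as :id on later requests.
  { id: resource.id, config: { 'MYSERVICE_URL' => resource.url } }.to_json
end

# DELETE /heroku/resources/:id - called when the add-on is removed
delete '/heroku/resources/:id' do
  MyCloudService.destroy(params[:id])                        # hypothetical helper
  status 200
end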
If you really want to sell it to Heroku users, then you should do more:
Add support for Heroku single sign-on.
This is one more API route: POST to '/heroku/sso', but you can change it in the addon-manifest.json file.
PUT '/heroku/resources/:id' for the plan-change request. Note that ':id' is the id you provided to Heroku in your response during provisioning.
If you implement SSO, the user can click on your add-on on the Heroku app's resources page and be redirected straight to your service, bypassing any login forms.
You can show just a short summary of the user's resource on the page after SSO.
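A sketch of what the SSO route might look like under the same assumptions; the actual token verification has to follow Heroku's single sign-on spec, which is only hinted at here:

enable :sessions

post '/heroku/sso' do
  # Verify params[:id], params[:timestamp] and params[:token] against your
  # sso_salt as described in Heroku's SSO documentation (hypothetical helper).
  halt 403 unless valid_heroku_sso_token?(params)
  session[:resource_id] = params[:id]
  # Land the user on a page showing a short summary of their resource.
  redirect to("/resources/#{params[:id]}")
end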
We have a Rails app that we run on Unicorn (2 workers) and nginx. We want to integrate a 3rd-party API where processing a single request takes between 1 and 20 seconds. If we simply create a new controller that proxies to that service, the entire app suffers: it only takes 2 people making requests to that service through our API to tie up both workers, and for up to 20 seconds the rest of the users can't access the rest of our app.
We're thinking about 2 solutions.
Create a separate node.js server that will do all of the requests to the 3rd party API. We would only use Rails for authentication/authorization in this case, and we would redirect the requests to node via nginx using X-Accel-Redirect header (as described here http://blog.bitbucket.org/2012/08/24/segregating-services/)
Replace Unicorn with Thin or Rainbows! and keep proxying in our Rails app, which could then, presumably, allow us to handle many more concurrent connections.
Which solution would we be better off with? Or is there something else we could do?
I personally feel that node's event loop is better suited for the job here, because in option 2 we would still be blocking many threads waiting for HTTP requests to finish, whereas in option 1 we could keep serving other requests while waiting for the slow ones to finish.
Thanks!
We've been using the X-Accel-Redirect solution in production for a while now and it's working great.
In the nginx config, under server, we have entries for external services (written in node.js in our case), e.g.
server {
  ...
  location ^~ /some-service {
    internal;
    rewrite ^/some-service/(.*)$ /$1 break;
    proxy_pass http://location-of-some-service:5000;
  }
}
In Rails we authenticate and authorize the request, and when we want to pass it on to some other service, in the controller we do something like
headers['X-Accel-Redirect'] = '/some-service'
render :nothing => true
Now Rails is done processing the request and hands it back to nginx. Nginx sees the X-Accel-Redirect header and replays the request against the new URL - /some-service - which we configured to proxy to our node.js service. Unicorn and Rails can now process new requests even if node.js+nginx is still processing that original request.
This way we're using Rails as our main entry point and gatekeeper of our application - that's where authentication and authorization happens. But we were able to move a lot of functionality into these smaller, standalone node.js services when that's more appropriate.
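For completeness, a sketch of what such a controller action might look like in full; forwarding the remainder of the original path and query string via request.fullpath is an assumption here, not necessarily how the original setup does it:

class ProxyController < ApplicationController
  before_filter :authenticate_user!   # whatever auth filter the app already uses

  def some_service
    # Hand the already-authorized request back to nginx, which replays it
    # against the internal /some-service location shown above.
    headers['X-Accel-Redirect'] = "/some-service#{request.fullpath}"
    render :nothing => true
  end
end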
You can use EventMachine in your existing Rails app, which would mean much less rewriting. Instead of making a net/http request to the API, you would make an EM::HttpRequest request to the API and add a callback. This is similar to the node.js option but does not require a special server, IMO.
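A minimal sketch with the em-http-request gem, shown standalone (inside a Thin/EventMachine-based app the reactor would already be running); the URL and response handling are placeholders:

require 'eventmachine'
require 'em-http-request'

EventMachine.run do
  # Non-blocking request to the slow third-party API (placeholder URL)
  http = EventMachine::HttpRequest.new('https://thirdparty.example.com/slow').get

  http.callback do
    puts "Got #{http.response_header.status}, #{http.response.bytesize} bytes"
    EventMachine.stop
  end

  http.errback do
    puts "Request failed: #{http.error}"
    EventMachine.stop
  end
end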
I want to restrict crawler access to my Rails app running on Heroku. This would have been a straightforward task if I were using Apache or Nginx, but since the app is deployed on Heroku I am not sure how I can restrict access at the HTTP server level.
I have tried using a robots.txt file, but the offending crawlers don't honor robots.txt.
These are the solutions I am considering:
1) A before_filter in the rails layer to restrict access.
2) Rack based solution to restrict access
I am wondering if there are any better ways to deal with this problem.
I have read about honeypot solutions: you have one URI that must not be crawled (disallow it in robots.txt). If any IP requests this URI, block that IP. I'd implement it as Rack middleware so the hit does not go through the full Rails stack.
Sorry, I googled around but could not find the original article.
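A rough sketch of such a middleware, assuming a hypothetical honeypot path /trap and a simple in-memory blocklist (on Heroku with several dynos/workers you would want to persist the blocklist somewhere shared, e.g. Redis):

# Insert early in the stack, e.g. config.middleware.insert_before(0, CrawlerTrap) in Rails.
class CrawlerTrap
  HONEYPOT_PATH = '/trap'.freeze   # hypothetical path; also disallow it in robots.txt

  def initialize(app)
    @app = app
    @blocked_ips = {}
  end

  def call(env)
    request = Rack::Request.new(env)

    # Any client that requests the honeypot URI gets its IP blocked.
    @blocked_ips[request.ip] = true if request.path == HONEYPOT_PATH

    if @blocked_ips[request.ip]
      [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
    else
      @app.call(env)
    end
  end
end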
I am in the early stages of building an app using Rails 3. User authentication is powered by Authlogic, which I have set up pretty much as standard (as per the example docs), and everything is working as expected locally.
I have just deployed the app to a clean server install of CentOS 5.4 / Nginx / Passenger so staff can start to log in and enter content, etc. However, we're a long way from this being ready for public eyes, so I have used Nginx's basic auth module to keep the entire site behind another level of authentication.
Unfortunately Authlogic's authentication and NginX's basic authentication seem to be conflicting with one another. If basic auth is on then it is impossible to log in with Authlogic, yet if I disable basic auth then Authlogic works as expected.
I haven't posted any code as I'm really not sure what code would be relevant. Is this a known issue, and are there any changes I can make to the configuration to get around it?
I can answer my own question (after several hours of looking in completely the wrong place). A good read-up on Authlogic::Session::Config did the trick.
class UserSession < Authlogic::Session::Base
  # Stop Authlogic from trying to authenticate with the HTTP Basic Auth
  # credentials that Nginx's basic auth layer attaches to every request.
  allow_http_basic_auth false
end
I haven't tried Rails 3 yet, so my answer will be more general, and I don't know Nginx's basic auth module.
If your team is connected locally, then you can make the server accessible from the local network only.
If you need access via the Internet, then you can hide it behind a VPN.
You can also restrict access to the site to local IPs only and give SSH access to anybody who needs it. It is easy to create a SOCKS proxy via SSH (on Linux: ssh -D 8080 user@yourserver.com, where 8080 is the port number; then set the SOCKS proxy in your browser and you can open yourserver.com:3000).
I think Nginx also lets you allow certain IPs and deny all others, so you can use it for access restriction too.
And you can also temporarily add before_filter :require_login to ApplicationController :), so only the login page will be available to the world (a rough sketch follows).
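A minimal sketch of that last option, assuming the usual Authlogic-style current_user helper; adjust names to your app:

class ApplicationController < ActionController::Base
  # Temporary while the site is staff-only: force login for every action.
  before_filter :require_login

  private

  def require_login
    # current_user comes from the standard Authlogic setup (UserSession / current_user_session)
    redirect_to login_url unless current_user
  end
end

You would skip the filter in the sessions controller (skip_before_filter :require_login) so the login page itself stays reachable.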
Hope it helps!