Cross Origin post to Rails site - ruby-on-rails

I'm using the rack-cors gem with Rails:
https://github.com/cyu/rack-cors
I need to whitelist ONE domain so that requests from that domain, and only that domain, are allowed through.
I would think that this would allow traffic from the whitelisted domain. I am making a POST request from https://reflective-basket.surge.sh/ to my Rails app. (The domain name has been modified for the sake of this post on Stack Overflow.)
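For reference, a minimal rack-cors initializer along these lines might look like the following (a sketch based on the gem's README, using the origin from this question):

# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    # Only this one origin may make cross-origin requests; all others are rejected.
    origins 'https://reflective-basket.surge.sh'
    resource '*', headers: :any, methods: [:get, :post, :options]
  end
end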
However, POST requests will not go through. The destination Rails app says:
The page you were looking for doesn't exist.
You may have mistyped the address or the page may have moved.
If I remove the protect_from_forgery line (protect_from_forgery with: :exception) from the application controller, the app of course allows all traffic through, but this defeats the purpose of having a secure app.
I'm sure this is a common problem (needing a form on website A to submit data to website B, but only from a certain domain), but this just doesn't seem to work as I would have hoped. Any pointers? I'm open to making this work in any way that's possible.
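For what it's worth, a common middle ground in Rails is to relax CSRF protection only for the API controllers rather than removing it app-wide (a sketch; the Api::BaseController name is hypothetical):

# Hypothetical API base controller: fall back to an empty session when the
# CSRF token is missing, instead of raising, so cross-origin JSON POSTs can
# go through while regular form endpoints keep full CSRF protection.
class Api::BaseController < ApplicationController
  protect_from_forgery with: :null_session
end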

Related

How do I set up a Rails page as a subdomain of another site?

We're trying to create pages in our Rails app that will eventually live on a subdomain of another partnering site. This would be like StatusPage, which allows users to create a status page with their account on the StatusPage site and then attach it to their own subdomain (e.g. status.usersite.com).
For example, if we wanted one of our pages (www.oursite.com/users/bobsplumbing) to be a subdomain on another site (ourservice.bobsplumbing.com), how would we go about it?
If it's useful info, we use Heroku to host the Rails app and we also utilize Route 53 and Cloudflare.
From your example I understand that you want to support multiple customer domains, with your page redirecting to each customer's domain.
You would be better off doing the redirects in NGINX (or whatever server you use), since they are faster and are cached by the browser after the initial load.
To answer your question, you can add this code to your routes:
sites = %w(bobsplumbing catsandboots)
sites.each do |name|
  # Redirect each user's page to the partner subdomain.
  # Note: via: is required for match in Rails 4+.
  match "users/#{name}" => redirect("https://ourservice.#{name}.com"), via: :get
end
You can also have a look at the apartment gem.
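As an alternative to redirecting, a Rails route can also be constrained on the request host so the page is served directly on the partner domain once its DNS (CNAME) points at your app. A sketch, with illustrative host and controller names:

# Serve the status page directly when the request arrives on the
# CNAME'd partner host, instead of redirecting away from it.
get '/', to: 'status_pages#show',
         constraints: { host: 'ourservice.bobsplumbing.com' }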

Rails links in mailer are invalid (Liquid gem)

There are templates which are written by admins, and in the mail there is the possibility to enter a link (not the Rails way). There is an editing menu which generates the following basic HTML code:
<a href="{{user.owner_name}}">company’s profile</a>
where user.owner_name is domain.com/user/user_name
I'm not talking about localhost, because it possibly would not work there anyway; I'm talking about the production server.
I receive the email with a broken link (if I click on it, it does not open), but if I copy the link I get:
x-webdoc://73A3A2DC-F22E-4558-8853-C6A57985EE7C/mydomaine.com/user/
Why does this appear?
EDIT
It seems it's related to macOS. The problem appears when I view the letter through the Mail app or the Safari browser.
Now I need advice on how to avoid this problem.
I would argue that example.com/user/user_name is not a useful URI in the context of an email, because it is missing a protocol (like: http://example.com/user/user_name). Without the protocol it could be misunderstood as a relative URL, which may lead to security issues or at least is useless in the context of an email client.
From that point of view, it is not surprising to me that some email clients or web mailers are trying to be smart and protect the user by annotating the URL in some way.
In this example the added x-webdoc: indicates that the user has to make the decision on what application to use to open that link because without a proper protocol it is not obvious what application will be able to handle the URI. See What is x-webdoc?
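Following that reasoning, one fix on the Rails side is to normalize user-entered URLs to carry an explicit protocol before rendering them into the mail. A minimal sketch (the absolute_url helper name is made up for illustration):

# Hypothetical helper: prepend a protocol to user-supplied URLs that lack
# one, so mail clients treat them as absolute links rather than guessing.
def absolute_url(url)
  url.match?(%r{\Ahttps?://}i) ? url : "https://#{url}"
end

absolute_url('domain.com/user/user_name') # => "https://domain.com/user/user_name"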

Subdomain security in Rails

Let's say I'm trying to create an application called Blue. Blue is a Ruby on Rails application that turns the background of any website blue. It also allows users to log in and keep track of the websites they've visited and turned blue.
In order to turn a website's background blue, I've created a web proxy that inserts <link rel="stylesheet" href="http://www.example.com/blue.css" type="text/css"> into the response's body. The proxy is implemented as a Rack application and is placed inside the Rails routes using the approach from the Rack in Rails 3 Railscast:
root :to => BlueProxy, :constraints => { :subdomain => "proxy" }
I'm very concerned about security with this approach. I know by default the domain for the cookies in my application would be .example.com. If the user typed in a malicious URL, the website could manipulate the user's account. I could fix this by only allowing the www subdomain for cookies in the application. However, I'd also like the proxy to be able to store cookies for the proxied site as well.
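A minimal sketch of that cookie scoping, assuming the default cookie_store (the application class and cookie name are illustrative):

# config/initializers/session_store.rb: scope the session cookie to the
# www host only, so sibling subdomains such as proxy.example.com never
# receive it.
Blue::Application.config.session_store :cookie_store,
  :key    => '_blue_session',
  :domain => 'www.example.com'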
Here are my three questions:
Is this a bad approach? Is there a better way to solve this problem?
What's the best way to keep sibling subdomain cookies separate in Rails?
Are there any other security concerns I'm missing?
This approach is dangerous, and I caution you against running a proxy for several reasons:
It brings up a host of legal issues ranging from people accessing illegal content to your hosting content for your own benefit (and modifying it).
Your bandwidth (and hosting fees) will explode if the site gets popular.
Loading content inside an iframe has UX issues, like the browser back button not quite behaving as the user expects.
Running a proxy opens up several more attack vectors to your site (e.g. sending a permalink to a malicious site proxied through your site) that you'll have to consider from a security perspective.
Instead of running an open proxy (okay, maybe it's not completely open, but how hard is it for someone to sign up?) on your back end, consider using a browser extension or Greasemonkey script on the front end that can get its set of rules from your Rails app and then apply the stylesheet changes on the client side.
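On the Rails side, the rule set the extension fetches could be as small as a JSON endpoint. A sketch, where the controller, current_user helper, and blue_sites association are all made up for illustration:

# Hypothetical endpoint the extension polls for the hosts the current
# user wants restyled.
class RulesController < ApplicationController
  def index
    render :json => current_user.blue_sites.select(:host)
  end
end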

RESTful web services with complex actions (verbs)

I am attempting to construct a web app in which the back end is a complete RESTful web service. I.e. the models (business logic) would be completely accessible via HTTP. For example:
GET /api/users/
GET /api/users/1
POST /api/users
PUT /api/users/1
DELETE /api/users/1
What's the proper way to provide more methods that aren't CRUD (verbs/actions)? Is this considered more of an RPC-API domain? How would one properly design the RPC API to run on top of the RESTful API?
For example, how would I elegantly implement a forgot-password method for a user?
POST (?) /api/users/1/forgot
The application (controllers/views) would then use HTTPS requests (HMVC-like) to access the models and methods. What would be best for authentication: OAuth, or Basic Auth over HTTPS?
Although this is "best practice" for scalability later on, am I over-engineering this task? Is it best to just follow the typical MVC model and provide a very basic API?
This question has been mostly inspired by ASP.NET's MVC 4 (WebAPI) and a NodeJS module https://github.com/marak/webservice.js
Thanks in advance
I recently started learning REST, and when developing a new web service I think you're doing the right thing to consider it.
You are correct in your assumptions about the custom verbs. REST acknowledges that some actions need to be handled in a different way, and custom verbs don't violate the requirements. You should use POST when communicating with the server, but the verbs are normally written in the imperative. Instead of forgot, I'd probably use remind or something similar. I.e., you should give instructions on what to do, rather than describe what happened without clearly indicating what you expect as a result.
Furthermore, the preferred way to construct the service is to include api in the domain name and drop it from the path. I'd write your particular example like this:
POST /users/1/remind HTTP/1.1
Host: api.myservice.example.com
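In Rails routing terms, such a custom member action might be declared like this (a sketch, using the remind verb suggested above):

# Generates POST /users/:id/remind, an imperatively named member action.
resources :users do
  member do
    post :remind
  end
end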
Session handling in REST is a bit tricky. The cleanest way of doing it would probably be to authenticate with username and password on every single request, using Basic access authentication. However, I believe that it's rarely done like that. You should read this question (and its accepted answer): OAuth's tokens and sessions in REST
EDIT: I'd also drop the trailing forward slash in the GET request in your example. If the service is truly RESTful, then the resource is not supposed to be accessible from both /users/ and /users. A particular resource should have one and only one URL pointing to it. A URL with a trailing slash is actually distinct from one without. REST promotes dropping it, and a RESTful web service should not accept both (which in the case of GET means responding with 200 OK), although it may redirect from one to the other. Otherwise, it might lead to confusion about the proper URL, duplicate caching, weeping and gnashing of teeth. :)
EDIT 2: In RESTful Web Services by Richardson & Ruby you're discouraged from putting the new verb in the path. Instead, you could append something like ?_method=remind. It's up to you which one you choose, but please remember that you're not supposed to handle these requests with GET, regardless of what you choose. A GET must not change the resource, and should not cause side effects if the user browses back and forth in the history. Otherwise, you might end up resending the password several times. Use POST instead.

SSL-secured website best practices

I have a website (www.mydomain.com) that is secured with an SSL certificate. It is an ASP.NET website, and I have forced certain pages via code to require the https:// prefix. If they don't have it, the code redirects them to the https:// equivalent. Is this a good practice? Is there an easier way to do this? Not every single page requires SSL.
Also, when users access my URL in the form mydomain.com instead of www.mydomain.com, they get a certificate error because the certificate was registered for www.mydomain.com. Should I use the same approach as with the http:// and https:// issue I mentioned above? Or is there a better way of handling this?
Your approach sounds fine. In my current project, I force HTTPS when a user goes to my login page (based on a config flag, which lets me test locally without needing a cert). This allows me to access other pages unsecured, which is handy.
I have a couple of places where our server grabs the output of other pages (rendering HTML to PDF and fetching dynamic images, for example). Because of our environment, our server can't resolve its own public name, so if we were to force SSL site-wide we'd have to add our internal IP address (or fake the domain name).
As for your second question, you have two options to handle www.example.com vs. example.com. You can buy a certificate that allows you to have multiple domain names; these are known as UCC certificates.
Your second option is to redirect example.com to www.example.com, or the other way around. Redirecting is a great option if you want your content to be indexed by Google or other search engines, since they will otherwise see www.example.com and example.com as two separate sites. That means links to your site will be split between the two, reducing your overall page rank.
You can configure sites in IIS to require a Cert but that would A) generate an error if someone isn't visiting with https and B) require all pages to use https. So, that won't work. You could put a filter on IIS that checks all requests and redirects them as https calls if they are on your encryption list. The obvious drawback here is the need to update your list of pages every time a new page is added (e.g. from an XML file or database) and restart the filter.
I think that you are probably correct in building code into the pages that require https that redirects to an https version if they arrive via http. As far as your cert error goes, you could redirect with a full path (that includes the www) instead of a relative path to fix this problem. If you have any questions about how to detect whether the call uses https OR how to get the full path of the current request please let me know. Both are pretty straightforward but I've got sample code if you need it.
UPDATE - Josh, the certs that handle multiple subdomains are called wildcard certs. The problem is that they are quite a bit more expensive than standard certs.
UPDATE 2: One other thing to consider is to use a Master page or derived class for the pages that need SSL. That way, instead of duplicating the code in each page you can just declare it as type SSLPage (or use the corresponding Master page) and have the Master/Parent class handle the redirect. Again, you'll need to do some URL processing if you take this approach but it is pretty trivial.
The following may help you:
If it is fine to display all your website pages with https://, then you can simply update your code to use https:// and set up two bindings in IIS: one for http and another for https. That way your website is accessible through either protocol.
Your visitors are receiving a name-mismatch error because the common name used in your SSL certificate is www.mydomain.com. Namecheap provides RapidSSL certificates through which you can secure both names under a single SSL certificate. You can purchase this SSL certificate for www.mydomain.com and it will automatically secure mydomain.com (i.e. without www).
Another option is to write code that redirects visitors to the www.mydomain.com website even if they browse to mydomain.com.
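Since the rest of this page is Rails-focused, here is the same canonical-host redirect sketched as a Rack middleware (host names are illustrative; this is one pattern among several):

require 'rack'

# Redirect bare-domain requests to the canonical https://www host while
# preserving the requested path and query string.
class CanonicalHost
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    if request.host == 'mydomain.com'
      [301, { 'Location' => "https://www.mydomain.com#{request.fullpath}" }, []]
    else
      @app.call(env)
    end
  end
end

# Enabled in config.ru with: use CanonicalHost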