Submitting dynamic forms on another website - ruby-on-rails

I'm trying to submit input to the form, and parse the results in a RoR app. I've tried using mechanize, but it has some trouble with the way the page dynamically updates the results. It doesn't help that most fields are hidden.
Is there any way to get mechanize to do what I'm looking for, or are there any alternatives to mechanize which I can use?

So whenever I want to do something like this, I go with the gem selenium-webdriver. It spawns a real browser (supports all major brands) and lets you control it with ruby code. You can do almost everything a real user could do. In addition, you have access to the (rendered) dom, so javascript generated content is not a problem.
Performance is much slower than with pure library clients, so it's not a good fit for use in a web request cycle.
http://rubygems.org/gems/selenium-webdriver

Related

Is there a way to overwrite an html file dynamically in a phone gap project?

So, my cohorts and I have been doing some development with Phonegap + jQueryMobile for an application we've been planning to roll out. We switched off doing this natively for iOS and Android, since it's mostly html anyway, and phone gap seemed like a great way to do this without having to write a whole bunch of platform specific code (although we're more or less newbs when it comes to this type of development.)
Previously, all the html, javascript, etc, was going to be housed in the app itself. For the most part, this seemed to work for us, and we advanced our design/testing/etc accordingly. However, things have changed in our approach. Each of our customers (once they go through a log-in/authentication) has a 'starting' html file (essentially 'their' index.html) that is specific to said customer. This was different from before where everyone had the same files.
Now I've played around with storing certain scripts on the web server to try and off-set opening the html running on the server, but it's not really that useful when trying to integrate some of the functionality like the camera or some of the other plugins we're trying to use. It's essentially a form-based application, so this is the ONLY file that will change from customer to customer. Also, this will not be something that changes frequently. For the most part, it will be setup for a customer ONCE AND ONLY ONCE, and it truly is unlikely to change.
Is there a way to more or less pull down this html file from a web server to replace the one that is stored internally in the app, and then load that version? Would doing something like that (if it's even possible) violate Apple's or Google's App guidelines? Or is what I'm describing not even possible in the framework?
The only other thing I can think of would be to change the stored 'index.html' file to not load any of the form itself, but rather make ajax (or equivalent) calls to do so, but I've been told by our developer working the web design side of things that it would be a huge pain.
Any insight/knowledge would be appreciated.
If you really need to do this (I don't quite understand why), I think your best bet is to go the AJAX route. At least Apple does not look kindly on applications that update themselves without going through the App Store submission process.
You can use the same index.html for all and a script config.js that is responsible for loading/unloading the resources/html of each user at app start.
All you need to do then is save those config JSON values in localStorage and go.

Does it make sense to use Ember 1.0RC1 with jQuery mobile in a single page web app?

I've been trying to figure out the best way to use Rails, Ember 1.0RC and jQuery mobile but with no success.
I'm building a simple single web app with Rails as a backend that provides simple JSON. Now I know that Ember and JQM don't like each other and you have to write custom helpers to render Ember views. This makes things quite complicated.
I know that there are a few examples out there but they are quite obsolete since Ember was under heavy development and there have been many changes.
I'd like to hear from experienced developers if it is a good idea or not to use Ember with JQM in my case?
Maybe I should go for another MVC framework (which one?)?
Sorry for the question being pretty open but I could not find any reliable resources on the web.
// edited on March 20
I've watched 2 Ember screencasts (from Peepscode and Railscasts) and they shed some light on the matter. Now I know a little bit more. But let me explain what I'm after.
I'm building an internal 'kudos' app based on the merits system. That is every Monday an employee receives 20 'kudos' to give other co-workers. The design is as follows:
the main page shows a list of all employees and at the top, also as a list item, there is a position that belongs to the user himself. It shows for example how many kudos he or she has left to give and how many he or she received from others. The owner does not know how many kudos other employees received. But I think there'll be a 'Top 3 kudoers' page.
When you tap/hold an item, a modal dialog will appear that will ask you if you want to give a kudo.
It is done. But what remained is porting it to Ember.
Now, after watching screencasts I kind of know what to do, but what bugs me is how to make JQM internal hash pages and Ember router a breeze.
I saw that one page app in Ember uses urls like these:
myapp/#/users/user
whereas JQM uses internal pages like this:
myapp/#somepage
I'd like to keep the app as simple as possible (following Ember 'convention over configuration') and make use of JQM internal pages.
So my question is how can they both go with each other?
I'd like to hear from experienced developers if it is a good idea or not to use Ember with JQM in my case?
I've been asking around about this and pretty much everyone who has tried to use JQuery Mobile with ember has advised against it.
Not that it can't be done. It's just that most people have determined the challenges outweigh the benefits. Especially if you are new to one or both frameworks.
The best example of JQM + Ember integration I've seen is by TOMASZ and can be found here: https://coderwall.com/p/ylogzg
Thing is, that app does not use ember router at all. For sure it won't help you make JQM internal hash pages and Ember router a breeze.

What tools to use for a website with lots of "realtime" page updates (coming from a Rails background)?

We are planning to make a "large" website for I'd say 5000 up to many more users. We think of putting in lots of real time functionality, where data changes instantly propagate to all connected clients. New frameworks like Meteor and DerbyJS look really promising for this kind of stuff.
Now, I wonder if it is possible to do typical backend stuff like sending (bulk) emails, cleaning up the database, generating pdfs, etc. with those new frameworks. And in a way that is productive and doesn't suck. I also wonder how difficult it is to create complex forms with them. I got used to the convenient Rails view helpers and Ruby gems to handle those kind of things.
Meteor and DerbyJS are both quite new, so I do expect lots of functionality will be added in the near future. However, I also wonder if it might be a good idea to combine those frameworks with a "traditional" Rails app, that serves up certain complex pages which do not need realtime updates. And/or with a Rails or Sinatra app that provides an API to do the heavy backend processing. Those Rails apps could then access the same databases as the Meteor/DerbyJS app. Does anyone think this is a good idea? Or rather not? Why?
It would be nice if anyone with sufficient experience with those new "single page app realtime" frameworks could comment on this. Where are they heading towards? Will they be able to handle "complete" web apps with authentication and backend processing? Will it be as productive/convenient to program with them as with Rails? Well, I guess no one can know that for sure yet ;-) Well, any thoughts, guesses and ideas are welcome!
For things like sending bulk emails and generating PDFs, Derby lets you simply use normal Node.js modules. npm now has over 10,000 packages, so there are packages for most things you might want to do on the server. Derby doesn't control your server, and it works on top of any normal Express server. You should probably stick with Node.js code as much as possible and not use Rails along with Derby. That is not to say that you can't send messages to a separate Rails app, but since you already have to have a Node.js app running to host Derby, you might as well use it for stuff like this.
To communicate with such server-side code, you can use Derby's model events. We are still exploring how this kind of code works and we don't have a lot of examples, but it is something that we will have a clear story around. We are building an app ourselves that communicates with an email server, so we should have some real experience with this pretty soon.
You can also just use a normal AJAX request or send a message over Socket.IO manually if you don't want to use the Derby model to do this kind of communication. You are free to make your own server-side only routes with Express along with your Derby app routes. We think it is nice to have this kind of flexibility in case there are any use cases that we didn't properly anticipate with the framework.
As far as creating forms goes, Derby has a very powerful templating system, and I am working on making it a lot better still. We are working on a new UI components feature that will make it possible to build libraries of self-contained UI widgets that can simply be dropped into a Derby app while still playing nicely with automatic view-model bindings and data syncing. Once this feature is completed, I think form component libraries will be written rather quickly.
We do expect to include all of the features needed for a normal app, much like Rails does. It won't look like Rails or work like Rails, but it will be similarly feature complete eventually.
For backend tasks (such as sending emails, cleaning up the database, generating pdfs) it's better to use resque or sidekiq.
Now, I wonder if it is possible to do typical backend stuff like sending (bulk) emails, cleaning up the database, generating pdfs, etc. with those new frameworks. And in a way that is productive and doesn't suck. I also wonder how difficult it is to create complex forms with them. I got used to the convenient Rails view helpers and Ruby gems to handle those kind of things.
Also, my question is not only about background jobs, but also about stuff one might do during a request, like generating a pdf, or simply rendering complex views with rails helpers or code from gems.
You're mixing metaphors here - a single page app is just a site where the content is loaded without doing a full page reload, be that a front end in pure js or you could use normal html and pjax.
The kind of things you are describing would be done in a background task regardless of the front-end framework you used. But +1 for sidekiq if you're using ruby.
As for notifying all the other users of things that have changed, you can look into using http://pusher.com or http://pubnub.com if you don't want to maintain a websocket server.

Best Practices for Optimizing Dynamic Page Load Times (JSON-generated HTML)

I have a Rails app where I load up a base HTML layout and I fill in the main content with rows of divs from JSON. This works in 2 steps:
Render the HTML
Ajax call to get the JSON
This has the benefit of being able to cache the HTML layout which doesn't change much, but it seems to have more drawbacks:
2 HTTP requests
HTML isn't that complex, the generated html is where all the work is done, so I'm not saving that much on time probably.
Each request in my specific case requires that we check the current user, their roles, and some things related to that user, so those 2 calls are somewhat involved.
Granted, memcached will probably solve a lot of this, I am wondering if there are some best practices here. I'm thinking I could do this:
Render the first page of JSON inline, in a script block, along with the HTML. This would cut out those 2 server calls requiring user authentication. And, assuming 80% of the time you don't need to make the second ajax call (pagination/sorting in this case), that seems like a fairly good solution.
What are your thoughts on how to approach this?
There are advantages and disadvantages to doing stuff like this. In general I'd say it's only a good idea, if whatever you're delaying via an ajax call would delay the page load enough to annoy the end user for most of the use cases on your page.
A good example of this is browsing a repository on github. 90% of the time all you want is to navigate the files, so they use an ajax load to fill in the commit messages per file after the page load.
It sounds like you're trying to do this to speed up or do something fancy for your users, but I think you should consider instead what part is slow, and what speed of page load (and maybe for what information on that page) your users are expecting. As you say, using memcached or fragment caching might well give you the improvements you're looking for.
Are you using some kind of monitoring tool? I'm using the free version of New Relic RPM on Heroku. It gives a lot of data on request times for individual controller actions. Data like that could help you focus your optimization process.
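The plan in the question above, rendering the first page of JSON inline in a script block, has one caveat a defender should check: the HTML parser ends a script block at the first literal `</script>` it sees, regardless of JSON string quoting, so any user-supplied value in that JSON (a comment body, a display name) can terminate the block and inject markup. The following is an added example, not code from the question or from Rails; the `FIRST_PAGE` rows, the `json_for_script` helper and the `LAYOUT` template are constructed for illustration. It contrasts plain `JSON.generate` with a variant that writes `<`, `>`, `&` and the U+2028/U+2029 line terminators as `\uXXXX` escapes, which the browser's `JSON.parse` decodes back to the same object.

```ruby
require 'json'
require 'erb'

# Constructed sample data: the first page of rows that would otherwise be
# fetched by the second (authenticated) Ajax call. The second row is a
# user-supplied comment body chosen to show the breakout problem.
FIRST_PAGE = [
  { id: 1, author: 'alice',   body: 'Looks good to me' },
  { id: 2, author: 'mallory', body: '</script><script>alert(1)</script>' }
].freeze

# Characters that must never appear raw inside an inline <script> block.
# '<' and '>' can form "</script>"; '&' is escaped in case the same output
# is ever placed in an attribute; U+2028/U+2029 are line terminators for
# older JavaScript engines but ordinary characters in JSON. Keys are the raw
# characters, values are the JSON \uXXXX escapes (single-quoted: literal).
SCRIPT_ESCAPES = {
  '<'      => '\u003c',
  '>'      => '\u003e',
  '&'      => '\u0026',
  "\u2028" => '\u2028',
  "\u2029" => '\u2029'
}.freeze

# Unsafe baseline: plain JSON.generate leaves '<' and '>' untouched.
def json_naive(obj)
  JSON.generate(obj)
end

# Safe variant: the same JSON document, but the dangerous characters are
# written as \uXXXX escapes. JSON.parse in the browser yields the same object.
def json_for_script(obj)
  JSON.generate(obj).gsub(/[<>&\u2028\u2029]/) { |ch| SCRIPT_ESCAPES[ch] }
end

# Stand-in for the cached Rails layout: the first page of data is embedded so
# the client can render rows immediately and only paginates via Ajax later.
LAYOUT = <<~ERB
  <div id="rows"></div>
  <script>window.firstPage = <%= json_for_script(rows) %>;</script>
ERB

def render_with_inline_data(rows)
  ERB.new(LAYOUT).result_with_hash(rows: rows)
end

# A page with one script block should contain exactly one closing tag;
# more than one means user data terminated the block early.
def script_block_intact?(html)
  html.scan(%r{</script>}i).length == 1
end

if __FILE__ == $PROGRAM_NAME
  puts render_with_inline_data(FIRST_PAGE)
end
```

In a Rails view the same idea is spelled `<%= raw json_escape(rows.to_json) %>` (ActiveSupport's `ERB::Util.json_escape`); the point of the stand-alone helper here is to show what that escaping has to do and why the naive version fails.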

Security in a Rails app - User submitted data

I'm currently in the process of writing my first Rails app. I'm writing a simple blog app that will allow users to comment on posts. I'm pretty new to Rails, so I'm looking for a bit of guidance on how to address security concerns with user input.
On the front end, I am using TinyMCE to accept user input. It is my understanding that TinyMCE will strip out any suspicious tags (e.g. <script>) from user input before posting to server. It seems that this could be bypassed by disabling javascript on the page, allowing a user to have free rein in the text area. TinyMCE recommends using javascript to create the TextArea. Therefore if the user disables javascript, there will be no text area. Is this the standard solution? It seems like a bit of a hack.
On the back end, what is the best way to strip out malicious code? Would I want to put some sort of validation in the create and update methods inside my comments controller? Is there some functionality built into Rails that can assist with this?
When displaying the information back out to the user, I'm assuming that I don't want to escape the HTML markup (with <%= h *text*%>), because that's how it's stored in the back end. Is this bad practice?
I'm generally a big fan of cleaning out the data prior to popping that stuff into the database. This is a debatable practice, but I usually lean toward this.
I use a modified version of the old white_list plugin to not strip out the html, but to convert anything I do want into a safer format.
<tag>
becomes
&lt;tag&gt;
This way I'm not really altering the content of the submission.
There are some plugins that specifically handle sanitization using a white/black list model.
http://github.com/rgrove/sanitize/ # Have not used, but looks very interesting
http://github.com/imanel/white_list_model # Used, not bad
There is also act_as_sanitized, but I have no real info on that.
And of course using the h().
Your suspicions are justified, but the creation of a text area in javascript won't make you any less vulnerable. A user could always use something like curl to force a form submission without ever visiting your site through a web browser.
You should assume that a user can post malicious scripts into the comments, and escape it on the frontend. Using <%= h(...) %> is one way to do it, or you can use the sanitize method in the same way. It will strip any scripts and escape all other html except for a few common tags that aren't harmful. Documentation for sanitize.
In addition to nowk's suggestions there is also the xss_terminate plugin. I have been using it in some of my applications. I found it to be easy to use, it needs almost no configuration, and has been working like a charm.
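The answers above agree on the model: client-side stripping in TinyMCE is no protection because a client can submit anything directly, so the server must treat every comment as hostile and either escape it with `h()` or run it through an allow-list sanitizer. The block below is an added example, not the white_list plugin, the sanitize gem, xss_terminate or Rails' own `sanitize` helper; the tag list, the scheme list and the sample comments are assumptions made for illustration. It implements the allow-list model in the safest shape: escape everything first (exactly what `h()` does), then turn back into markup only escaped sequences that match a strict pattern, so unknown tags, attributes and event handlers can never survive as HTML.

```ruby
require 'cgi'
require 'uri'

# Tags the comment form is allowed to keep; everything else stays escaped.
# This list and SAFE_SCHEMES are assumptions for the example.
ALLOWED_TAGS = %w[b i em strong p br blockquote].freeze
SAFE_SCHEMES = %w[http https mailto].freeze

# Constructed sample submissions, the kind of thing that reaches the
# controller when a client bypasses TinyMCE (disabled JavaScript, curl).
SAMPLE_COMMENTS = [
  'Nice <b>post</b>!',
  '<script>alert(document.cookie)</script><b onmouseover="alert(1)">hi</b>',
  '<a href="javascript:alert(1)">click</a> <a href="https://example.com/">ok</a>',
  '<img src=x onerror=alert(1)>'
].freeze

# Step 1: escape everything, exactly what ERB's h() does in the view.
# After this call there is no markup left in the string.
def escape_all(text)
  CGI.escapeHTML(text.to_s)
end

# Step 2: re-allow only escaped sequences that match a strict pattern.
# SIMPLE_TAG matches an allowed tag with no attributes at all, so
# "<b onmouseover=...>" never turns back into markup.
SIMPLE_TAG = %r{&lt;(/?)(#{ALLOWED_TAGS.join('|')})\s*/?&gt;}i
# LINK_OPEN accepts <a href="..."> only; the character class excludes
# anything that could close the attribute or the tag once re-emitted.
LINK_OPEN  = %r{&lt;a\s+href=&quot;([^&"'<>\s]+)&quot;&gt;}i
LINK_CLOSE = %r{&lt;/a&gt;}i

# Only absolute URLs with an allow-listed scheme may become real links, so
# javascript: and data: URLs stay as escaped text.
def safe_href?(url)
  uri = URI.parse(url)
  !uri.scheme.nil? && SAFE_SCHEMES.include?(uri.scheme.downcase)
rescue URI::InvalidURIError
  false
end

# Escape-then-re-allow: the output is safe to render with raw/html_safe
# because every '<' in it was produced by one of the two patterns above.
def sanitize_comment(text)
  out = escape_all(text)
  out = out.gsub(SIMPLE_TAG) { "<#{$1}#{$2.downcase}>" }
  out = out.gsub(LINK_OPEN) do
    whole = $&
    href  = $1
    safe_href?(href) ? %(<a href="#{href}" rel="nofollow">) : whole
  end
  out.gsub(LINK_CLOSE, '</a>')
end

if __FILE__ == $PROGRAM_NAME
  SAMPLE_COMMENTS.each do |raw|
    puts "in:  #{raw}"
    puts "out: #{sanitize_comment(raw)}"
    puts
  end
end
```

Run in a `before_validation` callback or in the controller's create/update actions, this stores a normalized value, which is the "clean before the database" practice the first answer describes; the escaped residue of rejected tags is still visible to the reader as text rather than silently dropped. For a production app prefer Rails' `sanitize` (Loofah/Nokogiri based) or the sanitize gem linked above, which parse the HTML properly; this example exists to show why escaping first and re-allowing second is the direction that stays safe.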