Grails handling FTP dump - parsing

I have a site where users upload files with a form; it posts and it's great, but one customer insists on using FTP instead. I have determined three options for handling this, and I was wondering if anyone has any insight on which is best (or whether there is a different Stack Exchange site I should be asking this on), or if there is a fourth, better option.
Solution 1: Learn Linux. I could probably write a cron job that looks in the directory to which they're uploading every 5 minutes and then posts the files it finds into my site.
Solution 2: Create a timer-driven service in Grails that looks in the directory every 5 minutes. This is what I'm going to start trying.
Solution 3: This would be hard, but I'm sure it would be possible to have Grails pretend to be an FTP server, so the FTP dump behaves like a POST. I have no idea where to start with this solution, so unless there is a plugin, this isn't happening.

You can use the Grails Quartz plugin to schedule a task, if you want to pursue option two on your list.

I would go for option 2 and use the Quartz plugin as suggested (rather than cron). Handling files in Groovy is simple, and you have lots of examples, such as this one from mrhaki.
If you think the processing of files will have more complex requirements, you could try out something like Apache Camel, with this example from the same mrhaki. Though I believe Spring has its own integration framework that may be a better fit; you'll have to investigate that yourself if you go down that road.

Grails is a web framework, so options 2 and 3 are less than ideal. Unless you need the FTPed file to be immediately available in your application, option 1 is the quickest and simplest of the three, I think.
Another option is to find an open-source FTP server (there are several) and modify it to import the document into your system directly. This would allow your client to use the protocol they prefer (FTP) and still get the file into your application in real time.
Still another option is to provide an FTP-like client that uses your Grails application as the server. Whether this is suitable depends on why the client insists on using FTP, which you should determine up front to make sure your solution works for them.


How to run multiple "Applications" under one compoundJS instance?

I've been using nodeJS + expressJS for several years now developing a custom Application Platform for our organization. Our central framework provides a common set of services (authentication, language, administration, etc...) for any installed Modules/Applications under it.
I would like to switch our framework out for compoundJS. However, I'm not familiar with the design constraints it imposes (or those of Rails-style apps in general) and can't seem to figure out how to accomplish what I'm after.
I would like to have only a single server instance running: all requests first pass through our common authentication checking, then are passed on to an application's controllers.
I would also like to have each application separated out, preferably under a separate site/applications/ directory. Each of these applications could be designed using compoundJS normally, and I would like to install them like:
cd site/applications
npm install site-hr
npm install site-finance
npm install site-payroll
This would then make all the routes from /hr, /finance, and /payroll operational.
How do I accomplish this?
Is there a way to get compoundJS to search the nonstandard /applications/* folders for models/controllers/views and load them while keeping the central /site configurations?
Or is there a better way?
Sorry for the late answer, but I needed something similar: putting together several tool applications in a portal.
I found a way to include child applications in a parent Compound application as node modules. I wrote a guide on how to do it and sent a pull request to add it to the advanced folder of the CompoundJS guides. It is also available here. It requires a bit of work, but it works fine with four applications for us.
Hope it can help.
It's simple. Just use app.use in config/environment.js to map your sub-apps:
var mod = require('your-compound-module');
app.use('/subroot', mod());
When you visit /subroot/any-path, it will be handled by the /any-path route of your sub-app. Note that you don't need any additional work on path helpers, as they will start with '/subroot' automatically (handled on the Compound side).
This is a good point, but we haven't seen any implementation of it yet. Maybe one will appear in years to come.
Using a proxy layer in front of the instances would be the general method, usually with Nginx, Varnish Cache, etc. On the bleeding edge, I've heard Phusion Passenger has implemented Node.js support, but I haven't successfully tested it yet. If you are familiar with Ruby, it would be worth a try.
If you really want to construct a big project with many modules, you can try out some industrialized frameworks, for instance Architect from the Cloud9 IDE project.
For authentication, I think it's necessary to use independent methods in each application, but they can all share one user database.

Best Method for iOS Application to Update & Retrieve Files from SQL Server?

First, let me say in advance that after having done a bit of research, I am aware there is a ton of information regarding this question. A bit overwhelmed, I wanted to consolidate the approaches I have found into one question and ask for confirmation as to whether each step in the path is:
The best resource/method to accomplish this task, and
All-encompassing (is there a gap in functionality I am not aware of?)
I have written an iOS app which currently stores/loads data from a parsed CSV file in the app's Documents path. Currently I must manually move the data to and from disk by connecting the device to my computer, then take this CSV file and enter it into SQL. The next step in development is to perform this task via a web service of some kind, which automates the process for me when I update/record data in the application. However, I have no experience with this!
So after looking into this process, I think I have split the task into the following components, beginning and ending with the app and a SQL server:
iOS App
    RestKit framework to communicate with the web service using JSON
Ruby on Rails RESTful web service
    ActiveRecord gem to communicate with the SQL server
SQL Server
At this point I have almost zero knowledge of RestKit, RoR, or ActiveRecord, and so before I dive in headfirst I want to be confident that comprehensively, hooking these elements together will provide the medium for communication between my app and a SQL database that I am looking for and that I am going about this the right way.
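For what it's worth, the contract between the app and the Rails service is just JSON over HTTP, whichever client and server libraries end up on each side. A minimal sketch of that round trip using only Ruby's standard json library (the "reading" wrapper and field names are invented for illustration, not part of any of these frameworks):

```ruby
require "json"

# Serialize a record the way it might travel from the Rails service to the
# iOS app (and back). The real schema would come from your CSV columns.
def to_payload(record)
  JSON.generate({ "reading" => record })
end

# Parse the same payload on the receiving end.
def from_payload(body)
  JSON.parse(body).fetch("reading")
end
```

Whatever stack you settle on, agreeing on this payload shape first makes it easy to test the iOS side and the Rails side independently.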
So am I on the right track? Is there anything else I should be aware of? Is this how you would accomplish the same thing?
Many thanks in advance!

Triggering FireWatir actions from different ruby scripts on the same browser window

I am writing an application that uses FireWatir to do a bunch of different actions. The problem is that I want to trigger these actions from many separate ruby files.
So, for example, one Ruby script will launch a new Firefox browser instance, then a totally different script will have that instance go to a specific website, and another will log into Gmail.
I want all of these scripts to affect the same browser window. That way I can have one script take me to a specific website, and wait for another script to be triggered to do something else.
Please tell me that this is possible.
Chad,
I think that is possible. I am not sure that it's necessary or efficient, but I know that it's possible. The key is to make sure that you attach to the right browser instance. If you will only have one, that could be much simpler.
If you identify the problem that you are trying to solve with these multiple scripts then maybe one or more of the experienced framework designers can point you to existing solutions to the problem. There are some pretty awesome solutions that exist already. At the end of the day, we face the same issues.
Good luck,
Dave
I ended up getting around this issue by using sockets. I had a Ruby script acting as the server, waiting for requests from a group of other Ruby scripts that could be triggered whenever.
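A minimal sketch of that arrangement with Ruby's standard socket library: one long-lived script owns the browser and listens for commands, and the other scripts are short-lived clients that send a line and exit. The command names and the FireWatir call are placeholders.

```ruby
require "socket"

# The long-lived server script: in the real setup it would hold the single
# FireWatir browser instance and translate each command into browser calls.
def start_command_server(port)
  server = TCPServer.new("127.0.0.1", port)
  Thread.new do
    loop do
      client = server.accept
      command = client.gets.to_s.strip
      # e.g. case command; when "goto_gmail" then browser.goto("https://gmail.com"); end
      client.puts("ok: #{command}")
      client.close
    end
  end
  server
end

# What each short-lived script does: connect, send one command, read the reply.
def send_command(port, command)
  TCPSocket.open("127.0.0.1", port) do |sock|
    sock.puts(command)
    sock.gets.to_s.strip
  end
end
```

Because only the server script ever touches the browser, you sidestep the problem of attaching several processes to one Firefox instance entirely.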

Django, Rails Routing...Point?

I'm a student of web development (and college), so my apologies if this comes off sounding naive or offensive; I certainly don't mean it that way. My experience has been with PHP, and with a smallish project on the horizon (a glorified shift calendar), I hoped to learn one of the higher-level frameworks to ease the code burden. So far, I have looked at CakePHP, Symfony, Django, and Rails.
With PHP, the URLs mapped very simply to the files, and it "just worked". It was quick for the server, and intuitive. But with all of these frameworks, there is this inclination to "pretty up" the URLs by making them map to different functions and route the parameters to different variables in different files.
"The Rails Way" book that I'm reading admits that this is dog slow and is the cause of most performance pains on largish projects. My question is "why have it in the first place?"? Is there a specific point in the url-maps-to-a-file paradigm (or mod_rewrite to a single file) that necessitates regexes and complicated routing schemes? Am I missing out on something by not using them?
Thanks in advance!
URLs should be easy to remember and say, and the user should know what to expect when she sees the URL. Mapping URLs directly to files doesn't always allow that.
You might want to use different URLs for the same, or at least similar, information. If your server forces a 1 url <-> 1 file mapping, you need to create additional files whose only function is to redirect to another file; or you use stuff like mod_rewrite, which isn't easier than Rails' URL mappings.
In one of my applications I use URLs that look like http://www.example.com/username/some additional stuff/. This can also be done with mod_rewrite, but at least for me it's easier to configure URLs in a Django project than in every Apache instance I run the application on.
just my 2 cents...
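To make the point concrete, here is a toy router in Ruby showing what framework routing (or mod_rewrite) adds over url-maps-to-a-file: one pattern can capture parameters and dispatch to any code, instead of every URL having to correspond to a file on disk. The patterns and handler names are invented for illustration.

```ruby
# Each route pairs a URL pattern with a handler name; named captures
# become the handler's parameters.
ROUTES = [
  [%r{\A/products/(?<category>[a-z0-9-]+)/\z}, :show_category],
  [%r{\A/users/(?<name>\w+)\z},                :show_user],
]

# Return the handler and its extracted parameters for a path.
def route(path)
  ROUTES.each do |pattern, handler|
    if (m = pattern.match(path))
      return [handler, m.named_captures]
    end
  end
  [:not_found, {}]
end
```

With a file-per-URL scheme, /username/anything would need a real file (or a rewrite rule) per shape of URL; here a single table entry covers all of them.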
Most of it has already been covered, but nobody has mentioned SEO yet. Google puts a lot of weight on the URL itself; if that URL is widgets.com/browse.php?17, it is not very SEO-friendly. If your URL is widgets.com/products/buttons/, that will have a positive impact on your page rank for "buttons".
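As an illustration, the kind of helper that turns a display name into such an SEO-friendly path segment (a sketch, not tied to any particular framework):

```ruby
# Lowercase the name, collapse every run of non-alphanumeric characters
# into a single hyphen, and trim hyphens from the ends.
def slugify(name)
  name.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end
```

So a "Blue Buttons" product would live at /products/blue-buttons/ rather than behind an opaque query string.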
Storing application code in the document tree of the web server is a security concern.
a misconfiguration might accidentally reveal source code to visitors
files injected through a security vulnerability are immediately executable by HTTP requests
backup files (created e.g. by text editors) may reveal code or be executable in case of misconfiguration
old files which the administrator has failed to delete can reveal unintended functionality
requests to library files must be explicitly denied
URLs reveal implementation details (which language/framework was used)
Note that all of the above are not a problem as long as other things don't go wrong (and some of these mistakes would be serious even alone). But something always goes wrong, and extra lines of defense are good to have.
Django URLs are also very customizable. With PHP frameworks like CodeIgniter (I'm not sure about Rails), you're forced into the /class/method/extra/ URL structure. While this may be fine for small projects and apps, as soon as you try to make something larger or more dynamic you run into problems and have to rewrite some of the framework code to handle it.
Also, routers are like mod_rewrite, but much more flexible. They are not bound to regular expressions alone, and thus have more options for different types of routes.
Depends on how big your application is. We've got a fairly large app (50+ models) and it isn't causing us any problems. When it does, we'll worry about it then.

Rails best practice question: Where should one put shared code and how will it be loaded?

The Rails books and web pages I've been following have all stuck to very simple projects for the sake of providing complete examples. I'm moving away from the small-project app and into a realm of non-browser clients, and I need to decide where to put code that is shared by all involved parties.
The non-browser client is a script that runs on any machine which can connect to the database. Browser clients write commands into the database, which the script examines and decides what to do. Upon completion, the script then writes its result back. The script is not started by the RoR server, but has access to its directory structure.
Where would be the best place for shared code to live, and how would the RoR loader handle it? The code in question doesn't really belong in a model, otherwise I'd drop it in there and be done with it.
I'd put the shared code in the Rails project's /lib directory and consider making it a custom Rake task.
It really depends on how much you use this shared code. If you use it everywhere, then throw it in the lib folder (as has already been stated here). If you are only using it in a few places, you might want to consider making a plugin out of it and loading it only in the places that use it. It's nice to only load what you need (one of the reasons I'm loving merb).
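As a sketch of what the /lib approach might look like (the module, file name, and command schema are all hypothetical):

```ruby
# lib/command_processing.rb: logic shared between the Rails app, which
# writes commands into the database, and the standalone script, which
# executes them and writes results back. Keeping it a plain Ruby module
# with no Rails dependencies means the script can load it with a bare
# require, while the Rails side picks it up from the load path.
module CommandProcessing
  COMPLETED = "completed"

  # Build the result record the script writes back for a command.
  def self.result_for(command)
    { "id" => command.fetch("id"), "status" => COMPLETED }
  end
end
```

The standalone script would then do something like require_relative "lib/command_processing"; classic Rails puts lib/ on the load path automatically, and if yours doesn't, you can add it to the app's load/autoload paths in its configuration.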
