Binding an irregular URL-encoded POST request with Nancy

I'm supposed to make an old SQLite database editable through a SketchUp plugin in its WebDialog. SketchUp's Ruby is not able to install the sqlite3 gem, and since I already need to display the table in a WebDialog, I opted for a microservice with the help of Nancy. The existing plugin already uses the dhtmlxSuite, and its grid component is exactly what I need. They even offer a connector that can send requests based on the actions of the user in the grid.
They also offer a connector for the server side, but that too does not work with SQLite. I already managed to connect to my database, used Dapper to bind the result of an SQL query to an object, and returned that data as a manually crafted JSON object. Very hacky, but it populates the grid successfully.
Now the client connector sends the data manipulated by the user (delete and insert are forbidden) back in a URL-encoded POST request that looks quite weird:
4_gr_id=4&4_c0=0701.041&4_c1=Diagonale%20f%3Fr%202.07%20X%201.50%20m&4_c2=AR&4_c3=8.3&4_c4=380&4_c5=38.53&4_c6=0&4_c7=0&4_!nativeeditor_status=updated&ids=4
I'm not sure why the hell they would use the row index inside the keys of the key-value pairs, but it makes my life that much harder. The only thing that comes to my mind is extracting the data I'm interested in with a regex, but that sounds even worse than what I did before.
Any suggestions for me how I could map that POST request to something usable?
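For what it's worth, one way to avoid the regex: every key belonging to a row carries an "<row id>_" prefix, and the ids field names the updated row, so the pairs can be regrouped by stripping that prefix. A minimal sketch, assuming Nancy 1.x; the /grid route is made up, and it handles a single updated row (the connector may batch several comma-separated ids into one request):

using System.Collections.Generic;
using System.IO;
using System.Net;
using Nancy;

public class GridModule : NancyModule
{
    public GridModule()
    {
        Post["/grid"] = _ =>
        {
            // Raw body, e.g. "4_gr_id=4&4_c0=0701.041&...&ids=4"
            var body = new StreamReader(Request.Body).ReadToEnd();

            // Decode into ordinary key/value pairs.
            var pairs = new Dictionary<string, string>();
            foreach (var token in body.Split('&'))
            {
                var parts = token.Split(new[] { '=' }, 2);
                if (parts.Length == 2)
                    pairs[WebUtility.UrlDecode(parts[0])] = WebUtility.UrlDecode(parts[1]);
            }

            // "ids" names the updated row; strip its prefix from the other keys.
            var prefix = pairs["ids"] + "_";
            var row = new Dictionary<string, string>();
            foreach (var pair in pairs)
                if (pair.Key.StartsWith(prefix))
                    row[pair.Key.Substring(prefix.Length)] = pair.Value;

            // row now maps "gr_id", "c0".."c7" and "!nativeeditor_status"
            // to their values, ready to bind as Dapper parameters.
            return HttpStatusCode.OK;
        };
    }
}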

Related

Breeze.js: Returning empty set when requested database does not exist

Currently we are using Breeze.js and Angular to develop our applications. Due to some persistent legacy issues, we have two databases ('Kenya' and 'Rwanda') that cannot be merged at this time but have the same schema and metadata. Most of the time the client knows which database to hit and passes the request through the .withParameters() function or the .saveOptions() function. Sometimes we want to run the same query against both databases (for example, when requesting a list of all available countries), and we use an EntityManager wrapper on the client to manage this and issue the same query to each database. This is implemented through a custom EFContextProvider which uses the data returned to determine the appropriate database and creates the appropriate context in CreateContext().
To further complicate things, in some instances one or the other database won't exist (these are local deployments created through filtered replication), but the client won't know this. Therefore, when querying for a list of all countries, it issues two requests and one will cause failures because the context cannot be instantiated properly.
This is easy enough to detect on the Server. What I would like to do is to detect whether the requested context is available and, if not, return a 200 response and an empty set.
I can detect this in the Breeze DBContextProvider CreateContext() method, but cannot figure out how to make the request fall back gracefully to an empty-set response.
Thanks
Not exactly what I was looking for, but it probably makes more sense since most of the work is being done on the client-side:
Instead of trying to change the controller, I added a getAvailableDatabases action to the C# controller and use that to determine which of the databases I will query from the client.
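A minimal sketch of that idea, assuming an ASP.NET Web API controller; the server name and the probe-by-connection strategy are my own assumptions:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Web.Http;

public class MetadataController : ApiController
{
    // The two databases from the question.
    private static readonly string[] Candidates = { "Kenya", "Rwanda" };

    // GET endpoint: which databases exist in this deployment?
    [HttpGet]
    public IEnumerable<string> GetAvailableDatabases()
    {
        return Candidates.Where(DatabaseExists).ToList();
    }

    private static bool DatabaseExists(string name)
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = ".",          // assumption: local SQL Server
            InitialCatalog = name,
            IntegratedSecurity = true
        };
        try
        {
            using (var connection = new SqlConnection(builder.ConnectionString))
            {
                connection.Open();     // fails if the catalog is missing
                return true;
            }
        }
        catch (SqlException)
        {
            return false;
        }
    }
}

The client calls this once at startup and only issues its per-database queries against the names that come back.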

Store ruby Mail (from gem) object in ActiveRecord

I'm currently implementing a very basic IMAP client in an application I'm building in Rails. I'm using the Mail gem, which supplies lots of useful ways of parsing the IMAP data.
I'd like to store the Mail object that it's generating in the database. Is that possible?
i.e.
email = Email.new
email.uid = id
email.mail = Mail.new(imap.fetch(id, "RFC822")[0]["attr"]["RFC822"])
email.save
It's a convenience thing: I don't want to have to download the message again unless I have to, since the IMAP call is slow, but I'd like to have it there to look back on (and do any breaking down I need later).
I could then call
Email.find(x).mail.body
and various other useful things without having to build out that functionality in my own email model.
Q1: How would I set up the active record model?
Q1a: Would I be better off doing something that excluded the attachments to make it an easier object to store? (is that even possible?)
Appreciate your help,
Several database schemata have been developed to store mail. I've worked on one, and there are others. Believe me, it's hard work. The result can be very useful, but since your question doesn't focus on the result I suspect it's not worthwhile in your case.
You might find it easier to use a JSON library to write your object graph to a file with an automatically inferred structure, as most JSON libraries seem to support these days. That won't let you do as much, but it's very much easier, and it lets you store both completely and incompletely retrieved messages. If you haven't fetched a particular body part, the JSON library will just write a null for that field.
It depends on what you want to do with the stored mails. If you only need specific parts of the mail to be easily accessible through the database, you won't need a complex setup like Archiveopteryx, which basically maps a complete representation of emails to relational database tables. In most cases you won't need that much detail, and a simple data model will be perfectly adequate.
A1: rails g model Email from to subject date:datetime message_id body. These are just the basic parts; they should get you started (see the sketch below).
A1a: You don't need to store the attachments if you don't want to. If you need them, you'll probably be better off not storing them in the database itself. Attachments are just like uploads so there are plenty of gems that can help you do that (https://www.ruby-toolbox.com/categories/rails_file_uploads).
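To make A1 concrete, a minimal sketch of the model, assuming the generated columns above plus one extra text column, raw_source, for the original RFC822 message (the column name is my own); the Mail object is rebuilt on demand rather than serialized:

class Email < ActiveRecord::Base
  # Mail objects round-trip cleanly through Mail#to_s / Mail.new(string),
  # so persist the raw source and derive the searchable columns from it.
  def mail=(mail_object)
    self.raw_source = mail_object.to_s
    self.from       = Array(mail_object.from).join(", ")
    self.to         = Array(mail_object.to).join(", ")
    self.subject    = mail_object.subject
    self.date       = mail_object.date
    self.message_id = mail_object.message_id
    self.body       = mail_object.body.decoded
  end

  # Rebuild the Mail object from the raw source when it is needed.
  def mail
    @mail ||= Mail.new(raw_source)
  end
end

With that in place the snippet from the question works as written, and Email.find(x).mail.body behaves as hoped.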
Using Postgres jsonb columns, you can store the email as JSON; in my case I disregard the attachments (I store a reference to them instead and retrieve them as and when required).
This works pretty well with the Mail gem.
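A sketch of that variant, assuming Rails 4.2+ on Postgres and a payload jsonb column (the column name and field selection are illustrative):

# Migration: add_column :emails, :payload, :jsonb, :default => {}
class Email < ActiveRecord::Base
  # Keep only the parts we care about; attachments are skipped and
  # only referenced by filename, as described above.
  def mail=(mail_object)
    self.payload = {
      "from"        => Array(mail_object.from),
      "to"          => Array(mail_object.to),
      "subject"     => mail_object.subject,
      "date"        => mail_object.date.to_s,
      "message_id"  => mail_object.message_id,
      "body"        => mail_object.body.decoded,
      "attachments" => mail_object.attachments.map(&:filename)
    }
  end
end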

How should I store scraped HTML in my webapp?

I'm a newbie to web development (and development in general) and I'm building a Rails app which scrapes data from a third-party website. I'm using Nokogiri to parse for the specific HTML elements I'm interested in, and these elements are stored in a database.
However, I'd like to save the HTML of the whole page I'm scraping as a back-up, in case I change my mind about what type of information I want, or in case the website removes the page (or updates it).
What's the best practice for storing the archived html?
Should I extract it as a string and put it in a database, write it to a log or text file, or what?
Edit:
I should have clarified a bit. I am crawling on the order of 10K websites a week and anticipate only needing to access the back-ups on a once-off basis if I redefine the type of data I want.
So as an example: if I was crawling UN country population data and originally looked at age distributions, but later realized I wanted the gender distributions as well, I'd want to go back to all my HTML archives and pull the data out. I don't anticipate this happening much (maybe 1-3 times a month), but when it does I'll want to retrieve it across 10K-100K listings. The task should only take a few hours for around 10K records, so I guess each fetch should take at most a second. I don't need any versioning capability. Hope this clarifies.
I'm not sure what the "best practice" for this case is (it will vary by the specifics of your project), but as a starting point I'd suggest creating a model with a string field for the URL and a text field for the HTML itself, and save the pages there. You might add a uniqueness validator for the URL, to make sure you don't store the same HTML twice.
You could then optionally add model methods to initialize a Nokogiri document from the HTML text, thus using the HTML string as the "master" record (in the DB) and generating the Nokogiri document on the fly when needed. But again, as @dave-newton points out, a lot of this will depend on what you're going to do with this HTML.
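A minimal sketch of that model; the ScrapedPage name and columns are illustrative:

# rails g model ScrapedPage url:string html:text
class ScrapedPage < ActiveRecord::Base
  validates :url, :presence => true, :uniqueness => true

  # The raw HTML in the DB is the master copy; the Nokogiri document
  # is built on the fly whenever the archive is re-mined.
  def doc
    @doc ||= Nokogiri::HTML(html)
  end
end

Saving is then ScrapedPage.create!(:url => url, :html => raw_html) at crawl time, and a later re-mining pass is just ScrapedPage.find_each { |page| page.doc.css("...") }.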
I would strongly suggest saving it into a table in the same DB as the data you are scraping. Why change what works? Keep it all as you normally would, or write it all to a separate database entirely, keeping some form of reference that links the scraped data to its backup, just in case.

Rails 3 - generate a preview for a new post on the fly

For a Rails 3 app I am building, a user gets to share a post which has numerous different parameters. Some parameters are optional, others are required. While the user is filling out the parameters, I want to generate a preview of how the post will look, on the fly. Some parameters are URLs which need to be sent back to the server for processing, so the preview cannot be 100% generated client-side.
I was wondering what the best way to go about this is. Since it could be a lot of data, I don't want to send it all back to the server to regenerate the preview every time something changes; I would rather send only the data that has changed. But in that case, where is the rest of the data stored? In a session, perhaps? Also, I would prefer not to rebuild the model object with all the data every time. Is there a way to persist the model object that represents the post as it is being created?
Thanks.
How big is that "a lot of data"? If you send it all, does it have a noticeable impact on performance, or are you just imagining that it would?
As you haven't provided much information, here's the basic approach I would take:
Process as much as possible client-side.
Send the data that can't be processed on the client to the server (only that part, not the rest of it), receive the result of the processing, and incorporate it into what you already built (see the sketch below).
Keep no sessions, no partially built models, and no other server-side state. Stateless protocols are simple, and simplicity is a prerequisite for reliability.
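A minimal sketch of the stateless server piece, assuming Rails 3, Nokogiri, and that the only server-only work is fetching a URL the user typed (the route, controller and field names are all made up):

require "open-uri"

class PreviewsController < ApplicationController
  # POST /preview/url
  # The client sends only the one field that changed (a URL); the server
  # does the part that can't happen client-side and returns a fragment
  # that the client merges into the preview it is building locally.
  def url
    page  = Nokogiri::HTML(open(params[:url]).read)
    title = page.at_css("title")
    render :json => { :title => (title ? title.text : nil) }
  end
end

Nothing is kept between requests, so there is no session or half-built model to manage on the server.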

Some help with Sencha Touch Database connectivity please

I am a beginner in Sencha Touch.
I am stuck connecting my web pages (Sencha Touch) to the database (MySQL).
There are submit buttons on various pages which, when clicked, should send data to the database. But in my case, sending data to the DB is possible only from the first page. I have to comment out the submit button handlers of all the other pages to be able to send data to the DB from any particular page.
Also, only one of sending data or retrieving data is possible in a single run.
The problem might be silly, but I'm really desperate to solve this.
Can someone help me, please?
Thanks in advance.
I'm guessing you have some server code in the mix too, since I don't know how else a JavaScript app could read from and write to a database server directly.
You should be using the Ext.data.* package to bind your app's model stores to some sort of server interface (AJAX, REST, XML, etc.); all the proxies for these have full CRUD implementations.
See: http://dev.sencha.com/deploy/touch/docs/?class=Ext#namespace?class=Ext.data.RestProxy for example
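A minimal sketch of that setup, assuming Sencha Touch 1.x and a REST endpoint at /contacts returning JSON (the model, fields and URL are all made up):

// Model with whatever fields your forms collect.
Ext.regModel('Contact', {
    fields: ['id', 'name', 'email']
});

// A store bound to a REST proxy: load() issues a GET, sync() issues
// POST/PUT/DELETE for records that were added, edited or removed.
var contacts = new Ext.data.Store({
    model: 'Contact',
    proxy: {
        type: 'rest',
        url: '/contacts',
        reader: { type: 'json', root: 'rows' }
    }
});

contacts.load();                                    // read
contacts.add({ name: 'Ann', email: 'ann@x.com' });  // queue a create
contacts.sync();                                    // write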
If your server's API returns results in response to an update, I think it would be simple to adapt a proxy to use that response to keep the two data stores (on server and client) strongly in sync.
But I would need to know more about what you are trying to do.
