I have written a REST API for my website. Currently, a user has many computers that they can own. I am trying to send a JSON payload that updates the user account with a computer. For testing, this is what I send:
{"auth_token"=>"AiprpscAqN6hnvNDHSwh",
"user"=>"{"id":1,
"username":"Rizowski",
"email":"test#gmail.com",
"computer":[{"id":0,
"user_id":1,
"name":"Desktop",
"enviroment":"Windows 8",
"ip_address":"192.168.1.10",
"account":[]}]}",
"format"=>"json",
"id"=>"1"}
Once I send it, the Rails controller receives the request and parses it into a hash using JSON.parse. When I try to save the user object, it fails with:
ActiveRecord::AssociationTypeMismatch in Api::V1::UsersController#update
Computer(#51701664) expected, got Hash(#17322756)
Side question: I am still trying to fully understand Rails and REST, but is it good practice to send a computer object as I am, or should I be sending the computer data through my computer API controller?
The solution to your error is to use the key computer_attributes (matching your association name, so computers_attributes for has_many :computers) instead of computer, and to declare accepts_nested_attributes_for on the User model.
This is normal practice for strongly associated, hierarchical data. If computers are going to be updated independently of users, you should certainly give them their own controller.
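A minimal sketch of that setup, assuming the association is has_many :computers (adjust the names to your schema):

class User < ActiveRecord::Base
  has_many :computers
  accepts_nested_attributes_for :computers
end

The payload key then becomes "computers_attributes" (with the same array of computer hashes) in place of "computer", and Rails will build or update the associated Computer records rather than raising AssociationTypeMismatch.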
I'm supposed to make an old SQLite database editable through a SketchUp plugin in its WebDialog. SketchUp's Ruby is not able to install the sqlite3 gem, and since I already need to display the table in a WebDialog, I opted for a microservice built with Nancy. The existing plugin already uses the dhtmlxSuite, and its grid component is exactly what I need. They even offer a connector that can send requests based on the actions of the user in the grid.
They also offer a connector for the server side, but that too does not work with SQLite. I already managed to connect to my database, used Dapper to bind the result of an SQL query to an object, and returned that data with a manually crafted JSON object. Very hacky, but it populates the data successfully.
Now the client connector sends data manipulated by the user (delete and insert are forbidden) back with a URL-encoded POST request that looks quite weird:
4_gr_id=4&4_c0=0701.041&4_c1=Diagonale%20f%3Fr%202.07%20X%201.50%20m&4_c2=AR&4_c3=8.3&4_c4=380&4_c5=38.53&4_c6=0&4_c7=0&4_!nativeeditor_status=updated&ids=4
I'm not sure exactly why the hell they would use the row index inside the key for the key-value pairs, but this just makes my life that much harder. The only thing that comes to my mind is extracting the data I'm interested in with a regex, but that sounds even worse than what I did before.
Any suggestions on how I could map that POST request to something usable?
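You shouldn't need a regex over the whole body; the keys have a regular shape (row id, underscore, field name), so splitting each key once is enough. Here is a sketch in Ruby (the other examples in this document are Ruby; the idea ports directly to C#/Nancy, and the variable names are illustrative):

require "uri"

body = "4_gr_id=4&4_c0=0701.041&4_c1=Diagonale%20f%3Fr%202.07%20X%201.50%20m&4_c2=AR&4_c3=8.3&4_c4=380&4_c5=38.53&4_c6=0&4_c7=0&4_!nativeeditor_status=updated&ids=4"

# Group values by row id: "4_c0" -> row "4", field "c0".
rows = Hash.new { |h, k| h[k] = {} }
URI.decode_www_form(body).each do |key, value|
  next if key == "ids"            # bookkeeping, not row data
  row, field = key.split("_", 2)
  rows[row][field] = value
end

rows["4"]["c1"]                   # => "Diagonale f?r 2.07 X 1.50 m"
rows["4"]["!nativeeditor_status"] # => "updated"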
I'm currently implementing a very basic IMAP client in an application I'm building in Rails. I'm using the Mail gem, which supplies lots of useful ways of parsing the IMAP data.
I'd like to store the Mail object that it's generating in the database. Is that possible?
e.g.
email = Email.new
email.uid = id
email.mail = Mail.new(imap.fetch(id, "RFC822")[0]["attr"]["RFC822"])
email.save
It's a convenience thing: I don't want to have to download the object again unless I have to, since the IMAP call is slow, but I'd like to have it there to look back on (and do any further breaking down I need to later).
I could then call
Email.find(x).mail.body
and various other useful things without having to build out that functionality in my own email model.
Q1: How would I set up the active record model?
Q1a: Would I be better off doing something that excluded the attachments to make it an easier object to store? (is that even possible?)
Appreciate your help,
Several database schemata have been developed to store mail. I've worked on one, and there are others. Believe me, it's hard work. The result can be very useful, but since your question doesn't focus on the result I suspect it's not worthwhile in your case.
You might find it easier to use a JSON library to write your object graph to a file with an automatically inferred structure, as most JSON libraries seem to support these days. That won't let you do as much, but it's very much easier, and it lets you store both completely and incompletely retrieved messages. If you haven't fetched a particular body part, the JSON library will just write a null for that field.
It depends on what you want to do with the stored mails. If you need only specific parts of the mail to be easily accessible through the database, you won't need a complex setup like Archiveopteryx, which maps a complete representation of emails to relational database tables. In most cases you won't need that much detail, and a simple data model will work perfectly well.
A1: rails g model Email from to subject date:datetime message_id body:text. These are just the basic parts, but they should get you started.
A1a: You don't need to store the attachments if you don't want to. If you need them, you'll probably be better off not storing them in the database itself. Attachments are just like uploads, so there are plenty of gems that can help you do that (https://www.ruby-toolbox.com/categories/rails_file_uploads).
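A sketch of filling that model from a parsed Mail object (the fetch call mirrors the question; the field handling is illustrative):

raw  = imap.fetch(id, "RFC822")[0]["attr"]["RFC822"]
mail = Mail.new(raw)

Email.create!(
  message_id: mail.message_id,
  from:       Array(mail.from).join(", "),  # Mail returns addresses as arrays
  to:         Array(mail.to).join(", "),
  subject:    mail.subject,
  date:       mail.date,
  body:       mail.body.decoded
)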
Using Postgres jsonb columns, you can store the email as JSON. In my case I disregard the attachments (I store a reference to them and retrieve them as and when required).
This works pretty well with the Mail gem.
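A sketch of that approach, assuming a jsonb column named payload (the column and field names are illustrative):

# migration: add_column :emails, :payload, :jsonb

mail = Mail.new(raw)
Email.create!(uid: id, payload: {
  from:    Array(mail.from),
  to:      Array(mail.to),
  subject: mail.subject,
  body:    mail.body.decoded
})

# jsonb stays queryable from SQL, e.g.:
Email.where("payload ->> 'subject' ILIKE ?", "%invoice%")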
I'm using the G Adventures API to populate a database in my super-basic Rails app.
The docs (https://developers.gadventures.com/docs/index.html#api-endpoint) say to use a curl request to get back JSON data, which obviously works fine, but what I'm wondering is where this request should live in the app itself...
My hunch is that it should be linked to a method inside the controller?
Thanks.
You don't really want to shell out to curl in the application. Instead, it would be better to use Ruby's Net::HTTP to make the request. Also, if it relates to the logic of your app (and since you are saving the data in a database, it sounds like it does), this sort of thing belongs in the model, not the controller.
So in your model, try:
uri = URI(API_ENDPOINT_URL)  # the endpoint URL from the docs
response = Net::HTTP.get_response(uri)
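Fleshed out into a model method, that might look like the sketch below; the endpoint URL, model name, and attribute names are placeholders, not the real G Adventures schema:

require "net/http"
require "json"

class Tour < ActiveRecord::Base
  # Hypothetical endpoint; take the real URL from the docs linked above.
  ENDPOINT = URI("https://example.com/api/tours")

  def self.sync!
    response = Net::HTTP.get_response(ENDPOINT)
    JSON.parse(response.body).each do |attrs|
      # remote_id and name are illustrative columns.
      where(remote_id: attrs["id"]).first_or_create! do |tour|
        tour.name = attrs["name"]
      end
    end
  end
end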
Many single-page browser applications are meant to be "collaborative", meaning several people can edit the same document or object simultaneously (think of Google Docs). I'm trying to implement a collaborative application using Backbone.js with Rails on the backend. I understand how Backbone model sync works, but I wonder what the best practice is for handling conflict resolution.
Here is an example: a user updates the "author" field of a book, and the Backbone.js "Book" model sends a sync request to the server... but someone else already updated this field for this book just a second ago. How do I handle this situation? Are there any common practices / frameworks / libraries for handling conflicts?
Sign the data to confirm its validity:
Creation of a record on the back-end:
{
"author": "Ernest Hemingway",
"signature": "8332164f91b31973fe482b82a78b3b49"
}
Then when somebody retrieves the record, the signature is returned along with it.
When they edit the record, the signature is sent back to the back-end. If the signature matches what is in the DB, it's a valid edit, and the back-end generates a new signature for the record and saves it.
If the signature does not match, it means somebody else edited the record in the meantime.
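Rails ships with a built-in variant of this idea (optimistic locking via a lock_version column), but a hand-rolled signature check might look like this sketch (the controller and column names are illustrative):

def update
  book = Book.find(params[:id])
  # Someone else saved since this client fetched the record: reject the edit.
  if book.signature != params[:book][:signature]
    return render json: { error: "conflict" }, status: :conflict
  end
  book.assign_attributes(params[:book].except(:signature))
  book.signature = SecureRandom.hex(16)  # new signature for the new version
  book.save!
  render json: book
end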
For a Rails 3 app I am building, a user gets to share a post which has numerous different parameters. Some parameters are optional, others are required. While the user is filling out the parameters, I want to generate a preview of how the post will look on the fly. Some parameters are URLs which need to be sent back to the server for processing, so the preview cannot be 100% generated client-side.
I was wondering what is the best way to go about this. Since it could be a lot of data, I don't want to send all of it back to the server every time something changes just to regenerate the preview. I would rather send only the data that has changed. But in that case, where is the rest of the data stored? In a session, perhaps? Also, I would prefer not to rebuild the model object with all the data every time. Is there a way to persist the model object that represents the post as it is being created?
Thanks.
How big is that "a lot of data"? If you send it all, does it have a noticeable impact on performance, or are you just imagining that it would?
As you haven't provided much information, here's the basic approach I would take (see the sketch after this list):
Process as much as possible client-side.
Send data that can't be processed on the client to the server (only that part, not the rest of it), receive the result of the processing, and incorporate it into what you already built.
Use no sessions, partially built models, or any other state on the server. Stateless protocols are simple, and simplicity is a prerequisite for reliability.
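A sketch of the stateless server piece: a preview endpoint that receives only the URL fields that need server-side work and returns the computed fragment, storing nothing (the route and helper names are hypothetical):

class PreviewsController < ApplicationController
  # POST /preview with just the changed URL, not the whole post.
  def create
    # expand_url is a hypothetical helper that fetches the page and
    # extracts, say, a title and thumbnail. No state is kept here.
    render json: expand_url(params[:url])
  end
end

The client merges the returned JSON into the locally built preview, so the full post data never leaves the browser until the user actually submits.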