Cloning associations with attached files - ruby-on-rails

I have an object which I am using as a pattern. This object has a number of associations, one of which is an attachment; my object has many attachments. I can clone all of the db data, but how should I handle the files that are attached to my object?
I can imagine some solutions, but all of them feel a little hacky and don't look native.
For example, I could add a virtual attribute to temporarily store the ids of the attachments while I am cloning the object.

What solution do you use to manage attachments? If it's something like Paperclip, it has callbacks that can handle removing/cloning the real files at the filesystem level.
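For Paperclip specifically, a common pattern is to reassign the attachment when duplicating the record: Paperclip then copies the actual file into the new record's storage path on save. A hedged sketch, assuming hypothetical Document/Attachment models with a Paperclip `file` attachment:

```ruby
# Sketch only: model names and the deep_clone method are illustrative,
# not from the question.
class Document < ActiveRecord::Base
  has_many :attachments

  # Returns an unsaved copy whose attachments point at fresh file copies.
  def deep_clone
    copy = dup
    attachments.each do |attachment|
      cloned = attachment.dup
      # Reassigning the attachment makes Paperclip copy the real file
      # into the cloned record's own path when it is saved.
      cloned.file = attachment.file
      copy.attachments << cloned
    end
    copy
  end
end
```

This avoids the virtual-attribute workaround: the file copy happens inside Paperclip's normal assignment/save cycle.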

Related

Temp file upload using Paperclip

I am working on a form with an attachment, using Paperclip.
If the model is invalid when the form is submitted then I want to save a temporary copy of the file, so the user doesn't have to re-upload the file after they fix the form submission.
What I'm doing now is using regular Ruby file operations to save the file, but this is pretty low level compared to using Paperclip.
What might work best to allow me to save a temp Paperclip attachment that will then allow me to move the temp file to my final object upon successful validation?
I'm thinking about a simple ActiveRecord object (i.e., TempAttachment) where I can assign the uploaded file and then move it to the final object when the object saves successfully.
Does that make sense? Anyone have any thoughts?
You can certainly do this as you proposed. But while you're saving a temporary attachment object to keep track of this file... why not make your life a bit easier and just save the model itself flagged as 'incomplete'?
You can set an incomplete model to bypass many of your validations, while blocking out incomplete models from areas that should not be using them.
However, guaranteeing that you don't mix an 'incomplete' model with valid ones may get a bit complicated. Depending on the complexity of your application, this may not be a great idea - in which case I would recommend sticking with your initial plan.
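The "incomplete flag" idea from this answer might look roughly like the following. This is a sketch under assumptions: the model name, the boolean `incomplete` column, and the `title` validation are all illustrative.

```ruby
# Hypothetical model: an `incomplete` boolean column lets a half-filled
# record (and its uploaded file) be saved without passing validations.
class Submission < ActiveRecord::Base
  has_attached_file :document          # Paperclip attachment

  # Real validations only apply once the record is no longer incomplete.
  validates :title, presence: true, unless: :incomplete?

  # Keep incomplete records out of areas that should not use them.
  scope :complete, -> { where(incomplete: false) }
end

# On a failed form submission: save with incomplete: true so the file is
# stored; when the user fixes the form, clear the flag and revalidate.
```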

Creating a no-database rails model that would serve as a view to index data in elasticsearch

I'm facing a complex problem which I haven't been able to resolve yet.
I'm using Rails 4 (edge) with PostgreSQL 9, ElasticSearch 0.20.6 and the Tire gem (0.5.7).
I have multiple tables that are linked together. For instance:
Agency has many Clients
Client has many Projects
Project has many Files
My goal here is to be able to index Files in two ways:
First, have an index that just allows easy and rapid search among all files. That I have succeeded in doing just by using Tire. My index has a type "file" and it works like a charm.
The other index I would like to create would have some of the file information but also some of the client's and the agency's properties. This is for an external tool that would be able to fetch the best files according to criteria like the project's category or the client's location.
As I'm using PostgreSQL, I was thinking I could create a database view and index data from there. I'm not sure it's a good idea though, because the view would be pretty complex with multiple joins. Plus, I'm afraid that any update would trigger a full-database re-indexing.
I was wondering if there was a way to do a pure-Tire model and easily trigger the index update on a change to any of its linked tables.
For instance, I'd index the data in a type named files_front. I would have a model FilesFront which has_one :file, and each time I update a file or any parent, all the impacted files would see their values updated in ES.
I'm really not sure on how to do this, so any advice is welcome.
Having a separate, table-less or standalone model is a good solution. The problem here, I assume, is connecting the FileFront class to the rest of the system. Effectively, whenever you update the File or any associated Client etc. object, you need to update the FileFront object.
You could actually use Tire's Model::Persistence for that. So for each File you'd have an associated FileFront, which you would assemble as JSON in File and create/update in an after_save callback. The tricky part is ensuring changes to the parent objects are propagated to the file, and vice versa.
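A sketch of what that wiring might look like with Tire's `Model::Persistence` module. The property names and association chain are assumptions based on the question; a real app would also need callbacks on Client/Project to touch their files.

```ruby
# FileFront lives only in ElasticSearch (no database table).
class FileFront
  include Tire::Model::Persistence

  property :name
  property :project_category
  property :client_location
end

# The AR model rebuilds its front document after every save.
# (Named File here as in the question; in a real app you would
# namespace it to avoid shadowing Ruby's ::File.)
class File < ActiveRecord::Base
  belongs_to :project
  after_save :update_front

  def update_front
    FileFront.create(
      id:               id,
      name:             name,
      project_category: project.category,
      client_location:  project.client.location
    )
  end
end
```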

How to reference parent model from Carrierwave Uploader object

I want to run some methods after the upload of a file has been completed via Carrierwave. Carrierwave includes several callback methods detailed here.
I'm planning on using the :store callback.
That's great, but it requires me to run the callback method in the Uploader object instance. This isn't the end of the world, but I want to keep all the logic inside of my parent Model.
My question is: how should I be referencing the parent model from the Uploader? I don't see anything obvious in the documentation.
I suppose I could do something like
ParentModel.last.call_my_method_here
but this seems like a very brittle way to code this (not expecting a lot of requests, but still).
I guess the model variable is what you're looking for; see examples of it in use here.
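Inside a CarrierWave uploader, `model` returns the record the uploader is mounted on, so a `:store` callback can reach back to it. A minimal sketch; the uploader name and the `handle_upload_finished` method on the parent model are assumptions:

```ruby
class AttachmentUploader < CarrierWave::Uploader::Base
  # CarrierWave's callback DSL: run notify_model after each store.
  after :store, :notify_model

  private

  def notify_model(file)
    # `model` is the parent record this uploader is mounted on;
    # handle_upload_finished is a hypothetical method you define there.
    model.handle_upload_finished(file)
  end
end
```

This keeps the uploader thin: it just hands control back to the model, so the logic stays where the question wants it.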
For what it's worth, I have used the after_commit callback on parent model which so far seems to be working properly. This is allowing me to interact with the uploaded file as well.

When does STI make sense? We are storing the same information for every type but using it differently

So I know STI is the most reviled thing ever, but I have an instance where I think it might actually make sense. My app parses a bunch of different types of xml files. Every file model stores exactly the same information: just some info about which user it is associated with, when it was uploaded, and where it is stored on S3.
After the xml file gets stored, I parse it for information which I use to create various other models. Each type of file is going to create different things. It is possible there could be 100 or more different types of xml files, although I don't think I'm going to write parsers for that many. Does STI make sense in this case?
The downside, I guess, is that the models would all be in one directory, so it is going to flood that directory unless I hack Rails and stick them in a subdirectory of the models dir.
The other option is to have a kind field and put something in the lib directory that handles all this. Or, since I'm using Resque, maybe every xml file parser should be its own job. There are drawbacks to that though, like it being kind of awkward to force a job from the rails console.
From your explanation, the 'file' model is only storing the results of the file upload process and associated meta data. Without more information about the other kinds of models being generated from the parsed XML data, I don't see why single table inheritance applies to this use case.
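The "kind field" alternative mentioned in the question can be sketched as a plain-Ruby parser registry, keeping the single file model untouched. All class and method names here are illustrative, not from the question:

```ruby
# Map each `kind` value to a parser class; lives happily in lib/.
module XmlParsers
  REGISTRY = {}

  def self.register(kind, klass)
    REGISTRY[kind] = klass
  end

  def self.for(kind)
    REGISTRY.fetch(kind) { raise ArgumentError, "no parser for #{kind.inspect}" }
  end
end

# One small class per file type, instead of one STI subclass per type.
class InvoiceParser
  def parse(xml)
    # ... build the other models from the XML here ...
    "parsed invoice"
  end
end

XmlParsers.register("invoice", InvoiceParser)

# Usage: XmlParsers.for(record.kind).new.parse(xml_body)
```

This also composes naturally with Resque: a single generic job can look up the right parser from the record's `kind` column.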

Caching ActiveRecord object and associations

I want to be able to "deep clone" 10 instances of an ActiveRecord model and all its associations into memory, work on them, update the in-memory objects and then, when I've finished, pick one to write back over the original in the database.
How do I deep clone (i.e. .clone but also cloning all associations right down to the bottom of the association tree)? I've assumed so far that I'm going to have to write my own method in the Model.
How can ensure that none of the cloned instances will write back to the database until I'm ready to do so?
If possible I'd like to:
retain all current IDs as one of my main associations is a has_many :through matching the IDs of one model to another
still be able to treat each of the clones as if it were in the database (i.e. .find_by_id etc. will work)
Moon on a stick perhaps? ;)
Not 100% sure of what you are trying to do ...
Models will only be stored in the database if you call the save method. Calling save in an existing model will update the database with any data that has been changed. Associations may be saved also, but it really depends on the type of association and in most cases you will probably need to call save on these models as well.
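A minimal illustration of that point (the Book model and its attributes are assumed, not from the question):

```ruby
book = Book.find(1)
book.title = "New title"   # changes the in-memory object only
Book.find(1).title         # a fresh query still sees the old title
book.save                  # only now is the change written to the db
```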
Doh! Sometimes it takes asking the stupid question before you see the obvious answer.
My issue was that I was having to make changes to the associated objects and they weren't showing up when I used those in-memory objects later, so I thought I had to save. However, you are right. All that was actually happening was that the variables referencing them had gone out of scope, and I was therefore accessing the in-database ones instead.
I'll go back through my code and check that this is the case.
Having said that, it doesn't answer my question about the deep cloning, though...
I've solved our deep cloning issues using DefV's deep_cloning plugin: http://github.com/DefV/deep_cloning
It's done everything I've required so far, though as you've found you need to be very watchful of your relationships. My tests have luckily shown this up as an issue and I'm working through it at the moment. I found this post as I was trying to solve it :)
Check out the plugin though, it's been quite handy.
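For a feel of what the plugin does, the core idea can be sketched as a recursive dup over named associations. This is an illustration of the technique, not the plugin's actual code; note that ActiveRecord's `dup` clears the id, so to satisfy the question's "retain all current IDs" requirement it is copied back explicitly:

```ruby
# Sketch: duplicate a record and recursively duplicate the listed
# has_many associations, keeping original ids on the in-memory copies.
def deep_clone(record, include: [])
  copy = record.dup
  copy.id = record.id                 # dup cleared it; restore for lookups
  include.each do |assoc|
    record.send(assoc).each do |child|
      copy.send(assoc) << deep_clone(child)
    end
  end
  copy                                # unsaved until you call copy.save
end
```

Because nothing is saved, the clones stay purely in memory until you pick one and call `save` on it, which matches the "work on 10 copies, write one back" workflow from the question.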
