Recover Redmine data from production log - ruby-on-rails

I had a project in Redmine with more than 600 issues. I moved all the issues to a different project. I had no idea that the move deletes all the custom field data!
So all the custom field values are now lost. I did not back up the database before this action, as I really did not think I could do any harm by moving issues; moving is a native function in the UI.
What I did notice, though, is that production.log contains entries for every creation and update. All 600 of my issues are there, in order. How can I use these log statements to replay the actions? If I can import all the logged actions, I can migrate the custom field values they contain back to the original Redmine instance and restore my data.
Entries look like this:
Processing IssuesController#update (for XX.XX.XX.X at 2013-02-07 11:19:54) [PUT]
Parameters: {"_method"=>"put", "authenticity_token"=>"nWNSSRYjHhN0BGb+Ya8M4pYWPPgsfdM=", "issue"=>{"assigned_to_id"=>"", "custom_field_values"=>{"10"=>"", "5"=>"Not translated", "1"=>"fi", "8"=>"http://screencast.com/t/ODknR8K", "9"=>"", "3"=>"", "4"=>""}, "done_ratio"=>"0", "due_date"=>"", "priority_id"=>"4", "estimated_hours"=>"", "start_date"=>"2013-02-07", "subject"=>"1\tInstallation in English", "tracker_id"=>"1", "lock_version"=>"0", "description"=>"Steps:\r\nOpen Nitro\r\n\r\nProblem:\r\nNot localized"}, "controller"=>"issues", "time_entry"=>{"hours"=>"", "activity_id"=>"", "comments"=>""}, "attachments"=>{"1"=>{"description"=>""}}, "id"=>"3876", "action"=>"update", "commit"=>"Submit", "notes"=>""}
I am really hoping there is a way; any help will be greatly appreciated.

You could use a decent text editor and/or spreadsheet application, do a massive find-and-replace to construct a series of UPDATE SQL statements, and run them directly on the database (TEST FIRST!!) - a rough sketch of the target statement follows the steps below.
Extract from log
Remove unnecessary information
Copy into spreadsheet
Split text into columns
Add columns containing the fixed SQL fragments ("UPDATE ... SET ..." etc.) and copy them down every row
Join columns to make one text command per row
Export joined data to a text file
Run against a test database as SQL
If all goes well, run against the production database as SQL
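For orientation, here is a rough sketch - in Ruby, just to make the string handling concrete - of the kind of statement each spreadsheet row would need to become. It assumes Redmine keeps custom field values in a custom_values table with customized_type, customized_id, custom_field_id and value columns; check that against your actual schema, and note that if the move really deleted the rows you may need INSERTs rather than UPDATEs. The sample values come from the log entry above.

# Hypothetical values; in practice issue_id, field_id and value come from
# your spreadsheet columns (these are taken from the log entry above).
issue_id = 3876
field_id = 8
value    = "http://screencast.com/t/ODknR8K"

# One UPDATE per (issue, custom field) pair, assuming Redmine's
# custom_values layout; double any single quotes so the SQL stays valid.
sql = "UPDATE custom_values " \
      "SET value = '#{value.gsub("'", "''")}' " \
      "WHERE customized_type = 'Issue' " \
      "AND customized_id = #{issue_id} " \
      "AND custom_field_id = #{field_id};"

puts sql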

The log entry following "Parameters:" looks like a regular Ruby hash literal. I'd parse that out and eval it back into a hash variable.
From there you will need to peel off elements and insert them into a database. I'd do that using Sequel, but use what works for you.
Talk to the Redmine support people and get the schema for their tables so you can figure out what data goes where and which database driver you need.
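A very rough sketch of that approach, assuming custom field values live in Redmine's custom_values table and that the database is MySQL - the connection string, file name and schema details are assumptions to verify, and eval is only acceptable here because the log is your own:

require "sequel"

# Placeholder credentials - point this at a COPY of the database first.
DB = Sequel.connect("mysql2://user:password@localhost/redmine")

File.foreach("production.log") do |line|
  # Only the "Parameters: {...}" lines; the hash is valid Ruby literal syntax.
  next unless line =~ /\A\s*Parameters: (\{.*\})\s*\z/
  params = eval($1)
  next unless params["controller"] == "issues" && params["action"] == "update"

  issue_id = params["id"].to_i
  ((params["issue"] || {})["custom_field_values"] || {}).each do |field_id, value|
    next if value.to_s.empty?
    DB[:custom_values]
      .where(customized_type: "Issue", customized_id: issue_id, custom_field_id: field_id.to_i)
      .update(value: value)
  end
end

This only replays updates; IssuesController#create entries would need inserts into the same table instead.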

Related

Delphi FireDAC: how to refresh data in cache

I need to refresh data in a TFDQuery which is in cached updates mode.
To simplify the problem, let's suppose my MS Access database is composed of two tables that I have to join:
LABTEST(id_test, dat_test, id_client, sample_typ)
SAMPLEType(id, SampleName)
In the Delphi application, I am using a TFDConnection and one TFDQuery (in cached updates mode) in which I join the two tables; its SQL is:
"SELECT T.id_test, T.dat_test, T.id_client, T.sample_typ, S.SampleName
FROM LABTEST T
left JOIN SAMPLEType S ON T.sample_typ = S.id"
In my application I also use a DBGrid to show the result of the query,
and a button to edit the field "sample_typ", like this:
qr.Edit;
qr.FieldByName('sample_typ').AsString:=ce2.text;
qr.Post;
Editing the 'sample_typ' field works fine, but the corresponding 'SampleName' field does not change in the grid after the update;
in fact it is not refreshed!
The problem is here: if I refresh the query, an exception is raised: "cannot refresh dataset. cached updates must be commited or canceled and batch mode terminated before refreshing".
If I commit the updates, the data will be sent to the database, and I don't want that; I need to keep the data in the cache until the end of the operation.
Also, if I leave cached updates mode, the data will be refreshed in the grid but will be sent to the database after qr.Post, and I don't want that.
I need to refresh the data in the cache. What is the solution?
Thanks in advance.
The issue comes down to the fact that you haven't told your UI that there is any dependency between the two fields - it clearly can't redo the join itself without resubmitting the query, so if you don't want to send the updates and reload, you will have a problem.
It's not clear exactly what you are trying to do, but these two ideas may help you.
If you are not going to edit the fields in the SAMPLEType table (S), then load the values from that table into a lookup table. You can load this into a TFDMemTable, using an adapter which loads from a query. Your UI controls can then show the value based on the values looked up in your local TFDMemTable. Depending on the UI control, this might be a 'LookupField' or some such.
You may also be able to store your main data in a TFDMemTable with an adapter - you can specify different TFDCommands to read the whole recordset, refresh a record, and update, insert or delete a record. The TFDCommands can act on multiple tables for joined recordsets like this. That would automatically refresh the individual record for you when you post it.

Hide/truncate long attributes in rails console

For a blog model I'm saving an RSS field as text under Blog.rss; the problem is that some of these are rather long, and each one prints when I'm working in the Rails console, e.g. Blog.last(10).
Is there a way to hide output unless I call someblog.rss specifically?
I had a similar problem and received some solutions in another forum, which were:
Use select to get just the columns you need
If you have a very long column (I had a JSON data structure from a webhook cluttering the console), consider whether you really need it, and if you don't, don't store it in the table
Or, consider storing it in an associated table
If you need the whole object but just want to change how it's represented in console/log output, you can redefine inspect (see the sketch below)
yourobject.as_json(except: :unwanted_column)
Also
You could look into: https://github.com/awesome-print/awesome_print
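If you go the redefine-inspect route, a minimal sketch (assuming the Blog model from the question; the title column is just an assumed example attribute) could look like this:

class Blog < ActiveRecord::Base
  # Shorten the noisy column in console/log output only;
  # Blog#rss itself still returns the full text.
  def inspect
    "#<Blog id: #{id}, title: #{title.inspect}, rss: #{rss.to_s.truncate(40).inspect}>"
  end
end

Blog.last(10) then prints the shortened form, while someblog.rss still returns the whole feed.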

Ruby on Rails: How to have multiple controllers for one table AND multiple models

I'm new to Ruby and to Rails. I have played a bit with Sinatra but I think that Rails is a more complete framework for my project. However, I am running into trouble with this.
I am working with a fairly substantial existing, and heavily used, MySQL database, and I am trying to build an API for it that will report on certain features. The features that are needed are, for the most part, counts of records by certain groupings, with the ability to drill down into details.
For example, we have a table - tableA - that contains lots of information relating to documentation. One piece of information we want to report on is the number of items in a given language. The language code is stored against each item, and based on a GET request I would like to return JSON.
Request: /languages/:code/count/:tablename
There are two variables in that URL - the code we are counting and the table we are counting from.
I understand that in routes.rb I can set up a mapping:
get '/languages/:code/count/:table', :controller=>'languages', :action=>'count'
I have a controller - languages_controller.rb - with a count method in it; this then matches a corresponding view file, count.html.erb.
In all the tutorials I have read and examples I have followed the main point seems that 'languages' would be a table in the database and would therefore be available under the 'magic' Rails approach.
My issue is that 'languages' is not a table; rather, the results of the call should be a limited subset of the fields in tableA, such as languagecode and count(id).
The description of the language needs to be looked up 'manually' as it is stored as an internal code that is not in a database anywhere (historic decision/madness).
The questions:
How do I have a model that is only a subset of fields, plus some that are manually populated - languagecode, isocode, description, count?
Am I right in thinking that once I have the model defined as such, I could use ActiveRecord to get data from the database and then add the extra information in the controller?
Can I change the table in the model based on the parameter sent in the URL?
Essentially, I am at a loss at the moment on what to do with this. I have the routes defined, the view templates in place, and the controller there and ready to go. The database component - getting some data from a pre-existing table - still seems mysterious to me.
Any help is greatly appreciated, it seems that the framework is currently getting in my way and I know that I can't be the only one trying this sort of thing so if you have any advice please share.
There's really no need for a model here, at all. This isn't what ORMs are for. What you should be doing is just running raw SQL against the database, and iterating over the results. Consider doing something like this: https://stackoverflow.com/a/14840547/229044
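As a rough sketch of what that could look like (table and column names such as tableA and languagecode are taken from the question and are assumptions; the table name is whitelisted because it cannot be parameterised safely):

class LanguagesController < ApplicationController
  ALLOWED_TABLES = %w[tableA].freeze   # never interpolate params[:table] into SQL unchecked

  def count
    table = params[:table]
    return head(:bad_request) unless ALLOWED_TABLES.include?(table)

    conn = ActiveRecord::Base.connection
    sql  = "SELECT languagecode, COUNT(id) AS total " \
           "FROM #{table} " \
           "WHERE languagecode = #{conn.quote(params[:code])} " \
           "GROUP BY languagecode"

    render json: conn.select_all(sql).to_a
  end
end

The human-readable language descriptions, which are not stored anywhere in the database, can then be merged into the returned rows in plain Ruby before rendering.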

Parsing a CSV for Database Insertion when Formatted Incorrectly

I recently wrote a mailing platform for one of our employees to use. The system runs great, scales great, and is fun to use. However, it is currently inoperable due to a bug that I can't figure out how to fix (fairly inexperienced developer).
The process goes something like this...
Upload a CSV file to a specific FTP directory.
Go to the import_mailing_list page.
Choose a CSV file within the FTP directory.
Name and describe what the list contains.
Associate file headings with database columns.
Then, the back-end loops over each line of the file, associating the values with a heading, and importing these values into a database.
This all works wonderfully, except in a specific case, when a raw CSV is not correctly formatted. For example...
fname, lname, email
Bob, Schlumberger, bob#bob.com
Bobbette, Schlumberger
Another, Record, goeshere#email.com
As you can see, there is a missing comma on line two. This would cause an error when attempting to pull "valArray[3]" (or valArray[2], in the case of every language but mine).
I am looking for the most efficient solution to keep this error from happening. Perhaps I should check the array length, and compare it to the index we're going to attempt to pull, before pulling it. But to do this for each and every value seems inefficient. Anybody have another idea?
Our stack is ColdFusion 8/9 and MySQL 5.1. This is why I refer to the array index as [3].
There's ArrayIsDefined(array, elementIndex), or ArrayLen(array)
seems inefficient?
You gotta code what you need to code, forget about inefficiency. Get it right before you get it fast (when needed).
I suppose if you are looking for another way of doing this (instead of checking the array length each time, although that really doesn't sound that bad to me), you could wrap each line insert attempt in a try/catch block. If it fails, then stuff the failed row in a buffer (including the line number and error message) that you could then display to the user after the batch has completed, so they could see each of the failed lines and why they failed. This has the advantages of 1) not having to explicitly check the array length each time and 2) catching other errors that you might not have anticipated beforehand (maybe a value is too long for your field, for example).
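The stack in the question is ColdFusion 8/9, so purely as a language-neutral illustration of that buffer-the-failures idea, here is a sketch in Ruby (file name and expected field count are assumptions):

require "csv"

EXPECTED_FIELDS = 3
failed = []

CSV.foreach("mailing_list.csv").with_index(1) do |fields, line_number|
  next if line_number == 1                       # skip the header row
  begin
    if fields.size < EXPECTED_FIELDS
      raise "expected #{EXPECTED_FIELDS} fields, got #{fields.size}"
    end
    # insert_row(fields)   <- hypothetical insert step
  rescue => e
    failed << { line: line_number, error: e.message, raw: fields }
  end
end

# After the batch, report every skipped line and why it failed.
failed.each { |f| puts "Line #{f[:line]} skipped: #{f[:error]}" }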

TClientDataSet and processing records with StatusFilter

I'm using a TClientDataSet as a local dataset, without the provider concept. After working with it, a method is called that should generate the corresponding SQL statements, using StatusFilter to resolve the changes (basically, generate SQL).
This looked easy initially after reading the documentation (set StatusFilter to [dsInsert] and process all insert SQL, set StatusFilter to [dsModified] and process all updates, the same with deletes), but after a few tests it now looks far from trivial. For example:
If I add a record, then edit it: setting the StatusFilter to [dsInserted] displays it, but with the original data.
If I add a record, then edit, then delete it: the record appears with StatusFilter set to [dsInserted] and [dsModified] also.
And other similar situations..
1) I know that if I first process all inserts, then all updates, then all deletes, the database will end up in the correct state, but this approach looks far from right (it generates useless SQL statements).
2) I've tried to access the PRecInfo(ClientDataSet.ActiveBuffer + ClientDataSet.RecordSize).Attribute information (dsRecNew, dsRecOrg, etc.) but still couldn't work out the logic.
3) I could program the logic to resolve it myself, for example before processing an insert, setting StatusFilter to [dsDeleted] and locating the record by primary key to see whether it was deleted afterwards; the same with edits, checking before inserting whether the record was updated later so that the insert SQL uses the updated version, and so on.. but it should be easier than that.
Did someone already solve this in an elegant and straightforward way? Am I missing something? Thanks
