While restructuring a Mantis system to reduce the number of sub-sub-projects, I made a newbie error: I had a filter active (hide closed), so unfortunately I didn't move all issues before I deleted the sub-project, and hence all of those closed issues are lost.
The good news is that I have a full database backup, and I can restore the whole database, which I have done on a separate installation (I can view all issues, etc.).
But obviously my live database has moved on: I don't want to restore the whole database, just the selected issues that I deleted. CSV export/import only deals with the issue itself, without any of the notes or other related records.
Any suggestions?
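One general approach (not Mantis-specific advice - the table and column names below are simplified stand-ins, not the real Mantis schema) is to restore the backup alongside the live database and copy the wanted issue rows plus their dependent note rows across with SQL. A minimal sketch using SQLite's ATTACH to illustrate the idea:

```python
import sqlite3

def copy_issues(backup_path, live_path, issue_ids):
    """Copy selected issues (and their child note rows) from a restored
    backup database into the live database.  Assumes both databases share
    the same schema, and that the chosen ids don't exist in the live db."""
    con = sqlite3.connect(live_path)
    con.execute("ATTACH DATABASE ? AS bk", (backup_path,))
    placeholders = ",".join("?" * len(issue_ids))
    # Parent rows first, then dependent note rows, so references stay valid.
    con.execute(
        f"INSERT INTO issue SELECT * FROM bk.issue WHERE id IN ({placeholders})",
        issue_ids)
    con.execute(
        f"INSERT INTO note SELECT * FROM bk.note WHERE issue_id IN ({placeholders})",
        issue_ids)
    con.commit()
    con.close()
```

For a real Mantis/MySQL setup the same pattern applies (restore the backup as a second schema, then `INSERT ... SELECT` the issue rows and each related table), but you would need to walk every table that references the issue ids.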
The code I'm working on makes heavy use of TFDMemTables, and of clones of those tables created with CloneCursor.
Sometimes, under specific conditions which I am unable to identify, the source table and its clone become out of sync: the data between them may be different, the record count as well.
Calling Refresh on the cloned table puts things back in order.
From my understanding, CloneCursor makes the clone address the same underlying memory where the data is stored, meaning alterations to the underlying data through either of the two handles should be reflected in the other table, while still allowing separate filtering and record positioning per "view". So how can they possibly go out of sync?
I built a small simulator, where I can insert / delete / filter records in either the table or its clone, and observe the impact on the other one. Changes were reflected correctly.
Another downside of Refresh is that, if overused, it slows execution tremendously.
Has anyone faced similar issues or found explanations / documentation regarding this matter?
Edit:
To clarify what I mean by "out of sync": reading a value from the table using FieldByName will return X prior to Refresh and Y post-Refresh. I was not able to reproduce this behavior in the simulator mentioned above.
We have a product that uses a central CouchDB database per client, replicating to apps running on users' iPads. Most of the database can replicate normally, but we have two categories of document that we want to filter:
Documents with an owner - we want to filter the replication to only the current user's documents (and documents with no specified owner).
Last X documents of some type - for some sorts of documents we only want to keep the last 10 (say) copies on the iPad.
We can set up both rules easily enough using filtered replication - so that the server only presents the subset of documents we want to the iPad for replication. Except... it does not work.
If a document has no owner and is replicated, and later an owner is specified, it vanishes from the replication stream - but not from the iPad. In fact, the version of the document that remains on the iPad still has no owner, so we can't even hide it in code.
When a document becomes the 11th oldest and vanishes from the replication stream, it does not vanish from the iPad. Indeed, unless the iPad database is rebuilt, all versions of these documents end up there and no longer replicate, which is worse than just replicating them all in the first place.
We did find a hacky workaround: in the case where a document gains a new owner or becomes older than X, we duplicate it and delete the original. The delete propagates to the iPad, and the new document is filtered out of replication. This worked well enough (although it is a bit inefficient). However, we then realised the newly copied document had lost all of its revision information - and we were relying on the revisions to track changes!
So - does anyone have any other suggestion? What we are looking for is a mechanism to pull a document from the iPad replicas on demand. I am aware we could instruct the iPad to delete the documents locally, but then sooner or later those deletes would leak back to the server and destroy the originals.
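The behaviour described above is inherent to filtered replication: the filter controls which documents the server sends, but it never removes anything already on the client. A toy model in plain Python (no CouchDB involved - the dicts and the `user_filter` rule are illustrative) makes the failure mode concrete:

```python
# Toy model of filtered replication: the filter decides which docs the
# server *sends*; it never deletes documents already held by the client.

def replicate(server, client, filt):
    for doc_id, doc in server.items():
        if filt(doc):
            client[doc_id] = dict(doc)   # send/overwrite matching docs only

# Replicate docs that have no owner, or are owned by the current user.
user_filter = lambda doc: doc.get("owner") in (None, "alice")

server = {"d1": {"owner": None, "text": "v1"}}
client = {}
replicate(server, client, user_filter)          # d1 replicates: no owner yet

server["d1"] = {"owner": "bob", "text": "v2"}   # d1 later gains an owner
replicate(server, client, user_filter)          # d1 is now filtered out...

# ...but the client still holds the stale, owner-less revision:
print(client["d1"])   # {'owner': None, 'text': 'v1'}
```

This is exactly the "document that remains on the iPad still has no owner" situation: once a document stops passing the filter, the client simply never hears about it again.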
... we were relying on the revisions to track changes
IMHO this is the most interesting point to talk about an alternative solution.
I'm sorry, but I have to say that you are using CouchDB revision control in a way that is not recommended. Document revisions are temporary. The best way to track the changes of a document is to write a change log inside or outside the doc.
How would you persist changes outside the doc itself? Yes, you would create new docs. Surprise: your "hack" is the right solution \o/
Maybe you are shaking your head and are unhappy because you have tried to remove docs from the iPad to make them invisible client-side. That was the starting point of your "hack", right?
My recommendation is not to conflate "visibility" and "existence". Better would be to use your know-how of building view indexes server-side in the same way client-side with PouchDB. Let the replication just handle replication - that's hard enough. Use views/filters client- and server-side to solve visibility requirements.
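The "change log inside the doc" suggestion can be sketched simply: instead of reading history back out of `_rev` (which compaction discards), append an explicit entry to the document on every edit. The field names (`changes`, `at`, `by`) are illustrative, not a CouchDB convention:

```python
from datetime import datetime, timezone

def apply_change(doc, user, updates):
    """Apply field updates to a document dict and record them in an
    embedded change log, so history survives compaction and replication."""
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "by": user,
        "fields": dict(updates),   # durable record of what changed
    }
    doc.update(updates)
    doc.setdefault("changes", []).append(entry)
    return doc

doc = {"_id": "word-42", "term": "ubiquitous"}
apply_change(doc, "alice", {"note": "typo fixed"})
# doc["changes"] now holds one entry, replicated along with the doc itself
```

Because the log lives in the document body, it replicates (filtered or not) exactly like any other field, with none of the fragility of relying on revision history.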
What is the method for removing inactive, unwanted node labels in a Neo4j database (community edition version 2.2.2)?
I've seen this question asked in the past, but for whatever reason it gets many interpretations, such as clearing the browser cache, etc.
I am referring here to labels actually contained in the database, such that the REST command
GET /db/data/labels
will produce them in its output. The labels have been removed from all nodes, and there are no active constraints attached to them.
I am aware this question has been asked in the past and that there is a cumbersome way of solving it, which is basically to dump and reload the database. The dump output doesn't even contain scattered commit statements and thus needs to be edited before being executed. Of course this takes forever with big databases. There has to be a better way, or at least a feature in the queue of requirements waiting to be implemented. Can someone clarify?
If you delete the last node with a certain label then, as you've observed, the label itself does not get deleted. As of today there is no way to delete a label.
However, you can copy over the datastore in offline mode, using e.g. Michael's store copy tool, to achieve this.
The new store is then aware of only those labels which are actually used.
I'm going to be using a SQLite3 DB (after being told about them earlier today) for saving player progress through various worlds and levels in my game.
I was wondering how I would go about updating the database in an update to the game. Say I release v1.0, which has 10 levels and 1 world. If I then wish to update the DB to have 20 levels and two worlds in a v2.0 release on iTunes six months later, while preserving the data in the DB, such as the player's score on each level to date, how would I go about that?
My understanding was that the DB is deployed with the app, so part of my question is: what happens when there is already a DB present? Also, how can I avoid overwriting the DB on the device and perform a smooth update procedure for the user?
For reference, I'm using this wrapper.
You can update it without removing the old database. If your database is in the main bundle it will be removed, but if it is in the Documents directory it will not be removed when the application is updated.
Assuming your db is in the Documents directory, you can update your database without any problem; it fully depends on your implementation. You can easily insert your new levels and new world - just be careful with your implementation: do not replace or drop any previous table.
I had asked a similar question before; see this. Hope this helps :)
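A common way to structure such an in-place update is a versioned migration: store a schema version in the database (SQLite's `PRAGMA user_version` works well) and, on launch, apply any migrations newer than the stored version. A minimal sketch, assuming an illustrative `level` table - the exact schema and content statements would be your own:

```python
import sqlite3

MIGRATIONS = {
    # version the DB is AT -> statements that bring it one version forward
    1: [
        "ALTER TABLE level ADD COLUMN world INTEGER DEFAULT 1",
        # hypothetical v2.0 content: a second world with ten more levels
        "INSERT INTO level (name, world) VALUES " +
        ", ".join(f"('2-{i}', 2)" for i in range(1, 11)),
    ],
}

def upgrade(con):
    """Bring an existing player database up to the latest schema version,
    leaving all existing rows (e.g. scores) untouched."""
    while True:
        version = con.execute("PRAGMA user_version").fetchone()[0]
        if version not in MIGRATIONS:
            break                      # already up to date
        for stmt in MIGRATIONS[version]:
            con.execute(stmt)
        con.execute(f"PRAGMA user_version = {version + 1}")
        con.commit()
```

Run `upgrade()` at startup: a fresh v2.0 install and an upgraded v1.0 install both end up at the same version, and the player's existing scores are never dropped or overwritten.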
I've been having trouble getting an app submitted to the App Store. This is because the database, which is updatable, is too large for the iCloud backup limitations. Most of the data in the db is static, but one table records the user's schedule for reviewing words (this is a vocabulary quiz).
As far as I can tell, I have two or three realistic options. The first is to put the whole database into the Library/Caches directory. This should be accepted, because that directory is not backed up to iCloud. However, there's no guarantee that it will be maintained during app updates, per this entry in "Make App Backups More Efficient" at this url:
http://developer.apple.com/library/IOs/#documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/PerformanceTuning/PerformanceTuning.html
Files Saved During App Updates
When a user downloads an app update, iTunes installs the update in a new app directory. It then moves the user’s data files from the old installation over to the new app directory before deleting the old installation. Files in the following directories are guaranteed to be preserved during the update process:
<Application_Home>/Documents
<Application_Home>/Library
Although files in other user directories may also be moved over, you should not rely on them being present after an update.
The second option is to put the data into the Documents or Library directory and mark it with the skip-backup flag. However, one problem is that this flag doesn't work on iOS 5.0 and earlier, per this entry in "How do I prevent files from being backed up to iCloud and iTunes?" at
https://developer.apple.com/library/ios/#qa/qa1719/_index.html
Important The new "do not back up" attribute will only be used by iOS 5.0.1 or later. On iOS 5.0 and earlier, applications will need to store their data in <Application_Home>/Library/Caches to avoid having it backed up. Since this attribute is ignored on older systems, you will need to insure your app complies with the iOS Data Storage Guidelines on all versions of iOS that your application supports
This means that even if I use the skip-backup flag, I'll still have the problem that the database gets backed up to the cloud on those older systems, I think.
So, the third option, which is pretty much an ugly hack, is to split the database in two: put the updatable part into the Library or Documents directory, and leave the rest in the app resources. This would have the small, updatable part stored on the cloud and leave the rest in the app resources directory. The problem is that this splits the db for no good reason and introduces possible performance issues from having two databases open at once.
So, my question is, is my interpretation of the rules correct? Am I going to have to go with option 3?
p.s. I noticed that in my last post the cited urls were edited into links without the url showing. How do I do this?
Have you considered using external file references, as described in https://developer.apple.com/library/IOS/#releasenotes/DataManagement/RN-CoreData/_index.html ? Specifically, refer to "setAllowsExternalBinaryDataStorage:" at https://developer.apple.com/library/IOS/documentation/Cocoa/Reference/CoreDataFramework/Classes/NSAttributeDescription_Class/reference.html#//apple_ref/occ/instm/NSAttributeDescription/setAllowsExternalBinaryDataStorage: . Pushing large binary data out into a separate file can help reduce the database size.