Core Data incorrectly selects Mapping Model - iOS

I am working on an app that uses Core Data to manage its persistent data. Our schema changes have become complex enough that lightweight (automatic) migration cannot handle them, so we are using manually defined mapping models instead.
My problem is that when I call addPersistentStoreWithType:configuration:URL:options:error: with the NSMigratePersistentStoresAutomaticallyOption set, it is selecting the wrong mapping model. If I call mappingModelFromBundles:forSourceModel:destinationModel: manually, I get the same (incorrect) mapping model returned.
We have three versions of the data model: 1.0, 1.1, and 1.1.1.
We have defined three mapping models: 1.0-1.1, 1.0-1.1.1, and 1.1-1.1.1.
The data in the store was created with model version 1.1.
The current model version is 1.1.1.
The mapping model selected is 1.0-1.1.1.
This results in a table that was added in 1.1 being dropped and re-created, losing data.
When I use the -com.apple.CoreData.MigrationDebug 1 command line option, it prints out debugging information for the upgrade process. It shows all the tables in the current dataset (including the one added in 1.1) and their hashes. It then shows the source and destination hashes for the 1.0-1.1.1 mapping model, states that it is a compatible mapping model, and never examines any of the others.
If I load the 1.1-1.1.1 mapping model explicitly with -[NSMappingModel initWithContentsOfURL:], I get the desired behaviour.
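For reference, a minimal sketch of that explicit selection (assuming the compiled mapping model ships as 1.1-1.1.1.cdm in the main bundle, matching the file naming in the logs below):
NSURL *mappingURL = [[NSBundle mainBundle] URLForResource:@"1.1-1.1.1" withExtension:@"cdm"];
// Load this specific mapping model instead of letting Core Data search for one.
NSMappingModel *mappingModel = [[NSMappingModel alloc] initWithContentsOfURL:mappingURL];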
I'm at a bit of a loss, since it seems to me that Core Data should not be willing to use a mapping model whose source hashes do not match the hashes of the current dataset.
Here is some selected output showing that the table that is being reset is in the database, but is not in the mapping model. I have verified that the only difference between the 1.1 data hash and the 1.0 data hash is that 1.1 had an added table.
2014-07-03 23:03:20.547 XXX[50242:60b] CoreData: annotation: (migration) will attempt automatic schema migration
2014-07-03 23:03:21.212 XXX[50242:60b] CoreData: annotation: (migration) looking for mapping model with source hashes:
{
XXX = <77891dab a2acaff3 e52f71e3 0f8c3d5f eee99f43 f549e3fc 72eddd29 b2af83fd>;
XXX = <6049b4fb 7d8c4e5f 63149e35 5abfd274 1264c2f9 76d13cf3 cc69a23a e29edac8>;
XXX = <cf3bd61a 71c2838c 421c6e50 41abd013 b0c153cb b165a0e6 21d5c352 f29b5743>;
XXX = <fd8c6be5 b97b3827 455a620c 3e6ff6e9 e2e09afd 472b9cbf 07d11e29 d5a52159>;
XXX = <6cf5aac1 67ead46a fbaf8450 11c2c0b9 dcc1e2ae dd3bbf86 06d09b78 4d4b6bbe>;
XXX = <09942be7 56f82126 d48a90b2 e6cf08e7 1fe9c091 1ee7fec8 8d426ca4 a00af268>;
XXX = <40462ca4 098ae4d2 d3e8e7cf a55bc7df ca58c8c9 3aaf8d94 b681080c 63e5683b>;
XXX = <dce53740 e8aba89f ac8180b4 0f297821 d09734a1 8ea3c344 8cb9dd6c d3baf645>;
XXX = <c9f7a2e3 13518dac 5ae5209d 239d1c11 0fd3f11f 5366b7d4 fd3a97d3 3e3d41d1>;
XXX = <e5bf6c2f c0c9d818 e1b4e2cc 9b7a92e3 2cd6bed4 5a98e6c3 53619376 9b3951ef>;
XXX = <5431c046 d30e7f32 c2cc8099 58add1e7 579ad104 a3aa8fc4 846e97d7 af01cc79>;
XXX = <3aef2c65 c9647274 1302fe8a fdca7ce6 cb87c7df 2751a19f bd946707 c8244729>;
THIS_ONE = <1bb3a3b9 857bcdf2 ca573238 a86672b8 0486929e ed0357c8 72879022 e12efe37>;
XXX = <9f627411 6f8c0891 3693eab1 aa45cbd3 0143b28e 1e3584da e6ea2867 554a26ad>;
XXX = <9def8e9f 14dfb358 b5694bce 77759b7c c1901fe1 3e3a163b 80061b51 268089a8>;
XXX = <06d0b355 4fb4ff4b 0adf05d4 8ce0378c 4aa156e9 a09c8a16 a82d8376 1fd2f929>;
XXX = <fb9db76f 350ad944 88e1cf5d 15aaca9c 230355f9 13a2dace 62d5e4fb 0a2ecd7a>;
XXX = <715d9149 7aea98db fdb3a2fa e1682e12 dfe8f63a d09aac57 301be349 91fffd44>;
XXX = <002c8d92 8bc08e6c eb34fe0c a10ef78e ed050a8e 17a86e63 9911adb8 e2c36df1>;
XXX = <362ea015 28a5c834 47b125c1 c460dd62 f0172785 e024b8aa 17dc544f 66871077>;
XXX = <bd06507d f33ee72d d6bba2d5 c29eb8c5 1f87568b 186ab250 7312c0ec 6f2cd09c>;
XXX = <22ff0e46 f56dbc7a e8e92cf6 9090a451 742517ff 7d29838d 0cd41e9e a3615134>;
XXX = <ec7834a0 987c4c5f df40ade2 73075b11 e329a018 94fe47ca 08f7c9ed 95bb4da1>;
XXX = <c3feffd4 8d223692 d314720a 4496b787 871db7d0 31097cff e1225b9b 6275e613>;
XXX = <e5e3c8aa 5267d778 9fd62dc5 884ef416 5f836890 d82fed79 efd3796d bcb58503>;
XXX = <6705e1bf cac0c2ed b040b64f ca1f6e6e 74332890 907ec136 7a99606c e116a946>;
XXX = <856a10a5 18de663a 1860ea0d c0bd9295 769e4a42 99420fb5 02314b22 f39fe1a4>;
XXX = <35f6c30f a146166c 6e132297 bf463c59 756b8071 49aae2d7 ec6b6de8 fb7f7300>;
XXX = <ec2a9e60 6ae28042 a62429e4 b0ec5939 3734e0ac 9a919421 a9fbede2 031b0bf6>;
XXX = <b08fbebb 9100df77 5aba3640 c8237a5b 4ddbed50 fb6cb28c 439c7e37 9b2ccb4a>;
XXX = <95c8cfb8 a1aafabc 90a9231b 0ef15d85 10e30393 5cfd4921 4db4a12f 511c8977>;
XXX = <4e0fcdb8 4fbf9aa3 684875aa c54a4c5d c02020b2 d29212e4 587069d2 eed3aa31>;
XXX = <ad580044 b972d6ab df963bda ad071ba5 9c82aab5 4007f377 bf8858fe b9bc6274>;
XXX = <4fc9af50 0722da5d 18e0b755 63cf2a04 88e8b2d3 e8196ec2 375171b1 ce40fb4e>;
}
2014-07-03 23:03:21.231 XXX[50242:60b] CoreData: annotation: (migration) checking mapping model /Users/jonathan/Library/Application Support/iPhone Simulator/7.1/Applications/XXX/XXX.app/1.0-1.1.1.cdm
source hashes:
{(
<09942be7 56f82126 d48a90b2 e6cf08e7 1fe9c091 1ee7fec8 8d426ca4 a00af268>,
<40462ca4 098ae4d2 d3e8e7cf a55bc7df ca58c8c9 3aaf8d94 b681080c 63e5683b>,
<ec7834a0 987c4c5f df40ade2 73075b11 e329a018 94fe47ca 08f7c9ed 95bb4da1>,
<b08fbebb 9100df77 5aba3640 c8237a5b 4ddbed50 fb6cb28c 439c7e37 9b2ccb4a>,
<77891dab a2acaff3 e52f71e3 0f8c3d5f eee99f43 f549e3fc 72eddd29 b2af83fd>,
<e5e3c8aa 5267d778 9fd62dc5 884ef416 5f836890 d82fed79 efd3796d bcb58503>,
<4e0fcdb8 4fbf9aa3 684875aa c54a4c5d c02020b2 d29212e4 587069d2 eed3aa31>,
<bd06507d f33ee72d d6bba2d5 c29eb8c5 1f87568b 186ab250 7312c0ec 6f2cd09c>,
<5431c046 d30e7f32 c2cc8099 58add1e7 579ad104 a3aa8fc4 846e97d7 af01cc79>,
<ad580044 b972d6ab df963bda ad071ba5 9c82aab5 4007f377 bf8858fe b9bc6274>,
<c9f7a2e3 13518dac 5ae5209d 239d1c11 0fd3f11f 5366b7d4 fd3a97d3 3e3d41d1>,
<3aef2c65 c9647274 1302fe8a fdca7ce6 cb87c7df 2751a19f bd946707 c8244729>,
<6705e1bf cac0c2ed b040b64f ca1f6e6e 74332890 907ec136 7a99606c e116a946>,
<856a10a5 18de663a 1860ea0d c0bd9295 769e4a42 99420fb5 02314b22 f39fe1a4>,
<6049b4fb 7d8c4e5f 63149e35 5abfd274 1264c2f9 76d13cf3 cc69a23a e29edac8>,
<6cf5aac1 67ead46a fbaf8450 11c2c0b9 dcc1e2ae dd3bbf86 06d09b78 4d4b6bbe>,
<715d9149 7aea98db fdb3a2fa e1682e12 dfe8f63a d09aac57 301be349 91fffd44>,
<e5bf6c2f c0c9d818 e1b4e2cc 9b7a92e3 2cd6bed4 5a98e6c3 53619376 9b3951ef>,
<fb9db76f 350ad944 88e1cf5d 15aaca9c 230355f9 13a2dace 62d5e4fb 0a2ecd7a>,
<35f6c30f a146166c 6e132297 bf463c59 756b8071 49aae2d7 ec6b6de8 fb7f7300>,
<9def8e9f 14dfb358 b5694bce 77759b7c c1901fe1 3e3a163b 80061b51 268089a8>,
<9f627411 6f8c0891 3693eab1 aa45cbd3 0143b28e 1e3584da e6ea2867 554a26ad>,
<dce53740 e8aba89f ac8180b4 0f297821 d09734a1 8ea3c344 8cb9dd6c d3baf645>,
<95c8cfb8 a1aafabc 90a9231b 0ef15d85 10e30393 5cfd4921 4db4a12f 511c8977>,
<cf3bd61a 71c2838c 421c6e50 41abd013 b0c153cb b165a0e6 21d5c352 f29b5743>,
<ec2a9e60 6ae28042 a62429e4 b0ec5939 3734e0ac 9a919421 a9fbede2 031b0bf6>,
<c3feffd4 8d223692 d314720a 4496b787 871db7d0 31097cff e1225b9b 6275e613>,
<06d0b355 4fb4ff4b 0adf05d4 8ce0378c 4aa156e9 a09c8a16 a82d8376 1fd2f929>,
<4fc9af50 0722da5d 18e0b755 63cf2a04 88e8b2d3 e8196ec2 375171b1 ce40fb4e>,
<002c8d92 8bc08e6c eb34fe0c a10ef78e ed050a8e 17a86e63 9911adb8 e2c36df1>,
<fd8c6be5 b97b3827 455a620c 3e6ff6e9 e2e09afd 472b9cbf 07d11e29 d5a52159>,
<362ea015 28a5c834 47b125c1 c460dd62 f0172785 e024b8aa 17dc544f 66871077>,
<22ff0e46 f56dbc7a e8e92cf6 9090a451 742517ff 7d29838d 0cd41e9e a3615134>
)}
2014-07-03 23:03:21.233 XXX[50242:60b] CoreData: annotation: (migration) found compatible mapping model /Users/jonathan/Library/Application Support/iPhone Simulator/7.1/Applications/XXX/XXX.app/1.0-1.1.1.cdm
I'm using iOS 7.1 and Xcode 5.1.1.
As mentioned, the only change between 1.0 and 1.1 appears to be the addition of a table. I guess the hash comparison function they use does not consider this to be a conflict? My next attempt will be to rename the migrations on the assumption that an alphabetical search is being used so that the 1.1-1.1.1 migration is found and checked first. Other than that I expect I will have to add some sort of manual logic (by subclassing something?) to force the additional table to be treated as a schema mismatch.

This is not the answer as to why the automatic selection of the mapping model fails. As I wrote in my comment, I encountered a similar issue and wrote my own (manual) mapping model selection.
For the manual selection we need to know which model was used to create the current store.
All Entities (tables) have a version hash, so this hash can be used to identify the model version.
After the app starts up, but before the persistent store is loaded, the mapping model selection loops through the possible data models, i.e. 1.0.xcdatamodel, 1.1.xcdatamodel, and 1.1.1.xcdatamodel.
The logic compares the known hash of the entity THIS_ONE against the version hash found in the persistent store. The known version hash THIS_ONE comes from the data model file. If the data model was used for creating the store then the hashes match.
So the app loops through a list of known model names (i.e. "1.0", "1.1", and "1.1.1") and calls the method isModel:forStore: below. If it returns YES, we have found the matching model.
Now that we know the source data model, we can identify the mapping model. The next step is to kick off the actual migration with the appropriate mapping model, using
migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:destinationType:destinationOptions:error:
Here is the method to match the version hashes:
-(BOOL)isModel:(NSString *)modelUsed forStore:(NSURL *)storeUrl {
    NSString *modelFound = @"unknown Model";
    NSDictionary *knownTHIS_ONEHashes = [self knownTHIS_ONEHashes];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    BOOL doesExistCurrentStore = [fileManager fileExistsAtPath:[storeUrl path]];
    if (doesExistCurrentStore) {
        NSError *error = nil;
        // Read the store metadata without opening the store itself.
        NSDictionary *storeMetadata = [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType URL:storeUrl error:&error];
        if (nil == storeMetadata) { // no source metadata => don't know if we need to migrate
            NSLog(@"sourceMetadata is nil");
        } else {
            NSLog(@"sourceMetadata is %@", storeMetadata);
            NSDictionary *storeHashes = [storeMetadata objectForKey:NSStoreModelVersionHashesKey];
            NSData *currentTHIS_ONEHash = storeHashes[@"THIS_ONE"];
            // Compare the store's hash for entity THIS_ONE against each known model's hash.
            for (NSString *modelName in [knownTHIS_ONEHashes allKeys]) {
                if ([knownTHIS_ONEHashes[modelName] isEqualToData:currentTHIS_ONEHash]) {
                    NSLog(@"found matching model: %@", modelName);
                    modelFound = modelName;
                    break;
                }
            }
        }
    } // else the store does not exist, so there is no need for a data migration
    return [modelUsed isEqualToString:modelFound];
}
The known hashes are read at run time from the models present in the bundle. _kModelNameVx are the hard-coded model names.
-(NSDictionary *)knownTHIS_ONEHashes {
    NSMutableDictionary *returnDict = [NSMutableDictionary new];
    NSArray *knownModelFiles = @[_kModelNameV1, _kModelNameV2, _kModelNameV3, _kModelNameV4];
    NSString *destinationModelPath;
    NSURL *destinationModelURL;
    NSManagedObjectModel *destinationModel;
    for (NSString *singleFile in knownModelFiles) {
        // Each model version is compiled to a .mom file inside the .momd bundle.
        destinationModelPath = [[NSBundle mainBundle] pathForResource:singleFile
                                                               ofType:@"mom"
                                                          inDirectory:@"<xcdatamodel_name>.momd"];
        destinationModelURL = [NSURL fileURLWithPath:destinationModelPath];
        destinationModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:destinationModelURL];
        NSDictionary *modelMetadata = [destinationModel entityVersionHashesByName];
        NSData *THIS_ONEHash = [modelMetadata objectForKey:@"THIS_ONE"];
        [returnDict setValue:THIS_ONEHash forKey:singleFile];
    }
    return returnDict;
}
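Putting the pieces together, here is a minimal sketch of how the detected source model can drive the manual migration. The method name, the temporary destination URL, and treating _kModelNameV4 as the current version are assumptions for illustration, not part of the original code:
-(BOOL)migrateStoreIfNeeded:(NSURL *)storeUrl {
    // Find the model version the store was created with.
    NSString *sourceModelName = nil;
    for (NSString *modelName in @[_kModelNameV1, _kModelNameV2, _kModelNameV3, _kModelNameV4]) {
        if ([self isModel:modelName forStore:storeUrl]) {
            sourceModelName = modelName;
            break;
        }
    }
    // Already on the current version (or no store at all): nothing to migrate.
    if (sourceModelName == nil || [sourceModelName isEqualToString:_kModelNameV4]) {
        return YES;
    }
    // Load the source and destination models from the .momd bundle.
    NSURL *sourceURL = [[NSBundle mainBundle] URLForResource:sourceModelName withExtension:@"mom" subdirectory:@"<xcdatamodel_name>.momd"];
    NSURL *destURL = [[NSBundle mainBundle] URLForResource:_kModelNameV4 withExtension:@"mom" subdirectory:@"<xcdatamodel_name>.momd"];
    NSManagedObjectModel *sourceModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:sourceURL];
    NSManagedObjectModel *destModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:destURL];
    // Pick the mapping model by name, e.g. "1.1-1.1.1.cdm".
    NSString *mappingName = [NSString stringWithFormat:@"%@-%@", sourceModelName, _kModelNameV4];
    NSURL *mappingURL = [[NSBundle mainBundle] URLForResource:mappingName withExtension:@"cdm"];
    NSMappingModel *mappingModel = [[NSMappingModel alloc] initWithContentsOfURL:mappingURL];
    // Migrate into a temporary store; swapping it in place of the original is omitted here.
    NSURL *tempURL = [storeUrl URLByAppendingPathExtension:@"new"];
    NSMigrationManager *manager = [[NSMigrationManager alloc] initWithSourceModel:sourceModel destinationModel:destModel];
    NSError *error = nil;
    return [manager migrateStoreFromURL:storeUrl
                                   type:NSSQLiteStoreType
                                options:nil
                       withMappingModel:mappingModel
                       toDestinationURL:tempURL
                        destinationType:NSSQLiteStoreType
                     destinationOptions:nil
                                  error:&error];
}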

Related

Saxonica - .NET API - XQuery - XPDY0002: The context item for axis step root/descendant::xxx is absent

I'm getting the same error as in this question, but with XQuery:
SaxonApiException: The context item for axis step ./CLIENT is absent
When running from the command line, all is good. So I don't think there is a syntax problem with the XQuery itself. I won't post the input file unless needed.
The XQuery is displayed with a Console.WriteLine before the error appears:
----- Start: XQUERY:
(: FLWOR = For Let Where Order-by Return :)
<MyFlightLegs>
{
for $flightLeg in //FlightLeg
where $flightLeg/DepartureAirport = 'OKC' or $flightLeg/ArrivalAirport = 'OKC'
order by $flightLeg/ArrivalDate[1] descending
return $flightLeg
}
</MyFlightLegs>
----- End : XQUERY:
Error evaluating (<MyFlightLegs {for $flightLeg in root/descendant::FlightLeg[DepartureAirport = "OKC" or ArrivalAirport = "OKC"] ... return $flightLeg}/>) on line 4 column 20
XPDY0002: The context item for axis step root/descendant::FlightLeg is absent
I think that like the other question, maybe my input XML file is not properly specified.
I started from the run method of the XQueryToStream class in samples/cs/ExamplesHE.cs.
Code there for easy reference is:
public class XQueryToStream : Example
{
    public override string testName
    {
        get { return "XQueryToStream"; }
    }

    public override void run(Uri samplesDir)
    {
        Processor processor = new Processor();
        XQueryCompiler compiler = processor.NewXQueryCompiler();
        compiler.BaseUri = samplesDir.ToString();
        compiler.DeclareNamespace("saxon", "http://saxon.sf.net/");
        XQueryExecutable exp = compiler.Compile("<saxon:example>{static-base-uri()}</saxon:example>");
        XQueryEvaluator eval = exp.Load();
        Serializer qout = processor.NewSerializer();
        qout.SetOutputProperty(Serializer.METHOD, "xml");
        qout.SetOutputProperty(Serializer.INDENT, "yes");
        qout.SetOutputStream(new FileStream("testoutput.xml", FileMode.Create, FileAccess.Write));
        Console.WriteLine("Output written to testoutput.xml");
        eval.Run(qout);
    }
}
I changed it to take the XQuery file name, the XML file name, and the output file name, and tried to make a static method out of it. (I had success doing the same with the XSLT processor.)
static void DemoXQuery(string xmlInputFilename, string xqueryInputFilename, string outFilename)
{
    // Create a Processor instance.
    Processor processor = new Processor();

    // Load the source document
    DocumentBuilder loader = processor.NewDocumentBuilder();
    loader.BaseUri = new Uri(xmlInputFilename);
    XdmNode indoc = loader.Build(loader.BaseUri);

    XQueryCompiler compiler = processor.NewXQueryCompiler();
    //BaseUri is inconsistent with Transform= Processor?
    //compiler.BaseUri = new Uri(xqueryInputFilename);
    //compiler.DeclareNamespace("saxon", "http://saxon.sf.net/");
    string xqueryFileContents = File.ReadAllText(xqueryInputFilename);
    Console.WriteLine("----- Start: XQUERY:");
    Console.WriteLine(xqueryFileContents);
    Console.WriteLine("----- End : XQUERY:");

    XQueryExecutable exp = compiler.Compile(xqueryFileContents);
    XQueryEvaluator eval = exp.Load();
    Serializer qout = processor.NewSerializer();
    qout.SetOutputProperty(Serializer.METHOD, "xml");
    qout.SetOutputProperty(Serializer.INDENT, "yes");
    qout.SetOutputStream(new FileStream(outFilename, FileMode.Create, FileAccess.Write));
    eval.Run(qout);
}
Also, two questions regarding "BaseUri":
1. Should it be a directory name, or can it be the same as the XQuery file name?
2. I get the compile error "Cannot implicitly convert type 'System.Uri' to 'string'" on this line:
compiler.BaseUri = new Uri(xqueryInputFilename);
It's exactly the same thing I did for XSLT, which worked. It looks like BaseUri is a string for XQuery but a real Uri object for XSLT? Any reason for the difference?
You seem to be asking a whole series of separate questions, which are hard to disentangle.
Your C# code appears to be compiling the query
<saxon:example>{static-base-uri()}</saxon:example>
which bears no relationship to the XQuery code you supplied that involves MyFlightLegs.
The MyFlightLegs query uses //FlightLeg and is clearly designed to run against a source document containing a FlightLeg element, but your C# code makes no attempt to supply such a document. You need to add an eval.ContextItem = value statement.
Your second C# fragment creates an input document in the line
XdmNode indoc = loader.Build(loader.BaseUri);
but it doesn't supply it to the query evaluator.
A base URI can be either a directory or a file; resolving relative.xml against file:///my/dir/ gives exactly the same result as resolving it against file:///my/dir/query.xq. By convention, though, the static base URI of the query is the URI of the resource (eg file) containing the source query text.
Yes, there's a lot of inconsistency in the use of strings versus URI objects in the API design. (There's also inconsistency about the spelling of BaseURI versus BaseUri.) Sorry about that; you're just going to have to live with it.
Bottom-line solution, based on Michael Kay's response: I added this line of code after the exp.Load():
eval.ContextItem = indoc;
The indoc object created earlier is the parsed XML input document that the XQuery is run against.
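In context, the relevant part of the DemoXQuery method then reads:
// The query's //FlightLeg steps need a context item; without one Saxon raises XPDY0002.
XQueryExecutable exp = compiler.Compile(xqueryFileContents);
XQueryEvaluator eval = exp.Load();
eval.ContextItem = indoc; // indoc is the XdmNode built by the DocumentBuilder above
Serializer qout = processor.NewSerializer();
qout.SetOutputProperty(Serializer.METHOD, "xml");
qout.SetOutputProperty(Serializer.INDENT, "yes");
qout.SetOutputStream(new FileStream(outFilename, FileMode.Create, FileAccess.Write));
eval.Run(qout);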

Biopython Genbank.Record : trying to understand source code

I am writing a CSV reader to generate GenBank files that capture annotations together with the sequence.
First I used a Bio.SeqRecord and got correctly formatted output, but the SeqRecord class lacks fields that I need.
FEATURES Location/Qualifiers
HCDR1 27..35
HCDR2 50..66
HCDR3 99..109
I switched to Bio.GenBank.Record and have the needed fields, except now the annotation formatting is wrong. It can't have the extra "type:", "location:", and "qualifiers:" text, and the information should all be on one line.
FEATURES Location/Qualifiers
type: HCDR1
location: [26:35]
qualifiers:
type: HCDR2
location: [49:66]
qualifiers:
type: HCDR3
location: [98:109]
qualifiers:
The code for pulling annotations is the same for both versions. Only the class changed.
from Bio.GenBank.Record import Record
from Bio.SeqFeature import SeqFeature, FeatureLocation
import datetime
import getpass

# Read csv entries and create a container with the data
container = Record()
container.locus = row['Sample']
container.size = len(row['Seq'])
container.residue_type = "PROTEIN"
container.data_file_division = "PRI"
container.date = datetime.date.today().strftime("%d-%b-%Y")  # today's date
container.definition = row['FullCloneName']
container.accession = [row['Vgene'], row['HCDR3']]
container.version = getpass.getuser()
container.keywords = [row['ProjectName']]
container.source = "test"
container.organism = "Homo Sapiens"
container.sequence = row['Seq']

CDRS = ["HCDR1", "HCDR2", "HCDR3"]
for CDR in CDRS:
    start = row['Seq'].find(row[CDR])
    end = start + len(row[CDR])
    feature = SeqFeature(FeatureLocation(start=start, end=end), type=CDR)
    container.features.append(feature)
I have looked at the source code for Bio.GenBank.Record but can't figure out why the SeqFeature class has different formatting output compared to Bio.SeqRecord.
Is there an elegant fix or do I write a separate tool to reformat the annotations in the Genbank file?
After reading the source code again, I discovered that Bio.GenBank.Record has its own Feature class that takes the key and location as strings. These are formatted correctly in the output GenBank file.
from Bio.GenBank.Record import Feature

CDRS = ["HCDR1", "HCDR2", "HCDR3"]
for CDR in CDRS:
    start = row['Seq'].find(row[CDR])
    end = start + len(row[CDR])
    feature = Feature()
    feature.key = CDR
    feature.location = "{}..{}".format(start + 1, end)  # GenBank locations are 1-based, inclusive
    container.features.append(feature)
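For completeness, a minimal sketch of writing the finished record to disk (the output file name is illustrative; Bio.GenBank.Record.Record renders itself in GenBank format via str()):

with open("output.gb", "w") as handle:
    handle.write(str(container))  # the Record's string representation is GenBank-formatted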

How to fetch all the records every minute from a SQL table using Apache Flume

I am trying to get all the data from a SQL table every minute using Flume.
Can someone please suggest what config changes need to be made?
Config:
agent.channels = ch1
agent.sinks = kafkaSink
agent.sources = sql-source
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1000000
agent.sources.sql-source.channels = ch1
agent.sources.sql-source.type = org.keedio.flume.source.SQLSource
# URL to connect to database
agent.sources.sql-source.connection.url = jdbc:sybase:Tds:abcServer:4500
# Database connection properties
agent.sources.sql-source.user = user
agent.sources.sql-source.password = XXXXXXX
agent.sources.sql-source.table = person
agent.sources.sql-source.columns.to.select = *
# Increment column properties
agent.sources.sql-source.incremental.column.name = person_id
# Increment value from which you want to start taking data (0 will import the entire table)
agent.sources.sql-source.incremental.value = 0
# Query delay: the query will be sent every configured number of milliseconds
agent.sources.sql-source.run.query.delay=1000
# Status file is used to save the last read row
agent.sources.sql-source.status.file.path = /dump/apache-flume-1.6.0-bin
agent.sources.sql-source.status.file.name = sql-source.status
Change the value of agent.sources.sql-source.run.query.delay to 60000, i.e. one query per minute:
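# send the query every 60000 ms = once per minute
agent.sources.sql-source.run.query.delay = 60000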

Update sequence number mismatch; requested USN = 2, database USN = 3

I am using FileNet 4.5.1. I have a module in my project where we move the contents of a folder to a newly created folder, and then delete them from the old folder.
ObjectStore objectStore;
ReferentialContainmentRelationship toRcr = null;
ReferentialContainmentRelationship fromRcr = null;
DocumentSet documentSet;
Iterator documentIterator;

documentSet = fromFolder.get_ContainedDocuments();
documentIterator = documentSet.iterator();
Document document;
while (documentIterator.hasNext())
{
    document = (Document) documentIterator.next();
    toRcr = toFolder.file(document, AutoUniqueName.AUTO_UNIQUE, document.getClassName(), DefineSecurityParentage.DO_NOT_DEFINE_SECURITY_PARENTAGE);
    toRcr.save(RefreshMode.REFRESH);
    toFolder.save(RefreshMode.REFRESH);
    fromRcr = fromFolder.unfile(document);
    fromFolder.save(RefreshMode.REFRESH);
}
But here toFolder.save(RefreshMode.REFRESH); is not executed properly and an exception is thrown:
Exception in FNServices.getOldFileFolderObject() : The object {ADF64C74-F80D-4BD7-8A58-86699C66BFAC} has been modified since it was retrieved. Update sequence number mismatch; requested USN = 2, database USN = 3.
Here, the object refers to the newly created folder.
Judging from IBM documentation, I believe you should create your folder first, and then worry about the filing after.
ObjectStore objectStore;
ReferentialContainmentRelationship toRcr = null;
ReferentialContainmentRelationship fromRcr = null;
DocumentSet documentSet;
Iterator documentIterator;

documentSet = fromFolder.get_ContainedDocuments();
documentIterator = documentSet.iterator();
Document document;
toFolder.save(RefreshMode.REFRESH);
fromFolder.save(RefreshMode.REFRESH);
while (documentIterator.hasNext())
{
    document = (Document) documentIterator.next();
    toRcr = toFolder.file(document, AutoUniqueName.AUTO_UNIQUE, document.getClassName(), DefineSecurityParentage.DO_NOT_DEFINE_SECURITY_PARENTAGE);
    toRcr.save(RefreshMode.REFRESH);
    fromRcr = fromFolder.unfile(document);
    fromRcr.save(RefreshMode.REFRESH);
}
Take a look here: Working with Containment

Twitter JSON losing quotes around properties?

Here's part of my JSON:
[
UserJSONImpl{
id=1489761876,
name='CharlesPerin',
screenName='charles_perin',
location='Paris,
France',
description='PhdStudentatINRIA-Univ.Paris-Sud-CNRS-LIMSI#infovis#dataviz#hci',
isContributorsEnabled=false,
profileImageUrl='http: //a0.twimg.com/profile_images/3766400220/bbced44afe69e60eb30e00f593a2f3b5_normal.jpeg',
profileImageUrlHttps='https: //si0.twimg.com/profile_images/3766400220/bbced44afe69e60eb30e00f593a2f3b5_normal.jpeg',
url='http: //t.co/eYSy04EzEk',
isProtected=false,
},
UserJSONImpl{
id=19671465,
name='KevinQuealy',
screenName='KevinQ',
location='NewYork,
NY',
description='AgraphicseditorattheNewYorkTimes.AdjunctatNYU#SHERP.ReturnedPeaceCorpsvolunteer.Bald,
Minnesotan,
talkstoomuch.',
isContributorsEnabled=false,
profileImageUrl='http: //a0.twimg.com/profile_images/2213326305/image_normal.jpg',
profileImageUrlHttps='https: //si0.twimg.com/profile_images/2213326305/image_normal.jpg',
url='http: //t.co/vb0j99kE3N',
isProtected=false,
...(cont)
This was returned directly from a call to twitter4j's lookupUsers:
long[] hundredIDs = new long[100];
org.json.JSONArray users = new org.json.JSONArray();
for (int a = 0; a < (int)((double)friendArray.length()/100 + 1); a++)
{
    // guard against running past the end of friendArray on the last batch
    for (int j = 100*a; j < Math.min(100*(a+1), friendArray.length()); j++)
    {
        hundredIDs[j - 100*a] = Long.parseLong(friendArray.getString(j));
    }
    users = new org.json.JSONArray(twitter.lookupUsers(hundredIDs)); // look up users in batches of 100
    for (int k = 0; k < users.length(); k++)
    {
        org.json.JSONObject user = users.getJSONObject(k);
        if (Long.parseLong(user.getString("followers_count")) >= 500)
        {
            String id = user.getString("id"); // get id for each JSONObject
            friendArrayFiltered.add(id); // store ids in another array
        }
    }
}
For some reason, the JSON returned by my code doesn't have the standard quotes around the property names ("id": ..., rather than id=...).
Does anyone know what the problem is?
Also, not sure if this is a consequence but when I attempt to access individual elements of the JSONArray (like JSONArray[0]), an error is returned saying JSONArray[0] is not a JSONObject. Is this linked to the above problem?
It's not JSON; it's actually generated by the UserJSONImpl#toString() method, which provides a textual representation of each of the User objects returned by the lookupUsers invocation.
As for your second problem, you cannot use the [] operator on Object types in Java so I'm a little unclear what you mean without further information.
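If you meant indexing into the org.json.JSONArray, the accessor methods are the way to do it, for example:
org.json.JSONObject first = users.getJSONObject(0); // there is no [] indexing on JSONArray in Java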
ASIDE
I'm not sure why you are wrapping twitter4j objects in JSONArray and JSONObject objects - of course you may have a good reason for doing this that's not apparent in the question - but you can simply use the methods directly on the returned objects to get the information you need, for example:
final List<User> users = twitter.lookupUsers(hundredIDs);
for (User user : users) {
    final int followersCount = user.getFollowersCount();
    if (followersCount > 500) {
        ... etc...
Check out the User JavaDocs and wider documentation for the project.
