Is it possible to create several batches at one time?
For example, I have code with a running batch (batch 1), and inside that batch a method is called that starts another batch (batch 2). The code is not working.
When I remove the outer batch (batch 1), the node is created. Is only one batch possible at a time?
The example code is below:
$batch = $client->startBatch();
$widget = NULL;
try {
    $widgetLabel = $client->makeLabel('Widget');
    $widget = $client->makeNode();
    $widget
        ->setProperty('base_filename', md5(uniqid('', TRUE)))
        ->setProperty('datetime_added', time())
        ->setProperty('current_version', 0)
        ->setProperty('shared', 0)
        ->setProperty('active', 1)
        ->save();
    // add widget history
    $history = Model_History::create($widget, $properties);
    if ($history == NULL) {
        throw new Exception('Could not create widget history!');
    }
    $widget->setProperty('current_version', $history->getID());
    $widget->save();
    $client->commitBatch($batch);
} catch (Exception $e) {
    $client->endBatch();
}
Batch 2 is created inside the Model_History::create() method. I don't get a valid Neo4jphp node back in $widget from this code.
If the second batch is created with another call to $client->startBatch(), it will actually be the same batch object as $batch. If you then call $client->commitBatch() from inside Model_History::create(), it commits the outer batch (since they are the same).
Don't start a second batch in Model_History::create(). Start the outer batch, run all of your code, and commit the batch once at the end.
I need to persist a counter value between executions of a Grails Quartz plugin job. The job runs at the correct intervals, and I can set the jobDataMap and read the value back correctly during the same execution run, but it refuses to remember the value between executions.
I've set concurrent = false as the docs advise. Any ideas? I just need to persist and increment a counter. I want to avoid using a DB if at all possible; I think this should just use memory. Or is there another workaround?
My TestJob.groovy, in /server/grails-app/jobs:
package myPackage
class MyJob {

    static triggers = {
        simple repeatInterval: 5000L // execute job every 5 seconds
    }

    def concurrent = false // don't run multiple simultaneous instances of this job

    def execute(context) {
        if (context.jobDetail.jobDataMap['recCounter'] == null) {
            context.jobDetail.jobDataMap['recCounter'] = 1
        } else {
            context.jobDetail.jobDataMap['recCounter'] = context.jobDetail.jobDataMap['recCounter'] + 1
        }
        println(context.jobDetail.jobDataMap['recCounter'])
    }
}
The output when run is a new line with '1' every 5 seconds; the counter is never incremented:
1
1
1
1
etc..
I'm running Grails 3.3.9 and build.gradle has compile "org.grails.plugins:grails-spring-websocket:2.4.1" in dependencies
Thanks
I have never used the context object in my apps, but a counter can be implemented in a straightforward way:
import java.util.concurrent.atomic.AtomicInteger

class MyJob {
    // some static stuff

    // Grails job classes are singleton Spring beans, so this field lives as
    // long as the application and survives between executions.
    AtomicInteger counter = new AtomicInteger()

    def execute(context) {
        counter.incrementAndGet()
        println counter.intValue()
    }
}
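For completeness: the reason the jobDataMap approach appears to forget the value is that Quartz hands each execution a copy of the JobDataMap and only writes changes back for jobs marked as persistent. If you would rather keep the jobDataMap, a sketch in plain Quartz 2.x terms (CounterJob is a hypothetical name, not from the original post):
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.PersistJobDataAfterExecution;

// Without @PersistJobDataAfterExecution, Quartz discards JobDataMap changes
// made during execute(), which is why the counter kept resetting to 1.
@PersistJobDataAfterExecution
@DisallowConcurrentExecution // the equivalent of 'concurrent = false'
public class CounterJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap map = context.getJobDetail().getJobDataMap();
        int counter = map.containsKey("recCounter") ? map.getInt("recCounter") : 0;
        map.put("recCounter", counter + 1); // written back after execution
        System.out.println(counter + 1);
    }
}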
I need custom behavior for the timeout function. For example, when I use:
timeout(time: 10, unit: 'MINUTES') {
doSomeStuff()
}
it terminates the doSomeStuff() function.
What I want to achieve is not to terminate the execution of the function, but to call another function every 10 minutes until doSomeStuff() is done executing.
I can't use the Build-timeout plugin from Jenkins since I need to apply this behavior to pipelines.
Any help would be appreciated.
In case anyone else has the same issue: after some research, the only way I found to solve my problem was to modify the notification plugin for Jenkins pipelines, adding a new field holding the time (in minutes) by which to delay invoking the URL. In the code where the URL is invoked, I moved those lines into a new thread and let that thread sleep for the needed amount of time before executing the remaining code. Something like this:
@Override
public void onStarted(final Run r, final TaskListener listener) {
    HudsonNotificationProperty property =
            (HudsonNotificationProperty) r.getParent().getProperty(HudsonNotificationProperty.class);
    int invokeUrlTimeout = 0;
    if (property != null && !property.getEndpoints().isEmpty()) {
        invokeUrlTimeout = property.getEndpoints().get(0).getInvokeUrlTimeout();
    }
    int finalInvokeUrlTimeout = invokeUrlTimeout;
    new Thread(() -> {
        try {
            Thread.sleep(finalInvokeUrlTimeout * 60 * 1000L);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            return;
        }
        Executor e = r.getExecutor();
        Phase.QUEUED.handle(r, TaskListener.NULL,
                e != null ? System.currentTimeMillis() - e.getTimeSpentInQueue() : 0L);
        Phase.STARTED.handle(r, listener, r.getTimeInMillis());
    }).start();
}
Maybe not the best solution but it works for me, and I hope it helps other people too.
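As a side note, if many builds can run concurrently, a shared scheduler may be preferable to spawning a raw thread per build. A minimal, standalone sketch of the same delay pattern (the class and message below are illustrative, not plugin code):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedNotifier {
    // One shared scheduler thread instead of a new Thread per build.
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) {
        int invokeUrlTimeout = 10; // minutes; would come from the endpoint configuration
        SCHEDULER.schedule(
                () -> System.out.println("invoking the notification URL now"),
                invokeUrlTimeout, TimeUnit.MINUTES);
    }
}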
I am trying to learn Reactor, but I am having a lot of trouble with it. I wanted to do a very simple proof of concept where I simulate calling a slow downstream service one or more times. If you use Reactor and stream the response, the caller doesn't have to wait for all the results.
So I created a very simple controller, but it is not behaving like I expect. When the delay is inside my flatMap (inside the method I call), the response is not returned until everything is complete. But when I add a delay after the flatMap, the data is streamed.
Why does this code result in a stream of JSON:
@GetMapping(value = "/test", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(@PathVariable String customerId) {
    Integer count = service.getCount(customerId);
    return Flux.range(1, count)
        .flatMap(k -> service.doRestCall(k))
        .delayElements(Duration.ofMillis(5000));
}
But this does not:
@GetMapping(value = "/test2", produces = { MediaType.APPLICATION_STREAM_JSON_VALUE })
Flux<HashMap<String, Object>> customerCards(@PathVariable String customerId) {
    Integer count = service.getCount(customerId);
    return Flux.range(1, count)
        .flatMap(k -> service.doRestCallWithDelay(k));
}
I think I am missing something very basic about the Reactor API. On that note, can anyone point me to a good book or tutorial on Reactor? I can't seem to find anything good to learn this.
Thanks
The code inside the flatMap runs on the main thread (that is, the thread the controller runs on). As a result the whole process is blocked and the method doesn't return immediately. Keep in mind that Reactor doesn't impose a particular threading model.
By contrast, according to the documentation, the delayElements method delays signals and continues on the parallel default Scheduler. That means the main thread is not blocked and returns immediately.
Here are two corresponding examples:
Blocking code:
Flux.range(1, 500)
    .map(i -> {
        // blocking code
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
        return i;
    })
    .subscribe();

System.out.println("main completed");
Result:
main - Item : 1
main - Item : 2
main - Item : 3
...
main - Item : 500
main completed
Non-blocking code:
Flux.range(1, 500)
    .delayElements(Duration.ofSeconds(1))
    .subscribe(i -> {
        System.out.println(Thread.currentThread().getName() + " - Item : " + i);
    });

System.out.println("main Completed");

// sleep the main thread in order to see the flux's println output
try {
    Thread.sleep(30000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
Result:
main Completed
parallel-1 - Item : 1
parallel-2 - Item : 2
parallel-3 - Item : 3
parallel-4 - Item : 4
...
Here is the Project Reactor reference guide.
"delayElements" method only delay flux element by a given duration, see javadoc for more details
I think you should post details about methods "service.doRestCallWithDelay(k)" and "service.doRestCall(k)" if you need more help.
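Building on both answers: if doRestCall is a plain blocking call, a common fix is to wrap it in Mono.fromCallable and shift it onto a scheduler intended for blocking work, so each element is emitted as its call completes rather than after everything finishes. A minimal sketch, reusing the (assumed blocking) service from the question; Schedulers.boundedElastic() requires a recent Reactor, older versions used Schedulers.elastic():
import java.util.HashMap;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

// drop-in replacement for the body of /test2; 'service' is the
// (assumed blocking) bean from the question
Flux<HashMap<String, Object>> customerCards(String customerId) {
    Integer count = service.getCount(customerId);
    return Flux.range(1, count)
        .flatMap(k -> Mono.fromCallable(() -> service.doRestCall(k))
            // run each blocking call on a scheduler meant for blocking
            // work, so the subscribing thread is never blocked
            .subscribeOn(Schedulers.boundedElastic()));
}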
I use BatchInserters.batchDatabase to create an embedded Neo4j 2.1.5 database. When I only put a small amount of data in it, everything works fine.
But if I increase the amount of data, Neo4j fails to persist the latest properties set with setProperty. I can read those properties back with getProperty before I call shutdown, but when I load the database again with new GraphDatabaseFactory().newEmbeddedDatabase, those properties are lost.
The strange thing is that Neo4j doesn't report any error or throw an exception, so I have no clue what's going wrong or where. Java should have enough memory to handle both the small database (2.66 MiB, 3,000 nodes, 3,000 relationships) and the big one (26.32 MiB, 197,267 nodes, 390,659 relationships).
It's hard for me to extract a running example to show the problem, but I can if it helps. Here are the main steps I go through:
def createDataBase(rules: AllRules) {
  // empty the database folder
  deleteFileOrDirectory(new File(mainProjectPathNeo4j))

  // create an index on some properties
  db = new GraphDatabaseFactory().newEmbeddedDatabase(mainProjectPathNeo4j)
  engine = new ExecutionEngine(db)
  createIndex()
  db.shutdown()

  // fill the database
  db = BatchInserters.batchDatabase(mainProjectPathNeo4j)
  //createBatchIndex
  try {
    // every function loads some data
    loadAllModulesBatch(rules)
    loadAllLinkModulesBatch(rules)
    loadFormalModulesBatch(rules)
    loadInLinksBatch()
    loadHILBatch()
    createStandardLinkModules(rules)
    createStandardLinkSets(rules)
    // validateModel shows the problem
    validateModel(rules)
  } catch {
    // I want to see if my environment (BIRT) is catching any exceptions
    case _ => val a = 7
  } finally {
    db.shutdown()
  }
}
validateModel updates some properties of already created nodes:
def validateModule(srcM: GenericModule) {
  srcM.node.setProperty("isValidated", true)
  assert(srcM.node == Neo4jScalaDataSource.testNode)
  assert(srcM.node eq Neo4jScalaDataSource.testNode)
  assert(srcM.node.getProperty("isValidated").asInstanceOf[Boolean])
}
When I finally use Cypher to get some data back, the properties set by validateModel are missing:
class Neo4jScalaDataSet extends ScriptedDataSetEventAdapter {

  override def beforeOpen(...) {
    result = Neo4jScalaDataSource.engine.profile(
      """
      MATCH (fm:FormalModule {isValidated: true})
      RETURN fm.fullName as fullName, fm.uid as uid
      """)
    iter = result.iterator()
  }

  override def fetch(...) = {
    if (iter.hasNext()) {
      for (e <- iter.next().entrySet()) {
        row.setColumnValue(e.getKey(), e.getValue())
      }
      count += 1
      row.setColumnValue("count", count)
      return true
    } else {
      logger.log(Level.INFO, result.executionPlanDescription().toString())
      return super.fetch(dataSet, row)
    }
  }
}
batchDatabase indeed causes this problem.
I switched to BatchInserters.inserter and now everything works just fine.
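For reference, a minimal sketch of the inserter-based approach against the Neo4j 2.x batch-insertion API (the store path, label, and property names below are illustrative, not taken from the project above):
import java.util.HashMap;
import java.util.Map;

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchInserterExample {
    public static void main(String[] args) {
        BatchInserter inserter = BatchInserters.inserter("data/graph.db");
        try {
            Map<String, Object> props = new HashMap<>();
            props.put("fullName", "example");
            long node = inserter.createNode(props, DynamicLabel.label("FormalModule"));
            // later property updates are persisted by the inserter
            inserter.setNodeProperty(node, "isValidated", true);
        } finally {
            inserter.shutdown(); // flushes everything to the store
        }
    }
}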
I am trying to clear out a collection and update it at the same time. It has children, and finding the current items in the collection and deleting them asynchronously would save me a lot of time.
Step 1. Find all the items in the collection.
Step 2. Once I know what the items are, fork a process to delete them.
def memberRedbackCriteria = MemberRedback.createCriteria()
// #1 Find all the items in the collection.
def oldList = memberRedbackCriteria.list { fetchMode("memberCategories", FetchMode.EAGER) }
// #2 Delete them.
Promise deleteOld = task {
    oldList.each { MemberRedback rbMember ->
        rbMember.memberCategories.clear()
        rbMember.delete()
    }
}
The error message is: Illegal attempt to associate a collection with two open sessions
I am guessing that because I find the items and then fork, the collection is loaded in one session and a new session is used by the forked task to delete the items.
I need to collect the items in the current thread; otherwise I am not sure what the state would be.
Note that using one async task for all the deletions effectively runs all the delete operations in series on a single thread. Assuming your database can handle multiple connections and concurrent modification of a table, you could parallelize the deletions by using a PromiseList, as in the following (note: untested code).
def deletePromises = new PromiseList()
redbackIds.each { Long rbId ->
    deletePromises << MemberRedback.async.task {
        withTransaction {
            def memberRedbackCriteria = createCriteria()
            MemberRedback memberRedback = memberRedbackCriteria.get {
                idEq(rbId)
                fetchMode("memberCategories", FetchMode.EAGER)
            }
            memberRedback.memberCategories.clear()
            memberRedback.delete()
        }
    }
}
deletePromises.onComplete { List results ->
    // do something with the results, if you want
}
deletePromises.onError { Throwable err ->
    // do something with the error
}
Found a solution: put the ids into a list and collect them as part of the async closure.
Note also that you cannot reuse the criteria, as per http://jira.grails.org/browse/GRAILS-1967
// #1 Find the ids.
def redbackIds = MemberRedback.executeQuery(
    'select mr.id from MemberRedback mr', [])

// #2 Delete them.
Promise deleteOld = task {
    redbackIds.each { Long rbId ->
        def memberRedbackCriteria = MemberRedback.createCriteria()
        MemberRedback memberRedback = memberRedbackCriteria.get {
            idEq(rbId)
            fetchMode("memberCategories", FetchMode.EAGER)
        }
        memberRedback.memberCategories.clear()
        memberRedback.delete()
    }
}
deleteOld.onError { Throwable err ->
    println "deleteAllRedbackMembers: an error occurred ${err.message}"
}