How to execute a JS function with Rhino and env.rhino.js?

Scriptable envGlobals;
InputStreamReader envReader = new InputStreamReader(getClass()
        .getResourceAsStream("env.rhino.js"));
// InputStreamReader jqueryReader = new InputStreamReader(getClass()
//         .getResourceAsStream("jquery-1.6.2.js"));
try {
    Context cx = ContextFactory.getGlobal().enterContext();
    try {
        Global global = new Global();
        global.init(cx);
        cx.setOptimizationLevel(-1);
        cx.setLanguageVersion(Context.VERSION_1_7);
        envGlobals = cx.initStandardObjects(global);
        try {
            cx.evaluateReader(envGlobals, envReader, "env.rhino.js", 1, null);
            // cx.evaluateReader(envGlobals, jqueryReader, "jquery-1.6.2.js", 1, null);
        } catch (IOException e) {
            // Note: swallowing the exception here hides evaluation failures.
        }
    } finally {
        Context.exit();
    }
} finally {
    try {
        envReader.close();
    } catch (IOException e) {
        // Ignored: nothing useful to do if closing the reader fails.
    }
}
/**
 * The above code nicely evaluates env.rhino.js and provides a scope
 * object (envGlobals). Then for each script I want to evaluate
 * against env.rhino.js's global scope:
 */
Context scriptContext = ContextFactory.getGlobal().enterContext();
try {
    // Create a global scope for the dependency we're processing
    // and assign our prototype to the environment globals
    // (the env.js defined globals, the console globals etc.). This
    // then allows us to (a) not have to re-establish commonly
    // used globals, i.e. we can re-use them in our loop; and (b)
    // guarantee that any global assignments come from
    // the dependency itself (which is what we're trying to
    // determine here).
    Scriptable globalScope = scriptContext.newObject(envGlobals);
    globalScope.setPrototype(envGlobals);
    globalScope.setParentScope(null);
    scriptContext.setOptimizationLevel(-1);
    scriptContext.setLanguageVersion(Context.VERSION_1_7);
    try {
        //scriptContext.evaluateString(globalScope, "window.location='http://www.amazon.com'", "location", 1, null);
        scriptContext.evaluateString(globalScope, tree.toSource(), "script document", 1, null);
        System.out.println(scriptContext.toString());
        // TODO: Do something useful with the globals.
    } finally {
        Context.exit();
    }
....
Function f = (Function) fObj;
Object result = f.call(scriptContext, globalScope, globalScope, params);
With this approach, I always get the following exception:
Exception in thread "main" java.lang.NullPointerException
at org.mozilla.javascript.Interpreter.interpret(Interpreter.java:849)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:164)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:426)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3178)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:162)
at org.sdc.food.parse.util.JavaScriptParser.getExecutableJS(JavaScriptParser.java:244)
at org.sdc.food.parse.util.JavaScriptParser.main(JavaScriptParser.java:349)
Please, can someone help me?

I have solved this. The reason is that the params passed to the called function were wrong.
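For reference, the same class of failure is easy to reproduce with plain Java reflection: an argument array that doesn't match what the callee expects fails inside the invocation machinery rather than at the call site, much like Rhino's interpreter did here. A minimal, self-contained sketch (the class and method names are illustrative, not from the original code):

```java
import java.lang.reflect.Method;

public class ArgMismatchDemo {
    // A target method that expects exactly one String argument.
    public static String greet(String name) {
        return "hello " + name;
    }

    // Invokes greet() via reflection with the given argument array,
    // returning the result or the exception's class name.
    public static String call(Object[] args) {
        try {
            Method m = ArgMismatchDemo.class.getMethod("greet", String.class);
            return (String) m.invoke(null, args);
        } catch (ReflectiveOperationException | IllegalArgumentException e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(call(new Object[] { "world" }));
        System.out.println(call(new Object[] {})); // wrong number of arguments
    }
}
```

The analogous check on the Rhino side is making sure the Object[] handed to Function.call lines up with what the JavaScript function actually dereferences.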

Related

Problems detecting collision (Eyeshot) in an array of Tasks

I'm trying to check each movement in a different task, and after checking whether there was a collision, some iterations generate the exception "Source array was not long enough. Check the index and length, as well as the array's lower bounds."
If I run the checks sequentially in a plain for loop the error does not occur, but I need to run them in parallel to increase performance.
In debugging tests I noticed that the error always occurs when trying to run cd.DoWork().
private void btn_Tasks_Click(object sender, EventArgs e)
{
    // The source of your work items: create a sequence of Task instances.
    Task[] tasks = Enumerable.Range(0, tabelaPosicao.Count).Select(i =>
        // Create task here.
        Task.Run(() =>
        {
            VerifiCollision(i);
        })
        // No signalling, no anything.
    ).ToArray();
    // Wait on all the tasks.
    Task.WaitAll(tasks);
}

private void VerifiCollision(object x)
{
    int xx = (int)x;
    int AuxIncrMorsa = Convert.ToInt32(tabelaPosicao[xx].Posicao) * -1;
    bRef_BaseMorsa.Transformation = new Translation(0, AuxIncrMorsa, 0);
    CollisionDetection cd = new CollisionDetection(new List<Entity>() { bRef_BaseMorsa },
        new List<Entity>() { bRef_Matriz }, model1.Blocks, true,
        CollisionDetection2D.collisionCheckType.OBWithSubdivisionTree,
        maxTrianglesNumForOctreeNode: 5);
    if (cd != null)
    {
        try
        {
            cd.DoWork();
        }
        catch (AggregateException ae) // the more specific catch must come first
        {
            var message = ae.Message;
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.StackTrace);
        }
    }
    model1.Entities.ClearSelection();
    if (cd.Result != null && cd.Result.Count > 0)
    {
        tabelaPosicao[xx].Tuple = new Tuple<string, string>(cd.Result[0].Item1.ParentName,
            cd.Result[0].Item2.ParentName);
    }
}
Before applying the transformation you need to clone the entity.
You can have a look at the "WORKFLOW" topic of this article.
I solved it by cloning the Blocks and BlockReference, so each iteration performs its transformation on its own copy, and there is no possibility that the transformation of one iteration interferes with another. Grateful for the help.
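The cloning fix generalizes: when tasks run in parallel, each one needs its own copy of any state it transforms, otherwise concurrent writes interfere. A minimal Java sketch of the same idea (hypothetical names, plain java.util.concurrent, nothing Eyeshot-specific), where invokeAll plays the role of Task.WaitAll:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerTaskCopyDemo {
    // Applies a per-task translation to a *clone* of the shared base position,
    // so no task ever mutates state another task is reading.
    public static List<int[]> transformInParallel(int[] basePosition, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<int[]>> jobs = new ArrayList<>();
            for (int i = 0; i < tasks; i++) {
                final int offset = i;
                jobs.add(() -> {
                    int[] copy = basePosition.clone(); // the "clone before transforming" step
                    copy[1] -= offset;                 // per-task translation on the copy
                    return copy;
                });
            }
            List<int[]> results = new ArrayList<>();
            for (Future<int[]> f : pool.invokeAll(jobs)) { // waits, like Task.WaitAll
                results.add(f.get());
            }
            return results;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        for (int[] p : transformInParallel(new int[] { 0, 10, 0 }, 3)) {
            System.out.println(Arrays.toString(p));
        }
    }
}
```

Each task clones the base position before translating it, which is exactly what cloning the Blocks and BlockReference achieved above.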

How to create constraints, indexes and nodes in a single procedure/plugin call?

Similar code creates the indexes and millions of nodes in the respective methods. This is for creating a fresh DB from a JSON file.
I encounter the following error:
Exception: Cannot perform data updates in a transaction that has performed schema updates. Doesn't simply beginning a transaction and closing it work?
After some time the session crashes in the CreateNodes() method.
How exactly do we separate the schema creation from the data updates?
Also see the question I posted earlier trying to get a similar answer, but with no success (I tried both injecting GraphDatabaseService and using the Bolt Driver; the result is the same):
How to use neo4j bolt session/transaction in a procedure as plugin for neo4j server extension?
for (int command = 4; command < inputNeo4jCommands.size(); command++) {
    log.info(inputNeo4jCommands.get(command));
    NEO4JCOMMANDS cmnd = NEO4JCOMMANDS.valueOf(inputNeo4jCommands.get(command).toUpperCase());
    log.info(cmnd.toString());
    if (NEO4JCOMMANDS.CONSTRAINT.equals(cmnd)) {
        CreateConstraints1();
    }
    if (NEO4JCOMMANDS.INDEX.equals(cmnd)) {
        CreateIndexes();
    }
    if (NEO4JCOMMANDS.MERGE.equals(cmnd)) {
        log.info("started creating nodes........");
        CreateNodes();
    }
}

private void CreateIndexes1() {
    log.info("Adding indexes.....");
    log.info("into started adding index ......");
    try (Transaction tx = db.beginTx()) {
        log.info("got a transaction .....hence started adding index ......");
        Iterator<Indx> itIndex = json2neo4j.getIndexes().iterator();
        while (itIndex.hasNext()) {
            Indx indx = itIndex.next();
            Label lbl = Label.label(indx.getLabelname());
            Iterable<IndexDefinition> indexes = db.schema().getIndexes(lbl);
            if (indexes.iterator().hasNext()) {
                for (IndexDefinition index : indexes) {
                    for (String key : index.getPropertyKeys()) {
                        if (!key.equals(indx.getColName())) {
                            db.schema().indexFor(lbl).on(indx.getColName());
                        }
                    }
                }
            } else {
                db.schema().indexFor(lbl).on(indx.getColName());
            }
            tx.success();
            tx.close();
        }
        log.info("\nIndexes Created..................Returned the method call ");
    }
}
A lot of context is missing from your question and code examples, so it's hard to give a definite answer. Where is the exception thrown in the code example? There's no CreateNodes() method, so we can't find out why it's failing (Out Of Memory Error due to a transaction too large?).
However, there's an issue with your transaction management in the CreateIndexes1() method (not following the Java naming conventions, by the way):
try (Transaction tx = db.beginTx()) {
    // ...
    while (/* ... */) {
        // ...
        tx.success();
        tx.close();
    }
}
You're closing the transaction multiple times, when it's actually in a try-with-resources block where you don't need to close it yourself at all:
try (Transaction tx = db.beginTx()) {
    // ...
    while (/* ... */) {
        // ...
    }
    tx.success();
}
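The close-once semantics can be demonstrated with a minimal AutoCloseable (plain Java, no Neo4j; the names are made up for the sketch):

```java
public class CloseOnceDemo {
    // A resource that counts how many times it is closed.
    static class CountingResource implements AutoCloseable {
        int closes = 0;
        @Override
        public void close() { closes++; }
    }

    // Correct: let try-with-resources do the closing.
    public static int wellManaged() {
        CountingResource r = new CountingResource();
        try (CountingResource res = r) {
            // ... work ...
        }
        return r.closes; // closed exactly once, on block exit
    }

    // Flawed: closing inside the loop, then again on block exit.
    public static int doubleClosed(int iterations) {
        CountingResource r = new CountingResource();
        try (CountingResource res = r) {
            for (int i = 0; i < iterations; i++) {
                res.close(); // mirrors tx.close() inside the while loop
            }
        }
        return r.closes; // iterations + 1 closes in total
    }

    public static void main(String[] args) {
        System.out.println(wellManaged());
        System.out.println(doubleClosed(3));
    }
}
```

With a real Transaction, the manual tx.close() inside the loop plays the role of the duplicate close here.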
I guess json2neo4j is the deserialization of a JSON file describing the indices to create on labels. The logic is flawed: you create an index for a property as soon as you find an index on another property. Instead, you should first determine whether an index for the current property already exists, and only create it if it's missing:
for (Indx indx : json2neo4j.getIndexes()) {
    Label lbl = Label.label(indx.getLabelname());
    boolean indexExists = false;
    for (IndexDefinition index : db.schema().getIndexes(lbl)) {
        for (String property : index.getPropertyKeys()) {
            if (property.equals(indx.getColName())) {
                indexExists = true;
                break;
            }
        }
        if (indexExists) {
            break;
        }
    }
    if (!indexExists) {
        // Note: create() is needed to actually build the index;
        // indexFor(...).on(...) only returns an IndexCreator.
        db.schema().indexFor(lbl).on(indx.getColName()).create();
    }
}
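Stripped of the Neo4j API, the difference between the two loops is just a containment check. A toy sketch of the control flow (plain collections, hypothetical property names):

```java
import java.util.Arrays;
import java.util.List;

public class IndexCheckDemo {
    // Flawed shape: schedules a create as soon as it sees an index
    // on a *different* property.
    public static boolean flawedWouldCreate(List<String> indexedProps, String wanted) {
        boolean create = false;
        for (String existing : indexedProps) {
            if (!existing.equals(wanted)) {
                create = true; // fires even when 'wanted' is indexed elsewhere in the list
            }
        }
        return create;
    }

    // Fixed shape: create only when no existing index covers the property.
    public static boolean fixedWouldCreate(List<String> indexedProps, String wanted) {
        return !indexedProps.contains(wanted);
    }

    public static void main(String[] args) {
        List<String> existing = Arrays.asList("name", "email");
        System.out.println(flawedWouldCreate(existing, "email")); // duplicate create
        System.out.println(fixedWouldCreate(existing, "email"));  // no create needed
    }
}
```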

FlumeRpcClient multithreading

I'm trying to understand the correct way to use the Flume RpcClient in a multithreaded application. Information I have found so far indicates that the components are thread safe, but the example in the Flume documentation clouds the issue when it comes to error handling. This code:
public void sendDataToFlume(String data) {
    // Create a Flume Event object that encapsulates the sample data
    Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
    // Send the event
    try {
        client.append(event);
    } catch (EventDeliveryException e) {
        // clean up and recreate the client
        client.close();
        client = null;
        client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a thrift client (instead of the above line):
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }
}
If more than one thread calls this method and the exception is thrown, there will be a problem as multiple threads try to recreate the client in the exception handler.
Is the intent of the SDK that it should only be used by a single thread? Should this method be synchronized, as it appears to be in the log4jappender that is part of the Flume source? Should I put this code in its own worker and pass it events via a queue?
Does anyone have an example of RpcClient being used by more than one thread (including the error condition)?
Would I be better off using the "embedded agent"? Is it multithread-friendly?
With the embedded agent, you get the same case except you don't know what to do:
try {
    agent.put(event);
} catch (EventDeliveryException e) {
    // ???
}
You could stop the agent and restart it - but you would need a synchronized block (or a ReentrantReadWriteLock, to avoid blocking threads while "reading" the client field). But since I'm not a Flume expert, I can't tell you which one is better.
Example:
class MyClass {
    private final ReentrantReadWriteLock lock;
    private final Lock readLock;
    private final Lock writeLock;
    private RpcClient client;
    private final String hostname;
    private final Integer port;

    // Constructor
    MyClass(String hostname, Integer port) {
        this.hostname = Objects.requireNonNull(hostname, "hostname");
        this.port = Objects.requireNonNull(port, "port");
        this.lock = new ReentrantReadWriteLock();
        this.readLock = this.lock.readLock();
        this.writeLock = this.lock.writeLock();
        this.client = buildClient();
    }

    private RpcClient buildClient() {
        return RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Create a Flume Event object that encapsulates the sample data
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        // Send the event
        readLock.lock(); // lock for reading 'client'
        try {
            try {
                client.append(event);
            } catch (EventDeliveryException e) {
                // ReentrantReadWriteLock does not support upgrading, so the
                // read lock must be released before taking the write lock.
                readLock.unlock();
                writeLock.lock(); // lock for writing 'client'
                try {
                    // clean up and recreate the client
                    client.close();
                    client = null;
                    client = buildClient();
                } finally {
                    readLock.lock(); // downgrade: reacquire read before releasing write
                    writeLock.unlock();
                }
            }
        } finally {
            readLock.unlock();
        }
    }
}
Besides, the example will lose the event because it is not resent. Some kind of loop plus a max retry count would probably do the trick:
int i = 0;
for (; i < maxRetry; ++i) {
    try {
        client.append(event);
        break;
    } catch (EventDeliveryException e) {
        // clean up and recreate the client
        client.close();
        client = null;
        client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a thrift client (instead of the above line):
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }
}
if (i == maxRetry) {
    logger.error("flume client is offline, losing events {}", event);
}
That's the idea, but I don't think that should be the user's task (e.g. ours); the client or the agent should offer an option to store events that could not be processed due to such errors.
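To make the retry shape concrete without a Flume dependency, here is a runnable sketch where a stand-in FlakyClient models a transport that stays down for a fixed number of attempts (all names here are invented for the demo):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetryDemo {
    // Stand-in for the Flume RpcClient; the shared counter models a transport
    // that stays down for a fixed number of attempts regardless of reconnects.
    static class FlakyClient {
        private final AtomicInteger transportFailuresLeft;
        FlakyClient(AtomicInteger failuresLeft) { this.transportFailuresLeft = failuresLeft; }
        void append(String event) throws Exception {
            if (transportFailuresLeft.getAndDecrement() > 0) {
                throw new Exception("delivery failed");
            }
        }
        void close() { /* no-op in the sketch */ }
    }

    // The loop-plus-max-retry shape from above: recreate the client on each
    // failure; the return value is the number of failed attempts.
    public static int sendWithRetry(String event, int maxRetry, int downFor) {
        AtomicInteger failuresLeft = new AtomicInteger(downFor);
        FlakyClient client = new FlakyClient(failuresLeft);
        int i = 0;
        for (; i < maxRetry; ++i) {
            try {
                client.append(event);
                break;
            } catch (Exception e) {
                client.close();
                // stand-in for RpcClientFactory.getDefaultInstance(hostname, port)
                client = new FlakyClient(failuresLeft);
            }
        }
        return i; // i == maxRetry means the event was lost
    }

    public static void main(String[] args) {
        System.out.println(sendWithRetry("evt", 3, 2)); // succeeds on the third attempt
        System.out.println(sendWithRetry("evt", 3, 5)); // gives up: event lost
    }
}
```

The return value makes the "i == maxRetry means the event was lost" check directly observable.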

Common way to execute a stored proc from both ColdFusion and Railo

I think I've built the simplest possible scenario. I just want to pass it by everyone for a sanity check. Here's the idea:
GetErrorCodes.cfm does the following:
<cfscript>
    response = new ErrorCodes().WhereXXX(); // ACF or Railo, doesn't matter
</cfscript>
ErrorCodes.cfc:
function WhereXXX() {
    // All my functions will do this instead of executing the sproc themselves.
    return new sproc().exec('app.GetErrorCodes');
}
sproc.cfc:
component {
    function exec(procedure) {
        local.result = {};
        if (server.ColdFusion.productname == 'Railo') {
            // Has to be outside of sproc.cfc because ColdFusion throws a syntax error otherwise.
            return new Railo().exec(arguments.procedure);
        }
        local.svc = new storedProc();
        local.svc.setProcedure(arguments.procedure);
        local.svc.addProcResult(name='qry');
        try {
            local.obj = local.svc.execute();
            local.result.Prefix = local.obj.getPrefix();
            local.result.qry = local.obj.getProcResultSets().qry;
        } catch (any Exception) {
            request.msg = Exception.Detail;
        }
        return local.result;
    }
}
Railo.cfc:
component {
    function exec(procedure) {
        local.result = {};
        try {
            storedproc procedure=arguments.procedure result="local.result.Prefix" returncode="yes" {
                procresult name="local.result.qry";
            }
        } catch (any Exception) {
            request.msg = Exception.Message;
        }
        return local.result;
    }
}
So I've been working on this all day, but tell me, is this a sane way to keep the source code the same if it's to be run on either a ColdFusion server or a Railo server?
Um... just use <cfstoredproc> instead of trying to use two different CFScript approaches that are mutually exclusive across the CFML platforms.

Deferring persistence as device is being used in BlackBerry when listening file change

I tried to listen for file change events in BlackBerry based on the FileExplorer example, but whenever I added or deleted a file, it always showed "Deferring persistence as device is being used" and I couldn't catch anything. Here is my code:
public class FileChangeListenner implements FileSystemJournalListener {
    private long _lastUSN; // = 0;

    public void fileJournalChanged() {
        long nextUSN = FileSystemJournal.getNextUSN();
        String msg = null;
        for (long lookUSN = nextUSN - 1; lookUSN >= _lastUSN && msg == null; --lookUSN) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(lookUSN);
            // We didn't find an entry
            if (entry == null) {
                break;
            }
            // Check if this entry was added or deleted
            String path = entry.getPath();
            if (path != null) {
                switch (entry.getEvent()) {
                    case FileSystemJournalEntry.FILE_ADDED:
                        msg = "File was added.";
                        break;
                    case FileSystemJournalEntry.FILE_DELETED:
                        msg = "File was deleted.";
                        break;
                }
            }
        }
        _lastUSN = nextUSN;
        if (msg != null) {
            System.out.println(msg);
        }
    }
}
Here is the caller:
Thread t = new Thread(new Runnable() {
    public void run() {
        new FileChangeListenner();
        try {
            Thread.sleep(5000);
            createFile();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
});
t.start();
The createFile() method works fine:
private void createFile() {
    try {
        FileConnection fc = (FileConnection) Connector
                .open("file:///SDCard/newfile.txt");
        // If no exception is thrown, then the URI is valid, but the file
        // may or may not exist.
        if (!fc.exists()) {
            fc.create(); // create the file if it doesn't exist
        }
        OutputStream outStream = fc.openOutputStream();
        outStream.write("test content".getBytes());
        outStream.close();
        fc.close();
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
}
and output:
0:00:44.475: Deferring persistence as device is being used.
0:00:46.475: AG,+CPT
0:00:46.477: AG,-CPT
0:00:54.476: VM:+GC(f)w=11
0:00:54.551: VM:-GCt=9,b=1,r=0,g=f,w=11,m=0
0:00:54.553: VM:QUOT t=1
0:00:54.554: VM:+CR
0:00:54.596: VM:-CR t=5
0:00:55.476: AM: Exit net_rim_bb_datatags(291)
0:00:55.478: Process net_rim_bb_datatags(291) cleanup started
0:00:55.479: VM:EVTOv=7680,w=20
0:00:55.480: Process net_rim_bb_datatags(291) cleanup done
0:00:55.481: 06/25 03:40:41.165 BBM FutureTask Execute: net.rim.device.apps.internal.qm.bbm.platform.BBMPlatformManagerImpl$3#d1e1ec79
0:00:55.487: 06/25 03:40:41.171 BBM FutureTask Finish : net.rim.device.apps.internal.qm.bbm.platform.BBMPlatformManagerImpl$3#d1e1ec79
I also tried removing the thread, and creating or deleting files directly in the simulator's SD card, but it doesn't help. Please tell me where my problem is. Thanks.
You instantiate the FileChangeListenner, but you never register it, and also don't keep it as a variable anywhere. You probably need to add this call
FileChangeListenner listener = new FileChangeListenner();
UiApplication.getUiApplication().addFileSystemJournalListener(listener);
You also might need to keep a reference (listener) around for as long as you want to receive events. But maybe not (the addFileSystemJournalListener() call might do that). But, you at least need that call to addFileSystemJournalListener(), or you'll never get fileJournalChanged() called back.
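The registration point generalizes beyond BlackBerry: constructing a listener does nothing until the event source is told about it, and a listener nothing references can also be garbage-collected. A minimal sketch of the pattern in plain Java (EventSource and JournalListener are invented stand-ins, not RIM APIs):

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerDemo {
    interface JournalListener {
        void journalChanged(String path);
    }

    // Minimal event source playing the role of the journal plumbing.
    static class EventSource {
        private final List<JournalListener> listeners = new ArrayList<>();
        void addListener(JournalListener l) { listeners.add(l); }
        void fire(String path) {
            for (JournalListener l : listeners) {
                l.journalChanged(path);
            }
        }
    }

    // Returns how many events a listener observed, with or without registration.
    public static int eventsSeen(boolean register) {
        EventSource source = new EventSource();
        final int[] seen = { 0 };
        JournalListener listener = path -> seen[0]++;
        if (register) {
            source.addListener(listener); // the step the original code is missing
        }
        source.fire("file:///SDCard/newfile.txt");
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println(eventsSeen(false)); // constructed but never registered
        System.out.println(eventsSeen(true));  // registered, so the callback fires
    }
}
```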