Can we use the graph database Neo4j with React.js? If not, is there an alternative option for including a graph database in a React.js app?
Easily, all you need is neo4j-driver: https://www.npmjs.com/package/neo4j-driver
Here is the simplest usage:
neo4j.js
//import { v1 as neo4j } from 'neo4j-driver'
const neo4j = require('neo4j-driver').v1
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))
const session = driver.session()
session
  .run(`
    MATCH (n:Node)
    RETURN n AS someName
  `)
  .then((results) => {
    results.records.forEach((record) => console.log(record.get('someName')))
    session.close()
    driver.close()
  })
It is best practice to always close the session as soon as you have the data. Sessions are inexpensive and lightweight.
It is best practice to close the driver only once your program is done with the database (much like MongoDB). Closing the driver at the wrong time produces confusing errors, which is important to know if you are a beginner: you will see messages like 'connection to server closed', etc. In async code, for example, if you run a query and close the driver before the results are parsed, you will have a bad time.
You can see that my example closes the driver afterwards, but only to illustrate proper cleanup. If you run this code in a standalone JS file, Node.js will hang after the query and you will need to press CTRL + C to exit; adding driver.close() fixes that. Normally the driver is not closed until the program exits or crashes (which in a backend API is effectively never), or until the user logs out in the frontend.
Knowing this now, you are off to a great start.
Remember: call session.close() immediately every time, and be careful with driver.close().
You could put this code in a React component or action creator easily and render the data.
You will find it no different than hooking up and working with Axios.
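For example, here is a minimal sketch of a component that fetches and renders data with the driver (the Movie label, query, and component name are placeholders of my own, not from the question):
import React, { useEffect, useState } from 'react'
import { v1 as neo4j } from 'neo4j-driver'

// placeholder connection details; in a real app keep credentials off the client
// (see the note on authentication further down)
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))

function MovieTitles() {
  const [titles, setTitles] = useState([])

  useEffect(() => {
    const session = driver.session()
    session
      .run('MATCH (m:Movie) RETURN m.title AS title')
      .then((results) => {
        setTitles(results.records.map((record) => record.get('title')))
        session.close() // close the session as soon as the data is read
      })
  }, [])

  return (
    <ul>
      {titles.map((title) => (
        <li key={title}>{title}</li>
      ))}
    </ul>
  )
}

export default MovieTitles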
You can also run statements in a transaction, which is beneficial for write-locking the affected nodes. You should research that thoroughly first, but the transaction flow looks like this:
const session = driver.session()
const tx = session.beginTransaction()

// promise style, same as a normal session.run():
tx.run(query)
  .then(/* handle the results as normal */)
  .catch(/* handle errors */)

// the difference is that you can run multiple statements in one transaction
// (async/await style shown here):
const result1 = await tx.run(firstQuery)
// use result1 ...
const result2 = await tx.run(secondQuery)

// then, once you are ready to commit the changes:
if (results.good !== true) { // placeholder check for "did everything succeed?"
  await tx.rollback()
  session.close()
  throw new Error('something went wrong, rolling back')
}
await tx.commit()
session.close()
const finalResults = { result1, result2 }
return finalResults

// in my experience, you have to await tx.commit() in async/await code,
// otherwise it may not commit properly; that operation is not instant
tl;dr:
Yes, you can!
You are mixing two different technologies. Neo4j is a graph database and React.js is a front-end framework.
You can connect to Neo4j from JavaScript - http://neo4j.com/developer/javascript/
Interesting topic. I am using the driver in a React app and recently experienced some issues. I close the session every time a lifecycle hook completes, as in your example. When there were more intensive queries, I would see a timeout error. Going back to my setup, I decided to experiment by closing the driver after some of the more expensive queries, and it looks like (still need more testing) the crashes are gone.
If you are deploying a real-world application, I would urge you to think about authentication and authorization when using a DB-to-React setup only, as you would have to store the username/password of the Neo4j server in the client. I am looking into options for having the Neo4j server issue a token and accept it for authorization, but the best practice is certainly to have a Node.js server in the middle with something like Passport to handle authentication.
So, all in all, maybe the best scenario is to only use the driver in Node and have the browser always communicate with the Node server using axios...
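For example, a rough sketch of that middle layer (the /api/nodes route, port, and query are made-up placeholders):
// server.js - keep neo4j-driver (and the credentials) on the Node side only
const express = require('express')
const neo4j = require('neo4j-driver').v1

const app = express()
const driver = neo4j.driver('bolt://localhost', neo4j.auth.basic('username', 'password'))

// hypothetical endpoint the React app would call with axios.get('/api/nodes')
app.get('/api/nodes', (req, res) => {
  const session = driver.session()
  session
    .run('MATCH (n:Node) RETURN n.name AS name')
    .then((results) => {
      session.close()
      res.json(results.records.map((record) => record.get('name')))
    })
    .catch((err) => {
      session.close()
      res.status(500).json({ error: err.message })
    })
})

app.listen(3000)
The browser then only ever talks to this endpoint, the Neo4j credentials never leave the server, and you can put Passport (or any other auth middleware) in front of the route.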
We are setting up a federated scenario with Server and Client on different physical machines.
On the server, we have used a Docker container to kickstart:
The above has been borrowed from the Kubernetes tutorial. We believe this creates a 'local executor' [Ref 1] which helps create a gRPC server [Ref 2].
Ref 1:
Ref 2:
Next, on client 1, we call tff.framework.RemoteExecutor, which connects to the gRPC server.
Our understanding based on the above is that the Remote Executor runs on the client which connects to the gRPC server.
Assuming the above is correct, how can we send a
tff.tf_computation
from the server to the client and print the output on the client side to ensure the whole setup works well?
Your understanding is definitely correct.
If you construct an ExecutorFactory directly, as seems to be the case in the code above, passing it to tff.framework.set_default_context will install your remote stack as the default mechanism for executing computations in the TFF runtime. You should additionally be able to pass the appropriate channels to tff.backends.native.set_remote_execution_context to handle the remote executor construction and context installation if desired, but the way you are doing it certainly works, and allows for greater customization.
Once you have set this up, running an example end-to-end should be fairly simple. We will set up a computation which takes a set of federated integers, prints on the clients, and sums the integers up. Let:
@tff.tf_computation(tf.int32)
def print_and_return(x):
  # We must use tf.print here, as this logic will be
  # serialized and run on the clients as TensorFlow.
  tf.print('hello world')
  return x

@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_and_sum(federated_arg):
  same_ints = tff.federated_map(print_and_return, federated_arg)
  return tff.federated_sum(same_ints)
Suppose we have N clients; we simply instantiate the set of federated integers, and invoke our computation.
federated_ints = [1] * N
total = print_and_sum(federated_ints)
assert total == N
This should cause the tf.prints defined above to run on the remote machine; as long as tf.print is directed to an output stream which you can monitor, you should be able to see it.
PS: you may note that the federated sum above is unnecessary; it certainly is. The same effect can be had by simply mapping the identity function with the serialized print.
For example, here is some simple Dart code:
#import('dart:io');
main() {
  var server = new HttpServer();
  server.listen('127.0.0.1', 8080);
  server.defaultRequestHandler = (HttpRequest request, HttpResponse response) {
    response.outputStream.write('Hello, world'.charCodes());
    response.outputStream.close();
  };
}
When the web server prints 'Hello, world', I would like to start a process to run a long, heavy task, but I don't want it to block the current process. May I know how to handle this? Thanks.
I tried with Process.run and Process.start with no success.
From your comment I can tell there is a misunderstanding of how Dart spawns external processes. When you spawn a process in Dart, it runs separately from the Dart program by default (in a different process), so the Dart program can carry on executing other work. You can then await the result from the program (e.g. when it closes).
Therefore it does not make much sense to run the process with "&" as a parameter (I guess this was an attempt to say it should run separately from the Dart program).
But since you are spawning another Dart program, you should also consider using an Isolate, which can either execute one of your own methods on another thread or run external code by using:
https://api.dart.dev/stable/2.6.0/dart-isolate/Isolate/spawnUri.html
I am trying out the Node.js Express framework and am looking for a plugin that allows me to interact with my models via a console, similar to the Rails console. Is there such a thing in the Node.js world?
If not, how can I interact with my Node.js models and data, e.g. manually add/remove objects, test methods on data, etc.?
Create your own REPL by making a JS file (e.g. console.js) with the following lines/components:
Require node's built-in repl: var repl = require("repl");
Load in all your key variables like db, any libraries you swear by, etc.
Load the repl by using var replServer = repl.start({});
Attach the repl to your key variables with replServer.context.<your_variable_names_here> = <your_variable_names_here>. This makes the variable available/usable in the REPL (node console).
For example: If you have the following line in your node app:
var db = require('./models/db')
Add the following lines to your console.js
var db = require('./models/db');
replServer.context.db = db;
Run your console with the command node console.js
Your console.js file should look something like this:
var repl = require("repl");
var epa = require("epa");
var db = require("db");

// connect to database
db.connect(epa.mongo, function(err){
  if (err){ throw err; }

  // open the repl session
  var replServer = repl.start({});

  // attach modules to the repl context
  replServer.context.epa = epa;
  replServer.context.db = db;
});
You can even customize your prompt like this:
var replServer = repl.start({
  prompt: "Node Console > ",
});
For the full setup and more details, check out:
http://derickbailey.com/2014/07/02/build-your-own-app-specific-repl-for-your-nodejs-app/
For the full list of options you can pass the repl like prompt, color, etc: https://nodejs.org/api/repl.html#repl_repl_start_options
Thank you to Derick Bailey for this info.
UPDATE:
GavinBelson has a great recommendation for running with sequelize ORM (or anything that requires promise handling in the repl).
I am now running sequelize as well, and for my node console I'm adding the --experimental-repl-await flag.
It's a lot to type in every time, so I highly suggest adding:
"console": "node --experimental-repl-await ./console.js"
to the scripts section in your package.json so you can just run:
npm run console
and not have to type the whole thing out.
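For reference, the scripts section of package.json would then look something like this (a sketch; your other scripts stay alongside it):
{
  "scripts": {
    "console": "node --experimental-repl-await ./console.js"
  }
}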
Then you can handle promises without getting errors, like this:
const product = await Product.findOne({ where: { id: 1 } });
I am not very experienced with Node, but you can enter node on the command line to get to the Node console. I then used to require the models manually.
Here is the way to do it, with SQL databases:
Install and use Sequelize; it is Node's ORM answer to Active Record in Rails. It even has a CLI for scaffolding models and migrations.
node --experimental-repl-await
> models = require('./models');
> User = models.User; //however you load the model in your actual app this may vary
> await User.findAll(); //use await, then any sequelize calls here
TLDR
This gives you access to all of the models just as you would have in Rails Active Record. Sequelize takes a bit of getting used to, but in many ways it is actually more flexible than Active Record while still having the same features.
Sequelize uses promises, so to run these properly in the REPL you will want to use the --experimental-repl-await flag when running node. Otherwise, you can get bluebird promise errors.
If you don't want to type out the require('./models') step, you can use console.js - a setup file for the REPL at the root of your directory - to preload this. However, I find it easier to just type that one line out in the REPL.
It's simple: add a REPL to your program.
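A minimal sketch of what that can look like (the ./models path is a hypothetical placeholder for wherever your app defines its models):
// console.js - an interactive console for your app, in the spirit of `rails console`
const repl = require('repl');
const models = require('./models'); // hypothetical path; point it at your own models

const replServer = repl.start({ prompt: 'app > ' });

// everything attached to .context becomes a global inside the REPL
replServer.context.models = models;
Start it with node console.js and you can work with your models interactively.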
This may not fully answer your question, but to clarify: Node.js is much lower-level than Rails, and as such doesn't prescribe tools and data models the way Rails does. It's more of a platform than a framework.
If you are looking for a more Rails-like experience, you may want to look at a more 'full-featured' framework built on top of node.js, such as Meteor, etc.
I run the same code from two different locations in my application. I know it is the same code, because it is in a class and that class only has one publicly facing function. Both places call the function with the same arguments and both are running in the UI thread.
The function searches for a particular printer by name using an asynchronous WMI query:
var searcher =
    new ManagementObjectSearcher(
        "SELECT * from Win32_Printer WHERE Name LIKE '%ZDesigner GX430t'");

// Create an observer to trigger a callback when the search is completed.
var watcher = new ManagementOperationObserver();
watcher.Completed += PrinterSearchCompleted;
watcher.ObjectReady += PrinterSearchReady;

// Look for the printer
_printerFound = false;
_searchCompleted = false;
searcher.Get(watcher);
The problem I am having is that the ObjectReady event is not triggered when I run it from one location, but when I run it from the other, it gets triggered every time.
Another problem is that this seems to be computer-specific: some of the computers I run this on work just fine, while others exhibit the problem described above.
Any ideas what I should be looking for?
A couple of things to try:
Check whether the WMI service is running on all the computers.
Restart the WMI service on the computers where it is not working.
You may find this article useful.
If it's Windows 7 or Windows Server 2008 R2, WMI has a memory leak problem. Check this.
I have a problem changing the standard options used by Axis 1.4 generated web service client code.
We consume a certain web service of a partner who is using the old RPC/Encoded style, which basically means we're not able to go for Axis 2 but are limited to Axis 1.4.
The service client retrieves data from the remote server through our proxy, which actually works quite nicely.
Our application is deployed as a servlet. The response retrieved from the external web service is inserted into an XML document we provide to our internal systems/CMS.
But if the external service is not responding - which hasn't happened yet but might happen at any time - we want to degrade gracefully and, within a reasonable time, return our XML document without the calculated web service information.
The data retrieved is optional (if this specific calculation is missing it isn't a big issue at all).
So I tried to change the timeout settings. I applied every method and key I could find in the Axis documentation and on the web to alter the connection and socket timeouts.
None of these seems to influence the connection timeouts.
Can anyone advise how to alter these settings for an Axis 1.4 stub/service/port?
Here's an example of the several configurations I tried:
MyService service = new MyServiceLocator();
MyServicePort port = null;

try {
    port = service.getMyServicePort();

    javax.xml.rpc.Stub stub = (javax.xml.rpc.Stub) port;
    stub._setProperty("axis.connection.timeout", 10);
    stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
    stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
    stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);

    AxisProperties.setProperty("axis.connection.timeout", "10");
    AxisProperties.setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, "10");
    AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, "10");
    AxisProperties.setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, "10");

    logger.error(AxisProperties.getProperties());

    service = new MyClimateServiceLocator();
    port = service.getMyServicePort();
}
I applied the property changes both before and after creating the service, set the properties during initialisation, tried several other timeout keys I found, ...
I'm starting to go mad over this and am beginning to forget what I've already tried!
What am I doing wrong? There must be an option, mustn't there?
If I don't find a proper solution, I have thought about setting up a synchronized thread with a timeout in our own code, which actually feels quite awkward and somewhat silly.
Can you imagine anything else?
Thanks in advance
Jens
I think it may be a bug, as indicated here:
https://issues.apache.org/jira/browse/AXIS-2493?jql=text%20~%20%22CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY%22
Cast the service port object to org.apache.axis.client.Stub, i.e.:
org.apache.axis.client.Stub stub = (org.apache.axis.client.Stub) port;
Then set all the properties:
stub._setProperty(org.apache.axis.client.Call.CONNECTION_TIMEOUT_PROPERTY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_CONNECTION_TIMEOUT_KEY, 10);
stub._setProperty(org.apache.axis.components.net.DefaultCommonsHTTPClientProperties.CONNECTION_DEFAULT_SO_TIMEOUT_KEY, 10);