I recently created a Neo4j sandbox instance (blank) so I could test https://guides.neo4j.com/wiki.
Using https://guides.neo4j.com/wiki, I had success running the first Cypher statement to populate the graph with Wikipedia categories. However, the second Cypher statement produces an error after running for a few seconds.
Here's the statement:
UNWIND range(0,4) as level
CALL apoc.cypher.doIt("
MATCH (c:Category { pagesFetched: false, level: $level })
CALL apoc.load.json('https://en.wikipedia.org/w/api.php?format=json&action=query&list=categorymembers&cmtype=page&cmtitle=Category:' + apoc.text.urlencode(c.catName) + '&cmprop=ids%7Ctitle&cmlimit=500')
YIELD value as results
UNWIND results.query.categorymembers AS page
MERGE (p:Page {pageId: page.pageid})
ON CREATE SET p.pageTitle = page.title, p.pageUrl = 'http://en.wikipedia.org/wiki/' + apoc.text.urlencode(replace(page.title, ' ', '_'))
WITH p,c
MERGE (p)-[:IN_CATEGORY]->(c)
WITH DISTINCT c
SET c.pagesFetched = true", { level: level }) YIELD value
RETURN value
And here's the error message:
Error ServiceUnavailable
WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket `readyState` is: 3
The JS console shows this error message:
neo4j-web.min.js:20 WebSocket is already in CLOSING or CLOSED state.
I also posted this on the neo4j Slack channel and the neo4j Google group - any help is appreciated.
As an addendum to this post (5/25/2018):
I installed Neo4j Community Edition 3.4.0 on an AWS EC2 (Linux) instance and did not get the ServiceUnavailable error described above. The error occurred on my MacBook Pro running macOS 10.13.4.
Thanks for your interest.
Colin Goldberg
I am using the Neo4j Desktop application and trying a Cypher query to export RDF. I am using the default available load-movie.cypher data in a local DB as a trial, but every time I run the query it gives a FetchURLError, so I don't know what I am doing wrong. Also, all other MATCH queries are working fine. Here is the query that I tried:
:POST /rdf/neo4j/cypher
{"cypher":
"MATCH g = (:Person {name: 'Keanu Reeves'})-->(:Movie { title: 'The Matrix'})
RETURN g",
"format" : "RDF/XML"}
Then I tried a simple GET query:
:GET /rdf/neo4j/describe/11
But the response is always:
FetchURLError - Could not fetch URL: "Failed to fetch".
This could be due to the remote server policy. See your web browsers error console for more information.
Need some help in resolving the issue.
I recently ran into the same problem and fixed it by writing out the full URL, e.g.:
:GET http://localhost:7474/rdf/neo4j/describe/11
I am trying to issue an identity to a participant that already exists in the network.
return this.bizNetworkConnection.connect(this.cardname)
  .then((result) => {
    let email = 'user@gmail.com',
      username = email.split('@')[0];
    this.businessNetworkDefinition = result;
    return this.bizNetworkConnection.issueIdentity('org.test.Person#user@gmail.com', username);
  })
  .then((result) => {
    console.log(`userID = ${result.userID}`);
    console.log(`userSecret = ${result.userSecret}`);
  })
I expect that I will see the userID and the userSecret logged on the console but I am getting errors as described below.
Following the developer tutorial in the documentation:
If I use the card name for PeerAdmin@hlfv1 in the connect function above, I get the error: "Error trying to ping. Error: Error trying to query business network. Error: Missing \"chaincodeId\" parameter in the proposal request"
If I use the card name for admin@tutorial-network in the connect function above, I get the error: "fabric-ca request register failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]"
For option 1, I know the network name is missing from the given card, while option 2 means that the admin has no rights to issue an identity. However, I cannot seem to find any documentation directing me on how to use either to achieve my objective. Any help is highly welcome.
While I have listed the JavaScript code I am using, I would not mind if anyone can explain what I am missing using the Composer CLI.
see https://hyperledger.github.io/composer/latest/managing/identity-issue.html
You would definitely use the admin@tutorial-network card, as PeerAdmin does not have authority to issue identities (admin does).
Did you already do: 1) a composer card import -f networkadmin.card (per the tutorial)? 2) a composer network ping -c admin@tutorial-network to use the card (now in the card store) and thereby populate the admin's credentials (certificate/private key)?
Only at that point would admin be recognised as the identity to issue further identities. Is it possible you spun up a new dockerized CA server at some stage since you did the import, etc.?
What happens if you issue a test identity through the command line (using admin@tutorial-network)? Does it fail?
I have a Rails application running on Heroku and I am connecting to two databases hosted on MongoLab (X and Y).
I have configured two Heroku env variables containing the connection strings.
When I query Y everything works fine, but when I query the X db it gives me error 16550: "not authorized for query on X.table".
I have set up both env variables for these connections correctly and I also have a valid user to access the X db.
If I connect with the shell, everything works fine.
How can I solve this?
Here is the error message in rails:
{"status":"500",
"error":"The operation: #<Moped::Protocol::Query\n #length=88\n #request_id=4\n #response_to=0\n
#op_code=2004\n #flags=[:slave_ok]\n
#full_collection_name=\"X.table\"\n
#skip=0\n #limit=0\n
#selector={\"_id\"=>\"5252c92521e4af681a000002\"}\n
#fields=nil>\n
failed with error 16550: \"not authorized for query on X.table\"\n\n
See https://github.com/mongodb/mongo/blob/master/docs/errors.md\nfor details about this error."}
I solved this. If someone comes here with the same problem: look at your table's model. If, as in my case, it is "stored_in" another database, you must specify there the session that maps to the URI/env variable set in database.yml.
We have been running TFS 2012 in-house for around 3 months, and "processing of cubes" in particular was working fine until 14/08. At that point it just stopped working (nothing was done on the server, or at least I haven't found any changes yet).
What we get in the Windows log looks like this:
Detailed Message: TF221122: An error occurred running job Full
Analysis Database Sync for team project collection or Team Foundation
server TEAM FOUNDATION. Exception Message: Failed to Process Analysis
Database 'Tfs_Analysis'. (type WarehouseException) Exception Stack
Trace: at
Microsoft.TeamFoundation.Warehouse.TFSOlapProcessComponent.ProcessOlap(AnalysisDatabaseProcessingType
processingType, WarehouseChanges warehouseChanges, Boolean
lastProcessingFailed, Boolean cubeSchemaUpdateNeeded) at
Microsoft.TeamFoundation.Warehouse.AnalysisDatabaseSyncJobExtension.RunInternal(TeamFoundationRequestContext
requestContext, TeamFoundationJobDefinition jobDefinition, DateTime
queueTime, String& resultMessage) at
Microsoft.TeamFoundation.Warehouse.WarehouseJobExtension.Run(TeamFoundationRequestContext
requestContext, TeamFoundationJobDefinition jobDefinition, DateTime
queueTime, String& resultMessage)
Inner Exception Details:
Exception Message: Errors in the high-level relational engine. The
following exception occurred while the managed IDbConnection interface
was being used: . Errors in the high-level relational engine. A
connection could not be made to the data source with the DataSourceID
of 'Tfs_AnalysisDataSource', Name of 'Tfs_AnalysisDataSource'. Errors
in the OLAP storage engine: An error occurred while the dimension,
with the ID of 'Dim Team Project', Name of 'Team Project' was being
processed. Errors in the OLAP storage engine: An error occurred while
the 'ProjectNodeSk' attribute of the 'Team Project' dimension from the
'Tfs_Analysis' database was being processed. Internal error: The
operation terminated unsuccessfully. Errors in the high-level
relational engine. The following exception occurred while the managed
IDbConnection interface was being used: . Errors in the high-level
relational engine. A connection could not be made to the data source
with the DataSourceID of 'Tfs_AnalysisDataSource', Name of
'Tfs_AnalysisDataSource'. Errors in the OLAP storage engine: An error
occurred while the dimension, with the ID of 'Dim Team Project', Name
of 'Team Project' was being processed. Errors in the OLAP storage
engine: An error occurred while the 'Project Node Type' attribute of
the 'Team Project' dimension from the 'Tfs_Analysis' database was
being processed. Errors in the high-level relational engine. The
following exception occurred while the managed IDbConnection interface
was being used: . Errors in the high-level relational engine. A
connection could not be made to the data source with the DataSourceID
of 'Tfs_AnalysisDataSource', Name of 'Tfs_AnalysisDataSource'. Errors
in the OLAP storage engine: An error occurred while the dimension,
with the ID of 'Dim Team Project', Name of 'Team Project' was being
processed. Errors in the OLAP storage engine: An error occurred while
the 'Is Deleted' attribute of the 'Team Project' dimension from the
'Tfs_Analysis' database was being processed. Errors in the high-level
relational engine. The following exception occurred while the managed
IDbConnection interface was being used: . Errors in the high-level
relational engine. A connection could not be made to the data source
with the DataSourceID of 'Tfs_AnalysisDataSource', Name of
'Tfs_AnalysisDataSource'. Errors in the OLAP storage engine: An error
occurred while the dimension, with the ID of 'Dim Team Project', Name
of 'Team Project' was being processed. Errors in the OLAP storage
engine: An error occurred while the 'Project Node Name' attribute of
the 'Team Project' dimension from the 'Tfs_Analysis' database was
being processed. Errors in the high-level relational engine. The
following exception occurred while the managed IDbConnection interface
was being used: . Errors in the high-level relational engine. A
connection could not be made to the data source with the DataSourceID
of 'Tfs_AnalysisDataSource', Name of 'Tfs_AnalysisDataSource'. Errors
in the OLAP storage engine: An error occurred while the dimension,
with the ID of 'Dim Team Project', Name of 'Team Project' was being
processed. Errors in the OLAP storage engine: An error occurred while
the 'Project Path' attribute of the 'Team Project' dimension from the
'Tfs_Analysis' database was being processed. Server: The current
operation was cancelled because another operation in the transaction
failed.
Warning: Parser: Out of line object 'Binding', referring to ID(s)
'Tfs_Analysis, Team System, FactCurrentWorkItem', has been specified
but has not been used. Warning: Parser: Out of line object 'Binding',
referring to ID(s) 'Tfs_Analysis, Team System, FactWorkItemHistory',
has been specified but has not been used.
...
So far:
- I've tried to force full processing of the cube, following the instructions here: http://msdn.microsoft.com/en-us/library/ff400237(v=vs.100).aspx
- I've tried to "rebuild reporting" from "TFS admin console" -> "Application Tier" -> "Reporting" -> "Start rebuild"
- finally, I've also tried to process directly from SQL Server Management Studio: Tfs_Analysis -> Process
- I've checked the c:\olap\logs\msmdsrv file and didn't find any errors there
Besides that, we also tried to:
- restart the server
- restart just the services
Nothing of the above helped.
Our TFS is:
- hosted on one machine
- updated to "Update 3" (right after setting it up)
- we use three different domain accounts to host the TFS services, SQL, and Reporting Services, but none of those accounts' names/passwords have changed since installation. I've also verified that those accounts have access to the proper databases.
Does anyone have a similar problem? Any ideas are really welcome.
I think the core error is "A connection could not be made to the data source with the DataSourceID of 'Tfs_AnalysisDataSource'". Check the data source settings, especially the connection string. Typical reasons are wrong connection protocol settings (so that the protocol configured for Analysis Services is not enabled for the relational engine), firewall issues, or authentication issues.
Had the same issue (also TFS 2012). Restarting the Analysis Service did the trick for me this time. Also check the account/password that 'Tfs_AnalysisDataSource' is using, by clicking "Properties" in SQL Server Management Studio. I had a similar issue a while back when the passwords changed.
My JMS client connects to WMQ through JNDI. The initial context factory used is com.ibm.mq.jms.context.WMQInitialContextFactory.
Currently, on the WMQ side, there is a queue manager called TestMgr. Under this queue manager I created two channels: PLAIN.CHL, which does not specify an SSL CipherSpec, and SSL.CHL, which is configured with the SSL CipherSpec RC4_MD5_US and SSL Authentication set to Optional.
I have created a key store for the queue manager using IBM Key Management tool. The path of key db is [wmq_home]\qmgrs\TestMgr\ssl\key.
For channel PLAIN.CHL, I defined a queue connection factory like:
DEF QCF(PlainQCF) QMANAGER(TestMgr) CHANNEL(PLAIN.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client)
And under the SSL channel SSL.CHL, I defined a queue connection factory like:
DEF QCF(SSLQCF) QMANAGER(TestMgr) CHANNEL(SSL.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client) SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
Right now I can only create a connection using PlainQCF; looking up the SSL queue connection factory fails. My code looks like:
Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/SSL.CHL");
Context ctx = new InitialContext( environment );
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("SSLQCF");
qcf.createConnection();
....
Am I missing some context properties when looking up the SSL connection factory? I also found that the code hangs on the line new InitialContext( environment ) for a long time, almost 5 minutes, and then I get a CC=2;RC=2009;AMQ9208... error.
Any suggestion would be appreciated. Is it true that an SSL channel can't be used for the JNDI connection?
@T.Rob, thanks very much for your reply. But we still want to use WMQInitialContextFactory, so I'm afraid I still need to find a solution for this.
I defined the connection factory only once. The displayed info for the SSL queue connection factory looks like:
InitCtx> DISPLAY QCF(SSLQCF)
ASYNCEXCEPTION(ALL)
CCSID(819)
CHANNEL(SSL.CHL)
CLIENTRECONNECTOPTIONS(ASDEF)
CLIENTRECONNECTTIMEOUT(1800)
COMPHDR(NONE )
COMPMSG(NONE )
CONNECTIONNAMELIST(192.168.66.23(1414))
CONNOPT(STANDARD)
FAILIFQUIESCE(YES)
HOSTNAME(192.168.66.23)
LOCALADDRESS()
MAPNAMESTYLE(STANDARD)
MSGBATCHSZ(10)
MSGRETENTION(YES)
POLLINGINT(5000)
PORT(1414)
PROVIDERVERSION(UNSPECIFIED)
QMANAGER(TestMgr)
RESCANINT(5000)
SENDCHECKCOUNT(0)
SHARECONVALLOWED(YES)
SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
SSLFIPSREQUIRED(NO)
SSLRESETCOUNT(0)
SYNCPOINTALLGETS(NO)
TARGCLIENTMATCHING(YES)
TEMPMODEL(SYSTEM.DEFAULT.MODEL.QUEUE)
TEMPQPREFIX()
TRANSPORT(CLIENT)
USECONNPOOLING(YES)
VERSION(7)
WILDCARDFORMAT(TOPIC_ONLY)
The JNDI provider should be fine because I can look up the plain connection factory successfully. Also, for my client app, I extracted the cert from the key store created for the MQ server and imported it into the trust store (cacerts) of my JRE under the alias ibmwebspheremqtestmgr.
You are correct; with the 2009 error there are some log entries:
=================================================================
4/20/2012 20:24:27 - Process(13768.3) User(MUSR_MQADMIN) Program(amqzmur0.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ6287: WebSphere MQ V7.1.0.0 (p000-L111019).
EXPLANATION:
WebSphere MQ system information:
Host Info :- Windows Server 2003, Build 3790: SP2 (MQ Windows 32-bit)
Installation :- C:\IBM\WebSphereMQ (mqenv)
Version :- 7.1.0.0 (p000-L111019)
ACTION:
None.
-------------------------------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9639: Remote channel 'SSL.CHL' did not specify a CipherSpec.
EXPLANATION:
Remote channel 'SSL.CHL' did not specify a CipherSpec when the local channel
expected one to be specified.
The remote host is 'xxx_host of my app (192.168.66.25)'.
The channel did not start.
ACTION:
Change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to
specify a CipherSpec so that both ends of the channel have matching
CipherSpecs.
----- amqcccxa.c : 3817 -------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(my app host) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9999: Channel 'SSL.CHL' to host 'xxx_host of my app (192.168.66.25)' ended
abnormally.
====================================================================
I am also confused by the error log. My app is staged on a machine different from my MQ server, but the log says to change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to specify a CipherSpec so that both ends of the channel have matching CipherSpecs. How can I change the channel CipherSpec on my app host?
Updates on MQEnvironment, replying to the comments:
The value of MQEnvironment.sslCipherSuite is null, so it throws a NullPointerException when I put it into the env hashtable. I tried environment.put(MQC.SSL_CIPHER_SUITE_PROPERTY, "SSL_RSA_WITH_RC4_128_MD5") instead, and it still failed with a 2009 error.
For the JMSAdmin tool, I changed the config to use WMQInitialContextFactory. The configuration (JMSAdmin.config) looks like:
INITIAL_CONTEXT_FACTORY=com.ibm.mq.jms.context.WMQInitialContextFactory
PROVIDER_URL=192.168.66.23:1414/SYSTEM.DEF.SVRCONN
The rest of the configuration is left at the defaults.
Kindly note that here I use the default channel SYSTEM.DEF.SVRCONN so that I can log on to the admin console. If I change the channel to the SSL one, SSL.CHL, I can't log on to the admin console either. The error there is just like the one in my client app.
Another clarification: in my client, the following code connects to the queue manager (TestMgr) successfully through channel SSL.CHL.
MQConnectionFactory factory = new MQConnectionFactory();
factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
factory.setQueueManager("TestMgr");
factory.setSSLCipherSuite("SSL_RSA_WITH_RC4_128_MD5");
factory.setPort(1414);
factory.setHostName("192.168.66.23");
factory.setChannel("SSL.CHL");
MQConnection connection = (MQConnection) factory.createConnection();
And now the problem is just as you said: the initial context fails to connect to the queue manager through the SSL channel. The option you provided (use the plain channel for the initial context and the SSL channel for the connection factory) works too. But I still want to know how to get the initial context working with the SSL channel. Thanks very much for your patience. Your updates will be appreciated.
thanks
I never really liked com.ibm.mq.jms.context.WMQInitialContextFactory very much. It stores the managed objects on a queue, so in order to look up the connectionFactory, which tells JMS how to connect to the QMgr, it is first necessary to connect to the QMgr to make the JNDI call. Therefore, before you can debug the SSL connection, you need to know whether the underlying JNDI provider is working.
If you want to skip the MQ-based JNDI provider and just use the filesystem, see the updated version of Bobby Woolf's article here. If you want to continue with com.ibm.mq.jms.context.WMQInitialContextFactory, read on but be prepared to provide more configuration info.
When you run the JMSAdmin tool, do you display the objects after creating them? For example, here is one of my JMSAdmin.bat scripts:
# Connection Factory for Client mode
# Delete the Connection Factory if it exists
DELETE CF(JMSDEMOCF)
# Define the Connection Factory
DEFINE CF(JMSDEMOCF) +
SYNCPOINTALLGETS(YES) +
SSLCIPHERSUITE(NULL_SHA) +
TRAN(client) +
HOST(127.0.0.1) CHAN(SSL.SVRCONN) PORT(1414) +
QMGR( )
# Display the resulting definition
DISPLAY CF(JMSDEMOCF)
This deletes the object (because JMSAdmin doesn't have a define with replace option) then defines the object, then displays it. Do you in fact see both objects defined? Can you connect and interactively display them both? Can you update your question with the contents displayed?
If so, then what does the JNDI provider configuration look like with each sample program? The 2009 indicates that there is at least a connection to the QMgr being made, so it is important to determine whether the thing suffering the broken connection is your app or the JNDI provider. Diagnosing that requires the config info you are using for the JNDI provider and whether it is the same in the working and failing cases. If not, how do they differ?
Once you know whether it's the app or the JNDI provider that is causing the problem (or switch to another JNDI provider that doesn't require an MQ connection such as the filesystem initial context) then it will be possible to determine the next steps.
The article linked above has samples of code and managed object scripts that use a filesystem JNDI provider. You may notice my scripts pasted in above use the same QMgr name. That's because I wrote that part of the article. When I want to switch to SSL using those same samples, I just update the connectionFactory to point to the SSL channel and it works.
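For reference, a filesystem-based lookup from those samples looks roughly like the sketch below. The com.sun.jndi.fscontext.RefFSContextFactory class and the file:// provider URL are the standard filesystem JNDI provider shipped with WMQ; the directory path is just an example, and the SSLQCF name is taken from your own definitions.
// Sketch of a filesystem JNDI lookup; assumes the managed objects were defined
// with JMSAdmin pointing at PROVIDER_URL=file:///C:/JNDI-Directory (example path).
// Needs javax.naming.*, javax.jms.*, and the fscontext/providerutil jars on the classpath.
Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
environment.put(Context.PROVIDER_URL, "file:///C:/JNDI-Directory");
Context ctx = new InitialContext(environment);
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("SSLQCF");
Connection conn = qcf.createConnection();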
Here are the other bits from the sample that I've modified:
java -Djavax.net.debug=ssl ^
-Djavax.net.ssl.trustStore=key2.jks ^
-Djavax.net.ssl.keyStore=key2.jks ^
-Djavax.net.ssl.keyStorePassword=???????? ^
-Djavax.net.ssl.trustStorePassword=???????? ^
-cp "%CLASSPATH%" ^
com.ibm.examples.JMSDemo -pub -topic JMSDEMOPubTopic %*
Note: The ^ is Windows version of line continuation.
Then if there are problems, I follow the debugging scenario I described in this SO answer. Note that the app will require a truststore, even if you have SSLCAUTH(OPTIONAL) on your channel. This is because the app must always validate the QMgr's certificate, even if the app does not present its own certificate. In my case I was using SSLCAUTH(REQUIRED) so my app needed both a keystore and a truststore. Your question mentions that the QMgr has a keystore but does not say what you did for the application.
Finally, a 2009 will usually generate an entry in the QMgr error logs. If you continue to get the problem, please update your question with those log entries.
UPDATE:
Responding to the comments: the JMSAdmin tool is part of the WMQ package. However, WMQ comes with jars for the filesystem context and LDAP context. The WMQInitialContextFactory is optional and is delivered as SupportPac ME01. When using WMQInitialContextFactory with the JMSAdmin tool (or the JMSAdmin GUI, or with WMQ Explorer) it is necessary to configure the PROVIDER_URL with the host, port and channel. For example:
PROVIDER_URL: <Hostname>:<port>/<SVRCONN Channel Name>
192.168.66.23:1414/SSL.SVRCONN
So after reviewing your post again, I realized that you did provide the config info for WMQInitialContextFactory. I was looking for a JMSAdmin.config file, but you have it in the environment hash table. And that is where the problem is. You are attempting to use the SSL channel for both the WMQInitialContextFactory and the connection factory. This is what is causing the lookup to fail. The WMQInitialContextFactory first makes a Java connection to the QMgr in order to look in the queue and obtain the administered objects such as the QCF. In order to do that, it needs to know the ciphersuite that the channel is set up for, so that it can negotiate the handshake. Right now, the only place that ciphersuite is recorded is in the QCF definition.
Try adding the following line:
environment.put(MQEnvironment.sslCipherSuite, "SSL_RSA_WITH_RC4_128_MD5");
As per this Infocenter page, that should tell the context factory classes what ciphersuite to use. Of course, they also need to know where the trust store is (and possibly the keystore if the channel has SSLCAUTH(REQUIRED) set), so you still need to get those values into the environment. You can use the command-line variables or try loading them into the environment using code. You'll need both -Djavax.net.ssl.trustStore=key2.jks and -Djavax.net.ssl.trustStorePassword=????????.
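Loading them in code could look something like the sketch below. This is only a sketch, and it sets the static MQEnvironment.sslCipherSuite field directly rather than using it as a hashtable key; whether the ME01 context factory actually honours MQEnvironment and JSSE system properties set this way is exactly what you would need to verify.
// Sketch only (assumes com.ibm.mq.MQEnvironment and the key2.jks trust store from the sample).
// Set the JSSE trust store in code instead of on the java command line...
System.setProperty("javax.net.ssl.trustStore", "key2.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "????????");
// ...and tell the MQ classes which CipherSuite the SVRCONN channel expects.
MQEnvironment.sslCipherSuite = "SSL_RSA_WITH_RC4_128_MD5";

Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/SSL.CHL");
Context ctx = new InitialContext(environment);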
The other option is to continue to use the plaintext channel for the WMQInitialContextFactory and the SSL channel for the application. If the plaintext channel has an MCAUSER for a non-privileged user ID, it can be restricted to only connect to the QMgr and access the queue that contains the administered objects. With those restrictions, anyone will be able to read the administered objects using that channel but not the application queues or administrative queues.
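In code, that second option could look roughly like this (a sketch reusing the PLAIN.CHL channel and SSLQCF names from your definitions; the JVM still needs the javax.net.ssl.trustStore settings for the SSL connection itself):
// The JNDI lookup goes over the plaintext channel...
Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/PLAIN.CHL");
Context ctx = new InitialContext(environment);

// ...while the QCF retrieved from JNDI carries CHANNEL(SSL.CHL) and SSLCIPHERSUITE,
// so connections created from it flow over the SSL channel.
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("SSLQCF");
Connection conn = qcf.createConnection();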