Issue connecting to Neo4j Aura with Python 'neo4j' driver - neo4j

I attempted to connect to a Neo4j Aura database using Python, but it failed with "Unable to retrieve routing information".
from neo4j import GraphDatabase
from neo4j.debug import watch

uri = "neo4j+s://<id>.databases.neo4j.io"
driver = GraphDatabase.driver(uri, auth=("neo4j", "<password>"))

def workload(tx):
    return tx.run("RETURN 1 as n").data()

with watch("neo4j"):  # enable driver debug logging
    with driver.session() as session:
        session.write_transaction(workload)
driver.close()
Running the above Python script returned the following log:
Attempting to update routing table from IPv4Address(('<id>.databases.neo4j.io', 7687))
[#0000] C: <RESOLVE> <id>.databases.neo4j.io:7687
[#0000] C: <OPEN> xx.xxx.xxx.xxx:7687
[#C000] C: <SECURE> <id>.databases.neo4j.io
[#0000] C: <CONNECTION FAILED> BoltSecurityError: [SSLCertVerificationError] Connection Failed. Please ensure that your database is listening on the correct host and port and that you have enabled encryption if required. Note that the default encryption setting has changed in Neo4j 4.0. See the docs for more information. Failed to establish encrypted connection. (code 1: Operation not permitted)
Failed to fetch routing info 35.xxx.xxx.xxx:7687
[#0000] C: <ROUTING> Deactivating address IPv4Address(('<id>.databases.neo4j.io', 7687))
[#0000] C: <ROUTING> table={None: RoutingTable(database=None routers={}, readers={}, writers={}, last_updated_time=0.235748575, ttl=0)}
Attempting to update routing table from
Unable to retrieve routing information
Transaction failed and will be retried in 1.1281720312998946s (Unable to retrieve routing information)
I looked through the Neo4j documentation and searched elsewhere, but none of the suggested resolutions worked.
Version:
Python 3.7.4
neo4j 4.4.2
I would very much appreciate your input if you have run into the same issue and found a way to resolve it.
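A minimal diagnostic sketch (not from the original post; the <id> and password placeholders are the same as in the question): the log shows the TLS handshake failing with SSLCertVerificationError, so one way to narrow the problem down is to connect with the neo4j+ssc:// scheme, which keeps encryption but skips certificate verification.

from neo4j import GraphDatabase

# Diagnostic only: neo4j+ssc:// keeps TLS but does not verify the server certificate,
# so it should not be used in production.
uri = "neo4j+ssc://<id>.databases.neo4j.io"
driver = GraphDatabase.driver(uri, auth=("neo4j", "<password>"))
with driver.session() as session:
    print(session.run("RETURN 1 AS n").single()["n"])  # prints 1 if TLS itself works
driver.close()

If this succeeds, the failure is likely caused by the local certificate store or an intercepting proxy rather than by Aura itself; updating the system CA bundle (or the certifi package) is a common next step.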

Related

SAS Stored Process won't connect to Hive2 DB

I have code that connects successfully through a libname to a Hive2 DB authenticated with Kerberos:
libname hdb hadoop server="server-name" port=10000 schema="schema-name";
I have tested the code using Data Integration Studio and Base SAS, but when I call the same code with the same user (checked by putting a %put &=SYSUSERID; in the code and running echo %username% in a pipe) from a stored process, the libname gives me this error:
ERROR: Error trying to establish connection: Could not open client transport with JDBC Uri:
jdbc:hive2://"server-name":10000/"schema-name";ssl=true;principal=hive/_HOST#"dominion": GSS initiate
failed
ERROR: Error in the LIBNAME statement.

The neo4j cypher shell and the browser connections are working but the golang client connection is not working

I have disabled authentication on my Neo4j server, so I can connect with cypher-shell using no credentials, as follows, and it works:
$ ./bin/cypher-shell -a 192.168.0.89
This is how I'm declaring my driver and the session; I also tried using neo4j://* instead of bolt://*:
driver, err := neo4j.NewDriver("bolt://192.168.0.89:7687", neo4j.NoAuth())
if err != nil {
    return "", err
}
defer driver.Close()
session, _ := driver.NewSession(neo4j.SessionConfig{AccessMode: neo4j.AccessModeWrite})
defer session.Close()
But that doesn't work either. I'm getting this error when running the hello world from the Neo4j Go driver page https://neo4j.com/developer/go/
TLS error: Remote end closed the connection, check that TLS is enabled on the server
These are the logs of the server when it starts:
2021-03-07 23:17:23.227+0000 INFO ======== Neo4j 4.2.3 ========
2021-03-07 23:17:24.119+0000 INFO Performing postInitialization step for component 'security-users' with version 2 and status CURRENT
2021-03-07 23:17:24.119+0000 INFO Updating the initial password in component 'security-users'
2021-03-07 23:17:24.243+0000 INFO Bolt enabled on 192.168.0.89:7687.
2021-03-07 23:17:25.139+0000 INFO Remote interface available at http://192.168.0.89:7474/
2021-03-07 23:17:25.140+0000 INFO Started.
These are all my config settings:
dbms.connector.bolt.advertised_address=192.168.0.89:7687
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=192.168.0.89:7687
dbms.connector.bolt.tls_level=DISABLED
dbms.connector.http.advertised_address=192.168.0.89:7474
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=192.168.0.89:7474
dbms.connector.https.enabled=false
dbms.default_advertised_address=192.168.0.89
dbms.default_database=neo4j
dbms.default_listen_address=192.168.0.89
dbms.directories.import=/home/eduardo/NEO4J/import
dbms.directories.neo4j_home=/home/eduardo/NEO4J
dbms.jvm.additional=-Dlog4j2.disable.jmx=true
dbms.security.auth_enabled=false
dbms.tx_log.rotation.retention_policy=1 days
dbms.tx_state.memory_allocation=ON_HEAP
dbms.windows_service_name=neo4j
Again, I can connect to the same host with cypher-shell, and the browser is also working fine.
Thanks in advance for any help :)
Adding to your answer: it is likely you're using v1.x of the Go driver. If you switch to the v4.x driver instead, you will not have to specify this config value.
You can upgrade by simply adding v4 to your import path like so:
import "github.com/neo4j/neo4j-go-driver/v4/neo4j"
More info: https://github.com/neo4j/neo4j-go-driver/blob/4.2/MIGRATIONGUIDE.md
For anyone looking for the answer: the bolt driver will try to use TLS by default, and since TLS is not configured on my server, encryption needs to be disabled in the driver constructor call.
driver, err := neo4j.NewDriver("bolt://192.168.0.89:7687", neo4j.NoAuth(), func(c *neo4j.Config) { c.Encrypted = false })
Hope this helps other people experiencing the same issue :)

Failed to read from defunct connection (Jupyter notebook, Python driver)

I am trying to import data into a Neo4j VM in Azure.
This code works:
def create_article(tx):
    tx.run("CREATE (a:ARTICLE)")

session.read_transaction(create_article)
But this code doesn't work:
def create_node_article(tx, id, title, label):
    tx.run("CREATE (a:ARTICLE {id:$id, title:$title, label:$label})",
           id=id, title=title, label=label)

for index, row in df_article_ids.iterrows():
    session.read_transaction(create_node_article, row['id'], row['cleaned_best_title'], row['label'])
I have the error:
Transaction failed and will be retried in 1.0608892687544587s (Failed to read from defunct connection Address(host='IP', port=7687) (Address(host='IP', port=7687)))
I don't know what I have to change or check. I also tried Neo4j Desktop and I have the same error.
Neo4j version: 4.1.3
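A hedged side note (not from the original post): CREATE statements are writes, so running them through read_transaction can send them down a path that rejects writes, and issuing one transaction per DataFrame row keeps the connection busy for a long time. A sketch that batches the rows through write_transaction with UNWIND, assuming the same df_article_ids DataFrame and an already open session, might look like this:

def create_articles(tx, rows):
    # rows is a list of dicts; UNWIND creates one ARTICLE node per entry
    tx.run("UNWIND $rows AS row "
           "CREATE (a:ARTICLE {id: row.id, title: row.title, label: row.label})",
           rows=rows)

rows = [
    {"id": r["id"], "title": r["cleaned_best_title"], "label": r["label"]}
    for _, r in df_article_ids.iterrows()
]
# send in chunks so each transaction stays small
for i in range(0, len(rows), 1000):
    session.write_transaction(create_articles, rows[i:i + 1000])

Fewer, smaller transactions also make it less likely that a long-running import hits the kind of connection drop shown in the error above.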

How to run presto queries in python using pyhive?

I am trying to run a Presto query in Python using the PyHive library, but I get a max-retries error. I am running it in a Jupyter notebook locally (on my laptop), and I think it is not able to connect to the Presto node. I am using an Azure HDInsight cluster with Presto installed on the head node (using the Starburst distribution). I have tried the cluster username and password, and also the head node SSH user and password, but nothing works. Below is my code:
from pyhive import presto

conn = presto.connect(
    host='clustername-ssh.azurehdinsight.net',
    port=8085,
    username='sshuser',
    password='sshpassword',
    protocol='https'
).cursor()
conn.execute('SELECT * FROM hive.default.parquettest limit 1')
The error I am getting is:
ConnectionError:
HTTPConnectionPool(host='sm-hdinsight01-ssh.azurehdinsight.net',
port=8085): Max retries exceeded with url: /v1/statement (Caused by
NewConnectionError(': Failed to establish a new connection: [Errno 110]
Connection timed out',))
But when I run it in a terminal on the head node, it works:
from pyhive import presto

conn = presto.connect(
    host='localhost',
    port=8085
).cursor()
conn.execute('SELECT * FROM hive.default.parquettest limit 1')
I think I am missing something crucial here. Please help.
Sounds like a permission/authentication problem. I am currently using a Jupyter notebook on my local machine to query the company Presto cluster like this, using the prestodb library.
So basically:
import prestodb

conn = prestodb.dbapi.connect(
    host='presto.bar.foo.com',
    port=80,
    user='foo',
    password='bar',
    catalog='hive',
    schema='default',
)
cur = conn.cursor()
cur.execute('SELECT * FROM "schema"."db" limit 10')
records = cur.fetchall()
print(records[0])
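A hedged follow-up sketch (hostname and port taken from the question): the "Connection timed out" in the traceback usually means the laptop cannot reach the port at all, which points at networking rather than credentials, so it can help to confirm reachability before changing libraries. HDInsight head-node ports are typically not exposed publicly, in which case tunneling to the head node over SSH and connecting to localhost, mirroring the working head-node setup, is a common workaround.

import socket

# Reachability check only: succeeds (prints 'reachable') if the Presto port is open from this machine.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    result = s.connect_ex(('clustername-ssh.azurehdinsight.net', 8085))
print('reachable' if result == 0 else f'not reachable (errno {result})')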

How can I make my OpenAM SDK app know where to find the OpenAM server?

I am writing a complementary service for OpenAM, for some features not available as RESTful services in the default server. I am using the OpenAM Client SDK (12 or 13). I get the following error:
DebugConfiguration:07/03/2017 04:13:12:530 PM IRDT:
Thread[main,5,main]
'/debugconfig.properties' isn't valid, the default configuration will be used instead: Can't find the configuration file
'/debugconfig.properties'.
amAuthContext:07/03/2017 04:13:12:564 PM IRDT: Thread[main,5,main]:
TransactionId[unknown]
ERROR: Failed to obtain auth service url from server: null://null:null
amNaming:07/03/2017 04:13:12:573 PM IRDT: Thread[main,5,main]:
TransactionId[unknown]
ERROR: Failed to initialize naming service
java.lang.Exception: Cannot find Naming Service URL.
at com.iplanet.services.naming.WebtopNaming.getNamingServiceURL(WebtopNaming.java:1254)
at com.iplanet.services.naming.WebtopNaming.initializeNamingService(WebtopNaming.java:272)
at com.iplanet.services.naming.WebtopNaming.updateNamingTable(WebtopNaming.java:1149)
at com.iplanet.services.naming.WebtopNaming.getNamingProfile(WebtopNaming.java:1070)
at com.iplanet.services.naming.WebtopNaming.getServiceAllURLs(WebtopNaming.java:494)
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:654)
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:584)
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:386)
at MainKt.realmLogin(Main.kt:56)
at MainKt.main(Main.kt:144)
IdRepoSampleUtils: Failed to start login for default authmodule
Exception in thread "main"
com.sun.identity.authentication.spi.AuthLoginException: Failed to create new Authentication Context: null
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:657)
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:584)
at com.sun.identity.authentication.AuthContext.login(AuthContext.java:386)
at MainKt.realmLogin(Main.kt:56)
at MainKt.main(Main.kt:144)
The main error is that the SDK does not find the STS server URL. How can I fix it?
I found the solution by checking the example SDK client. The solution is to use Java's well-known properties file mechanism. There is an AMConfig.properties there, which the SDK jar automatically tries to read values from. For the format of the file we can refer to the Oracle OpenSSO documentation and use the AMConfig.properties.template within the OpenAM example client application.
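As a rough illustration (property names as they appear in the OpenSSO/OpenAM client SDK template mentioned above; host, port, and deployment URI are placeholders to verify against your own AMConfig.properties.template), the entries that stop the null://null:null and "Cannot find Naming Service URL" errors are the ones pointing the SDK at the server and its naming service:

com.iplanet.am.server.protocol=https
com.iplanet.am.server.host=openam.example.com
com.iplanet.am.server.port=8443
com.iplanet.am.services.deploymentDescriptor=/openam
com.iplanet.am.naming.url=https://openam.example.com:8443/openam/namingservice

The file has to be on the application's classpath so the SDK can load it at startup.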
