Correctly provisioning the Graph Data Science plugin on GrapheneDB - Neo4j

I have a graph working fully with the plugin locally in Neo4j Desktop. I've replicated everything from this graph in my GrapheneDB instance, but I can't use the GDS procedures as I get the error:
gds.proc... is unavailable because it is sandboxed and has dependencies outside of the sandbox. Sandboxing is controlled by the dbms.security.procedures.unrestricted setting. Only unrestrict procedures you can trust with access to database internals.
I know to fix this I need to add these two lines to the config/properties file:
dbms.security.procedures.unrestricted=apoc.*,gds.*
dbms.security.procedures.whitelist=apoc.*,gds.*
I just don't know how to do that on GrapheneDB; I've read all the docs I can find.
I've tried adding the GDS plugin by uploading the jar file as just a stored procedure, and then also as a server extension with a zip file containing both the jar file and the two config lines mentioned above in a neo4j-server.properties file.
When added as a server extension, I can tell Neo4j hasn't found the GDS plugin at all. Am I just missing a location in the properties file? Or am I missing something obvious in the stored procedure upload method?
Using the GrapheneDB free dev tier, Neo4j Community Edition 3.5.17, and Graph Data Science 1.1.1.
Thanks

After a couple of weeks back and forth with GrapheneDB support, the config changes have been made. They will be adding support for the GDS plugin as part of their base image soon, but until then you may still need to request that they patch your db for you and add it as a stored procedure.
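Once they've patched it, a quick way to confirm the plugin actually loaded and is callable is gds.version(). Here is a minimal sketch using the 4.x Neo4j Java driver (the URI and credentials are placeholders for the values from your GrapheneDB dashboard):

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class GdsCheck {
    public static void main(String[] args) {
        // Placeholder connection details -- use the Bolt URI and credentials
        // shown in your GrapheneDB dashboard.
        try (Driver driver = GraphDatabase.driver("bolt://example.graphenedb.com:24786",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            // gds.version() only resolves if the GDS jar was picked up at startup.
            String v = session.run("RETURN gds.version() AS v").single().get("v").asString();
            System.out.println("GDS version: " + v);
        }
    }
}

If the function isn't found, the jar wasn't loaded at all; if GDS procedures still throw the sandboxing error, the dbms.security.procedures.unrestricted change hasn't taken effect.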

Related

Neo4j-admin load dump.file returning: Not a valid Neo4j archive

The dump comes from exporting a snapshot from AuraDB, as stated in the Neo4j documentation.
I'm working with Neo4j Community 4.1.11 on Ubuntu; none of the other answers I found have been helpful...
Let me know what other info you need to better assess the situation, thanks.
when running the command...
You may need to upgrade your local version of Neo4j to something much more recent - an Aura database I just created couldn't be loaded into any version of Neo4j older than 4.4.1 (as of this writing).
However, the specific version you need to use will change over time.
Guidance on the Aura support site indicates as much, and recommends using the latest possible local version of Neo4j as the target for the import (potentially to the point of needing to use pre-release versions), since Aura will typically be running bleeding-edge versions of the database store format.
An alternative might be to explore exporting to CSV or JSON and importing that way, since that output won't vary with the Neo4j version of the source or target. You can do a stream-based export from Aura using apoc.export.csv.all or apoc.export.json.all and then load the result via script, though with large graphs this may not be practical.
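To make the streaming route concrete, here is a rough sketch of pulling a full JSON export over the 4.x Java driver and writing it to a local file (it assumes APOC's export procedures are available on the source; the URI and credentials are placeholders):

import java.io.FileWriter;
import java.io.IOException;

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

public class AuraStreamExport {
    public static void main(String[] args) throws IOException {
        // Placeholder Aura URI and credentials.
        try (Driver driver = GraphDatabase.driver("neo4j+s://xxxxxxxx.databases.neo4j.io",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session();
             FileWriter out = new FileWriter("export.json")) {
            // stream:true makes APOC return the export in the result set
            // instead of writing a file on the server.
            Result rows = session.run(
                    "CALL apoc.export.json.all(null, {stream:true}) YIELD data RETURN data");
            while (rows.hasNext()) {
                out.write(rows.next().get("data").asString());
            }
        }
    }
}

The resulting file could then be loaded into the local instance with apoc.import.json or a custom script.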

Neo4j User Defined Functions - How to deploy new functions?

I've been learning Cypher since yesterday, and I read about user-defined functions.
There's a lot of material on how to use the functions, but not much on how to deploy new ones.
I would like to try it out, but I'm having a hard time finding a step-by-step tutorial on how to deploy new functions to my desktop app.
The ones I have found skip over some concepts as if they were too obvious. And maybe they are for someone coming from a Java background, or whatever background you're supposed to have when using Neo4j. ...But I come from a JavaScript background. I'm used to npm and have never heard of Maven (just an example).
It would be nice if someone could help with a detailed step-by-step tutorial on how to write and deploy a new user-defined function in Neo4j.
To help a bit:
User-defined functions can only be written in Java for now. They're server extensions. You write the code in a Java editor (outside Neo4j) and publish it as a Java archive (a file with the extension .jar) into the /plugins directory of your Neo4j installation (https://neo4j.com/docs/developer-manual/current/extending-neo4j/cypher-functions/).
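To make that concrete, a minimal user-defined function looks something like this (the names here are hypothetical; compile it against the org.neo4j:neo4j dependency matching your server version and package the result as a jar):

package example;

import org.neo4j.procedure.Name;
import org.neo4j.procedure.UserFunction;

public class StringFunctions {

    // Registered under the namespaced name "example.shout";
    // callable from Cypher as: RETURN example.shout('hello')
    @UserFunction("example.shout")
    public String shout(@Name("input") String input) {
        return input == null ? null : input.toUpperCase();
    }
}

Drop the jar into /plugins, restart Neo4j, and RETURN example.shout('hello') should return "HELLO".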
Many useful procedures already exist in the APOC extension (https://neo4j-contrib.github.io/neo4j-apoc-procedures/), depending on your Neo4j server version.
Try them first before developing your own, especially if you're starting with Cypher. Some of them should cover your usual needs.
All extensions take effect only after a restart of Neo4j.
Note: Maven is a dependency manager for Java.

Published Azure Code

Is it possible to retrieve the published code from an Azure Cloud Service?
When I changed my TFS mapping, TFS wiped out the code I had written on my local machine. It converted the .csproj and .ccproj files to .csproj.user and .ccproj.user files. It also removed the solution. I haven't checked anything in since February, so checking out loses three months' worth of code. I have access to some of the views, scripts, and .css files, but all .cs files are gone. I have tried the following:
Remote desktop into the published site.
- Works, but all .cs files are compiled into .dlls; the code is lost and "obfuscated" when decompiling.
Wondershare Data Recovery.
- Some files are found, but often in an unreadable format. Many are still missing.
Getting the blob in the vsdeploy folder in Azure Storage.
- I have the blob. Now what? Is there a way to convert that back into a readable project?
Using the "Open from Azure Website" extension to load the project into Visual Studio via the .publishsettings file from the Azure Portal.
- This works great for App Services, but I cannot find any trace of a .PublishSettings file in Azure. The Get-AzurePublishSettingsFile call from Windows PowerShell doesn't download the correct file. When using the extension I get an "Object reference not set to an instance of an object" exception. I have tested the extension with an App Service, and it works perfectly.
If you're talking about web/worker roles in a cloud service, then no - you cannot retrieve deployed code. To get code to a cloud service, it gets packaged up first by Visual Studio (or directly through command line tools, or via Eclipse). This entails compiling all of your code first. Source files are not included in the package (unless you've explicitly done something like setting "copy local" to true in the package, which I can't imagine anyone doing).
As far as what's in blob storage: If your .cspkg is still sitting in a blob, sure, you can download and examine it. But again, it'll just contain the same package that was built locally and uploaded during deployment.
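If you do want to pull the package down and poke at it, a sketch using the classic azure-storage Java SDK might look like this (the connection string, container, and blob names are placeholders):

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class DownloadCspkg {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string -- copy the real one from the
        // storage account's access keys in the portal.
        CloudStorageAccount account = CloudStorageAccount.parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        CloudBlobClient client = account.createCloudBlobClient();
        CloudBlobContainer container = client.getContainerReference("vsdeploy");
        // Blob name is hypothetical; list the container to find the actual package.
        CloudBlockBlob blob = container.getBlockBlobReference("mydeployment.cspkg");
        blob.downloadToFile("mydeployment.cspkg");
    }
}

Keep in mind the package holds build output, so at best you get compiled assemblies to feed a decompiler, not your original source.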
With Web Apps, your code will be available on the d:\ drive, since deployments are done via version control (unless you simply FTP something up).
With Virtual Machines (which sit in cloud services in the "classic" deployment model), you would have needed to push code to the VMs on your own (there's no built-in push-from-version-control). So again, unless you pushed source code to the VM, there's no way to retrieve said source code.
As far as the code that was wiped out on your local machine, it might be worth looking into recovery / forensic disk tools (which it looks like you've started doing), to see if your code is still sitting around somewhere, hidden. But, really, how you go about hunting for deleted / overwritten files would be off-topic (or something to ask about on SuperUser).

Read data from an Informix database, given a folder app.dbs with *.idx and *.dat files

I have a friend who has a management application, and he would like to import some of his data into Excel.
The thing is, I have no idea how to read these types of files.
In his application directory he has a folder named app.dbs. Inside there are *.idx and *.dat files.
What would be the easiest way to read these files? Maybe an ODBC connector, or installing some version of Informix?
That sounds like C-ISAM files, or an Informix SE (Standard Engine) install. You most certainly can't read them directly. Googling "Informix C-ISAM files ODBC" generates plenty of results. Also, this page explains the relationship between the two.
I've never used SE, but assuming its installation is reasonably similar to its big brother Informix Dynamic Server (and I believe it is), have a look on your friend's computer for an 'Informix' directory. You may find an %INFORMIXDIR% environment variable to point you in the right direction. Within that directory, look for an executable called dbaccess.exe in the bin subdirectory. Run it from a DOS prompt and you should hopefully get an SQL interpreter that allows you to read and extract the data.
If you have no luck finding such a directory, then it's more than likely the "management application" is writing C-ISAM directly, and you'll need an ODBC driver for C-ISAM, as you surmised.
The name app.dbs containing the .dat and .idx files is an almost sure indication that you have an Informix SE (Standard Engine) database (someone might have faked it, but it is pretty improbable).
Given that, you may be able to use an Informix ODBC driver and SE itself to access the database, or an ISAM-based ODBC driver. It depends in part on whether this is a one-time migration or ongoing access while the application continues to work on the database.
Assuming all of this is installed on Windows, you should indeed find a %INFORMIXDIR% directory, which will have a dbaccess.exe in the bin sub-directory, and an sqlexec.exe either in the bin directory or in the lib directory (it would be in $INFORMIXDIR/lib on Unix; I'm not sure about Windows). These should be able to access the database. If you find sqlexec but not dbaccess, then you've got a seriously old version (more than 20 years old, but I know of other people still using such archaic versions). You should be able to identify the version by running dbaccess -V or sqlexec -V. If it is 7.25, it is reasonably recent (that's been current for a decade or more); if it is older than that, it is verging on the archaic.
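If you do find a working engine and network listener, one more option for the "get it into Excel" goal is to pull rows out programmatically and write a CSV. A rough JDBC sketch follows; all connection details and the table name are placeholders, it requires ifxjdbc.jar on the classpath, and I'm not certain a plain SE/C-ISAM setup accepts JDBC connections at all - if it doesn't, the ODBC route is the fallback.

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ExportTableToCsv {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, database, and server name.
        String url = "jdbc:informix-sqli://dbhost:1526/app:INFORMIXSERVER=se_server";
        try (Connection conn = DriverManager.getConnection(url, "informix", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM customer"); // hypothetical table
             PrintWriter out = new PrintWriter("customer.csv")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) row.append(',');
                    row.append(rs.getString(i)); // no quoting/escaping, for brevity
                }
                out.println(row);
            }
        }
    }
}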

How to create DataSource objects using JNDI in NetBeans

I have tried to create a data source object using JNDI, but I got an error like 'driver not found' in org.dhcp. I have dropped the odbc14.jar file in both Catalina directories.
Which book should I refer to? And also, which book would you recommend for JSF beginners?
You shouldn't be using the odbc14.jar. (I'm assuming that you're using Oracle for your database.) The "14" in that JAR name refers to the version of the JDK that you deploy to. You'll be better off if you find the matching driver JAR for JDK 5 or 6.
Put the JAR in the /lib directory of your Tomcat deployment. (You said Catalina, so I'll assume Tomcat.)
You need more than a JAR to create a JNDI data source. Here are some docs to help you. (I'll assume Tomcat 6.)
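For reference, once the Resource is declared in Tomcat's context.xml and the driver JAR is in /lib, the lookup from application code follows the standard pattern. A sketch (the JNDI name jdbc/MyOracleDS is a placeholder that must match your Resource declaration):

import java.sql.Connection;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DbHelper {
    public static Connection getConnection() throws Exception {
        // Tomcat exposes container-managed resources under java:comp/env.
        Context initCtx = new InitialContext();
        DataSource ds = (DataSource) initCtx.lookup("java:comp/env/jdbc/MyOracleDS");
        return ds.getConnection();
    }
}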
You can tell from all the assumptions that I had to make that your question is missing a lot of very important information. I'd recommend that you bone up on how to ask smart questions.
