Apache Jena: riot does not produce output

I recently installed Apache Jena 3.17.0 and have been trying to use it to convert N-Quads files to N-Triples.
As per the instructions here (https://jena.apache.org/documentation/tools/), I first set up my WSL (Ubuntu 20.04) environment:
$ export JENA_HOME=apache-jena-3.17.0/
$ export PATH=$PATH:$JENA_HOME/bin
and then attempted to run riot to do the conversion (triail.nq is my N-Quads file):
$ riot --output=NTRIPLES -v triail.nq
When I ran this, I got no output in the terminal. I'm not sure what is going wrong, since there is no error message either. Does anyone know what could be causing this, or what the solution might be?
Thanks in advance!

The command will read the quad (multiple graph) data and output only the default graph. Presumably there is no default graph data in triail.nq.
If "convert" means combining all the quads into a single graph, then remove the graph field on each line of the data file with a text editor.
Otherwise, read the file into an RDF dataset, copy the named graphs into a single graph, and output that.
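The text-editor approach can also be scripted. This is a minimal sketch, assuming every line of the file ends with a graph term that is an IRI (not a blank node); merged.nt is just a name chosen for this example:

```shell
# Strip the trailing graph IRI from each N-Quads line, turning quads
# into plain triples. Caution: a line that is already a triple would
# lose its object IRI, so only use this on pure quad data.
sed -E 's/ <[^>]+>[[:space:]]*\.$/ ./' triail.nq > merged.nt
```

Duplicate triples can result if the same triple appears in several named graphs; re-reading the output with riot will check that it still parses.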

Related

Does anyone know how to get the tdb2.dump command to actually do anything

I'm trying to dump a Jena database as triples.
There seems to be a command that sounds perfectly suited to the task: tdb2.dump
jena@debian-clean:~$ ./apache-jena-3.8.0/bin/tdb2.tdbdump --help
tdbdump : Write a dataset to stdout (defaults to N-Quads)
Output control
--output=FMT Output in the given format, streaming if possible.
--formatted=FMT Output, using pretty printing (consumes memory)
--stream=FMT Output, using a streaming format
--compress Compress the output with gzip
Location
--loc=DIR Location (a directory)
--tdb= Assembler description file
Symbol definition
--set Set a configuration symbol to a value
--mem=FILE Execute on an in-memory TDB database (for testing)
--desc= Assembler description file
General
-v --verbose Verbose
-q --quiet Run with minimal output
--debug Output information for debugging
--help
--version Version information
--strict Operate in strict SPARQL mode (no extensions of any kind)
jena@debian-clean:~$
But I've not succeeded in getting it to write anything to STDOUT.
When I use the --loc parameter to point to a DB, a new copy of that DB appears in the subfolder Data-0001, but nothing appears on STDOUT.
When I try the --tdb parameter and point it to a ttl file, I get a stack trace complaining about its formatting.
Google has turned up the Jena documentation telling me the command exists, and that's it. So any help is appreciated.
"--loc" should be the same directory as was used to create the database.
Suppose that's "DB2". For TDB2 (not TDB1), "DB2/Data-0001" will already exist once the database has been created. Do not use this for --loc; use "--loc DB2".
If it is a TDB1 database (the files are directly in the directory at "--loc", with no "Data-0001"), then use tdbdump instead. An empty database has no triples/quads in it, so you would get no output.
Fuseki currently (up to 3.16.0) has to be called with the same setup each time it is run, which is fragile regarding TDB1/TDB2. If you created the TDB2 database outside Fuseki and only use command-line arguments, you'll need "--tdb2" each time.
Fuseki in the next release (3.17.0) detects the type of an existing database.
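To make the distinction concrete, here is a sketch of both dumps, assuming the Jena bin directory is on PATH and the databases live in directories named DB2 (TDB2) and DB1 (TDB1):

```shell
# TDB2: point --loc at the database directory itself,
# not at the DB2/Data-0001 subdirectory inside it
tdb2.tdbdump --loc DB2 > dump.nq

# TDB1: same idea, but with the TDB1 tool
tdbdump --loc DB1 > dump.nq
```

If dump.nq comes out empty, the database at that location genuinely holds no quads.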

How to Load Cypher File into Neo4j

I have generated a Cypher file and want to load it into Neo4j.
The only relevant documentation I could find was about loading CSVs.
I also tried the shell, but it seems to have no effect:
cypher-shell.bat -uneo4j -pne04j < db.cql
Copy-pasting into localhost:7474/browser makes the browser unresponsive.
In the current Neo4j version you can use Cypher Shell to achieve your goal.
From the docs, invoke Cypher Shell with a Cypher script from the command line:
$ cat db.cql | bin/cypher-shell -u yourneo4juser -p yourpassword
Note that this example is based on a Linux installation. If you are using Neo4j on Windows, you will need to adjust the command accordingly.
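For instance, on Windows the same pipe could look like the line below; both lines are sketches reusing the placeholder credentials above, and the --file flag assumes a cypher-shell version recent enough to support it:

```shell
# Windows (cmd.exe) equivalent of the pipe above:
#   type db.cql | bin\cypher-shell.bat -u yourneo4juser -p yourpassword

# On any platform, newer cypher-shell versions can read the script directly:
bin/cypher-shell -u yourneo4juser -p yourpassword --file db.cql
```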
Not sure when this was added, but in the current version (4.4) an alternative way to load a Cypher file into Neo4j (GUI) is to drag and drop the file onto the browser window, which then offers two options: add it to the favorites, or paste it into the editor.

RStudio Server C-Stack memory allocation setting in rsession.conf

I'm trying to increase the C-stack size in RStudio Server 0.99 on CentOS 6 by editing the /etc/rstudio/rserver.conf file as follows:
rsession-stack-limit-mb=20
But "rstudio-server verify-installation" returns this message:
The option 'rsession-stack-limit-mb' is deprecated and will be discarded.
If I put this setting in /etc/rstudio/rsession.conf, I get this message:
unrecognised option 'rsession-stack-limit-mb'
Can someone help me find the right configuration?
Thanks in advance,
Diego
I guess you use the free version of RStudio Server. According to https://github.com/rstudio/rstudio/blob/master/src/cpp/server/ServerOptions.cpp, it seems you need the commercial version if you'd like to manage memory limits in RStudio Server.
Alternatively, you can use the "ulimit" command on CentOS, e.g., "ulimit -s 20000". Then run R from the Linux command line or in batch mode.
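As a sketch of that workaround (20000 KB is just the figure from the question; whether raising the limit succeeds depends on the hard limit set by the administrator):

```shell
# Raise the soft stack limit for this shell session (value in KB);
# this fails if it exceeds the hard limit.
ulimit -s 20000
ulimit -s   # prints the soft limit now in effect
# R started from this same shell (e.g. "R --vanilla") inherits the
# limit; Cstack_info() inside R will report the new stack size.
```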

Cayley docker isn't writing data

I'm using [or, trying to use] the Cayley docker image from here: https://github.com/saidimu/cayley-docker
I created a data dir at /var/lib/apps/cayley/data and dropped the .cfg file in there that I was instructed to make:
{
  "database": "myapp",
  "db_path": "/var/lib/apps/cayley/data",
  "listen_host": "0.0.0.0"
}
I ran docker cayley with:
docker run -d -p 64210:64210 -v /var/lib/apps/cayley/data/:/data saidimu/cayley:v0.4.0
and it runs fine; I can see its UI in the browser.
I add a triple or two, and I get success messages.
Then I go to the query interface and try to list any vertices:
> g.V
and there is nothing to be found (I think):
{
"result": null
}
and there is nothing written in the data directory I created.
Any ideas why data isn't being written?
Edit: just to be sure there wasn't something wrong with my data directory, I ran the local volume mounted docker for neo4j on the same directory and it saved data correctly. So, that eliminates some possibilities.
I can not comment yet, but I think that to obtain results from your query you need to use the All keyword:
g.V().All() // Print All the vertices
OR
g.V().Limit(20) // Limits the results to 20
If that was not your problem, I can edit and share my Dockerfile, which is derived from the same Dockerfile you are using.
You may refer to the lib here to learn more about how to use Cayley's APIs, the data formats Cayley accepts, and related topics such as N-Triples, N-Quads, and RDF:
Cayley API usage examples (mocha test cases)
Clearly designed entry-level N-Quads data for getting started: in the project's test/data directory

How to run the mindtct package (NBIS software) for an entire database (a set of 100 images) at a time on Red Hat Linux

I have installed the NBIS software on Red Hat Linux in VMware, running as a guest OS on my Windows 7 system.
So far I have run it on only a single image, but now I need to run it on the entire DB of 100 images at once and get the extracted minutiae.
I use the command below:
/NBIS/src/bin/mindtct /NBIS/Test_4.1.0/mindtct/data/5_2.jpg /NBIS/output/5_2.xyt
Can anyone resolve my issue? What command should I use?
You can write a script to loop over all the images in your collection, or better yet, write a C program to wrap the mindtct functions, doing whatever you want to do within your new app. Take a look at the source for the binary mindtct in NBIS, especially the get_minutiae() function.
In the folder with your images you can use a bash script. This is the relevant part from mine: a simple for loop that converts every image with the extension .jp2 into an .xyt minutiae file.
PHOTOTYPE="*.jp2"
SAVEPATH="path/to/save/folder"
for PIC in $PHOTOTYPE
do
    echo "Processing: mindtct -m1 $PIC $SAVEPATH/$PIC"
    mindtct -m1 "$PIC" "$SAVEPATH/$PIC"
done
I tried it on a Raspberry Pi running Raspbian:
./mindtct path/file.jpg path/output
and it produced 8 files:
.brw, .dm, .hcm, .lcm, .lfm, .min, .qm, .xyt
As I understand it, you then use the extracted minutiae when comparing two finger images.