How to start Fuseki server when there is an already existing database? - jena

I have a Fuseki TDB database of size 53 MB. I am trying to make it available on an endpoint with the command-line utility ./fuseki-server --tdb2 --loc=/path/to/database /ds . The endpoint on localhost:3030 starts, but querying it gives empty results. Can anyone help?


Maxmind geoipupdate gets http 403 on docker run

I am using the MaxMind GeoLite2 binary database for geolocation services, and I want to update it periodically.
Updating works fine through the geoipupdate program installed via brew.
However, MaxMind also provides a Docker image to update the database periodically.
When I run the docker command below,
docker run --env-file IdeaProjects/ip-geolocation-service/src/main/resources/application.properties -v /Users/me/GeoIp maxmindinc/geoipupdate
with the environment file referring to application.properties,
GEOIPUPDATE_ACCOUNT_ID=12345
GEOIPUPDATE_LICENSE_KEY=aaaaaaaaaa
GEOIPUPDATE_EDITION_IDS=GeoIP2-Country
I get the following error:
# STATE: Creating configuration file at /etc/GeoIP.conf
# STATE: Running geoipupdate
error retrieving updates: error while getting database for GeoIP2-Country: unexpected HTTP status code: received HTTP status code: 403: Invalid product ID or subscription expired for GeoIP2-Country
Since my credentials work when triggered manually, I wonder why they do not work on docker run. Any idea how to spot the problem, or has anyone else faced this?
You write that you want to use the free GeoLite2 database, but the edition ID you use looks like the commercial/paid one. Try the following instead:
GEOIPUPDATE_EDITION_IDS=GeoLite2-Country
Source: https://github.com/maxmind/geoipupdate/blob/main/doc/docker.md
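For reference, a corrected invocation might look like this (a sketch using the asker's paths; the /usr/share/GeoIP mount target is the database directory documented for the geoipupdate Docker image, and the original command was missing a container-side path on the -v flag):

```shell
# application.properties, passed as the --env-file, using the free edition ID
# GEOIPUPDATE_ACCOUNT_ID=12345
# GEOIPUPDATE_LICENSE_KEY=aaaaaaaaaa
# GEOIPUPDATE_EDITION_IDS=GeoLite2-Country

# mount the host directory onto the container's database directory
docker run --env-file IdeaProjects/ip-geolocation-service/src/main/resources/application.properties \
  -v /Users/me/GeoIp:/usr/share/GeoIP \
  maxmindinc/geoipupdate
```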

Does anyone know how to get the tdb2.dump command to actually do anything

I'm trying to dump a Jena database as triples.
There seems to be a command that sounds perfectly suited to the task: tdb2.tdbdump
jena#debian-clean:~$ ./apache-jena-3.8.0/bin/tdb2.tdbdump --help
tdbdump : Write a dataset to stdout (defaults to N-Quads)
Output control
--output=FMT Output in the given format, streaming if possible.
--formatted=FMT Output, using pretty printing (consumes memory)
--stream=FMT Output, using a streaming format
--compress Compress the output with gzip
Location
--loc=DIR Location (a directory)
--tdb= Assembler description file
Symbol definition
--set Set a configuration symbol to a value
--mem=FILE Execute on an in-memory TDB database (for testing)
--desc= Assembler description file
General
-v --verbose Verbose
-q --quiet Run with minimal output
--debug Output information for debugging
--help
--version Version information
--strict Operate in strict SPARQL mode (no extensions of any kind)
jena#debian-clean:~$
But I've not succeeded in getting it to write anything to STDOUT.
When I use the --loc parameter to point to a DB, a new copy of that DB appears in the subfolder Data-0001, but nothing appears on STDOUT.
When I try the --tdb parameter and point it to a ttl file, I get a stack trace complaining about its formatting.
Google has turned up the Jena documentation telling me the command exists, and that's it. So any help is appreciated.
"--loc" should be the same as used to create the database.
Suppose that's "DB2". For TDB2 (not TDB1) after the database is created, then "DB2/Data-0001" will already exist. Do not use this for --loc. Use "--loc DB2".
If it is a TDB1 database (the files are in the directory at "--loc", no "Datat-0001"), the use tdbdump. An empty database has no triples/quads in it so you would get no output.
Fuseki currently (up to 3.16.0) has to be called with the same setup each time it is run, which is fragile regarding TDB1/TDB2. If you created the TDB2 database outside Fuseki and only use command line args, you'll need "--tdb2" each time.
Fuseki in next release (3.17.0) detects existing database type.
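Putting the answer together, a sketch of both commands (assuming the TDB2 database was created at a directory named DB2, as in the answer, and the Jena 3.8.0 layout from the question):

```shell
# dump a TDB2 database: point --loc at the parent directory, never at Data-0001
./apache-jena-3.8.0/bin/tdb2.tdbdump --loc=DB2 > dump.nq

# serve the same database with Fuseki; --tdb2 is required every time
# for a TDB2 database created outside Fuseki (up to 3.16.0)
./fuseki-server --tdb2 --loc=DB2 /ds
```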

Error 403: "Flux query service disabled." But flux-enabled=true has been set in influxdb.conf

I have been using InfluxDB (server version 1.7.5) with the InfluxQL language for some time now. Unfortunately, InfluxQL does not allow me to perform any form of joins, so I need to use InfluxDB's new scripting language Flux instead.
The manual states that I have to enable Flux in /etc/influxdb/influxdb.conf by setting flux-enabled=true which I have done. I restarted the server to make sure I got the new settings and started the Influx Command Line tool with "-type=flux".
I then do get a different user interface than when I use InfluxQL. So far so good. I can also set and read variables etc. So I can set:
> dummy = 1
> dummy
1
However, when I try to do any form of query of the tables such as: from(bucket:"db_OxyFlux-test/autogen")
I always get
Error: Flux query service disabled. Verify flux-enabled=true in the [http] section of the InfluxDB config.
: 403 Forbidden
I found the manual for Flux rather lacking in basic details of schema exploration, so I am not sure if this is just an issue with my query raising this error or if something else is going wrong. I tested this both on my own home machine and on our remote work server, and I get the same results.
Re: Vilix
Thank you. This led me in the right direction.
I realised that InfluxDB does not automatically read the config file (which is not very intuitive). But your solution also forces me to start the daemon by hand each time. After some more googling I used:
"sudo influxd config -config /etc/influxdb/influxdb.conf"
So hopefully now the daemon will start automatically on each startup rather than me having to do this by hand.
I have the same issue and solution is to start influxd with -config option:
influxd -config /etc/influxdb/influxdb.conf
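In summary, a sketch of the two pieces involved (config fragment plus start command; /etc/influxdb/influxdb.conf is the path used throughout this thread):

```shell
# /etc/influxdb/influxdb.conf must enable Flux inside the [http] section:
#
#   [http]
#     flux-enabled = true

# then start the daemon pointing explicitly at that config file,
# otherwise the setting is silently ignored:
influxd -config /etc/influxdb/influxdb.conf
```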

neo4j-shell example of running a Cypher script

I need to run a Cypher query against a Neo4J database, from a command line (for batch scheduling purposes).
When I run this:
./neo4j-shell -file /usr/share/neo4j/scripts/query.cypher -path /usr/share/neo4j/neo4j-community-3.1.1/data/databases/graph.db
I get this error:
ERROR (-v for expanded information):
Error starting org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory, /usr/share/neo4j/neo4j-community-3.1.1/data/databases/graph.db
There is a running Neo4J instance on that database (localhost:7474). I need the script to perform queries against it.
NOTE: this is a split of the original question, for the sake of tidiness.
To execute (one or more) Cypher statements from a file while the neo4j server is running, you can use the APOC procedure apoc.cypher.runFile(file or url).
Since you mention "batch scheduling", the Job management and periodic execution APOC procedures may be helpful. Those procedures could, in turn, execute calls to apoc.cypher.runFile.
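A sketch of such a call (this assumes the APOC plugin is installed and that APOC's file-access settings permit reading local files; the script path is the asker's, and cypher-shell ships with Neo4j 3.1+):

```shell
# run from the bin/ directory of the Neo4j installation, against the live server
./cypher-shell "CALL apoc.cypher.runFile('file:///usr/share/neo4j/scripts/query.cypher')"
```

Unlike neo4j-shell with -path, this goes through the running server, so it does not need exclusive access to the database files.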
Okay I just spun up a fresh instance of Neo4j-community-3.1.1 today and ran into the exact same problem. Note that I had already created a database using the bulk import tool, so one might need to make a directory for a database (mkdir data/databases/graph.db) before using a shell.
I believe your problem might be that you have an instance of Neo4j process running against the database you are trying to access.
For me, shutting down Neo4j, and then starting the shell with an explicit path worked:
cd /path/to/neo4j-community-3.1.1/
bin/neo4j stop ## assuming it is already running (may need a port specifier)
bin/neo4j-shell -path data/databases/graph.db
For some reason I thought you could have both the shell and the server running, but apparently that is not the case. Hopefully someone will correct me if I am wrong.

How to start neo4j server from a given data path

As the title says.
How do I start the neo4j server from a given data path with just:
$ neo4j start
I have tried editing dbms.directories.data in conf/neo4j.conf, and it didn't work out.
I was using:
"neo4j_version" : "3.0.0"
