Cayley docker isn't writing data

I'm using (or trying to use) the Cayley Docker image from here: https://github.com/saidimu/cayley-docker
I created a data dir at /var/lib/apps/cayley/data and dropped in the .cfg file I was instructed to make:
{
  "database": "myapp",
  "db_path": "/var/lib/apps/cayley/data",
  "listen_host": "0.0.0.0"
}
I ran docker cayley with:
docker run -d -p 64210:64210 -v /var/lib/apps/cayley/data/:/data saidimu/cayley:v0.4.0
and it runs fine; I can see its UI in the browser.
And I add a triple or two, and I get success messages.
Then I go to the query interface and try to list any vertices:
> g.V
and there is nothing to be found (I think):
{
"result": null
}
and there is nothing written in the data directory I created.
Any ideas why data isn't being written?
Edit: just to be sure there wasn't something wrong with my data directory, I ran the local volume mounted docker for neo4j on the same directory and it saved data correctly. So, that eliminates some possibilities.
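One way to narrow this down (a sketch; the container ID has to be filled in from docker ps, and the paths are the ones from the docker run line above) is to compare what the container sees at /data with what the host sees:
docker ps                                      # find the Cayley container ID
docker exec -it <container-id> ls -la /data    # any database files on the container side?
ls -la /var/lib/apps/cayley/data               # compare with the host side
If files appear inside the container but not on the host, the bind mount is at fault; if neither side has anything, Cayley is probably not writing to that path at all.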

I cannot comment yet, but I think that to get results from your query you need to use the All keyword:
g.V().All() // Print All the vertices
OR
g.V().Limit(20) // Limits the results to 20
If that was not your problem, I can edit this answer and share my Dockerfile, which is derived from the same Dockerfile you are using.

You may refer to the library here to learn more about how to use Cayley's APIs, the data format Cayley expects, and related topics such as N-Triples, N-Quads and RDF:
Cayley API usage examples (Mocha test cases)
Clearly laid-out, entry-level N-Quads sample data to get you started: in the project's test/data directory
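For reference, a minimal hand-written N-Quads sketch (the IRIs are made up, just to show the shape of the data Cayley loads):
<alice> <follows> <bob> .
<bob> <follows> <charlie> .
<charlie> <status> "cool_person" .
Each line is subject, predicate, object, an optional graph label, and a terminating dot.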

Related

Does anyone know how to get the tdb2.dump command to actually do anything

I'm trying to dump a jena database as triples.
There seems to be a command that sounds perfectly suited to the task: tdb2.dump
jena@debian-clean:~$ ./apache-jena-3.8.0/bin/tdb2.tdbdump --help
tdbdump : Write a dataset to stdout (defaults to N-Quads)
Output control
  --output=FMT     Output in the given format, streaming if possible.
  --formatted=FMT  Output, using pretty printing (consumes memory)
  --stream=FMT     Output, using a streaming format
  --compress       Compress the output with gzip
Location
  --loc=DIR        Location (a directory)
  --tdb=           Assembler description file
Symbol definition
  --set            Set a configuration symbol to a value
  --mem=FILE       Execute on an in-memory TDB database (for testing)
  --desc=          Assembler description file
General
  -v   --verbose   Verbose
  -q   --quiet     Run with minimal output
  --debug          Output information for debugging
  --help
  --version        Version information
  --strict         Operate in strict SPARQL mode (no extensions of any kind)
jena@debian-clean:~$
But I've not succeeded in getting it to write anything to STDOUT.
When I use the --loc parameter to point to a DB, a new copy of that DB appears in the subfolder: Data-0001, but nothing appears in STDOUT.
When I try the --tdb parameter, and point it to a ttl file, I get a stack trace complaining about its formatting.
Google has turned up the Jena documentation telling me the command exists, and that's it. So any help appreciated.
"--loc" should be the same as used to create the database.
Suppose that's "DB2". For TDB2 (not TDB1) after the database is created, then "DB2/Data-0001" will already exist. Do not use this for --loc. Use "--loc DB2".
If it is a TDB1 database (the files are in the directory at "--loc", no "Datat-0001"), the use tdbdump. An empty database has no triples/quads in it so you would get no output.
Fuseki currently (up to 3.16.0) has to be called with the same setup each time it is run, which is fragile regarding TDB1/TDB2. If you created the TDB2 database outside Fuseki and only use command line args, you'll need "--tdb2" each time.
Fuseki in the next release (3.17.0) will detect the existing database type.
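Putting that together, a minimal sketch (the DB2/DB1 directory names are just the examples used above):
# TDB2 database: point --loc at the database root, not at Data-0001
./apache-jena-3.8.0/bin/tdb2.tdbdump --loc DB2 > dump.nq
# TDB1 database: use the TDB1 tool instead
./apache-jena-3.8.0/bin/tdbdump --loc DB1 > dump.nq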

How do I edit the source code of a docker image?

I hope you are doing well.
I'm trying to rebuild a Docker image.
What I mean is, I don't just want to get some files into the image's file system; I want to edit the source code/the codebase itself... whatever it's called.
In particular, I'd like to make the image instances leave some log information.
But I'm totally clueless about what to edit (I can't even find the source code of that image).
Could you please help me edit the source code if you know how?
I would really appreciate it. Thank you in advance.
I'd like to make the image instances leave some log information
This requirement can be met with bind mounts:
$ docker run -d \
-it \
--name container-name \
-v "$(pwd)"/logs:/app/logs \
your-image
Here, $(pwd)/logs is a directory on your host filesystem that will contain the logs, and /app/logs is a directory that your application uses to write logs inside the container. Of course, you need to modify these according to your needs.
The other requirement can also be achieved in a similar way:
I don't just want to get some files into the file system of the image, but want to edit the source code/the codebase itself
It depends on the tech stack you use for development. For example, if your app is written in PHP, you can mount the source code folder into the container, and each time you modify a file, the same version will "appear" inside the container, since PHP is an interpreted language that does not require compilation.
If you use, for example, Go, this will not work the same way, since Go programs require compilation, and it is not enough to update the source code inside the container. In that case you'll have to rebuild the image each time you need to make a change.
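A rough sketch of the two approaches (image names and paths are assumptions, not taken from your setup):
# interpreted stack (e.g. PHP): bind-mount the source, edits show up immediately
docker run -d --name myapp -v "$(pwd)"/src:/var/www/html php:8-apache
# compiled stack (e.g. Go): rebuild the image after every source change
docker build -t myapp:dev . && docker run --rm myapp:dev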

Docker Error: Invalid Reference Format

When trying to do a docker load, I am getting the "invalid reference format" error. I run docker load -i name-of-tar-file.
That's the only error I see, with no additional information.
Some additional context: this is a project in Clojure. I recently updated some code (it was literally a one-line change, pretty minor too). The previous 'version' of my code works just fine; this updated one doesn't.
I haven't been able to find answers on SO about seeing this error when doing docker load.
Edit:
Some more context: I have an array map called result. Earlier, I was replacing :images with the placeholder, but now I want it replaced only if both :images and :og-images are empty.
Here's the original code:
(cond-> result
;; If no images, use placeholder.
(empty? (:images result)) (image-util/assoc-placeholder))
This is what I changed it to:
(cond-> result
;; If no images at all, use placeholder.
(and (empty? (:images result)) (empty? (:og-images result))) (image-util/assoc-placeholder))
And in a separate file, the version number had to be updated.
A couple of common debugging steps (a rough sketch of the round trip follows the list):
can the Docker image built from the updated Clojure code be run on the same system where it was built, before the docker save/load?
can the result of docker save be passed to docker load on the same system?
if you run sha256sum or md5sum on the tar file, does it match on both the system where docker save was run and the one where docker load was run?
are the file sizes reasonable?
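A rough sketch of that round trip (the image tag and file name are made up for illustration):
docker save -o myapp.tar myapp:1.0.1   # on the machine where the image was built
sha256sum myapp.tar                    # compare this value on both machines
docker load -i myapp.tar               # on the target machine
If the checksums differ, the tar file was altered in transit; if they match and docker load still fails, the problem lies in how the image was built or tagged before the save.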

Mounted docker volumes corrupting files

I think this is machine-related, but I'm not sure. I'm using the most current Docker Toolbox with Docker 1.10.3 on OS X.
I have a project using a Dockerfile, which copies code into the container like this:
[...]
COPY . /code
VOLUME /code
WORKDIR /code
[...]
For faster local development (test execution), we mount the current directory in the compose file
[...]
volumes:
  - .:/code
[...]
and execute
docker-compose -f docker-compose.yml -f docker-compose.testing.yml run web py.test
Now, it looks like I have two different folders/files:
when running the container and looking inside a file with vi, everything looks the same as on the host. But after changing files and executing our tests (pytest, specifically), the Python interpreter reads garbage and can't execute the tests.
Example
the end of a file (which got copied into the container in the Dockerfile) looks like this:
post_save.connect(backup_something, sender=SomeSender, dispatch_uid='backup_something') foobar
this obviously raises an error when executing, so I change it to
post_save.connect(backup_something, sender=SomeSender, dispatch_uid='backup_something')
the file looks fine now, both from the host and inside the container.
Executing pytest, it still reads the content of the copied code, breaking the tests locally for me.
If I change even more, it's neither the copied nor the mounted file, so stuff breaks at random positions:
File "/code/some_code.py", line 69
dispatch_uid='backup_
^
SyntaxError: EOL while scanning string literal
(tail shows correct syntax etc, there is definitely nothing broken with the code)
Is there something wrong with our setup or is it just my machine being broken somehow? I tried restarting and recreating the docker machine but this doesn't help.
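One comparison that can help pin this down (a sketch using the paths from the compose file above):
md5sum some_code.py                                     # on the host
docker-compose run --rm web md5sum /code/some_code.py   # inside the container
If the two hashes differ even though the file looks identical in vi, the stale content is coming from the shared-folder/VM layer between OS X and the container rather than from the code itself.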
I would try to mount in read-only mode and then double-check the filesystem type to see if there's something strange.
Years ago there was a bug with ntfs-3g corrupting files; maybe it's something similar (obviously not NTFS, since this is OS X).
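A read-only bind mount in the compose file would look something like this (same path as in the question, with the :ro flag added):
volumes:
  - .:/code:ro
If the corruption still appears with the mount read-only, the writes are happening in the shared-folder/VM layer rather than in the test run itself.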
I have no experience with Docker Toolbox on OS X, but I think you may have ended up with a union mount.
If that is the case, the solution would be to move the files or the mount point so that the files won't be shadowed.
This article may be relevant:

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that affects it in some way. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory as a volume; basically, the file conf.opvn is made available inside the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and the user rights of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before beginning the process and check again.
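A minimal sketch of that check (file and service names taken from the question):
chmod 777 conf.opvn
docker-compose run vpn conf.opvn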
You can find a discussion about this in the official Docker forum.
