Making an instance of GraphDB on Docker

I am trying to make an instance of GraphDB on Docker. After creating the instance, I need to make a repository in order to import data into the instance. However, when I make a repository, it says that the repository does not exist, and when I use the loadrdf command to import data I receive an error saying that the repository does not exist.
dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl

The default data location of GraphDB is the data sub-directory of GraphDB's home directory, which in turn defaults to the distribution directory.
For the Docker image this is /opt/graphdb/dist, so the default data directory would be /opt/graphdb/dist/data.
However, the Docker image also changes the default home to /opt/graphdb/home, so the data directory becomes /opt/graphdb/home/data. This is done by passing the -Dgraphdb.home=/opt/graphdb/home Java option when starting GraphDB.
So, when you created your repository it was created at /opt/graphdb/home/data/repositories/repo-test.
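You can verify this from inside the container, for example (graphdb here is a placeholder for your actual container name):
docker exec graphdb ls /opt/graphdb/home/data/repositories
This should list repo-test.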
Your problem is that the loadrdf tool doesn't know about the changed home directory.
To overcome this, try exporting the GDB_JAVA_OPTS variable with the value -Dgraphdb.home=/opt/graphdb/home before running loadrdf, or as a one-liner:
GDB_JAVA_OPTS='-Dgraphdb.home=/opt/graphdb/home' ./dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl
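Equivalently, as the two-step export form mentioned above:
export GDB_JAVA_OPTS='-Dgraphdb.home=/opt/graphdb/home'
./dist/bin/loadrdf -f -i repo-test -m parallel /opt/graphdb/home/data/*.ttl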

Related

How to manage environment variables that point to credential files in docker container?

In my ~/.bashrc, I have set GOOGLE_APPLICATION_CREDENTIALS=~/.gc/credential_file_name.json.
My source code is located in ~/repos/github_repo/ (and I'm working from there), where I have a Dockerfile whose working directory is set to /usr/src/app.
If I copy ~/.gc/credential_file_name.json to ~/repos/github_repo/credential_file_name.json and run the docker container with
docker run -t \
-e GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/credential_file_name.json \
...
the credential file gets picked up and subsequent code runs ok.
But, ideally, I don't want to copy the credential into my github repository, as that risks accidentally pushing it to github (even when I add it to .gitignore, it's still not safe).
Additionally, instead of having to explicitly give the full path with -e GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/credential_file_name.json, I would like to do something like -e GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}, where ${GOOGLE_APPLICATION_CREDENTIALS} gets picked up from my ~/.bashrc.
But obviously, ${GOOGLE_APPLICATION_CREDENTIALS} will point to a path on my computer, which has a different directory structure than the docker container.
What is the best way to resolve this? I'm new to this, and I came across direnv and .envrc, but I don't quite understand them.
I'm using a Makefile to run the docker commands. I will try to avoid docker-compose, but if it solves this problem, please let me know.
Thanks for the help!
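One common pattern for this (a sketch, not a definitive answer): bind-mount the host credential file into the container read-only and point the variable at the in-container path, so the file never has to live in the repository. The paths reuse those from the question; my-image is a hypothetical image name:
docker run -t \
-v "$HOME/.gc/credential_file_name.json:/usr/src/app/credential_file_name.json:ro" \
-e GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/credential_file_name.json \
my-image
Because the -v source is resolved on the host while the -e value refers only to the container path, the host and container directory structures no longer need to match.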

How do I save changes to a docker container and image

I ran a container and it was missing a command alias like ll. So I typed alias ll="ls -lta" in the terminal while I was inside the container. After that, I ran docker commit to commit the changes to the container and image. I got a new image (outside the container), deleted the old image, and ran a new container from the image I had committed. But I was not able to use the ll alias. What am I missing here?
Container state is only persisted through files.
alias ll="ls -lta" made no file changes, and thus no state change was persisted by docker commit.
You may achieve the result you intend by editing one of the files that the shell uses to define its state when opened, e.g. ~/.bashrc or ~/.bash_profile; you'll need to determine which one applies to your environment/OS. A sketch of this follows.
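A minimal sketch of that approach (assuming a bash shell running as root in the container; my-container and my-image-with-alias are placeholder names):
# append the alias to bash's startup file inside the running container
docker exec my-container bash -c "echo 'alias ll=\"ls -lta\"' >> /root/.bashrc"
# persist that file change into a new image
docker commit my-container my-image-with-alias
# containers started from the new image pick up the alias from .bashrc
docker run -it my-image-with-alias bash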

`docker service update` not working to update config

First, every step seems to complete successfully (no errors are reported). But going by my understanding of how to check whether the new config is applied, the update seems to have failed.
Suppose I have a config file with a simple content like this:
Well done
I created a config (the first version) like this:
echo 'Well done' | docker config create my-config -
Now I have a local file named my-config.txt (on the host machine) with the content described above; it's used as a template (source) to clone over the target in the Docker container. In the Docker container, there is already a config file with the same content (originally). Now I change the content of the file my-config.txt (on the host machine) to something like this:
Well done !!!
And next I update the current docker service (created before) by using docker service update to apply the new config file, like this:
# first, create another version of the config
docker config create my-config-2 /home/my_user/my-config.txt
docker service update \
--config-add source=my-config-2,target=my-config.txt \
--config-rm my-config \
my-service
As I said, it seems to execute successfully. But when I try opening the my-config.txt file on the Docker container, its content is kept unchanged, like this:
docker exec [container_id] cat my-config.txt
It still shows Well done, whereas the expected content is Well done !!!. Isn't that what it should be? Am I doing something wrong here? Or could you suggest a way to diagnose this issue, or a different approach from what I've done?
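A few commands that may help diagnose this (a sketch; my-service is the service from the question). One thing worth knowing: docker service update rolls out new tasks, i.e. new containers, so an exec into the old container ID will still show the old file:
# confirm the updated service spec now references my-config-2
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.Configs}}' my-service
# list the service's tasks; the update should have started new ones
docker service ps my-service
# then repeat the check from the question against a container from a new task
docker exec <new_container_id> cat my-config.txt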

Move file downloaded in Dockerfile to harddrive

First off, I really lack a lot of knowledge regarding Docker itself and its structure. I know that it'd be way more beneficial to learn the basics first, but I do require this to work in order to move on to other things for now.
So within a Dockerfile I installed wget and used it to download a file from a website; authentication and download are successful. However, when I later try to move said file it can't be found, and it doesn't show up using e.g. Explorer either (the path was specified).
I thought it might have something to do with RUN and how it executes the wget command; I read that the container ID can be used to copy the file to the hard drive, but how would I do that within a Dockerfile?
RUN wget -P ./path/to/somewhere --user xyz --password bluh http://something.com/file.txt
ADD ./path/to/somewhere/file.txt /mainDirectory
The download is shown and the log-in is successful, but as I mentioned, I am having trouble using that file later on, as it is not located on the hard drive. Probably a basic error, but I'd really appreciate some input that might lead to a solution.
Obviously the error is produced when trying to execute ADD, as there is no file to move. I am trying to find a way to mount a volume in order to store it, but so far in vain.
Edit:
Though the question is similar to the "move to harddrive" one, I am searching for ways to get the ID of the container created within the Dockerfile in order to move the file; while that thread provides such answers, I haven't had any luck using them within the Dockerfile itself.
Short answer is that it's not possible.
The Dockerfile builds an image, which you can run as a short-lived container. During the build, you don't have (write) access to the host and its file system. Which kinda makes sense, since you want to build an immutable image from which to run ephemeral containers.
What you can do is run a container and mount a path from your host as a volume into the container. This is the only way you can share files between the host and a container.
Here is an example how you could do this with the sherylynn/wget image:
docker run -v /path/on/host:/path/in/container sherylynn/wget wget -O /path/in/container/file http://my.url.com
The -v HOST:CONTAINER parameter allows you to specify a path on the host that is mounted inside the container at a specified location.
For wget, I would prefer -O over -P when downloading a single file, since it makes it really explicit where your download ends up. When you point -O to the location of the volume, the downloaded file ends up on the host system (in the folder you mounted).
Since I have no idea what your image or your environment looks like, you might need to tweak one or two things to work well with your own image. As a general recommendation: For basic commands like wget or curl, you can find pre-made images on Docker Hub. This can be quite useful when you need to set up a Continuous Integration pipeline or so, where you want to use wget or curl but can't execute it directly.
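As an aside, if the goal is just to get a file produced during the build onto the host, another pattern worth knowing (a sketch, separate from the volume approach above) is to build the image, create a stopped container from it, and copy the file out with docker cp. The path reuses /mainDirectory from the question's ADD line; adjust it to wherever the file actually ends up:
# build the image from the Dockerfile
docker build -t my-image .
# create (but don't start) a container from it, then copy the file out
docker create --name temp my-image
docker cp temp:/mainDirectory/file.txt ./file.txt
docker rm temp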
Use wget -O instead of -P to download to a specific file,
e.g.:
RUN wget -O /tmp/new_file.txt --user xyz --password bluh http://something.com/new_file.txt
Thanks

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, in case that somehow affects it. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory as a volume, which effectively makes the file conf.opvn available inside the docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and the user rights of the folder in the docker container where this file is mounted. Try changing the file's permissions to 777 before beginning the process and check again.
You can find a discussion about this in the official Docker forums.
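To see what the container is actually given (a quick sketch using standard commands; vpn is the service from the question):
# print the resolved compose configuration, including volume definitions
docker-compose config
# list recent containers, then inspect how conf.opvn was mounted
docker ps -a
docker inspect --format '{{json .Mounts}}' <container_id>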
