To which directory should file volume be attached in Kommet Docker - docker

I am running an installation of the Kommet platform, but I am having trouble figuring out how to attach a volume for uploaded-file storage.
According to Kommet documentation I am supposed to run it with the following command:
docker run -t kommet/kommet -v data-volume:/var/lib/postgresql -d
This works fine and I can see that my database data is properly stored in my data volume. However, Kommet also allows for uploading files, and I cannot figure out where they are stored.
Is there an option to attach another volume to a specific location where uploaded files are kept?

According to the Running Kommet as Docker container doc, you must mount a volume at /usr/local/tomcat/webapps/filestorage, such as:
docker run -d --name myapp \
-v my-file-storage:/usr/local/tomcat/webapps/filestorage \
-t kommet/kommet
In your case, combined with the database volume, it would be something like (note that the -v options must come before the image name, and each continued line needs a trailing backslash):
docker run -d \
  -v data-volume:/var/lib/postgresql \
  -v file-volume:/usr/local/tomcat/webapps/filestorage \
  -t kommet/kommet

Related

Is there any way to read contents of a Docker volume without attaching it to a container?

Suppose I created a Docker volume like so:
docker volume create my-volume
The volume was then used by some container and data was written to it.
Is there any way to read the contents of the volume from the host machine without attaching it to a container? The answer should not involve reading it from /var/lib/docker..., as that path can change from machine to machine and from OS to OS.
So I am looking for a command like
docker cat my-volume:/path/inside/this/volume/file.txt
Is there any way to read the contents of the volume from the host machine without attaching it to a container?
No.
On the other hand, the recipe to read an individual file from a temporary container isn't that much more complicated than what you show:
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
cat ./path/inside/this/volume/file.txt
Instead of cat, you can run any other command; so if you wanted to copy the contents of the volume out to the local system, for example, you could similarly run
docker run --rm -v my-volume:/my-volume -w /my-volume busybox \
tar cf - . \
| tar xvf -
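The tar-pipe at the end is plain shell, so the technique can be seen in isolation without Docker. A minimal sketch using two local scratch directories (src and dest are made-up names for this illustration):

```shell
# The same tar-pipe copy technique, demonstrated with two local directories:
# pack everything under src/ to stdout, then unpack it under dest/.
mkdir -p src dest
echo "hello" > src/file.txt
( cd src && tar cf - . ) | ( cd dest && tar xf - )
cat dest/file.txt   # → hello
```

In the Docker version above, the first tar runs inside the temporary container and the second runs on the host, so the volume's contents end up in your current directory.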

Docker volume is empty

When using the -v switch, the files from the container should be copied to the host volume, right? But it seems like the jenkins_home directory isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the jenkins configs so I could re-run image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
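The rule turns entirely on the left-hand side of the -v argument. As a rough illustration of the rule described above (classify_mount is a made-up helper for this sketch, not Docker's actual parser):

```shell
# Rough sketch of how Docker chooses between the two -v forms:
# a left side containing a slash is treated as a host path (bind mount),
# while a bare name becomes a named volume. Illustration only.
classify_mount() {
  case "${1%%:*}" in
    */*) echo "bind mount" ;;
    *)   echo "named volume" ;;
  esac
}
classify_mount "jenkins_home:/var/jenkins_home"           # → named volume
classify_mount "/home/me/jenkins_home:/var/jenkins_home"  # → bind mount
```

This is why the two invocations behave so differently even though they look almost identical on the command line.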
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# --rm: clean up this container when done
# -w /var/jenkins_home: set the working directory inside the container
# tar cf - .: write a tar archive of that directory to stdout
docker run --rm -w /var/jenkins_home jenkins/jenkins tar cf - . \
  | tar xf -    # and unpack it on the host
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
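One shell detail worth keeping in mind when splitting a long docker run across lines: a backslash continues the line only when it is the very last character before the newline. A comment, or even a trailing space, after the backslash silently terminates the command early. A minimal illustration:

```shell
# A trailing backslash must be the last character on its line;
# the two physical lines then parse as one command.
printf '%s\n' hello \
  world
# → hello
# → world
```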
Figured it out.
Turned out that by default it creates the volume in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the docker image to mount. So I'm not sure if it was the -v jenkins_home:/var/jenkins_home or the docker volume create that created the directory in /var/lib/docker/volumes/.

error: could not lock config file error on Container

I'm running a docker container and mounting a volume that needs to be accessed from inside the container, and it fails with the error below.
docker run -it -p 8080:8080 -p 29418:29418 -e CANONICAL_WEB_URL=http://ec2-18-21-19-32.us-east-2.compute.amazonaws.com:8080 \
> -v '/home/gerrit/gerrit_instance/gerrit_vol/etc:/var/gerrit/etc' \
> --env CANONICAL_WEB_URL=http://ec2-18-219-190-32.us-east-2.compute.amazonaws.com:8080 gerritimage
error: could not lock config file /var/gerrit/etc/gerrit.config: Permission denied
Docker installed on Ubuntu
If I run this image without the volume, it works perfectly. The instant I add my custom file, this error is reported.
In the container, these files are populated automatically by the init script.
The links I followed to build and execute the image:
docker pull gerritcodereview/gerrit
https://github.com/GerritCodeReview/docker-gerrit
The steps are identical, and I'm just uploading the etc directory, which needs to be mounted into the gerrit container.
Any help is greatly appreciated.
Thank you,
Anish

Trouble mounting a folder from host onto my docker image

I am trying to mount a folder from my host system to a docker container. I am aware of the -v attribute of docker commands.
My command is:
docker run -v /home/ubuntu/tools/files/:/root/report -i -t --entrypoint /bin/bash my_image -s
But this does not seem to work; no files appear in my designated container folder. This is very frustrating, as I will need to add files to my docker image at periodic intervals, so just adding them to the build file at creation won't cut it.

Docker: filesystem changes not exporting

TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a Wordpress installation proof of concept, and am using BusyBox as both the MySQL data container and the main file-system container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my url (boot2docker ip) and there's a brand new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and sure enough the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?
Volumes are not exported with the new image. The proper way to manage data in Docker is to use a data container and back the data up with a command like
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
or with docker cp, and then transfer the archive around. See https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
