Docker volumes not mounting - docker

I am using this Docker image (https://hub.docker.com/r/karai17/lapis-centos/~/dockerfile/) with the following docker run command, and it is not setting up my volumes. When I open the container with bash, both /var/www and /var/data are empty.
docker run -dti -p 8888:2808
-v "C:\Users\karai\Documents\GitHub\project\data:/var/data"
-v "C:\Users\karai\Documents\GitHub\project\www:/var/www"
--name project
karai17/lapis-centos:latest
This was working just the other day; the only change I've made to the Docker image since then was to add a few more Lua rocks. All of the data is definitely there on the host, so I am not sure what is going on.

Not sure what happened, but after doing a factory reset of Docker for Windows the issue was resolved.
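If it happens again, a quick way to check whether Docker registered the bind mounts at all is to inspect the container (using the container name project from the run command above):
docker inspect --format "{{ json .Mounts }}" project
An empty Mounts list would mean the -v flags never took effect, which on Docker for Windows usually points at drive sharing settings rather than the image itself.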

Related

Plone on Docker always starts from scratch

I'm trying to develop a Plone project with Docker. I have used the official image of Plone 5.2.0, and the image builds and runs perfectly with:
$ docker build -t plone-5.2.0-official-img .
$ docker run -p 8080:8080 -it plone-5.2.0-official-cntr
But Plone starts from scratch each time I run the container, asking me to create the site again.
Could anybody help me with this?
Thanks in advance.
You can also use a volume for data like:
$ docker run -p 8080:8080 -it -v plone-data:/data plone-5.2.0-official-cntr
The next time you run a new container it will reuse the previous data.
If this helps:
Volumes are the Docker way to persist data. You can read up on them in the Docker documentation.
When running the container, just add a -v option and specify the host path and container path where you want the data stored:
$ docker run -p "port:port" -it -v "/host/path:/container/path" <image>
This is expected behavior, because docker run starts a new container, which doesn't have the state from your previous container.
You can use docker start CONTAINER instead, which starts that existing container again with the state from its previous run.
https://docs.docker.com/engine/reference/commandline/start/
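For example, assuming the Plone container created by the docker run above is still around, you can find its id and start it again with its previous state (the id is whatever docker ps -a reports on your machine):
docker ps -a
docker start -ai <container-id>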
A more common approach is to use docker-compose.yml and docker-compose up -d, which will, in most cases, reuse previous state.
https://docs.docker.com/compose/gettingstarted/
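As a rough, untested sketch, a docker-compose.yml combining the port mapping and the named volume from the answers above could look like this (the service name plone is arbitrary, and the image tag is the one built earlier):
version: "3"
services:
  plone:
    image: plone-5.2.0-official-img
    ports:
      - "8080:8080"
    volumes:
      - plone-data:/data
volumes:
  plone-data:
Running docker-compose up -d then reuses the plone-data volume across restarts.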

Unable to link volume to the rocker/rstudio container on linux

When I execute commands in Ubuntu 18:
cd ~/r-projects
docker run -d -v $PWD:/home/rstudio rocker/rstudio
Docker creates an rstudio container accessible at localhost:8787, but I can't see the contents of $PWD inside the RStudio session. When I save files in the RStudio session and then restart the container, those files persist, but I cannot find them on the host using the locate command. It seems that $PWD is not mounted and Docker uses another folder to preserve the RStudio state.
This is strange behavior. What I really want is to link a folder on the host into the rstudio container. What am I doing wrong?
The official instructions did not help me.
Please provide the correct command.
I resolved the issue:
docker run -d -p 8787:8787 -e PASSWORD=123 -v $PWD:/home/rstudio rocker/rstudio
The problem was that I had been executing the commands inside a Kubernetes cluster.

Error when running ghost docker image second time

I can run a Ghost docker container with this command (https://hub.docker.com/_/ghost/):
docker run -ti -v /tmp/data:/var/lib/ghost/content -p2368:2368 ghost
But only when /tmp/data is empty. If I try to stop this container with Ctrl+c and run it again, it fails with this error:
docker run -ti -v /tmp/data:/var/lib/ghost/content -p2368:2368 ghost
chown: changing ownership of '/var/lib/ghost/content/themes/casper': No such file or directory
I need to store Ghost's data outside the container, and this is the way to do it according to the documentation. Am I missing something?
I'm trying this on Mac.
I had the same problem when running Ghost under Docker for Mac.
I would suggest creating a Docker volume for your data rather than bind-mounting a folder directly; there seems to be a problem with resolving symlinks.
docker volume create ghost-data
docker run -it --mount source=ghost-data,target=/var/lib/ghost/content -p 2368:2368 ghost
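If you later need to get at that data on the host, docker volume inspect shows where the named volume lives (note that on Docker for Mac this path is inside the Docker VM, not directly on your Mac filesystem):
docker volume inspect ghost-data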
I had this problem before, but after pulling the latest version again (docker pull ghost:latest) everything worked fine. I guess the chown in the Ghost image's Dockerfile caused the ownership error.

How to access Docker (with Spark) file systems

Suppose I am running CentOS. I installed Docker, then ran the image.
Suppose I use this image:
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
Then I run
docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook
Now, I can open the browser at localhost:8888, create a new Jupyter notebook, type code and run it, etc.
However, how can I access the files I created and, for example, commit them to GitHub? Furthermore, if I already have some code on GitHub, how can I pull that code and access it from inside Docker?
Thank you very much,
You need to mount a volume:
docker run -it --rm -p 8888:8888 -v /opt/pyspark-notebook:/home/jovyan jupyter/pyspark-notebook
You could simply have executed !pwd in a new notebook to find which folder the work was being stored in, and then mounted that as a volume. When you run it as above, the files are available on your host in /opt/pyspark-notebook.
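For the GitHub part of the question: with the host directory mounted as above, an ordinary git clone on the host puts the code where the notebook can see it (the repository URL and the /opt/pyspark-notebook path here are just placeholder values):
git clone https://github.com/youruser/yourrepo.git /opt/pyspark-notebook/yourrepo
docker run -it --rm -p 8888:8888 -v /opt/pyspark-notebook:/home/jovyan jupyter/pyspark-notebook
The repository then appears under /home/jovyan/yourrepo inside the notebook, and you can commit and push from the host as usual.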

Docker - how can I copy a file from an image to a host?

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use Docker volumes, e.g.:
docker run -v hostdir:/out $IMAGENAME /bin/cp ../blah.zip /out
but I'm running boot2docker on OS X and I don't know how to write directly to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will do in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
Edit: the container does not even have to be run in this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm so the container will be removed afterwards; I had never tried it because it sounded illogical to copy something from somewhere that had already been removed by the previous command, but I tried it and it works.
Edit 3: due to the comments we found out that --rm does not work as expected here; it does not remove the container because it never runs, so I added functionality to delete the created container afterwards (--name tc = temporary container).
Edit 4: this error appeared; it seems like a bug in Docker, because t is in a-z and this did not happen a few months before:
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to have the container copy the file into a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update: as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general: once you have pulled an image, the image is stored on your system, and so are all of its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
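As a rough sketch of that approach (requires root, assumes the overlay2 driver, and blah.zip is just an example file name):
for d in $(docker image inspect IMAGE_NAME:TAG | jq -r '.[0].GraphDriver.Data[]' | tr ':' '\n'); do sudo find "$d" -name "blah.zip" 2>/dev/null; done
This walks every layer directory listed in GraphDriver.Data and searches each one for the file.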
First, pull the Docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the docker container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on docker cp are portable, because every command is relayed into boot2docker and the binary stream is relayed back to the docker command-line client running on your Mac. So write operations from the docker client are executed inside the server and written back to the executing client instance.
I share a backup script for Docker volumes with every Docker container I provide, and my backup scripts are tested on both Linux and macOS with boot2docker. The backups can easily be exchanged between platforms. Basically, I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volumes of my Jenkins container named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between a Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
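For completeness, the matching restore is roughly the reverse of the backup command above (an untested sketch, using the same container name and paths):
docker run --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar -C /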
You could bind a local path on the host to a path on the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
