I'm trying to mount a directory containing a file with some credentials onto a container with the -v flag, but instead of mounting that file as a file, it mounts it as a directory:
Run script:
$ mkdir creds/
$ cp key.json creds/key.json
$ ls -l creds/
total 4
-rw-r--r-- 1 root root 2340 Oct 12 22:59 key.json
$ docker run -v /*pathToWorkingDirectory*/creds:/creds *myContainer* *myScript*
When I look at the debug spew of the docker run command, I see that it creates the /creds/ directory, but for some reason creates key.json as a subdirectory under that, rather than copying the file.
I've seen other posts saying that if you tell docker run to mount a file it can't find, it will create a directory in the container with that filename. But since I didn't specify the filename and it still knew to name the new directory 'key.json', it seems like it was able to find the file yet created it as a directory anyway. Has anyone run into this before?
In case it's relevant, the script is being run in Docker-in-Docker in another container as part of GitLab's CI process.
You are running Docker-in-Docker. This means that when you specify a -v volume, Docker will look for that directory on the host, because the shared socket that enables Docker-in-Docker actually means your run command starts a container alongside the runner container.
I explain this in more detail in this SO answer:
https://stackoverflow.com/a/46441426/2078207
Also notice the comment below that answer, which points toward a solution.
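One possible direction, sketched here with the placeholders from the question (the container name creds-job is illustrative, not tested): because the runner and the job container are siblings on the same daemon, you can skip the bind mount, copy the credentials into the job container with docker cp, and then start it.
# Hedged sketch; image and command placeholders are the question's own
docker create --name creds-job *myContainer* *myScript*   # create the container, don't start it yet
docker cp creds creds-job:/creds                          # copy the creds directory from the runner into it
docker start --attach creds-job                           # run the script with /creds/key.json in place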
Related
I'm trying to add an init.sh script to the docker-entrypoint-initdb.d so I can finish provisioning my DB in a docker container. The script is in a scripts directory in my local directory where the Dockerfile lives. The Dockerfile is simply:
FROM glats/alpine-lamp
ENV MYSQL_ROOT_PASSWORD=password
The build command works and completes with no errors, and then when I try to run the container it also runs fine, with the linked volume with the init script:
docker run -d --name mydocker -p 8080:80 -it mydocker \
-v ~/Docker/scripts:/docker-entrypoint-initdb.d
However when I log into the running container, I don't see any docker-entrypoint-initdb.d directory, and obviously the init.sh never runs:
/ # ls /
bin etc media proc sbin tmp
dev home mnt root srv usr
entry.sh lib opt run sys var
Does anyone know why the volume isn't getting mounted?
There is no such logic defined in the Docker image that you are using. The entrypoint of that image just starts MySQL and httpd; it has no ability to initialize a database from the entrypoint.
If you want the ability to run an init script via the entrypoint and have the database constructed for you, you need to use the official MySQL image:
Initializing a fresh instance
When a container is started for the first time, a new database with
the specified name will be created and initialized with the provided
configuration variables. Furthermore, it will execute files with
extensions .sh, .sql and .sql.gz that are found in
/docker-entrypoint-initdb.d.
Also, when running containers it is better to stick to one process per container. You can look into a docker-compose file that runs the stack following the "one process per container" rule.
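For reference, here is a minimal sketch of such a run command using the official MySQL image and the paths from the question (image tag and published port are illustrative). Also note that in docker run every option, including -v, has to come before the image name; anything placed after the image name is passed to the container as a command.
# Hedged sketch: the official image's entrypoint executes *.sh/*.sql files
# found in /docker-entrypoint-initdb.d when a fresh instance starts.
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=password \
  -v ~/Docker/scripts:/docker-entrypoint-initdb.d \
  -p 3306:3306 \
  mysql:5.7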
What is the use of "VOLUME" or "RUN mkdir /m"?
Even if I do not specify either of these instructions in the Dockerfile, "docker run -v ${PWD}/m:/m" still works.
Inside a Dockerfile, VOLUME marks a directory as a mount point for an external volume. Even if the docker run command doesn't mount an existing folder into that mount point, docker will create a named volume to hold the data.
RUN mkdir /m does what mkdir does on any Unix system. It makes a directory named m at the root of the filesystem.
docker run -v ... binds a host directory to a volume inside a container. It works whether or not the mount point was declared as a volume in a Dockerfile, and it will also create the directory inside the container if it doesn't exist. So neither VOLUME nor RUN mkdir is strictly necessary before using that command, though they may help communicate the intent to the user.
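As a small illustration, assuming an image named myimage built from a Dockerfile that may or may not contain VOLUME /m or RUN mkdir /m:
# Bind mount: works in every case; Docker creates the /m mount point inside
# the container if it does not already exist.
docker run --rm -v ${PWD}/m:/m myimage ls /m

# No -v flag: if the Dockerfile declared VOLUME /m, Docker backs /m with an
# anonymous volume for this container run.
docker run myimage ls /m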
I am trying to set up my project with Docker. I am using Docker Toolbox on Windows 10 Home. I am very new to Docker. To my understanding, I have to copy my files to a new container and add a volume so that I can persist the changes made by gulp.
Here is my folder structure
-- src
|- dist
|- node-modules
|- gulpfile.js
|- package.json
|- Dockerfile
The Dockerfile code
FROM node:8.9.4-alpine
RUN npm install -g gulp
CMD [ "ls", 'source' ]
I tried many variations of docker run -v, e.g.:
docker run -v /$(pwd):/source <container image>
docker run -v //c/Users/PcUser/Desktop/proj:/source <container image>
docker run -v //d/proj:/source <container image>
docker run -v /d/proj:/source <container image>
But no luck.
Can anyone describe how you would set this up yourself with the same structure, and why I am not able to mount my host folder?
P.S.: If I use two containers, one for compiling my code with gulp and one with nginx to serve the content of the dist folder, how would I do that?
@sxm1972 Thank you for your effort and help.
You are probably using Windows Pro or a server edition. I am using Windows 10 Home edition
Here is how I solved it, so other people using same setup can solve their issue.
There may be a better way to solve this, please comment if there is an efficient way.
So...
First, the question: why don't I see the folder shared from my PC in my container?
Answer: If we use Docker's Boot2Docker with VirtualBox (which I am), then whenever a volume is mounted it refers to a folder inside the Boot2Docker VM.
Image: Result of -v with docker in VirtualBox Boot2Docker
So if we try $ ls inside the container it will show an empty folder, which in my case it did.
So we have to actually mount the folder into the Boot2Docker VM if we want to share our files from the Windows environment with the container.
Image: Resulting Mounts Windows <-> Boot2Docker <-> Container
To achieve this we have to manually mount the folder into the VM with the following command:
vboxmanage sharedfolder add default --name "<folder_name_on_vm>" --hostpath "<path_to_folder_on_windows>" --automount
If you get an error running the command saying vboxmanage is not found, add the VirtualBox folder to your system PATH. For me it was C:\Program Files\Oracle\VirtualBox.
After running the command, you'll see <folder_name_on_vm> in the root of the VM. You can check it by running docker-machine ssh default and then ls /. After confirming that the folder <folder_name_on_vm> exists, you can use it as a volume for your container:
docker run -it -v /<folder_name_on_vm>:/source <container> sh
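With the names from the question filled in, the whole sequence from this answer looks roughly like this (folder name and paths are illustrative):
# Share the Windows folder with the Boot2Docker VM
vboxmanage sharedfolder add default --name "proj" --hostpath "C:\Users\PcUser\Desktop\proj" --automount

# Confirm the folder shows up in the root of the VM
docker-machine ssh default ls /

# Use it as the source of the bind mount
docker run -it -v /proj:/source <container image> sh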
Hope this helps...!
P.S. If you are feeling lazy and don't want to mount a folder, you can place your project inside your C:/Users folder, as it is mounted by default on the VM as shown in the image.
The problem is that the base image you are using runs the node REPL by default. If you run the image as docker run -it node:8.9.4-alpine you will see a node prompt, and it will not run the npm command the way you want.
The way I worked around the problem is to create your own base image using the following Dockerfile:
FROM node:8.9.4-alpine
CMD ["sh"]
Build it as follows:
docker build -t mynodealpine .
Then build your image using this modified Dockerfile:
FROM mynodealpine
RUN npm install -g gulp
CMD [ "/bin/sh", "-c", "ls source" ]
For the problem regarding mounting of volumes: since you are using Docker for Windows, you need to go into Settings (click on the icon in the system tray) and then enable Shared Drives.
Here is the output I was able to get:
PS C:\users\smallya\testnode> dir
Directory: C:\users\smallya\testnode
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2/18/2018 11:11 AM dist
d----- 2/18/2018 11:11 AM node_modules
-a---- 2/18/2018 11:13 AM 77 Dockerfile
-a---- 2/18/2018 11:12 AM 26 gulpfile.js
-a---- 2/18/2018 11:12 AM 50 package.json
PS C:\users\smallya\testnode> docker run -it -v c:/users/smallya/testnode:/source mynodealpinenew
Dockerfile dist gulpfile.js node_modules package.json
PS C:\users\smallya\testnode>
Thanks for the question. A possibly simpler configuration via the VirtualBox graphical dialogue worked for me, without using the command line, albeit perhaps not as versatile:
configure the shared folder in the VirtualBox shared folders configuration dialogue,
and then call docker run with the volume like this
docker run --volume //d/docker/nginx:/etc/nginx
I will be binding the /etc/nginx directory in my container to
D:\Program Files\Docker Toolbox\nginx
source:
https://medium.com/@Charles_Stover/fixing-volumes-in-docker-toolbox-4ad5ace0e572
Is it possible to pass a local file for CQL commands to a Cassandra Docker container?
Using docker exec fails as it cannot find the local file:
me@meanwhileinhell:~$ ls -al
-rw-r--r-- 1 me me 1672 Sep 28 11:02 createTables.cql
me@meanwhileinhell:~$ docker exec -i cassandra_1 cqlsh -f createTables.cql
Can't open 'createTables.cql': [Errno 2] No such file or directory: 'createTables.cql'
I would really like not to have to open a bash session and run a script that way.
The container needs to be able to access the script first before you can execute it (i.e. the script file needs to be inside the container). If this is just a quick one-off run of the script, the easiest thing to do is probably to just use the docker cp command to copy the script from your host to the container:
$ docker cp createTables.cql container_name:/path/in/container
You should then be able to use docker exec to run the script at whatever path you copied it to inside the container. If this is something that's a work in progress and you might be changing and re-running the script while you're working on it, you might be better off mounting a directory with your scripts from your host inside the container. For that you'll probably want the -v option of docker run.
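For the one-off case, the whole flow would look roughly like this (the path inside the container is just an example):
# Copy the script into the running container, then execute it there
docker cp createTables.cql cassandra_1:/createTables.cql
docker exec -i cassandra_1 cqlsh -f /createTables.cql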
Hope that helps!
If you want the Docker container to see files from the host system, the only way is to map a volume. You can map the current directory to /tmp when starting the container and then run the command again: docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql
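A sketch of that approach, assuming the container is started fresh from the official cassandra image (the image tag and mount point are illustrative):
# Start Cassandra with the current directory bind-mounted at /tmp
docker run -d --name cassandra_1 -v "$(pwd)":/tmp cassandra:3.11

# Once Cassandra has finished starting up, run the CQL file from inside the container
docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql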
I'm using Vagrant, so the container is inside a VM. Below is my shell provisioner:
#!/bin/bash
CONFDIR='/apps/hf/hf-container-scripts'
REGISTRY="tutum.co/xxx"
VER_APP="0.1"
NAME=app
cd $CONFDIR
sudo docker login -u xxx -p xxx -e xxx@gmail.com tutum.co
sudo docker build -t $REGISTRY/$NAME:$VER_APP .
sudo docker run -it --rm -v /apps/hf:/hf $REGISTRY/$NAME:$VER_APP
Everything runs fine and the image is built. However, the syncing command (the last one) above doesn't seem to work. I checked in the container: the /hf directory exists and it has files in it.
Another problem: if I manually execute the syncing command it succeeds, but ls /hf then only shows the files from the host. It seems that Docker empties /hf and places the files from the host into it. I want it the other way around, or better yet, to merge them.
Yeah, that's just how volumes work I'm afraid. Basically, a volume is saying, "don't use the container file-system for this directory, instead use this directory from the host".
If you want to copy files out of the container and onto the host, you can use the docker cp command.
If you tell us what you're trying to do, perhaps we can suggest a better alternative.
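If copying files out of the image is what you actually need, here is a sketch using the variables from your provision script (the temporary container name is illustrative):
# Create a stopped container from the image and copy /hf out to the host,
# instead of bind-mounting the host directory over it.
sudo docker create --name hf-copy $REGISTRY/$NAME:$VER_APP
sudo docker cp hf-copy:/hf/. /apps/hf/
sudo docker rm hf-copy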