Docker container not showing files in the shared volume

I hope someone can help me locate the issue. I created a SQL Server 2019 container with this command:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v /SqlDockerVol/userdatabase:/userdatabase -v /SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
The problem I am having is that the container does not show the files I saved in the /sqlbackups folder.
I am using Ubuntu 20.04.
I logged into the SQL19 container like this:
docker exec -it SQL19 /bin/bash
then issued ls sqlbackups to confirm.
Do I need to set any permissions on the host folder? I am not familiar with Linux.
Thanks

I suspect you need to pass the absolute path to your folder. Is
/SqlDockerVol/userdatabase
the full absolute path? If it is relative, change it to:
$(pwd)/SqlDockerVol/userdatabase
Please check this related question: Docker shared folder with Linux
And technically you need something like:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v $(pwd)/SqlDockerVol/userdatabase:/userdatabase -v $(pwd)/SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
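To verify what the container actually mounted, you can inspect it (a quick check, assuming the container is named SQL19 as above):
docker inspect -f '{{ json .Mounts }}' SQL19
If the Source values are not the host paths you expected, the paths passed to -v were not what you intended.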

Related

share a directory between a docker pgadmin container and windows10

From within my docker pgadmin container, I want to access a postgresql backup file located in my windows10 OS.
So I'm trying to set up a shared directory.
Running this command works fine, and the directory is linked to the container.
docker run --name=windows10 -d -v C:\Users\johndoe:/windows10 -p 5554:80 dpage/pgadmin4 -e PGADMIN_DEFAULT_EMAIL=john@doe.com -e PGADMIN_DEFAULT_PASSWORD=whatever
However, the container exits on startup with this error in its log:
You need to specify PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD environment variables
What is this sorcery??
Move the environment variables to before the image name:
docker run --name=windows10 -d -v C:\Users\johndoe:/windows10 -p 5554:80 -e PGADMIN_DEFAULT_EMAIL=john@doe.com -e PGADMIN_DEFAULT_PASSWORD=whatever dpage/pgadmin4
-e is an option and must be specified between run and IMAGE (see https://docs.docker.com/engine/reference/commandline/run/)
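For reference, the synopsis from that page is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Anything placed after IMAGE is handed to the container as its command and arguments, which is why the trailing -e flags were not treated as docker options.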

How do I transfer a volume's data to another volume on my local host in docker?

I did
docker run -v /jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
on Windows (with Docker running in Linux-container mode).
However, after configuring Jenkins in that container, I now want to transfer the data in that /jenkins_home volume into a C:\jenkins_home folder on my local Windows host machine (or another machine).
Is there any way I can get the data from /jenkins_home to c:/jenkins_home?
I know I should have made it
docker run -v c:/jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
at the start, but mistakes were made, and I was wondering how to fix that.
I tried running
docker run -it -p 8080:8080 -p 50000:50000 --volumes-from jenkins_old -v c:/jenkins_home:/var/jenkins_home --name jenkins_new jenkins/jenkins:alpine
but it doesn't transfer the data over to the new c:\jenkins_home folder.
I don't know where the /jenkins_home would map to on windows, but you could try this:
docker run -it --rm -v /jenkins_home:/from -v c:/jenkins_home:/to alpine cp -a /from/. /to
(cp -a /from/. /to copies the contents of /from into /to instead of nesting a from/ directory inside it.)
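Alternatively, while the old container (jenkins_old in the question) is running, docker cp can pull the volume's contents straight out of it; a sketch, assuming that container name:
docker cp jenkins_old:/var/jenkins_home c:/jenkins_home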

Docker: how to pass a relative path as an argument

I would like to run this command:
docker run docker-mup deploy --config .deploy/mup.js
where docker-mup is the name of the image, and deploy, --config, .deploy/mup.js are arguments
My question: how to mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run?
I tried different things with VOLUME, but it seems that VOLUME does the opposite: it exposes a container directory to the host.
I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is.
Using -v to expose your current directory is the only way to make that .deploy/mup.js file available inside your container, unless you are baking it into the image itself using a COPY directive in your Dockerfile.
Using the -v option to map a host directory might look something like this:
docker run \
-v $PWD/.deploy:/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js
This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
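For completeness, the same bind mount can be written with the newer --mount syntax, which fails with an error if the host path does not exist instead of silently creating it:
docker run \
--mount type=bind,source="$PWD/.deploy",target=/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js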
Windows - PowerShell
If you're inside the directory you want to bind mount, use ${pwd}:
docker run -it --rm -d -p 8080:80 --name web -v ${pwd}:/usr/share/nginx/html nginx
or $pwd/. (forward slash dot):
docker run -it --rm -d -p 8080:80 --name web -v $pwd/.:/usr/share/nginx/html nginx
Just $pwd will cause an error:
docker run -it --rm -d -p 8080:80 --name web -v $pwd:/usr/share/nginx/html nginx
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to
delimit the name
Mounting a subdirectory underneath your current location, e.g. site-content, works fine with $pwd/ plus the subdirectory name:
docker run -it --rm -d -p 8080:80 --name web -v $pwd/site-content:/usr/share/nginx/html nginx
In my case there was no need for $pwd, and using the standard current folder notation . was enough. For reference, I used docker-compose.yml and ran docker-compose up.
Here is the relevant part of docker-compose.yml:
volumes:
  - ./logs:/data
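Note that Compose resolves relative host paths like . against the directory containing the compose file, so if you invoke it from elsewhere, point it at the file explicitly:
docker-compose -f /path/to/project/docker-compose.yml up -d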

How can I save zeppelin notebooks from a docker container?

I am using a Docker container for Spark/Zeppelin. The Docker image was found here:
https://github.com/Gmousse/docker-zeppelin-python3
I can start the image and work with it using this command:
docker run -it -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
To be able to communicate with the host, I have mounted some host paths with the volume flag, like this:
docker run -it -v /cephfs:/cephfs -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
It works fine. Then, to mount the zeppelin working directory, I added this:
docker run -it -v /cephfs:/cephfs -v my_path_on_host:/zeppelin -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
And this does not run: with this mount, the container looks for a zeppelin.sh file in /zeppelin and fails.
Any idea how I can mount a local volume and save zeppelin notebooks on the host?
Thank you for your time, in advance...
It is very handy to store notebooks on the local file system, especially under version control.
So you need to mount only the notebook folder; you tried to mount the whole zeppelin folder, so on start the container could not find the zeppelin files.
Correct mount examples:
docker run \
-p 8080:8080 \
-v /home/user/zeppelin_notebooks:/zeppelin/notebook \
apache/zeppelin:0.8.0
docker run \
-p 8080:8080 \
--mount type=bind,source="$(pwd)"/zeppelin_notebooks,target=/zeppelin/notebook \
--rm --name zeppelin apache/zeppelin:0.8.0
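Once the container is up with the bind mount, notebooks created in the UI should appear on the host immediately. A quick check from both sides, using the container name from the second command above:
docker exec zeppelin ls /zeppelin/notebook
ls "$(pwd)"/zeppelin_notebooks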
For my Apache Zeppelin Docker container hosted on Windows 10, the pwd is /opt/zeppelin and the default path for notebooks is /opt/zeppelin/notebook, so I mount my Windows paths as below. All notebooks are then saved in C:/Zeppelin/notebook:
docker run -p 8080:8080 -v C:/Zeppelin/Data/:/opt/zeppelin/Data/ -v C:/Zeppelin/notebook:/opt/zeppelin/notebook --name zeppelin apache/zeppelin:0.10.0

cannot share a folder with docker container

I am running an interactive Python docker container on Ubuntu 14.04 using docker 17.03.1. I want to share files between the local host and the container so that files I create in the container are visible in the local directory and vice versa. However, when I run the following command, I see an empty working directory in the container with no files.
docker run -e USER=$USER -e USERID=$UID -v /home/watts/python:/home/watts/python -w=/home/watts/python -p 8888:8888 --rm -it watts/python jupyter notebook --no-browser --notebook-dir=/home/watts/python --allow-root
I just ran this command:
docker run -v `pwd`/home/watts/python:/home/watts/python -it kaggle/python /bin/bash
(Note that `pwd`/home/watts/python only equals /home/watts/python when run from the root directory.)
After that, I created a few files on both the host and in the container, and all of them are visible on both sides.
Hopefully this will help.
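To confirm the mount works in both directions, create a file on the host and list the shared directory from inside the container (substitute your container's name or ID from docker ps):
touch /home/watts/python/from_host.txt
docker exec <container> ls /home/watts/python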
