I installed RStudio Server by running:
docker run -d -p 8787:8787 -e USER='MY_USER' -e PASSWORD='MY_PASSWORD' rocker/hadleyverse
I would like to link (mount?) my local home directory (or a folder) into that Docker container. Is that possible? How?
Thanks!
You use -v or --volume to mount directories into a container. For example:
docker run -d \
-p 8787:8787 \
-e USER='MY_USER' \
-e PASSWORD='MY_PASSWORD' \
-v $HOME:/src \
rocker/hadleyverse
Now your container will have a folder named /src with the contents of your local home folder.
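If you want to double-check the mount from inside the container, a quick sketch (replace <container-id> with whatever docker ps reports for the container started above):
docker exec -it <container-id> ls /src
The listing should match your local home folder, and inside RStudio Server you can browse to /src to reach the same files.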
Use the -v flag:
docker run -d -p 8787:8787 -e USER='MY_USER' -e PASSWORD='MY_PASSWORD' -v $HOME:/data rocker/hadleyverse
Please see the documentation entitled Sharing files with host machine in the wiki.
I am setting up Quay in a VM with a CentOS distro. This is the guide I am following: quay deploy guide
I set this environment variable:
export QUAY=QUAY
and created a directory of the same name in my home directory:
mkdir QUAY
Once I installed Podman, I tried to run the first container with the command below:
$ sudo podman run -d --rm --name postgresql-quay \
-e POSTGRESQL_USER=quayuser \
-e POSTGRESQL_PASSWORD=quaypass \
-e POSTGRESQL_DATABASE=quay \
-e POSTGRESQL_ADMIN_PASSWORD=adminpass \
-p 5432:5432 \
-v $QUAY/postgres-quay:/var/lib/pgsql/data:Z \
registry.redhat.io/rhel8/postgresql-10:1
and I am getting the following error:
sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quay -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5432:5432 -v QUAY/postgres-quay:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-10:1
Error: error creating named volume "QUAY/postgres-quay": error running volume create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument
The bind mount needs to be specified as an absolute path or a relative path that starts with ./ or ../.
In other words, instead of
-v QUAY/postgres-quay:/var/lib/pgsql/data:Z
use
-v ./QUAY/postgres-quay:/var/lib/pgsql/data:Z
(Here $QUAY has been replaced with its value, QUAY.)
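Alternatively, you could point the variable at an absolute path so the expansion is always treated as a bind mount rather than a volume name. A sketch, assuming the QUAY directory sits in your home directory:
export QUAY=$HOME/QUAY
mkdir -p $QUAY/postgres-quay
sudo podman run -d --rm --name postgresql-quay \
-e POSTGRESQL_USER=quayuser \
-e POSTGRESQL_PASSWORD=quaypass \
-e POSTGRESQL_DATABASE=quay \
-e POSTGRESQL_ADMIN_PASSWORD=adminpass \
-p 5432:5432 \
-v $QUAY/postgres-quay:/var/lib/pgsql/data:Z \
registry.redhat.io/rhel8/postgresql-10:1
Since $QUAY now expands to an absolute path, podman no longer interprets the source as a named volume.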
I want to open a folder from my host machine in the Jupyter notebook application (like in this video: https://www.youtube.com/watch?v=W3bk2pojLoU). I tried several variations of docker run -it --rm --name tf -v /Users/superuser/mywork:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow:latest-py3-jupyter, but it doesn't work. Something must be wrong, but I can't figure out what it is.
Thanks for every answer (Y)
I am going to speculate, but I think what you mean by 'it doesn't work' is that you do not see the mywork folder from the host in the file list in the Jupyter web UI. If that is the case, what you want to try is mounting the volume under the /tf folder, i.e.
docker run -it --rm --name tf \
-v /Users/superuser/mywork:/tf/notebooks \
-p 8888:8888 \
-p 6006:6006 \
tensorflow/tensorflow:latest-py3-jupyter
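To confirm that the host files are actually visible, a quick check (this assumes the container is still running under the name tf used above):
docker exec tf ls /tf/notebooks
The output should match /Users/superuser/mywork on the host, and a notebooks folder should now show up in the Jupyter file browser, since the web UI serves the /tf directory.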
I have Firefox Nightly running in a container. I'm looking for a solution to configure it as my default browser application (Ubuntu 18.04).
So my question is: how do I configure a Docker container as a default system application in Ubuntu?
My docker command is:
docker run -d --net=host -v ~/:/home/firefox -v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix:0 -v /dev/shm:/dev/shm --device /dev/snd \
--group-add 29 -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
-v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
firefox-nightly
I suppose I must create a new MIME file, but I'm not sure how to do it so that the container is created with all these parameters.
Thanks
One alternative is to create a new .desktop file (e.g. /usr/share/applications/firefox-docker.desktop).
I just copied the existing firefox.desktop and changed the Exec entries to the command that runs Firefox through Docker (*).
Then use xdg-utils (**) to configure it as the default browser application:
xdg-settings set default-web-browser firefox-docker.desktop
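For reference, a minimal sketch of what firefox-docker.desktop could contain; the field values are illustrative, and the Exec line assumes the docker-firefox wrapper script described in * below:
sudo tee /usr/share/applications/firefox-docker.desktop > /dev/null <<'EOF'
[Desktop Entry]
Type=Application
Name=Firefox Nightly (Docker)
Exec=docker-firefox %u
Terminal=false
Categories=Network;WebBrowser;
MimeType=text/html;x-scheme-handler/http;x-scheme-handler/https;
EOF
You can verify the result afterwards with xdg-settings get default-web-browser.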
*: To keep the .desktop file cleaner, you could create an executable file on the system PATH (e.g. in /usr/bin) named docker-firefox:
#!/bin/sh
xhost +
docker run --net=host -v ~/:/home/firefox -v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix:0 -v /dev/shm:/dev/shm --device /dev/snd \
--group-add 29 -e PULSE_SERVER=unix:/run/user/1000/pulse/native \
-v /run/user/1000/pulse/native:/run/user/1000/pulse/native \
firefox-nightly "$@"
Note the "$@" at the end, which forwards any arguments (such as the URL to open) to Firefox. Also make the file executable so it can be run like a normal application.
**: The link is from the Arch documentation, but it works on Ubuntu as well.
So I've been trying to import an external CSV file into my graphdb.
My Neo4j instance runs in a Docker container.
I placed the file in NEO_HOME/import, as implied.
I called the LOAD CSV command with "file:///mycsv.csv" as an argument, and got the following in return:
Couldn't load the external resource at: file:/var/lib/neo4j/import/mycsv.csv
Since I'm running the Docker container in a Windows environment, I don't see where the /var directory should be. Even when browsing the container itself via the Docker Quickstart Terminal, I still cannot find /var/lib...
Changing the import directory in the .conf file didn't help either.
Has anybody run into this before?
You have to explicitly mount your import folder when invoking docker:
docker run -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 -v $PWD/plugins:/plugins -v $PWD/import:/var/lib/neo4j/import neo4j:3.1.3-enterprise
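If you are on Windows with the Docker Quickstart Terminal (Docker Toolbox), note that only paths under C:\Users are shared with the underlying VM by default, and they appear with the /c/Users/... spelling. A sketch of the same idea with an explicit Windows-side folder (the exact location is an assumption):
docker run -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 \
-v /c/Users/<your user>/neo4j/import:/var/lib/neo4j/import \
neo4j:3.1.3-enterprise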
When you run this command:
docker run \
--name testneo4j \
-p7474:7474 -p7687:7687 \
-d \
-v $HOME/neo4j/data:/data \
-v $HOME/neo4j/logs:/logs \
-v $HOME/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j/plugins:/plugins \
--env NEO4J_AUTH=neo4j/test \
neo4j:latest
The physical directory on Windows will probably be located at C:\Users\<your user>\neo4j, containing one subfolder per mounted volume: data, import, logs and plugins.
(screenshot: https://i.stack.imgur.com/VuW46.png)
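To double-check that the CSV is reachable from inside the container, a quick sketch (this assumes the container was started with the name testneo4j used above and that mycsv.csv was copied into the host's import folder):
docker exec testneo4j ls /var/lib/neo4j/import
If mycsv.csv shows up in that listing, LOAD CSV with "file:///mycsv.csv" should be able to resolve it.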
I want to set up a private registry behind an nginx server. To do that, I configured nginx with basic auth and started a Docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry, but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container with one of your Docker host's directories mounted as a volume in the container, you will get what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
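A quick way to see the persistence working; the registry address matches the INDEX_ENDPOINT above, and busybox is just an example image:
mkdir -p /home/example/registry            # create the host directory before starting the container
docker pull busybox
docker tag busybox docker.example.com/busybox
docker push docker.example.com/busybox     # log in first with docker login docker.example.com if basic auth is enabled
ls -R /home/example/registry               # the pushed data now lives on the host
After stopping and restarting the registry container, the pushed image should still be there.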