I have to install a lot of missing node-red nodes to the container. Keeping the (named) container and running it with docker start works fine.
Now I want to keep the installed nodes in a separate external directory. If I mount /data to an external directory it basically works, but doesn't help, since the nodes are installed in ~/node_modules. If I try to mount ~/node_modules to an external directory, Node-RED can't start.
So what can I do to keep the nodes I installed independent of the executed container?
EDIT:
Meanwhile I did run the image as follows:
#!/bin/bash
sudo -E docker run -it --rm -p 1893:1880 -p 11893:11880 \
  -e TZ=Europe/Berlin -e NPM_CONFIG_PREFIX=/data/node_modules/ \
  -e NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules:/data/node_modules/lib/node_modules \
  --log-driver none --mount type=bind,source="$(pwd)"/data,target=/data \
  --name myNodeRed nodered/node-red
but the additionally installed nodes, which are in the directory /data/node_modules/lib/node_modules, are still not visible.
EDIT 2:
Meanwhile I tried keeping the container. It became obvious that nodes installed using npm install -g are completely ignored.
The default user for the Node-RED instance inside the container is not root (as is usual), so you need to make sure any volume you mount onto the /data location is writable by that user. You can do this by passing a user id to the container so that it matches the external user that has write permission on the mount point:
docker run -it --rm -v $(pwd)/data:/data -u $USER -e TZ=Europe/Berlin \
  -p 1893:1880 -p 11893:11880 --log-driver none \
  --name myNodeRed nodered/node-red
Node-RED nodes should not be installed with the -g option. You should use the built-in Palette Manager, or, if you really need to use the command line, run npm install <node-name> in the /data directory inside the container. (You will need to restart the container for the newly installed nodes to be picked up, which is again why you should use the Palette Manager.)
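A minimal sketch of the command-line route described above, assuming a running container named myNodeRed with /data mounted externally; node-red-dashboard is just an example package, and the guard only exists so the script is harmless on machines without Docker:

```shell
# Install an extra node into /data so it lands in the externally mounted
# directory, then restart so Node-RED picks it up.
# Assumptions: container "myNodeRed" is running; "node-red-dashboard" is an
# example package name, not one from the original question.
if command -v docker >/dev/null 2>&1 && \
   docker ps --format '{{.Names}}' 2>/dev/null | grep -qx myNodeRed; then
  # npm install in /data puts the module into /data/node_modules
  docker exec -w /data myNodeRed npm install node-red-dashboard
  # restart the container so the newly installed node is picked up
  docker restart myNodeRed
else
  echo "myNodeRed is not running; skipping"
fi
status=ok
```

Because the module ends up under the bind-mounted /data, it survives removing and recreating the container.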
I am reading through this part of the Jenkins Docker README, and there seems to be a section that contradicts itself, at least from my current understanding.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says to NOT use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission
issues (the user used inside the container might not have rights to
the folder on the host machine). If you really need to bind mount
jenkins_home, ensure that the directory on the host is accessible by
the jenkins user inside the container (jenkins user - uid 1000) or use
-u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
This will run Jenkins in detached mode with port forwarding and a volume
added. You can access logs with the command 'docker logs CONTAINER_ID' in
order to check the first login token. The ID of the container will be
returned from the output of the command above.
Backing up data
If you bind mount in a volume - you can simply back up
that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on
a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home -n jenkins ...
That defines a Docker volume named jenkins_home, which will be created in:
/var/lib/docker/volumes/jenkins_home.
The idea being that you can easily backup said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
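To see what that tar round-trip does without involving Docker, here is a small local sketch; the paths are throwaway temp directories, not a real Jenkins home:

```shell
# Illustrate the backup/restore round-trip the helper container performs:
# archive a "jenkins_home" directory, then unpack it somewhere else.
workdir=$(mktemp -d)
mkdir -p "$workdir/jenkins_home/jobs"
echo "demo" > "$workdir/jenkins_home/config.xml"

# Backup: what runs inside the ubuntu helper container
# (cd into jenkins_home and tar its contents).
tar -C "$workdir/jenkins_home" -cf "$workdir/jenkins_home.tar" .

# Restore: what you would do on the target host before mounting
# the directory back into a fresh Jenkins container.
mkdir -p "$workdir/restored"
tar -C "$workdir/restored" -xf "$workdir/jenkins_home.tar"
```

After the restore, `$workdir/restored` contains the same tree as the original jenkins_home, which is exactly the property that makes the volume portable between Docker hosts.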
This differs from bind mounts, which do involve building a new Docker image in order to be able to mount a local folder owned by your local user (instead of the default user defined in the official Jenkins image: 1000:1000):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins
I have made a data container in Docker with the directory /tmp:
sudo docker create -v /tmp --name datacontainer ubuntu
I want to add another directory, like /opt, to this existing data container.
How can I do this?
You cannot add a new data volume to an existing (created or running) container.
With docker 1.9+, you would use instead docker volume create:
docker volume create --name my-tmp
docker volume create --name my-opt
Then you can mount those volumes into any container you want (when you run those containers, not when they are already running):
docker run -d -P \
-v my-tmp:/tmp \
-v my-opt:/opt \
--name mycontainer myimage
I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker run -v /somedir ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is create a new image from container’s changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a8f89adeead ubuntu:14.04 "/bin/bash" About a minute ago Exited (0) About a minute ago agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you can use the commands below:
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Replace its contents with something like this (you can copy the proper contents from another container with the right settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
"/mnt": {
"Source": "/home/<user-name>",
"Destination": "/mnt",
"RW": true,
"Name": "",
"Driver": "",
"Type": "bind",
"Propagation": "rprivate",
"Spec": {
"Type": "bind",
"Source": "/home/<user-name>",
"Target": "/mnt"
},
"SkipMountpointCreation": false
}
}
Restart the docker service: service docker restart
This works for me with Ubuntu 18.04.1 and Docker 18.09.0
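The hand-edit above can also be scripted. Here is a hedged sketch that injects the same MountPoints entry with a small embedded Python snippet; it is demonstrated on a throwaway file, whereas on a real host the target would be the container's config.v2.json and you would stop the Docker service first. The /home/user path is illustrative:

```shell
# Inject a bind-mount entry into a config.v2.json-style file
# programmatically instead of editing it by hand.
# Demonstrated on a temp file; real path would be
# /var/lib/docker/containers/<id>/config.v2.json (stop dockerd first).
cfg=$(mktemp)
echo '{"MountPoints":{}}' > "$cfg"

python3 - "$cfg" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

# Same shape as the MountPoints entry shown above; source path illustrative.
cfg["MountPoints"]["/mnt"] = {
    "Source": "/home/user", "Destination": "/mnt", "RW": True,
    "Name": "", "Driver": "", "Type": "bind", "Propagation": "rprivate",
    "Spec": {"Type": "bind", "Source": "/home/user", "Target": "/mnt"},
    "SkipMountpointCreation": False,
}

with open(path, "w") as f:
    json.dump(cfg, f)
EOF
```

Scripting the edit avoids the JSON syntax errors that hand-editing this one-line file easily produces.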
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the option to mount a volume is only available in the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way you can work around this though so you won't have to reinstall the applications you've already set up on your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note on using Docker Windows containers, after I had to search for a solution to this problem for a long time!
Conditions:
Windows 10
Docker Desktop (latest version)
using Docker Windows Container for image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
After this, the problem was solved for me on Docker Windows containers.
My answer will be a little different. You can stop your container, add the volume, and restart it. Here's how:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, append the existing volume in a startup script, and restart from the image. If the already existing partitions keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
eg (solr image).
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Script with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to the already mounted drive:
ln -s Source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders into a directory on your local file system with: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
SRC_PATH is on container
DEST_PATH is on localhost
Then run docker-compose down, attach a volume to the same DEST_PATH, and bring the containers back up with docker-compose up -d
Add the volume as follows in docker-compose.yml:
volumes:
- DEST_PATH:SRC_PATH
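A minimal docker-compose.yml sketch of that mapping; the service name, image, and paths are placeholders, not from the original setup. Note the order is host path first, container path second:

```yaml
version: "3"
services:
  app:
    image: myimage            # placeholder image name
    volumes:
      # host path (DEST_PATH above) : container path (SRC_PATH above)
      - ./data:/var/www/html
```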
TL;DR My docker save/export isn't working and I don't know why.
I'm using boot2docker for Mac.
I've created a WordPress installation proof of concept, and am using BusyBox as both the MySQL data container and the main file system container. I created these containers using:
> docker run -v /var/lib/mysql --name=wp_datastore -d busybox
> docker run -v /var/www/html --name=http_root -d busybox
Running docker ps -a shows two containers, both based on busybox:latest. So far so good. Then I create the Wordpress and MySQL containers, pointing to their respective data containers:
>docker run \
--name mysql_db \
-e MYSQL_ROOT_PASSWORD=somepassword \
--volumes-from wp_datastore \
-d mysql
>docker run \
--name=wp_site \
--link=mysql_db:mysql \
-p 80:80 \
--volumes-from http_root \
-d wordpress
I go to my url (boot2docker ip) and there's a brand new Wordpress application. I go ahead and set up the Wordpress site by adding a theme and some images. I then docker inspect http_root and sure enough the filesystem changes are all there.
I then commit the changed containers:
>docker commit http_root evilnode/http_root:dev
>docker commit wp_datastore evilnode/wp_datastore:dev
I verify that my new images are there. Then I save the images:
> docker save -o ~/tmp/http_root.tar evilnode/http_root:dev
> docker save -o ~/tmp/wp_datastore.tar evilnode/wp_datastore:dev
I verify that the tar files are there as well. So far, so good.
Here is where I get a bit confused. I'm not entirely sure if I need to, but I also export the containers:
> docker export http_root > ~/tmp/http_root_snapshot.tar
> docker export wp_datastore > ~/tmp/wp_datastore_snapshot.tar
So I now have 4 tar files:
http_root.tar (saved image)
wp_datastore.tar (saved image)
http_root_snapshot.tar (exported container)
wp_datastore_snapshot.tar (exported container)
I SCP these tar files to another machine, then proceed to build as follows:
>docker load -i ~/tmp/wp_datastore.tar
>docker load -i ~/tmp/http_root.tar
The images evilnode/wp_datastore:dev and evilnode/http_root:dev are loaded.
>docker run -v /var/lib/mysql --name=wp_datastore -d evilnode/wp_datastore:dev
>docker run -v /var/www/html --name=http_root -d evilnode/http_root:dev
If I understand correctly, containers were just created based on my images.
Sure enough, the containers are there. However, if I docker inspect http_root, and go to the file location aliased by /var/www/html, the directory is completely empty. OK...
So then I think I need to import into the new containers since images don't contain file system changes. I do this:
>cat http_root_snapshot.tar | docker import - http_root
I understand this to mean that I am importing a file system delta from one container into another. However, when I go back to the location aliased by /var/www/html, I see the same empty directory.
How do I export the changes from these containers?
Volumes are not exported with the new image. The proper way to manage data in Docker is to use a data container and a command like docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata, or docker cp, to back up data and transfer it around. https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
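Applied to the containers in the question, the backup/restore pair would look roughly like this; the names follow the question, and the guard only exists so the script is harmless on machines without a running Docker daemon:

```shell
# Sketch: back up the http_root data container's volume to a tar file on
# the host, then restore it into a fresh data container on the target.
# Assumes a data container named "http_root" exposing /var/www/html.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # Backup: a throwaway ubuntu container mounts the volume and tars it
  # out to the current host directory.
  docker run --rm --volumes-from http_root -v "$(pwd)":/backup ubuntu \
    tar cvf /backup/http_root_data.tar /var/www/html
  # Restore on the target host, after recreating the data container there:
  # docker run --rm --volumes-from http_root -v "$(pwd)":/backup ubuntu \
  #   tar xvf /backup/http_root_data.tar -C /
else
  echo "docker daemon not available; skipping"
fi
status=ok
```

The tar file, unlike the saved image, actually carries the volume contents, which is why it transfers the WordPress uploads and theme changes that docker save/export drops.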