I have to install a lot of missing node-red nodes to the container. Keeping the (named) container and running it with docker start works fine.
Now I want to keep the installed nodes in a separate external directory. If I mount /data to an external directory it basically works, but doesn't help, since the nodes are installed in ~/node_modules. If I try to mount ~/node_modules to an external directory, Node-RED can't start.
So what can I do to keep the nodes I installed independent of the executed container?
EDIT:
Meanwhile I did run the image as follows:
#!/bin/bash
sudo -E docker run -it --rm -p 1893:1880 -p 11893:11880 \
  -e TZ=Europe/Berlin -e NPM_CONFIG_PREFIX=/data/node_modules/ \
  -e NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules:/data/node_modules/lib/node_modules \
  --log-driver none --mount type=bind,source="$(pwd)"/data,target=/data \
  --name myNodeRed nodered/node-red
but the additionally installed nodes, which are in the directory /data/node_modules/lib/node_modules, are still not visible.
EDIT 2:
Meanwhile I tried keeping the container. It became obvious that nodes installed using npm install -g are completely ignored.
The default user for the Node-RED instance inside the container is not root (as is common), so you need to make sure any volume you mount onto the /data location is writable by that user. You can do this by passing a user id to the container so it matches the external user that has write permission to the mount point:
docker run -it --rm -v $(pwd)/data:/data -u $(id -u) -e TZ=Europe/Berlin \
  -p 1893:1880 -p 11893:11880 --log-driver none \
  --name myNodeRed nodered/node-red
Node-RED nodes should not be installed with the -g option. You should use the built-in Palette Manager, or, if you really need to use the command line, run npm install <node-name> in the /data directory inside the container. (But you will need to restart the container for the newly installed nodes to be picked up, which is again why you should use the Palette Manager.)
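For the command-line route, a sketch of what that could look like against the container from the question (node-red-dashboard is just an example node, not one the asker mentioned):

```shell
# install a node into /data inside the running container
# (-w sets the working directory for the exec'd command)
docker exec -it -w /data myNodeRed npm install node-red-dashboard

# restart so Node-RED picks up the newly installed node
docker restart myNodeRed
```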
Related
I used to have a Docker volume for mariadb, which contained my database. As part of migration from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1- Copy the content of the named docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2- Run Podman run:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing host mounted volume to /opt/volumes/mydb_vol/_data and container volume to /var/lib/mysql simultaneously and one at a time. None of them worked.
As a matter of fact, when I podman exec -ti container_digest bash into the resulting container, I can see that the table files have been mounted successfully into the specified container directories, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a docker volume, but instead mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb
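The copy step before that run could look like this sketch. The _data subdirectory is the standard on-disk layout of Docker named volumes; uid 999 for the container's mysql user is an assumption about the official mariadb image, so check it in your setup:

```shell
# Docker named volumes keep their files under a _data subdirectory;
# copy that content to the directory the bind mount will use
sudo mkdir -p /opt/volumes/mydb_vol/data
sudo cp -a /var/lib/docker/volumes/mydb_vol/_data/. /opt/volumes/mydb_vol/data/

# make sure the container's mysql user can read and write it
# (uid 999 is an assumption about the official mariadb image)
sudo chown -R 999:999 /opt/volumes/mydb_vol/data
```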
I have a setup with Docker-in-Docker and am trying to mount folders.
Let's say I have some folders that I wish to share with the parent. On the host, I created a file called foo in /tmp/dind. The host starts container 1, which starts container 2. This is the result I want to have:
Host       | Container 1 | Container 2
/tmp/dind  | /tmp/dind2  | /tmp/dind3
     <--------->    <--------->
Instead, I get
Host       | Container 1 | Container 2
/tmp/dind  | /tmp/dind2  | /tmp/dind3
     <--------->
     <------------------------>
Code here:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind2:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
This outputs nothing, while the next command, where I changed the mounted volume, gives foo as the result:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
The question is, what do I need to do in order to use Container 1 path and not host? Or do I misunderstand something about docker here?
For all that you say “Docker-in-Docker” and “dind”, this setup isn’t actually Docker-in-Docker: your container1 is giving instructions to the host’s Docker daemon that affect container2.
Host                  Container1
 (Docker)  <--------- docker CLI
    |
    \-----> Container2
(NB: this is generally the recommended path for CI-type setups. “Docker-in-Docker” generally means container1 is running its own, separate, Docker daemon, which tends to not be recommended.)
Since container1 is giving instructions to the host’s Docker, and the host’s Docker is launching container2, any docker run -v paths are always the host’s paths. Unless you know that some specific directory has already been mounted into your container, it’s hard to share files with “sub-containers”.
One way to get around this is to assert that there is a shared path of some sort:
docker run \
  -v $PWD/exchange:/exchange \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e EXCHANGE_PATH=$PWD/exchange \
  --name container1 \
  ...
# from within container1
mkdir $EXCHANGE_PATH/container2
echo hello world > $EXCHANGE_PATH/container2/file.txt
docker run \
  -v $EXCHANGE_PATH/container2:/data \
  --name container2 \
  ...
When I’ve done this in the past (for a test setup that wanted to launch helper containers) I’ve used a painstaking docker create, docker cp, docker start, docker cp, docker rm sequence. That’s extremely manual, but it has the advantage that the “local” side of a docker cp is always the current filesystem context, even if you’re talking to the host’s Docker daemon from within a container.
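That sequence could look roughly like this (image, container name, and file paths are illustrative, not from the original setup):

```shell
# create the helper container without starting it
docker create --name helper ubuntu \
  sh -c 'tr a-z A-Z < /tmp/in.txt > /tmp/out.txt'

# the "local" side of docker cp is container1's own filesystem,
# so this copies a file from container1 into the helper
docker cp ./in.txt helper:/tmp/in.txt

docker start -a helper               # run and wait for it to finish

# copy the result back out, then clean up
docker cp helper:/tmp/out.txt ./out.txt
docker rm helper
```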
It does not matter whether container 2 binds the host path, because changes to files in container 1 directly affect the host path anyway. They all work on the same files.
So your setup is correct and will function the same as if the mounts were chained the way you described.
Update
If you want to make sure that the process does not modify the host files, you could do the following:
Build a custom Docker image that copies all data from folder a to folder b, and execute the script on folder b. Then mount the files with ./:/a. This way you keep the flexibility of choosing which files you bind into the container, without letting the container modify the host files.
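The copy-then-run idea can be sketched as a small entrypoint helper (the run_on_copy name and the paths are illustrative, not from the original answer):

```shell
# Sketch: copy the bind-mounted input tree into a scratch directory
# and run the workload only on the copy, so the originals (and thus
# the host files behind the bind mount) stay unmodified.
run_on_copy() {
  src="$1"; work="$2"; shift 2
  mkdir -p "$work"
  cp -R "$src/." "$work/"        # copy folder a into folder b
  ( cd "$work" && "$@" )         # run the workload on the copy
}
```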
I hope this answers your question :)
I am reading through this bit of the Jenkins Docker README and there seems to be a section that contradicts itself from my current understanding.
https://github.com/jenkinsci/docker/blob/master/README.md
It seems to me that it says NOT to use a bind mount, and then says that using a bind mount is highly recommended?
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission
issues (the user used inside the container might not have rights to
the folder on the host machine). If you really need to bind mount
jenkins_home, ensure that the directory on the host is accessible by
the jenkins user inside the container (jenkins user - uid 1000) or use
-u some_other_user parameter with docker run.
docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

This will run Jenkins in detached mode, with port forwarding and a volume added. You can access the logs with the command docker logs CONTAINER_ID in order to check the first login token; the ID of the container is returned in the output of the command above.
Backing up data
If you bind mount in a volume - you can simply back up
that directory (which is jenkins_home) at any time.
This is highly recommended. Treat the jenkins_home directory as you would a database - in Docker you would generally put a database on
a volume.
Do you use bind mounts? Would you recommend them? Why or why not? The documentation seems to be ambiguous.
As commented, the syntax used is for a volume:
docker run -d -v jenkins_home:/var/jenkins_home --name jenkins ...
That defines a Docker volume named jenkins_home, which will be created in:
/var/lib/docker/volumes/jenkins_home.
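You can confirm where the volume lives by inspecting it (on a default Docker installation):

```shell
# show the volume's metadata, including its on-disk location
docker volume inspect jenkins_home
# the Mountpoint field points at /var/lib/docker/volumes/jenkins_home/_data
```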
The idea being that you can easily backup said volume:
$ mkdir ~/backup
$ docker run --rm --volumes-from jenkins -v ~/backup:/backup ubuntu bash -c "cd /var/jenkins_home && tar cvf /backup/jenkins_home.tar ."
And reload it to another Docker instance.
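The reload would be the reverse tar step, e.g. (a sketch, assuming the new container is named jenkins2 and has an empty jenkins_home volume attached):

```shell
# restore the archive into the new container's volume
docker run --rm --volumes-from jenkins2 -v ~/backup:/backup ubuntu \
  bash -c "cd /var/jenkins_home && tar xvf /backup/jenkins_home.tar"
```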
This differs from bind mounts, which do involve building a new Docker image, in order to be able to mount a local folder owned by your local user (instead of the default 1000:1000 user defined in the official Jenkins image):
FROM jenkins/jenkins:lts-jdk11
USER root
ENV JENKINS_HOME /var/lib/jenkins
ENV COPY_REFERENCE_FILE_LOG=/var/lib/jenkins/copy_reference_file.log
RUN groupmod -g <yourGid> jenkins
RUN usermod -u <yourUid> jenkins
RUN mkdir "${JENKINS_HOME}"
RUN usermod -d "${JENKINS_HOME}" jenkins
RUN chown jenkins:jenkins "${JENKINS_HOME}"
VOLUME /var/lib/jenkins
USER jenkins
Note that you have to declare a new volume (here /var/lib/jenkins), because, as seen in jenkinsci/docker issue 112, the official /var/jenkins_home path is already declared as a VOLUME in the official Jenkins image, and you cannot chown or chmod it.
The advantage of that approach would be to see the content of Jenkins home without having to use Docker.
You would run it with:
docker run -d -p 8080:8080 -p 50000:50000 \
--mount type=bind,source=/my/local/host/jenkins_home_dev1,target=/var/lib/jenkins \
--name myjenkins \
myjenkins:lts-jdk11-2.190.3
sleep 3
docker logs --follow --tail 10 myjenkins
Do I understand Docker correctly?
docker run -it --rm --name verdaccio -p 4873:4873 -d verdaccio/verdaccio
pulls verdaccio if it does not yet exist on my server and runs it on a specific port. -d detaches it, so I can leave the terminal and keep it running, right?
docker exec -it --user root verdaccio /bin/sh
lets me open a shell in the running container. However, whatever apk package I add would be lost if I rm the container and then run the image again, as would any edited file. So what's the use of this? Can I keep the changes in the image?
As I need to edit the config.yaml that is present at /verdaccio/conf/config.yaml (in the container), is my only option for keeping these changes to detach the data from the running instance? Is there another way?
V_PATH=/path/on/my/server/verdaccio
docker run -it --rm --name verdaccio -p 4873:4873 \
  -v $V_PATH/conf:/verdaccio/conf \
  -v $V_PATH/storage:/verdaccio/storage \
  -v $V_PATH/plugins:/verdaccio/plugins \
  verdaccio/verdaccio
However, this command throws:
fatal--- cannot open config file /verdaccio/conf/config.yaml: ENOENT: no such file or directory, open '/verdaccio/conf/config.yaml'
You can use docker commit to build a new image based on the container.
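A sketch of the docker commit route (the image tag is illustrative):

```shell
# snapshot the current container state into a new image
docker commit verdaccio my-verdaccio:with-changes

# later runs use the snapshot instead of the stock image
docker run --rm -d --name verdaccio -p 4873:4873 my-verdaccio:with-changes
```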
A better approach however is to use a Dockerfile that builds an image based on verdaccio/verdaccio with the necessary changes in it. This makes the process easily repeatable (for example if a new version of the base image comes out).
A further option is the use of volumes as you already mentioned.
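As for the ENOENT error in the question: the bind mount hides the image's built-in /verdaccio/conf, so the host directory must already contain a config.yaml before you start. One way to seed it is to copy the defaults out of a throwaway container first (a sketch; $V_PATH as in the question):

```shell
# copy the default config out of a container that is never started
mkdir -p "$V_PATH/conf"
docker create --name tmpverdaccio verdaccio/verdaccio
docker cp tmpverdaccio:/verdaccio/conf/. "$V_PATH/conf/"
docker rm tmpverdaccio
# the bind-mounted conf directory now contains config.yaml
```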
I want to know how I can allow a "child" (sibling) docker container to access some subdirectory of an already mounted volume. As an explanation, this is a simple setup:
I have the following Dockerfile, which just installs Docker in a Docker container:
FROM ubuntu
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com/ | sh
I have the following data directory on my host machine
/home/user/data/
data1.txt
subdir/
data2.txt
Build the parent image:
[host]$> docker build -t parent .
Then run the parent container:
[host]$> docker run --rm --name parent -it -v /home/user/data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock parent
Now I have a running container, and am "inside" the new container. Since I have the docker socket bound to the parent, I am able to run docker commands to create "child" containers, which are actually sibling containers. The data volume has been successfully mapped:
[parent]$> ls /data/
subdir data1.txt
Now I want to create a sibling container that can only see the subdir directory:
[parent]$> docker run --rm --name child -it -v /data/subdir/:/data/ ubuntu
This creates a sibling container, and I am successfully "inside" the container; however, the new data directory is empty. My assumption is that this happens because the path I tell it to use, /data/subdir/, is resolved by the host, where it doesn't exist, rather than inside the parent container that defined the volume.
[child]$> ls /data/
<nothing>
What can I do to allow this mapping to work, so that the child can create files in the subdirectory, and that the parent container can see and access these files? The child is not allowed to see data1.txt (or anything else above the subdirectory).
"Sibling" container is the correct term, there is no direct relationship between what you have labeled the "parent" and "child" containers, even though you ran the docker command in one of the containers.
The container with the docker socket mounted still controls the dockerd running on the host, so any paths sent to dockerd via the API will be in the host's scope.
There are docker commands where using the container's filesystem does change things: namely, when the docker utility itself accesses the local file system. docker build, docker cp, docker import, and docker export are examples where docker interacts with the local file system.
Mount
Use -v /home/user/data/subdir:/data for the second container
docker run --name parent_volume \
-it --rm -v /home/user/data:/data ubuntu
docker run --name child_volume \
-it --rm -v /home/user/data/subdir:/data ubuntu
The processes you run need to be careful about what writes to data mounted into multiple containers, so the data doesn't get clobbered.