I downloaded the official Node-RED container image.
I noticed that the file 'settings.js' is missing inside it.
I tried to insert it manually inside the container, but it is not read when Node-RED starts. I was wondering whether there is a way to insert it, or alternatively another way to set the credentials for accessing the Node-RED admin page.
I pulled nodered/node-red-docker:0.18.4-v8.
Usually the settings.js file lives at .node-red/settings.js, but not in this case. This container has the path /usr/src/node-red/, and when I enter it with docker exec -it container_name bash, I am inside the node-red directory. I tried to put settings.js in this path, but it does not work.
You should not change the copy of settings.js in /usr/src/node-red; this is the default and should be left alone. Also, editing this file after starting the container will not work, as it is copied to the userDir the first time Node-RED is started.
If you want to include your own version, you should mount it into the /data directory, as this is the userDir for the system when running.
You can use the -v option of docker run to mount a local copy of the file into the container:
docker run -v /path/to/settings.js:/data/settings.js ...
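For example, a fuller command might look like the following; the port mapping and container name are only illustrative and should be adapted to your setup:
docker run -it -p 1880:1880 -v /path/to/settings.js:/data/settings.js --name mynodered nodered/node-red-docker:0.18.4-v8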
I have a Docker image, and I am running it now (ending up in a bash shell).
When I do, there is a file structure inside the container.
However, this is not a file structure mapped (with -v) from outside the container; these files and folders exist only inside the container.
My question is: since it is bothersome to open each file with vi and navigate from the terminal, is there a way to open VS Code on these files?
Be aware that these files do not exist outside the container.
I found out how to do it from this link.
However, I used the "Attach to Running Container" command.
I rarely do that, but when I have to, I usually mount an empty volume into the container, then exec into the container and copy the folder I need into that empty volume, which is then replicated on my host machine. From my host machine I then open it in VS Code.
However, if you have sensitive information in that container, please be careful not to expose something by accident.
So the steps are:
Create an empty volume (docker-compose example).
Note: do not overwrite the folder/file you want to extract; /containerpath should be a path that does not exist in the container before you create it.
volumes:
  - ./hostpath:/containerpath
Find the container ID so that you can exec into it:
docker ps
Exec into the container:
docker exec -it <container_id> /bin/sh
Copy the file/folder to that empty volume:
cp -r folder /containerpath
Exit the container and look at your files in the ./hostpath folder (a complete docker-compose sketch is shown below).
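Putting it together, a minimal docker-compose.yml sketch might look like this; the service name app and the image my-image are illustrative placeholders:
version: "3"
services:
  app:
    image: my-image
    volumes:
      - ./hostpath:/containerpath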
What I want to accomplish
I want to use the settings written in the .vimrc on the host side from within Docker.
What I did
I put the .vimrc file in the /home/akihiro directory on the host side.
When using the docker run command, I mounted the host-side /home/akihiro directory and ran a Python file in Docker with Vim.
akihiro#akihiro-thinkpad-x1-carbon-5th:~$ docker run --rm -it -v /home/akihiro:/home --name test cnn_study:latest
As a result, the settings written in the .vimrc file did not work.
Next, I started a new container without mounting.
Created /home/akihiro directory in the container.
I left the container.
I copied only the /home/akihiro/.vimrc file on the host side into the container, and re-entered the container.
docker cp ./.vimrc 52b28f1ffea8:/home/akihiro
Opened a Python file with Vim.
As a result, the settings written in the .vimrc file did not work.
What you are doing is mapping the complete /home or /home/akihiro directory onto the container.
You can't do that, for two reasons:
There are more files in that directory. What do you expect to happen to them?
It's not as if an OR is performed on the files in both folders.
Mapping the volume completely replaces the internal folder.
My gut feeling says that mapping the directory comes too late in the process.
The directory is already there in the container and therefore cannot be overwritten.
(At least, that's how I understand the strange effects I sometimes get with mapping.)
What you should do is map only the file:
$ docker run --rm -it -v /home/akihiro/.vimrc:/home/akihiro/.vimrc --name test cnn_study:latest
I do the same with .bashrc (to set the prompt to the name of the container).
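A similar sketch for .bashrc, under the assumption that the container runs as root so its home directory is /root (adjust the target path for a different user):
docker run --rm -it -v $HOME/.bashrc:/root/.bashrc --name test cnn_study:latest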
I created my own Dockerfile; during the build I copied my log4j.xml into /opt/wildfly/log.
Now I need to create the volume /mnt/data/logs/application:/opt/wildfly/log.
I run the command
sudo docker run --name=myapp -v /mnt/data/logs/application:/opt/wildfly/log -d -i -t application
But when I look inside the docker container, the folder /opt/wildfly/log is empty. This folder should contain log4j.xml.
Thank you.
Maybe you should move it into another directory.
For example, move log4j.xml to /opt/wildfly/ and set the logging path to /opt/wildfly/log.
When you run the container, log4j.xml will not disappear.
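A minimal Dockerfile sketch of that idea; the base image jboss/wildfly is only an illustration, use whatever base your image is actually built from:
# illustrative base image
FROM jboss/wildfly
# keep the config outside the directory that will be bind-mounted, so the mount cannot hide it
COPY log4j.xml /opt/wildfly/log4j.xml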
When you mount the volume, the folder from your host "overrides" the mounted folder within the container.
Thus, there are some options you can choose from:
copy the log4j.xml into your local /mnt/data/logs/application folder and run the container as you did.
remove the -v /mnt/data/logs/application:/opt/wildfly/log and use the original log4j.xml that you added during the image build.
Please note that you can also mount only the file if you like (rather than the entire folder): -v /mnt/data/logs/application/log4j.xml:/opt/wildfly/log/log4j.xml. But it won't change the behavior: the file from your host will be mounted into the container, not the other way around.
I successfully installed Drupal 7 with Docker.
Using docker4drupal, my question now that I start editing my Drupal site is: where are the folders containing Drupal?
Let's say I installed a new theme and want to swap the images for the banner. How do I access the Drupal folder containing the images, or, to be more precise: where does Docker store them?
My docker-compose line is:
- codebase:/var/www/html
I know that installing it using:
./:/var/www/html
would install Drupal in the same directory my docker-compose.yml is in, but for some reason it doesn't work and still doesn't show me where the files are.
Any help is welcome!
If you are not using volumes to mount your existing code, the code resides inside the docker container. You can access it only by getting inside the container using docker exec. If you are using the default docker-compose.yml that came with the repo, then the name of the container will be "docker4drupal_nginx_1" (since nginx is the default).
Run this code to get inside the container:
docker exec -it docker4drupal_nginx_1 /bin/bash
exec allows you to execute commands inside the container.
-it allows you to start an interactive terminal
/bin/bash allows you to start the bash terminal inside the container
Once you are inside the container, run ls and you will see the Drupal files, including "web".
MORE USEFUL
However, this is not a convenient way to work if you want to edit the files, e.g. with an editor. Instead, mount a directory from the host machine. First make a new directory named "codebase" where your docker-compose.yml file is.
Then, update the docker-compose.yml so that:
- codebase:/var/www/html
becomes
- ./codebase:/var/www/html
Do this in both the php and nginx service definitions (a sketch is shown below). Of course, you should do this after running docker-compose down with your previous setup. Then restart the containers using docker-compose up -d.
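A sketch of the relevant parts of docker-compose.yml after the change; the service names php and nginx follow the docker4drupal layout, and all other settings are omitted and may differ between versions:
services:
  php:
    volumes:
      - ./codebase:/var/www/html
  nginx:
    volumes:
      - ./codebase:/var/www/html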
Then, you will notice that the Drupal files are present in the codebase directory.
If you look at the bottom of the yml file, you will see that "codebase" is defined as a Docker volume. This means the storage is managed by Docker, and it will be stored somewhere under /var/lib/docker/ along with the container itself.
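If you stay with the named volume, you can ask Docker where it lives. The volume name below is a guess based on docker-compose's usual <project>_<volume> naming and may differ on your machine:
docker volume inspect docker4drupal_codebase
# the "Mountpoint" field in the output is the path on the host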
Hope this helps.
I am new to Docker. I want to run tinyproxy within Docker. Here is the image I used to create a Docker container: https://hub.docker.com/r/dtgilles/tinyproxy/.
For some unknown reason, when I mount the log directory to the host machine, I can see the .conf file, but I can't see the log file, and the proxy server doesn't seem to work.
Here is the command I tried:
docker run -v $(pwd):/logs -p 8888:8888 -d --name tiny dtgilles/tinyproxy
If I don't mount the file, then every time I run a container I need to change its config file inside the container.
Does anyone have any ideas about how to save the changes made in a container?
Question
How do you save a change made in a container?
Answer
The command docker commit creates a new image from a container's changes (from the man page).
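For example, a quick sketch; the container ID and the new image name are placeholders:
docker ps                                      # find the ID of the running container
docker commit <container_id> tinyproxy:custom  # snapshot its filesystem as a new image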
Best Practice
You actually should not do this just to save a configuration file. A Docker image is supposed to be immutable; this makes images easier to share and lets you customize them through mounted volumes.
What you should do is create the configuration file on the host and share it with the container through parameters to docker run. This is done by using the -v|--volume option. Check the man page; you'll then be able to share files (or directories) between the host and containers, allowing the data to persist across different runs.
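Applied to tinyproxy, a sketch might look like the following; the config path inside the container is an assumption, so check the image's documentation for the exact location it expects:
docker run -d --name tiny \
  -p 8888:8888 \
  -v $(pwd)/tinyproxy.conf:/etc/tinyproxy.conf \
  -v $(pwd)/logs:/logs \
  dtgilles/tinyproxy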