Enabling Projects Functionality in a Docker Image of Node-RED - docker

I have forked both the Node-RED git repo and the Node-RED Docker image repo and am trying to modify the settings.js file to enable the Projects functionality. The settings file that ends up in the Docker container does not seem to be my modified one. My aim is to use the Docker image in a Cloud Foundry environment.
https://github.com/andrewcgaitskellhs2/node-red-docker.git
https://github.com/andrewcgaitskellhs2/node-red.git
I am also trying to install git and ssh-keygen at Docker build time so that Projects can function. I have added these to the package.json files for both the Node-RED app and image git repos.
If I need to start from scratch, please let me know what steps I need to take.
I would welcome guidance on this.
Thank you.

You should not be trying to install ssh-keygen and git via the package.json file.
You need to use the Node-RED Dockerfile as the base for building a new Docker image; in the Dockerfile, use apt-get to install them and include an edited version of settings.js. Something like this:
FROM nodered/node-red-docker
RUN apt-get update && apt-get install -y git ssh-client
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]
Where settings.js is your edited version, kept in the same directory as the Dockerfile.
Edited following @knolleary's comment:
FROM nodered/node-red-docker
COPY settings.js /data
ENV FLOWS=flows.json
ENV NODE_PATH=/usr/src/node-red/node_modules:/data/node_modules
CMD ["npm", "start", "--", "--userDir", "/data"]

It is not necessary to change the image. For persistence, you mount a host directory into the container at /data, like this:
docker run --rm -d -v /my/node/red/data:/data -p 1880:1880 nodered/node-red
A settings.js file will get created in your data directory (here /my/node/red/data). Edit this file to enable projects, then restart the container.
It is also possible to place a settings.js file with projects enabled into the directory that you mount to /data inside the container before the first start.
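For reference, the fragment of settings.js that turns projects on sits under editorTheme; per the Node-RED documentation it looks like this:
// inside settings.js
editorTheme: {
    projects: {
        // enable the Projects feature
        enabled: true
    }
},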

Related

How to use poetry file to build docker image?

I used an online tutorial (replit.com) to build a small Flask project.
https://github.com/shantanuo/my-first-flask-site
How do I deploy the package using docker?
If you want to create and push an image, you first have to sign up to Docker Hub and create a repo, unless you have done so already or can access a different container registry. I'll assume you're using the public Docker Hub and that your user is called shantanuo.
Creating the image locally
The Dockerfile just needs to copy the code and artifacts into the image, install the missing dependencies, and define an entrypoint that works. I'll use a slim Python 3.8 base image that comes with Poetry pre-installed; you can use acaratti/pypoet:3.8-arm as the base image if you want to support ARM chips as well.
FROM acaratti/pypoet:3.8
COPY static static
COPY templates templates
COPY main.py poetry.lock pyproject.toml ./
RUN poetry install
# if "python main.py" is how you want to run your server
ENTRYPOINT [ "poetry", "run", "python", "main.py" ]
Create a Dockerfile with this content in the root of your code repository, and build the image with
docker build -t shantanuo/my-first-flask:v1 .
If you plan to create multiple versions of the image, it's a good idea to tag them before pushing a major change. I've used a generic v1 to start off here.
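If you later want the same build under another tag, docker tag adds one without rebuilding; for example, to also mark v1 as latest:
docker tag shantanuo/my-first-flask:v1 shantanuo/my-first-flask:latest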
Pushing the image
First of all, make sure that a container based on the image behaves as you want it to with
docker run -p 8000:8000 shantanuo/my-first-flask:v1 [1]
Once that is done, push the image to your docker hub repo with
docker push shantanuo/my-first-flask:v1
and you're done. Docker will reject the push unless you are logged in, so run docker login (it prompts for your username and password) first; afterwards you can run a container from the image on any other machine that has Docker installed.
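The full sequence, including running the image elsewhere, looks roughly like this:
docker login                                 # prompts for your Docker Hub credentials
docker push shantanuo/my-first-flask:v1
# on any other machine that has docker installed:
docker pull shantanuo/my-first-flask:v1
docker run -p 8000:8000 shantanuo/my-first-flask:v1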
[1] When running a server from a container, remember to publish the port the server listens on. Also, make sure the server binds to 0.0.0.0 rather than localhost (for Flask, app.run(host="0.0.0.0", port=8000)), or it will not be reachable from outside the container.
I use something like this in my Dockerfile:
FROM python:3.7-slim AS base
RUN pip install poetry==1.1.4
COPY *.toml *.lock /
RUN poetry config virtualenvs.create false \
    && poetry install \
    && poetry config virtualenvs.create true
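That snippet only builds a dependency layer; to make the image runnable you still need the application code and a command. A minimal continuation, assuming main.py is the entrypoint as in the question (poetry install may also need --no-root here, since the package source isn't copied in yet):
FROM base
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]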

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine, ./dist, to a folder in the container, /app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      # expecting files changed or added in the container's /app/dist
      # to be reflected in the host's ./dist folder
      - ./dist:/app/dist
Inside the Dockerfile, I build some files with an NPM script that I want to be available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether the file ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
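For reference, the long-syntax form of that mount is just a different notation for the same bind, so it behaves identically:
volumes:
  - type: bind
    source: ./dist
    target: /app/dist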
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app and making the main container command a cp:
sudo docker build -t myimage .
sudo docker run --rm \
    -v "$PWD/dist:/out" \
    myimage \
    cp -a /app/dist/. /out    # trailing /. copies the directory's contents, not the directory itself
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that all of these sequences are more complex than just installing a local Node via a package manager, and they require administrator permissions (the same technique can be used to overwrite any host file, including the /etc/shadow file with its encrypted passwords).
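As an aside, if your Docker version has BuildKit, docker build --output can write a stage's filesystem straight to the host without any temporary container. A sketch under that assumption, reusing the example stage from the question's Dockerfile:
# appended to the Dockerfile: a stage holding only the artifacts
FROM scratch AS dist
COPY --from=example /app/dist /
Then build with:
DOCKER_BUILDKIT=1 docker build --target dist --output ./dist .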

Why is git clone failing when I build an image from a dockerfile?

FROM ansible/ansible:ubuntu1604
MAINTAINER myname
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
RUN git clone http://github.com/othertest.git /tmp/othertest
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin:bin
ENV PYTHONPATH /tmp/ansible/lib:$PYTHON_PATH
ADD inventory /etc/ansible/hosts
WORKDIR /tmp/
EXPOSE 8888
When I build from this Dockerfile, I get "Cloning into /tmp/ansible" (and likewise for othertest) in red text, which I assumed was an error. When I then run the container and look around, I see that every step from the Dockerfile built correctly except the git repositories, which are missing.
I can't figure out what I'm doing wrong; I'm assuming it's a simple mistake.
Building the Dockerfile:
sudo docker build --no-cache -f Dockerfile .
Running the container:
sudo docker run -i -t de32490234 /bin/bash
The short answer:
Put your files anywhere other than in /tmp and things should work fine.
The longer answer:
You're basing your image on the ansible/ansible:ubuntu1604 image. If you inspect this image via docker inspect ansible/ansible:ubuntu1604 or look at the Dockerfile from which it was built, you will find that it contains a number of volume mounts. The relevant line from the Dockerfile is:
VOLUME /sys/fs/cgroup /run/lock /run /tmp
That means all of those directories are volume mount points, so any data placed into them during the build will not be committed as part of the resulting image.
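You can see this behavior with a two-line experiment (this assumes the classic builder that was current at the time; BuildKit later changed it so the file survives):
FROM ubuntu:16.04
VOLUME /tmp
# the command runs fine during the build, but /tmp is a volume,
# so the change is discarded and /tmp/hello is absent at run time
RUN touch /tmp/hello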
Looking at your Dockerfile, I have two comments unrelated to the above:
You're explicitly setting the PATH environment variable, but you're neglecting to include /bin (the trailing :bin is missing its leading slash), which will cause all kinds of problems, such as:
$ docker run -it --rm de32490234 bash
docker: Error response from daemon: oci runtime error: exec: "bash": executable file not found in $PATH.
You're using WORKDIR twice, but the first time (WORKDIR /tmp/ansible) you're not doing anything that depends on the current directory (you're just setting some environment variables and copying a file into /etc/ansible).
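Putting those points together, a corrected version of the Dockerfile might look like this; /opt is my choice of location, not something the question mandates:
FROM ansible/ansible:ubuntu1604
MAINTAINER myname
RUN git clone http://github.com/ansible/ansible.git /opt/ansible
RUN git clone http://github.com/othertest.git /opt/othertest
ENV PATH /opt/ansible/bin:/sbin:/usr/sbin:/usr/bin:/bin
ENV PYTHONPATH /opt/ansible/lib:$PYTHONPATH
ADD inventory /etc/ansible/hosts
WORKDIR /opt
EXPOSE 8888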

How to conditionally mount host-container volume in dev environment and ADD at build time?

Suppose I check out my code from GitHub into ~/repos/shinycode:
$> cd ~/repos/shinycode
$> ls
Dockerfile
www/index.html
$> cat Dockerfile
FROM nginx
ADD www /usr/share/nginx/html
For deployment, everything works fine: check out from GitHub and run docker build.
In the dev environment, however, I want to run the container from the same Dockerfile but also live-edit files in the www directory, as I would if I supplied a -v $PWD/www:/usr/share/nginx/html option to docker run.
What is the best practice in this case? Should I have a separate Dockerfile for dev without the final ADD command? Am I going about this in the entirely wrong way?
Thanks
You can use the same Dockerfile and mount any external volume over the image's /usr/share/nginx/html folder. The mount takes precedence in the filesystem, so you won't see anything from the image at that location.
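In practice that means one image serves both cases; a sketch, assuming the image is built as shinycode:
docker build -t shinycode .
# dev: the bind mount shadows the ADDed copy, so edits to ./www show up live
docker run -d -p 8080:80 -v "$PWD/www:/usr/share/nginx/html" shinycode
# deploy: same image, no mount, serving the baked-in copy
docker run -d -p 80:80 shinycode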

Should files from a base Docker image be present in a derived image?

I'm creating a Dockerfile that uses a base image: dockerfile/rabbitmq.
In the Dockerfile for rabbitmq there's a line to install a script into the image:
ADD bin/rabbitmq-start /usr/local/bin/
In my Dockerfile I don't have this line. I have my own ADD lines.
When I run the image, all the rabbitmq binaries and config are there, along with my stuff, but there's no rabbitmq-start script anywhere.
Why isn't it present in my image? (If I run the base image dockerfile/rabbitmq, the file is there, of course.) Are ADDs not "inherited" by derived images?
Seems to work for me:
I cloned dockerfile/ubuntu and built that locally,
I cloned dockerfile/rabbitmq and built that locally,
I cloned your repository and built that locally.
Booting a shell in your image:
docker run -it --rm gzoller/world bash
I see both the rabbitmq-start script added by the rabbitmq image as well as the start script installed in your image:
[ root@d0044b91278e:/data ]$ ls /usr/local/bin/
rabbitmq-start start
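If you want to confirm what a derived image inherits, docker history lists every layer, including the base image's ADD steps:
docker history gzoller/world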
