docker compose with Symfony Sylius project on windows 11 - docker

I want to try a Symfony Sylius project, and my host OS is Windows 11.
My composer install ran perfectly.
My yarn install ran perfectly.
But when I run docker compose on the original Sylius source code (the preconfigured Docker setup), I get this error:
=> CACHED [myproject-php sylius_php_prod 11/21] RUN set -eux; composer install --prefer-dist --no-autoloader 0.0s
=> CACHED [myproject-php sylius_php_prod 12/21] COPY .env .env.prod .env.test .env.test_cached ./ 0.0s
=> CACHED [myproject-php sylius_php_prod 13/21] COPY bin bin/ 0.0s
=> [myproject-php sylius_php_prod 14/21] COPY config config/ 0.1s
=> [myproject-php sylius_php_prod 15/21] COPY public public/ 0.1s
=> [myproject-php sylius_php_prod 16/21] COPY src src/ 0.1s
=> [myproject-php sylius_php_prod 17/21] COPY templates templates/ 0.1s
=> [myproject-php sylius_php_prod 18/21] COPY translations translations/ 0.1s
=> ERROR [myproject-php sylius_php_prod 19/21] RUN set -eux; mkdir -p var/cache var/log; composer dump-a 12.7s
------
> [myproject-php sylius_php_prod 19/21] RUN set -eux; mkdir -p var/cache var/log; composer dump-autoload --classmap-authoritative; APP_SECRET='' composer run-script post-install-cmd; chmod +x bin/console; sync; bin/console sylius:install:assets --no-interaction; bin/console sylius:theme:assets:install public --no-interaction:
#0 0.238 + mkdir -p var/cache var/log
#0 0.238 + composer dump-autoload --classmap-authoritative
#0 0.423 Generating optimized autoload files (authoritative)
#0 3.436 Class Payum\Be2Bill\Tests\Be2billOffsiteGatewayFactoryTest located in ./vendor/payum/payum/src/Payum/Be2Bill/Tests/Be2BillOffsiteGatewayFactoryTest.php does not comply with psr-4 autoloading standard. Skipping.
#0 3.517 Generated optimized autoload files (authoritative) containing 11434 classes
#0 3.528 + APP_SECRET= composer run-script post-install-cmd
#0 3.732
#0 3.732 Run composer recipes at any time to see the status of your Symfony recipes.
#0 3.732
#0 3.750 Executing script cache:clear [OK]
#0 12.10 Executing script assets:install public
#0 12.53 [OK]
#0 12.53 + chmod +x bin/console
#0 12.58 + sync
#0 12.64 + bin/console sylius:install:assets --no-interaction
env: 'php\r': No such file or directory
------
failed to solve: executor failed running [/bin/sh -c set -eux; mkdir -p var/cache var/log; composer dump-autoload --classmap-authoritative; APP_SECRET='' composer run-script post-install-cmd; chmod +x bin/console; sync; bin/console sylius:install:assets --no-interaction; bin/console sylius:theme:assets:install public --no-interaction]: exit code: 127
`docker-compose` process finished with exit code 17
Docker correctly creates the var/cache and var/log directories on the host OS,
but it does not start my container.
Why is Docker telling me about a missing PHP file or directory?

This might be due to the bin/console file having the wrong line endings.
If you have git autocrlf configured, then git replaced LF with CRLF in all project files.
You can add a step to the Dockerfile to change it back to LF using dos2unix.
You can also change the endings to LF on Windows, but then you may not be able to execute the file on Windows.
To force LF regardless of the autocrlf value, add bin/console text eol=lf to a .gitattributes file in the project root directory.
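Both fixes can be sketched in a few shell commands; the file below is a stand-in for the real bin/console, and the .gitattributes line is the one from the answer above:

```shell
# Simulate what git autocrlf produces on Windows: a script with CRLF line endings
mkdir -p demo/bin
printf '#!/usr/bin/env php\r\n<?php echo "ok";\r\n' > demo/bin/console

# Fix 1: strip the CR bytes (what dos2unix does; GNU sed needs no extra package)
sed -i 's/\r$//' demo/bin/console

# Fix 2: pin LF in git so autocrlf never rewrites this file again
printf 'bin/console text eol=lf\n' >> demo/.gitattributes

# The shebang is now a clean LF line, so the kernel can locate the php interpreter
head -n1 demo/bin/console
```

With the CR gone, the shebang resolves to php rather than the nonexistent php\r, which matches the "No such file or directory" in the build log.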

Related

dockerfile cannot build: CONDA env create

Hi there, I'm new to Docker and Dockerfiles in general.
However, I need to create one in order to load an application on a server using WDL. With that said, there are a few important aspects of this Dockerfile:
it requires creating a Conda environment
in there I have to install Snakemake (through Mamba)
finally, I need to git clone a repository and follow the steps to generate an executable for the application, later invoked by Snakemake
Luckily, it seems most of the pieces are already on Docker Hub; correct me if I'm wrong based on the script (see below):
# getting ubuntu base image & anaconda3 loaded
2 FROM ubuntu:latest
3 FROM continuumio/anaconda3:2021.05
4 FROM condaforge/mambaforge:latest
5 FROM snakemake/snakemake:stable
6
7 FROM node:alpine
8 RUN apk add --no-cache git
9 RUN apk add --no-cache openssh
10
11 MAINTAINER Name <email>
12
13 WORKDIR /home/xxx/Desktop/Pangenie
14
15 ## ACTUAL PanGenIe INSTALLATION
16 RUN git clone https://github.com/eblerjana/pangenie.git /home/xxx/Desktop/Pangenie
17 # create the environment
18 RUN conda env create -f environment.yml
19 # build the executable
20 RUN conda activate pangenie
21 RUN mkdir build; cd build; cmake .. ; make
First, I think that also loading Mamba and Snakemake would allow me to simply launch the application, as the tools are already set up by the Dockerfile. Then I would ideally like to build the executable from the repository, but I get an error at line 18 when I try to create the Conda environment. This is what I get:
[+] Building 1.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 708B 0.1s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.1s
=> [internal] load metadata for docker.io/library/node:alpine 1.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [stage-4 1/6] FROM docker.io/library/node:alpine@sha256:1a04e2ec39cc0c3a9657c1d6f8291ea2f5ccadf6ef4521dec946e522833e87ea 0.0s
=> CACHED [stage-4 2/6] RUN apk add --no-cache git 0.0s
=> CACHED [stage-4 3/6] RUN apk add --no-cache openssh 0.0s
=> CACHED [stage-4 4/6] WORKDIR /home/mat/Desktop/Pangenie 0.0s
=> CACHED [stage-4 5/6] RUN git clone https://github.com/eblerjana/pangenie.git /home/mat/Desktop/Pangenie 0.0s
=> ERROR [stage-4 6/6] RUN conda env create -f environment.yml 0.1s
[stage-4 6/6] RUN conda env create -f environment.yml:
#10 0.125 /bin/sh: conda: not found
executor failed running [/bin/sh -c conda env create -f environment.yml]: exit code: 127
Now, I'm not really experienced, as I said, and I spent some time looking for a solution and tried different things, but nothing worked out... If anyone has an idea, or even suggestions on how to fix this Dockerfile, please let me know.
Thanks in advance!
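For what it's worth, the error follows from multi-stage build semantics: with several FROM lines, only the last stage (node:alpine here) becomes the final image, and that stage has no conda on its PATH, hence exit code 127. A minimal single-stage sketch, assuming the mambaforge image and guessing the environment name pangenie from the repo's environment.yml:

```dockerfile
# Keep conda/mamba in the final image instead of discarding it in an earlier stage
FROM condaforge/mambaforge:latest

RUN apt-get update \
    && apt-get install -y --no-install-recommends git cmake build-essential \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /opt/pangenie
RUN git clone https://github.com/eblerjana/pangenie.git .

# "conda activate" does not persist between RUN layers; create the env, then use conda run
RUN conda env create -f environment.yml
RUN conda run -n pangenie bash -c "mkdir build && cd build && cmake .. && make"
```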

Dockerfile cannot run a container using "docker-compose up --build" command

When I run the Dockerfile using the "docker-compose up --build" command, a "file not found" error is output, and the container does not run.
The Dockerfile, docker-compose.yaml, directories, and result are below.
Docker version :
\server>docker --version
Docker version 20.10.14, build a224086
Dockerfile :
FROM openjdk:14-jdk-alpine3.10
RUN mkdir -p /app/workspace/config && \
mkdir -p /app/workspace/lib && \
mkdir -p /app/workspace/bin
WORKDIR /app/workspace
VOLUME /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 774 /app/workspace/bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
ENTRYPOINT ./startServer.sh
docker-compose.yaml:
version: '3'
services:
server:
container_name: cn-server
build:
context: ./server/
dockerfile: Dockerfile
ports:
- "6969:6969"
volumes:
- ${SERVER_HOST_DIR}:/app/workspace
networks:
- backend
networks:
backend:
driver: bridge
directories :
(screenshot of the project tree in the original post)
"docker-compose up --build" command execution result :
Building server
[+] Building 3.7s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 425B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/openjdk:14-jdk-alpine3.10 2.0s
=> [internal] load build context 0.0s
=> => transferring context: 239B 0.0s
=> CACHED [1/8] FROM docker.io/library/openjdk:14-jdk-alpine3.10@sha256:b8082268ef46d44ec70fd5a64c71d445492941813ba9d68049be6e63a0da542f 0.0s
=> [2/8] RUN mkdir -p /app/workspace/config && mkdir -p /app/workspace/lib && mkdir -p /app/workspace/bin 0.4s
=> [3/8] WORKDIR /app/workspace 0.1s
=> [4/8] COPY ./bin ./bin 0.1s
=> [5/8] COPY ./config ./config 0.1s
=> [6/8] COPY ./lib ./lib 0.1s
=> [7/8] RUN chmod 774 /app/workspace/bin/*.sh 0.5s
=> [8/8] WORKDIR /app/workspace/bin 0.1s
=> exporting to image 0.2s
=> => exporting layers 0.2s
=> => writing image sha256:984554c9d7d9b3312fbe2dc76b4c7381e93cebca3a808ca16bd9e3777d42f919 0.0s
=> => naming to docker.io/library/docker_cn-server 0.0s
Creating cn-server ... done
Attaching to cn-server
cn-server | /bin/sh: ./startServer.sh: not found
cn-server exited with code 127
Also, the bin, config, and lib directories are not created in the host volume directory, and there are no files.
Please tell me what I did wrong or what I used incorrectly.
Thank you.
There are two obvious problems here, both related to Docker volumes.
In your Dockerfile, you switch to WORKDIR /app/workspace and do some work there, but then in the Compose setup, you bind-mount a host directory over all of /app/workspace. This causes all of the work in the Dockerfile to be lost, and replaces the code in the image with unpredictable content from the host. In the docker-compose.yml file you should delete the volumes: block. You should be able to reduce what you've shown to as little as
version: '3.8'
services:
server:
build: ./server
ports:
- '6969:6969'
The second problem is in the Dockerfile itself. You declare VOLUME /app/workspace fairly early on. This is unnecessary, though, and its most visible effect is to cause later RUN commands in that directory to have no effect. So in particular your RUN chmod ... command isn't happening. Deleting the VOLUME line can help with that. (You also don't need to RUN mkdir directories you're about to COPY into the image.)
FROM openjdk:14-jdk-alpine3.10
WORKDIR /app/workspace
COPY ./bin ./bin
COPY ./config ./config
COPY ./lib ./lib
RUN chmod 0755 bin/*.sh
EXPOSE 6969
WORKDIR /app/workspace/bin
CMD ["./startServer.sh"]
There are other potential problems depending on the content of the startServer.sh file. I'd expect this to be a shell script and its first line to be a "shebang" line, #!/bin/sh. If it explicitly names GNU Bash or another shell, that won't be present in an Alpine-based image. If you're working on a Windows-based system and the file has DOS line endings, that will also cause an error.
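Both checks can be sketched from a shell; the script below is a stand-in for the real startServer.sh:

```shell
# Stand-in script: a bash shebang plus DOS line endings, both of which fail on Alpine
printf '#!/bin/bash\r\nexec java -jar server.jar\r\n' > startServer.sh

# od -c makes stray \r bytes on the shebang line visible
head -n1 startServer.sh | od -c

# Strip the CR bytes and point the shebang at a shell Alpine actually ships
sed -i 's/\r$//; 1s|#!/bin/bash|#!/bin/sh|' startServer.sh
head -n1 startServer.sh
```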

Docker error: no such file or directory, open '/package.json' with NestJs application

I'm trying to run a Node image (with a NestJS application) inside Docker, but I get this error:
$ docker compose build
[+] Building 7.3s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 747B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:12.22.4-alpine 6.6s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 10.04kB 0.0s
=> [development 1/6] FROM docker.io/library/node:12.22.4-alpine@sha256:78be4f61c7a0f00cc9da47e3ba2f1bacf9ba 0.0s
=> CACHED [development 2/6] WORKDIR /project/sawtooth-tuna/backend 0.0s
=> CACHED [development 3/6] COPY package*.json ./ 0.0s
=> CACHED [development 4/6] RUN npm install --only=development 0.0s
=> [development 5/6] COPY . . 0.1s
=> ERROR [development 6/6] RUN cd /project/sawtooth-tuna/backend && npm run build 0.5s
------
> [development 6/6] RUN cd /project/sawtooth-tuna/backend && npm run build:
#0 0.507 npm ERR! code ENOENT
#0 0.508 npm ERR! syscall open
#0 0.508 npm ERR! path /project/sawtooth-tuna/backend/package.json
#0 0.509 npm ERR! errno -2
#0 0.510 npm ERR! enoent ENOENT: no such file or directory, open '/project/sawtooth-tuna/backend/package.json'
#0 0.510 npm ERR! enoent This is related to npm not being able to find a file.
#0 0.510 npm ERR! enoent
#0 0.516
#0 0.516 npm ERR! A complete log of this run can be found in:
#0 0.516 npm ERR! /root/.npm/_logs/2022-05-17T00_58_30_434Z-debug.log
------
failed to solve: executor failed running [/bin/sh -c cd /project/sawtooth-tuna/backend && npm run build]: exit code: 254
I have read all the available posts related to this topic, but no luck.
My Dockerfile:
# Download base image
FROM node:12.22.4-alpine As development
# Define Base Directory
WORKDIR /project/sawtooth-tuna/backend
# Copy and restore packages
COPY package*.json ./
RUN npm install --only=development
# Copy all other directories
COPY . .
# Setup base command
RUN npm run build
# # second phase
FROM node:12.22.4-alpine As production
# Declaring working directory
WORKDIR /project/sawtooth-tuna/backend
COPY package*.json ./
RUN npm install --only=production
#Copy build artifacts
COPY --from=builder /project/sawtooth-tuna/backend/dist ./
COPY --from=builder /project/sawtooth-tuna/backend/config ./config
# Start the server
CMD [ "node", "main.js" ]
And I am using docker-compose:
tunachain-backend:
build:
context: .
target: development
dockerfile: ./backend/Dockerfile
image: hyperledger/tunachain-backend
container_name: tunachain-backend
volumes:
- .:/project/sawtooth-tuna/backend
- /project/sawtooth-tuna/backend/node_modules
command: npm run start:dev
ports:
- 3001:3001
- 9229:9229
My project structure -
backend -
NestJs application code
Dockerfile
docker-compose
Any suggestion or hint (how should I debug the issue)? I'm fairly new with Docker.
Here, the problem was how and where I was copying my package*.json to the destination folder. As my docker-compose file is in the root directory and my Dockerfile is in the backend folder, the copy command should look like COPY backend/package*.json ./.
Also, for debugging the issue, I found the image from my last successful step, ran sh inside it, and tried to run the failing step inside that image to get more insight.
Hope someone in the future will get some help from this.
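Concretely: since docker-compose is at the repo root and sets context: ., every COPY path in the Dockerfile resolves against the repo root, not the backend folder. A sketch of the corrected first stage (note the original also copies --from=builder although no stage is named builder; the stage name below is my assumption, so the compose target: would need updating to match):

```dockerfile
# Stage named "builder" so the later COPY --from=builder lines resolve
FROM node:12.22.4-alpine As builder
WORKDIR /project/sawtooth-tuna/backend

# Build context is the repo root, so source paths start with backend/
COPY backend/package*.json ./
RUN npm install --only=development
COPY backend/ .
RUN npm run build
```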

npm ERR! Tracker "idealTree" already exists while creating the Docker image for Node project

I have created a Node.js project called simpleWeb. The project contains package.json and index.js.
index.js
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('How are you doing');
});
app.listen(8080, () => {
console.log('Listening on port 8080');
});
package.json
{
"dependencies": {
"express": "*"
},
"scripts": {
"start": "node index.js"
}
}
I have also created one Dockerfile to create the docker image for my node.js project.
Dockerfile
# Specify a base image
FROM node:alpine
# Install some dependencies
COPY ./ ./
RUN npm install
# Default command
CMD ["npm", "start"]
When I try to build the Docker image using the "docker build ." command, it throws the error below.
Error Logs
simpleweb » docker build . ~/Desktop/jaypal/Docker and Kubernatise/simpleweb
[+] Building 16.9s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 8.7s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 418B 0.0s
=> [1/3] FROM docker.io/library/node:alpine@sha256:5b91260f78485bfd4a1614f1afa9afd59920e4c35047ed1c2b8cde4f239dd79b 0.0s
=> CACHED [2/3] COPY ./ ./ 0.0s
=> ERROR [3/3] RUN npm install 8.0s
------
> [3/3] RUN npm install:
#8 7.958 npm ERR! Tracker "idealTree" already exists
#8 7.969
#8 7.970 npm ERR! A complete log of this run can be found in:
#8 7.970 npm ERR! /root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log
------
executor failed running [/bin/sh -c npm install]: exit code: 1
The log output above provides a path, /root/.npm/_logs/2020-12-24T16_48_44_443Z-debug.log, where the full logs can supposedly be found.
But that file is not present on my local machine.
I don't understand what the issue is.
This issue is happening due to changes in the npm shipped with Node.js starting with version 15. When no WORKDIR is specified, npm install is executed in the root directory of the container, which results in this error. Executing npm install in a project directory of the container, specified by WORKDIR, resolves the issue.
Use the following Dockerfile:
# Specify a base image
FROM node:alpine
#Install some dependencies
WORKDIR /usr/app
COPY ./ /usr/app
RUN npm install
# Set up a default command
CMD [ "npm","start" ]
Global install
In the event you're wanting to install a package globally outside of working directory with a package.json, you should use the -g flag.
npm install -g <pkg>
This error may trigger if the CI software you're using like semantic-release is built in node and you attempt to install it outside of a working directory.
The accepted answer is basically right, but when I tried it, it still didn't work at first. Here's why:
WORKDIR sets the directory against which a relative COPY destination is resolved. Having already moved into /usr/app, asking to copy from ./ into a relative destination like ./usr/app produces the following structure in the container: /usr/app/usr/app.
As a result, CMD ["npm", "start"], which runs in the WORKDIR (/usr/app), does not find the package.json.
I suggest using this Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
You should specify the WORKDIR prior to the COPY instruction to ensure npm install executes inside the directory that holds all your application files. Here is how you can do this:
WORKDIR /usr/app
# Install some dependencies
COPY ./ ./
RUN npm install
Note that you can simply write COPY ./ ./ (the second ./ resolves to /usr/app thanks to the WORKDIR instruction) instead of COPY ./ /usr/app.
Now, a good reason to use the WORKDIR instruction is that you avoid mixing your application files and directories with the container's root file system (which could override system directories if your application happens to use the same directory names).
One more thing: it is good practice to segment your configuration a bit, so that when you change, for example, your index.js (and therefore need to rebuild the image), you do not need to rerun "npm install" as long as package.json has not been modified.
Your application is very basic, but think of big applications where "npm install" can take several minutes.
In order to make use of caching process of Docker, you can segment your configuration as follows:
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
This instructs Docker to cache the first COPY and RUN commands when package.json is not touched. So when you change for instance the index.js, and you rebuild your image, Docker will use cache of the previous instructions (first COPY and RUN) and start executing the second COPY. This makes your rebuild much quicker.
Example for image rebuild:
=> CACHED [2/5] WORKDIR /usr/app 0.0s
=> CACHED [3/5] COPY ./package.json ./ 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> [5/5] COPY ./ ./
Specifying the working directory inside the Dockerfile as below will work:
WORKDIR '/app'
Make sure to use --build in your docker-compose command so the image is rebuilt from the Dockerfile:
docker-compose up --build
# Specify a base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
COPY ./ ./
# Default command
CMD ["npm","start"]
Note that with this setup, if you change your index file, running docker build and docker run again will reflect your new changes in the browser output.
A bit late to the party, but for projects not wanting to create a Dockerfile for the installer, it is also possible to run the installer from an Ephemeral container. This gives full access to the Node CLI, without having to install it on the host machine.
The command assumes it is run from the root of the project and there is a package.json file present. The -v $(pwd):/app option mounts the current working directory to the /app folder in the container, synchronizing the installed files back to the host directory. The -w /app option sets the work directory of the image as the /app folder. The --loglevel=verbose option causes the output of install command to be verbose. More options can be found on the official Node docker hub page.
docker run --rm -v $(pwd):/app -w /app node npm install --loglevel=verbose
Personally I use a Makefile to store several Ephemeral container commands that are faster to run separate from the build process. But of course, anything is possible :)
Maybe you can change the Node version. Also, don't forget WORKDIR:
FROM node:14-alpine
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]
The solutions given above didn't work for me; I changed the Node image in my Dockerfile from node:alpine to node:12.18.1 and it worked.
On the current node:alpine3.13 image it is enough to copy the content of the root folder into the container's root folder with COPY ./ ./, omitting the WORKDIR command. But as a practical solution I would recommend:
WORKDIR /usr/app - it goes by convention among developers to put project into separate folder
COPY ./package.json ./ - here we copy only package.json file in order to avoid rebuilds from npm
RUN npm install
COPY ./ ./ - here we copy all the files (remember to create .dockerignore file in the root dir to avoid copying your node_modules folder)
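On the .dockerignore point: a minimal file for this layout might look like (a sketch, not from the thread):

```
node_modules
npm-debug.log
.git
.dockerignore
Dockerfile
```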
We also had a similar issue, so I replaced npm with yarn and it worked quite well. Here is the sample code:
FROM python:3.7-alpine
ENV CRYPTOGRAPHY_DONT_BUILD_RUST=1
#install bash
RUN apk --update add bash zip yaml-dev
RUN apk add --update nodejs yarn build-base postgresql-dev gcc python3-dev musl-dev libffi-dev
RUN yarn config set prefix ~/.yarn
#install serverless
RUN yarn global add serverless@2.49.0 --prefix /usr/local && \
yarn global add serverless-pseudo-parameters@2.4.0 && \
yarn global add serverless-python-requirements@4.3.0
RUN mkdir -p /code
WORKDIR /code
COPY requirements.txt .
COPY requirements-test.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements-test.txt
COPY . .
CMD ["bash"]
Try npm init and npm install express to create the package.json file.
You can also specify a Node version lower than 15:
# Specify a base image
FROM node:14
# Install some dependencies
COPY ./ ./
RUN npm install
# Default command
CMD ["npm", "start"]

Dockerfile - RUN unable to execute binary available in environment PATH

Creating a dockerfile to install dependency binary files:
FROM alpine
RUN apk update \
&& apk add ca-certificates wget \
&& update-ca-certificates
RUN mkdir -p /opt/nodejs \
&& cd /opt/nodejs \
&& wget -qO- https://nodejs.org/dist/v8.9.1/node-v8.9.1-linux-x64.tar.gz | tar xvz --strip-components=1
RUN chmod +x /opt/nodejs/bin/*
ENV PATH="/opt/nodejs/bin:${PATH}"
RUN which node
RUN node --version
which node correctly identifies the node binary from $PATH, as $PATH is modified by the ENV command before it. However, RUN node --version is not able to locate the binary.
The image build logs show:
Step 11 : ENV PATH "/opt/nodejs/bin:${PATH}"
---> Using cache
---> 7dc04c05007f
Step 12 : RUN which node
---> Running in deeaf8e9fe09
/opt/nodejs/bin/node
---> 074820b1b9b5
Step 13 : RUN node --version
---> Running in 6f7eabd95e90
/bin/sh: node: not found
The command '/bin/sh -c node --version' returned a non-zero code: 127
What is the proper way to invoke installed binaries during the image build process?
Notes:
I have also tried linking binaries to /bin, but sh still can't find them in RUN.
Docker version 1.12.1
The version of node you installed has dependencies on libraries that are not included in the alpine base image. It also was likely linked against glibc instead of musl.
/ # apk add file
(1/2) Installing libmagic (5.28-r0)
(2/2) Installing file (5.28-r0)
Executing busybox-1.25.1-r0.trigger
OK: 9 MiB in 15 packages
/ # file /opt/nodejs/bin/node
/opt/nodejs/bin/node: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=862ecb804ed99547c06d5bd4ac1090da500acb61, not stripped
/ # ldd /opt/nodejs/bin/node
/lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
librt.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
Error loading shared library libstdc++.so.6: No such file or directory (needed by /opt/nodejs/bin/node)
libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
Error loading shared library libgcc_s.so.1: No such file or directory (needed by /opt/nodejs/bin/node)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f793665d000)
You can find a Dockerfile that installs node on Alpine from the docker hub official repo that would be a much better starting point.
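As an aside, when plain Alpine is a hard requirement, its own package repository ships a musl-linked Node build, which sidesteps the shared-library problem entirely. A sketch (package names as in current Alpine repositories; the pinned v8.9.1 from the question would not be available this way):

```dockerfile
FROM alpine
# Alpine's nodejs package is built against musl, so there are no glibc loader errors
RUN apk add --no-cache nodejs npm
RUN node --version && npm --version
```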
