When running npm install locally everything is fine, but as soon as I try it inside my Docker container I get the following error:
/api/node_modules/sharp/lib/constructor.js:1
Something went wrong installing the "sharp" module
Error relocating /api/node_modules/sharp/build/Release/../../vendor/8.10.6/lib/libvips-cpp.so.42: _ZNSt7__cxx1119basic_ostringstreamIcSt11char_traitsIcESaIcEEC1Ev: symbol not found
Any help greatly appreciated!
The Dockerfile is incredibly simple:
FROM node:12.13.0-alpine AS alpine
WORKDIR /api
COPY package.json .
RUN npm install
In my case, after trying a lot of different options from scouring the GitHub issues for sharp, adding this line to my Dockerfile fixed it:
RUN npm config set unsafe-perm true
If you are using node:15 or later, --unsafe-perm was removed; this is a workaround:
...
RUN chown root:root . # make sure root owns the directory before installing sharp
RUN npm install
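Putting it together, a minimal sketch of what the Dockerfile from the question could look like on node:15 or later (the exact tag here is an assumption; adjust it to your Node version):
FROM node:16-alpine AS alpine
WORKDIR /api
COPY package.json .
# make sure root owns the working directory before installing sharp
RUN chown root:root .
RUN npm install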
I'm working on Windows, dockerizing a Next.js app with TypeScript.
Here is my Dockerfile:
FROM node:alpine
# create directory where our application will be run
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy our files into directory
COPY . /usr/src
# install dependencies
RUN npm install
EXPOSE 3000
ENTRYPOINT ["npm", "run" ,"dev"]
During development I bind-mount a host directory into the container with --mount type=bind,source=d:/apps/library.next.js/,target=/usr/src. When I start the container I get this error: error - Failed to load SWC binary for linux/x64, see more info here: https://nextjs.org/docs/messages/failed-loading-swc.
That's fine, I understand the error and know what to do. To fix it I need to install @swc/cli @swc/core @swc/core-linux-musl, but I can't do it because npm complains:
npm ERR! code EBADPLATFORM
npm ERR! notsup Unsupported platform for @swc/core-linux-musl@1.2.42: wanted {"os":"linux","arch":"x64"} (current: {"os":"win32","arch":"x64"})
How can I install it on Windows, or how should I change the Docker setup to make it work? It has to be installed locally, because the bind mount then carries it into the container.
My workaround for now is to get into the container with docker exec -it <id> /bin/sh and manually run npm install --save-dev @swc/cli @swc/core @swc/core-linux-musl. But doing that every time I recreate the container is annoying.
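One way to avoid repeating that manual step (an untested sketch, not from the original post): install the Linux-specific packages at image build time, then mask node_modules with an anonymous volume so the Windows bind mount does not shadow what the image already contains.
Added to the Dockerfile after RUN npm install:
RUN npm install --save-dev @swc/cli @swc/core @swc/core-linux-musl
Run command with an anonymous volume over node_modules:
docker run --mount type=bind,source=d:/apps/library.next.js/,target=/usr/src -v /usr/src/node_modules -p 3000:3000 <image>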
The docs state: "The -f or --force will force npm to fetch remote resources even if a local copy exists on disk." This is in both the legacy v6 docs (the ones you posted) and the v8 docs; see the section after --package-lock-only, which comes with the example npm install sax --force. So you shouldn't have issues with that every time your container is recreated.
I'm trying to run a node.js application.
I can run it without problems directly on my raspbian buster.
Within a docker container running on the same raspberry pi, I have no such luck.
Dockerfile:
FROM balenalib/raspberry-pi2-debian-node:10-stretch-run
RUN sudo apt-get update
RUN sudo apt-get -y install g++ python make git
WORKDIR /usr/src/app
COPY package.json package.json
RUN JOBS=MAX npm install --production
COPY . ./
CMD ["npm", "start"]
But when I run the same node.js code within a docker container, I'm getting a libnode.so.64 error.
pi@raspberrypi:~/rpi-lora-sensorified/data $ docker logs rpi-lora-sensorified_data_1
> resin-websocket@1.0.1 start /usr/src/app
> node index.js
/usr/src/app/node_modules/bindings/bindings.js:121
throw e;
^
Error: libnode.so.64: cannot open shared object file: No such file or directory
I've tried installing libnode-dev (which I've concluded provides this library) within the container, but I'm getting a
E: Unable to locate package libnode-dev
And yes, I've rebuilt the container without cache but still cannot locate that package.
Any idea (really, even some pointers would help) where I should continue looking?
So the solution that I can't explain at all is:
I was trying to run the code on Debian stretch inside the container, while checking that it works on Debian buster on the host. After updating the Docker base image to buster, everything works as expected.
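For reference, the only change needed was the base image tag; a sketch, assuming balenalib publishes a matching buster variant:
FROM balenalib/raspberry-pi2-debian-node:10-buster-run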
This is my first question on Stack Overflow. Thank you all for this absolutely fantastic forum!
I'm trying to get a Vue PWA running in Docker. I used the vue-cli to set up the PWA application. Installing and running locally is no problem.
Then I tried to dockerize the project.
I tried with the following Dockerfile:
# Start with a Node.js image.
FROM node:10
# Make directory to install npm packages
RUN mkdir /install
ADD ["./code/package.json", "/install"]
WORKDIR /install
RUN npm install --verbose
ENV NODE_PATH=/install
# Copy all our files into the image.
RUN mkdir /code
WORKDIR /code
COPY . /code/
EXPOSE 8080
CMD npm run dev
The problem is that when starting up the composition I get this error:
web_1 | internal/modules/cjs/loader.js:573
web_1 | throw err;
web_1 | ^
web_1 |
web_1 | Error: Cannot find module 'chalk'
...
I've tried different approaches for a few days now, but I can't see any solution. Am I missing something? Is there an incompatibility?
I also tried switching completely to Yarn, but the effect is the same, so I don't think there is a problem with installing the packages. Could there be a problem with the NODE_PATH variable?
Thanks for your support in advance!
I was facing the same issue.
Normally you wouldn't install any devDependencies for production, so when NODE_ENV=production, npm/Yarn will not install devDependencies.
For the Docker use case, when we build a static site within the Docker container, we might need NODE_ENV=production to substitute some production variables, so we need NODE_ENV=production but also need the devDependencies installed.
Some possible solutions (a Dockerfile sketch of options 2 and 3 follows this list):
1 - move everything from devDependencies to dependencies
2 - do not set NODE_ENV=production during yarn install || npm install; only set it after module installation
3 - for Yarn, NODE_ENV=production yarn install --production=false; there should be an npm equivalent
4 - (not tested) use some other name, e.g. NODE_ENV=prod, instead of the full name production, but you might need to play around with other configs that rely on NODE_ENV=production
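A minimal Dockerfile sketch of options 2 and 3 (the image tag, paths and build script here are assumptions, not taken from the question):
FROM node:10
WORKDIR /app
COPY package*.json ./
# option 2: leave NODE_ENV unset during install so devDependencies are included
RUN npm install
# option 3, Yarn variant: RUN NODE_ENV=production yarn install --production=false
# switch to production only after the modules are installed
ENV NODE_ENV=production
COPY . .
RUN npm run build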
This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes with little output and really quickly. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. I then execute npm install and things start working: it takes 2-3 minutes to install all the packages and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image. So, it's intended that you don't see any files created on your host.
If you mount a volume (this is how mounts generally behave on Linux, not just in Docker containers), it overlays the directory, so you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image which is better for deployment.
Also, copying the whole source before running npm install means that whenever the source code changes, the cached npm install layer becomes invalid.
Instead, separate the steps/caches like so:
COPY package*.json ./
RUN npm install
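In context, a sketch of the question's Dockerfile with the cache-friendly ordering (the ruby/apt layers are omitted here for brevity):
FROM node:7
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# copy only the manifest first so the install layer stays cached
COPY package*.json ./
RUN npm install
# copying the source afterwards no longer invalidates the install layer
COPY . /usr/src/app
CMD ["gulp", "start:dev"]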
On Windows 10 I was having the same issue reported in this question and after some research I've found a question with the necessary steps to solve the problem.
In short, the main problem is that during the install wizard I've selected the option "Windows as containers".
To solve the issue:
1) Switch to Linux containers: on the taskbar, right-click the Docker icon and choose the option to switch to Linux containers.
2) Disable "Experimental features" for the command line: open Docker settings and click on Command Line.
3) Disable the experimental setting in the configuration file: in Docker settings, click on Docker Engine and make sure "experimental" is set to false.
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems were caused by the same thing: when installing Docker for the first time I had selected the option "Windows as containers".
Hope it helps. Cheers
Docker noob here so bear with me.
I have a VPS with dokku configured, it has multiple apps already running.
I am trying to add a fairly complex app at present. But docker just fails with the following error.
From what I understand, I need to update the packages the error points to. The problem is that they are needed by some other module and I can't update them. Is there a way to make Docker bypass the warning and build?
Following is the content of my Dockerfile:
FROM mhart/alpine-node:6
# Create app dir
RUN mkdir -p /app
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install
# Bundle the app
COPY . /app
EXPOSE 9337
CMD ["npm", "start"]
Been trying this for a couple of days now with no success.
Any help greatly appreciated
Thanks.
I believe the npm process getting killed with error 137 in Docker is usually caused by an out-of-memory error. You can try adding a swap file (or adding more RAM) to test this.
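To test that theory, a rough sketch of adding a 1 GB swap file on the host (size and path are arbitrary; some VPS providers disable swap):
sudo fallocate -l 1G /swapfile   # allocate the file
sudo chmod 600 /swapfile         # restrict permissions
sudo mkswap /swapfile            # format it as swap
sudo swapon /swapfile            # enable it
free -h                          # verify the swap is active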