npm install doesn't work in Docker

This is my Dockerfile:
FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
rubygems build-essential ruby-dev \
&& rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install
CMD ["gulp", "start:dev"]
When I build the image, the npm install command executes quickly and with very little output. I actually build it through docker-compose, which does have a volume mounted, and I cannot see the node_modules folder being created on my host. When I launch a container from this image, I can see there is no node_modules folder. If I then execute npm install inside the container, things start working: it takes 2-3 minutes to install all the packages, and the node_modules folder is indeed created.
What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?

The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME is where your docker-compose.yml is located):
docker run --rm -it DIRNAME_node ls -ahl /usr/src/app
With docker build, all data is stored in the image, so it's intended that you don't see any files created on your host.
If you mount a volume over a directory (in Linux generally, and in a Docker container as well), it hides the directory's existing contents. That's why you can't see the node_modules created in the build step.
I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image, which is better for deployment.
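If you do want to keep the bind mount during development, a common workaround is to declare an anonymous volume for node_modules so the bind mount doesn't hide it. A minimal docker-compose.yml sketch, assuming a service named app and the paths from your Dockerfile:

services:
  app:
    build: .
    volumes:
      - .:/usr/src/app
      # anonymous volume: keeps the image's node_modules visible
      # even though the bind mount above hides everything else
      - /usr/src/app/node_modules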

Also, copying in the whole source tree and then running npm install means that whenever any source file changes, the cached npm install layer is invalidated.
Instead, separate the steps so the dependency layer is cached on its own:
COPY package*.json ./
RUN npm install
COPY . .
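Applied to the Dockerfile from the question, a sketch of the reordered version looks like this:

FROM node:7
RUN apt-get update && apt-get install -y --no-install-recommends \
    rubygems build-essential ruby-dev \
    && rm -rf /var/lib/apt/lists/*
RUN npm install -gq gulp bower
WORKDIR /usr/src/app
# copy only the manifests first, so this layer is reused
# as long as the dependencies don't change
COPY package*.json ./
RUN npm install
# now copy the rest of the source; changes here no longer
# invalidate the npm install layer above
COPY . .
CMD ["gulp", "start:dev"]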

On Windows 10 I was having the same issue reported in this question, and after some research I found a question with the necessary steps to solve the problem.
In short, the main problem was that during the install wizard I had selected the option "Use Windows containers".
To solve the issue:
1) Switch to Linux containers: right-click the Docker icon in the taskbar and choose the "Switch to Linux containers" option.
2) Disable "Experimental Features" on the command line: open Docker settings, click Command Line, and turn the option off.
3) Disable the experimental setting in the configuration file: in Docker settings, click Docker Engine and make sure experimental is set to false.
The question where I found the solution was related to another problem I was facing when trying to build Docker images: Unspecified error (0x80004005) while running a Docker build. Both problems had the same root cause: when installing Docker for the first time, I had selected the option to use Windows containers.
Hope it helps. Cheers

Related

How to install @swc/core-linux-musl on Windows, to make it work in a Docker container?

I'm working on Windows, dockerizing a Next.js with TypeScript app.
Here is my dockerfile:
FROM node:alpine
# create the directory where our application will run
RUN mkdir -p /usr/src
WORKDIR /usr/src
# copy our files into the directory
COPY . /usr/src
# install dependencies
RUN npm install
EXPOSE 3000
ENTRYPOINT ["npm", "run", "dev"]
During development I bind the host directory to the container with --mount type=bind,source=d:/apps/library.next.js/,target=/usr/src. When I start the container I get the error: error - Failed to load SWC binary for linux/x64, see more info here: https://nextjs.org/docs/messages/failed-loading-swc.
That's fine, I understand the error and know what to do. To fix this I need to install @swc/cli @swc/core @swc/core-linux-musl, but I can't, because npm complains:
npm ERR! code EBADPLATFORM
npm ERR! notsup Unsupported platform for @swc/core-linux-musl@1.2.42: wanted {"os":"linux","arch":"x64"} (current: {"os":"win32","arch":"x64"})
How can I install it on Windows, or how can I change the Docker setup to make it work? It has to be installed locally so that it gets linked (by the bind mount!) into the container.
My workaround for now is to get into the container with docker exec -it <id> /bin/sh and manually run npm install --save-dev @swc/cli @swc/core @swc/core-linux-musl. But doing that every time I recreate the container is annoying.
The docs state: "The -f or --force will force npm to fetch remote resources even if a local copy exists on disk." The flag is documented both in the legacy v6 docs you posted and in the v8 docs (see the section after --package-lock-only; it comes with the example npm install sax --force). So installing with --force should get you past the platform check, and you shouldn't have to repeat the manual install every time your container is recreated.
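Applied to the packages from the question, the suggested workaround would look something like this (a sketch: --force makes npm proceed despite the EBADPLATFORM check, at your own risk):

npm install --save-dev --force @swc/cli @swc/core @swc/core-linux-musl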

Dockerfile for creating a custom Jenkins image

I created a container using jenkins/jenkins:lts-jdk11, and as far as I know a Jenkins user should also be created with a home directory, but that isn't happening.
Below is the Dockerfile; am I doing anything wrong?
Dockerfile:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform .
COPY sencha .
COPY go .
COPY helm .
RUN chown -R jenkins:jenkins /var/jenkins_home
Built with:
docker build .
The image gets created and the container also gets created; I do see the Jenkins user with id 1000, but this user has no home dir, and moreover helm, go, sencha, and terraform are also not installed.
I exec'd into the container to double-check whether terraform is installed:
# terraform --version returns command not found
# which terraform also shows no result
Same output for go, sencha and helm.
Any suggestions?
You need to install the binaries in the /usr/local/bin/ path, as in this example:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform /usr/local/bin/terraform
By the way, the Docker image jenkins/jenkins:lts-jdk11 is based on a Debian distribution, so you can use the apt package manager to install your apps.
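A sketch of a full Dockerfile along these lines, assuming terraform, sencha, go and helm are standalone binaries sitting in your build context:

FROM jenkins/jenkins:lts-jdk11
# the base image runs as the jenkins user; switch to root to install
USER root
COPY terraform /usr/local/bin/terraform
COPY sencha /usr/local/bin/sencha
COPY go /usr/local/bin/go
COPY helm /usr/local/bin/helm
RUN chown -R jenkins:jenkins /var/jenkins_home
# drop back to the unprivileged jenkins user
USER jenkins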

Docker isn't caching Alpine apk add command

Every time I build the container I have to wait for apk add docker to finish, which takes a long time.
Since it downloads the same thing every time, can I somehow force Docker to cache apk's downloads for development purposes?
Here's my Dockerfile:
FROM golang:1.13.5-alpine
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
RUN apk add --update docker
CMD ["app"]
By the way, I am using volumes: - /var/run/docker.sock:/var/run/docker.sock in my docker-compose.yml to use sibling containers, if that matters.
EDIT: I've found that Google copies a docker.tgz into the image in Chromium's build setup:
# add docker client -- do not install docker via apk -- it will try to install
# docker engine which takes a lot of space as well (we don't need it, we need
# only the small client to communicate with the host's docker server)
ADD build/docker/docker.tgz /
What is that docker.tgz? How can I get it?
Reorder your Dockerfile and it should work.
FROM golang:1.13.5-alpine
RUN apk add --update docker
WORKDIR /go/src/app
COPY src .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
As you are copying before installing, whenever you change something in src the cache is invalidated for the docker installation as well.
Whenever you have a COPY command, if any of the files involved change, every command after it gets re-run. If you move your RUN apk add ... command to the start of the file, before it COPYs anything, it will be cached across runs.
A fairly generic recipe for most Dockerfiles to accommodate this pattern looks like:
FROM some-base-image
# Install OS-level dependencies
RUN apk add or apt-get install ...
WORKDIR /app
# Install language-level dependencies
COPY requirements.txt requirements.lock ./
RUN something install -r requirements.txt
# Install the rest of the application
COPY main.app ./
COPY src src/
# Set up standard run-time metadata
EXPOSE 12345
CMD ["/app/main.app"]
(Go and Java applications need the additional step of compiling the application, which often lends itself to a multi-stage build, but this same pattern can be repeated in both stages.)
You can download the static Docker x86_64 binaries for Mac, Linux and Windows (for example from https://download.docker.com/linux/static/stable/), untar the archive, and make the client binary executable.
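A sketch of how such a docker.tgz can be used in a Dockerfile, assuming you have downloaded the static archive into build/docker/ yourself (the archive extracts to a docker/ directory containing the client binary):

# ADD auto-extracts local tar archives into the image
ADD build/docker/docker.tgz /
# put the extracted client on the PATH
ENV PATH="/docker:${PATH}"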
In general, any package installation in a Docker container should go at the beginning of the Dockerfile, so the same packages aren't reinstalled on every build, and the COPY commands should go at the end of the Dockerfile.

libnode-dev installation within docker container

I'm trying to run a node.js application.
I can run it without problems directly on my raspbian buster.
Within a docker container running on the same raspberry pi, I have no such luck.
Dockerfile:
FROM balenalib/raspberry-pi2-debian-node:10-stretch-run
RUN sudo apt-get update
RUN sudo apt-get -y install g++ python make git
WORKDIR /usr/src/app
COPY package.json package.json
RUN JOBS=MAX npm install --production
COPY . ./
CMD ["npm", "start"]
But when I run the same node.js code within a docker container, I'm getting a libnode.so.64 error.
pi@raspberrypi:~/rpi-lora-sensorified/data $ docker logs rpi-lora-sensorified_data_1
> resin-websocket@1.0.1 start /usr/src/app
> node index.js
/usr/src/app/node_modules/bindings/bindings.js:121
throw e;
^
Error: libnode.so.64: cannot open shared object file: No such file or directory
I've tried installing libnode-dev (which I've concluded provides this library) within the container, but I'm getting a
E: Unable to locate package libnode-dev
And yes, I've rebuilt the container without cache but still cannot locate that package.
Any (like really even some pointers would help) idea where do I continue to look further?
So the solution, which I can't fully explain, is:
I was building the image on Debian stretch while testing whether the code works on Debian buster. After updating the Docker image to buster, everything works as expected.
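Concretely, that means changing the base image tag in the Dockerfile from stretch to buster (assuming balenalib publishes a matching buster variant of this image):

FROM balenalib/raspberry-pi2-debian-node:10-buster-run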

How to avoid cannot find package "github.com/golang/protobuf/jsonpb" error

I want to put my code inside a Docker container. I have created a Dockerfile, and when I run it, I get an error:
internal/server/handlers.go:16:2: cannot find package "github.com/lib/pq" in any of:
/usr/local/go/src/github.com/lib/pq (from $GOROOT)
/go/src/github.com/lib/pq (from $GOPATH)
but when I launch my code locally without Docker by typing go run main.go, everything is fine.
Make sure you install all your packages inside the container. Your Docker container is a different machine from your current computer, so you need to make sure that all dependency packages are installed in your Docker image. As an example, this Dockerfile installs my packages with go get:
FROM golang:latest
# Create working folder
RUN mkdir /app
COPY . /app
RUN apt -y update && apt -y install git
RUN go get github.com/go-sql-driver/mysql
RUN go get github.com/gosimple/slug
RUN go get github.com/gin-gonic/gin
RUN go get gopkg.in/russross/blackfriday.v2
RUN go get github.com/gin-gonic/contrib/sessions
WORKDIR /app
Now docker run -it -p 8080:8080 your_docker_image_name go run main.go should work.
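For the package from the question, the same idea applies: either add a RUN go get github.com/lib/pq line, or, if the project uses Go modules, a sketch like the following caches the dependency download separately from the source (assumes go.mod/go.sum and package main at the repo root):

FROM golang:latest
WORKDIR /app
# copy only the module manifests first so the download layer is cached
COPY go.mod go.sum ./
RUN go mod download
# now copy the source; changes here don't re-download dependencies
COPY . .
RUN go build -o /app/server .
CMD ["/app/server"]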
