Trouble with multiple params and RUN - docker

I'm trying to get Parcel Bundler to build assets from within a Dockerfile, but it's failing with:
🚨 No entries found.
at Bundler.bundle (/usr/local/lib/node_modules/parcel-bundler/src/Bundler.js:260:17)
at ERROR: Service 'webapp' failed to build: The command '/bin/sh -c parcel build index.html' returned a non-zero code:
1
Here's my Dockerfile:
FROM node:8 as base
WORKDIR /usr/src/app
COPY package*.json ./
# Development
FROM base as development
ENV NODE_ENV=development
RUN npm install
RUN npm install -g parcel-bundler
WORKDIR /usr/src/app
RUN parcel build index.html <----- this is where it's failing!
#RUN parcel watch index.html
# Uncomment to use Parcel's dev-server
#CMD [ "npm", "run", "parcel:dev" ]
#CMD ["npm", "start"]
# Production
FROM base as production
ENV NODE_ENV=production
COPY . .
RUN npm install --only=production
RUN npm install -g parcel-bundler
RUN npm run parcel:build
CMD [ "npm", "start" ]
NOTE: I'm trying to get this to run in Development mode first.
When I "log into" the container, I found that this command does fail:
# /bin/sh -c parcel build index.html
But this works:
# parcel build index.html
And this works:
# /bin/sh -c "parcel build index.html"
But neither of these variations works in the Dockerfile:
RUN /bin/sh -c "parcel build index.html"
or
RUN ["/bin/sh", "-c", "parcel build index.html"]
NOTE: I also tried 'bash' instead of 'sh' and it still didn't work.
Any ideas why it's not working?

bash and sh are indeed different shells, but that shouldn't matter here. -c "command argument argument" passes the entire string to -c as the command to run, whereas -c command argument argument passes only command to -c; the remaining words become the positional parameters ($0, $1, ...) of that command string rather than arguments to the command itself. So the right invocation is indeed:
RUN parcel build index.html
or, if you prefer to explicitly do what Docker will do when it sees RUN followed by a string, you can do:
RUN [ "bash", "-c", "parcel build index.html" ]
But I don't think any of that is your problem. Looking at your Dockerfile, I think you're probably either:
missing some files that Bundler needs (you've only copied in package*.json at this point), or
missing some additional config that Bundler needs to function (I don't see you explicitly setting 'webapp', but that might be in a package*.json file).
I'd put my money on the first one.
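For example, a minimal fix for the development stage might look like this (a sketch: it assumes index.html and the rest of the source sit next to the Dockerfile, and that a .dockerignore keeps node_modules out of the build context):
FROM base as development
ENV NODE_ENV=development
RUN npm install
RUN npm install -g parcel-bundler
COPY . .
RUN parcel build index.html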

Related

Running multiple CMD commands at once

I'm wondering what I may be doing wrong: I'm trying to rewrite this CMD in the exec (array) form, but it's not running right. The original, unedited version runs fine, but the array version does not; the modified version exits immediately once it starts. Does combining commands not work in exec form, or what am I missing?
Original:
CMD sshd & cd /app && npm start
Modified:
CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]
My complete dockerfile:
FROM node:10-alpine
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
# CMD sshd & cd /app && npm start
# CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]
You should:
Delete sshd: it's not installed in your image, it's unnecessary, and it's all but impossible to set up securely.
Delete the cd part, since the WORKDIR declaration above this has already switched into that directory.
Then your CMD is just a simple command.
FROM node:10-alpine
# note, no sshd, user accounts, host keys, ...
# WORKDIR does the same thing as `cd /app`
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
CMD ["npm", "start"]
If you want to run multiple commands, or try to launch an unmanaged background process, you need a shell, so you can't usefully use the exec form of CMD. In the form you show in the question the main command is sshd only, and it takes 6 arguments, including the literal strings & and &&.
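If you really do need several commands behind one CMD in exec form, the usual workaround is to invoke a shell explicitly, e.g.:
CMD ["sh", "-c", "sshd & cd /app && npm start"]
which is exactly what the shell-form CMD sshd & cd /app && npm start already does for you; it doesn't change the advice above to drop sshd and the cd.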
Define a script and put all of your commands in it.
E.g., call this script startup.sh:
#!/bin/sh
sshd
cd /app
npm start
And call this script from CMD:
COPY startup.sh /app/data/startup.sh
RUN chmod +x /app/data/startup.sh
CMD ["/app/data/startup.sh"]

sh: curl: not found even after installing curl inside k8s pod

It might be a simple question, but I could not find a proper solution.
I have the Docker image below. All I want to do is run the curl command inside a Kubernetes pod, but I get the error below. I am also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
mand in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine stage contains only the basic Alpine packages, the Node tooling, and the /usr/src/app directory you copied from the first stage.
If you want to have curl in your final image, you need to install it in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# No need to install curl in this stage
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test this by running the Docker container directly; there is no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications written in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and compile the sources into a binary. Since you don't need (and probably don't want) the sources and developer tools in a production image, you create a new stage.
In the second stage you copy the compiled binary and run it as the CMD or ENTRYPOINT. This way the image contains only executable code, which makes it smaller.
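As a rough sketch (a tiny Go program is used here purely as an illustration; it has nothing to do with the question's Node app):
# Stage 1: build the binary with the full toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
# Stage 2: ship only the binary
FROM alpine:3.19
COPY --from=builder /bin/app /bin/app
CMD ["/bin/app"]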
Alternatively, you can add curl with apk inside the running k8s pod:
apk add curl
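For a one-off debugging session you can also install it from outside the pod (assuming the container user is allowed to install packages, which is often not the case for non-root images):
kubectl exec -it hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- sh -c 'apk add --no-cache curl'
Anything installed this way is lost when the pod restarts, so baking curl into the image as shown above is the durable fix.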

How to avoid node_modules folder being deleted

I'm trying to create a Docker container to act as a test environment for my application. I am using the following Dockerfile:
FROM node:14.4.0-alpine
WORKDIR /test
COPY package*.json ./
RUN npm install .
CMD [ "npm", "test" ]
As you can see, it's pretty simple. I only want to install all dependencies but NOT copy the code, because I will run that container with the following command:
docker run -v `pwd`:/test -t <image-name>
But the problem is that the node_modules directory is deleted when I mount the volume with -v. Any workaround to fix this?
When you bind mount the test directory with $PWD, the container's test directory is hidden by the mount, so you no longer get your node_modules inside it.
To fix this you have two options.
You can run npm install in a separate directory such as /node, mount your code into the test directory, and point Node at the installed modules via the NODE_PATH environment variable (NODE_PATH=/node/node_modules).
The Dockerfile then looks like this:
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
ENV NODE_PATH=/node/node_modules
WORKDIR /test
CMD [ "npm", "test" ]
Or you can write an entrypoint.sh script that copies the node_modules folder into the test directory at container runtime.
FROM node:14.4.0-alpine
WORKDIR /node
COPY package*.json ./
RUN npm install .
WORKDIR /test
COPY Entrypoint.sh ./
ENTRYPOINT ["Entrypoint.sh"]
and Entrypoint.sh is something like:
#!/bin/sh
cp -r /node/node_modules /test/.
npm test
Approach 1
A workaround is to do:
CMD npm install && npm run dev
Approach 2
Have Docker install node_modules during docker-compose build and run the app with docker-compose up.
Folder Structure
docker-compose.yml
version: '3.5'
services:
  api:
    container_name: /$CONTAINER_FOLDER
    build: ./$LOCAL_FOLDER
    hostname: api
    volumes:
      # map local to remote folder, exclude node_modules
      - ./$LOCAL_FOLDER:/$CONTAINER_FOLDER
      - /$CONTAINER_FOLDER/node_modules
    expose:
      - 88
Dockerfile
FROM node:14.4.0-alpine
WORKDIR /test
COPY ./package.json .
RUN npm install
# run command
CMD npm run dev
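With that setup (assuming $LOCAL_FOLDER and $CONTAINER_FOLDER come from your shell or an .env file, and that the Dockerfile's WORKDIR matches $CONTAINER_FOLDER), you build once and then start the stack:
docker-compose build
docker-compose up
The anonymous volume /$CONTAINER_FOLDER/node_modules shadows that path inside the bind mount, so the modules installed at build time aren't hidden by the host directory at run time.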

Docker WORKDIR path is added to relative path

I have the problem that the following Dockerfile ends up in an exception: it can't find /src/webui/tail -f /dev/null, which makes sense, because I only wanted to execute tail -f /dev/null.
docker build works; docker run fails!
How can I avoid the WORKDIR path being prepended to the tail command?
Dockerfile:
FROM node:12.17.0-alpine
WORKDIR /src/webui
RUN apk update && apk add bash
CMD ["tail -f /dev/null"]
Exception:
> docker run test
internal/modules/cjs/loader.js:969
throw err;
^
Error: Cannot find module '/src/webui/tail -f /dev/null'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
at Function.Module._load (internal/modules/cjs/loader.js:842:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
System Information:
Docker Desktop (Windows 10 Pro)
Docker version 19.03.8, build afacb8b
When you give CMD (or RUN or ENTRYPOINT) in the JSON-array form, you're responsible for manually breaking up the command into "words". That is, you're running the equivalent of the quoted shell command
'tail -f /dev/null'
and the whole thing gets interpreted as one "word" -- the spaces and options are taken as part of the command name to look up in $PATH.
The most straightforward workaround is to drop the JSON-array form and just use a bare string (the shell form) as the CMD.
Note that the container you're building doesn't actually do anything: it doesn't include any application source code and the command you're providing intentionally does nothing forever. Aside from one running container with an idle process, you get the same effect by just not running the container at all. You typically want to copy your application code in and set CMD to actually run it:
FROM node:12.17.0-alpine
WORKDIR /src/webui
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
CMD ["yarn", "start"]
# Also works: CMD yarn start
# Won't work: CMD ["yarn start"]
The correct Dockerfile:
FROM node:12.17.0-alpine
WORKDIR /src/webui
RUN apk update && apk add bash
CMD ["tail", "-f", "/dev/null"]
So the difference is that this:
CMD ["tail -f /dev/null"]
needs to be:
CMD ["tail", "-f", "/dev/null"]
You can read more about CMD in the official Docker docs.
CMD is appended after ENTRYPOINT.
The node:12.17.0-alpine image ships a default entrypoint script that falls back to running node when the first argument isn't an executable, so your container effectively runs:
node "tail -f /dev/null"
Option 1: override the ENTRYPOINT at build time:
ENTRYPOINT tail -f /dev/null
Option 2: override the ENTRYPOINT at run time:
docker run --entrypoint sh my-image
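If you're unsure what a base image will prepend, you can inspect its entrypoint and default command, e.g.:
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' node:12.17.0-alpine
For recent node images this typically shows a docker-entrypoint.sh wrapper with node as the default command.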

Expand ARG value in CMD [Dockerfile]

I'm passing a build argument with: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME} - it's not expanding the variable
I read through this post: Use environment variables in CMD
So I tried doing: CMD ["sh", "-c", "npm run start:${RUNTIME}"] - I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity, there were two problems I was running into:
1. The problem as described by Samuel P.
2. ENV/ARG values are not carried between the stages of a multi-stage build unless a later stage builds FROM the stage that set them.
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup where ARG variables aren't available anymore.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem you should transfer the ARG variable to an ENV variable.
Add the following lines before your CMD (the ARG has to be redeclared in the final stage, because ARG values don't carry across build stages):
ARG RUNTIME_ENV
ENV RUNTIME_ENV=${RUNTIME_ENV}
If you want to provide a default value you can use the following:
ENV RUNTIME_ENV=${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
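Putting it together, a build and run might look like this (myapp is just a placeholder tag, and it assumes package.json defines a start:production script):
docker build --build-arg RUNTIME_ENV=production -t myapp .
docker run myapp    # effectively runs: npm run start:production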
