Running multiple CMD commands at once - docker

I'm wondering what I'm doing wrong. I'm trying to rewrite this CMD in the exec (JSON array) form, but it doesn't run correctly. The original, unedited version runs fine, but the array version does not: the container exits immediately after it starts. Does combining commands not work in the exec form, or what am I missing?
Original:
CMD sshd & cd /app && npm start
Modified:
CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]
My complete Dockerfile:
FROM node:10-alpine
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
# CMD sshd & cd /app && npm start
# CMD ["sshd", "&", "cd", "/app", "&&", "npm", "start"]

You should:
Delete sshd: it's not installed in your image, it's unnecessary, and it's all but impossible to set up securely.
Delete the cd part, since the WORKDIR declaration above this has already switched into that directory.
Then your CMD is just a simple command.
FROM node:10-alpine
# note, no sshd, user accounts, host keys, ...
# WORKDIR does the same thing as `cd /app`
WORKDIR /app
COPY . /app
RUN npm install && npm cache clean --force
CMD ["npm", "start"]
If you want to run multiple commands, or attempt to launch an unmanaged background process, those things require a shell, and you can't usefully use the CMD exec form for them. In the form you show in the question, the main command is sshd only, and it receives six arguments, including the literal strings & and &&.
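For completeness: if you genuinely needed shell features here, the shell form (or an explicit sh -c in exec form) is the way to get them. A small sketch, not needed for the recommended Dockerfile above:
# shell form: Docker runs this via /bin/sh -c
CMD cd /app && npm start
# equivalent exec form with an explicit shell
CMD ["/bin/sh", "-c", "cd /app && npm start"]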

Define a script and put all of your commands into it.
E.g., I call this script startup.sh:
#!/bin/sh
# node:10-alpine has no bash, so use /bin/sh
sshd
cd /app
npm start
And call this script in CMD
COPY startup.sh /app/data/startup.sh
CMD ["/app/data/startup.sh"]

Related

Dockerfile wants to copy shell script to /usr/bin but I'm running Windows

I'm using Docker with Windows 10. The Dockerfile for my app includes the following lines:
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
The problem is that because the OS is Win 10, there is no /usr/bin/ path--the equivalent I guess would be C:\Program Files. So when I run docker-compose up (in VS Code's Bash terminal), I get the following error:
my_app_name | exec /usr/bin/entrypoint.sh: no such file or directory
my_app_name exited with code 1
Changing the path in the Dockerfile doesn't seem like a good idea, because then Linux users will have the same problem. What is the right way to handle this for compatibility with both Windows and Linux?
EDIT: the entrypoint.sh script is as follows:
#!/bin/bash
set -e
# Remove a potentially pre-existing server.pid for Rails.
rm -f /docker-rails/tmp/pids/server.pid
# Then exec the container's main process (what's set as CMD in the Dockerfile).
exec "$#"
and the entire Dockerfile is:
FROM ruby:2.6.2
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client cron
RUN mkdir /docker-rails
WORKDIR /docker-rails
COPY Gemfile /docker-rails/Gemfile
COPY Gemfile.lock /docker-rails/Gemfile.lock
WORKDIR /docker-rails
RUN bundle install
COPY . /docker-rails
# Add a script to be executed every time the container starts.
COPY docker/entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]

Run mvn commands as part of the docker file using Entrypoint and/or CMD

How can we run mvn commands from a Dockerfile
Here is my Dockerfile:
FROM maven:3.3.9-jdk-8-alpine
WORKDIR /app
COPY code /app
WORKDIR /app
ENTRYPOINT ["mvn"]
CMD ["clean test -Dsurefire.suiteXmlFiles=/app/abc.xml"]
I tried to build and run the above image and it fails (abc.xml is under the /app directory).
Is there a way to get this to work?
According to the documentation:
"If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format."
As such, you should rewrite CMD as follows:
CMD ["clean","test","-Dsurefire.suiteXmlFiles=/app/abc.xml"]
You can also parameterize the entrypoint as a JSON array, as per the documentation:
ENTRYPOINT ["mvn","clean","test","-Dsurefire.suiteXmlFiles=/app/abc.xml"]
But I suggest you follow best practice and use an entrypoint shell script. This ensures that changing these parameters does not require rewriting the Dockerfile:
Create an entrypoint.sh file in the code directory and make it executable. It should read like this:
#!/bin/sh
if [ "$#" -ne 1 ]
then
    FILE="abc.xml"
else
    FILE=$1
fi
mvn clean test -Dsurefire.suiteXmlFiles="/app/$FILE"
Replace your entrypoint with ENTRYPOINT ["./entrypoint.sh"]
Replace your command with CMD ["abc.xml"]
PS
You have "WORKDIR /app" twice. This isn't what's failing you, but it is redundant; you can get rid of one of them.

docker unzip file on run

Here is my Dockerfile, which works, but my image is heavy.
I would like to unzip only on start! How can I do that?
I would like to execute dockerStatScript.sh just on start, and after "pm2-runtime", "pm2_conf.json"
I have tried everything... I don't get it.
Thanks for your help.
FROM keymetrics/pm2:12-alpine
RUN apk add --no-cache --upgrade bash && \
apk add postgresql-libs libpq zip unzip tree
WORKDIR /app
COPY docker/dockerStatScript.sh .
RUN chmod +x dockerStatScript.sh
ENV NODE_ENV=production
COPY app_prod/build/2.6.3/app.zip Zapp.zip
COPY app_prod/build/2.6.3/node_modules.zip Znode_modules.zip
COPY app_prod/build/2.6.3/config.zip Zconfig.zip
RUN ["/bin/bash","dockerStatScript.sh"]
CMD [ "pm2-runtime", "pm2_conf.json" ]
EXPOSE 8080
To compress the image, use
docker image build --compress {rest-of-the-build-arguments-here}
To start dockerStatScript.sh after "pm2-runtime", "pm2_conf.json",
you will have to create a wrapper shell script, e.g. startup.sh, with this content:
pm2-runtime pm2_conf.json
./dockerStatScript.sh
Add it to the Docker image like you did for dockerStatScript.sh, i.e.:
COPY docker/startup.sh .
RUN chmod +x startup.sh
and then replace these:
RUN ["/bin/bash","dockerStatScript.sh"]
CMD [ "pm2-runtime", "pm2_conf.json" ]
with this:
ENTRYPOINT ["/bin/bash","/app/startup.sh"]
and start the container without parameters, because the entrypoint will run startup.sh on each container start.
Here is a helpful link which explains the startup options:
https://dev.to/lasatadevi/docker-cmd-vs-entrypoint-34e0
Hope I didn't make any typos :)
UPDATE:
you can use
ENTRYPOINT ["/bin/bash","/app/startup.sh"]
or
CMD ["/bin/bash","/app/startup.sh"]
or omit the entrypoint and cmd and just start the container with /app/startup.sh as a parameter, i.e. docker run image-name "/app/startup.sh". I usually use this way because it gives more flexibility in what to run at debug time.
Make sure that your sh file doesn't exit until you need your container to stop.
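For example, one way to satisfy that while still doing the unzip work only at startup would be a startup.sh along these lines (a sketch; it assumes dockerStatScript.sh does the unpacking and, unlike the wrapper above, runs it before handing over to pm2 in the foreground):
#!/bin/bash
# unpack the archives first, then keep pm2 as the foreground process
./dockerStatScript.sh
exec pm2-runtime pm2_conf.json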

Expand ARG value in CMD [Dockerfile]

I'm passing in a build argument: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME} - it's not expanding the variable
I read through this post: Use environment variables in CMD
So I tried doing: CMD ["sh", "-c", "npm run start:${RUNTIME}"] - I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity, there were two problems I was running into:
1. The problem as described by Samuel P.
2. ENV values are not carried between stages in a multi-stage build.
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup where ARG variables aren't available anymore.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem you should transfer the ARG variable to an ENV variable.
add the following line before your CMD:
ENV RUNTIME_ENV ${RUNTIME_ENV}
If you want to provide a default value you can use the following:
ENV RUNTIME_ENV ${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
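Note that even with the ENV in place, an exec-form CMD never goes through a shell, so ${RUNTIME_ENV} would still appear literally there; the CMD itself also has to use the shell form (or invoke sh -c), as in the updated Dockerfile above. A minimal sketch of the combination:
ARG RUNTIME_ENV=test
ENV RUNTIME_ENV=${RUNTIME_ENV}
# shell form, so the shell expands the variable when the container starts
CMD npm run start:${RUNTIME_ENV}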

trouble with multiple parms and RUN

I'm trying to get Parcel Bundler to build assets from within a Dockerfile. But it's failing with:
🚨 No entries found.
at Bundler.bundle (/usr/local/lib/node_modules/parcel-bundler/src/Bundler.js:260:17)
ERROR: Service 'webapp' failed to build: The command '/bin/sh -c parcel build index.html' returned a non-zero code: 1
Here's my dockerfile:
FROM node:8 as base
WORKDIR /usr/src/app
COPY package*.json ./
# Development
FROM base as development
ENV NODE_ENV=development
RUN npm install
RUN npm install -g parcel-bundler
WORKDIR /usr/src/app
RUN parcel build index.html    <----- this is where it's failing!
#RUN parcel watch index.html
# Uncomment to use Parcel's dev-server
#CMD [ "npm", "run", "parcel:dev" ]
#CMD ["npm", "start"]
# Production
FROM base as production
ENV NODE_ENV=production
COPY . .
RUN npm install --only=production
RUN npm install -g parcel-bundler
RUN npm run parcel:build
CMD [ "npm", "start" ]
NOTE: I'm trying to get this to run in Development mode first.
When I "log into" the container, I found that this command does fail:
# /bin/sh -c parcel build index.html
But this works:
# parcel build index.html
And this works:
# /bin/sh -c "parcel build index.html"
But these variations in the Dockerfile still do NOT work:
RUN /bin/sh -c "parcel build index.html"
or
RUN ["/bin/sh", "-c", "parcel build index.html"]
NOTE: I also tried 'bash' instead of 'sh' and it still didn't work.
Any ideas why it's not working?
bash and sh are indeed different shells, but that shouldn't matter here. -c "command argument argument" passes the entire string to -c, whereas -c command argument argument passes only command to -c; the remaining words become positional parameters ($0, $1, ...) of that command string rather than arguments to it, so parcel ends up running with no arguments. Docker, however, passes the whole RUN string to the shell as a single -c argument, so the right invocation is indeed:
RUN parcel build index.html
or, if you prefer to explicitly do what Docker will do when it sees RUN followed by a string, you can do:
RUN [ "bash", "-c", "parcel build index.html" ]
But I don't think any of that is your problem. Looking at your Dockerfile, I think you're probably either:
missing some files that Bundler needs (you've only copied in package*.json at this point), or
missing some additional config that Bundler needs to function (I don't see you explicitly setting 'webapp', but that might be in a package*.json file).
I'd put my money on the first one.
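If the missing files are indeed the issue, a minimal fix for the development stage (a sketch, assuming index.html and the rest of the sources live at the root of the build context) would be to copy them in before running Parcel:
FROM base as development
ENV NODE_ENV=development
RUN npm install
RUN npm install -g parcel-bundler
# copy the application sources (including index.html) before bundling
COPY . .
RUN parcel build index.html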
