I have a question: when I run the docker command below, the container is up, but the COMMAND column of docker ps shows something other than what I expect.
I think it should show 'node /app/server.js' in the COMMAND column.
docker container run -e TZ=Asia/Karachi -d -p 9135:9135 myapi:2.4
FROM node:10.16.0
WORKDIR /app
COPY package.json /app
ENV NODE_ENV=production
RUN npm install
COPY . /app
VOLUME ["/app/logs"]
CMD ["node", "/app/server.js"]
EXPOSE 9135
A container's main process is the entrypoint plus the command.
So what you are seeing in the COMMAND column is the first part of that process (i.e., the entrypoint).
Your expectation is reasonable, but the official node image defines an ENTRYPOINT, and the CMD you override in your Dockerfile (CMD ["node", "/app/server.js"]) is just passed as arguments to that entrypoint.
So, if you change your Dockerfile to
FROM node:alpine
WORKDIR /app
COPY . /app
ENTRYPOINT ["node", "/app/app.js"]
and then run docker ps, the COMMAND column will show "node /app/app.js".
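If you want to verify this for any image, docker inspect can print the configured entrypoint and command (using the image tag from your question):
docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' myapi:2.4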
I'm using this command to build and run my docker container, passing in the argument ./target/myapp-SNAPSHOT.jar, which is the Spring Boot app I wish to run:
docker build -t foo . && docker run -it foo -e J_FILE=./target/myapp-SNAPSHOT.jar
I receive this error:
standard_init_linux.go:211: exec user process caused "exec format error"
Dockerfile:
FROM adoptopenjdk/openjdk8:alpine-jre
LABEL app.name="test"
LABEL app.type="test"
ARG J_FILE
ADD ${J_FILE} /myapp.jar
COPY J_FILE /myapp.jar
COPY startup.sh /
RUN chmod +x startup.sh;
ENTRYPOINT ["/startup.sh"]
EXPOSE 8080
startup.sh:
#!/bin/sh
JAVA_HEAP_INITIAL=384m
JAVA_HEAP_MAX=768m
JAVA_METASPACE_MAX=128m
java -jar /app.jar
If I change the Dockerfile to explicitly copy the jar file:
FROM adoptopenjdk/openjdk8:alpine-jre
LABEL app.name="test"
LABEL app.type="test"
COPY ./target/myapp-SNAPSHOT.jar /myapp.jar
COPY startup.sh /
RUN chmod +x startup.sh;
ENTRYPOINT ["/startup.sh"]
EXPOSE 8080
The app starts as expected.
The ARG instruction in a Dockerfile is meant for configuration when building a docker image. It has no effect when you run the container.
Therefore, you should update your command to be like this:
docker build --build-arg J_FILE=./target/myapp-SNAPSHOT.jar -t foo . && docker run -it foo
You can also update your Dockerfile to be like this:
FROM adoptopenjdk/openjdk8:alpine-jre
LABEL app.name="test"
LABEL app.type="test"
ARG J_FILE=./target/myapp-SNAPSHOT.jar
COPY ${J_FILE} /myapp.jar
COPY startup.sh /
RUN chmod +x startup.sh;
ENTRYPOINT ["/startup.sh"]
EXPOSE 8080
Note the two changes in the file above:
1. Use COPY instead of ADD.
2. Set a default value for J_FILE (./target/myapp-SNAPSHOT.jar) so the build still works when --build-arg is not passed to the docker build command.
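For example, both of the following should now work (the alternate jar path here is purely hypothetical, just to show overriding the default):
docker build -t foo .
docker build --build-arg J_FILE=./target/another-app.jar -t foo .
docker run -it foo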
I'm passing a build argument with: docker build --build-arg RUNTIME=test
In my Dockerfile I want to use the argument's value in the CMD:
CMD ["npm", "run", "start:${RUNTIME}"]
Doing so results in this error: npm ERR! missing script: start:${RUNTIME}. It's not expanding the variable.
I read through this post: Use environment variables in CMD
So I tried CMD ["sh", "-c", "npm run start:${RUNTIME}"], but I end up with this error: /bin/sh: [sh,: not found
Both errors occur when I run the built container.
I'm using the node alpine image as a base. Anyone have ideas how to get the argument value to expand within CMD? Thanks in advance!
Full Dockerfile:
FROM node:10.15.0-alpine as builder
ARG RUNTIME_ENV=test
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci
RUN npm run build
FROM node:10.15.0-alpine
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD ["npm", "run", "start:${RUNTIME_ENV}"]
Update:
Just for clarity, there were two problems I was running into:
1. The problem as described by Samuel P. below.
2. ARG/ENV values are not carried across stages of a multi-stage build unless each stage inherits them from a shared base image.
Here's the working Dockerfile where I'm able to expand environment variables in CMD:
# Here we set the build-arg as an environment variable.
# Setting this in the base image allows each build stage to access it
FROM node:10.15.0-alpine as base
ARG ENV
ENV RUNTIME_ENV=${ENV}
FROM base as builder
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY . .
RUN npm ci && npm run build
FROM base
COPY --from=builder /usr/app/.npmrc /usr/app/package*.json /usr/app/server.js ./
COPY --from=builder /usr/app/config ./config
COPY --from=builder /usr/app/build ./build
RUN npm ci --only=production
EXPOSE 3000
CMD npm run start:${RUNTIME_ENV}
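With that Dockerfile, the runtime target is chosen at build time, e.g. (the myapp tag is just an example name):
docker build --build-arg ENV=test -t myapp .
docker run -it myapp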
The problem here is that ARG params are available only during image build.
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
https://docs.docker.com/engine/reference/builder/#arg
CMD is executed at container startup where ARG variables aren't available anymore.
ENV variables are available during build and also in the container:
The environment variables set using ENV will persist when a container is run from the resulting image.
https://docs.docker.com/engine/reference/builder/#env
To solve your problem, you should transfer the ARG variable to an ENV variable.
Add the following line before your CMD:
ENV RUNTIME_ENV ${RUNTIME_ENV}
If you want to provide a default value, you can use the following:
ENV RUNTIME_ENV ${RUNTIME_ENV:-default_value}
Here are some more details about the usage of ARG and ENV from the docker docs.
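One more detail: even with the ENV set, the exec-form CMD ["npm", "run", "start:${RUNTIME_ENV}"] still won't expand the variable, because no shell is involved; use the shell form instead. A minimal sketch combining both fixes, using the names from the question:
ARG RUNTIME_ENV=test
ENV RUNTIME_ENV=${RUNTIME_ENV}
CMD npm run start:${RUNTIME_ENV}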
I need the test result files generated in the container to be accessible on the host. I know that I need to create a volume mapping between host and container, like below, but I get nothing written to the host.
docker run --rm -it -v <host_directory_path>:<container_path> imagename
Dockerfile:
FROM microsoft/dotnet:2.1-sdk AS builder
WORKDIR /app
COPY ./src/MyApplication.Program/MyApplication.Program.csproj ./src/MyApplication.Program/MyApplication.Program.csproj
COPY nuget.config ./
WORKDIR ./src/MyApplication.Program/
RUN dotnet restore
WORKDIR /app
COPY ./src ./src
WORKDIR ./src/MyApplication.Program/
RUN dotnet build MyApplication.Program.csproj -c Release
FROM builder as tester
WORKDIR /app
COPY ./test/MyApplication.UnitTests/MyApplication.UnitTests.csproj ./test/MyApplication.UnitTests/MyApplication.UnitTests.csproj
WORKDIR ./test/MyApplication.UnitTests/
RUN dotnet restore
WORKDIR /app
COPY ./test ./test
WORKDIR ./test/MyApplication.UnitTests/
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura
ENTRYPOINT ["dotnet", "reportgenerator", "-reports:coverage.cobertura.xml", "-targetdir:codecoveragereports", "-reportTypes:htmlInline"]
The command at the entry point is working correctly. It is writing the output to the MyApplication.UnitTests/codecoveragereports directory, but not to the host directory.
My docker run looks as follows:
docker run --rm -it -v /codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports routethink.tests:latest
What could I be doing wrong?
Looks like a permission issue.
-v /codecoveragereports:/app/***/codecoveragereports mounts a host directory directly under the root /, which is dangerous and which you may not have permission to create.
It's better to mount a local path, like -v $PWD/codecoveragereports:/app/***/codecoveragereports, where $PWD is an environment variable holding the current working directory.
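Applied to the command from the question, that would be:
docker run --rm -it -v "$PWD/codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports" routethink.tests:latest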
Here is my Dockerfile:
# tag this stage so we can refer to it later
FROM node:alpine as builder
WORKDIR /home/server
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "build"]
# in the second stage, use another base image
FROM nginx
# copy files built in the previous stage
COPY --from=builder /home/server/build usr/share/nginx/html
When the image is built on my local machine with 'docker build .' it works fine, but when I try to deploy the project to Zeit I get the following error:
Step 8/8 : COPY --from=builder /home/server/build usr/share/nginx/html
> COPY failed: stat /var/lib/docker/overlay2/a114ae6aae803ceb3e3cffe48fa1694d84d96a08e8b84c4974de299d5fa35543/merged/home/server/build: no such file or directory
What could it be? Thanks.
Your first stage doesn't actually RUN the build command, so the build directory is empty. Change the CMD line to a RUN line.
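A sketch of the corrected first stage, identical except that the last line is RUN instead of CMD, so the build happens while the image is being built:
FROM node:alpine as builder
WORKDIR /home/server
COPY package.json .
RUN npm install
COPY . .
RUN npm run build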
One tip: each separate line of the docker build sequence produces its own intermediate layer, and each layer is a runnable Docker image. You'll see output like
Step 6/8: CMD ["npm", "run", "build"]
---> Running in 02071fceb21b
---> f52f38b7823e
That last number is a valid Docker image ID and you can
docker run --rm -it f52f38b7823e sh
to see what's in that image.
I have a Dockerfile which starts with:
FROM puppet/puppetserver
When I look at the source image, it is built from another:
FROM puppet/puppetserver-standalone:5.0.0
The second contains an ENTRYPOINT and a CMD:
ENTRYPOINT ["dumb-init", "/docker-entrypoint.sh"]
CMD ["foreground" ]
In my own container I end with:
COPY start.sh /
CMD /start.sh
The CMD runs, but with unexpected results:
puppetserver: '/bin/sh' is not a puppetserver command. See 'puppetserver --help'.
I know that I have bash available because I'm running commands.sh with RUN before the CMD in the same Dockerfile.
How do CMD commands stack when inheriting from base images?
Is my CMD not run as a normal bash command and instead run in conjunction with the CMD of the base image?
You need to reset the ENTRYPOINT inherited from the parent image:
COPY start.sh /
ENTRYPOINT []
CMD /start.sh
See https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container.
and https://docs.docker.com/engine/reference/builder/#cmd
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
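To spell out why the original failed: the shell-form CMD /start.sh is wrapped as /bin/sh -c /start.sh and appended to the inherited entrypoint, so the container's main process is roughly:
dumb-init /docker-entrypoint.sh /bin/sh -c /start.sh
and the entrypoint script complains that '/bin/sh' is not a puppetserver command. After ENTRYPOINT [], the main process is simply:
/bin/sh -c /start.sh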