I have the following Dockerfile:
FROM golang:1.16-alpine
WORKDIR /app
COPY . /app
RUN go mod init auto-rebase
RUN go build
ENV PROJECT=""
CMD [ "echo", $PROJECT ]
After building and running:
docker build -t marge-auto-rebase .
docker run -e PROJECT=37473816 --rm -it marge-auto-rebase
I get the following error:
sh: 37473816: unknown operand
This runs just fine outside Docker, so what is the issue here? Is it something related to Alpine?
This is a sh parsing issue. Because $PROJECT is unquoted, CMD [ "echo", $PROJECT ] is not valid JSON, so Docker falls back to treating the whole line as a shell-form command: the container runs sh -c '[ "echo", $PROJECT ]', where [ is the shell's test builtin, which then complains about the operand 37473816. I made it work by rewriting your CMD instruction like so:
CMD ["/bin/sh", "-c", "echo $PROJECT"]
I'm trying to start an app based on an arg passed at build time.
cmd:
docker build --build-arg profile=live . -t app
Dockerfile:
FROM openjdk:11.0.7-jre-slim-buster
WORKDIR /app
ARG JAR_FILE=target/*.jar
ARG profile
ENV profile ${profile:-dev}
EXPOSE 8080
COPY ${JAR_FILE} /app/app.jar
# ENTRYPOINT ["java", "-jar", "app.jar", "--spring.profiles.active=${profile}"] --- not working
RUN echo $profile <--- here I got the value
#CMD java -jar app.jar --spring.profiles.active=${profile} --- not working
#CMD java -jar app.jar --spring.profiles.active=$profile --- not working
CMD ["sh", "-c", "node server.js ${profile}"] --- not working
When I inspect the docker image I get:
"Cmd": [
"sh",
"-c",
"node server.js ${profile}"
],
What am I missing?
Thanks
Update: it works with CMD java -jar app.jar --spring.profiles.active=$profile, and $profile has the desired value at runtime.
Environment replacement doesn't happen in CMD. Instead it happens in the shell running inside the container (in your case sh, though it's not clear why you've used the json/exec syntax to call a sh command).
Documentation on environment replacement is available from: https://docs.docker.com/engine/reference/builder/#environment-replacement
Try this Dockerfile, building it with docker build --build-arg PROFILE=uat -t app .:
FROM alpine
ARG PROFILE
ENV PROFILE ${PROFILE:-dev}
CMD ["ash", "-c", "while :; do echo $PROFILE; sleep 1; done"]
Run it with docker run -it --rm app and it prints uat every second.
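Applied to the openjdk Dockerfile from the question, the same pattern would look roughly like this (a sketch that keeps the asker's JAR_FILE and profile names and uses the shell-form CMD the asker later confirmed works):
FROM openjdk:11.0.7-jre-slim-buster
WORKDIR /app
ARG JAR_FILE=target/*.jar
ARG profile
ENV profile ${profile:-dev}
EXPOSE 8080
COPY ${JAR_FILE} /app/app.jar
CMD java -jar app.jar --spring.profiles.active=$profile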
You didn't mention what exactly the outcome is when you say "not working". Assuming you got an empty string or some other unexpected value, the environment variable might already be in use by the base image. Try another name for your environment variable, or use the non-slim version of the openjdk image.
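One way to check whether the name collides with something the base image already defines is to list the environment of the base image directly (assuming the image is available locally):
docker run --rm openjdk:11.0.7-jre-slim-buster env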
I have the problem that the following Dockerfile ends up in an exception, where it can't find /src/webui/tail -f /dev/null, which is understandable, because I only wanted to execute tail -f /dev/null.
docker build is working, docker run is failing!
How can I avoid that the WORKDIR path is added to the tail command?
Dockerfile:
FROM node:12.17.0-alpine
WORKDIR /src/webui
RUN apk update && apk add bash
CMD ["tail -f /dev/null"]
Exception:
> docker run test
internal/modules/cjs/loader.js:969
throw err;
^
Error: Cannot find module '/src/webui/tail -f /dev/null'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
at Function.Module._load (internal/modules/cjs/loader.js:842:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
System Information:
Docker Desktop (Windows 10 Pro)
Docker version 19.03.8, build afacb8b
When you give CMD (or RUN or ENTRYPOINT) in the JSON-array form, you're responsible for manually breaking up the command into "words". That is, you're running the equivalent of the quoted shell command
'tail -f /dev/null'
and the whole thing gets interpreted as one "word" -- the spaces and options are taken as part of the command name to look up in $PATH.
The most straightforward workaround to this is to remove the quoting and just use a bare string as CMD.
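For example, with the Dockerfile from the question, the bare-string (shell-form) spelling would be the following sketch; the shell then does the word splitting for you:
CMD tail -f /dev/null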
Note that the container you're building doesn't actually do anything: it doesn't include any application source code and the command you're providing intentionally does nothing forever. Aside from one running container with an idle process, you get the same effect by just not running the container at all. You typically want to copy your application code in and set CMD to actually run it:
FROM node:12.17.0-alpine
WORKDIR /src/webui
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
CMD ["yarn", "start"]
# Also works: CMD yarn start
# Won't work: CMD ["yarn start"]
The correct Dockerfile:
FROM node:12.17.0-alpine
WORKDIR /src/webui
RUN apk update && apk add bash
CMD ["tail", "-f", "/dev/null"]
So the difference is that this:
CMD ["tail -f /dev/null"]
needs to be:
CMD ["tail", "-f", "/dev/null"]
You can read more about CMD in the official Docker docs.
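For example, after rebuilding with the corrected CMD you can confirm the container stays up and attach a shell to it (the image and container names here are just placeholders):
docker build -t test .
docker run -d --name test test
docker exec -it test bash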
CMD is appended after ENTRYPOINT.
The node:12.17.0-alpine image's default entrypoint falls back to running node when the first CMD word isn't an executable, so your command effectively becomes
node "tail -f /dev/null"
which is why node reports the missing module.
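If you want to confirm what the base image's entrypoint and default command actually are, docker inspect can show them (assuming the image has been pulled locally):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' node:12.17.0-alpine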
Option 1: override ENTRYPOINT at build time:
ENTRYPOINT tail -f /dev/null
Option 2: override the entrypoint at run time:
docker run --entrypoint sh my-image
FROM alpine:3.11
COPY out/ /bin/
CMD ["command", "--flag1", "${HOST}", "--flag2", "${PORT}", "--flag3", "${AUTH_TOKEN}"]
This is the Dockerfile used. I am loading the environment variables at run time through an env file.
But the variables are not substituted when running the command. If I override the CMD and exec into the container I am able to see the envs though.
What am I missing here?
You are running CMD in exec form. Switch to shell form and it will work, because environment variable substitution needs a shell; the Dockerfile reference covers the difference between the shell and exec forms.
Your example:
CMD command --flag1 ${HOST} --flag2 ${PORT} --flag3 ${AUTH_TOKEN}
Full generic example:
Dockerfile:
FROM debian:stretch-slim
CMD echo ${env}
Run:
docker build .
docker run --rm -e env=hi <image id from build step>
hi
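The same works when the variables come from an env file, as in the original question (vars.env here is a hypothetical file containing HOST=..., PORT=... and AUTH_TOKEN=... lines):
docker run --rm --env-file vars.env <image id from build step>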
I built a docker image using the following Dockerfile:
FROM continuumio/miniconda3
ENTRYPOINT [ “/bin/bash”, “-c” ]
ADD angular_restplus.yaml angular_restplus.yaml
RUN ["conda", "env", "create", "-f", "angular_restplus.yaml"]
RUN ["/bin/bash", "-c", "source activate work"]
COPY json_to_db.py json_to_db.py
CMD ["gunicorn", "-b", "0.0.0.0:3000", "json_to_db:app"]
and the command to build it:
sudo docker build -t testimage:latest .
That runs through:
Step 5/7 : RUN ["/bin/bash", "-c", "source activate work"]
---> Running in 45c6492b1c67
Removing intermediate container 45c6492b1c67
---> 5b5604dc281d
Step 6/7 : COPY json_to_db.py json_to_db.py
---> e5d05858bed1
Step 7/7 : CMD ["gunicorn", "-b", "0.0.0.0:3000", "json_to_db:app"]
---> Running in 3ada6fd24d09
Removing intermediate container 3ada6fd24d09
---> 6ed934acb671
Successfully built 6ed934acb671
Successfully tagged testimage:latest
However, when I now try to use it, it does not work; I tried:
sudo docker run --name testimage -d -p 8000:3000 --rm testimage:latest
which seems to work fine as it prints
b963bdf97b01541ec93e1eb7
However, I cannot access the service in my browser and using
sudo docker ps -a
only shows the intermediate containers needed to create the image from above.
When I try to run it without the -d flag, I get
gunicorn: 1: [: “/bin/bash”,: unexpected operator
Does that mean that I have to change the ENTRYPOINT again? If so, to what?
The solution was to use the
"/bin/bash", "-c"
part throughout. The following works fine now (also using #larsks' input, whose answer has since been deleted):
FROM continuumio/miniconda3
COPY angular_restplus.yaml angular_restplus.yaml
SHELL ["/bin/bash", "-c"]
RUN ["conda", "env", "create", "-f", "angular_restplus.yaml"]
COPY json_to_db.py json_to_db.py
CMD source activate work; gunicorn -b 0.0.0.0:3000 json_to_db:app
Then one can run
docker build -t testimage:latest .
and finally
docker run --name testimage -d -p 3000:3000 --rm testimage:latest
If one now uses
docker ps -a
one will get the expected outcome:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
61df8ac0432c testimage:latest "/usr/bin/tini -- /b…" 16 seconds ago Up 15 seconds 0.0.0.0:3000->3000/tcp testimage
and can then access the service at
http://localhost:3000/
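To double-check that the service is reachable once the container is up (assuming gunicorn is listening as configured):
curl http://localhost:3000/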
I am building a SciGraph database on my local machine and trying to move the entire folder into Docker and run it. When I run the shell script on my local machine it runs without error, but when I add the same folder inside Docker and try to run it, it fails.
Am I doing this the right way? Here's my Dockerfile:
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
WORKDIR /usr/share/SciGraph/SciGraph-services
RUN pwd
EXPOSE 9000
CMD ['./run.sh']
When I try to run it, I get this error:
docker run -p9005:9000 test
/bin/sh: 1: [./run.sh]: not found
If I run it using the command below, it works:
docker run -p9005:9000 test -c "cd /usr/share/SciGraph/SciGraph-services && sh run.sh"
Even though I already marked the directory as WORKDIR, running the script inside Docker using CMD throws an error.
For SciGraph, as described in their README, you need to run mvn install before you run their services. You can set your shell to bash and use a docker-compose file to run the docker image, as shown below.
Dockerfile
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/share/SciGraph
RUN mvn -DskipTests -DskipITs -Dlicense.skip=true install
RUN cd /usr/share/SciGraph/SciGraph-services && chmod a+x run.sh
EXPOSE 9000
Build the SciGraph docker image by running:
docker build . -t scigraph_test
docker-compose.yml
version: '2'
services:
  scigraph-server:
    image: scigraph_test
    working_dir: /usr/share/SciGraph/SciGraph-services
    command: bash run.sh
    ports:
      - 9000:9000
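Then start the service with (run from the directory containing docker-compose.yml):
docker-compose up -d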
Add a / after SciGraph-services and change the command to "sh run.sh"; also look into the file permissions on run.sh.
It is likely that your run.sh doesn't have the #!/bin/bash header, so it cannot be executed just by running ./run.sh. Nevertheless, always prefer to run scripts as /bin/bash foo.sh or /bin/sh foo.sh in Docker, especially because you don't know what changes have been made to files in images downloaded from public repositories.
So, your CMD statement would be:
CMD /bin/bash -c "/bin/bash run.sh"
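If you prefer to keep ./run.sh directly executable instead, the alternative is to make sure the script's first line is the #!/bin/bash shebang and that the executable bit is set (a sketch; it assumes the base image defines no conflicting ENTRYPOINT):
RUN chmod +x run.sh
CMD ["./run.sh"]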
You have to add the shell and the executable to the CMD array:
CMD ["/bin/sh", "./run.sh"]