I have a shell script containing a call to docker run. The script looks like this:
#!/bin/sh
#Local directory where your data is.
PATH_TO_EXPORTS="/home/user/data"
iType_NAME="iType.csv"
colIDs="X,Y,W,Z"
FLAG="FALSE"
#These next three variables don't need to, and should not be changed.
#They refer to locations within the docker container - DO NOT CHANGE !!!
IMG_EXPORTS_DIR="/home/rstudio/project/exports"
#Execute docker run to carry out your analysis
docker run -v $PATH_TO_EXPORTS:$IMG_EXPORTS_DIR \
-e PATH_TO_EXPORTS=$IMG_EXPORTS_DIR \
-e iType_NAME=$iType_NAME \
-e colIDs=$colIDs \
-e FLAG=$FLAG \
--user "$(id -u)" \
my_docker_image:3.0
If I try to run this script from the command line by calling sh myScript.sh, I get the following error:
docker: invalid reference format.
See 'docker run --help'.
However, if I manually run the docker run ... command on the command line, everything works as expected.
Does anybody know why this docker run ... command fails when it is inside a shell script?
Many thanks in advance!
-M.
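The script as posted looks syntactically fine, so one likely culprit (a guess, since it is not visible in the text itself) is Windows-style CRLF line endings: a stray carriage return after a trailing backslash breaks the line continuation and leaves docker with a mangled argument where it expects the image name, which produces exactly this invalid reference format error. Quoting the variable expansions is also worth doing, in case any of them are empty or contain spaces:
# See whether the file has Windows line endings (look for ^M markers):
cat -A myScript.sh | grep '\^M'
# Strip them in place (GNU sed; dos2unix works too):
sed -i 's/\r$//' myScript.sh
# And quote the expansions in the docker run call:
docker run -v "$PATH_TO_EXPORTS:$IMG_EXPORTS_DIR" \
    -e PATH_TO_EXPORTS="$IMG_EXPORTS_DIR" \
    -e iType_NAME="$iType_NAME" \
    -e colIDs="$colIDs" \
    -e FLAG="$FLAG" \
    --user "$(id -u)" \
    my_docker_image:3.0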
Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container you need to also allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
I want to run echo "tools path is: $TOOLSPATH" in my docker image, but make sure the variable doesn't get expanded on my machine before being sent to docker. I am not sure how to avoid variable expansion.
docker run -v `pwd`:/root -it --rm foobar echo 'tools path is: $TOOLSPATH'
> tools path is: $TOOLSPATH
docker run -v `pwd`:/root -it --rm foobar echo "tools path is: $TOOLSPATH"
> tools path is:
There is one way to run echo $VAR inside the container and print it in your terminal. You just need to pass an interpreter too.
docker run alpine sh -c 'echo my HOME: $HOME'
my HOME: /root
PS: I used alpine as a test. If sh doesn't work, you can try bash instead.
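Applied to the command from the question, that would look something like this (a sketch only, assuming the foobar image defines TOOLSPATH, e.g. via an ENV instruction):
# Single quotes keep the host shell from expanding $TOOLSPATH;
# sh -c gives the container a shell that expands it there instead.
docker run -v "$(pwd)":/root -it --rm foobar sh -c 'echo "tools path is: $TOOLSPATH"'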
We are trying to store the container names in my Makefile, but I see the error below when executing the build; could someone please advise? Thanks.
.PHONY: metadata
metadata: .env1
docker pull IMAGE_NAME
docker run $IMAGE_NAME;
ID:= $(shell docker ps --format '{{.Names}}')
#echo ${ID}
docker cp ${ID}:/app/.env .env2
The container names are not shown in the "ID" variable below when executing the Makefile from Jenkins:
ID:=
/bin/sh: ID:=: command not found
There are a couple of things you can do in terms of pure Docker mechanics to simplify this.
You can specify an alternate command when you docker run an image: anything after the image name is taken as the command to run. For instance, you can cat the file as the main container command, and replace everything you have above with:
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --rm \
-e "ARTIFACTORY_USER=${ARTIFACTORY_CREDENTIALS_USR}" \
-e "ARTIFACTORY_PASSWORD=${ARTIFACTORY_CREDENTIALS_PSW}" \
--env-file .env1 \
"${ARTIFACTDATA_IMAGE_NAME}" \
cat /app/.env \
> $@
(It is usually better to avoid docker cp, docker exec, and other imperative-type commands; it is fairly inexpensive and better practice to run a new container when you need to.)
If you can't do this, you can docker run --name with a name of your choice, and then use that container name in the docker cp command.
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
docker run --name getmetadata ...
docker cp getmetadata:/app/.env $@
docker stop getmetadata
docker rm getmetadata
If you really can't avoid this at all, each line of the Makefile runs in a separate shell. On the one hand this means you need to join together lines if you want variables from one line to be visible in a later line; on the other, it means you have normal shell functionality available and don't need to use the GNU Make $(shell ...) extension (which evaluates when the Makefile is loaded and not when you're running the command).
.PHONY: getmetadata
getmetadata: .env2
.env2: .env1
# Note here:
# $$ escapes $ for the shell
# Multiple shell commands joined together with && \
# Beyond that, pure Bourne shell syntax
ID=$$(docker run -d ...) && \
echo "$$ID" && \
docker cp "$$ID:/app/.env" "$#"
Currently I am trying to run the following command using sh from a Jenkinsfile:
sh "docker run -e key1=${value1} -e key2=\\'run-cli --users ${USERS} --names ${NAMES}\\' -i -t --network host ${DOCKER_HOST}/path/image:tag"
However, it fails with unknown flag: --users each time. It seems like docker isn't treating it as an environment variable but instead reading it as part of its own command. I tried every possible combination of quotes and escape sequences, but it doesn't work. It runs perfectly fine when run directly in the console, but fails when run through Jenkins. Any workaround to get this working?
You can use the answer I gave here
The idea is to use:
sh ("""
docker run -e key1=${value1} -e key2="run-cli --users ${USERS} --names ${NAMES}" -i -t --network host ${DOCKER_HOST}/path/image:tag
""")
I have a very simple script called myscript.sh:
echo "this is test " > /tmp/myfile.txt
echo $TEST >> /tmp/myfile.txt
I have stored this script on my disk and plan to pass it to the container as a volume, like this:
docker run -d --name test \
-v /home/docker/test/myscript.sh:/tmp/myscript.sh \
-e TESTING=just-a-test \
test
The Dockerfile looks like this:
FROM ubuntu
CMD ["bash", "/tmp/myscript.sh"]
So the thought process is to get this script executed and get the result as the file myfile.txt, which would contain the value passed with -e.
Instead I am getting:
docker@boot2docker:~/test$ docker exec -it test /bin/bash
Error response from daemon: Container test is not running
This means that even this simplest of programs did not execute as a container.
I could not figure it out.
The container ran, executed the script, then exited. A container only runs as long as its main process. When that stops, the container stops.
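You can confirm this from the host (a sketch, assuming the container from the question is still present under the name test):
# STATUS will read "Exited (...)" rather than "Up":
docker ps -a --filter name=test
# docker cp also works on stopped containers, so the file the
# script wrote can still be copied out and inspected:
docker cp test:/tmp/myfile.txt .
cat myfile.txt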
A simpler test would be to change your test script to:
#!/bin/bash
echo $TEST
I would change your Dockerfile to copy the file in and remove the "bash" part of the CMD instruction:
FROM ubuntu
COPY myscript.sh /myscript.sh
CMD /myscript.sh
Now rebuild and run:
$ docker build -t test .
...
$ docker run -e TEST=VAL test
...
The container should echo the value of the test variable and exit. (I haven't tested any of this, so apologies for any mistakes).
The answer to this question is to use ENTRYPOINT instead of CMD.
I did some research and came up with a solution that looks like this:
ENTRYPOINT ["bash", "<script>"]
To run the script, just use:
docker run -d --name <container-name> [--privileged] -p <host-port>:<container-port> \
    -v /script.sh:/tmp/script.sh \
    <image>
where -v mounts the script from the host into the container. You can also use wget to fetch the script, like most people do, and execute it at runtime.
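Putting that together for this question, the whole flow might look like this (a sketch only; it assumes the image is rebuilt as test with the ENTRYPOINT above pointing at /tmp/myscript.sh, and that the variable name passed with -e matches the one the script reads):
docker build -t test .
docker run -d --name test \
    -v /home/docker/test/myscript.sh:/tmp/myscript.sh \
    -e TEST=just-a-test \
    test
# The container exits as soon as the script finishes; the file it
# wrote can still be copied out of the stopped container:
docker cp test:/tmp/myfile.txt .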
I appreciate all the people who tried to solve this query.