Docker make .sh executable and run it - docker

I'm trying to give executable permission to my script inside a Docker image and run it. I don't want to set chmod +x for it in the Dockerfile.
I tried
docker run img /bin/bash -c "chmod +x ../test/test.sh; ../test/test.sh"
but I got "/bin/bash: bad interpreter: Text file busy",
and I can't just run these two commands in separate containers:
docker run -d img chmod +x ../test/test.sh
docker run -d img ../test/test.sh
=> starting container process caused "exec: \"../test/testing.sh\": permission denied"
so I would need to somehow bind these two containers together.

Text file busy means that something is already using the file.
Normally this would work
docker run --rm -it alpine sh -c 'echo "echo it works" > test.sh && chmod +x test.sh && ./test.sh'
With the second command you create two new containers that are completely separate. If you want to execute something in a running container, you can use docker exec -it <container id or name> <command, e.g. bash>
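For example, assuming a container started from the question's image is kept running under the name demo (the name is just a placeholder):
docker run -dt --name demo img
docker exec -it demo bash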

You don't need to set permissions if you just pass your script as a parameter:
docker run -d IMAGE /bin/bash ../test/test.sh
(add -i and/or -t if you need them)
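A quick local illustration of why this works (the file name demo.sh is just an example): the interpreter reads the script as an ordinary file, so the execute bit is never checked.
echo 'echo it works' > demo.sh
bash demo.sh      # prints "it works" without any chmod
./demo.sh         # fails with "Permission denied" because the execute bit is missing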

OK, I've figured it out:
First I created a container:
docker run --name CONTAINER -dt IMAGE
then executed my commands:
docker exec CONTAINER chmod +x ../test/test.sh
docker exec CONTAINER ../test/test.sh
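If you prefer a single step, the two exec calls can also be combined (same container and paths as above):
docker exec CONTAINER /bin/bash -c "chmod +x ../test/test.sh && ../test/test.sh"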

Related

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container you also need to allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.

Save current directory between docker exec commands

I have a list of commands which I need to issue one by one to a running Docker container. However, when I "cd" in the container, it's not working as expected. For example:
docker run -di --name example alpine:latest
for CMD in 'mkdir -p example && touch example/file' 'cd example' 'ls'
do
docker exec -w='/root' example sh -c "$CMD"
done
will print out example instead of file. How should I properly execute a series of statements while preserving the working directory between their executions? Preferably, if it is possible, without concatenating all the commands.
I think you should use this format:
dingrui@gdcni:~/onie$ docker exec -w /root example sh -c 'mkdir -p example; touch example/file; cd example; ls'
file
or write these commands to a script, then mount it into the container and run it there:
dingrui@gdcni:~/onie$ docker run -itd -w /root -v $(pwd):/app --name example busybox /app/test.sh
12f7f2b55182bd18c45ce31e03390544adaedc1a2dd923d3bc4293b214301650
dingrui@gdcni:~/onie$ docker logs example
file
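The mounted test.sh is not shown above; presumably it contains something like the following, and the working directory naturally persists between lines because they all run in one shell:
#!/bin/sh
mkdir -p example
touch example/file
cd example
ls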

Bash on alpine linux

I cannot get a bash shell into an alpine container.
I'm starting with this Alpine container:
gitlab/gitlab-runner:alpine
I'm adding a bash shell and other configs in this dockerfile:
FROM gitlab/gitlab-runner:alpine
ENV http_proxy=<corporate_proxy>
ENV https_proxy=<corporate_proxy>
RUN apk add vim wget curl nmap lsof bash bash-completion which
CMD ["/bin/bash"]
RUN ls -l /bin # THIS WORKS, I CAN SEE 'BASH' SHOW UP WITH 755 OWNED BY ROOT
RUN which bash # THIS ALSO WORKS
RUN /bin/bash -c "echo hi" # YES, THIS WORKS TOO
However when spawning the container to use a bash shell via:
docker run -idt <image_name> /bin/bash, the container fails to start with FATAL: Command /bin/bash not found.
Note that these other options also fail for me when spawning a container: ash, sh, /bin/ash, /bin/sh, etc.
Running the container with --user root also does not work.
The entrypoint is a GitLab Runner script. Change it to bash to get shell access:
$ docker run -it --entrypoint /bin/bash <image_name>
It turns out something funky was being set in the container's entrypoint. I need to remember to override the entrypoint when spawning a new container via docker run.
Adding this line in the Dockerfile fixed the problem:
ENTRYPOINT []
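In context, a trimmed sketch of the Dockerfile above with that line added, so that docker run starts bash instead of the runner's entrypoint script:
FROM gitlab/gitlab-runner:alpine
RUN apk add bash
ENTRYPOINT []
CMD ["/bin/bash"]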
1- Verify that your container is fully up by running:
docker ps
then enter a bash shell in it like this:
docker exec -it <<container_name>> bash
Alpine doesn't have bash, use sh instead:
docker exec -it 64103333b32 /bin/sh

Docker Container is not running

Please help. When I want to go into a container it says
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
My DockerFile:
FROM ubuntu:16.04
MAINTAINER Anton Lapitski <a.lapitski@godeltech.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./ /usr/src/app
EXPOSE 80
ENTRYPOINT ["/bin/sh", "-c", "/usr/src/app/entry.sh"]
Starting script - start.sh:
sudo docker build -t starter .
sudo docker run -t -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter
entry.sh script:
echo "Hello World"
ls -l
pwd
if mountpoint -q /mounted-directory
then
echo "mounted"
else
echo "not mounted"
fi
sudo docker ps -a gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90599013c666 starter "/bin/sh -c /usr/src…" 18 minutes ago Exited (0) 18 minutes ago thirsty_wiles
And most important:
sudo docker exec -it 90599013c666 bash
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
Please could you tell what I am doing wrong?
P.S. Adding the -d flag when running did not help.
Once the ENTRYPOINT completes (in any form), the container exits.
Once the container exits, you can't docker exec into it.
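A minimal reproduction of that behaviour (alpine is used purely as an example image):
docker run --name demo alpine echo "entrypoint finished"
docker exec -it demo sh        # fails: "Container ... is not running"
docker rm demo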
If you want to get a shell on the image you just built to poke around in it, you can
sudo docker run --rm -it --entrypoint /bin/sh starter
To make this slightly easier to run, you might change ENTRYPOINT to CMD in your Dockerfile. (Docker will run the ENTRYPOINT passing the CMD as command-line arguments; or if there is no entrypoint just run the CMD.)
...
RUN chmod +x ./app.sh
CMD ["./app.sh"]
Having done that, you can more easily override the command
sudo docker run --rm -it starter /bin/sh
You can try
docker start container_id and then docker exec -ti container_id bash for a stopped container.
You cannot exec into the container, because your ENTRYPOINT script has finished and the container has stopped. Try this:
Remove the ENTRYPOINT from your Dockerfile
Rebuild the image
run it with sudo docker run -it -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter sh
The key is the -i flag and the sh at the end of the command.
I tried these two commands and they work:
sudo docker start <container_id>
docker exec -it <containerName> /bin/bash

docker container volumes from directory access in CMD instruction

$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction? Is it impossible?
Thank you~
Modify your run.sh:
-f checks whether a file exists; in this case use -d, which checks whether a directory exists.
Check if a directory exists in a shell script
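Applied to the script above, the corrected run.sh would look roughly like this:
#!/bin/bash
if [[ -d "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi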
Furthermore, if you only want to make a volume container, you do not need to add -d or /bin/sh.
The volume-container run command then becomes:
$ sudo docker run --name ext -v /external busybox
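With the -d check in place and the simplified volume container, rebuilding and re-running should report success; roughly:
$ sudo docker build -t app .
$ sudo docker run --volumes-from ext app    # should print: success!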
