I created a Dockerfile like below:
FROM alpine:latest
WORKDIR /
COPY ./init.sh .
CMD ["/bin/sh", "./init.sh"]
and a script file init.sh like below:
#!/bin/sh
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
and I built an image using these:
docker build . -t test_build
and ran it as
docker container run --rm --name test_run -it test_build sh
where the folder contains only the two files above.
In the container, I can see the init.sh file with the executable (x) bit set, just as on the host.
However, there is no mount_point folder, which should have been created by
CMD ["/bin/sh", "./init.sh"]
Note that when I run any of the commands below in the container, it successfully creates mount_point as expected:
sh init.sh
or
/bin/sh init.sh
and
sh -c ./init.sh
Could you tell me where I made mistakes?
When you do
docker container run --rm --name test_run -it test_build sh
the sh at the end overrides the CMD definition in the image, so the CMD isn't run.
To verify that your script works, you can change it to something like this:
#!/bin/sh
echo Hello from the script!
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
ls -al ./mount_point
Then run the image without the sh and you should see the 'Hello' message and the directory listing from the ./mount_point directory.
docker container run --rm --name test_run test_build
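You can also confirm what the image runs by default by inspecting its CMD:
docker image inspect --format '{{.Config.Cmd}}' test_build
This prints [/bin/sh ./init.sh], which is exactly the command your trailing sh replaced.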
Related
Right now I am starting my Docker container with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to execute that script with what I expected to be the right command:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container, you also need to allocate a pseudo-TTY if you want to see the results (and not detach with -d).
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
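If you specifically want the script's output to appear in docker container logs while keeping -d, one option (a sketch, using the /proc filesystem inside the container) is to redirect the output to the main process's stdout, since the logs only capture what PID 1 writes:
docker exec -d docker_verify sh -c "/script.sh > /proc/1/fd/1 2>&1"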
I want to copy a file from a container to my local machine. The file is generated after a Python script executes, but because of the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output), you can run docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will land in the output directory under your current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then use docker cp test:/app/example.json . to copy the file to the current directory on your local filesystem.
If running the container in the background is unnecessary, you can stream the file to stdout; note that the ENTRYPOINT has to be overridden so that cat is what actually runs, and the -t flag should be dropped so the TTY doesn't mangle the output:
docker run --rm --entrypoint cat [image] /app/example.json > out_example.json
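It is also worth knowing that docker cp works on stopped containers, so you can simply let the container run to completion (without --rm) and copy the file afterwards:
docker run --name test [image]
docker cp test:/app/example.json .
docker rm test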
I have a list of commands which I need to issue one by one to a running docker container. However, when I "cd" in the container, it's not working as expected. For example:
docker run -di --name example alpine:latest
for CMD in 'mkdir -p example && touch example/file' 'cd example' 'ls'
do
docker exec -w='/root' example sh -c "$CMD"
done
This prints example instead of file. How should I properly execute a series of statements while preserving the working directory between them? Preferably, is it possible to do this without concatenating all the commands?
I think you should use this format:
dingrui@gdcni:~/onie$ docker exec -w /root example sh -c 'mkdir -p example; touch example/file; cd example; ls'
file
or write these commands to a script, mount it into the container, and run it there:
dingrui@gdcni:~/onie$ docker run -itd -w /root -v $(pwd):/app --name example busybox /app/test.sh
12f7f2b55182bd18c45ce31e03390544adaedc1a2dd923d3bc4293b214301650
dingrui@gdcni:~/onie$ docker logs example
file
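For reference, a test.sh along these lines reproduces the output above (the script itself isn't shown here, so this is an assumed version; it must be executable, e.g. chmod +x test.sh):
#!/bin/sh
# everything runs in a single shell process, so cd persists between commands
mkdir -p example
touch example/file
cd example
ls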
Please help. When I want to go into a container, it says
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER Anton Lapitski <a.lapitski@godeltech.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./ /usr/src/app
EXPOSE 80
ENTRYPOINT ["/bin/sh", "-c", "/usr/src/app/entry.sh"]
Starting script - start.sh:
sudo docker build -t starter .
sudo docker run -t -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter
entry.sh script:
echo "Hello World"
ls -l
pwd
if mountpoint -q /mounted-directory
then
echo "mounted"
else
echo "not mounted"
fi
sudo docker ps -a gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90599013c666 starter "/bin/sh -c /usr/src…" 18 minutes ago Exited (0) 18 minutes ago thirsty_wiles
And most importantly:
sudo docker exec -it 90599013c666 bash
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
Could you please tell me what I am doing wrong?
P.S. Adding the -d flag when running didn't help.
Once the ENTRYPOINT completes (in any form), the container exits.
Once the container exits, you can't docker exec into it.
If you want to get a shell on the image you just built to poke around in it, you can
sudo docker run --rm -it --entrypoint /bin/sh starter
To make this slightly easier to run, you might change ENTRYPOINT to CMD in your Dockerfile. (Docker will run the ENTRYPOINT passing the CMD as command-line arguments; or if there is no entrypoint just run the CMD.)
...
RUN chmod +x ./app.sh
CMD ["./app.sh"]
Having done that, you can more easily override the command
sudo docker run --rm -it starter /bin/sh
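If all you want is to see what entry.sh printed before the container exited, the logs of the stopped container are still available (thirsty_wiles is the container name from the docker ps -a output above):
sudo docker logs thirsty_wiles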
For a stopped container, you can try
docker start container_id
and then
docker exec -ti container_id bash
You cannot exec into the container because your ENTRYPOINT script has finished and the container has stopped. Try this:
Remove the ENTRYPOINT from your Dockerfile
Rebuild the image
run it with sudo docker run -it -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter sh
The key is the -i flag and the sh at the end of the command.
I tried these two commands and it works:
sudo docker start <container_id>
docker exec -it <containerName> /bin/bash
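Note that this only helps if the container's main process keeps running; with the ENTRYPOINT above, a restarted container will exit again as soon as entry.sh finishes. One way to keep a container alive for debugging (a sketch that overrides the entrypoint) is:
sudo docker run -d --name debug --entrypoint sleep starter infinity
sudo docker exec -it debug bash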
How can a script run from the CMD instruction access a --volumes-from directory?
$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh via the CMD instruction? Or is it impossible?
Thank you~
Modify your run.sh:
-f checks whether a file exists; in this case use -d, which checks whether a directory exists.
Check if a directory exists in a shell script
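With that change, run.sh becomes:
#!/bin/bash
# -d tests for a directory; -f only matches regular files
if [[ -d "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi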
Furthermore, if you only want a data volume container, you don't need the -d flag or the /bin/sh command.
The volume container run command then becomes:
$ sudo docker run --name ext -v /external busybox
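Putting it together, rebuilding the app image and re-running it should now report success (a sketch; your container ID will differ):
$ sudo docker build -t app .
$ sudo docker run --name ext -v /external busybox
$ sudo docker run -d --volumes-from ext app
$ sudo docker logs <container_id>
success!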