
docker container volumes from directory access in CMD instruction
$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction, or is it impossible?
Thank you~

Modify your run.sh.
-f checks whether a file exists; in this case use -d, which checks whether a directory exists. See:
Check if a directory exists in a shell script
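For example, here is the asker's run.sh with only the test changed from -f to -d (a minimal sketch; everything else as posted):
#!/bin/bash
if [[ -d "/external" ]]
then
echo 'success!'
else
echo "Sorry, I can't find /external..."
fi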
Furthermore, if you only want a data-volume container, you don't need -d or /bin/sh.
The volume-container run command then changes to:
$ sudo docker run --name ext -v /external busybox
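With run.sh fixed to use -d, rebuilding and re-running with the volume container attached should print the success message. A sketch (run in the foreground here, without -d, so the output appears directly instead of in docker logs):
$ sudo docker build -t app .
$ sudo docker run --volumes-from ext app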

Related

Docker volume mapping to current working directory does not work

Docker version 20.10.21
The docker run command with the -v option works as expected when the destination path is anything other than /app, but when the destination path is /app it does not.
This command works as expected:
docker run -d -v ${pwd}:/app2 react-app
This command does not work as expected:
docker run -d -v ${pwd}:/app react-app
As seen in the screenshot, there is no port listed for the second container.
Here is the Dockerfile content:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
RUN mkdir data
COPY package*.json .
RUN npm install
COPY . .
ENV API_URL=http://api.myapp.com/
EXPOSE 3000
CMD [ "npm", "start" ]
You are running npm install in /app in the Dockerfile, but then at runtime you are mounting pwd over the files you installed in /app during the build process. Don't install your dependencies in /app during the build if you want to mount to /app at runtime.
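A quick way to see the effect (a sketch, assuming a Unix shell and the react-app image built from the Dockerfile above):
docker run --rm react-app ls /app
docker run --rm -v "$(pwd)":/app react-app ls /app
The first listing includes the node_modules installed during the build; the second shows only what is in the host directory, because the bind mount hides the image's /app contents.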
Please try using $(pwd) instead of ${pwd}. Also, if you are running under Windows, you probably need to use a shell that implements the pwd command correctly, e.g. Git Bash.
docker run -d -v $(pwd):/app react-app
Also, once you start the container, check docker container inspect <container ID>, specifically the Mounts section.
Or you can filter the output:
docker container inspect <container ID> -f '{{ .Mounts }}'
Also, if you see that the container exits immediately, check its logs with
docker logs <container ID>
I solved it by excluding node_modules from the mount (the anonymous volume at /app/node_modules keeps the dependencies installed during the build from being hidden by the bind mount):
docker run -d -v ${pwd}:/app -v /app/node_modules react-app

Shell script on running a docker container

I created a Dockerfile like below:
FROM alpine:latest
WORKDIR /
COPY ./init.sh .
CMD ["/bin/sh", "./init.sh"]
and a script file init.sh like below:
#!/bin/sh
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
and I built an image using these:
docker build . -t test_build
and ran it as
docker container run --rm --name test_run -it test_build sh
where the two files above are the only files in the folder.
In the container, I can find the init.sh file with the executable (x) bit set, just as on the host.
However, there is no folder mount_point, which should have been created by
CMD ["/bin/sh", "./init.sh"]
Note that when I run any of the commands below in the container, it successfully creates mount_point as expected:
sh init.sh
or
/bin/sh init.sh
and
sh -c ./init.sh
Could you tell me where I made a mistake?
When you do
docker container run --rm --name test_run -it test_build sh
the sh at the end overrides the CMD definition in the image and the CMD isn't run.
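You can confirm that the CMD is still recorded in the image, just not executed when you pass sh; for example:
docker image inspect test_build -f '{{ .Config.Cmd }}'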
To verify that your script works, you can change the script to something like this:
#!/bin/sh
echo Hello from the script!
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
ls -al ./mount_point
Then run the image without the sh and you should see the 'Hello' message and the directory listing from the ./mount_point directory.
docker container run --rm --name test_run test_build

Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated by a Python script, but due to the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output), you can run docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will end up in the output directory under the current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem using docker cp test:/app/example.json . (the trailing dot puts it in the current directory). Note that docker cp also works on a container that has already exited, as long as it hasn't been removed.
If running the container in the background is unnecessary, you can capture the file from stdout:
docker run -it [image] cat /app/example.json > out_example.json

Docker Container is not running

Please help. When I want to go into a container, it says:
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
My Dockerfile:
FROM ubuntu:16.04
MAINTAINER Anton Lapitski <a.lapitski#godeltech.com>
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD ./ /usr/src/app
EXPOSE 80
ENTRYPOINT ["/bin/sh", "-c", "/usr/src/app/entry.sh"]
Starting script - start.sh:
sudo docker build -t starter .
sudo docker run -t -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter
entry.sh script:
echo "Hello World"
ls -l
pwd
if mountpoint -q /mounted-directory
then
echo "mounted"
else
echo "not mounted"
fi
sudo docker ps -a gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90599013c666 starter "/bin/sh -c /usr/src…" 18 minutes ago Exited (0) 18 minutes ago thirsty_wiles
And most importantly:
sudo docker exec -it 90599013c666 bash
Error response from daemon: Container 90599013c666d332ff6560ccde5053d9127e72042ecc3887550aef90fa1d1eac is not running
Please could you tell what I am doing wrong?
P.S. Adding the -d flag when running did not help.
Once the ENTRYPOINT completes (in any form), the container exits.
Once the container exits, you can't docker exec into it.
If you want to get a shell on the image you just built to poke around in it, you can
sudo docker run --rm -it --entrypoint /bin/sh starter
To make this slightly easier to run, you might change ENTRYPOINT to CMD in your Dockerfile. (Docker will run the ENTRYPOINT passing the CMD as command-line arguments; or if there is no entrypoint just run the CMD.)
...
RUN chmod +x ./app.sh
CMD ["./app.sh"]
Having done that, you can more easily override the command
sudo docker run --rm -it starter /bin/sh
You can try
docker start container_id and then docker exec -ti container_id bash for a stopped container.
You cannot exec into the container because your ENTRYPOINT script has finished and the container has stopped. Try this:
Remove the ENTRYPOINT from your Dockerfile
Rebuild the image
run it with sudo docker run -it -v mounted-directory:/usr/src/app/mounted-directory -p 80:80 starter sh
The key is the -i flag and the sh at the end of the command.
I tried these two commands and it worked:
sudo docker start <container_id>
docker exec -it <containerName> /bin/bash

Docker make .sh executable and run it

I'm trying to give executable permission to my script inside a Docker image and run it. I don't want to set chmod +x for it in the Dockerfile.
I tried
docker run img /bin/bash -c "chmod +x ../test/test.sh; ../test/test.sh"
but I got "/bin/bash: bad interpreter: Text file busy"
and I can't just make two containers with these commands:
docker run -d img chmod +x ../test/test.sh
docker run -d img ../test/test.sh
=> starting container process caused "exec: \"../test/testing.sh\": permission denied"
I need to somehow bind these two containers together.
Text file busy means that something is already using the file.
Normally this would work
docker run --rm -it alpine sh -c 'echo "echo it works" > test.sh && chmod +x test.sh && ./test.sh'
With the second command you create two new containers that are completely separate. If you want to execute something in a running container, you can use docker exec -it <container id or name> <command, e.g. bash>
You don't need to set permissions if you just pass your script as a parameter:
docker run -d IMAGE /bin/bash ../test/test.sh
(add -i and/or -t if you need them)
OK, I've figured it out:
First I made a container:
docker run --name CONTAINER -dt IMAGE
Then executed my commands:
docker exec CONTAINER chmod +x ../test/test.sh
docker exec CONTAINER ../test/test.sh
