Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated after the Python script runs, but due to the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]

If the output file is stored in its own directory (say /app/output) you can run: docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will be in the output directory under the current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem with docker cp test:/app/example.json . (the trailing dot puts it in the current directory).
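Since docker cp also works on a stopped container, the whole round trip could look like this (a minimal sketch, assuming main.py writes /app/example.json as in the example above):
docker run -d --name test [image]
docker wait test
docker cp test:/app/example.json .
docker rm test
docker wait blocks until the container exits, so the file is written before the copy happens.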

If running the container in the background is unnecessary, you can write the file to stdout and redirect it into a local file:
docker run -it [image] cat /app/example.json > out_example.json
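Note that with the ENTRYPOINT from the Dockerfile above, cat /app/example.json would be passed as arguments to python3 main.py rather than executed. In that case you would likely need to override the entrypoint, for example (assuming the file already exists in the image):
docker run --rm --entrypoint cat [image] /app/example.json > out_example.json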

Related

Docker volume mapping to current working directory does not work

Docker version 20.10.21
The docker run command with the -v option works as expected when the destination path is anything other than /app, but when the destination path is /app it does not.
This command works as expected:
docker run -d -v ${pwd}:/app2 react-app
This command does not work as expected:
docker run -d -v ${pwd}:/app react-app
As seen in the screenshot, there is no port listed for the second container.
Here is the Dockerfile content:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
RUN mkdir data
COPY package*.json .
RUN npm install
COPY . .
ENV API_URL=http://api.myapp.com/
EXPOSE 3000
CMD [ "npm", "start" ]
You are running npm install in /app in the Dockerfile, but at runtime you are mounting pwd over the files you installed in /app during the build. Don't install your dependencies in /app during the build if you want to mount over /app at runtime.
Please try using $(pwd) instead of ${pwd}. Also, if you are running under Windows, you probably need a shell that implements the pwd command correctly, e.g. Git Bash.
docker run -d -v $(pwd):/app react-app
Also, once you start the container, please check docker container inspect <container ID>, specifically the Mounts section.
Or you can filter the output:
docker container inspect <container ID> -f '{{ .Mounts }}'
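If you want the raw JSON rather than the Go-struct dump, the json template function also works (assuming a reasonably recent Docker CLI):
docker container inspect <container ID> -f '{{ json .Mounts }}'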
Also, if you see that the container exits immediately, please check its logs with
docker logs <container ID>
I solved it by excluding node_modules from the mount:
docker run -d -v ${pwd}:/app -v /app/node_modules react-app
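The second -v /app/node_modules creates an anonymous volume at that path, so the bind mount of the project directory no longer hides the node_modules installed during the build. Combined with the $(pwd) fix above, the full command would look something like:
docker run -d -v $(pwd):/app -v /app/node_modules react-app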

Astro in Docker does not refresh

I am creating an Astro JS container with Docker on Windows.
Dockerfile
FROM node:18-alpine3.15
RUN mkdir app
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 24678
CMD ["npm","run","dev","--","--host"]
I build my image with the following command
docker build . -t astro
I run my container with this command
docker run --name astro1 -p 24678:24678 -v D:\Workspace\Docker\Practicas\docker-astro-example:/app -v /app/node_modules/ astro
So far, no problems, but when I make a change to the index.astro file the page does not refresh to show the changes.

Container is exiting on its own & not able to exec into it

I am trying to build a container image and then exec into the container after running it, but I am getting an error response from the daemon.
My Dockerfile:
COPY . /app
RUN sudo chmod 777 -R /app
WORKDIR /app
ADD entry_point.sh /opt/bin/
RUN sudo chmod 777 /opt/bin/entry_point.sh
COPY start-selenium-standalone.sh /opt/bin/start-selenium-standalone.sh
RUN sudo chmod 777 /opt/bin/start-selenium-standalone.sh
EXPOSE 4444 5900 9515
Command to build the Docker image:
docker build -f Docker/Dockerfile -t sel-test:1 .
Command to run the image:
docker run -d -p 4444:4444 -p 5900:5900 -v /dev/shm:/dev/shm sel-test:1
Error I am getting:
Error response from daemon: Container a9e0bb7f381584dd5e39dcd997640233835408ffdfe4e0e44108ddb7bb393cd0 is not running
Your container is exiting because there is nothing to run inside it (the Dockerfile shown defines no CMD or ENTRYPOINT).
To see this, run docker ps -a and check the status of your container.
In order to run something inside the container, add a CMD instruction to the Dockerfile so that a process (for example bash, or your startup script) runs whenever you use docker run.
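For example, since the image already copies entry_point.sh into /opt/bin, one possible fix (a sketch, assuming that script starts Selenium in the foreground) is to end the Dockerfile with:
CMD ["/opt/bin/entry_point.sh"]
After rebuilding, docker ps should show the container as Up, and docker exec -it <container ID> /bin/bash should then work.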

See image generated in docker

I created a Dockerfile like this:
FROM rikorose/gcc-cmake
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
#RUN apt-get update && apt-get -y install cmake=3.13.1-1ubuntu3 protobuf-compiler
RUN cmake ..
RUN make
Afterwards I do:
docker build --tag trial .
docker run -t -i trial /bin/bash
Then I run an executable that saves a .png file inside the container.
How can I visualize the image?
You can execute something inside the container.
To see all containers, run docker ps --all.
To execute something inside a container, run docker exec <container id> <command>.
Otherwise you can copy files from the container to the host with docker cp <container id>:/file-path ~/target/file-path
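For example, if the executable writes result.png into the build directory (a hypothetical path, adjust it to wherever your program actually saves the file), you could pull it out with:
docker cp <container id>:/UBIMET_Challenge/build/result.png .
and then open it with any local image viewer.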
Please mount a host volume (directory) onto the container volume (directory) where you are saving your images.
Then all of the images saved in the container directory will be available in the mounted host directory, from where you can view them or download them to another machine.
Please follow this:
docker run --rm -d -v host_directory:container_directory trial
docker exec -it container_name /bin/bash
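As a concrete sketch (the /output path, and the idea that your executable can be pointed at it, are assumptions for illustration):
docker run --rm -it -v "$PWD/output:/output" trial /bin/bash
Any .png the executable writes to /output inside the container then shows up in ./output on the host, where you can open it directly.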

docker container volumes from directory access in CMD instruction

I create a volume container with:
$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
echo 'success!'
else
echo 'Sorry, I can't find /external...'
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory in run.sh from the CMD instruction? Is it impossible?
Thank you~
Modify your run.sh:
-f checks whether a file exists; in this case, use -d, which checks whether a directory exists.
See: Check if a directory exists in a shell script
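A corrected run.sh would then look like this (a minimal sketch; only the test changes from -f to -d):
#!/bin/bash
# /external is a directory (a volume mount point), so test for a directory, not a file.
if [[ -d "/external" ]]
then
    echo 'success!'
else
    echo "Sorry, I can't find /external..."
fi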
Furthermore, if you only want a volume container, you don't need the -d flag or /bin/sh. The volume container run command becomes:
$ sudo docker run --name ext -v /external busybox
