Docker Save Logs To Host's Directory

I am writing to a text file within my Docker container. The path inside the container is /app/data/text.txt.
When I run my app, it writes to this file just fine; however, I want it written to my host system, not just within the container, so I tried the following:
docker run -v /home/pi/mmm:/app/data -d smartazanmobilebackgroundservice
and I still can't see any text.txt file in my /home/pi/data folder.
The working directory for my Docker app is:
WORKDIR /app
The .NET code that builds the directory path is:
string logPath = Path.Combine("data");

docker run -v /home/pi/mmm:/app/data -d smartazanmobilebackgroundservice
The fix: I had been placing -v after the image name. Anything that comes after the image name is passed to the container as arguments rather than treated as a docker run option, so the bind mount was never created. With -v before the image name, as above, it works.
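As a quick illustration (the image name is the one from the question; the host path is assumed to be the /home/pi/mmm directory used in the -v flag):
# Wrong: everything after the image name becomes arguments to the container,
# so no bind mount is created
docker run -d smartazanmobilebackgroundservice -v /home/pi/mmm:/app/data

# Right: docker run options come before the image name
docker run -d -v /home/pi/mmm:/app/data smartazanmobilebackgroundservice

# Verify from the host that the file shows up in the mounted directory
ls -l /home/pi/mmm/text.txt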

Related

understanding docker: how come my docker container content is dynamic?

I want to make sure I understand Docker correctly: when I build an image from the current directory, I run:
docker build -t imgfile .
What happens when I change the content of a file in the directory AFTER the image is built? From what I've tried, it seems the content of the Docker image also changes dynamically.
I thought the Docker image was like a zip file that could only be changed with Docker commands or by logging into the image and running commands.
The Dockerfile is:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
EXPOSE 8000
RUN echo 'export PS1="\[\e[36m\]zappashell>\[\e[m\] "' >> /root/.bashrc
CMD ["bash"]
And the docker run command is:
docker run -ti -p 8000:8000 -e AWS_PROFILE=zappa -v "$(pwd):/var/task" -v ~/.aws/:/root/.aws --rm zappa-docker-image
Your docker run command isn't really running your image's content at all. The -v "$(pwd):/var/task" option hides whatever was in /var/task in the image behind a bind mount of the current directory on the host. So when you edit a file on the host, the container is looking at that same host directory (not the content from the image), and you see the changes inside the container as well.
You're right that the image is immutable. The image you show doesn't really contain anything, beyond a .bashrc file that won't usually be used. You can try running the image without the -v options to see:
docker run --rm zappa-docker-image ls -al
# just shows `.` and `..` directories
I'd recommend making sure you COPY your application into the image, set its CMD to actually run the application, and remove the -v option that hides the image's main directory. If your goal is to run host code against host files with host supporting data like your AWS credentials, you're not getting much benefit from putting Docker between your application and every file it uses.
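A minimal sketch of what that could look like, assuming a Python application with a hypothetical app.py entry point and requirements.txt (neither is shown in the question), and using the pip/python3 tooling available in the build image:
FROM lambci/lambda:build-python3.8
WORKDIR /var/task
# Copy the application into the image so the image itself carries the code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# Run the application instead of dropping into a shell
CMD ["python3", "app.py"]
Built that way, docker run -p 8000:8000 zappa-docker-image runs the code baked into the image, and editing files on the host no longer changes what a running container sees.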

Why docker run can't find file which was copied during build

Dockerfile
FROM centos
RUN mkdir /test
# it is ensured that sample.sh exists in the same directory as the Dockerfile, where the build is run
COPY ./sample.sh /test
CMD ["sh", "/test/sample.sh"]
Docker run command:
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1
Log output:
sh: /test/sample.sh: No such file or directory
There are two problems here:
1. The output says sh: /test/sample.sh: No such file or directory.
2. Since I mapped a host folder to the container folder, I was expecting the test folder and sample.sh to be available at /home/Docker/Container_File_System after the run, which did not happen.
Any help is appreciated.
When you map a host folder onto a container folder, the host's files hide whatever the image had at that path. If your host folder has a.txt and the image's folder has b.txt, the running container sees a.txt at that path and b.txt is no longer visible or accessible.
Additionally, b.txt never shows up on the host; a bind mount does not copy files out of the image.
In your case, since your host folder does not have sample.sh, the moment you mount the directory sample.sh is no longer available in the container, which causes the error.
What you want to do is copy sample.sh into the host directory and then start the container.
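A minimal sketch of that, using the paths and names from the question:
# Put the script into the host directory that will back the bind mount
cp ./sample.sh /home/Docker/Container_File_System/

# Now the mounted /test inside the container actually contains sample.sh
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1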
The problem was in the volume mapping. If I create a named volume first and then mount it, it works fine, but directly mapping a host folder onto the container folder does not. (This is because Docker copies the image's content at the mount point, here sample.sh, into an empty named volume the first time it is mounted; a bind-mounted host directory gets no such copy and simply hides the image's content.)
The following worked fine:
docker volume create my-vol
docker run -d -p 8081:8080 --name Test -v my-vol:/test test:v1
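As a quick check (a sketch using the image and volume names from the commands above), you can confirm that the named volume was populated from the image:
# List the volume's contents; sample.sh should be there, copied in from the
# image the first time the empty named volume was mounted at /test
docker run --rm -v my-vol:/test test:v1 ls -l /test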

docker mount volume dir ubuntu

I'm trying to use Docker to do this:
Run the Docker image, and make sure you mount your Users (for Mac) or home (for Ubuntu) directory as a volume so you can access your local files.
The code that I've been given is:
docker run -v /Users/:/host -p 5000:5000 -t -i bjoffe/openface_flask_v2 /bin/bash
I know that the part I should modify for my local files is -v /Users/:/host, but I am unsure how to do so.
The files I want to load into the container are inside /home/user/folder-i-want-to-read.
How should this code be written?
A bind mount is just a mapping of host files or directories onto container files or directories; both end up pointing at the same physical location on disk.
In your case, you could try this command:
docker container run -it -p 5000:5000 -v /home/user/folder-i-want-to-read/:/path_in_container bjoffe/openface_flask_v2 /bin/bash
Once it is running, verify that the contents of the host path /home/user/folder-i-want-to-read show up in the container at the path you mapped.
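A quick way to check that from inside the container's bash session (a sketch; /path_in_container is the placeholder target from the command above and should be replaced with the real path):
# List the mounted directory; it should mirror /home/user/folder-i-want-to-read on the host
ls -la /path_in_container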

Create a volume in docker from windows host

I have the following folder on my Windows host:
C:\Tmp\TmpVolume
"TmpVolume" has a number of files.
This is where I will be putting my source code for development purposes on the host machine.
Now I want to run the container and mount this folder onto it. This is the command I execute:
docker run -p 49160:3000 -v C:/Tmp/TmpVolume/:/usr/src/app/TmpVolume -d containerName
My problem is that when I move into the directory /usr/src/app in the container, there is a TmpVolume folder but it is empty; there is nothing inside it. What am I doing wrong here?
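No answer appears above, but a common cause (an assumption on my part, not something stated in the question) is that the Windows path never reaches the Docker daemon as a shared location. Two things worth checking, as a sketch:
# Docker Desktop: share the C: drive (Settings > Resources > File Sharing), then the
# path form from the question generally works
docker run -p 49160:3000 -v C:/Tmp/TmpVolume:/usr/src/app/TmpVolume -d containerName

# Docker Toolbox (VirtualBox VM): only C:\Users is shared into the VM by default, and the
# path must use the //c/... form; "me" below is a hypothetical username
docker run -p 49160:3000 -v //c/Users/me/TmpVolume:/usr/src/app/TmpVolume -d containerName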

Trouble mounting a folder from host onto my docker image

I am trying to mount a folder from my host system into a Docker container. I am aware of the -v option of docker commands.
My command is:
docker run -v /home/ubuntu/tools/files/:/root/report -i -t --entrypoint /bin/bash my_image -s
But this does not seem to work; no files appear in my designated container folder. This is very frustrating, as I will need to add files to my Docker image at periodic intervals, so just adding them in the Dockerfile at build time won't cut it.
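No answer appears above either, but a first debugging step (an assumption about how to investigate, not part of the original question) is to confirm that the host path actually has files and that the mount was created on the container:
# Check that the host directory exists and actually contains files
ls -la /home/ubuntu/tools/files/

# Inspect the container's mounts; <container> is a placeholder for its name or ID
docker inspect -f '{{ json .Mounts }}' <container>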
