Cannot copy files from source folder to Docker image

I'm trying to copy the folder and contents from a source to a Docker container using an image. I built the image from my Dockerfile.
RUN useradd -m -s /bin/bash user1 && \
ln -s /foo /home/user1/foo
RUN mkdir /foo && chown -R user1:user1 /foo
VOLUME /foo
After I build the Docker image, I run these commands to create a container.
docker run -it \
--name container_name \
--mount "type=bind,source=$(pwd)/$FOLDER/container_foo,destination=/foo/" \
dockerimage:tag
The files from the source folder /foo aren't in container_foo. I checked by running docker exec -it container_ID /bin/bash and confirmed that the files aren't there.
EDIT:
I just found out that a bind mount only goes one way: it exposes local host files/folders to the Docker container. It doesn't copy the files inside the Docker container out to the local folder when mounting. I removed the creation of the /foo directory from RUN and dropped the VOLUME line. Instead I did this in the Dockerfile:
COPY --chown=user1:user1 foo/ foo/
And I was able to copy the files from the source. Now I just need to get them from there into container_foo when doing the docker run ... command.
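One way to do that is to bake the files into a staging path in the image and copy them into the bind mount when the container starts. A minimal sketch, assuming the files are staged under /opt/foo and a hypothetical entrypoint.sh is added (both names are assumptions, not from the original setup):
# Dockerfile
COPY --chown=user1:user1 foo/ /opt/foo/
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# entrypoint.sh
#!/bin/bash
# Seed the bind-mounted /foo from the staging copy, then run the main command
cp -a /opt/foo/. /foo/
exec "$@"
With this, the host's container_foo gets populated with the image's files each time the container starts.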

Related

When I use Docker, I directly bind a file between the host and the container, but the two files cannot be synchronized

I start a docker container with the following:
cd /root
docker run -it -d --privileged=true --name nginx nginx
rm -fr dockerdata
mkdir dockerdata
cd dockerdata
mkdir nginx
cd nginx
docker cp nginx:/usr/share/nginx/html .
docker cp nginx:/etc/nginx/nginx.conf .
docker cp nginx:/etc/nginx/conf.d ./conf
docker cp nginx:/var/log/nginx ./logs
docker rm -f nginx
cd /root
docker run -it -d -p 8020:80 --privileged=true --name nginx \
-v /root/dockerdata/nginx/html:/usr/share/nginx/html \
-v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf \
-v /root/dockerdata/nginx/conf:/etc/nginx/conf.d \
-v /root/dockerdata/nginx/logs:/var/log/nginx \
nginx
"docker inspect nginx" is followings
HostConfig-Binds
The bound directories stay synchronized, but directly bound files like nginx.conf do not. When I modify nginx.conf on the host, the nginx.conf in the container does not change.
I want to know why this happens and how I can directly bind a single file between the host and the container.
why this happens
A bind mount binds the file to its inode. The nginx entrypoint executes https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh, which runs:
sed -i.bak
sed creates a new file, then moves the new file over the old one. The inode of the file changes, so it is no longer the mounted inode.
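You can watch the inode change on any host file (the inode numbers below are illustrative):
touch test.conf
ls -i test.conf                    # e.g. 1182693 test.conf
sed -i.bak 's/foo/bar/' test.conf
ls -i test.conf                    # e.g. 1182701 test.conf -- a different inode
A single-file bind mount pins the original inode, so once the file is replaced this way, the host and container sides refer to different data.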
how I can directly bind a single file
It is bound. Instead, consider re-reading the nginx Docker image documentation on how to pass a custom config to it:
-v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro
^^^
which skips the sed call at https://github.com/nginxinc/docker-nginx/blob/ed42652f987141da65bab235b86a165b2c506cf5/stable/debian/30-tune-worker-processes.sh#L12 .
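Applied to the docker run command from the question, only the nginx.conf line changes (a sketch reusing the asker's paths):
docker run -it -d -p 8020:80 --privileged=true --name nginx \
-v /root/dockerdata/nginx/html:/usr/share/nginx/html \
-v /root/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /root/dockerdata/nginx/conf:/etc/nginx/conf.d \
-v /root/dockerdata/nginx/logs:/var/log/nginx \
nginx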

Docker Mounting removes existing file in directory

I have a docker image whose content in the directory /workspace is:
# ls workspace
dir1 dir2
When I mount a volume on the container's /workspace directory, this content is lost. Is there any way to retain these files?
# docker run command
docker run -d -t --net host --rm \
-v $PWD:/workspace -w /workspace/workspace_1 \
my_docker_image:latest /bin/bash
At this point the content of the directory has changed:
# ls workspace
hostdir1 hostdir2
What I want to have is
# ls workspace
dir1 dir2 hostdir1 hostdir2
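A bind mount always shadows whatever the image has at that path, so a merged listing like this isn't possible with a single mount. One workaround is to mount the host directory at a subpath of /workspace, keeping the image's dir1 and dir2 visible (the hostdata name is an arbitrary choice):
docker run -d -t --net host --rm \
-v $PWD:/workspace/hostdata -w /workspace \
my_docker_image:latest /bin/bash
# inside the container:
# ls /workspace            ->  dir1 dir2 hostdata
# ls /workspace/hostdata   ->  hostdir1 hostdir2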

Overwrite volume contents with container's contents

I have a volume which contains data that needs to stay persisted. When creating the volume for the first time, and mounting it to my node container, all container contents are copied to the volume, and everything behaves as expected. The issue is that when I change a few files in my node container, I remove the old image and container, and rebuild them from scratch. When running the updated container, the container's files don't get copied into the volume. This means that the volume still contains the old files, and therefore when the volume is mounted in the container, no updated functionality is present, and I have to remove and recreate the volume from scratch, which I can't do since the volume's data needs to be persisted.
Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0
COPY CommandLineTool App/CommandLineTool/
COPY NeedBackupNodeServer App/NeedBackupNodeServer/
WORKDIR /App/NeedBackupNodeServer
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt update \
&& apt install -y nodejs
EXPOSE 5001
ENTRYPOINT ["node", "--trace-warnings", "index.js"]
Here are my commands and expected output
docker volume create nodeServer-and-commandLineTool-volume
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
When running
docker exec need-backup-container cat index.js
the file is present and contains the latest updates, since the volume was just created.
Now when I update some files, I need to rebuild the image and the container, so I run
docker rm need-backup-container
docker rmi need-backup-image
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
Now I thought that when running
docker exec need-backup-container cat index.js
I'd see the updated file changes, but nope, I only see the old files that were first created when the volume was mounted for the first time.
So my question is, is there any way to overwrite the volume's files when creating a container?
Docker copies the image's files into a named volume only when the volume is empty; on later runs, the volume's existing contents mount over whatever the new image has at that path. So if your application needs persistent data, it should be stored in a different directory from the application code. This can be in a dedicated /data directory or in a subdirectory of your application; the important thing is that, when you mount a volume to hold the persistent data, it does not hide your application code.
In a Node application, for example, you could refer to a ./data directory for your data files:
import { open } from 'fs/promises';
import { join } from 'path';
// Data directory is configurable via DATA_DIR; defaults to ./data
const dataDir = process.env.DATA_DIR || 'data';
// 'r+' opens for reading and writing ('rw' is not a valid Node flag)
const fh = await open(join(dataDir, 'file.txt'), 'r+');
Then in your Dockerfile you'd need to create that directory. If you set up a non-root user, that directory, but not your code, should be owned by the user.
FROM node:lts
# Create the non-root user
RUN adduser --system --no-create-home nonroot
# Install the Node application normally
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY index.js .
# Create the data directory
RUN mkdir data && chown nonroot data
# Specify how to run the container
USER nonroot
CMD ["node", "index.js"]
Then when you launch the container, mount the volume only on the data directory, not over the entire /app tree.
docker run \
-p 5001:5001 \
--name need-backup-container \
-v nodeServer-and-commandLineTool-volume:/app/data \
need-backup-image
# ^^^^^^^^^
Note that the Dockerfile as shown here would also let you use a host directory instead of a Docker named volume, and specify the host uid when you run the container. You do not need to make any changes to the image to do this.
docker run \
-p 5002:5001 \
--name same-image-with-bind-mount \
-u $(id -u) \
-v "$PWD/app-data:/app/data" \
need-backup-image

Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated after executing a Python script, but due to the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output) you can run:
docker run -d -it -v $PWD/output:/app/output/ --name test [image]
and the file will be in the output directory under your current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem using docker cp test:/app/example.json . which copies it to the current directory.
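Note that docker cp also works on stopped containers, so the early exit caused by the ENTRYPOINT is not a problem as long as the container has not been removed. A minimal sketch:
docker run --name test [image]        # main.py runs, then the container exits
docker cp test:/app/example.json .    # copying still works after the exit
docker rm test                        # clean up when done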
If running the container in the background is unnecessary, you can stream the file to stdout instead. Because the image defines ENTRYPOINT ["python3", "main.py"], the entrypoint has to be overridden for cat to run, and -t should be dropped so the TTY doesn't alter the output:
docker run --rm --entrypoint cat [image] /app/example.json > out_example.json

Create directory for docker volume

I want to create a Docker volume and add directories and files to it without creating extra containers/images, and with minimal time spent. This will go into a script, so interactive approaches like -it bash won't do.
I can copy files with:
docker container create --name dummy -v myvolume:/root hello-world
docker cp c:\myfolder\myfile.txt dummy:/root/myfile.txt
docker rm dummy
How do I create an empty directory?
attempt 1
mkdir lol; docker cp ./lol dummy:/root/lol # cannot copy directory
attempt 2
docker commit [CONTAINER_ID] temporary_image
docker run --entrypoint=bash -it temporary_image
This requires pulling an image that includes bash.
This worked for me; you can try it out. I am doing exactly this from a script:
VOL_NAME=temp_vol
docker volume create $VOL_NAME
docker run -v $VOL_NAME:/root --name helper busybox true
mkdir tmp
docker cp tmp helper:/root/dir0
docker cp tmp helper:/root/dir1
docker cp tmp helper:/root/dir2
rm -rf tmp
docker rm helper
# check volume
sudo ls /var/lib/docker/volumes/$VOL_NAME/_data
dir0 dir1 dir2
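If all you need are empty directories, you can also skip the host-side tmp folder and docker cp entirely by letting the throwaway busybox container create them inside the volume (a variation on the same helper-container approach):
docker volume create temp_vol
docker run --rm -v temp_vol:/root busybox mkdir -p /root/dir0 /root/dir1 /root/dir2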
