Copy folders from one directory to another within Docker container - docker

Can anyone tell me how to move a folder from one directory to another within a Docker container? I have a folder in the root directory of my Docker container:
root/folder/folder1
I've created a folder called Source at the same container level as root. In my Dockerfile I'm trying to copy folder1 into Source as below:
ADD root/folder/folder1/ /Source/
But I get an error saying that root/folder/folder1/ isn't a file or directory. I'm new to Docker, can anyone assist?

The ADD instruction copies from outside the image into the image. The source must be inside your context directory or, for ADD, a URL that Docker will download (and extract, if it's an archive).
It's generally good practice to use COPY instead of ADD.
In your case, since you want to copy a directory that is already inside your Docker image, you should execute a shell command instead: RUN cp -r /root/folder/folder1 /Source (or perhaps create a link, if you don't need to duplicate the content).
For more information about ADD vs COPY, you can read the Dockerfile Best Practices from Docker.
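Putting that together, a minimal Dockerfile sketch using the question's paths (the base image here is an arbitrary example, not from the question):

```dockerfile
FROM ubuntu
# COPY/ADD only bring files in from the build context, so a directory
# that already exists inside the image has to be duplicated with a
# shell command:
RUN mkdir -p /Source && cp -r /root/folder/folder1 /Source/
```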

Related

Docker only recognizes relative directories, not absolute

In my Dockerfile, I'm trying to copy a directory from the host into the image. What I noticed if I use a relative directory, like this:
COPY ./foo/bar /
This works fine.
This does not:
COPY /foo/bar /
The error I get when I try to build the image is confusing, too:
lstat /foo/bar: no such file or directory
...because if I just do an ls /foo/bar on the host, it's there.
Is there some syntax I have wrong, or something else?
Another user answered in a comment: read the docker build documentation.
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
EDIT: As The Fool pointed out in a comment regarding context:
Think of the context as a root directory that Docker cannot get out of.
In layman's terms: you can only COPY files at paths relative to the context you set.
Say you have this directory structure:
<Dockerfile path>/
    Dockerfile
    somefile.txt
<your context directory>/
    config/configfile
    data/datafile
And you use this docker build command:
docker build -f <Dockerfile path>/Dockerfile /<your context directory>
In your Dockerfile you can specify paths relative to the context directory, but you cannot access anything above it. somefile.txt is not accessible from the Dockerfile.
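Under that layout, a Dockerfile built with the command above can only COPY paths that live inside the context directory, e.g. (the base image and destination paths are arbitrary examples):

```dockerfile
FROM alpine
COPY config/configfile /etc/app/   # works: relative to the context
COPY data/datafile     /data/      # works: relative to the context
# COPY ../somefile.txt /tmp/       # fails: outside the context
```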

Copy file into Dockerfile from different directory

Is it possible for a Dockerfile to copy some file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
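A minimal sketch of such a preparation script, with placeholder file names (the actual docker build call is commented out, so only the staging step is shown; in real use SRC would be a path outside the context, e.g. ~/.super/secrets.yaml):

```shell
#!/bin/sh
set -eu
# Stage an external file into the build context, build, then clean up.
SRC="$(mktemp)"              # stand-in for the external file
echo "token: example" > "$SRC"

cp "$SRC" ./secrets.yaml     # the Dockerfile can now COPY secrets.yaml
# docker build .             # build step would go here
rm ./secrets.yaml "$SRC"     # remove the staged copy afterwards
```

Scripting this keeps the staging reproducible and makes it harder to forget the cleanup step.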
No, it is not possible to go up out of the context directory. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Well, here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
As you can see, the dot references the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only those files in your Dockerfile. Of course, you may think you can be clever and reference / (the root path) so that you have access to every file on your machine. (I highly encourage you to try this and see what happens.) What you should see happen is the docker client freezing. Or does it really freeze? Well, it's not really freezing, it's sending the entire / directory to the docker daemon, which can take ages or (more probably) make you run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory will be packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which may be a security risk. If it's a secret you need at runtime, it's better to pass it in via an environment variable when you run the container. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
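For the build-time case specifically, newer Docker versions support BuildKit secret mounts, which expose the file to a single RUN step without it ever being written into an image layer. A sketch, using an assumed secret id of supersecret and a placeholder build step:

```dockerfile
# syntax=docker/dockerfile:1
FROM gradle:latest
# The secret is mounted at /run/secrets/supersecret for this RUN step
# only; it never ends up in the image's layers.
RUN --mount=type=secret,id=supersecret \
    ls -l /run/secrets/supersecret   # placeholder for the real build step
```

Built with docker build --secret id=supersecret,src=$HOME/.super/secrets.yaml . (DOCKER_BUILDKIT=1 may be needed on older versions).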

Docker: When running with volume (-v) flag Error response from daemon: OCI runtime create failed

I am trying to dockerize my first Go Project (Although the question has nothing to do with Go, I guess!).
Short summary (of what the code is doing) - It simply checks whether a .cache folder is present and creates it if it doesn't exist.
After dockerizing the project, my goal is to mount the path within the container where .cache is created to a host path
Here's my Dockerfile (Multistaged):
FROM golang as builder
ENV GO111MODULE=on
WORKDIR /proj
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
RUN ls
FROM alpine
COPY --from=builder /proj/project /proj/
RUN chmod a+x /proj/project
ENTRYPOINT [ "/proj/project" ]
EDIT: If I run something like this (as @Jan Garaj mentioned in the comments):
docker run --rm -v "`pwd`/data/.cache:/proj/.cache/" project-image:latest
it doesn't throw an error, but it creates an empty data/.cache folder on the host, with none of the actual files and folders from the container's .cache directory. The executable inside the container is, however, able to create the .cache directory and its subsequent files and folders.
I know variations of this problem have been asked many times, but trust me, I've tried out all of those solutions. The following are some of those questions:
Error response from daemon: OCI runtime create failed: container_linux.go:296
A GitHub issue which looked familiar - Still doesn't have an answer and is open.
Another GitHub issue - Probably the best link so far, but I still couldn't get it to work.
The fact that removing the volume flag makes the run command work is confusing me a lot.
Can someone please explain what's going on in this case and point me in the right direction?
P.S. - Also, I'm running Docker on macOS (macOS High Sierra, to be specific) and I had to enable file sharing in Docker -> Preferences -> File Sharing for the host mount path (just some extra information!).
Needless to say, I have also tried overriding the ENTRYPOINT by running something like /bin/sh /proj/project, which also didn't work (it couldn't find the project executable even when given the full path from the root). I read somewhere that the alpine image only has sh and doesn't have bash. I am also changing the permissions of my project executable to a+x while building the image, which doesn't help either.
Please let me know if any part of the question is unclear. I've also checked in my code here on GitHub if anyone wants to reproduce the error.
When you mount your working directory's data subdirectory to the /proj directory inside the container, the entire folder, including the binary you've compiled and copied in there, will no longer be available. Instead, the contents of your data directory will be available inside your container at /proj. Essentially, you are 'hiding' the container image's version of the directory and replacing it with a directory from outside the container.
This is because the -v flag, with the argument you've given it, creates a bind mount and uses the second parameter (/proj) as the mount target.
To solve the problem, either copy the binary to a different directory (and change the ENTRYPOINT instruction correspondingly), or choose a different target for the bind mount.
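For example, a sketch of the first option (the /usr/local/bin location is an arbitrary choice; only the final stage of the question's Dockerfile changes):

```dockerfile
FROM alpine
# The binary now lives outside /proj, so a bind mount on /proj no
# longer hides it.
COPY --from=builder /proj/project /usr/local/bin/project
ENTRYPOINT [ "/usr/local/bin/project" ]
```

After that, docker run --rm -v "$PWD/data:/proj" project-image:latest can mount over /proj freely.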

Move jar files within one docker container

I am using an existing Karaf image and unarchiving it while building it with other configurations (Dockerfile).
Now I realized that the karaf123.jar file, which is in the lib folder, should be in the /lib/boot folder.
I tried using COPY, but it copies from the host; in my scenario the file should just be moved from one folder to another within the image.
I found the following link, but it has no option to copy from container1 to container1:
https://medium.com/#gchudnov/copying-data-between-docker-containers-26890935da3f
Assuming you're using a Linux container, just add this to your Dockerfile:
RUN mv /lib/karaf123.jar /lib/boot/karaf123.jar

How to run a simple main method and copy the file it generated using docker

I have a simple main method that outputs a folder with some files.
How do I copy the files to my host to view the output? I tried using a volume, but it didn't work out. Here is the Dockerfile:
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD main.jar main.jar
ENTRYPOINT ["java","-jar","main.jar"]
//generated folder 'xxxx' with some files.
Thanks
The way to make files available outside the container is with a volume mapping. To make the files available in your current directory, you would run docker run -v "$PWD":/tmp ... yourimage
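Concretely, assuming the program writes its output under /tmp inside the container and the image is called yourimage (both placeholders), something along these lines surfaces the files on the host:

```shell
# Bind-mount a host directory over /tmp so files the containerized
# program writes there land on the host. "yourimage" is a placeholder
# image name; the '|| true' keeps this sketch harmless if the image
# doesn't exist locally.
mkdir -p ./output
docker run --rm -v "$PWD/output:/tmp" yourimage || true
ls ./output
```

Afterwards the generated 'xxxx' folder should appear under ./output on the host.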
