In my current directory I have multiple directories and files that I want to copy into the image with a Dockerfile, but not all of them into a single location. Let's say I have dir1, dir2, and file1, and I want to copy dir1 into des1, dir2 into des2, and file1 into the WORKDIR.
I have no problem doing this with three layers using the COPY command in the Dockerfile, but is there another way to do it in a single layer using COPY or ADD?
I achieved that by doing this:
COPY dir1/ /app/dir1
COPY dir2/ /app/dir2
COPY file1 /app
The goal is to do all of this in a single COPY.
You could create some sort of script, or come up with a clever way to do copy src1 dst1 && src2 dst2, but that doesn't help with the final image size (unlike squashing image layers) or with image pull time (unless you have very many layers, in which case extracting each small one adds a bit of time).
It's better to keep the copies in separate instructions: when you change one of them, only the changed layer is rebuilt, and when pulling a new version of the same image, only the layers that actually changed have to be pulled.
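That said, if you really do need everything in a single COPY layer in the final image, one workaround is a multi-stage staging trick. This is a sketch, assuming the dir1/dir2/file1 layout from the question and an arbitrary alpine base: the three COPYs happen in a throwaway stage, and only the final COPY --from contributes a layer to the result.

```dockerfile
# Stage 1: arrange everything into the desired final layout.
# These layers are discarded; they never reach the final image.
FROM alpine AS staging
COPY dir1/ /staged/app/dir1
COPY dir2/ /staged/app/dir2
COPY file1 /staged/app/

# Stage 2: pull the whole arranged tree over in a single layer.
FROM alpine
COPY --from=staging /staged/ /
```

The trade-off is exactly the one described above: a single combined layer means any change to dir1, dir2, or file1 invalidates and re-pushes the whole layer.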
some of the sources I've used:
https://github.com/moby/moby/issues/33551
How to copy multiple files in one layer using a Dockerfile?
https://linux.die.net/man/1/cp
I want to combine two COPY instructions into one line, with different sources and destinations.
My current code:
COPY ./file1.txt /first/path/
COPY ./file2.txt /second/path/
I want to combine these lines into one. I tried with an array, but it isn't valid:
COPY ["./file1.txt", "/first/path/", "./file2.txt", "/second/path/"]
A single COPY instruction cannot take multiple source/destination pairs. The last argument is always the one (and only) destination; every argument before it is a source. So a hypothetical
COPY ./file1.txt /first/path/ ./file2.txt /second/path/
would not copy file1.txt to /first/path/ and file2.txt to /second/path/; it would try to copy ./file1.txt, /first/path/, and ./file2.txt into /second/path/ (and fail, since /first/path/ is not in the build context). If the files must end up in different directories, you need separate COPY instructions.
The COPY and ADD steps can have multiple source arguments, but only a single destination. When that syntax is used, the following requirement applies:
If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash /.
For more details, see the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/#copy
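To illustrate that rule with hypothetical file names: multiple sources are allowed only when they share a single destination directory.

```dockerfile
# Valid: several sources, one destination directory ending in /
COPY file1.txt file2.txt /shared/path/

# Invalid: there is no way to pair each source with its own
# destination in one COPY; the last argument is always the
# single destination for everything before it.
```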
This is not possible with the COPY command, and ADD follows the same rule: multiple sources, but only a single destination. An array form like
ADD ["./file1.txt", "/first/path/", "./file2.txt", "/second/path/"]
is parsed with /second/path/ as the only destination and everything else as sources, so it does not do what you want. See the Dockerfile reference linked above for details.
In a Dockerfile, I want to remove a file by executing the following command:
RUN rm /usr/bin/wget
How can I do it? Any help is appreciated.
First thing to emphasize: in a Dockerfile, RUN rm /usr/bin/wget doesn't physically remove the file. Files and directories in previous layers stay there forever. So if you are trying to remove a file with sensitive information using rm, it's not going to work. For example, this kind of oversight recently led to a high-profile security breach at Codecov.
Docker Layer Attacks: Publicly distributed Docker images should be either squashed or multistage such that intermediate layers that contain sensitive information are excluded from the final build.
What happens is, RUN rm /usr/bin/wget creates another layer that contains a "whiteout" file /usr/bin/.wh.wget, and this new layer sits on top of all previous layers. Then at runtime, it's just that container runtimes will hide the file and you will not see it. However, if you download the image and inspect each layer, you will be able to see and extract both /usr/bin/wget and /usr/bin/.wh.wget files. So, yes, doing rm later doesn't reduce the size of the image at all. (BTW, each RUN in Dockerfile creates a new layer at the end. So, for example, if you remove files within the same RUN like RUN touch /foo && rm /foo, you will not leave /foo in the final image.)
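The parenthetical above can be made concrete. If a file must never persist in any layer, create and delete it within the same RUN; this is a sketch, and the URL and use-the-key command are hypothetical placeholders.

```dockerfile
# BAD: secret.key is committed in the first RUN's layer and stays
# there forever; the second RUN only adds a whiteout file on top.
RUN wget -O /tmp/secret.key https://example.com/secret.key
RUN rm /tmp/secret.key

# OK: the file is created and removed inside one RUN, so it never
# appears in any committed layer of the image.
RUN wget -O /tmp/secret.key https://example.com/secret.key \
    && use-the-key /tmp/secret.key \
    && rm /tmp/secret.key
```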
Therefore, with Jib, if the file or directory you want to "delete" is coming from a base image, what you can do is create a new whiteout file for it. Jib has the <extraDirectories> feature to copy arbitrary files into an image. So, for example, since <project root>/src/main/jib is the default extra directory, you can create an empty src/main/jib/usr/bin/.wh.wget, which will be copied into /usr/bin/.wh.wget in the image.
And of course, if you really want to physically remove the file that comes from the base image, the only option is to rebuild your base image so that it doesn't contain /usr/bin/wget.
For completeness: if the file or directory you want to remove is not from your base image but from Jib, you can use the Jib Layer-Filter extension (Maven/Gradle). (This is app-layer filtering and doesn't involve whiteout files.) However, normally there will be no reason to remove files put by Jib.
I'm trying to modify a public image and create a new image with my changes, but when I run a container from my new custom image, it triggers an md5sum check and deletes some of my changes. Is it possible to disable the md5sum check?
Dockerfile:
FROM public-image:latest
COPY . /dir
RUN sh my-script.sh
my-script.sh copies files to different locations. One of the files I modify is constants.json, but that triggers the md5sum check and reverts the changes.
Turns out the image I was using is based on confd, a configuration management tool that can be configured to run md5sum checks on configuration files. I just deleted the original configuration files from the confd folder.
Is it possible to copy multiple files to different locations in a Dockerfile?
I'm looking to go from:
COPY outputs/output/build/.tools /root/.tools
COPY outputs/output/build/configuration /root/configuration
COPY outputs/output/build/database/postgres /root/database/postgres
I have tried the following, but no joy:
COPY ["outputs/output/build/.tools /root/.tools","outputs/output/build/configuration /root/configuration","outputs/output/build/database/postgres /root/database/postgres"]
Not sure if this is even possible.
Create a .dockerignore file in your Docker build context directory. In general, you would create a soft link (ln -s) to the root directory you want to copy and exclude the directories you don't want in your .dockerignore.
In your case you don't need the soft link, since the directory is already in the build context. Just add the directories you don't want to the .dockerignore file. For example, if you don't want the bin directory:
# this is .dockerignore file
outputs/output/build/bin*
Finally in your Dockerfile,
COPY outputs/output/build/ /root/
Details are here.
It looks like you are trying to do bash-style brace expansion. The RUN command uses sh, NOT bash. See this SO question discussing it:
Bash brace expansion not working on Dockerfile RUN command
or the docker reference
https://docs.docker.com/engine/reference/builder/#/run
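A quick way to see the difference on your own machine (a sketch; which shell /bin/sh points to varies by distribution):

```shell
# bash performs brace expansion: prints "a b"
bash -c 'echo {a,b}'

# A strictly POSIX sh (e.g. dash) does not expand braces and prints
# the literal {a,b}. Note: on systems where /bin/sh is a link to
# bash, sh will still expand them, so don't rely on this either way.
sh -c 'echo {a,b}'
```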
For an assignment, the marker requires me to create a Dockerfile to build my project's container. However, I have a fairly complex set of tasks that need to work together correctly for the Dockerfile to be of any use, so I am currently rebuilding an image that takes 30 minutes each time just to see whether minor changes have the right effect. Is there a better way of doing this?
The Dockerfile best practices, or an earlier question might help: Creating a Dockerfile - docker starts from scratch on each new build
In my experience, a full build every time means you're working against docker's caching mechanism, usually by having COPY . . early in the Dockerfile.
If the files copied into the image are used to drive a package manager or to download other sources, try copying just the script or requirements file first, running it, and then copying the rest of the sources.
A simplified Python example, restated from the best-practices link:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
With that structure, as long as requirements.txt does not change, the first COPY and following RUN command use cached layers and rebuilds are much faster.
The first tip is to use COPY/ADD for artifacts that need to be downloaded when Docker builds.
The second tip: you can create one Dockerfile for each step and reuse them in later steps.
For example, if you want to install a Postgres DB and WildFly in your image, you can start by creating a Dockerfile for Postgres only and build it into a your-postgres image.
Then create another Dockerfile which reuses the your-postgres image:
FROM your-postgres
.....
and so on...
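A minimal sketch of that reuse pattern (image tags, file names, and the init.sql script are assumptions, not from the original answer):

```dockerfile
# Dockerfile.postgres
# build with: docker build -t your-postgres -f Dockerfile.postgres .
FROM postgres:15
COPY init.sql /docker-entrypoint-initdb.d/

# Dockerfile.app (a separate file; shown together here for brevity)
# build with: docker build -t your-app -f Dockerfile.app .
FROM your-postgres
# ...install WildFly and your application on top of the cached
# your-postgres layers; only these new layers rebuild on change...
```

Because the your-postgres layers are already built and cached locally, iterating on the second Dockerfile only rebuilds the layers you add on top.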