docker copy issues and set host env variable - docker

I am new to docker.
I would like to understand the following questions. I have been searching but I can't find the answers to my questions.
Why do I always get a wrong path when I try to copy a file?
Does that mean I can only copy the files into the docker image from the same directory where I have my dockerfile? Is there a way to COPY files from other directories on the host?
Is there a way to pass the host's environment variables directly into the Dockerfile without using ARG and the --build-arg flag?
Below is what I currently have.
The file structure is like this:
/home/user1/docker
|__ Dockerfile
In the Dockerfile:
From
ARG BLD_DIR=/tmp
RUN mkdir /workdir
WORKDIR /workdir
COPY ${BLD_DIR}/a.file /workdir
I ran
root#localhost> echo $BLD_DIR
/tmp/build <-- BLD_DIR is a custom variable; meaning it's different on each dev env
docker build --build-arg BLD_DIR=${BLD_DIR} -t docker-test:1.0 -f Dockerfile
I always get an error like:
COPY failed: stat
/var/lib/docker/tmp/docker-builder035089075/tmp/build/a.file: no such file
or directory

In a Dockerfile, you can only copy files that are available in the current Docker build context.
By default, all files in the directory where you run your docker build command (the build context) are sent to the Docker daemon.
So, when you use the ADD or COPY commands, all your paths are in fact relative to that build context, as the documentation states:
Multiple resources may be specified but the paths of files and directories will be interpreted as relative to the source of the context of the build.
This is deliberate: building an image using docker build should not depend on auxiliary files elsewhere on your system, so the same Docker image should not come out different when built on two different machines.
However, you can have a directory structure like such:
/home/user1/
|___file1
|___docker/
|___|___ Dockerfile
If you run docker build -t test -f docker/Dockerfile . in the /home/user1 folder, your build context will be /home/user1, so you can COPY file1 in your Dockerfile.
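For example, a minimal sketch of such a Dockerfile (the alpine base image is just a placeholder):
FROM alpine
WORKDIR /workdir
# file1 is resolved relative to the build context (/home/user1), not the Dockerfile's directory
COPY file1 /workdir/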
For the very same reason, you cannot use environment variables directly in a Dockerfile. The idea is that your docker build command should "pack" all the information needed to generate the same image on 2 different systems.
However, you can work around it using docker-compose, as explained here: Pass host environment variables to dockerfile.
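A minimal sketch of that approach, assuming a docker-compose.yml next to the Dockerfile (the service name app is illustrative):
# docker-compose.yml
services:
  app:
    build:
      context: .
      args:
        # Compose substitutes ${BLD_DIR} from the host's shell environment
        BLD_DIR: ${BLD_DIR}
Running docker-compose build then injects the host value, but the Dockerfile still needs ARG BLD_DIR, and the path must still resolve inside the build context for COPY to find it.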

Related

Docker and DockerFile build

I am a Docker beginner and I'm having some trouble with my Dockerfile build, and a lot of questions.
Do I have to run the build command from the path /var/lib/docker/builder?
How do I know whether it fails to build because my Dockerfile is not correctly written?
Do I have to call my folder Dockerfile?
docker build -t dokcerfile/xdebugphp .
then I got
Error response from daemon: unexpected error reading Dockerfile: read lstat /var/lib/docker/builder/Dokcerfile: no such file or directory
with
Get-Content Dockerfile | docker build -
Error response from daemon: the Dockerfile (Dockerfile) cannot be empty
You can launch docker build from any directory. If you try to COPY a file into an image that doesn't exist in the directory you name, you will see an error message that references /var/lib/docker, but that's an artifact of the Docker build implementation. (In fact, you really shouldn't look inside or try to directly use the /var/lib/docker directory at all.)
The file containing the build instructions is conventionally named Dockerfile (on systems with case-sensitive filesystems, with a capital D and no extension). It's most often located at the root of your source repository. This shouldn't be a directory.
The docker build -t option assigns a tag (name) to the image that's built. It doesn't have to correspond to a file on disk. If you're using Docker Hub to store your images (or just want to emulate its naming) these have the form username/imagename:version; there is an extended format if you're using some other Docker image registry.
You can name the Dockerfile something else; if you do, you need the docker build -f option to reference that file. If it's in a subdirectory of the repository root, the important detail is that COPY statements copy from the "context" directory you pass as the directory argument to docker build; this could be different from the directory that contains the Dockerfile. For example, if your Dockerfile has COPY index.php ., and you run docker build -f docker/xdebugphp ., the file is copied from the current directory (.), which is the parent of the docker/ directory containing the Dockerfile.
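For example, with a hypothetical layout like this (file and image names are illustrative):
.
|__ index.php
|__ docker/
|__ |__ xdebugphp   (the Dockerfile, under a non-default name)
you would build from the repository root, with the trailing . setting the context:
docker build -t username/xdebugphp:latest -f docker/xdebugphp .
COPY index.php . inside docker/xdebugphp then finds index.php at the root of that context.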
Looks like line endings; try changing the Dockerfile's line endings to LF.
Also, for the docker build command, you need to be in the directory where the Dockerfile is, or specify the path to the Dockerfile.
So, in the directory where the Dockerfile is, the command is:
docker build -t IMAGENAMEHERE .
So I solved it with this command
docker build -t imagename -f Dockerfile/xdebugphp .

How to copy files from a docker image - dockerfile cmd

During build time, I want to copy a file from the image (from /opt/myApp/myFile.xml) to my host folder /opt/temp.
In the Dockerfile, I'm using the --mount as follows, trying to mount to my local test folder:
RUN --mount=target=/opt/temp,type=bind,source=test cp /opt/myApp/myFile.xml /opt/temp
I'm building the image successfully, but the local test folder is empty
any ideas?
Copying files from the image to the host at build-time is not supported.
This can easily be achieved during run-time using volumes.
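For example, a rough run-time sketch (the image name my-app is a placeholder):
# bind-mount a host folder and copy the file out of a container started from the image
docker run --rm -v /opt/temp:/mnt/host my-app cp /opt/myApp/myFile.xml /mnt/host/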
However, if you really want to work around this by all means, you can have a look at the custom build outputs documentation, which introduced support for this kind of activity.
Here is a simple example inspired from the official documentation:
Dockerfile
FROM alpine AS stage-a
RUN mkdir -p /opt/temp/
RUN touch /opt/temp/file-created-at-build-time
RUN echo "Content added at build-time" > /opt/temp/file-created-at-build-time
FROM scratch AS custom-exporter
COPY --from=stage-a /opt/temp/file-created-at-build-time .
For this to work, you need to launch the build command using these arguments:
DOCKER_BUILDKIT=1 docker build --output out .
This will create on your host, alongside the Dockerfile, a directory out with the file you need:
.
├── Dockerfile
└── out
└── file-created-at-build-time
cat out/file-created-at-build-time
Content added at build-time

Troubleshoot directory path error in COPY command in docker file

I am using the COPY command in my Dockerfile on top of ubuntu 16.04. I am getting an error, "no such file or directory", even though the directory is present. In the Dockerfile below, I want to copy the directory "auth" (present inside the workspace directory) into the docker image at path /home/ubuntu, and then build the image.
FROM ubuntu:16.04
RUN apt-get update
COPY /home/ubuntu/authentication/workspace /home/ubuntu
WORKDIR /home/ubuntu/auth
A Dockerfile COPY command can only refer to files under the build context, i.e. the directory you pass to docker build (the . argument).
so you have a few options now:
If it is possible to copy the /home/ubuntu/authentication/workspace/ directory contents somewhere inside your project before the build (so that it is included in your Dockerfile's context and you can access it via COPY ./path/to/content /home/ubuntu), that can be great, but sometimes you don't want that.
Instead of copying the directory, you can bind it to your container via a volume:
When you run the container, add a -v option:
docker run [....] -v /home/ubuntu/authentication/workspace:/home/ubuntu [...]
Mind that a volume is designed so that any change you make inside the container directory (/home/ubuntu) will affect the bound directory on your host side (/home/ubuntu/authentication/workspace), and vice versa.
I found something else over here: you can force the build to use a different context. Sit inside the /home/ubuntu/authentication/workspace/ directory and run:
docker build . -f /path/to/Dockerfile
So now, inside the Dockerfile, /home/ubuntu/authentication/workspace is the context (.); see the sketch below.
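Concretely, a minimal sketch of that last option (the Dockerfile location and image tag are made up):
cd /home/ubuntu/authentication/workspace
docker build . -f /home/ubuntu/dockerfiles/Dockerfile -t my-auth-image
# inside that Dockerfile, paths are now relative to the workspace, e.g.:
COPY ./auth /home/ubuntu/auth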

Docker: How to copy a file from one folder in a container to another?

I want to copy my compiled war file to the Tomcat deployment folder in a Docker container. As COPY and ADD deal with moving files from the host to the container, I tried
RUN mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
as a modification to the answer for this question. But I am getting the error
mv: cannot stat '/tmp/projects/myproject/target/myproject.war': No such file or directory
How can I copy from one folder to another in the same container?
You can create a multi-stage build:
https://docs.docker.com/develop/develop-images/multistage-build/
Build the .war file in the first stage and name the stage, e.g. build, like this:
FROM my-fancy-sdk as build
RUN my-fancy-build #result is your myproject.war
Then in the second stage:
FROM my-fancy-sdk as build2
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
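For instance, a rough sketch with Maven and Tomcat (the image tags, paths, and build command are assumptions; adjust to your project):
# first stage: build the war
FROM maven:3-jdk-8 AS build
WORKDIR /tmp/projects/myproject
COPY . .
RUN mvn package    # produces target/myproject.war
# second stage: copy the artifact into Tomcat's deployment folder
FROM tomcat:9
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/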
A better solution would be to use volumes to bind individual war files inside the Docker container, as done here.
Why your command fails
The command you are running tries to access files which are outside of the build context for the Dockerfile. When you build the image using docker build ., the daemon sends the context to the builder, and only those files are accessible during the build. In docker build ., the context is ., the current directory. Therefore, it will not be able to access /tmp/projects/myproject/target/myproject.war.
Copying from inside the container
Another option would be to copy while you are inside the container: first use volumes to mount the local folder inside the container, then go inside the container using docker exec -it <container_name> bash, and copy the required files.
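A rough sketch of that flow (the container name, host path, and mount point are illustrative):
docker run -d --name myapp -v /opt/host-out:/mnt/host <image_name>
docker exec -it myapp bash
# inside the container:
cp /tmp/projects/myproject/target/myproject.war /mnt/host/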
Recommendation
But still, I highly recommend using:
docker run -v "/tmp/projects/myproject/target/myproject.war:/usr/local/tomcat/webapps/myproject.war" <image_name>

How can we use shared Dockerfile's which reside in different directories than the "current build context path"?

With the following filesystem structure in mind:
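(abbreviated, showing only the relevant files)
groups/
|__ DockerfileB
|__ a/
|__ |__ default.sh
|__ b/
|__ |__ default.sh
|__ c/
|__ |__ default.sh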
I want to execute docker build and copy the "default.sh" file into the Docker container. I want to use DockerfileB for directories a, b and c, applying the same Dockerfile to all three. However, Docker throws an error saying the Dockerfile must be in the same build context.
So I run this:
cd /home/oleg/WebstormProjects/oresoftware/sumanjs/suman/test/groups/c && docker build --file=/home/oleg/WebstormProjects/oresoftware/sumanjs/suman/test/groups/Dockerfile -t c .
and I get this error:
unable to prepare context: The Dockerfile (/home/oleg/WebstormProjects/oresoftware/sumanjs/suman/test/groups/DockerfileB) must be within the build context (.)
Is there any way around this? Maybe symlinks? Desperate for a solution to this, because I would like to share Dockerfiles instead of having to copy identical ones to subfolders, etc.
Symlinks won't work in this case for the same reason as the Dockerfile: the symlink target will be outside the "build context", and Docker doesn't allow that.
Hard-linking the Dockerfile into each of the a, b and c directories would work to share the one file with Docker, but that doesn't play nicely with source control like Git.
A build argument might allow reuse of the Dockerfile. Use a docker build context of the groups directory and whenever you need to reference one of the directories in the Dockerfile, use the variable/argument.
FROM alpine
ARG DIR
COPY ${DIR}/default.sh /default.sh
Then run with
cd groups
docker build --build-arg DIR=c -f ./DockerfileB -t c .
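The same DockerfileB can then be reused for each directory by changing only the build argument, for example:
docker build --build-arg DIR=a -f ./DockerfileB -t a .
docker build --build-arg DIR=b -f ./DockerfileB -t b .
docker build --build-arg DIR=c -f ./DockerfileB -t c .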
