During build time, I want to copy a file from the image (from folder /opt/myApp/myFile.xml), to my host folder /opt/temp
In the Dockerfile, I'm using the --mount as follows, trying to mount to my local test folder:
RUN --mount=target=/opt/temp,type=bind,source=test cp /opt/myApp/myFile.xml /opt/temp
I'm building the image successfully, but the local test folder is empty
any ideas?
Copying files from the image to the host at build-time is not supported.
This can easily be achieved during run-time using volumes.
However, if you really want to work around this by all means, you can have a look at the custom build outputs documentation, which introduced support for this kind of activity.
Here is a simple example inspired from the official documentation:
Dockerfile
FROM alpine AS stage-a
RUN mkdir -p /opt/temp/
RUN touch /opt/temp/file-created-at-build-time
RUN echo "Content added at build-time" > /opt/temp/file-created-at-build-time
FROM scratch AS custom-exporter
COPY --from=stage-a /opt/temp/file-created-at-build-time .
For this to work, you need to launch the build command using these arguments:
DOCKER_BUILDKIT=1 docker build --output out .
This will create on your host, alongside the Dockerfile, a directory out with the file you need:
.
├── Dockerfile
└── out
└── file-created-at-build-time
cat out/file-created-at-build-time
Content added at build-time
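Applied to the original question, a minimal sketch could look like this (assuming the instructions that produce /opt/myApp/myFile.xml live in a build stage; the stage name app-build is hypothetical):
FROM alpine AS app-build
# ... steps that create /opt/myApp/myFile.xml ...
FROM scratch AS custom-exporter
COPY --from=app-build /opt/myApp/myFile.xml .
Built with DOCKER_BUILDKIT=1 docker build --output /opt/temp ., this writes myFile.xml directly into /opt/temp on the host.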
My basic requirement is to create a Docker image and deploy it to a Docker registry.
I have a pre-configured application folder (/home/myfolder) on my Jenkins server (I used an Ansible script to do this configuration). I then need to create a Docker image from that folder and deploy it to a Docker registry.
What's the best way to do this? Please help me with this as I'm new to Docker.
Please find my Dockerfile below:
#Download base image ubuntu 16.04
FROM ubuntu
WORKDIR /dockerprojects
#copy the zip file to docker folder
COPY /wso2telcohub-4.0.0-SNAPSHOT /home/myfolder/dockerprojects/IGW/dockerCI
COPY cp /wso2is-km-5.6.0 /home/myfolder/dockerprojects/IGW/dockerCI
CMD ["bash"]
There are a bunch of things in that Dockerfile that can potentially go sideways. I will comment on them one by one here:
#Download base image ubuntu 16.04
FROM ubuntu
If you intend to use the ubuntu:16.04 image, you need to specify it. Without a specific tag, the FROM instruction will look for the latest tag and in this case pull the image ubuntu:latest for you.
WORKDIR /dockerprojects
This command sets the workdir inside the docker image, so that when the container starts, the session's PWD will be set to /dockerprojects. This is important because all other commands during the build, and when the container is started, will be relative to this location in the file structure.
#copy the zip file to docker folder
COPY /wso2telcohub-4.0.0-SNAPSHOT /home/myfolder/dockerprojects/IGW/dockerCI
This command will copy the file /wso2telcohub-4.0.0-SNAPSHOT from the "host machine", the machine where the docker image is being built, into the image at the location /home/myfolder/dockerprojects/IGW/dockerCI. If the location does not already exist in the image, then it will create a file named dockerCI at the location /home/myfolder/dockerprojects/IGW/. I don't think that this is what you want.
Also, your comment states that this is a zip file, but it seems to be missing an extension like .zip or .gz - I believe that you are not referencing the file correctly.
COPY cp /wso2is-km-5.6.0 /home/myfolder/dockerprojects/IGW/dockerCI
This instruction will not execute. For the COPY instruction you don't need to use a "cp" command. If you removed "cp" from the line, however, it would try to copy the file or directory /wso2is-km-5.6.0 (a leading / here refers to the root of the build context, not the host's filesystem root) to the location /home/myfolder/dockerprojects/IGW/dockerCI inside the resulting image.
CMD ["bash"]
The CMD instruction simply sets the image to start a new bash shell when started, which will make the container exit immediately when the bash command completes.
I have a feeling that the source location of the files that you want to put in the image is not in the root of the host machine, but probably at /home/myfolder/dockerprojects/ on the host that you mention. I have asked you to clarify the location of the files that you want in the image in a comment on your question.
Update
The error that you are getting 'no such file or directory' means that the source file that you are referencing in the COPY instruction, does not exist.
The COPY instruction works like this:
COPY <sourcepath> <targetpath>
Where <sourcepath> is the path of the file on the machine where the image is being built. This path is resolved relative to the build context, the directory you pass to docker build (usually the one holding the Dockerfile); files outside the build context cannot be copied. And <targetpath> is the desired path inside the resulting image.
Let's say that I have the following folder structure:
/home/myfolder/dockerprojects/
├── Dockerfile
├── wso2telcohub-4.0.0-SNAPSHOT.zip
├── wso2is-km-5.6.0/
│ ├── anotherfile.txt
And I wanted all the files in the path /home/myfolder/dockerprojects/ to be put inside the docker image, below the path /app. I would do this with a Dockerfile like:
FROM ubuntu:16.04
WORKDIR /app
COPY . /app/
Or each file individually like this:
FROM ubuntu:16.04
WORKDIR /app
COPY ./wso2telcohub-4.0.0-SNAPSHOT.zip /app/wso2telcohub-4.0.0-SNAPSHOT.zip
COPY ./wso2is-km-5.6.0 /app/wso2is-km-5.6.0
That would leave me with the following in the docker image:
/app/
├── Dockerfile
├── wso2telcohub-4.0.0-SNAPSHOT.zip
├── wso2is-km-5.6.0/
│ ├── anotherfile.txt
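To build and publish an image from that layout, you would run docker build from the folder containing the Dockerfile, so the relative source paths resolve against the build context, then tag and push (the image name and registry host below are only examples):
cd /home/myfolder/dockerprojects
docker build -t myimage:latest .
docker tag myimage:latest myregistry.example.com/myimage:latest
docker push myregistry.example.com/myimage:latest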
I'm very new to Docker. I have a Golang app that has the following structure:
.
├── 404.html
├── Dockerfile
├── index.html
├── scripts
├── server.go
├── static
│ ├── jquery.min.js
│ ├── main.css
│ └── main.js
└── styles
I got the Dockerfile from DockerHub. It's too large to post here, but the full version is here. The last few lines of the Dockerfile, which I think might be relevant, are:
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
When I go into my directory and type in docker build -t my-app ., it builds successfully. When I type in docker run -d -p 80:80 url-shortener, it gives me a string, which I assume is the ID.
But when I do docker ps, it doesn't show the process running.
If I do docker ps -a, it shows the process but it says,
CONTAINER ID   IMAGE    COMMAND     CREATED         STATUS                     PORTS   NAMES
6adc34244350   my-app   "/bin/sh"   6 minutes ago   Exited (0) 6 minutes ago
I apologize if this is a very dumb question. I'm a complete Docker noob and could use some guidance.
From your docker ps output, your image is configured to only run a shell. Since you ran it as a background process without any input, it processed all of its input and exited successfully (status code 0). docker logs 6adc34244350 will show you what output (if any) it produced.
Docker has an excellent tutorial on writing and running custom images that's worth reading. In particular, you shouldn't copy the official golang Dockerfile; your own Dockerfile can start with FROM golang:1.10 and it will inherit everything in that image. You also almost certainly want to make sure you have a CMD command that runs your application (by default) when the container starts up.
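As a rough starting point, a Dockerfile for an app with your layout might look like the sketch below. It is only a sketch: it assumes server.go is a package main file that listens on port 80, and the binary name server is an assumption; adjust both to your app:
FROM golang:1.10
WORKDIR /go/src/app
# copy server.go plus the html/static assets into the image
COPY . .
# compile the server; the output name "server" is an assumption
RUN go build -o /usr/local/bin/server ./server.go
EXPOSE 80
# run the app, rather than an interactive shell, when the container starts
CMD ["/usr/local/bin/server"]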
Yet another Docker symlink question. I have a bunch of files that I want to copy over to all my Docker builds. My dir structure is:
parent_dir
- common_files
- file.txt
- dir1
- Dockerfile
- symlink -> ../common_files
In the above example, I want file.txt to be copied over when I run docker build inside dir1. But I don't want to maintain multiple copies of file.txt.
Per this link, as of docker version 0.10, docker build must
Follow symlinks inside container's root for ADD build instructions.
But I get no such file or directory when I build with either of these lines in my Dockerfile:
ADD symlink /path/dirname or
ADD symlink/file.txt /path/file.txt
The mount option will NOT solve it for me (cross-platform...).
I tried tar -czh . | docker build -t without success.
Is there a way to make Docker follow the symlink and copy the common_files/file.txt into the built container?
That is not possible and will not be implemented. Please have a look at the discussion on github issue #1676:
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host files and not your local files.
If anyone still has this issue I found a very nice solution on superuser.com:
https://superuser.com/questions/842642/how-to-make-a-symlinked-folder-appear-as-a-normal-folder
It basically suggests using tar to dereference the symlinks and feed the result into docker build:
$ tar -czh . | docker build -
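If you want a tagged image, the same trick works with -t (the tag my-image is just an example):
$ tar -czh . | docker build -t my-image -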
One possibility is to run the build in the parent directory, with:
$ docker build [tags...] -f dir1/Dockerfile .
(or equivalently, from the child directory:)
$ docker build [tags...] -f Dockerfile ..
The Dockerfile will have to be configured to do copy/add with appropriate paths. Depending on your setup, you might want a .dockerignore in the parent to leave out
things you don't want to be put into the context.
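With the context set to parent_dir, the paths inside dir1/Dockerfile are then written relative to parent_dir, for example (the /app target is only illustrative):
COPY common_files/file.txt /app/file.txt
COPY dir1/ /app/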
I know that it breaks the portability of docker build, but you can use hard links instead of symbolic links:
ln /some/file ./hardlink
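Note that hard links only work for files on the same filesystem, and directories cannot be hard-linked, so you link the files individually. For the layout in the question that could be (run from parent_dir):
ln common_files/file.txt dir1/file.txt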
I just had to solve this issue in the same context. My solution is to use hierarchical Docker builds. In other words:
parent_dir
- common_files
- Dockerfile
- file.txt
- dir1
- Dockerfile (FROM common_files:latest)
The disadvantage is that you have to remember to build common_files before dir1. The advantage is that if you have a number of dependent images, then they are all a bit smaller due to using a common layer.
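A minimal sketch of the two Dockerfiles under this scheme (the image names and the /common target path are examples):
# common_files/Dockerfile
FROM ubuntu:16.04
COPY file.txt /common/file.txt
# dir1/Dockerfile
FROM common_files:latest
built in order with:
docker build -t common_files:latest common_files/
docker build -t dir1 dir1/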
I got frustrated enough that I made a small NodeJS utility to help with this: file-syncer
Given the existing directory structure:
parent_dir
- common_files
- file.txt
- my-app
- Dockerfile
- common_files -> symlink to ../common_files
Basic usage:
cd parent_dir
# starts live-sync of files under "common_files" to "my-app/HardLinked/common_files"
npx file-syncer --from common_files --to my-app/HardLinked
Then in your Dockerfile:
[regular commands here...]
# have docker copy/overlay the HardLinked folder's contents (common_files) into my-app itself
COPY HardLinked /
Q/A
How is this better than just copying parent_dir/common_files to parent_dir/my-app/common_files before Docker runs?
That would mean giving up the regular symlink, which would be a loss, since symlinks are helpful and work fine with most tools. For example, it would mean you can't see/edit the source files of common_files from the in-my-app copy, which has some drawbacks. (see below)
How is this better than copying parent_dir/common-files to parent_dir/my-app/common_files_Copy before Docker runs, then having Docker copy that over to parent_dir/my-app/common_files at build time?
There are two advantages:
file-syncer does not "copy" the files in the regular sense. Rather, it creates hard links from the source folder's files. This means that if you edit the files under parent_dir/my-app/HardLinked/common_files, the files under parent_dir/common_files are instantly updated, and vice-versa, because they reference the same file/inode. (this can be helpful for debugging purposes and cross-project editing [especially if the folders you are syncing are symlinked node-modules that you're actively editing], and ensures that your version of the files is always in-sync/identical-to the source files)
Because file-syncer only updates the hard-link files for the exact files that get changed, file-watcher tools like Tilt or Skaffold detect changes for the minimal set of files, which can mean faster live-update-push times than you'd get with a basic "copy whole folder on file change" tool.
How is this better than a regular file-sync tool like Syncthing?
Some of those tools may be usable, but most have issues of one kind or another. The most common one is that the tool either cannot produce hard-links of existing files, or it's unable to "push an update" for a file that is already hard-linked (since hard-linked files do not notify file-watchers of their changes automatically, if the edited-at and watched-at paths differ). Another is that many of these sync tools are not designed for instant responses, and/or do not have run flags that make them easy to use in restricted build tools. (e.g. for Tilt, the --async flag of file-syncer enables it to be used in a local(...) invocation in the project's Tiltfile)
One tool that can "link" a directory in a way that Docker accepts is Docker itself.
It is possible to run a temporary docker container with all necessary files/directories mounted at adequate paths, and build the image from within that container. For example:
docker run -it \
--rm \
-v /var/run/docker.sock:/var/run/docker.sock \
--mount "type=bind,source=$ImageRoot/Dockerfile,destination=/Image/Dockerfile,readonly" \
--mount "type=bind,source=$ImageRoot/.dockerignore,destination=/Image/.dockerignore,readonly" \
--mount "type=bind,source=$ReposRoot/project1,destination=/Image/project1,readonly" \
--mount "type=bind,source=$ReposRoot/project2,destination=/Image/project2,readonly" \
--env DOCKER_BUILDKIT=1 \
docker:latest \
docker build "/Image" --tag "my_tag"
In the above example I assume the variables $ImageRoot and $ReposRoot are set.
Instead of using symlinks, it is possible to solve the problem administratively by just moving files from sites_available to sites_enabled instead of copying or symlinking them.
That way your site config exists in only one copy: in the sites_available folder when the site is stopped, or in sites_enabled when it should be used.
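For example, with a hypothetical site config:
mv sites_available/mysite.conf sites_enabled/    # enable the site
mv sites_enabled/mysite.conf sites_available/    # disable it again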
Use a small wrapper script to copy the needed dir to the Dockerfile's location;
build.sh:
#!/bin/bash
# remove any stale copy of the shared directory, then rebuild the image
[ -e bin ] && rm -rf bin
cp -r ../../bin .
docker build -t "sometag" .
I commonly isolate build instructions in a subfolder, so the application and logic levels sit higher up:
.
├── app
│ ├── package.json
│ ├── modules
│ └── logic
├── deploy
│ ├── back
│ │ ├── nginx
│ │ │ └── Chart.yaml
│ │ ├── Containerfile
│ │ ├── skaffold.yaml
│ │ └── .applift -> ../../app
│ ├── front
│ │ ├── Containerfile
│ │ ├── skaffold.yaml
│ │ └── .applift -> ../../app
│ └── skaffold.yaml
└── .......
I use the name ".applift" for those symbolic links:
.applift -> ../../app
Now I can follow the symlink via realpath, without caring about path depth:
dir/deploy/front$ docker build -f Containerfile --tag="build" `realpath .applift`
or pack it in a function:
dir/deploy$ docker_build () { docker build -f "$1"/Containerfile --tag="$2" `realpath "$1/.applift"`; }
dir/deploy$ docker_build ./back "front_builder"
so that
COPY app/logic/ ./app/
in the Containerfile will work.
Yes, in this case you lose the context for the other layers, but generally there are no other context files located in the build directory.
I had a situation where the parent_dir contained common libraries in common_files/ and a common docker/Dockerfile. dir1/ contained the contents of a different code repository but I wanted that repository to have access to those parent code repository folders. I solved it without using symlinks as follows:
parent_dir
- common_files
- file.txt
- docker
- Dockerfile
- dir1
- docker-compose.yml --> ../common_files
--> ../docker/Dockerfile
So I created a docker-compose.yml file, where I specified where the files were located relative to the docker-compose.yml where it would be executed from. I also tried to minimise changes to the Dockerfile, since it would be used by both repositories, so I provided a DIR argument to specify a subdirectory to run in:
version: "3.8"
services:
dev:
container_name: template
build:
context: "../"
dockerfile: ./docker/Dockerfile
args:
- DIR=${DIR}
volumes:
- ./dir1:/app
- ./common_files:/common_files
I ran the following from within the dir1/ folder and it ran successfully:
export DIR=./dir1 && docker compose -f docker-compose.yml build
This is the original Dockerfile:
...
WORKDIR /app
COPY . /app
RUN my_executable
...
And this is a snippet with changes I made to the Dockerfile:
...
ARG DIR=${DIR}
WORKDIR /app
COPY . /app
RUN cd ${DIR} && my_executable && cd /app
...
This worked, and the parent repository could still run the Dockerfile with the same outcome even though I had introduced the DIR argument: when the parent repository called it, DIR was an empty string and the build behaved as it did before.
If you're on a Mac, remember to do
brew install gnu-tar
and use gtar instead of tar; there seem to be some differences between the two.
gtar worked for me, at least.
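So on a Mac, the workaround from above becomes:
gtar -czh . | docker build -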
My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this:
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
└── conf.d
├── node.conf
└── supervisord.conf
As per my understanding this should work fine, as
ADD ./supervisord/conf.d/* $SCPATH/
points to a relative path in terms of the Dockerfile build context.
Still it fails with
./supervisord/conf.d : no such file or directory exists.
I am new to Docker, so this might be a very basic thing I am missing. I'd really appreciate any help.
What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
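For example, a .dockerignore along these lines (purely hypothetical) would strip the supervisord directory out of the build context and produce exactly that error, because anything excluded there is never sent to the docker daemon:
*
!index.js
!package.json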
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<EOF > Dockerfile
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/. The file is found, as its content test is displayed at runtime.
When I now change the path ./a/b/c/* to something else, I get the no such file or directory exists error. When I leave the path as is and invoke docker build from a different folder than the temp folder where I placed the files, the error is shown too.