Docker ADD is failing with relative directory

My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
    └── conf.d
        ├── node.conf
        └── supervisord.conf
As per my understanding this should work fine, since
ADD ./supervisord/conf.d/* $SCPATH/
points to a path relative to the Dockerfile build context.
Still, it fails with:
./supervisord/conf.d : no such file or directory exists.
I am new to Docker, so it might be a very basic thing I am missing. I would really appreciate any help.

What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
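For illustration, a .dockerignore with an entry like either of the following (hypothetical, not necessarily what you have) would strip the supervisord directory out of the build context and produce exactly this error:
# hypothetical .dockerignore entries that would exclude the directory from the context
supervisord
**/conf.d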
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<EOF > Dockerfile
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/. The file is found, as its content test is displayed at runtime.
When I change the path ./a/b/c/* to something else, I get the no such file or directory error. When I leave the path as is and invoke docker build from a different folder than the temp folder where I placed the files, the error is shown, too.
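In other words, if you do have to invoke the build from somewhere else, pass the context directory (here the temp folder) explicitly instead of using .; the relative paths in the Dockerfile are resolved against that context (the path below is a placeholder):
docker build -t test /path/to/temp-folder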

Related

Docker - Build image from a Dockerfile in a child folder

I'm running into a problem with Docker that I can't fix by looking at other references and documentation, and since I'm a beginner with Docker I'm trying my luck here. I'm working in a Next.js project that uses Docker to build the app. I'm using the example from the Next.js documentation, and that works if I have my Dockerfile in the root of my project. However, I want to put it in a folder called etc and use it from there. This is giving me problems, because Docker can't find the files that I'm trying to copy to the working directory; see the error below.
Structure
.
├── etc
│   └── Dockerfile
├── package.json
└── yarn.lock
Command
docker build etc/
Error
failed to compute cache key: "/yarn.lock" not found: not found
Dockerfile
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
I've tried a bunch of things, such as changing the files and paths. The documentation also mentions the -f flag, but that doesn't work for me either, because I get the error "docker build" requires exactly 1 argument. when running docker build -f etc/Dockerfile. Is that outdated? Anyway, my question is how to build my app with Docker when my Dockerfile is not in the root of the project but in a child folder like etc/.
You have forgotten the dot at the end of the command docker build -f etc/Dockerfile .
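That is, run the build from the project root so that the context contains package.json and yarn.lock, and point -f at the Dockerfile in the child folder (the image tag here is just an example):
docker build -f etc/Dockerfile -t my-next-app .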

Point Dockerfile to specific folder

I'm simply trying to get my Dockerfile to point to a specific directory, so that when I go to the URL I can do something like this: localhost:80/ask.PNG, and that image will render in the browser.
Currently my Dockerfile builds and runs, but when I try the above it states the files don't exist. Here is what I have.
FROM httpd:2.4
COPY / /MyPath/imagesfolder
The imagesfolder is saved within the same folder as my dockerfile and contains a few different images.
According to hub.docker.com/_/httpd/ you have to do something like this:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
You must copy your files to the specific location from which httpd is going to serve them and this is /usr/local/apache2/htdocs. Put your files there and it will work.
Example
My folder structure (Dockerfile is the same with the above):
~/docker_tests/httpd$ tree
.
├── Dockerfile
└── public-html
    ├── 1.jpg
    ├── 2.jpg
    └── 3.jpg
1 directory, 4 files
Build and run ...
docker build -t my-apache2 .
docker run -dit --name my-running-app -p 8080:80 my-apache2
Access your files at
http://localhost:8080/1.jpg
http://localhost:8080/2.jpg
http://localhost:8080/3.jpg

Building docker where does `RUN mkdir` create a directory - cannot find it when running container

I'm new to Docker and I'm building the Dockerfile below using docker build -t control . It builds successfully with no errors; specifically, it says that it makes the control directory. Then I try to run the image with docker run control, but it gives an error saying that it can't find control/control_file/job.py.
Where does Docker create the control directory? Is it in a container that I cannot see? I can't see it being created anywhere and I'm unsure how to debug this.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir control
COPY control_file/ /control
ENV PYTHONPATH="/control:$PYTHONPATH"
RUN export PYTHONPATH=/control:$PYTHONPATH
CMD ["python","/control/job.py"]
This is the directory structure:
├── control_file
│   ├── insert_to_container.py
│   ├── ip_path
│   ├── job.py
│   └── read_info.py
└── Dockerfile
job.py is now in /control within your Docker build.
With the COPY command you copy all contents of control_file/ into the new directory /control.
Change the last line to:
CMD ["python", "control/job.py"]
Your Dockerfile has mistakes; please find the corrected one below. The control_file directory should be available in the build directory (where you are building the Docker image), and job.py should have execute permission.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir -p /control/control_file
COPY control_file/ /control/control_file
CMD [ "python" , "/control/control_file/job.py" ]

Docker: "lstat no such file or directory" error when building image. File is there

I want to deploy a simple JS boilerplate to Docker Cloud. I use a Dockerfile that I already used for a different boilerplate and image. The Dockerfile is pretty simple: it is just based on the official nginx image, adds two config files, and then adds the output folder of my gulp boilerplate to the nginx root. So I copied it from the one directory to the new boilerplate, since I want to try this one.
The error I'm getting is this (last line)
Sending build context to Docker daemon 277.5 kB
Step 1 : FROM nginx
---> af4b3d7d5401
Step 2 : MAINTAINER Ole Bjarnstroem
---> Using cache
---> f57bc23d9444
Step 3 : ENV LANG en_US.UTF-8
---> Using cache
---> f6f4a76092dd
Step 4 : COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
---> Using cache
---> c4f83a39ba73
Step 5 : COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
---> Using cache
---> 6fe5a6b61d9f
Step 6 : ADD ./dist /usr/share/nginx/html
lstat dist: no such file or directory
But the dist folder is there.
.
├── Dockerfile
├── JSCS.intellij.formatter.xml
├── README.md
├── app
├── dist
├── gulpfile.babel.js
├── jspm.conf.js
├── jspm_packages
├── karma.conf.js
├── nginx
├── node_modules
├── package.json
├── tsconfig.json
├── tslint.json
├── typings
└── typings.json
It might be noteworthy that the folder to be copied used to be called ./public, so I could imagine that this is some kind of weird Docker cache issue.
My Dockerfile:
FROM nginx
ENV LANG en_US.UTF-8
# Copy configuration files
COPY ./nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Add Gulp output folder to server root
ADD ./dist /usr/share/nginx/html
# Port configuration
EXPOSE 8080
What I tried so far:
Deleting dangling and unused images
Deleting the image that was produced by the same docker file before
Using a different tag
My build command:
docker build -t my_repo/my_app .
Thanks for your help!
Edit: Every other folder works. It is also not a problem of file permissions. It seems that Docker just doesn't like the dist folder, which sucks.
Well, stupid me. There was a .dockerignore file containing dist in the project folder... Case closed
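For reference, a single entry like this in .dockerignore is enough to reproduce the lstat error (hypothetical contents, matching what is described above):
dist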
I had the same issue, but it wasn't the .dockerignore; I forgot to specify the directory to run docker in. In my case that directory was . (the current directory). My full command before was
docker build - < Dockerfile
and after was
docker build . < Dockerfile
I put the directory after the build command and used -f to specify the Dockerfile, e.g.:
sudo docker build . -t test:i386 -f mydockerfile
The dot after build is the directory to build from, in this case the present dir.
I also had the same issue. The problem wasn't my .dockerignore but my .gitignore; as I couldn't remove dist from my .gitignore, I added a cp command to my Dockerfile:
....
WORKDIR /
RUN cp -r public/dist/* www/
EXPOSE 80
(credits: https://serverfault.com/a/666154/152918)
The files you want to copy must be inside the Docker build context (the directory you pass to docker build). You cannot reference files anywhere else on your file system.
I had this issue, and the problem turned out to be that I had inlined a comment, e.g.
COPY file1.txt dest/ # comment
Turns out you can't do that.
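Moving the comment onto its own line makes the instruction work as intended:
# comment
COPY file1.txt dest/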
A related bug in the Google App Engine SDK version 138 resulted in the same error message. This bug has been fixed in version 139 of the SDK. You can upgrade to the newest version with the following command:
gcloud components update
For the following docker build error:
COPY failed: stat /<path>: no such file or directory
I got around it by restarting the docker service:
sudo service docker restart
A bit about how I got this error: I was running in gitlab-ci, and when I got to the stage where I run docker build, there was no Dockerfile in the checkout, so I got this error.
There are two ways to solve it:
You can set up the project settings to use a git strategy of fetch/clone. I'm not sure which setting it is, so play around until you get it. This will configure the git strategy for the entire project.
You can also set the git strategy just for the build by adding a variable like this at the top of your gitlab-ci.yml file:
variables:
  GIT_STRATEGY: clone
And it should solve it.
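If you prefer not to change it globally, GitLab CI also lets you set the variable on the build job alone (a sketch, assuming a job named build-image):
build-image:
  variables:
    GIT_STRATEGY: clone
  script:
    - docker build -t my-image .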
On macOS Big Sur I just had to restart my Docker service, then it worked again for me (this always happens after changing my .env).

Docker follow symlink outside context

Yet another Docker symlink question. I have a bunch of files that I want to copy over to all my Docker builds. My dir structure is:
parent_dir
- common_files
  - file.txt
- dir1
  - Dockerfile
  - symlink -> ../common_files
In the above example, I want file.txt to be copied over when I docker build inside dir1. But I don't want to maintain multiple copies of file.txt.
Per this link, as of docker version 0.10, docker build must
Follow symlinks inside container's root for ADD build instructions.
But I get no such file or directory when I build with either of these lines in my Dockerfile:
ADD symlink /path/dirname or
ADD symlink/file.txt /path/file.txt
The mount option will NOT solve it for me (cross platform...).
I tried tar -czh . | docker build -t without success.
Is there a way to make Docker follow the symlink and copy the common_files/file.txt into the built container?
That is not possible and will not be implemented. Please have a look at the discussion in github issue #1676:
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host files and not your local files.
If anyone still has this issue I found a very nice solution on superuser.com:
https://superuser.com/questions/842642/how-to-make-a-symlinked-folder-appear-as-a-normal-folder
It basically suggests using tar to dereference the symlinks and feed the result into docker build:
$ tar -czh . | docker build -
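The same works with a tag and other build options, since only the context comes from stdin, for example:
$ tar -czh . | docker build -t my_repo/my_app -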
One possibility is to run the build in the parent directory, with:
$ docker build [tags...] -f dir1/Dockerfile .
(Or equivalently, from the child directory:)
$ docker build [tags...] -f Dockerfile ..
The Dockerfile will have to be configured to do copy/add with appropriate paths. Depending on your setup, you might want a .dockerignore in the parent to leave out things you don't want to be put into the context.
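With that layout, the COPY/ADD paths in dir1/Dockerfile are written relative to the parent context, for example (a sketch; the base image and target paths are illustrative assumptions):
FROM alpine
# paths are relative to parent_dir, the build context
COPY common_files/file.txt /app/file.txt
COPY dir1/ /app/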
I know that it breaks portability of docker build, but you can use hard links instead of symbolic:
ln /some/file ./hardlink
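Applied to the layout in the question, that would look something like this (run from inside dir1; hard links require source and destination to be on the same filesystem):
# hard-link the shared file next to the Dockerfile
ln ../common_files/file.txt ./file.txt
# then in the Dockerfile: ADD file.txt /path/file.txt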
I just had to solve this issue in the same context. My solution is to use hierarchical Docker builds. In other words:
parent_dir
- common_files
  - Dockerfile
  - file.txt
- dir1
  - Dockerfile (FROM common_files:latest)
The disadvantage is that you have to remember to build common_files before dir1. The advantage is that if you have a number of dependant images then they are all a bit smaller due to using a common layer.
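A sketch of what that looks like (the base image and copy targets are illustrative assumptions):
# parent_dir/common_files/Dockerfile
FROM alpine
COPY file.txt /common/file.txt

# parent_dir/dir1/Dockerfile
FROM common_files:latest
COPY . /app

# build order matters: the shared image first, then the dependent one
docker build -t common_files:latest common_files/
docker build -t dir1 dir1/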
I got frustrated enough that I made a small NodeJS utility to help with this: file-syncer
Given the existing directory structure:
parent_dir
- common_files
  - file.txt
- my-app
  - Dockerfile
  - common_files -> symlink to ../common_files
Basic usage:
cd parent_dir
// starts live-sync of files under "common_files" to "my-app/HardLinked/common_files"
npx file-syncer --from common_files --to my-app/HardLinked
Then in your Dockerfile:
[regular commands here...]
# have docker copy/overlay the HardLinked folder's contents (common_files) into my-app itself
COPY HardLinked /
Q/A
How is this better than just copying parent_dir/common_files to parent_dir/my-app/common_files before Docker runs?
That would mean giving up the regular symlink, which would be a loss, since symlinks are helpful and work fine with most tools. For example, it would mean you can't see/edit the source files of common_files from the in-my-app copy, which has some drawbacks. (see below)
How is this better than copying parent_dir/common-files to parent_dir/my-app/common_files_Copy before Docker runs, then having Docker copy that over to parent_dir/my-app/common_files at build time?
There are two advantages:
file-syncer does not "copy" the files in the regular sense. Rather, it creates hard links from the source folder's files. This means that if you edit the files under parent_dir/my-app/HardLinked/common_files, the files under parent_dir/common_files are instantly updated, and vice-versa, because they reference the same file/inode. (this can be helpful for debugging purposes and cross-project editing [especially if the folders you are syncing are symlinked node-modules that you're actively editing], and ensures that your version of the files is always in-sync/identical-to the source files)
Because file-syncer only updates the hard-link files for the exact files that get changed, file-watcher tools like Tilt or Skaffold detect changes for the minimal set of files, which can mean faster live-update-push times than you'd get with a basic "copy whole folder on file change" tool.
How is this better than a regular file-sync tool like Syncthing?
Some of those tools may be usable, but most have issues of one kind or another. The most common one is that the tool either cannot produce hard-links of existing files, or it's unable to "push an update" for a file that is already hard-linked (since hard-linked files do not notify file-watchers of their changes automatically, if the edited-at and watched-at paths differ). Another is that many of these sync tools are not designed for instant responding, and/or do not have run flags that make them easy to use in restricted build tools. (e.g. for Tilt, the --async flag of file-syncer enables it to be used in a local(...) invocation in the project's Tiltfile)
One tool that allows you to "link" a directory in a way that is accepted by docker is docker itself.
It is possible to run a temporary docker container, with all necessary files/directories mounted in adequate paths, and build image from within such container. For example:
docker run -it \
--rm \
-v /var/run/docker.sock:/var/run/docker.sock \
--mount "type=bind,source=$ImageRoot/Dockerfile,destination=/Image/Dockerfile,readonly" \
--mount "type=bind,source=$ImageRoot/.dockerignore,destination=/Image/.dockerignore,readonly" \
--mount "type=bind,source=$ReposRoot/project1,destination=/Image/project1,readonly" \
--mount "type=bind,source=$ReposRoot/project2,destination=/Image/project2,readonly" \
--env DOCKER_BUILDKIT=1 \
docker:latest \
docker build "/Image" --tag "my_tag"
In the above example I assume the variables $ImageRoot and $ReposRoot are set.
Instead of using symlinks, it is possible to solve the problem administratively by just moving files from sites_available to sites_enabled instead of copying or symlinking them.
That way your site config exists in only one copy: in sites_available if it is stopped, or in sites_enabled if it should be used.
Use a small wrapper script to copy the needed dir to the Dockerfile's location, e.g. build.sh:
#!/bin/bash
[ -e bin ] && rm -rf bin
cp -r ../../bin .
docker build -t "sometag" .
Commonly I isolate the build instructions in a subfolder, so the application and logic levels live higher up:
.
├── app
│   ├── package.json
│   ├── modules
│   └── logic
├── deploy
│   ├── back
│   │   ├── nginx
│   │   │   └── Chart.yaml
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   ├── front
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   └── skaffold.yaml
└── .......
I use the name ".applift" for those symbolic links:
.applift -> ../../app
And now I follow the symlink via realpath, without caring about the path depth:
dir/deploy/front$ docker build -f Containerfile --tag="build" `realpath .applift`
or pack it into a function:
dir/deploy$ docker_build () { docker build -f "$1"/Containerfile --tag="$2" `realpath "$1/.applift"`; }
dir/deploy$ docker_build ./back "front_builder"
so
COPY app/logic/ ./app/
in the Containerfile will work.
Yes, in this case you lose the context for other layers, but generally there are no other context files located in the build directory.
I had a situation where the parent_dir contained common libraries in common_files/ and a common docker/Dockerfile. dir1/ contained the contents of a different code repository but I wanted that repository to have access to those parent code repository folders. I solved it without using symlinks as follows:
parent_dir
- common_files
  - file.txt
- docker
  - Dockerfile
- dir1
  - docker-compose.yml --> ../common_files
                       --> ../docker/Dockerfile
So I created a docker-compose.yml file, where I specified where the files were located relative to the docker-compose.yml from where it would be executed. I also tried to minimise changes to the Dockerfile, since it would be used by both repositories, so I provided a DIR argument to specify a subdirectory to run:
version: "3.8"
services:
dev:
container_name: template
build:
context: "../"
dockerfile: ./docker/Dockerfile
args:
- DIR=${DIR}
volumes:
- ./dir1:/app
- ./common_files:/common_files
I ran the following from within the dir1/ folder and it ran successfully:
export DIR=./dir1 && docker compose -f docker-compose.yml build
This is the original Dockerfile:
...
WORKDIR /app
COPY . /app
RUN my_executable
...
And this is a snippet with changes I made to the Dockerfile:
...
ARG DIR=${DIR}
WORKDIR /app
COPY . /app
RUN cd ${DIR} && my_executable && cd /app
...
This worked, and the parent repository could still run the Dockerfile with the same outcome even though I had introduced the DIR argument: if the parent repository called it, DIR would be an empty string and it would behave like it did before.
If you're on mac, remember to do
brew install gnu-tar
and use gtar instead of tar. Seems there are some differences between the two.
gtar worked for me at least.
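For example, Homebrew installs GNU tar under the name gtar, so the stdin-context trick from above becomes:
gtar -czh . | docker build -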
