I'm simply trying to get my dockerfile to point to a specific directory so that when I go to the URL I can do something like this: localhost:80/ask.PNG, and that image will render in the browser.
Currently my Dockerfile builds and runs, but when I try the above it states the files don't exist. Here is what I have.
FROM httpd:2.4
COPY / /MyPath/imagesfolder
The imagesfolder is saved within the same folder as my dockerfile and contains a few different images.
According to hub.docker.com/_/httpd/ you have to do something like this:
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
You must copy your files to the specific location from which httpd serves them, which is /usr/local/apache2/htdocs. Put your files there and it will work.
Example
My folder structure (Dockerfile is the same with the above):
~/docker_tests/httpd$ tree
.
├── Dockerfile
└── public-html
    ├── 1.jpg
    ├── 2.jpg
    └── 3.jpg

1 directory, 4 files
Build and run ...
docker build -t my-apache2 .
docker run -dit --name my-running-app -p 8080:80 my-apache2
Access your files at
http://localhost:8080/1.jpg
http://localhost:8080/2.jpg
http://localhost:8080/3.jpg
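Applied to your layout, a minimal sketch of the Dockerfile could look like this (assuming imagesfolder sits next to the Dockerfile):
FROM httpd:2.4
COPY ./imagesfolder/ /usr/local/apache2/htdocs/
If you publish port 80 with -p 80:80 when running the container, the image should then be reachable at http://localhost:80/ask.PNG.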
I'm new to Docker and I'm building the Dockerfile below using docker build -t control . It builds successfully with no errors; specifically, it says that it makes the control directory. Then I try to run the image with docker run control, but it gives an error saying that it can't find control/control_file/job.py.
Where does Docker create the control directory? Is it in a container that I cannot see? I can't see it being created anywhere and I'm unsure how to debug this.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir control
COPY control_file/ /control
ENV PYTHONPATH="/control:$PYTHONPATH"
RUN export PYTHONPATH=/control:$PYTHONPATH
CMD ["python","/control/job.py"]
This is the directory structure:
├── control_file
│   ├── insert_to_container.py
│   ├── ip_path
│   ├── job.py
│   └── read_info.py
└── Dockerfile
The job.py is now in /control within your Docker build.
With the COPY command you copy all contents within control_file/ into the new directory /control.
Change the last line to:
CMD ["python", "control/job.py"]
Your Dockerfile has mistakes; please find the corrected one below. The control_file directory should be available in the build directory (where you are building the Docker image), and job.py should have execute permission.
FROM python:2
RUN pip install requests\
&& pip install pymongo
RUN mkdir -p /control/control_file
COPY control_file/ /control/control_file
CMD [ "python" , "/control/control_file/job.py" ]
I'm using the official golang alpine image to compile my source code (my host machine is a Mac), and I've noticed that even when mounting the whole $GOPATH inside the container it doesn't use cached data from previous builds. I checked that it creates the compiled packages in the $GOPATH/pkg directory, but that does not affect the speed of subsequent builds.
However, if I reuse the same container for several compilations, it does make use of some kind of cache. You can see the results in this experiment I did:
Using different containers, time remains around 28-30s in each build:
$ rm -r $GOPATH/pkg/linux_amd64
$ time docker run -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 go build -i github.com/myrepo/mypackage
...
0.02s user 0.08s system 0% cpu 30.914 total
$ time docker run -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 go build -i github.com/myrepo/mypackage
...
0.02s user 0.07s system 0% cpu 28.128 total
Reusing the same container, subsequent builds are much faster:
$ rm -r $GOPATH/pkg/linux_amd64
$ docker run -d -v$GOPATH:/go -e CGO_ENABLED=0 golang:1.9-alpine3.6 tail -f /dev/null
bb4c08867bf2a28ad87facf00fa9dcf2800ad480fe1e66eb4d8a4947a6efec1d
$ time docker exec bb4c08867bf2 go build -i github.com/myrepo/mypackage
...
0.02s user 0.05s system 0% cpu 27.028 total
$ time docker exec bb4c08867bf2 go build -i github.com/myrepo/mypackage
0.02s user 0.06s system 0% cpu 7.409 total
Is Go using any kind of cache in some place outside of $GOPATH?
To anyone who landed here from a Google search: I found a working answer in a Reddit post.
It basically says to map /root/.cache/go-build to your host's Go build cache folder.
In my case I am on Windows and have a project that requires cross compilation with gcc, so I had to spin up a Linux container to build a binary to be deployed to an Alpine container, and I mapped the cache to a data volume instead:
some-volume-name:/root/.cache/go-build
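A minimal sketch of that as a single docker run invocation, assuming a Go version recent enough to have the build cache (Go 1.10+, here the golang:1.10-alpine image) and a hypothetical named volume go-build-cache:
docker run --rm \
  -v go-build-cache:/root/.cache/go-build \
  -v $GOPATH:/go \
  -e CGO_ENABLED=0 \
  golang:1.10-alpine \
  go build github.com/myrepo/mypackage
The second run of this command should be noticeably faster, because the named volume keeps the build cache across containers.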
When you are building inside the golang container, it is using the directory $GOPATH/pkg inside this container. If you then start another golang container, it has an empty $GOPATH/pkg. However if you continue to use the same container (with exec), the $GOPATH/pkg is re-used.
rm -r $GOPATH/pkg/linux_amd64 will only remove this directory on your local machine. So this has no effect.
A possible alternative to re-using the same container could be
to commit the container after the first build, or
to mount $GOPATH/pkg as volume from your host machine or a data volume.
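Rough sketches of both options, with hypothetical container and volume names:
# Option 1: commit the container after the first build
docker commit my-running-build golang-with-pkg
# Option 2: keep /go/pkg in a named volume so compiled packages survive across runs
docker run --rm -v $GOPATH:/go -v go-pkg:/go/pkg -e CGO_ENABLED=0 \
  golang:1.9-alpine3.6 go build -i github.com/myrepo/mypackage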
Use the -v flag to print which packages are getting compiled. This might be a better indicator than time spent.
I was able to produce the desired result by mounting the GOPATH as a volume (as you have done, so it should work...). Please see the snippet below. The first time it compiles both packages, the second time just the main package.
Side note: one issue I've had with this approach is that the volume dir will "overwrite" (i.e. shadow) anything already in the image at that dir, which is fine if you are using just the base golang alpine image since /go should be empty.
pkm$ tree
.
└── src
    └── github.com
        ├── org1
        │   └── mine
        │       └── main.go
        └── org2
            └── somelib
                └── lib.go

6 directories, 2 files
pkm$ docker run --rm -v $GOPATH:/go golang:1.9-alpine go build -i -v github.com/org1/mine
github.com/org2/somelib
github.com/org1/mine
pkm$ tree
.
├── mine
├── pkg
│   └── linux_amd64
│       └── github.com
│           └── org2
│               └── somelib.a
└── src
    └── github.com
        ├── org1
        │   └── mine
        │       └── main.go
        └── org2
            └── somelib
                └── lib.go

10 directories, 4 files
pkm$ docker run --rm -v $GOPATH:/go golang:1.9-alpine go build -i -v github.com/org1/mine
github.com/org1/mine
pkm$
Yet another Docker symlink question. I have a bunch of files that I want to copy over to all my Docker builds. My dir structure is:
parent_dir
 - common_files
   - file.txt
 - dir1
   - Dockerfile
   - symlink -> ../common_files
In the above example, I want file.txt to be copied over when I docker build inside dir1. But I don't want to maintain multiple copies of file.txt.
Per this link, as of docker version 0.10, docker build must
Follow symlinks inside container's root for ADD build instructions.
But I get no such file or directory when I build with either of these lines in my Dockerfile:
ADD symlink /path/dirname or
ADD symlink/file.txt /path/file.txt
The mount option will NOT solve it for me (cross platform...).
I tried tar -czh . | docker build -t without success.
Is there a way to make Docker follow the symlink and copy the common_files/file.txt into the built container?
That is not possible and will not be implemented. Please have a look at the discussion in GitHub issue #1676:
We do not allow this because it's not repeatable. A symlink on your machine is not the same as on my machine, and the same Dockerfile would produce two different results. Also, having symlinks to /etc/passwd would cause issues because it would link the host files and not your local files.
If anyone still has this issue I found a very nice solution on superuser.com:
https://superuser.com/questions/842642/how-to-make-a-symlinked-folder-appear-as-a-normal-folder
It basically suggests using tar to dereference the symlinks and feed the result into docker build:
$ tar -czh . | docker build -
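If you also want to tag the resulting image, the same pipe works with -t (my-image is just a placeholder name):
$ tar -czh . | docker build -t my-image -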
One possibility is to run the build in the parent directory, with:
$ docker build [tags...] -f dir1/Dockerfile .
(Or, equivalently, from the child directory:)
$ docker build [tags...] -f Dockerfile ..
The Dockerfile will have to be configured to do copy/add with the appropriate paths. Depending on your setup, you might want a .dockerignore in the parent to leave out things you don't want to be put into the context.
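With the parent directory as the build context, the COPY/ADD paths inside dir1/Dockerfile are then written relative to parent_dir, roughly like this (a sketch based on the layout in the question):
# dir1/Dockerfile, built with parent_dir as the context
COPY common_files/file.txt /path/file.txt
COPY dir1/ /app/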
I know that it breaks the portability of docker build, but you can use hard links instead of symbolic links:
ln /some/file ./hardlink
I just had to solve this issue in the same context. My solution is to use hierarchical Docker builds. In other words:
parent_dir
 - common_files
   - Dockerfile
   - file.txt
 - dir1
   - Dockerfile (FROM common_files:latest)
The disadvantage is that you have to remember to build common_files before dir1. The advantage is that if you have a number of dependent images, they are all a bit smaller due to sharing a common layer.
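A rough sketch of the two Dockerfiles under this layout (the base image and target paths are only examples):
# common_files/Dockerfile
FROM alpine
COPY file.txt /common_files/file.txt
# dir1/Dockerfile
FROM common_files:latest
# ...the rest of dir1's build; /common_files/file.txt is already present
Build them in order, e.g. docker build -t common_files:latest common_files/ followed by docker build -t dir1 dir1/.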
I got frustrated enough that I made a small NodeJS utility to help with this: file-syncer
Given the existing directory structure:
parent_dir
 - common_files
   - file.txt
 - my-app
   - Dockerfile
   - common_files -> symlink to ../common_files
Basic usage:
cd parent_dir
// starts live-sync of files under "common_files" to "my-app/HardLinked/common_files"
npx file-syncer --from common_files --to my-app/HardLinked
Then in your Dockerfile:
[regular commands here...]
# have docker copy/overlay the HardLinked folder's contents (common_files) into my-app itself
COPY HardLinked /
Q/A
How is this better than just copying parent_dir/common_files to parent_dir/my-app/common_files before Docker runs?
That would mean giving up the regular symlink, which would be a loss, since symlinks are helpful and work fine with most tools. For example, it would mean you can't see/edit the source files of common_files from the in-my-app copy, which has some drawbacks. (see below)
How is this better than copying parent_dir/common-files to parent_dir/my-app/common_files_Copy before Docker runs, then having Docker copy that over to parent_dir/my-app/common_files at build time?
There are two advantages:
file-syncer does not "copy" the files in the regular sense. Rather, it creates hard links from the source folder's files. This means that if you edit the files under parent_dir/my-app/HardLinked/common_files, the files under parent_dir/common_files are instantly updated, and vice-versa, because they reference the same file/inode. (this can be helpful for debugging purposes and cross-project editing [especially if the folders you are syncing are symlinked node-modules that you're actively editing], and ensures that your version of the files is always in-sync/identical-to the source files)
Because file-syncer only updates the hard-link files for the exact files that get changed, file-watcher tools like Tilt or Skaffold detect changes for the minimal set of files, which can mean faster live-update-push times than you'd get with a basic "copy whole folder on file change" tool.
How is this better than a regular file-sync tool like Syncthing?
Some of those tools may be usable, but most have issues of one kind or another. The most common one is that the tool either cannot produce hard links of existing files, or it's unable to "push an update" for a file that is already hard-linked (since hard-linked files do not notify file-watchers of their changes automatically if the edited-at and watched-at paths differ). Another is that many of these sync tools are not designed for instant responses, and/or do not have run flags that make them easy to use in restricted build tools. (e.g. for Tilt, the --async flag of file-syncer enables it to be used in a local(...) invocation in the project's Tiltfile)
One tool that allows you to "link" a directory in a way that is accepted by Docker is docker itself.
It is possible to run a temporary docker container, with all necessary files/directories mounted at adequate paths, and build the image from within that container. For example:
docker run -it \
--rm \
-v /var/run/docker.sock:/var/run/docker.sock \
--mount "type=bind,source=$ImageRoot/Dockerfile,destination=/Image/Dockerfile,readonly" \
--mount "type=bind,source=$ImageRoot/.dockerignore,destination=/Image/.dockerignore,readonly" \
--mount "type=bind,source=$ReposRoot/project1,destination=/Image/project1,readonly" \
--mount "type=bind,source=$ReposRoot/project2,destination=/Image/project2,readonly" \
--env DOCKER_BUILDKIT=1 \
docker:latest \
docker build "/Image" --tag "my_tag"
In the above example I assume the variables $ImageRoot and $ReposRoot are set.
Instead of using symlinks, it is possible to solve the problem administratively by just moving files from sites-available to sites-enabled instead of copying or symlinking them.
That way your site config exists in only one place: in sites-available if the site is disabled, or in sites-enabled if it should be used.
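For example, with an nginx-style layout this is just (paths are illustrative):
mv /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/mysite.conf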
Use a small wrapper script to copy the needed dir to the Dockerfile's location.
build.sh:
#!/bin/bash
[ -e bin ] && rm -rf bin
cp -r ../../bin .
docker build -t "sometag" .
Commonly I isolate the build instructions in a subfolder, so the application and logic levels are located higher up:
.
├── app
│   ├── package.json
│   ├── modules
│   └── logic
├── deploy
│   ├── back
│   │   ├── nginx
│   │   │   └── Chart.yaml
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   ├── front
│   │   ├── Containerfile
│   │   ├── skaffold.yaml
│   │   └── .applift -> ../../app
│   └── skaffold.yaml
└── .......
I use the name ".applift" for those symbolic links:
.applift -> ../../app
And now I follow the symlink via realpath, without caring about the path depth:
dir/deploy/front$ docker build -f Containerfile --tag="build" `realpath .applift`
or pack in func
dir/deploy$ docker_build () { docker build -f "$1"/Containerfile --tag="$2" `realpath "$1/.applift"`; }
dir/deploy$ docker_build ./back "front_builder"
so
COPY app/logic/ ./app/
in the Containerfile will work.
Yes, in this case you lose the context for the other layers. But generally there are no other context files located in the build directory.
I had a situation where parent_dir contained common libraries in common_files/ and a common docker/Dockerfile. dir1/ contained the contents of a different code repository, but I wanted that repository to have access to those folders from the parent code repository. I solved it without using symlinks as follows:
parent_dir
 - common_files
   - file.txt
 - docker
   - Dockerfile
 - dir1
   - docker-compose.yml --> ../common_files
                        --> ../docker/Dockerfile
So I created a docker-compose.yml file, in which I specified where the files were located relative to the docker-compose.yml from which it would be executed. I also tried to minimise changes to the Dockerfile, since it would be used by both repositories, so I provided a DIR argument to specify the subdirectory to run in:
version: "3.8"
services:
dev:
container_name: template
build:
context: "../"
dockerfile: ./docker/Dockerfile
args:
- DIR=${DIR}
volumes:
- ./dir1:/app
- ./common_files:/common_files
I ran the following from within the dir1/ folder and it ran successfully:
export DIR=./dir1 && docker compose -f docker-compose.yml build
This is the original Dockerfile:
...
WORKDIR /app
COPY . /app
RUN my_executable
...
And this is a snippet with changes I made to the Dockerfile:
...
ARG DIR=${DIR}
WORKDIR /app
COPY . /app
RUN cd ${DIR} && my_executable && cd /app
...
This worked, and the parent repository could still run the Dockerfile with the same outcome even though I had introduced the DIR argument: if the parent repository calls it, DIR is an empty string and the Dockerfile behaves as it did before.
If you're on a Mac, remember to do
brew install gnu-tar
and use gtar instead of tar. There seem to be some differences between the two; gtar worked for me at least.
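So on a Mac the dereferencing pipe from above becomes:
gtar -czh . | docker build -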
My Dockerfile has the following entries:
ENV SCPATH /etc/supervisor/conf.d
RUN apt-get -y update
# The daemons
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
# Supervisor Configuration
ADD ./supervisord/conf.d/* $SCPATH/
The directory structure looks like this
├── .dockerignore
├── .gitignore
├── Dockerfile
├── Makefile
├── README.md
├── Vagrantfile
├── index.js
├── package.json
└── supervisord
    └── conf.d
        ├── node.conf
        └── supervisord.conf
As per my understanding this should work fine, as
ADD ./supervisord/conf.d/* $SCPATH/
points to a path relative to the Dockerfile build context.
Still, it fails with
./supervisord/conf.d : no such file or directory exists.
I am new to Docker, so it might be a very basic thing I am missing. I'd really appreciate any help.
What are your .dockerignore file contents? Are you sure you did not accidentally exclude something below your supervisord directory that the docker daemon needs to build your image?
And: in which folder are you executing the docker build command? Make sure you execute it within the folder that holds the Dockerfile so that the relative paths match.
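For example, a .dockerignore with contents along these lines (hypothetical) would silently drop the supervisord directory from the build context and produce exactly this error:
supervisord
*
Either line alone is enough: the first excludes the directory explicitly, the second excludes everything.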
Update: I tried to reproduce your problem. What I did from within a temp folder:
mkdir -p a/b/c
echo "test" > a/b/c/test.txt
cat <<'EOF' > Dockerfile
FROM debian
ENV MYPATH /newdir
RUN mkdir $MYPATH
ADD ./a/b/c/* $MYPATH/
CMD cat $MYPATH/test.txt
EOF
docker build -t test .
docker run --rm -it test
That prints test as expected. The important part works: the ADD ./a/b/c/* $MYPATH/ line. The file is found, as its content test is displayed at runtime.
When I now change the path ./a/b/c/* to something else, I get the no such file or directory error. When I leave the path as is and invoke docker build from a different folder than the temp folder where I placed the files, the error is shown too.