I have a very strange issue with copying the contents of subdirectories to a Docker container.
This is the directory structure:
Note: there are two Dockerfiles; I use the one at the top level for testing purposes. Ignore the one in the WebApp folder.
I want to copy the directories Bilder and JSON to the container, including all of their contents, but it doesn't work: the folders end up empty in the container. Copying Testdir, however, does work.
This is part of my Dockerfile:
FROM python:3.7-buster
# -- Init --
RUN mkdir -p /app/src
WORKDIR /app/src
# works
ADD WebApp/Testdir ./Testdir
# doesn't work
ADD WebApp/Bilder ./Bilder
# to check the contents
CMD ["sleep", "50"]
I build the image as part of a docker-compose.yml file with
docker-compose build test
Does anyone have a clue what's happening here? I've been searching for a solution for quite some time...
In case anyone is interested in why this was a problem: it actually had nothing to do with Docker. I was working on a cluster that was not synchronizing my local files to the server correctly, so I solved the issue by checking each time whether the files had actually been copied from my local machine to the cluster. If someone has a similar issue, it may be worth checking whether file accessibility is the problem.
I'm using Docker CE 17.06.1 to build an image. Everything works so far, except that a directory I created with mkdir in a RUN instruction doesn't appear when I look at the final image. I also ran ls right after creating it, to make sure it was really in the expected place. The directory isn't a mount directory or similar, just a simple directory. Is this expected behavior?
Is there an easy way to check what files were produced after a container exits?
I saw recommendations to rewrite the Dockerfile and add ls commands to it, but that's not an easy option for me.
UPDATE: I was using the VOLUME directive inside my Dockerfile, and docker diff doesn't show changes there.
You can use docker diff container_name. It inspects changes to files or directories on a container's filesystem, and shows output like this:
A /usr/local/lib/python2.7/email
C /usr/local/lib/python2.7/email/mime
D /usr/local/lib/python2.7/email/mime/audio.pyc
A: A file or directory was added
C: A file or directory was changed
D: A file or directory was deleted
Hope this helps, good luck!
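A minimal end-to-end illustration (the container name and file are just examples):

docker run -d --name difftest python:3.7-buster sleep 60
docker exec difftest touch /tmp/hello.txt
docker diff difftest
# shows something like:
# C /tmp
# A /tmp/hello.txt
docker rm -f difftest

docker diff also works on stopped containers, so you can run it after the container exits. Note the update above, though: paths declared as a VOLUME live outside the container's writable layer, so changes there won't show up.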
I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. It can also be used with docker-compose since 1.25.0-rc3 by additionally setting COMPOSE_DOCKER_CLI_BUILD=1.
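Applied to the question's use case, a sketch (file names are illustrative): keep tests excluded from the service image, and give the test build its own Dockerfile with its own ignore file next to it.

# .dockerignore - used when building with the default Dockerfile
tests/

# Dockerfile.test.dockerignore - used when building with Dockerfile.test
# (deliberately without a tests/ entry, so the tests are sent to the daemon)

DOCKER_BUILDKIT=1 docker build -f Dockerfile.test -t myapp-test .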
From Mugen's comment, please note that the custom dockerignore file should be in the same directory as the Dockerfile, not in the root context directory like the original .dockerignore.
i.e.
when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker for specifying the ignore file to use.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and run the Docker build there (a sketch of this follows below).
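A minimal sketch of that second option (paths and tag are illustrative):

#!/bin/sh
# Stage only the files we want in the build context, then build from there.
set -e
stage=$(mktemp -d)
cp -R src tests Dockerfile "$stage"
docker build -t myapp-test "$stage"
rm -rf "$stage"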
Mounting the removed tests back into the container as a volume could be an option here. After you build the image, when running it for testing, mount the source code containing the tests on top of the cleaned-up code:
services:
  tests:
    image: my-clean-image
    volumes:
      - '../app:/opt/app' # Add removed tests
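With that override in place you can run the suite against the mounted sources, e.g. (the test runner here is just an example):

docker-compose run tests pytest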
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I create an intermediate tar archive using tar's -T flag, which takes a text file listing the files to include, so it's not so different from a whitelist-style .dockerignore.
I pipe this tar into the docker build command and specify my Dockerfile, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt | docker build -f path/to/Dockerfile -
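For reference, files-to-include.txt is just a newline-separated whitelist of paths relative to where tar is invoked (the entries below are illustrative). The Dockerfile itself must be listed too, because with a piped context the -f path is resolved inside the archive:

src
tests
requirements.txt
path/to/Dockerfile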
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
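A sketch of such a testing image, assuming the main image is tagged my-project and a Node.js toolchain (names and paths are assumptions):

# Derive the testing image from the main project image
FROM my-project:latest
WORKDIR /opt/app
# Add the tests that the main image leaves out
ADD tests ./tests
# Add test-only tooling
RUN npm install --no-save mocha chai
# Run the suite by default
CMD ["npx", "mocha", "tests"]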
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
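A Compose sketch of that setup (service names, image tags, port, and variable name are assumptions):

services:
  api:
    image: my-graphql-server # primary image under test
  integration-tests:
    image: my-integration-tests # self-contained testing image
    depends_on:
      - api
    environment:
      - API_ENDPOINT=http://api:8080/graphql # tells the suite where to find the endpoint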
Sadly it isn't currently possible to point Docker at a specific file to use as .dockerignore, so we generate it in our build script based on the target/platform/image. As a Docker enthusiast I find it a sad and embarrassing workaround.
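A minimal sketch of that generation step, assuming a build script that takes the target as its first argument (all names are illustrative):

#!/bin/sh
# Regenerate .dockerignore for the requested build target
target=${1:-release}
{
  echo ".git"
  if [ "$target" != "test" ]; then
    echo "tests/"
  fi
} > .dockerignore
docker build -t "myapp-$target" .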
I'm having an issue with docker-compose where I pass a file into the container when it's run. The problem is that docker-compose doesn't seem to recognize when the file has changed and serves the old version back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, in case that affects anything. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory into the container as a volume, which is how conf.opvn becomes visible inside the container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder where it is mounted in the container. Try changing the file's permissions to 777 before starting the process and check again.
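One quick check (a sketch; it tests whether your editor replaces the file rather than updating it in place, which a bind mount of a single file cannot follow):

ls -i conf.opvn   # note the inode number
# edit and save conf.opvn in your editor
ls -i conf.opvn   # a changed inode means the mount still points at the old file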
You can find a discussion about this on the official Docker forum.