Docker: copy contents from subdirectories not working

I have a very strange issue with copying the contents of subdirectories to a Docker container.
This is the directory structure:
Note: there are two Dockerfiles; I use the one on the upper level for test purposes, so ignore the one in the WebApp folder.
I want to copy the directories Bilder and JSON to the container, including all their contents, but it doesn't work: the folders end up empty in the container. Copying Testdir, however, does work.
This is part of my Dockerfile:
FROM python:3.7-buster
# -- Init --
RUN mkdir -p /app/src
WORKDIR /app/src
# works
ADD WebApp/Testdir ./Testdir
# doesn't work
ADD WebApp/Bilder ./Bilder
# keep the container alive so the contents can be inspected
CMD ["sleep", "50"]
I build the image as part of a docker-compose.yml file with
docker-compose build test
Does anyone have a clue what's happening here? I've been searching for a solution for quite some time...

If anyone is interested in why this was a problem: it actually had nothing to do with Docker. I was working on a cluster that was not synchronizing my local files to the server correctly, so I solved the issue by checking each time whether the files had actually been copied from my local machine to the cluster. In case someone runs into something similar: it's worth checking whether file accessibility could be the problem.
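A quick way to rule Docker out is to list the build result inside the image itself. This is just a debugging sketch (the RUN line is mine, not part of the original Dockerfile), reusing the paths and the test service name from the question:
# temporary debug line, placed right after the ADD instructions
RUN ls -laR /app/src
# then rebuild without cache and inspect the output
docker-compose build --no-cache test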

Related

How to navigate up one folder in a dockerfile

I'm having some trouble building a Docker image because of the way the code has been structured. The code is written in C#, and the solution contains a lot of projects that "support" the application I want to build.
If I put the Dockerfile into the root, I can build it without any problem, but I don't think that's the optimal way: we have some other Dockerfiles we also need to build, and if I put them all into the root folder I think it will end up messy.
So if I put the Dockerfile into the folder with the application, how do I navigate up to the root folder to grab the folders I need?
I tried "../", but as far as I can tell it didn't work. Is there any way to do it, and what is best practice in this scenario?
TL;DR
Run it from the root directory:
docker build . -f ./path/to/dockerfile
The long answer:
In a Dockerfile you can't really go up.
Why?
When the Docker daemon builds your image, it uses two inputs:
your Dockerfile
the context
The context is what you refer to as . in the Dockerfile (for example in COPY . /app).
Both of them affect the final image: the Dockerfile determines what is going to happen, and the context tells Docker which files to perform those operations on.
That's how the docs put it:
A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
So, usually the context is the directory where the Dockerfile is placed. My suggestion is to leave it where it belongs and name your Dockerfiles after their role (Dockerfile.dev, Dockerfile.prod, etc.); it's fine to have a few of them in the same directory.
The context can still be changed:
After all, you are the one who specifies the context, since the docker build command accepts both the context and the Dockerfile path. When I run:
docker build .
I am actually giving it my current directory as the context (I've omitted the Dockerfile path, so it defaults to PATH/Dockerfile).
So if you have a Dockerfile in dockerfiles/Dockerfile.dev, place yourself in the directory you want as the context and run:
docker build . -f dockerfiles/Dockerfile.dev
The same applies to the docker-compose build section, where you specify a context and the Dockerfile path; see the sketch below.
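For example, a minimal docker-compose sketch (service and file names are hypothetical):
services:
  app:
    build:
      context: .                              # repo root as the build context
      dockerfile: dockerfiles/Dockerfile.dev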
Hope that made sense.
You can also use a RUN command and chain whatever you want after &&:
RUN cd ../ && <your command>
Note, though, that this only changes the working directory inside the image's filesystem for that single RUN instruction; it cannot reach files outside the build context.

Docker Copy - Windows

I am currently trying to copy a folder and its subdirectories to a Docker container, but all that gets copied is the folder structure "obj\Docker\empty".
I am running the command in PowerShell from D:\Sites\Web.API, and the command is:
docker cp . eac334ba8bf6:./inetpub/wwwroot/Web.API
My .dockerignore file has this in it
!obj\Docker\publish\*
!obj\Docker\empty\
I'm pretty new to this, so it may be something silly, but I'm currently all out of ideas!
I think the issue is file system permissions. Have you tried to copy it somewhere else?
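One way to test that theory (the container ID comes from the question; the target path is a hypothetical, normally writable location on a Windows container):
docker cp . eac334ba8bf6:C:\temp\Web.API
docker exec eac334ba8bf6 cmd /c dir C:\temp\Web.API
If the copy succeeds there but not under inetpub, permissions are the likely culprit.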

Docker Copy seems buggy

So I am trying to make a simple Docker container to run sabnzbd.
I do a git clone and then move the sabnzbd folder into the container.
Note: when I look in sabnzbd, all files look OK, so it's not a git or branch problem; I'm happy so far there.
When I then run the container, half the files are missing. For example, I was looking to make sure cherrypy was copied, but it wasn't, and yes, I can confirm that I double-checked again in the sabnzbd folder.
So I have a folder with the Dockerfile and sabnzbd. I built from that folder with the command:
sudo docker build --no-cache=true -t sabnzbd -f Dockerfile .
In a nutshell, the only thing that worked was COPY . /
I tried COPY sabnzbd/* sabnzbd and other variations.
I originally thought it was picking up files from elsewhere, so I removed any trace of sabnzbd, but to my understanding it only looks at files relative to the build context.
I just thought this was so weird and wanted to get some thoughts even though I fixed it. I did remove all images and started from scratch, but the result was the same.
I also tried the no-cache option with build, but still the same.
Thoughts?
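For what it's worth, one well-known COPY pitfall matches these symptoms: in COPY sabnzbd/* sabnzbd, the wildcard expands to each entry inside sabnzbd, any directory matched that way has its contents copied rather than the directory itself, and dotfiles are not matched at all. The directory-preserving form would be:
COPY sabnzbd/ ./sabnzbd/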

How to specify different .dockerignore files for different builds in the same project?

I used to list the tests directory in .dockerignore so that it wouldn't get included in the image, which I used to run a web service.
Now I'm trying to use Docker to run my unit tests, and in this case I want the tests directory included.
I've checked docker build -h and found no related option.
How can I do this?
Docker 19.03 shipped a solution for this.
The Docker client tries to load <dockerfile-name>.dockerignore first and then falls back to .dockerignore if it can't be found. So docker build -f Dockerfile.foo . first tries to load Dockerfile.foo.dockerignore.
Setting the DOCKER_BUILDKIT=1 environment variable is currently required to use this feature. This flag can be used with docker compose since 1.25.0-rc3 by also specifying COMPOSE_DOCKER_CLI_BUILD=1.
See also comment0, comment1, comment2
From Mugen's comment, please note:
the custom dockerignore should be in the same directory as the Dockerfile, and not in the root context directory like the original .dockerignore
I.e. when calling
DOCKER_BUILDKIT=1 docker build -f /path/to/custom.Dockerfile ...
your .dockerignore file should be at
/path/to/custom.Dockerfile.dockerignore
At the moment, there is no way to do this. There is a lengthy discussion about adding an --ignore flag to Docker to provide the ignore file to use - please see here.
The options you have at the moment are mostly ugly:
Split your project into subdirectories that each have their own Dockerfile and .dockerignore, which might not work in your case.
Create a script that copies the relevant files into a temporary directory and run the Docker build there (see the sketch below).
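A minimal sketch of that second option (file and image names are hypothetical):
TMP=$(mktemp -d)
cp -r src tests Dockerfile.tests "$TMP"   # copy only what the test build needs
docker build -t myapp-tests -f "$TMP/Dockerfile.tests" "$TMP"
rm -rf "$TMP"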
Adding the cleaned tests as a volume mount to the container could be an option here. After you build the image, if running it for testing, mount the source code containing the tests on top of the cleaned up code.
services:
  tests:
    image: my-clean-image
    volumes:
      - '../app:/opt/app' # Add removed tests
I've tried activating DOCKER_BUILDKIT as suggested by @thisismydesign, but I ran into other problems (outside the scope of this question).
As an alternative, I'm creating an intermediate tar using tar's -T flag, which takes a text file listing the files to include, so it's not that different from a whitelist-style .dockerignore.
I export this tar and pipe it to the docker build command, specifying my Dockerfile, which can live anywhere in my file hierarchy. In the end it looks like this:
tar -czh -T files-to-include.txt -f - | docker build -f path/to/Dockerfile -
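The list file is simply one path per line; hypothetical contents (note the Dockerfile itself must be included, since -f refers to a path inside the streamed context):
src
tests
path/to/Dockerfile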
Another option is to have a further build process that includes the tests. The way I do it is this:
If the tests are unit tests then I create a new Docker image that is derived from the main project image; I just stick a FROM at the top, and then ADD the tests, plus any required tools (in my case, mocha, chai and so on). This new 'testing' image now contains both the tests and the original source to be tested. It can then simply be run as is or it can be run in 'watch mode' with volumes mapped to your source and test directories on the host.
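A sketch of such a derived test image (the base image name and paths are hypothetical):
FROM my-project:latest
WORKDIR /app
# test tooling lives only in this derived image
RUN npm install --no-save mocha chai
# layer the tests on top of the original source
ADD tests ./tests
CMD ["npx", "mocha", "tests"]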
If the tests are integration tests--for example the primary image might be a GraphQL server--then the image I create is self-contained, i.e., is not derived from the primary image (it still contains the tests and tools, of course). My tests use environment variables to tell them where to find the endpoint that needs testing, and it's easy enough to get Docker Compose to bring up both a container using the primary image, and another container using the integration testing image, and set the environment variables so that the test suite knows what to test.
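And a compose sketch for the integration case (service names, image names, and the environment variable are all hypothetical):
services:
  api:
    image: my-graphql-server
  integration-tests:
    image: my-integration-tests
    depends_on:
      - api
    environment:
      - TEST_ENDPOINT=http://api:4000/graphql   # tells the suite what to test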
Sadly, it isn't currently possible to point to a specific file to use as .dockerignore, so we generate it in our build script based on the target/platform/image. As a Docker enthusiast, I find it a sad and embarrassing workaround.

Mounted docker volumes corrupting files

I think this is machine related, but I'm not sure. I'm using the most current Docker Toolbox with Docker 1.10.3 on OS X.
I have a project using a Dockerfile, which copies code into the container like this:
[...]
COPY . /code
VOLUME /code
WORKDIR /code
[...]
For faster local development (test execution), we mount the current directory in the compose file
[...]
volumes:
  - .:/code
[...]
and execute
docker-compose -f docker-compose.yml -f docker-compose.testing.yml run web py.test
Now, it looks like I have two different folders/files:
When running the container and looking inside a file with vi, everything looks the same as on the host. But after changing files and executing our tests (pytest, specifically), the Python interpreter reads garbage and can't execute the tests.
Example
The end of a file (which got copied into the container via the Dockerfile) looks like this:
post_save.connect(backup_something, sender=SomeSender, dispatch_uid='backup_something') foobar
This obviously raises an error when executed, so I change it to:
post_save.connect(backup_something, sender=SomeSender, dispatch_uid='backup_something')
The file looks fine now, both from the host and inside the container.
Executing pytest, however, still reads the content of the copied code, breaking the tests locally for me.
If I change even more, what gets read is neither the copied nor the mounted file, so things break at random positions:
File "/code/some_code.py", line 69
dispatch_uid='backup_
^
SyntaxError: EOL while scanning string literal
(tail shows correct syntax etc.; there is definitely nothing broken in the code)
Is there something wrong with our setup, or is it just my machine being broken somehow? I tried restarting and recreating the Docker machine, but this doesn't help.
I would try to mount in read-only mode and then double-check the filesystem type to see if there's something strange.
Years ago there was a bug with ntfs-3g corrupting files; maybe it's something similar (obviously not NTFS here, since we are on OS X).
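A read-only bind mount in the compose file is just the mount from the question with an :ro flag appended:
volumes:
  - .:/code:ro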
I have no experience with Docker Toolbox on OS X, but I think you may have ended up with a union mount.
If that is the case, the solution would be to move the files or the mount point so that the files won't be shadowed.
This article may be relevant:
