Docker COPY seems buggy

So I am trying to make a simple Docker container to run SABnzbd.
I do a git clone and then copy the sabnzbd folder into the container.
Note: when I look in the sabnzbd folder, all the files look OK, so it's not a git or branch problem. I'm happy so far there.
When I then run the container, half the files are missing; I double-checked. For example, I looked to make sure cherrypy was copied, but it wasn't, and yes, I can confirm that I checked the sabnzbd folder again.
So I have a folder containing the Dockerfile and sabnzbd, and I built from that folder with the command:
sudo docker build --no-cache=true -t sabnzbd -f Dockerfile .
In a nutshell, the only thing that worked was COPY . /
I tried COPY sabnzbd/* sabnzbd and other variations.
I originally thought it was picking up files from elsewhere, so I removed every trace of sabnzbd, but as I understand it, the build only looks at files relative to the build context.
I just thought this was so weird and wanted some thoughts, even though I fixed it. I removed all images and started from scratch, but the result was the same.
I also tried the no-cache option with build, but still the same.
Thoughts?
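For reference, COPY treats a directory source and a glob quite differently, which may explain the result above; a minimal sketch (destination paths are illustrative):
# copies the folder's contents into /sabnzbd, preserving the subdirectory tree
COPY sabnzbd/ /sabnzbd/
# a glob, by contrast, copies the *contents* of each matched entry; matched
# subdirectory names are dropped, so nested files appear to go missing
# COPY sabnzbd/* /sabnzbd/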

Related

How to manage environment variables that point to credential files in a Docker container?

In my ~/.bashrc, I have set GOOGLE_APPLICATION_CREDENTIALS=~/.gc/credential_file_name.json.
My source code is located in ~/repos/github_repo/ (and I'm working from there), where I have a Dockerfile whose working directory is set to /usr/src/app.
If I copy ~/.gc/credential_file_name.json to ~/repos/github_repo/credential_file_name.json and run the docker container with
docker run -t \
-e GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/credential_file_name.json \
...
the credential file gets picked up and subsequent code runs ok.
But ideally I don't want to copy the credential into my GitHub repository, as that risks pushing it to GitHub (even when I add it to .gitignore, it's still not safe).
Additionally, instead of having to explicitly give the full path -e GOOGLE_APPLICATION_CREDENTIALS=/usr/src/app/credential_file_name.json, I would like to do something like -e GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}, where ${GOOGLE_APPLICATION_CREDENTIALS} gets picked up from my ~/.bashrc.
But obviously ${GOOGLE_APPLICATION_CREDENTIALS} will point to a path on my computer, which has a different directory structure than the Docker container.
What is the best way to resolve this? I'm new to this, and I came across direnv and .envrc but don't quite understand them.
I'm using a Makefile to run the docker commands. I would like to avoid docker-compose, but if it solves this problem, please let me know.
Thanks for the help!
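One common approach (a sketch; the in-container path /secrets/credential_file_name.json is an arbitrary choice) is to bind-mount the file at run time instead of copying it into the repository, reusing the variable from ~/.bashrc for the host path:
docker run -t \
  -v "$GOOGLE_APPLICATION_CREDENTIALS":/secrets/credential_file_name.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/credential_file_name.json \
  ...
This keeps the credential out of the build context, the image, and the repository entirely.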

Docker: copy contents from subdirectories not working

I have a very strange issue with copying the contents of subdirectories to a Docker container.
This is the directory structure:
Note: there are two Dockerfiles; I use the one on the upper level for test purposes. Ignore the one in the WebApp folder.
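Roughly, assuming the layout implied by the ADD lines in the Dockerfile below (the original showed this as a screenshot):
Dockerfile
docker-compose.yml
WebApp/
├── Dockerfile
├── Testdir/
├── Bilder/
└── JSON/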
I want to copy the directories Bilder and JSON, including all their contents, to the container, but it doesn't work: the folders in the container end up empty. Copying the Testdir, however, does work.
This is part of my Dockerfile:
FROM python:3.7-buster
# -- Init --
RUN mkdir -p /app/src
WORKDIR /app/src
# works:
ADD WebApp/Testdir ./Testdir
# doesn't work:
ADD WebApp/Bilder ./Bilder
# keep the container alive briefly so the contents can be checked
CMD ["sleep", "50"]
I build the image as part of a docker-compose.yml file with
docker-compose build test
Does anyone have a clue what's happening here? I've been searching for a solution for quite some time...
If anyone is interested in why this was a problem: it actually had nothing to do with Docker. I was working on a cluster that was not synchronizing my local files to the server correctly, so I solved the issue by checking each time whether the files had actually been copied from my local machine to the cluster. In case someone has a similar issue: check whether file accessibility could be the problem.
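For anyone debugging something similar: a quick way to see what actually lands in the image is to list the directories during the build with temporary lines like
RUN ls -la /app/src/Testdir /app/src/Bilder
and to rebuild without the cache so the output is fresh:
docker-compose build --no-cache test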

How to navigate up one folder in a Dockerfile

I'm having some trouble building a Docker image because of the way the code has been structured. The code is written in C#, and the solution contains a lot of projects that support the application I want to build.
My problem is that if I put the Dockerfile in the root, I can build it without any problem, but I don't think that's optimal, because we have some other Dockerfiles we also need to build, and if I put them all in the root folder it will end up messy.
So if I put the Dockerfile in the folder with the application, how do I navigate up to the root folder to grab the folders I need?
I tried "../", but as far as I can tell it didn't work. Is there any way to do it, and what is best practice in this scenario?
TL;DR
Run it from the root directory:
docker build . -f ./path/to/dockerfile
The long answer:
In a Dockerfile you can't really go up.
Why?
When the Docker daemon builds your image, it uses two parameters:
your Dockerfile
the context
The context is what you refer to as . in the Dockerfile (for example, in COPY . /app).
Both of them affect the final image: the Dockerfile determines what is going to happen, and the context tells Docker which files it should perform those operations on.
That's how the docs put it:
A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
So usually the context is the directory where the Dockerfile is placed. My suggestion is to leave it where it belongs. Name your Dockerfiles after their role (Dockerfile.dev, Dockerfile.prod, etc.); it's OK to have a few of them in the same directory.
The context can still be changed:
After all, you are the one who specifies the context, since the docker build command accepts both the context and the Dockerfile path. When I run:
docker build .
I am actually giving it the context of my current directory (I've omitted the Dockerfile path, so it defaults to PATH/Dockerfile).
So if you have a Dockerfile in dockerfiles/Dockerfile.dev, you should place yourself in the directory you want as the context and run:
docker build . -f dockerfiles/Dockerfile.dev
The same applies to the docker-compose build section (you specify a context and the Dockerfile path there), for example:
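A minimal docker-compose.yml sketch (the service name app is an assumption):
services:
  app:
    build:
      context: .
      dockerfile: dockerfiles/Dockerfile.dev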
Hope that made sense.
You can use the RUN command and chain whatever you want after &&:
RUN cd ../ && ...

How to use a big file only to build the container, without adding it?

I have a big tar/executable (over 30GB). I COPY/ADD it, but it is used only for the installation; once the application is installed I don't need it anymore.
What can I do? I am trying to use it, but:
Every time I run a build, it takes minutes to define the build context.
I'd like to share this image. If I create a tar with docker save, is only the final version included in it, or each layer?
I found some solutions saying I can use RUN wget ... && tar ... && rm ..., but I don't want to set up a web server for that.
Why isn't it possible to mount a volume during the build process?! It would be very useful.
Use Docker's multi-stage builds. This mechanism allows you to drop intermediate artifacts and therefore achieve a lightweight image.
Example:
FROM alpine:latest as build
# copy the large file into the throwaway build stage
# (file and path names here are illustrative)
COPY large-installer.tar /tmp/large-installer.tar
# build/install from it, producing /app
RUN tar -xf /tmp/large-installer.tar -C /tmp && /tmp/installer/install.sh

FROM alpine:latest as output
# copy only the necessary files built in the previous stage
COPY --from=build /app /app
Anything built in the build stage will not be included in the final image unless you explicitly COPY it.
Docs: https://docs.docker.com/develop/develop-images/multistage-build/
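Since only the final stage's layers end up in the tagged image, sharing it stays small too; for example (the image name myapp is an assumption):
docker build -t myapp .
docker save myapp -o myapp.tar   # contains only the final stage's layers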
This is solvable using two different contexts.
Please follow the steps below.
The objective is to create:
a docker image that will hold your large build file, and
a docker image that will hold your real codebase/executables.
For this you have to create two folders (BUILD & CodeBase) as follows:
Application
├── BUILD
│   ├── Large-File
│   └── Dockerfile
└── CodeBase
    ├── SRC + other stuff
    └── Dockerfile
Both the BUILD and CodeBase folders have their own Dockerfile; arrange the files accordingly.
Dockerfile (BUILD):
FROM <base-image>
COPY Large-File /tmp/Large-File
Build this and tag it with a name like base-build-app-image:
cd Application                              # the application root folder mentioned above
docker build -t base-build-app-image BUILD  # BUILD is the path of your build folder
Dockerfile (CodeBase):
FROM base-build-app-image
RUN ...           # install, using /tmp/Large-File
RUN rm -f /tmp/Large-File
RUN rm -f ...     # remove installation files that are not required
CMD ...
ENTRYPOINT ...
Build this codebase image; base-build-app-image is already in your local docker repository, and your large file is not in the current build context:
cd Application          # the application root folder mentioned above
docker build CodeBase   # CodeBase is the path of your codebase folder
This time, since the context is only your code base and doesn't include the large file, it will definitely reduce your build time.
You can also take advantage of docker-compose to do both operations together, so you don't have to execute two separate commands; for example:
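A minimal docker-compose.yml sketch for this layout, placed in the Application folder (the service names are assumptions):
services:
  base:
    image: base-build-app-image
    build:
      context: ./BUILD
  app:
    build:
      context: ./CodeBase
    depends_on:
      - base
Build the base first, then the codebase image:
docker compose build base && docker compose build app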
If you need help preparing this docker-compose file, let me know in the comments.
If anything is not clear, leave a comment.

Empty directories missing from the resulting image after docker build

I'm using Docker CE 17.06.1 to build a Docker image. Everything works so far, except that a directory I created during a RUN command using mkdir doesn't appear when I look at the final image. I also performed an ls right after creation to make sure it really was at the expected place. The directory isn't a mount point or similar, just a plain directory. Is this expected behavior?
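For reference, a minimal way to verify this behavior (the image tag is an assumption):
FROM alpine:latest
RUN mkdir -p /data/empty && ls -ld /data/empty   # prints the directory during the build
then:
docker build -t emptydir-test .
docker run --rm emptydir-test ls -ld /data/empty
Under normal circumstances an empty directory created with RUN mkdir does persist into the image, so if it disappears, the storage driver or the tooling around the build is worth a look.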
