What I'm trying to accomplish is this: I want to cache the current user's ~/.aws directory inside a container so that I can use it during the build of another container.
I have the following docker-compose.yml:
version: "3.7"
services:
worker:
depends_on:
- aws
aws:
build:
context: ~/.aws
dockerfile: ./ctx.dockerfile
args:
- workdir=/root/.aws
These are the contents of ctx.dockerfile:
FROM alpine:3.9
ARG workdir
WORKDIR ${workdir}
COPY . .
And in my worker service Dockerfile I have the following:
...
COPY --from=aws_ctx:local /root/.aws /root/.aws
...
The Problem
docker-compose isn't treating the dockerfile path in the aws service as relative to the docker-compose.yml; it is instead resolving it relative to the context path. Is there any way I can have docker-compose load ctx.dockerfile from the same directory as docker-compose.yml AND set the context the way that I am?
I'm up for changing my approach to the problem, but I have a few constraints:
any solution must be workable on Windows, OSX, and Linux
any solution must only require docker and/or docker-compose, I can't run a shell script beforehand
Is there any way I can have docker-compose load ctx.dockerfile from the same directory as docker-compose.yml AND set the context the way that I am?
AFAIK: No, there isn't.
Everything that the Dockerfile interacts with at build time must be inside the defined context. So you would need .aws and the folder where the docker-compose.yml lives to be in the same context, i.e. the context would need to be high enough in your directory tree to contain both, and then you would have to use relative paths to the files you need (the Dockerfiles and .aws).
Maybe you could set /home/$USER as your build context (or an even higher level, depending on where your Dockerfiles etc. live), but then you would also have to create a .dockerignore file and exclude everything in the context besides .aws and the current folder... As you can see, this would be a mess and not very reproducible. A sketch of what that would involve is shown below.
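For illustration only (assuming the compose file lives in ~/project and the whole home directory is used as the context; both paths are made up), it might look roughly like this:

~/.dockerignore (a .dockerignore must sit at the root of the build context):
*
!.aws
!project

~/project/docker-compose.yml:
version: "3.7"
services:
  aws:
    build:
      context: ~
      dockerfile: ./project/ctx.dockerfile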
I would suggest using a volume instead of COPYing the ~/.aws folder into your container.
Example:
nico@lapap12:~$ ls -l ~/.aws
total 0
-rw-r--r-- 1 nico nico 0 May 22 17:45 foo.bar
docker-compose.yml:
version: "3.7"
services:
allinone:
image: alpine:latest
volumes:
- ~/.aws:/tmp/aws:ro
command: ls -l /tmp/aws
nico@lapap12:~/local/so$ docker-compose up
Creating so_allinone_1 ... done
Attaching to so_allinone_1
allinone_1 | total 0
allinone_1 | -rw-r--r-- 1 1000 1000 0 May 22 15:45 foo.bar
so_allinone_1 exited with code 0
You could go from there and copy the contents of /tmp/aws to /root/.aws if you want to change this folder's contents in the container but don't want to touch it on the actual host.
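A minimal sketch of that idea, reusing the compose file above (the copy happens at container start):

version: "3.7"
services:
  allinone:
    image: alpine:latest
    volumes:
      - ~/.aws:/tmp/aws:ro
    # copy the read-only mount to a writable location, then work with the copy
    command: sh -c "cp -r /tmp/aws /root/.aws && ls -l /root/.aws"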
Related
How do you specify a mount volume in docker-compose, so your Dockerfile can access files from it?
I have a docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
context: ..
dockerfile: Dockerfile
volumes:
- /tmp/cache:/tmp/cache
And in my Dockerfile, I want to access files from /tmp/cache via RUN like:
RUN cat /tmp/cache/somebinary.tar.gz | processor.sh
However, running docker-compose gives me the error:
/tmp/cache/somebinary.tar.gz does not exist
Even though on the host, ls /tmp/cache/somebinary.tar.gz confirms it does exist.
Why is docker-compose/Docker unable to mount or access my host directory?
Dockerfile RUN commands are executed at build time of the image.
The volume is mounted at run time once the image is run as a container. So the mounted files will not be available until you spawn a container based on your image.
To define the commands to use at run time, use CMD, or depending on how you intend your image to be used ENTRYPOINT.
You would need to add this at the end of your Dockerfile:
CMD cat /tmp/cache/somebinary.tar.gz | processor.sh
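Alternatively, if the file really is needed at build time, it has to come from the build context rather than a volume. A sketch, assuming you first place somebinary.tar.gz inside the build context:

COPY somebinary.tar.gz /tmp/cache/somebinary.tar.gz
RUN cat /tmp/cache/somebinary.tar.gz | processor.sh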
I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems not to be the case. So I create it using mkdir, and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Secondly, now I need to use this volume with another container, which also does not already have the /workdir directory. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files that the first container put on the volume? Am I missing something in either the Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op container that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
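To build the combined multi-stage image and start the service in one step (assuming the combined Dockerfile above is saved as ./Dockerfile):

docker-compose up --build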
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it is supposed to work: when you declare a volume mapping for some service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion is probably related to the fact that you expect volumes to work at build time, which unfortunately is not the case.
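If you did want to keep two containers and share the jar through the volume, the first container would have to copy it at run time rather than build time. A sketch of that idea (not the recommended multi-stage approach above):

# Dockerfile-gradle: build the jar into the image, copy it out when the container runs
FROM gradle:latest
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
CMD cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar

With web_data mounted at /workdir in both services, generate_html would then see stat.jar once build_jar has run.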
I am very (read: very) new to Docker, so I am experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it is slightly more difficult than just specifying a path..
You can accomplish this by specifying a volume in docker-compose.yml.. The path to the directory (on the host) is labeled as device in the compose file.. It appears that the root of the path has to be an actual volume (possibly a share would work) but the 'destination' of the path can be a directory on the specified volume..
I created a new volume called docker on my machine but I suppose you could do this with your existing disk/volume..
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists in my DockerHub, and its source code is public, just in case you are worried about anything malicious.. I created it like two weeks ago to help someone else on StackOverflow.
(Screenshot: files from the container showing up on my machine, the host.)
You can read more about Docker volume configuration in the Docker documentation if you would like.
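Part of why a named volume can behave differently from the plain ./site bind mount: when an empty named volume is first mounted over a path that already has content in the image, Docker seeds the volume with the image's content; a host-path bind mount gets no such seeding. A quick way to observe this (a sketch, using a throwaway volume name):

docker volume create demo_vol
# the empty named volume is populated from the image's /etc on first use
docker run --rm -v demo_vol:/etc alpine:latest ls /etc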
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine.. After some testing, it appears Docker will overwrite the specified path on the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site.. So, even though the build is successful - once Docker mounts the directory from your machine (./site) onto the container (/var/www/site) it overwrites the path within the container (/var/www/site) with the contents of the path on your host (./site) - which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh, depending on the image).. This will give you command line access inside of the container. From there you can do ls -a /var/www/site..
Furthermore, you can also pre-stage ./site to have a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same commands from the step above (docker exec -it ...) and see if the staged test.txt file is now inside the container - this gives you definitive evidence that when you mount volumes, the data on your host overwrites data in the container.
With that being said, doing something like this to share a log directory will work... the volume path specified on the container is still overwritten, but the difference is that the container is writing to that path.. it doesn't rely on it for config files/app files.
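Another workaround for the original problem, sketched below (the staging path and start.sh script are made-up names): build the site into a staging directory in the image, then copy it into the mounted directory when the container starts, so the bind mount is populated at run time rather than build time.

# in laravel.dockerfile: build into a staging path instead of the mount point
RUN composer create-project --prefer-dist laravel/laravel /var/www/site-dist
COPY start.sh /start.sh
CMD ["sh", "/start.sh"]

# start.sh: copy the staged site into the (initially empty) bind mount
cp -r /var/www/site-dist/. /var/www/site/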
Hope this helps.
I am having problems with writing files out from inside a docker container to my host computer. I believe this is a privilege issue and prefer not to set privileged: true. A workaround for writing out files is prepending ../ to a volume in my docker-compose.yml file. For example,
version: '3'
services:
  example:
    volumes:
      - ../:/example
What exactly is ../ doing here? Is it taking from the container's privileges and "going up" a directory to the host machine? Without ../, I am unable to write out files to my host machine.
Specifying a path as the source, as opposed to a volume name, bind mounts a host path to a path inside the container. In your example, ../ will be visible inside the container at /example on a recent version of docker.
Older versions of docker can only access the directory it is in and lower, not higher, unless you specify the higher directory as the context.
To run the docker build with the parent directory as the context:
docker build -f myapp/Dockerfile /home/me
As opposed to building from inside the app directory:
docker build /home/me/myapp
Doing the same in docker-compose:
# docker-compose.yml
version: '3.3'
services:
  yourservice:
    build:
      context: /home/me
      dockerfile: myapp/Dockerfile
Or with your example:
version: '3'
services:
  example:
    build:
      context: /home/me/app
      dockerfile: docker/Dockerfile
    volumes:
      - /home/me/app:/example
Additionally, you have to supply full paths, not relative paths. I.e.:
- /home/me/myapp/files/example:/example
If you have a script that is generating the Dockerfile from an unknown path, you can use:
CWD=`pwd`; echo $CWD
To refer to the current working directory. From there you can append /..
Alternatively, you can build the image from a directory one level up, or use a volume that you can share with an image run from a higher directory, or output your file to stdout and redirect the output of the command into the file you need from the script that runs it.
See also: Docker: adding a file from a parent directory
The statement volumes: ['../:/example'] makes the parent directory of the directory containing docker-compose.yml on the host (../) visible inside the container at /example. Host directory bind-mounts like this, plus some equivalent constructs using a named volume attached to a specific host directory, are the only way a container can write out to the host filesystem.
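A quick demonstration of that write-out behaviour (a sketch; the file name is arbitrary):

version: '3'
services:
  example:
    image: alpine:latest
    volumes:
      - ../:/example
    command: touch /example/written-from-container.txt

After docker-compose up, written-from-container.txt shows up in the parent directory on the host.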
I'm using docker and docker-compose for building my app. There are two developers now for the project hosted on github.
Our project structure is:
sup
  dockerfiles
    dev
      build
        .profile
        Dockerfile
      docker-compose.yml
Now we have ./dockerfiles/dev/docker-compose.yml like this:
app:
  container_name: sup-dev
  build: ./build
and ./dockerfiles/dev/build/Dockerfile:
FROM sup:dev
# docker-compose tries to find .profile relative to build dir:
# ./dockerfiles/dev/build
COPY .profile /var/www/
We run container like so:
docker-compose up -d
Everything works fine, but because we use different OSes we have our code in different places: /home/aliance/www/project for me and /home/user/other/path/project for the second developer. So I cannot just add a volume instruction to the Dockerfile.
For now we solve this problem in the wrong way:
- I am using lsyncd with my personal config to transfer files into the container
- the second developer uses a volume instruction in the Dockerfile but does not commit it
Maybe you know how I can write a unified Dockerfile for docker-compose that mounts the code into the app container from our different paths?
The file paths on the host shouldn't matter. Why do you need absolute paths?
You can use paths that are relative to the docker-compose.yml so they should be the same for both developers.
Paths referenced in the Dockerfile (such as COPY sources) are always relative to the build context, so if you want, you can use something like this:
app:
  container_name: sup-dev
  build: ../..
  dockerfile: dockerfiles/dev/build/Dockerfile
That way the build context for the Dockerfile will be the project root (sup), and the paths you need are the same for both developers.
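Note that COPY source paths inside the Dockerfile then also have to be relative to the new context. A sketch of the adjusted ./dockerfiles/dev/build/Dockerfile:

FROM sup:dev
# .profile is now addressed relative to the project root (the build context)
COPY dockerfiles/dev/build/.profile /var/www/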
Maybe you should keep your Dockerfile at the root of your project. Then you could add an instruction in the Dockerfile:
COPY ./ /usr/src/app/
or (not recommended in prod)
VOLUME /usr/src/app
plus (an option while running the container, as I don't know docker-compose):
-v /path/to/your/code:/usr/src/app
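For completeness, the docker-compose equivalent of that -v flag would be something like this sketch (paths are relative to the directory containing docker-compose.yml):

version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/usr/src/app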