Developing in docker-compose: getting the container to recognise code changes

I have a Docker container with a Python 3 environment and various libraries installed.
I'm trying to develop a simple Python program against this environment.
What I have is my source code in a directory outside the container, which is ADDed and set as the WORKDIR in the Dockerfile.
I then shell into the container and run the program on the command line.
When I hit an error, I want to simply change the source in my editor, which is outside the container, and run again.
However, when I do this, the code executing in the container doesn't seem to take any notice of the changes I made.
If I do
docker-compose up --build
and rebuild the container then it does.
Obviously this is very slow.
Surely it should be possible for the container to see changes to the code I'm working on without being rebuilt? If so, how do I make this happen?

Using ADD bakes files into the container image, so as you've noticed, updating files in a running application requires rebuilding the image and restarting the container. To get around this, you can mount a directory on your host machine over the path you copied into your container with ADD.
To do this with plain Docker, you can use -v or --volume. With Docker Compose, you can list the directory to be mounted under volumes:. For example, if you had the following in your Dockerfile:
# Copy app code from the build context into the container
ADD ./app/code /usr/app/src
You can then mount your live code over the baked-in files at container start time (note that directory paths must be absolute - you can use $PWD for this):
$ docker run -v /my/live/app/code:/usr/app/src python:latest
$ docker run -v "$PWD"/app/code:/usr/app/src python:latest
The docker-compose.yml equivalent is as follows:
my-service:
  image: python:latest
  volumes:
    - /my/live/app/code:/usr/app/src
    - ./relative/paths:/work/too
There's more about bind mounts in the documentation.
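Putting this together for the compose-based workflow in the question, a minimal docker-compose.yml sketch could look like the following (the service name, source path, and python main.py command are illustrative assumptions, not taken from the question):
docker-compose.yml
services:
  app:
    build: .
    working_dir: /usr/app/src
    volumes:
      # live-mount the source over the copy baked in by ADD
      - ./app/code:/usr/app/src
    command: python main.py
With this in place, docker-compose up starts the container once and edits made on the host are visible inside it immediately; only Dockerfile or dependency changes require docker-compose up --build.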

Related

Why can't my Docker container find the file it's supposed to create?

I have a Docker container (a Linux container running on Windows with WSL 2) running a .NET Core 5.0 application, whose Dockerfile and docker-compose.yml were created by someone else. I spun it up with docker run, passing a single environment variable and a port mapping. It works just fine until it attempts to create a file with a statement like this: System.IO.File.WriteAllText($"/output_json/myfile.json", jsonString);, and errors out. The error message says
Could not find a part of the path '/output_json/myfile.json'.
Since a Docker container is essentially a virtualized filesystem, I assume I need to allocate some space to the container, or share a folder on the host machine with it, so that it has an accessible location to save the file. Is that correct?
EDIT: I've just found this in docker-compose.yml:
services:
  <servicename>:
    volumes:
      - ./output:/output_json
Doesn't this mean that an output_json directory is supposed to be created? Does docker-compose not have any bearing on a container created with docker run?
The path /output_json probably doesn't exist in the Docker image. That could be because you're meant to map a directory on your host to that path. Then the container can put its output there and you can grab it after the container is done.
To try it, you can make an empty directory and map that to the /output_json path in your container by running the following 2 commands from a command line
mkdir %temp%\container_output
docker run -v %temp%\container_output:/output_json <other options> <image name>
Then do cd %temp%\container_output and see what output the container has made.
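Regarding the edit in the question: the volumes: entry in docker-compose.yml only applies when the container is started through Compose; docker run does not read that file at all. A sketch of the Compose-based equivalent, assuming the service is the one shown in the edit:
# start the service through Compose so the ./output:/output_json bind mount is applied
docker-compose up <servicename>
After that, the JSON file should appear in the ./output directory next to docker-compose.yml.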

Drupal folders within docker

I successfully installed Drupal 7 with Docker, using docker4drupal.
Now that I'm starting to edit my Drupal site, my question is: where are the folders containing Drupal?
Let's say I installed a new theme and want to swap the images for the banner. How do I access the Drupal folder containing the images, or, to put it more precisely: where does Docker store them?
The relevant line in my docker-compose.yml is:
- codebase:/var/www/html
I know that installing it using:
./:/var/www/html
would install Drupal in the same directory my docker-compose.yml is in, but for some reason it doesn't work and still doesn't show me where the files are.
Any help is welcome!
If you are not using volumes to mount your existing code, the code resides inside the docker container. You can access it only by getting inside the container using docker exec. If you are using the default docker-compose.yml that came with the repo, then the name of the container will be "docker4drupal_nginx_1" (since nginx is the default).
Run this code to get inside the container:
docker exec -it docker4drupal_nginx_1 /bin/bash
exec allows you to execute commands inside the container.
-it allows you to start an interactive terminal
/bin/bash allows you to start the bash terminal inside the container
Once you are inside the container, run ls and you will see the Drupal files, including "web".
MORE USEFUL
However, this is not very useful if you want to work on the files, presumably with an editor. Instead, mount a directory on the host machine. First make a new directory named "codebase" next to your docker-compose.yml file.
Then, update the docker-compose.yml so that:
- codebase:/var/www/html
becomes
- ./codebase:/var/www/html
Do this in both the php and nginx service definitions. Of course, you should do this after running docker-compose down with your previous setup. Then restart the containers using docker-compose up -d.
Then, you will notice that the Drupal files are present in the codebase directory.
If you look at the bottom of the yml file, you will see that "codebase" is defined as a named Docker volume. This means the storage is managed by Docker and lives somewhere under /var/lib/docker/, alongside the containers themselves.
Hope this helps.
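For reference, the edited service definitions might look roughly like this (a sketch based on the stock docker4drupal layout; images and other settings are omitted):
docker-compose.yml (excerpt)
services:
  php:
    volumes:
      - ./codebase:/var/www/html
  nginx:
    volumes:
      - ./codebase:/var/www/html
If nothing else references it, the codebase entry under the top-level volumes: key can then be removed.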

How to keep changes inside a container on the host after a docker build?

I have a docker-compose dev stack. When I run docker-compose up --build, the container will be built and it will execute
Dockerfile:
RUN composer install --quiet
That command writes a bunch of files inside the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is therefore out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
  volumes:
    - ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes during the docker build on the current folder are also persisted on the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some vendors are my own and I like being able to modify them in my current project. So only mounting the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v "$PWD/vendor:/target" my-app cp -a vendor/. /target/
The cp could also be something more efficient like an rsync. Then after that container exits, you do your docker-compose up which mounts /vendor from the host.
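Putting the whole workflow together, a sketch (assuming the compose project builds an image tagged my-app whose working directory contains the freshly installed vendor/):
docker-compose build
docker run --rm -v "$PWD/vendor:/target" my-app cp -a vendor/. /target/
docker-compose up -d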
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
as already suggested you can run a container and copy or rsync the files
use docker cp to copy the files out of a container (without using a volume)
use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on modified time of files or resources), so you never run unnecessary operations.
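For the docker cp route, a sketch (the my-app image name and the /var/www/myapp/vendor path are assumptions based on the compose file in the question):
# create (but don't start) a container from the built image, copy the baked-in vendor/ out, then clean up
id=$(docker create my-app)
mkdir -p vendor
docker cp "$id":/var/www/myapp/vendor/. ./vendor
docker rm "$id"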

Is there a way to replicate pwd in a volume mount for docker in a boot2docker context?

So currently I can do: docker run -v .:/usr/src/app, or even specify it in my docker-compose.yml:
web:
  volumes:
    - .:/usr/src/app
But when I attempt to define this in my Dockerfile:
VOLUME .:/usr/src/app
It doesn't mount anything.
Now I understand the complexities in that I'm using OS X, and so I have to virtualize the environment to run Docker via boot2docker, and that boot2docker solves the copy issue by mounting /Users into the Linux machine running Docker.
The documentation wants me to be explicit, but since my explicitness would require me to name my user (in this case /Users/krainboltgreene/code/krainboltgreene/blankrails), it seems non-idiomatic, as that obviously doesn't work in other people's environments.
What's the solution for this? I mean, I can technically get this all working without (as noted above the CLI and compose works fine), but it means not being able to do project specific provisioning (bower install, npm install, vulcanize, etc).
You can't specify a host directory for a volume inside a Dockerfile, because of the portability reasons you mention (not everyone will have the same directories and there are security issues regarding mounting sensitive files).
If you instead do:
VOLUME /usr/src/app
Docker will automatically set up a volume at run-time for the folder, which will be mapped to a directory under /var/lib/docker/volumes.
If you want to be able to quickly make changes during development, I would suggest using COPY in the Dockerfile, but mounting local changes over the top with a volume at run-time. This has the disadvantage that if you volume mount a folder, all the contents of that directory in the container will be hidden (rather than merged).
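A minimal sketch of that suggestion (the Node base image, the npm install step, and the paths are assumptions about the project, not taken from the question):
Dockerfile
FROM node:latest
WORKDIR /usr/src/app
# bake the code into the image so it can run standalone
COPY . /usr/src/app
# project-specific provisioning happens at build time (npm install, bower install, etc.)
RUN npm install
docker-compose.yml
web:
  build: .
  volumes:
    # during development, mount live code over the baked-in copy
    - .:/usr/src/app
Note the caveat above: the bind mount hides everything COPY put at /usr/src/app, including build-time artifacts, so anything generated during the build either needs to live outside the mounted path or be regenerated at run time.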
The docker run -v .:/usr/src/app ... command, as well as the docker-compose definitions, are executed at run time, whereas the Dockerfile instructions are executed at build time.
By the way the instruction in your Dockerfile is syntactically incorrect. It should be VOLUME /usr/src/app instead.
The VOLUME keyword only declares that, later at run time, this location will be stored on a volume. So all files that you add to that location through further Dockerfile instructions or manual commits are ignored and not added to the resulting image.
At run time, if you did not specify a volume, Docker will generate an anonymous volume for you, which is empty by default.
To have your docker-compose setup work for other colleagues, you could simply make the docker-compose configuration file part of your blankrails project folder. Everybody then runs docker-compose from within that directory, and your provided configuration will work.
EDIT:
I do not know exactly what you mean by project-specific provisioning. But if your aim is to provide default contents for the defined volume, you could do something like the following:
Add all required project files during the Dockerfile build to a /bootstrap folder in the image.
Instead of executing your app directly, use a start shell script as the CMD.
In that start script, check whether the volume mounted at /usr/src/app is empty or not. If it is empty, copy all the /bootstrap contents into it.
Afterwards, start your app from within that script in the foreground.
With that approach you can easily provide a default file set for mounted volumes. And when you re-use that volume, e.g. after a container restart, the container just works with the files that are on the volume without touching them again during startup, so modified files are persisted.
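A sketch of such a start script (the /bootstrap and /usr/src/app paths follow the steps above; the final app command is a placeholder):
#!/bin/sh
# populate the mounted volume on first start only
if [ -z "$(ls -A /usr/src/app)" ]; then
  cp -a /bootstrap/. /usr/src/app/
fi
# start the app in the foreground (placeholder command)
exec node /usr/src/app/server.js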

How can I mount a file in a container, that isn't available before first run?

I'm trying to build a Dockerfile for a webapp that uses a file-based database. I would like to be able to mount the file from the host*
The file is in the root of the complete software install, so it's not really ideal to mount that complete dir.
Another problem is that before the first use, the database-file isn't created yet. A first time user won't have a database, but another user might. I can't 'mount' anything during a build** I believe.
It could probably work like this:
First/new database start:
Start the container (without mount).
The webapp creates a database.
Stop the container
subsequent starts:
Start the container using a -v to mount the file
It would be better if that extra start/stop weren't needed for a user. Even if it is, I'm still looking for a way to do this in a user-friendly manner, possibly having two 'methods' of starting it (maybe I can define a first-boot method in docker-compose as well as a 'normal' one?).
How can I do this in a simple way, so that it's clear for any first-time users?
* The reason is that you can copy your Dockerfile and the database file as a backup, and be up and running with just those 2 elements.
** How to mount host volumes into docker containers in Dockerfile during build
One approach that may work is:
Start the database in the Dockerfile (at build time) in such a way that it has time to create the default file before exiting.
Declare a VOLUME in the Dockerfile for that location after the above instruction. This will cause the contents to be copied into the volume when a container is started, assuming you don't explicitly provide a host path.
Use data-containers rather than volumes. So the normal usage would be:
docker run --name data_con my_db echo "my_db data container"
docker run -d --volumes-from data_con my_db
...
The first container should exit immediately but set up the volume that is used in the second container.
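A sketch of the first two points as a Dockerfile fragment (the init command and the /app/data path are purely illustrative):
# initialize the database file at build time so the image already contains a default one
RUN /app/bin/init-db --data-dir /app/data
# declare the data location as a volume *after* it has been populated;
# its contents are copied into the volume (or data container) at first start
VOLUME /app/data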
I was trying to achieve something similar and managed to do it by mounting a folder, instead of the file, and creating a symlink in the Dockerfile, initially pointing to a non-existing file:
docker-compose.yml
version: '3.0'
services:
  bash:
    build: .
    volumes:
      - ./data:/data
    command: ['bash']
Dockerfile
FROM bash:latest
RUN ln -s /data/.bash_history /root/.bash_history
Then you can run the container with:
docker-compose run --rm bash
With this setup, you can push an empty "data" folder into the repository for example (and exclude its content with .gitignore). In the first run, inside the container /root/.bash_history will be a "broken" symlink, pointing to a file that does not exist. When you exit the shell, bash will write the history to /root/.bash_history, which will end up in /data/.bash_history.
This is probably not the correct approach.
If you have multiple containers that are trying to share some information through the file-system, you should probably let them share some directory.
That way, the flow is simple and very hard to get wrong.
You simply mount the same directory, say /data (from the host's perspective) into all the containers that are trying to use it.
When an application starts and it can't find anything inside that directory, it can gracefully stop and exit with a code that says: "Cannot start, DB not initialized yet".
You can then configure some mechanism with a growing timeout to try and restart that container until you're successful.
On the other hand, the app that creates the DB can start and create it inside the directory or find an existing file to use.
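In Compose terms, that shared-directory setup with a retry-until-ready policy could be sketched as follows (service names, images, and the ./data path are illustrative only):
services:
  db-init:
    image: my-db-image   # hypothetical image that creates the database file in /data
    volumes:
      - ./data:/data
  app:
    image: my-app-image  # hypothetical application image
    volumes:
      - ./data:/data
    # the app exits with an error while /data has no database yet;
    # Compose restarts it until it succeeds
    restart: on-failure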
