Container doesn't have code changes, image does - docker

Yes, this question has been posted before. Yes I've applied the relevant solutions and ignored the irrelevant ones. I just don't understand what is going on here.
Problem: I make a change to my source code by modifying the default return message. I COPY the src tree into my baseImage (multi-stage builds). I build a stagingImage from the baseImage. The baseImage has the code change, but the container running the stagingImage returns the pre-code-change message. I've replicated this behavior on macOS (Catalina) and Amazon Linux 2.
Note that I am doing this manually, i.e. I'm not relying on IDEs, inotify watchers and the like. The only tools involved are the CLI, vim, make and docker(-compose).
Details: I have a Dockerfile that is capable of doing multi-stage builds. Here's the relevant part of the baseImage build:
COPY ./composer.json /var/www/html/composer.json
COPY ./php-apache/auth.json /root/.composer
COPY ./ /var/www/html
and here's a sample of how I build my other images :
FROM php:7.4-apache AS cleanImage
COPY --from=baseImage / /
FROM cleanImage AS stagingImage
COPY ./.env.staging /var/www/html/.env
RUN /bin/bash -c 'rm -rf /var/lib/apt/lists/*'
# Set entrypoint
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
I modified the Makefile from Chapter 10 of "Docker In Action". Here's a sample:
## image-base : Build the base image that the others are based on
.PHONY: image-base
image-base: metadata
	#echo "Building Base Image"
	docker image build --no-cache --force-rm --tag src:$(BUILD_ID)-base \
		-f src/Dockerfile \
		--target baseImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	#echo "Built Base Image. BUILD_ID: $(BUILD_ID)"

## image-staging : Build the staging image
.PHONY: image-staging
image-staging: metadata image-base
	#echo "Building Staging App Image"
	docker image build -t src:$(BUILD_ID)-staging \
		-f src/Dockerfile \
		--target=stagingImage \
		--build-arg BUILD_ID='$(BUILD_ID)' \
		--build-arg BUILD_DATE='$(BUILD_TIME_RFC_3339)' \
		--build-arg VCS_REF='$(VCS_REF)' \
		./src
	#echo "Built Staging App Image. BUILD_ID: $(BUILD_ID)-staging"
## up env=<env> : Bring up environments. env values are prod, local, staging.
.PHONY: up
up:
ifeq ($(strip $(BUILD_ID)),)
	$(error BUILD_ID environment variable is not set. Run `make metadata` to generate one)
endif
	docker-compose -f docker-compose.yml -f docker-$(env).yml up -d
Where BUILD_ID is of the form YYYYMMDD-epoch-git_SHA. Note that the baseImage uses the --no-cache flag.
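(The metadata target isn't shown; roughly speaking, and simplified to a sketch, it produces values like these:)
# Sketch only -- not the real recipe, just the kind of values metadata generates
BUILD_ID="$(date +%Y%m%d)-$(date +%s)-$(git rev-parse --short HEAD)"
BUILD_TIME_RFC_3339="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
VCS_REF="$(git rev-parse --short HEAD)"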
So far, so good (I think).
My `docker-compose.yml` file looks like this:
version: '3.7'

volumes:
  web_app:

services:
  php-apache:
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"
    volumes:
      - web_app:/var/www/html
    env_file:
      - ./src/.env.prod
and my docker-staging.yml file looks like this:
version: '3.7'

services:
  php-apache:
    image: 'src:20200924-174443-3f16358-staging'
    container_name: perfboard-staging
    ports:
      - 8888:80
      - 9261:443
    env_file:
      - ./src/.env.staging
Yes, I've hardcoded the name of the stagingImage for the purposes of debugging.
What I expect to see: when I hit localhost:8888 I expect to see my modified message. I do not.
When inspecting the baseImage, the modified message is there. I cannot directly inspect the stagingImage because I keep getting an Apache error presumably because of the entrypoint.
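(I assume I could get a shell inside the stagingImage by overriding the entrypoint and look around, something like the sketch below using the hardcoded tag, but I haven't dug into that yet.)
docker run --rm -it --entrypoint bash src:20200924-174443-3f16358-staging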
If I delete every image and container from my system, this behaves as expected.
Deleting the above specific baseImage and stagingImage does not fix the problem.
Any ideas on where to look?

Your docker-compose.yml file specifies
volumes:
  - web_app:/var/www/html
That causes the contents of the web_app volume to be mounted over the /var/www/html directory in the container, hiding whatever was initially in the image.
You should delete this volume declaration.
Only the very first time you run a container with a given named volume, Docker copies the contents of the image into the (empty) volume. From that point onward, it treats the volume as user data and never makes any changes to it; even if the underlying image is updated, the volume contents are not, and when you run the container, the volume takes precedence over the updated image code.
There are a number of practical problems with depending on the "copy from an image into a volume" behavior (it doesn't work with host bind mounts; it doesn't work on Kubernetes; it causes image updates to be ignored) and I'd try very hard to avoid mounting any sort of volume over your application code.
If you think it's important to have the volume for some other reason, you need to cause Compose to delete it, so that it will get recreated and have the "first time only" behavior again. docker-compose down -v will delete all containers, networks, and volumes, but that will also include things like your database data. You might be able to use docker-compose down (without -v) to stop the container, then use docker volume ls; docker volume rm dirname_web_app to manually delete the volume.
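For example, assuming the Compose project directory is named dirname as in the volume name above, that looks roughly like:
docker-compose down                # stop and remove the containers, keep volumes
docker volume ls                   # find the stale volume, e.g. dirname_web_app
docker volume rm dirname_web_app   # delete it
docker-compose up -d               # recreate; the volume is repopulated from the new image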

Related

Building a binary inside docker and mount back to host

I have a requirement where I need to build an executable binary, but inside a Docker container, because of the difficulty of building the binary in different environments. I have a sample docker-compose of what I want and am trying to convert it to a Dockerfile. The docker-compose is as below.
version: "3.7"
services:
wasm_compile_update:
image: envoyproxy/envoy-build-ubuntu:e33c93e6d79804bf95ff80426d10bdcc9096c785
command: |
bash -c "bazel build //examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm \
&& cp -a bazel-bin/examples/wasm-cc/* /build"
working_dir: /source
volumes:
- ../..:/source
- ./lib:/build
What would be the equivalent Dockerfile for this? I was trying to use CMD but couldn't make it work. Any help will be appreciated since I'm on a tight deadline. Thanks.
You can create a Dockerfile that has the right tools in it to build your binary, but you'll still have to use docker run to do the build itself, because you can't mount volumes during the build process, nor can you copy things out of the image during the build. However, you can do this:
A Dockerfile:
FROM envoyproxy/envoy-build-ubuntu:e33c93e6d79804bf95ff80426d10bdcc9096c785
WORKDIR /examples
ENTRYPOINT ["bazel", "build"]
Build it like this:
docker build -t mybuildkit .
And run it like this:
docker run -it --rm \
-v $(pwd)/examples:/examples \
-v $(pwd)/bin:/bazel-bin/examples/wasm-cc \
mybuildkit /examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm
Now, I don't know enough about the directories here to work out if that's exactly right, but the gist is there.
The first volume mount (-v) mounts your source code (which I'm assuming lives in examples) into a folder in the container (which I've also called examples). The second mount exposes the output: I've mounted a host folder called bin, and I've assumed the copy command you had pointed at the built binary, so that maps to /bazel-bin/examples/wasm-cc in the container.
Another assumption I've made is around the command to send to the container. I've set the entrypoint to be what is presumably your compiler (bazel build), and to that I've passed what is presumably the name of the thing to build (/examples/wasm-cc:envoy_filter_http_wasm_updated_example.wasm).
Because I don't know bazel at all, it is entirely possible that I've got one or more of these details wrong, but the general pattern stands: mount your source and your bin, pass the target of the build into the entrypoint, and build into the bin.
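As an aside, since the compose file in your question already mounts the source and output directories, a sketch of simply running that service on demand (rather than converting it to a Dockerfile) would be:
docker-compose run --rm wasm_compile_update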

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf

volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
.../bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There are also potential troubles if a container has a different shared-library ecosystem (especially Alpine-based containers), or if you're using host binaries from a different operating system.
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
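A minimal sketch of that arrangement (untested, and still subject to the race noted above) might look like:
services:
  web:
    image: php:7.4-apache
    depends_on: [wkhtmltopdf]
    volumes:
      - wkhtmltopdfvol:/usr/local/bin
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export
volumes:
  wkhtmltopdfvol: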

Docker app supposed to output json to a dir on a volume, when running no data show

I am working on a docker app. The purpose of this repo is to output some json into a volume. I am using a Dockerfile, docker-compose and a Makefile. I'll show the contents of each file below. The goal/desired outcome is that when I run make up, the container runs and outputs the json.
Directory looks like this:
docker-compose.yaml
Dockerfile
Makefile
main/ # a directory
Here are the contents of the main/ directory:
example.R
Not sure the best order to show these files. Throughout my setup I refer to a variable $PROJECTS_DIR, which is a global environment variable on the host/local machine:
echo $PROJECTS_DIR
/home/doug/Projects
Here are my files:
docker-compose.yaml:
version: "3.5"
services:
nextzen_ga_extract_marketing:
build:
context: .
environment:
start_date: "2020-11-18"
start_date: "2020-11-19"
volumes:
- ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
Dockerfile:
FROM rocker/tidyverse:latest
ADD main main
WORKDIR "/main"
RUN apt-get update && apt-get install -y \
less \
vim
ENTRYPOINT ["Rscript", "example.R"]
Makefile:
.PHONY: build
build:
	docker-compose build

.PHONY: up
up:
	docker-compose pull
	docker-compose up -d

.PHONY: restart
restart:
	docker-compose restart

.PHONY: down
down:
	docker-compose down
Here are the contents of the Docker app's main script, main/example.R:
library(jsonlite)
unlink("../output_data", recursive = TRUE) # delete any existing data from previous runs
dir.create('../output_data')
write(toJSON(mtcars), '../output_data/ga_tables.json')
If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/main and run sudo Rscript example.R, the script runs and outputs the json to '../output_data/ga_tables.json' as expected.
I am struggling to get this to happen when running the container. If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/ and then in the terminal run make up for:
docker-compose pull
docker-compose up -d
I then see:
make up
docker-compose pull
docker-compose up -d
Creating network "nextzengoogleanalyticsextractpipeline_default" with the default driver
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 ...
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 .
It 'looks' like everything ran as expected with no errors, except that no output appears in the output_data directory as expected.
I guess I'm misunderstanding or misusing ENTRYPOINT in the Dockerfile with ENTRYPOINT ["Rscript", "example.R"]. My goal is that this file would run when the container is run.
How can I 'run' (if that's the correct terminology) my app so that it outputs json into /output_data/ga_tables.json?
Not sure what other info to provide? Any help much appreciated, I'm still getting to grips with docker.
If you run your application from /main and its output is supposed to go into ../output_data (so effectively /output_data), you need to bind-mount this directory to have the output available on the host. Therefore I would update your docker-compose.yaml to read something like this:
volumes:
  - /path/to/output_data/on/host:/output_data
Bear in mind, however, that your script will not be able to remove /output_data itself when it is bind-mounted this way, so you might want to change that step to remove the directory's contents rather than the directory itself.
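Concretely, reusing the host path from your question (exactly where you want the output to land on the host is an assumption on my part):
volumes:
  - ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/output_data:/output_data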
In my case, I got this working when I used full paths as opposed to relative paths.

What is the purpose of building a docker image inside a compose file?

I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well, I've seen it before, but what I'm curious about here is: what's the purpose of it? I just can't get it.
Why don't we just build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
web:
  # <<<<
  # Why not building the image and using it here like "image: my/django"?
  # <<<<
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
You might say: "well, do as you wish!" The reason I'm asking is that I think there might be some benefits I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read this What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and referencing the resulting image with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
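As a sketch (the service name and build argument here are hypothetical), a build: block can carry the same options you'd otherwise repeat on the docker build command line, and the image: key can still name the resulting image:
services:
  web:
    build:
      context: .
      dockerfile: path/Dockerfile
      args:
        SOME_BUILD_ARG: some-value
    image: my/django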
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.

Can the default location of the Traefik configuration file be changed in the official Docker file?

I have a non-critical Docker Compose project where the Traefik rules vary acceptably between dev and production (I need Let's Encrypt on prod, but not on dev). I am using the [file] config provider.
Currently I am creating separate builds for dev and prod, thus:
# This is fetched from the Compose config
ARG BUILD_NAME
RUN if [ "$BUILD_NAME" == "prod" ]; then \
echo Compiling prod config... ; \
sh /root/compile/prod.sh > /etc/traefik/traefik.toml ; \
else \
echo Compiling dev config... ; \
sh /root/compile/dev.sh > /etc/traefik/traefik.toml ; \
fi
While this project is not enormously important, per-env builds are a bit hacky, and I'd rather go with the standard container approach of one image for all environments.
To do that, I was thinking of doing something like this:
FROM traefik:1.7
# This is set in the Docker Compose config
ENV ENV_NAME
# Let's have a sig handler
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_amd64
RUN chmod +x /usr/local/bin/dumb-init
COPY docker/traefik/start.sh /root/start.sh
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["/root/start.sh"]
The start.sh would have something that would run my "compile" shell command at run time (this selects pieces of config based on the environment). However, the official Traefik images do not run a shell - they are a compiled blob built from Go source - so this won't work. Is there an env var by which /etc/traefik/traefik.toml can be changed, or an industry-standard way of doing this in Docker?
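(For illustration, the start.sh I had in mind would have been something along these lines, assuming ENV_NAME is dev or prod:)
#!/bin/sh
# Hypothetical start.sh: generate the config for this environment, then hand off to Traefik.
sh /root/compile/"$ENV_NAME".sh > /etc/traefik/traefik.toml
exec traefik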
I did think of using volumes, but that means the container won't "plug-n-play" without additional set up - I like that it is currently self-contained. However, I may use that if there is no alternative. I could run the config "compiler" on the host.
Another approach is to install Traefik in an image that has a shell - maybe it would work with Alpine. I am not sure how I feel about that - removing the shell is a good security feature, so I am hesitant to add it back in, even if I don't think it can be easily exploited.
I didn't find a way to modify the Traefik config file path using environment variables. However, I hit on a volume-based solution that seems to be quite self-contained.
I set up another service called shell in my Docker Compose file:
shell:
  build:
    context: docker/shell
  volumes:
    # On-host volume for generating config
    - "./docker/traefik/compiled:/root/compiled-host"
This features a bind-mount volume to catch generated config files.
Next, I created a Dockerfile for the new service:
FROM alpine:3.10
COPY compile /root/compile
COPY config /root/config
# Compile different versions of the config, ready to copy into an on-host volume
RUN mkdir /root/compiled && \
sh /root/compile/dev.sh > /root/compiled/traefik-dev.toml && \
sh /root/compile/prod.sh > /root/compiled/traefik-prod.toml
This will create config files for both environments as part of the built image.
When Docker Compose is started up, this service will briefly start, but it will soon exit gracefully and harmlessly. It is intended to be run on an ad-hoc basis anyway.
I already had environment-specific YAML config files, docker-compose-dev.yml and docker-compose-prod.yml, which are explicitly specified in the Compose command with -f. I then used these files to expose the generated on-host config to Traefik. Here's the dev one:
traefik:
  volumes:
    # On-host volume for config
    - "./docker/traefik/compiled/traefik-dev.toml:/etc/traefik/traefik.toml"
Much the same was done for traefik-prod.toml in the prod override.
Then, I created per-env commands to copy the config from the shell image into the on-host volume:
#!/bin/sh
docker-compose -f docker-compose.yml -f docker-compose-prod.yml run shell cp /root/compiled/traefik-prod.toml /root/compiled-host/traefik-prod.toml
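The dev counterpart mirrors the prod script (shown here as a sketch, with dev substituted):
#!/bin/sh
docker-compose -f docker-compose.yml -f docker-compose-dev.yml run shell cp /root/compiled/traefik-dev.toml /root/compiled-host/traefik-dev.toml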
Finally, when Traefik starts as part of the Compose application, it will find its configuration file in its usual place, /etc/traefik/traefik.toml, but this is in fact a file volume to the generated copy on the host.
