docker-compose entrypoint restart not stateless

When I restart a container with docker-compose up with an entrypoint, it's not stateless; it keeps the context of the previous execution of the entrypoint.
docker-compose file:
version: '3.8'
services:
  test:
    image: debian:buster-slim
    entrypoint: ["/entrypoint.sh"]
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    command: ["echo", "100"]
entrypoint.sh file:
#!/bin/bash
set -e
set -x
mkdir folder
exec "$#"
The first time, it logs:
Creating network "test_compose_entrypoint_default" with the default driver
Creating test_compose_entrypoint_test_1 ... done
Attaching to test_compose_entrypoint_test_1
test_1 | + mkdir folder
test_1 | + exec echo 100
test_1 | 100
If I rerun docker-compose up, the second time it logs:
Starting test_compose_entrypoint_test_1 ... done
Attaching to test_compose_entrypoint_test_1
test_1 | + mkdir folder
test_1 | mkdir: cannot create directory 'folder': File exists
test_compose_entrypoint_test_1 exited with code 1
If I run docker-compose down first, it works again, but it's impossible to run it two times in a row.

In fact, docker-compose restart tries re-running the main container process in the existing (stopped) container. docker-compose up defaults to reusing an existing container if one exists with the right configuration, even if it's stopped. This can be a problem for setups like yours that reasonably expect to start in a clean environment.
One approach is to code defensively around the possibility of the directory already existing:
# Create `folder` only if it doesn't exist. Could still fail
# if the directory is read-only, or if `folder` is a plain file.
test -d folder || mkdir folder
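Equivalently, mkdir -p succeeds when the directory already exists (though it, too, fails if folder exists as a plain file):
mkdir -p folder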
At a higher level, you could docker-compose rm the existing container before re-launching it, or if you don't mind restarting everything, docker-compose up --force-recreate. This approach isn't compatible with an automatic restart: policy, though.
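In commands, that looks like this (using the service name test from the Compose file above):
# remove the stopped container, then start fresh
docker-compose rm -f test
docker-compose up
# ...or recreate everything in one step
docker-compose up --force-recreate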

Related

docker-compose named volume with one file: ERROR: Cannot create container for service, source is not directory

I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble when a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There can also be trouble if the containers have different shared-library ecosystems (especially Alpine-based containers), or if you use host binaries from a different operating system.
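As a quick sanity check (not part of the original setup), ldd lists the shared libraries a binary needs; any entry reported as "not found" in the target container means the binary won't run there:
# run this inside the image that provides the binary, then compare
# against what the target image actually ships
ldd /bin/wkhtmltopdf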
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
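Putting those pieces together, a sketch of that arrangement (untested; the /export mount point is an assumption, and the race-condition caveat above still applies):
version: '3.8'
services:
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    # no entrypoint: override; clear it with entrypoint: [] if the image defines one
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export
  web:
    image: php:7.4-apache
    depends_on: [wkhtmltopdf]
    # no command: or entrypoint: override
    volumes:
      - wkhtmltopdfvol:/usr/local/bin
volumes:
  wkhtmltopdfvol: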

Not able to extend the parent's entry point

I am trying to extend the parent image's entry point by following this tutorial
This is the content of the child's entrypoint shell file, new-initfile.sh:
#!/bin/sh
exec /usr/local/bin/docker-entrypoint.sh "$@"
exec flyway migrate
So here, basically, I am executing the parent image's entrypoint and then adding my own command to it.
Here is my Dockerfile:
FROM postgres:alpine
RUN apk --no-cache add wget su-exec
RUN wget -qO- https://repo1.maven.org/maven2/org/flywaydb/flyway-commandline/7.3.2/flyway-commandline-7.3.2-linux-x64.tar.gz | tar xvz && su-exec sh ln -s `pwd`/flyway-7.3.2/flyway /usr/local/bin
RUN mv /flyway-7.3.2/conf/flyway.conf /flyway-7.3.2/conf/flyway.conf.orig
COPY flyway.conf /flyway-7.3.2/conf/flyway.conf
COPY new-initfile.sh /new-initfile.sh
RUN ["chmod", "+x", "/new-initfile.sh"]
ENTRYPOINT [ "/new-initfile.sh" ]
This is my docker-compose.yml file
version: "3.7"
services:
db:
build: database
container_name: db
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
volumes:
- ./pgdata:/var/lib/postgressql/data
- ./database/migrations:/migrations
ports:
- "5432:5432"
When I do docker-compose up, it goes to the last step and exits with code 0. If I comment out the last line in my Dockerfile, it starts absolutely fine, but then my extension commands are not run.
My end objective is that it should first run the parent's entry point command and then run mine.
That is the way Docker works - it keeps a container up only while the process started by its initial arguments is running. new-initfile.sh ends up exiting normally, and so does the container. (Note that exec replaces the shell process, so the exec flyway migrate line is never reached at all.)
In general you need a long-running task at the end of your entrypoint script so that it never exits unless an error or a stop signal happens. In your case, I would have delegated migrations to the application rather than the database. It is common to run migrations before starting an application, and it's more convenient when you add new migrations (you don't have to mess with two images).
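For example, a sketch with a hypothetical app service (it assumes flyway is installed in the application image, and my-app is a placeholder for your real start command):
services:
  app:
    build: .
    # run migrations first, then hand off to the real application process
    command: sh -c "flyway migrate && exec my-app"
    depends_on:
      - db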
If you still wish the database to do migrations, here are two options:
The Postgres Docker image supports extension scripts, but they run only at first launch, when Postgres creates the database; it will not run them on subsequent launches. This is the best way to load a dump or the like. To use the feature, place the script in /docker-entrypoint-initdb.d/ inside the container. The script must have a *.sql, *.sql.gz, or *.sh extension. Read more about initialization scripts here.
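For example, you could bind-mount a host directory of seed scripts onto that path (a sketch; ./initdb is a hypothetical directory of *.sql files):
services:
  db:
    build: database
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d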
Run the following command and examine the output: docker run --rm postgres:alpine cat /usr/local/bin/docker-entrypoint.sh. This is the default entrypoint script in this image. There is a lot of code in it, but look at this first:
if ! _is_sourced; then
	_main "$@"
fi
It won't do much if it is sourced. So the idea is: you source the default entrypoint into your entrypoint script:
#!/bin/sh
. /usr/local/bin/docker-entrypoint.sh
Then you copy the contents of the _main function into your script, adding flyway migrate where you think it fits. This way you should have a proper entrypoint script.

Cannot find files generated in Docker on the localhost

Let's consider such a directory. (Note: a directory name ends with \)
root\
|
-- some stuff
|
-- application\
| |
| -- app_stuff
| |
| -- out\
| |
| -- main.cpp
|
-- some stuff
I'm trying to build this app via docker.
The Dockerfile looks like:
FROM emscripten/emsdk:latest
RUN apt-get -q update
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN em++ application/main.cpp -o application/out/app.html
RUN pip3 install aiohttp
RUN pip3 install aiohttp_jinja2
RUN pip3 install jinja2
RUN ls application/out
The docker-compose looks like:
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    command: python3 application/entry.py
    ports:
      - "8080:8080"
As you may notice in the Dockerfile (RUN em++ application/main.cpp -o application/out/app.html), new files are generated in the out directory while Docker builds the image. However, once it's done, I can't find those files on the host.
Note: These files appear in application\out inside the container.
...
Step 10/10 : RUN ls application/out
---> Running in 603f6b99f4b0
app.html
app.js
app.wasm
...
Where have I made a mistake?
The Dockerfile gives instructions on how to build a Docker image, not on what happens in the live container.
If you mount a volume, whether via docker-compose or via a docker run command, the volume will only be mounted once the container is created.
So what happens is:
first, Docker creates the image, executing the commands in the Dockerfile, and stores the result as an image
then Docker creates a container using the stored image
then Docker mounts the volumes you defined in the docker-compose.yml file (at this point, anything already present in the target directory is hidden by the mounted content)
then the entrypoint or cmd command is run (so here that would be python3 application/entry.py)
So if you need to get the output files into your host directory, you either need to create those files in the entrypoint script or copy them there.
So you can create a file called myscript.sh with the following content:
#!/bin/bash
em++ /app/application/main.cpp -o /app/application/out/app.html
python3 /app/application/entry.py
In your Dockerfile, remove the line RUN em++ application/main.cpp -o application/out/app.html and replace it with:
COPY ./myscript.sh /
# shell-form ENTRYPOINT; make sure myscript.sh is executable
ENTRYPOINT /myscript.sh
and you remove the line command: python3 application/entry.py from your docker-compose.yml file.
You can use the CMD command rather than ENTRYPOINT if you prefer, that's just a matter of personal preference.
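The CMD variant would look like this; one practical difference is that a CMD is replaced by any arguments you pass to docker run, while overriding an ENTRYPOINT needs the --entrypoint flag:
COPY ./myscript.sh /
CMD /myscript.sh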
A docker-compose volume can link a directory on the host to a directory inside a container. You are overlaying the /app/application/out directory inside the container with a bind mount of the host's ./application/out, effectively hiding any contents of /app/application/out that originated from your built image.
Given the context, I presume your host's ./application/out directory is empty, so you are overlaying the container's /app/application/out directory with nothing. You can test this by removing the volumes: section and seeing whether the application is able to find files under /app/application/out afterwards.
Unrelated to your issue, take into consideration that your apt-get update command will cache Debian remote repository lists in your built image; this adds wasted space to your final image. See this post about deleting the cached lists.
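The usual pattern (a sketch; some-package is a placeholder) is to update, install, and delete the lists in a single RUN step, so the cached lists never persist in any layer:
RUN apt-get -q update \
 && apt-get install -y --no-install-recommends some-package \
 && rm -rf /var/lib/apt/lists/*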

docker-compose subfolders don't appear in volume folder

Dockerfile:
FROM golang:latest
RUN mkdir /app/
RUN mkdir /app/subfolder1
RUN mkdir /app/subfolder2
VOLUME /app/
docker-compose.yml
version: '3.3'
services:
  my_test:
    build: .
    volumes:
      - ./app:/app
I looked at how the mysql image (in its Dockerfile) shares the database files, and I decided to do the same. I expected that on the first docker-compose up, the two subfolders would appear in the ./app folder on the host. But when running docker-compose up, only the /app folder is created, without the subfolders inside it. What am I doing wrong?
Please tell me how I can achieve the same behavior as with the MySQL container, where on first start my external folder is filled with files and folders, and is then just reused:
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - ./data/db:/var/lib/mysql
The example above works, but my first example doesn't.
The mysql image has an involved entrypoint script that does the first-time setup. That specifically checks to see whether the data directory exists or not:
if [ -d "$DATADIR/mysql" ]; then
DATABASE_ALREADY_EXISTS='true'
fi
if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
docker_init_database_dir "$#"
...
fi
Note that this does not rely on any built-in Docker functionality, and does not copy any content out of the original image; it runs a fairly involved sequence of steps to populate the initial database setup, configure users, and run the contents in the /docker-entrypoint-initdb.d directory.
If you want to copy some sort of seed data into a mounted volume, your container generally needs to handle this itself. You could write an entrypoint script like:
#!/bin/sh
# If the data directory doesn't have content, copy it
if ! [ -d /data/content ]; then
  cp -a /app/data/content /data
fi
# Run whatever the container's main command is
exec "$@"
(There is a case where Docker will populate named volumes from image content. This has some severe limitations: it only works on named volumes and not bind-mounted host directories; it doesn't work on Kubernetes, if that's in your future; if the image content is updated, the volume will not be changed. Writing out the setup code explicitly at startup will give you more predictable behavior.)
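If you did want to rely on that named-volume behavior despite the caveats, the Compose file would look like this (a sketch):
version: '3.3'
services:
  my_test:
    build: .
    volumes:
      # named volume: populated from the image's /app on first use only
      - app-data:/app
volumes:
  app-data: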

In Docker, how do I detect from inside a container whether a file or a directory is mounted by Docker?

I did docker-compose down/docker container rm and noticed that I lost all my data created in the container. Yes, I forgot to mount my local directory as a volume in the first place. 😭
To prevent this, on startup, I want to warn users that "the data will be non-persistent", if the local volume's not mounted.
Is there a way to detect, from inside the container, whether a file or a directory was mounted via Docker?
I googled it but couldn't find a good way. My current workaround is:
FROM alpine:latest
RUN \
  mkdir /data && \
  touch /data/.replaceme
...
ENTRYPOINT /detect-mount-and-start.sh
detect-mount-and-start.sh checks if /data/.replaceme exists. If so, it warns to mount a local volume and exits.
Is there a better way to detect it?
Note (2019/09/12): This container is not only used via docker-compose up but also via docker run --rm. And the local directory name is not specified; it can be -v $(pwd)/mydata:/data or something like -v $(pwd)/data_local:/data, etc.
Note (2019/09/15): The situation is: I launched a container of a Markdown editor and created something like 100 .md files. Those files were saved in /data at the root of the container. I should have mounted the volume like -v $(pwd)/data:/data before everything. But I didn't ... and noticed it after removing the container. My bad, I know.
I don't know if I understand your question, but when you use docker-compose down, depending on how you write your docker-compose.yml, it can destroy your data. See:
Will delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
Won't delete it when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
Also won't delete it when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - data-volume:/var/lib/mysql
volumes:
  data-volume:
    external: true
PS: I do not have enough reputation to comment on your question, so I am answering instead.
The way you are doing it may also work, but I was working with one of my clients on a project in a development environment. It was a Node.js-based application, and they needed to make sure server.js existed before starting the container; server.js was expected to come from the mount location. So I came up with this approach, as I did not find a way to sense a shared Docker volume from inside the container.
Dockerfile
FROM alpine
RUN mkdir -p /myapp
COPY . /myapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
APP_PATH="/myapp"
files=$(ls /myapp/*.js)
echo "Files in Docker mount location: $files"
if [ -f "$APP_PATH/server.js" ] && [ -f "$APP_PATH/index.js" ]; then
  echo "Starting container with host mount files"
  echo "Starting server.js"
  cd $APP_PATH
  node server.js
else
  >&2 echo "Error: Please mount the host location on the /myapp path of the container, i.e. -v host_node_project:/myapp. Current files: $(ls $APP_PATH)"
  exit 1
fi
Build and run:
docker build -t myapp .
docker run -it --rm --name myapp myapp
docker-compose stop doesn't destroy your containers.
Then you can use:
docker-compose start
