Docker-compose volumes working with Dockerfile : Device or resource busy - docker

I'm running into an issue with a docker-compose service that is built from a Dockerfile.
I provide a .env file in my app/ folder, and I want the TAG value from that .env file to be propagated/rendered into the config.ini file. I tried to achieve this with an entrypoint.sh (which runs just after the volumes are mounted), but it failed.
Here is my docker-compose.yml file:
# file docker-compose.yml
version: "3.4"
services:
  app-1:
    build:
      context: ..
      dockerfile: deploy/Dockerfile
    image: my_image:${TAG}
    environment:
      - TAG=${TAG}
    volumes:
      - ../config.ini:/app/config.ini
And then my Dockerfile:
# file Dockerfile
FROM python:3.9
RUN apt-get update -y
RUN apt-get install -y python-pip
COPY ./app /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["python", "hello_world.py"]
In my case, I mount a config.ini file with configuration like this:
# file config.ini
[APP_INFO]
name = HELLO_WORLD
version = {TAG}
And finally, in my app folder, I have a .env file containing the version of the app, which evolves over time.
# file .env
TAG=1.0.0
Finally, my entrypoint.sh:
#!/bin/bash
echo "TAG:${TAG}"
awk '{sub("{TAG}","${TAG}")}1' /app/config.ini > /app/final_config.ini
mv /app/final_config.ini /app/config.ini
exec "$#" # Need to execute RUN CMD function
My entrypoint.sh runs after the docker-compose volumes are mounted and before the CMD. With it, I want to overwrite my mounted file with a new one created using awk.
Unfortunately, while I do recover the tag and can create a final_config.ini file, I'm not able to overwrite config.ini with it.
I get this error:
mv: cannot move '/app/final_config.ini' to '/app/config.ini': Device or resource busy
How can I overwrite config.ini without getting this error? Is there a simpler solution?

Because /app/config.ini is a mountpoint, you can't replace it. You should be able to rewrite it, like this...
cat /app/final_config.ini > /app/config.ini
...but that would, of course, modify the original file on your host. For what you're doing, a better solution is probably to mount the template configuration in an alternate location, and then generate /app/config.ini. E.g., mount it on /app/template_config.ini:
volumes:
  - ../config.ini:/app/template_config.ini
And then modify your script to output to the final location. One extra fix: inside single quotes the shell never expands ${TAG}, so the original substitution would have written the literal text ${TAG}; pass the value in with awk -v instead:
#!/bin/bash
echo "TAG:${TAG}"
awk -v tag="$TAG" '{gsub(/\{TAG\}/, tag)}1' /app/template_config.ini > /app/config.ini
exec "$@" # hand off to the image's CMD

Related

Dockerfile not copying file to image - no such file or folder

What I am trying to achieve:
copy a redis.conf template to my docker image
read the .env variables and replace the template's variable references (such as passwords, ports etc.) with values from .env
start the redis-server with the prepared config file
This way, I can have multiple redis instances set up for local dev, staging and production environments.
I have the following folder structure:
/redis
--.env
--Dockerfile
--redis.conf
This is the Dockerfile:
FROM redis:latest
COPY redis.conf ./
RUN apt-get update
RUN apt-get -y install gettext
RUN envsubst < redis.conf > redisconf
EXPOSE $REDIS_PORT
CMD ["redis-server redis.conf"]
When I go to the redis folder and run docker build -t redis-test . everything builds as expected, but when I do docker run -dp 6379:6379 redis-test afterwards the container crashes with the following error:
Fatal error, can't open config file '/data/redis-server redis.conf': No such file or directory
It seems that the redis.conf file from my folder is not getting copied correctly to my image? But envsubst runs as expected, so the file seems to be there and the .env variables get substituted as expected?
What am I doing wrong?
The immediate error is that you've explicitly put the CMD as a single word, so it is interpreted as an executable filename containing a space rather than an executable and a parameter. Split this into two words:
CMD ["redis-server", "redis.conf"]
There's a larger and more complex problem around when envsubst gets run. You're RUNning it as part of the image build, but that means it happens before the container is run and the environment variables are known.
I'd generally address this by writing a simple entrypoint wrapper script. This runs as the main container process, so after the Docker-level container setup happens, and it can see all of the container environment variables. It can run envsubst or whatever other first-time setup is required, and then run exec "$#" to invoke the normal container command.
#!/bin/sh
envsubst < redis.conf.tmpl > redis.conf
exec "$#"
Make this script executable on the host (chmod +x entrypoint.sh), COPY it into your image, and make that the ENTRYPOINT.
FROM redis:latest
COPY redis.conf.tmpl entrypoint.sh ./
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get -y install gettext
ENTRYPOINT ["./entrypoint.sh"]
CMD ["redis-server", "redis.conf"]

Automate project in Laravel

I have a Laravel app with a .env.local file, and I made the following docker-compose file:
api:
  container_name: nadal_api
  build:
    context: .
    dockerfile: Dockerfile
  volumes:
    - .:/var/www/html/app
  ports:
    - ${APP_PORT}:80
  links:
    - db
    - redis
And my Dockerfile:
FROM composer:latest AS composer
WORKDIR /var/www/html/app/
FROM php:7.2-fpm-stretch
RUN apt-get update && apt-get install -y \
supervisor \
nginx \
zip
ADD docker/nginx.conf /etc/nginx/nginx.conf
ADD docker/virtualhost.conf /etc/nginx/conf.d/default.conf
ADD docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
ARG enviroment
COPY --from=composer /usr/bin/composer /usr/bin/composer
COPY .env.local .env
RUN chmod -R g+w /var/www/html/app/bootstrap
RUN composer install
RUN php artisan key:generate
EXPOSE 80
CMD ["/usr/bin/supervisord"]
I want to clone the repository and, when running docker-compose build, have the Dockerfile do the following:
rename .env.local to .env
give permissions to the storage folder. I have an error in this line:
RUN chmod -R g+w /var/www/html/app/bootstrap
chmod: cannot access '/var/www/html/app/bootstrap': No such file or directory
docker-compose.yaml: ${APP_PORT} takes its value from .env.local (I tried env_file but it does not work).
In your Dockerfile there is no COPY action that copies your project code into the created image, so the bootstrap folder does not exist in the image; chmod is telling you exactly that.
Volumes (this line: - .:/var/www/html/app) sync your current directory with the container later, when the container is created from the image. So if you want to give permissions to the bootstrap folder, copy the project code into the image before changing permissions on it.
Add this line before the permission operations to make the folders accessible:
COPY . /var/www/html/app
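Putting it together, the relevant part of the runtime stage would read something like this. It is a sketch based on the paths in the question; the WORKDIR is an addition, since the runtime stage in the question never sets one:
WORKDIR /var/www/html/app
COPY .env.local .env
COPY . /var/www/html/app
RUN chmod -R g+w /var/www/html/app/bootstrap
RUN composer install && php artisan key:generate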

Why are files I generate with a bash script within a docker container also saved locally?

I have an aws/appium test project I want to run in docker. I have a bash script that runs in the container which downloads a file from S3 and creates a zip of my project.
The Dockerfile:
FROM maven:3.3.9
RUN apt-get update && \
apt-get -y install python && \
apt-get -y install python-pip && \
pip install awscli
RUN export PATH=$PATH:/usr/local/bin
There's a docker-compose file whose command runs a bash script:
version: '2'
volumes:
  maven_cache: ~
services:
  application: &application
    build: .
    tmpfs:
      - /tmp:rw,nodev,noexec,nosuid
    volumes:
      - ./:/app
      - maven_cache:/root/.m2/repository
    working_dir: /app
    command: ./aws-upload.sh
This is the beginning of the ./aws-upload.sh bash script. It prepares the files I need for uploading later:
#!/usr/bin/env bash
mvn clean package -DskipTests=true
aws s3 cp s3://<bucket-name>/app.apk $(pwd)
cp target/zip-with-dependencies.zip $(pwd)
I only want the above files to exist within the container; however, they also appear locally. Is there something in my docker-compose file that isn't configured correctly?
Thanks
In your compose file you are defining a volume, ./:/app, which maps the host folder where the compose file is located to the container's /app folder. If you execute your bash script in the /app folder, the files it creates will also be available on the host.
If you want to avoid this, either remove the volume mapping (in case you don't need it) or execute the script in another folder which is not mapped to your host.
This is normal. When you declare the following inside the compose file:
volumes:
  - ./:/app
it means: mount the current host directory onto /app inside the container. This effectively keeps the current directory and the /app folder inside the container in sync.
Thus, if the aws-upload.sh script creates files in /app, they will also show up next to the compose file.
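If you need the bind mount for the sources but want the build output to stay inside the container, one common pattern is to shadow the output subdirectory with an anonymous volume. A sketch, assuming Maven writes to target/ under /app:
volumes:
  - ./:/app
  - /app/target
  - maven_cache:/root/.m2/repository
Note that the aws s3 cp into $(pwd) would still land in the mounted folder, so you would also want to download the .apk somewhere unmounted, such as /tmp.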

Running composer install in docker container

I have a docker-compose.yml file which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile located in ./docker/php looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
However, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or if I'm running composer update in the wrong place.
RUN ... lines are run when the image is being built.
Volumes are attached to the container. You have at least two options here:
use the COPY command to, well, copy your app code into the image so that all commands after it have access to it (do not push the image to any public Docker repo, as it will contain source you probably don't want to leak)
install the composer dependencies with a command run in your container: CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose (a sketch of this option follows below)
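A minimal sketch of that second option, reusing the compose file from the question; the php-fpm at the end restores the image's normal foreground process:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
    command: sh -c "composer install -d /var/www/website && php-fpm"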
You are mounting your local volume over your build directory, so anything you built into /var/www/website in the image is hidden by your local volume when the container runs.
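If you go with the COPY approach but still want the bind mount at run time, you can shadow just the dependency directory with an anonymous volume, so the vendor/ tree installed at build time is not hidden (assuming Composer's standard vendor/ layout):
volumes:
  - .:/var/www/website
  - /var/www/website/vendor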

How can I write a Dockerfile for Yesod? "RUN yesod init -n myApp -d postgresql" didn't work as expected

I tried to make a simple application with Yesod and PostgreSQL using Docker Compose, but RUN yesod init -n myApp -d postgresql didn't seem to work as expected.
I defined Dockerfile and docker-compose.yml as below:
Dockerfile:
FROM shuny/ghc-7.8.4:latest
MAINTAINER shuny
# Create default config
RUN cabal update
# Add stackage remote repo
RUN sed -i 's/^remote-repo: [a-zA-Z0-9_\/:.]*$/remote-repo: stackage:http:\/\/www.stackage.org\/lts/g' /root/.cabal/config
# Update packages
RUN cabal update
# Generate locale otherwise happy (because of tf-random) will fail
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN echo $LANG
# Install build tools for yesod
RUN cabal install alex happy yesod-bin
# Install library for yesod-postgres
RUN apt-get update && apt-get install -y libpq-dev
RUN mkdir /code
WORKDIR /code
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
ADD . /code
WORKDIR /code
# ADD settings.yml /code/myApp/config/
docker-compose.yml:
database:
  image: postgres
  ports:
    - "5432"
web:
  build: .
  tty: true
  command: yesod devel
  volumes:
    - .:/code/
  ports:
    - "3000:3000"
  links:
    - database
and docker-compose build finished as below:
Step 0 : FROM shuny/ghc-7.8.4:latest
...
Step 17 : WORKDIR /code
---> Running in bf99d0aca48c
---> 37c3c94338d7
Removing intermediate container bf99d0aca48c
Successfully built 37c3c94338d7
but when I check like this:
$ docker-compose run web /bin/bash
root@0fe5fb1a3b20:/code# ls
root@0fe5fb1a3b20:/code#
it showed nothing, while these commands work as expected:
docker run -ti 37c3c94338d7
root@31e94428de37:/code# ls
docker-compose.yml Dockerfile myApp settings.yml
root@31e94428de37:/code# ls myApp/
app config Handler Model.hs Settings.hs test
Application.hs dist Import myApp.cabal static
cabal.sandbox.config Foundation.hs Import.hs Settings templates
How can I fix it?
I really appreciate any feedback, thank you.
You are doing strange things with volumes and the ADD instruction.
First you build your application inside the image:
RUN yesod init -n myApp -d postgresql
WORKDIR /code/myApp
RUN cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
RUN cabal configure && cabal build && cabal install
Then you add the content of the folder that contains the Dockerfile into the /code folder of the image. I guess this step is useless.
ADD . /code
Then, if you run a container without a volume option, everything works fine:
docker run -ti 37c3c94338d7
But in your docker-compose.yml file, you specified a volume option that overrides the /code folder in the container with the folder that contains the docker-compose.yml file on the host machine. Therefore, you no longer have the content generated during the build of your image.
There are two possibilities:
Don't use the volume instruction in the docker-compose.yml file
Put the content of the /code/myApp/ folder of the image inside the ./myApp folder of the host.
It depends on why you want to use the volume option in docker-compose.yml.
I don't really know what your goal is. But if what you are trying to do is access the files built inside the container from the host machine, this should do what you are looking for (a sketch follows after these steps):
Remove the build steps from your Dockerfile
Run a shell inside a "web" container: docker-compose run web bash
Launch the build commands
So you will have built your application while the volume was mounted and will see the files on the host machine.
Exit the shell
Launch Docker Compose normally
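Concretely, that sequence would look something like this; it is a sketch, with the init and cabal commands taken from the Dockerfile above:
$ docker-compose run web /bin/bash
# inside the container, /code is now the mounted host directory:
root@container:/code# yesod init -n myApp -d postgresql
root@container:/code# cd myApp
root@container:/code/myApp# cabal sandbox init && cabal install --only-dependencies --max-backjumps=-1 --reorder-goals
root@container:/code/myApp# cabal configure && cabal build
root@container:/code/myApp# exit
$ docker-compose up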
If you just want to be able to back up the content of the /code/myApp/ folder, maybe you should omit the host-machine path from the volumes section of docker-compose.yml:
volumes:
  - /code/
And follow this section of the documentation
I hope it helps
