Cannot find files generated in Docker on the localhost - docker

Consider the following directory structure. (Note: a directory name ends with \)
root\
|
-- some stuff
|
-- application\
| |
| -- app_stuff
| |
| -- out\
| |
| -- main.cpp
|
-- some stuff
I'm trying to build this app via docker.
The Dockerfile looks like:
FROM emscripten/emsdk:latest
RUN apt-get -q update
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN em++ application/main.cpp -o application/out/app.html
RUN pip3 install aiohttp
RUN pip3 install aiohttp_jinja2
RUN pip3 install jinja2
RUN ls application/out
The docker-compose looks like:
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    command: python3 application/entry.py
    ports:
      - "8080:8080"
As you may notice in the Dockerfile (RUN em++ application/main.cpp -o application/out/app.html), Docker generates new files into the out directory while processing the build. However, once it's done I can't find those files on the host.
Note: These files do appear in application\out inside the container.
...
Step 10/10 : RUN ls application/out
---> Running in 603f6b99f4b0
app.html
app.js
app.wasm
...
Where have I made a mistake?

The Dockerfile gives instructions on how to build a docker image, and not on what happens in the live container.
If you mount a volume, whether via docker-compose or via a docker run command, the volume will only be mounted once the container is created.
So what happens is:
1. First Docker creates the image, executing the commands in the Dockerfile, and stores it as an image.
2. Then Docker creates a container using the stored image.
3. Then Docker mounts the volumes you defined in the docker-compose.yml file. (At this point, if anything is already present in the target directory, either the mount will fail or the original content of the target directory will be moved to a 'lost-and-found' directory.)
4. Then the entrypoint or cmd command is run (so here that would be python3 application/entry.py).
So if you need to get the output files into your host directory, you either need to create those files in the entrypoint script or copy them in the entrypoint script.
You can, for example, create a file called myscript.sh with the following:
#!/bin/bash
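# generate the output files at container run time (after the volume is mounted), then start the server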
em++ /app/application/main.cpp -o /app/application/out/app.html
python3 /app/application/entry.py
In your Dockerfile, remove the line RUN em++ application/main.cpp -o application/out/app.html and replace it with:
COPY ./myscript.sh /
ENTRYPOINT /myscript.sh
and remove the line command: python3 application/entry.py from your docker-compose.yml file.
You can use the CMD instruction rather than ENTRYPOINT if you prefer; that's just a matter of personal preference.
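For reference, a sketch of the resulting docker-compose.yml, with only the command line removed (service, volume, and port taken from the question):
version: '3.8'
services:
  application:
    build: .
    volumes:
      - ./application/out:/app/application/out
    ports:
      - "8080:8080"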

A Docker-compose volume can link a directory on the host to a directory inside of a container. You are overwriting the /app/application/out directory inside of the container with a volume to the host's ./application/out, effectively erasing any contents of /app/application/out originating from your built image.
Given the context, I presume your host's ./application/out directory is empty and you are overwriting the container's /app/application/out directory with nothing. You can test this by removing the volumes tag and see if the application is able to find files under /app/application/out afterwards.
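A quick way to run that test without touching the compose file is to build the image and list the directory with no volume mounted (a sketch, run from the project root; the image tag is arbitrary):
docker build -t app-image .
docker run --rm app-image ls /app/application/out
If the files show up here but disappear when you run via docker-compose, the volume mount is what is hiding them.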
Unrelated to your issue, take into consideration that your apt-get update command will cache Debian remote repository lists in your built image; this adds wasted space to your final image. See this post about deleting the cached lists.
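A common pattern (a sketch; the package name is a placeholder) is to update, install, and remove the cached lists in a single layer so they never persist in the image:
RUN apt-get -q update \
 && apt-get -y install <your-packages> \
 && rm -rf /var/lib/apt/lists/*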

Related

Dockerfile not copying file to image - no such file or folder

What I am trying to achieve:
copy a redis.config template to my docker image
read .env variables content and replace the template variables references (such as passwords, ports etc.) with values from .env
start the redis-server with the prepared config file
This way, I can have multiple redis instances setup for local dev, staging and production environments.
I have the following folder structure:
/redis
--.env
--Dockerfile
--redis.conf
This is the Dockerfile:
FROM redis:latest
COPY redis.conf ./
RUN apt-get update
RUN apt-get -y install gettext
RUN envsubst < redis.conf > redisconf
EXPOSE $REDIS_PORT
CMD ["redis-server redis.conf"]
When I go to the redis folder and run docker build -t redis-test . everything builds as expected, but when I do docker run -dp 6379:6379 redis-test afterwards the container crashes with the following error:
Fatal error, can't open config file '/data/redis-server redis.conf': No such file or directory
It seems that the redis.conf file from my folder is not getting correctly copied into my image? But envsubst runs as expected, so it seems that the file is there and the .env variables get substituted as expected?
What am I doing wrong?
The immediate error is that you've explicitly put the CMD as a single word, so it is interpreted as an executable filename containing a space rather than an executable and a parameter. Split this into two words:
CMD ["redis-server", "redis.conf"]
There's a larger and more complex problem around when envsubst gets run. You're RUNning it as part of the image build, but that means it happens before the container is run and the environment variables are known.
I'd generally address this by writing a simple entrypoint wrapper script. This runs as the main container process, so after the Docker-level container setup happens, and it can see all of the container environment variables. It can run envsubst or whatever other first-time setup is required, and then run exec "$@" to invoke the normal container command.
#!/bin/sh
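# substitute the container's environment variables into the config, then hand off to the CMD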
envsubst < redis.conf.tmpl > redis.conf
exec "$#"
Make this script executable on the host (chmod +x entrypoint.sh), COPY it into your image, and make that the ENTRYPOINT.
FROM redis:latest
COPY redis.conf.tmpl entrypoint.sh ./
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get -y install gettext
ENTRYPOINT ["./entrypoint.sh"]
CMD ["redis-server", "redis.conf"]

docker-compose debugging service show `pwd` and `ls -l` at run?

I have a docker-compose file with a service called 'app'. When I try to run it I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
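For example (a sketch; the --progress flag needs a BuildKit-enabled Docker/Compose version, since BuildKit collapses RUN output by default):
docker-compose build --no-cache --progress=plain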
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
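As a sketch of that last point: if the Dockerfile currently splits the command as ENTRYPOINT ["python"] plus CMD ["apps/index.py"], combine it into a single instruction so an alternate docker-compose run command can replace it cleanly:
CMD ["python", "apps/index.py"]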

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine ./dist with a folder on the container app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:/app/dist # I'm expecting files changed or added in the container's /app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an NPM script that I want to make available on the host machine once the build is finished. I'm also touching a new file /app/dist/test.md as a simple test to see if it ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
-v "$PWD/dist:/out" \
myimage \
cp -a /app/dist /out
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that either of these sequences is more complex than just installing a local Node via a package manager, and requires administrator permissions (the same technique could be used to overwrite any host file, including /etc/shadow with its encrypted passwords).

Why is git clone failing when I build an image from a dockerfile?

FROM ansible/ansible:ubuntu1604
MAINTAINER myname
RUN git clone http://github.com/ansible/ansible.git /tmp/ansible
RUN git clone http://github.com/othertest.git /tmp/othertest
WORKDIR /tmp/ansible
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin:bin
ENV PYTHONPATH /tmp/ansible/lib:$PYTHON_PATH
ADD inventory /etc/ansible/hosts
WORKDIR /tmp/
EXPOSE 8888
When I build from this Dockerfile, I get Cloning into /tmp/ansible (and the same for othertest) in red text, which I assume is an error. When I then run the container and look around, I see that all my steps from the Dockerfile built correctly, except for the git repositories, which are missing.
I can't figure out what I'm doing wrong; I'm assuming it's a simple mistake.
Building the Dockerfile:
sudo docker build --no-cache -f Dockerfile .
Running the container:
sudo docker run -i -t de32490234 /bin/bash
The short answer:
Put your files anywhere other than in /tmp and things should work fine.
The longer answer:
You're basing your image on the ansible/ansible:ubuntu1604 image. If you inspect this image via docker inspect ansible/ansible:ubuntu1604 or look at the Dockerfile from which it was built, you will find that it contains a number of volume mounts. The relevant line from the Dockerfile is:
VOLUME /sys/fs/cgroup /run/lock /run /tmp
That means that all of those directories are volume mount points, which means any data placed into them will not be committed as part of the image build process.
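So the short answer amounts to something like this (a sketch; /opt is chosen purely for illustration, as it is not a volume in the base image):
RUN git clone http://github.com/ansible/ansible.git /opt/ansible
RUN git clone http://github.com/othertest.git /opt/othertest
WORKDIR /opt/ansible
Because /opt is not declared as a VOLUME, these layers are committed into the image as usual.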
Looking at your Dockerfile, I have two comments unrelated to the above:
You're explicitly setting the PATH environment variable, but you're neglecting to include /bin, which will cause all kinds of problems (a corrected line is sketched after these comments), such as:
$ docker run -it --rm bash
docker: Error response from daemon: oci runtime error: exec: "bash": executable file not found in $PATH.
You're using WORKDIR twice, but the first time (WORKDIR /tmp/ansible) you're not actually doing anything that cares what directory you're in (you're just setting some environment variables and copying a file into /etc/ansible).
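For the first comment, a corrected line might look like this (a sketch; note the final /bin rather than the bare bin):
ENV PATH /tmp/ansible/bin:/sbin:/usr/sbin:/usr/bin:/bin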

Running composer install in docker container

I have a docker-compose.yml file which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile located in ./docker/php looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
However, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or if I'm running the composer update in the wrong place.
RUN ... lines are run when the image is being built.
Volumes are attached to the container. You have at least two options here:
use the COPY command to, well, copy your app code into the image so that all commands after it have access to it; see the sketch after this list. (Do not push the image to any public Docker repo, as it will then contain source you probably don't want to leak.)
install composer dependencies with a command run in your container (CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose)
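A sketch of the first option (note that the compose file's build context is ./docker/php, so it would need to point at the project root for this COPY to pick up your code):
COPY . /var/www/website
RUN composer update -d /var/www/website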
You are mounting your local volume over your build directory so anything you built in '/var/www/website' will be mounted over by your local volume when the container runs.
