docker-compose debugging: how can a service show `pwd` and `ls -l` when it runs? - docker

I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py

You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
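Note that with BuildKit (the default builder on newer Docker releases) the output of RUN steps is collapsed or served from cache, so you may need to rebuild without cache and with plain progress output to see it. A minimal sketch, assuming your Compose version supports the --progress flag:
# re-run the RUN steps and print their output
docker-compose build --no-cache --progress=plain app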
I hope this answer helps you.

If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the service's normal container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
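For illustration, a minimal sketch of such a wrapper (the entrypoint.sh name and the setup step are assumptions, not taken from the question):
#!/bin/sh
# entrypoint.sh: do any one-time setup, then hand off to whatever command was given
# (the Dockerfile CMD, or the override passed to `docker-compose run`)
set -e
# ... first-time setup would go here (waiting for the db, migrations, etc.) ...
exec "$@"
With ENTRYPOINT ["./entrypoint.sh"] and CMD ["python", "apps/index.py"] in the Dockerfile, docker-compose run app ls -l ./apps still works, because ls -l ./apps simply replaces the CMD handed to exec "$@".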

Related

Docker container fails on Windows PowerShell, succeeds on WSL2 with identical Dockerfile and docker-compose

Problem Description
I have a docker image which I build and run using docker-compose. Normally I develop on WSL2, and when running docker-compose up --build the image builds and runs successfully. On another machine, using Windows powershell, with an identical clone of the code, executing the same command successfully builds the image, but gives an error when running.
Error
[+] Running 1/1
- Container fastapi-service Created 0.0s
Attaching to fastapi-service
fastapi-service | exec /start_reload.sh: no such file or directory
fastapi-service exited with code 1
I'm fairly experienced using Docker, but am a complete novice with PowerShell and developing on Windows more generally. Is there a difference in Dockerfile construction in this context, or a difference in the execution of COPY and RUN statements?
Code snippets
Included are all parts of the code required to replicate the error.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
COPY ./start_reload.sh /start_reload.sh
RUN chmod +x /start_reload.sh
COPY ./data /data
COPY ./app /app
EXPOSE 8000
CMD ["/start.sh"]
docker-compose.yml
services:
  web:
    build: .
    container_name: "fastapi-service"
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: /start_reload.sh
start_reload.sh
This is a small shell script which runs a prestart.sh if present, and then launches gunicorn/uvicorn in "reload mode":
#!/bin/sh
# If there's a prestart.sh script in the /app directory, run it before starting
PRE_START_PATH=/app/prestart.sh
HOST=${HOST:-0.0.0.0}
PORT=${PORT:-8000}
LOG_LEVEL=${LOG_LEVEL:-info}
echo "Checking for script in $PRE_START_PATH"
if [ -f $PRE_START_PATH ] ; then
    echo "Running script $PRE_START_PATH"
    . "$PRE_START_PATH"
else
    echo "There is no script $PRE_START_PATH"
fi
# Start Uvicorn with live reload
exec uvicorn --host $HOST --port $PORT --log-level $LOG_LEVEL main:app --reload
The solution lies in a difference between the way UNIX and Windows systems end lines. A discussion on the topic can be found in "Difference between CR LF, LF and CR line break types?".
The presence or absence of these characters in the file, and the configuration of the shell running the command, lead to an error where the file being executed is effectively start_reload.sh(CR-LF), but the file that exists is simply start_reload.sh, hence the "no such file or directory" error.
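If that is the cause, one common fix (assuming the project is in Git; the repository setup isn't shown in the question) is to force LF endings for shell scripts with a .gitattributes entry:
# .gitattributes: keep shell scripts with LF endings, even on Windows checkouts
*.sh text eol=lf
After adding this, re-normalize or re-check out the affected files (for example with git add --renormalize .) so that start_reload.sh actually contains LF endings before the image is rebuilt.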

Get build files to persist on host after docker-compose build is run

I'm trying to run a docker-compose build command with a Dockerfile and a docker-compose.yml file.
Inside the docker-compose.yml file, I'm trying to bind a local folder on the host machine ./dist with a folder on the container app/dist.
version: '3.8'
services:
  dev:
    build:
      context: .
    volumes:
      - ./dist:app/dist # I'm expecting files changed or added in the container's app/dist to be reflected in the host's ./dist folder
Inside the Dockerfile, I build some files with an NPM script that I want to make available on the host machine once the build is finished. I'm also touching a new file, /app/dist/test.md, as a simple test to see whether it ends up on the host machine, but it does not.
FROM node:8.17.0-alpine as example
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN npm install
RUN npm run dist
RUN touch /app/dist/test.md
Is there a way to do this? I also tried using the "long syntax" as mentioned in the Docker Compose v3 documentation: https://docs.docker.com/compose/compose-file/compose-file-v3/
The easiest way to do this is to install Node and run the npm commands directly on the host.
$BREW_OR_APT_GET_OR_YUM_OR_SOMETHING install node
npm install
npm run dist
# done
There's not an easy way to use a Dockerfile to build host content. The Dockerfile can't write out directly to the host filesystem; if you use a volume mount, the host volume hides the container content before anything else happens.
That means, if you want to use this approach, you need to launch a temporary container to get the content out. You can do it with a one-off container, mounting the host directory somewhere other than /app, making the main container command be cp:
sudo docker build -t myimage .
sudo docker run --rm \
  -v "$PWD/dist:/out" \
  myimage \
  cp -a /app/dist/. /out
Or, if you specifically wanted to use docker cp:
sudo docker build -t myimage .
sudo docker create --name to-copy myimage
sudo docker cp to-copy:/app/dist ./dist
sudo docker rm to-copy
Note that either of these sequences is more complex than just installing a local Node via a package manager, and requires administrator permissions (you can use the same technique to overwrite any host file, including the /etc/shadow file with encrypted passwords).
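Separately, if your Docker installation supports BuildKit, docker build --output can export files from a build stage straight to the host; a rough sketch, assuming BuildKit is available and adding a hypothetical dist stage to the Dockerfile:
# extra stage at the end of the Dockerfile, containing only the built files
FROM scratch AS dist
COPY --from=example /app/dist /

# export that stage's filesystem into the host ./dist directory
DOCKER_BUILDKIT=1 docker build --target dist --output ./dist .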

Why is Docker not binding my volumes to the container?

I have a very simple project:
Dockerfile:
from node:lts
VOLUME /scripts
WORKDIR /scripts
RUN bash -c 'ls /'
RUN bash -c 'ls /scripts'
RUN script.sh
docker-compose.yml:
version: '3.7'
services:
  service:
    build: .
    volumes:
      - .:/scripts
Then I run docker-compose build but it fails with /bin/sh: 1: script.sh: not found
From the ls /scripts I can see that Docker isn't binding my script to the container. I have Docker 19.03.8. Do you know what I am doing wrong?
When you run a Docker Compose file, the build: block is run first, and it ignores all of the options outside that block. A Dockerfile never has mounted volumes, it can never make network calls to other Compose containers, and it won't see environment: variables that are set elsewhere.
That means you must explicitly COPY code into your image before you can RUN it.
FROM node:lts
WORKDIR /scripts
COPY script.sh .
RUN ./script.sh
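If the goal was instead to run the script against the live bind-mounted files, note that the volume does exist at run time, so a sketch of that alternative (the same compose file, plus a command:) would be:
version: '3.7'
services:
  service:
    build: .
    volumes:
      - .:/scripts
    command: sh ./script.sh   # the bind mount is in place by the time this runs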

Adding Volume to docker container's /app Issue

I am having trouble creating a volume that maps to the directory "/app" in my container.
This is basically so that when I update the code I don't need to rebuild the container.
This is my Dockerfile:
# stage 1
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/my-first-app /usr/share/nginx/html
I use this command to run the container
docker run -d -p 100:80/tcp -v ${PWD}/app:/app docker-testing:v1
and no volume gets linked to it.
However, if I were to do this
docker run -d -p 100:80/tcp -v ${PWD} docker-testing:v1
I do get a volume at least
Anything obvious that I am doing wrong?
Thanks
The ${PWD}:/app:/app should be ${PWD}/app:/app.
If you expand ${PWD}, you'd obtain something like /home/user/src/thingy:/app:/app, which does not make much sense.
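With that fix, the run command from the question becomes:
docker run -d -p 100:80/tcp -v ${PWD}/app:/app docker-testing:v1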
EDIT:
I'd suggest using docker-compose to avoid this kind of issue (it also simplifies a lot the commands needed to start up Docker).
In your case the docker-compose.yml would look like this:
version: "3"
services:
doctesting:
build: .
image: docker-testing:v1
volumes:
- "./app:/app"
ports:
- "100:80"
I didn't really test if it works, there might be typos...

Package docker maven application and run it using a shell script

I am building a SciGraph database on my local machine and trying to move the entire folder into Docker and run it there. When I run the shell script on my local machine it runs without error, but when I add the same folder inside Docker and try to run it, it fails.
Am I doing this the right way? Here's my Dockerfile:
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
WORKDIR /usr/share/SciGraph/SciGraph-services
RUN pwd
EXPOSE 9000
CMD ['./run.sh']
When I try to run it I get this error:
docker run -p9005:9000 test
/bin/sh: 1: [./run.sh]: not found
If I run it using the command below, it works:
docker run -p9005:9000 test -c "cd /usr/share/SciGraph/SciGraph-services && sh run.sh"
Since I already set that directory as WORKDIR and run the script inside Docker using CMD, why does it throw an error?
For SciGraph, as described in their README, you need to run mvn install before you run their services. You can set your shell to bash and use Docker Compose to run the image as shown below.
Dockerfile
FROM goyalzz/ubuntu-java-8-maven-docker-image
ADD ./SciGraph /usr/share/SciGraph
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/share/SciGraph
RUN mvn -DskipTests -DskipITs -Dlicense.skip=true install
RUN cd /usr/share/SciGraph/SciGraph-services && chmod a+x run.sh
EXPOSE 9000
Build the SciGraph Docker image by running:
docker build . -t scigraph_test
docker-compose.yml
version: '2'
services:
  scigraph-server:
    image: scigraph_test
    working_dir: /usr/share/SciGraph/SciGraph-services
    command: bash run.sh
    ports:
      - 9000:9000
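Then bring the service up with:
docker-compose up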
Add a / after SciGraph-services and change the command to "sh run.sh"; also check the file permissions on run.sh.
It is likely that your run.sh doesn't have the #!/bin/bash shebang header, so it cannot be executed just by running ./run.sh. Nevertheless, prefer to run scripts as /bin/bash foo.sh or /bin/sh foo.sh when in Docker, especially because you don't know what changes have been made to the files in images downloaded from public repositories.
So, your CMD statement would be:
CMD /bin/bash -c "/bin/bash run.sh"
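For reference, the first line of run.sh would then be the shebang:
#!/bin/bash
# ... rest of run.sh ...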
You have to add the shell and the executable to the CMD array:
CMD ["/bin/sh", "./run.sh"]
