Can't access docker application from localhost using docker-compose - docker

Before I start: I have already searched this question and implemented what the suggested "solutions" were (setting the host to 0.0.0.0), and it did not help. So, with that out of the way,
Directory structure
|-- osr
| |-- __init__.py
|-- requirements.txt
|-- Dockerfile
|-- docker-compose.yml
Dockerfile:
FROM python:3.7.5-buster
# I have tried with and without this EXPOSE
EXPOSE 5000
ENV INSTALL_PATH /osr
ENV FLASK_APP osr
ENV FLASK_ENV development
RUN mkdir -p $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0"]
docker-compose.yml:
version: '3.8'
services:
  web:
    build: .
    ports:
      - '5000:5000'
    expose:
      - '5000'
    volumes:
      - .:/osr
__init__.py
import os
from flask import Flask

def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev'
    )

    @app.route('/hello')
        def hello():
            return 'Hello, World!'

    return app
docker-compose build web
docker-compose run web
* Serving Flask app "osr" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 225-441-434
All of the below return "Hmm, can't reach this page":
http://localhost
http://localhost:5000/hello
http://127.0.0.1:5000/hello
I've even tried going to the container's IP with
docker exec -it 49e677 bash
ip add | grep global
inet 172.21.0.2/16 brd 172.21.255.255 scope global eth0
http://172.21.0.2:5000/hello # nothing
Nothing.
I'm sure the app itself works fine as I can simply run the app directly.
$env:FLASK_APP="osr"
$env:FLASK_ENV="development"
flask run --host=0.0.0.0
And it runs fine.
EDIT: UPDATE
I am actually able to get to the container when I run it plainly with docker run... using
docker run -it -p 5000:5000 osr_web # the image built by docker-compose build
With this, I am able to access the endpoint through localhost:5000/hello
So the issue appears to lie in spinning it up through docker-compose run
Does this help at all?
UPDATE 2
I have discovered that when I run docker ps -a I can see that docker run actually publishes the port, but docker-compose run does not:

Are you sure the app itself works fine? I tried to run your python __init__.py file and ended up with an error.
python osr/__init__.py
  File "osr/__init__.py", line 11
    def hello():
    ^
IndentationError: unexpected indent
It works after fixing the indentation error.
@app.route('/hello')
def hello():
    return 'Hello, World!'
$ docker run -d -p 5000:5000 harik8/osr:latest
76628f86fecb61c0be4a969d3c91c5c575702ad8063b594a6c1c90b498ea25f1
$ curl http://127.0.0.1:5000/hello
Hello, World!
You can't run both docker and docker-compose on port 5000 at the same time. Either run one at a time, or change the host port in the docker-compose file/Dockerfile (a sketch of such a mapping follows the listings below).
$ docker ps -a | grep osr
8b885c4a9654 harik8/osr:latest "flask run --host=0.…" 12 seconds ago Up 11 seconds 0.0.0.0:5000->5000/tcp
$ docker ps -a | grep q5
70f38bf11e26 q5_web "flask run --host=0.…" About a minute ago Up 10 seconds 0.0.0.0:5001->5000/tcp
$ docker ps -a | grep q5
f9f6ba999109 q5_web "flask run --host=0.…" 5 minutes ago Up 5 minutes 0.0.0.0:5000->5000/tcp q5_web_1
$ docker ps -a | grep osr
93fb421333e4 harik8/osr:latest "flask run --host=0.…" 18 seconds ago Up 18 seconds 5000/tcp
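For reference, the 0.0.0.0:5001->5000/tcp line above corresponds to a compose ports entry along these lines (a sketch, not the asker's actual file):
ports:
  - '5001:5000'  # publish container port 5000 on host port 5001 to avoid the conflict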

I found the issue. For reference, I am running these versions:
docker-compose version 1.25.5, build 8a1c60f6
Docker version 19.03.8, build afacb8b
There were a couple of issues. First and foremost, to get the ports published I needed to run one of two commands.
docker-compose up web
or
docker-compose run --service-ports web
Simply running docker-compose run web would not publish the ports: docker-compose run is intended for one-off commands, so it deliberately skips the service's port mappings (to avoid colliding with an already-running copy of the service) unless you pass --service-ports.
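A quick way to see the difference (a sketch; the generated container names will vary):
docker-compose up -d web
docker ps --format '{{.Names}} {{.Ports}}'   # e.g. osr_web_1  0.0.0.0:5000->5000/tcp (published)
docker-compose run -d web
docker ps --format '{{.Names}} {{.Ports}}'   # e.g. osr_web_run_1  5000/tcp (exposed only)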
Once this was done, I was able to access the endpoint. However, I started getting another odd error:
flask.cli.NoAppException
flask.cli.NoAppException: Failed to find Flask application or factory in module
"osr". Use "FLASK_APP=osr:name" to specify one.
I had not hit this when simply using docker run -it -p 5000:5000 osr_web, which was odd. However, I noticed I had not set the working directory in the Dockerfile.
I changed the Dockerfile to this:
FROM python:3.7.5-buster
EXPOSE 5000
ENV INSTALL_PATH /osr
ENV FLASK_APP osr
ENV FLASK_ENV development
RUN mkdir -p $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# added this line
WORKDIR $INSTALL_PATH
COPY . .
CMD [ "flask", "run", "--host=0.0.0.0"]
I believe you could get away without setting WORKDIR if you turn the Flask application into a package and install it; a sketch follows.
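For anyone curious, a minimal sketch of that packaging route (an assumption of mine, untested in this project): add a setup.py next to the osr package and install it into the image, so Flask can import it from any directory.
# setup.py (illustrative)
from setuptools import find_packages, setup

setup(
    name='osr',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['flask'],
)
Then replace the WORKDIR dependence with RUN pip install . after the COPY . . step; FLASK_APP=osr still works because the Flask CLI looks for a create_app factory in the installed module.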

Related

Docker container fails on Windows Powershell succeeds on WSL2 with identical Dockerfile and docker-compose

Problem Description
I have a docker image which I build and run using docker-compose. Normally I develop on WSL2, and when running docker-compose up --build the image builds and runs successfully. On another machine, using Windows PowerShell, with an identical clone of the code, executing the same command successfully builds the image, but gives an error when running.
Error
[+] Running 1/1
- Container fastapi-service Created 0.0s
Attaching to fastapi-service
fastapi-service | exec /start_reload.sh: no such file or directory
fastapi-service exited with code 1
I'm fairly experienced using Docker, but am a complete novice with PowerShell and developing on Windows more generally. Is there a difference in Dockerfile construction in this context, or a difference in the execution of COPY and RUN statements?
Code snippets
Included are all parts of the code required to replicate the error.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
COPY ./start_reload.sh /start_reload.sh
RUN chmod +x /start_reload.sh
COPY ./data /data
COPY ./app /app
EXPOSE 8000
CMD ["/start.sh"]
docker-compose.yml
services:
  web:
    build: .
    container_name: "fastapi-service"
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: /start_reload.sh
start_reload.sh
This is a small shell script which runs a prestart.sh if present, and then launches gunicorn/uvicorn in "reload mode":
#!/bin/sh
# If there's a prestart.sh script in the /app directory, run it before starting
PRE_START_PATH=/app/prestart.sh
HOST=${HOST:-0.0.0.0}
PORT=${PORT:-8000}
LOG_LEVEL=${LOG_LEVEL:-info}
echo "Checking for script in $PRE_START_PATH"
if [ -f $PRE_START_PATH ] ; then
    echo "Running script $PRE_START_PATH"
    . "$PRE_START_PATH"
else
    echo "There is no script $PRE_START_PATH"
fi
# Start Uvicorn with live reload
exec uvicorn --host $HOST --port $PORT --log-level $LOG_LEVEL main:app --reload
The solution lies in a difference between UNIX and Windows systems and the way they end lines. A discussion of the topic can be found here: Difference between CR LF, LF and CR line break types?
When the script is saved with Windows (CRLF) line endings, its shebang line effectively ends in a carriage-return character, so the kernel looks for an interpreter named /bin/sh followed by CR. That file does not exist, which is why the container reports exec /start_reload.sh: no such file or directory even though the script itself is present.
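A common preventive fix (my addition, not part of the original answer) is to force LF endings for shell scripts in the repository, so a Windows checkout cannot reintroduce the CR characters:
# .gitattributes
*.sh text eol=lf
After adding it, re-normalize existing files with git add --renormalize . and commit; alternatively, a one-off dos2unix start_reload.sh fixes the file on the affected machine.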

docker-compose debugging: show `pwd` and `ls -l` when the service runs?

I have a docker-compose file with a service called 'app'. When I try to run it I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
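One caveat from me: whether that RUN output is visible depends on the builder. The classic builder prints it inline during docker-compose build; under BuildKit you may need to force plain log output, e.g.:
docker build --progress=plain .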
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named service, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
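For illustration, a hypothetical wrapper following that pattern (the file names here are mine):
entrypoint.sh:
#!/bin/sh
# first-time setup goes here, then hand off to CMD (or to a run/compose override)
exec "$@"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "apps/index.py"]
With this split, docker-compose run app ls -l ./apps still performs the setup in entrypoint.sh before running ls.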

How to debug a failing CMD in docker which runs fine when started manually in the container?

I package most of my code in docker containers to ensure a standalone, repeatable run.
I stumbled upon one case where I do not understand why a specific container would not start.
The Dockerfile for that container is below. The general idea is a two-stage build where the frontend is built first and the compiled folder is reused in the container that actually runs afterwards. Two processes are started via supervisord.
FROM node:10
WORKDIR /build
RUN npm install -g @quasar/cli
COPY front .
RUN npm install
RUN quasar build
FROM alpine:latest
RUN apk -X http://nl.alpinelinux.org/alpine/edge/testing add gcc musl-dev python3-dev libffi-dev openssl-dev make jo py3-pip py3-wheel
RUN pip3 install pyyaml logbook multiping dnspython requests paho-mqtt arrow paramiko click dictdiffer supervisor flask flask_cors
WORKDIR /app
EXPOSE 8000
COPY --from=0 /build/dist dist
COPY ec2-xxcom.key .
COPY xx.openssh .
COPY config.yaml .
COPY homemonitor.py .
RUN ln -fs /app/results.json dist/statics/results.json
COPY supervisord.conf .
CMD supervisord -c /app/supervisord.conf
This builds and runs fine on my Windows 10 Docker Desktop. supervisord starts the processes from its configuration file.
In order to go to production, the repository is pushed to GitLab and built through its CI/CD, ending up with an image pushed to my registry.
The docker-compose file for that service is
version: '3'
services:
  homemonitor:
    container_name: homemonitor
    image: myregistry.example.com/homemonitor:latest
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/docker/container-data/homemonitor:/app
    labels:
      - CADDY_ENABLE
Starting this image is not successful:
~ # docker-compose -f /etc/docker/docker-compose.d/homemonitor.yaml up
Creating homemonitor ... done
Attaching to homemonitor
homemonitor | Error: could not find config file /app/supervisord.conf
homemonitor | For help, use /usr/bin/supervisord -h
When I manually go into that container, I can run the CMD command from the Dockerfile by hand.
First, starting the container and peeking at the contents:
~ # docker run --rm --name=homemonitor -it homemonitor ash
/app # ls
config.yaml homemonitor.py supervisord.log
dist xx.openssh
ec2-xxcom.key supervisord.conf
/app # stat /app/supervisord.conf
File: /app/supervisord.conf
Size: 177 Blocks: 8 IO Block: 4096 regular file
Device: 3ch/60d Inode: 22719869 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2020-06-17 13:51:52.000000000
Modify: 2020-06-17 13:51:52.000000000
Change: 2020-06-17 13:55:23.000000000
Then running supervisord by hand:
/app # supervisord -c /app/supervisord.conf
2020-06-17 14:34:07,250 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2020-06-17 14:34:07,253 INFO supervisord started with pid 9
2020-06-17 14:34:08,258 INFO spawned: 'gui' with pid 11
2020-06-17 14:34:08,261 INFO spawned: 'homemonitor' with pid 12
2020-06-17 14:34:08,845 DEBG 'homemonitor' stdout output:
2020-06-17 14:34:08,844 [homemonitor] INFO starting internet web server for results
(... supervisord continues ...)
So there is some kind of disconnect between the CMD supervisord -c /app/supervisord.conf in the Dockerfile and supervisord -c /app/supervisord.conf started manually from within the container. I do not know what this can be.
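One way to pin down such a disconnect (a debugging sketch of mine, not from the question): run the same inspection command through compose and through plain docker run, then compare. Note that the compose file bind-mounts /etc/docker/container-data/homemonitor over /app, while the plain docker run mounts nothing, so the two containers may see different /app contents:
docker-compose -f /etc/docker/docker-compose.d/homemonitor.yaml run --rm homemonitor ls -l /app
docker run --rm myregistry.example.com/homemonitor:latest ls -l /app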

Cannot access Vue CLI inside docker container

Based on this guide:
https://shekhargulati.com/2019/01/18/dockerizing-a-vue-js-application/
I have created a sample VueJS app and created a docker image:
docker build -t myapp .
based on the below Dockerfile:
# base image
FROM node:10.15.0
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install -g @vue/cli
# start app
CMD ["npm", "run", "serve"]
Next I run a docker container with:
docker run -it -v ${PWD}:/usr/src/app -v /usr/src/app/node_modules -p 5000:5000 myapp
and get this (successful) output:
DONE Compiled successfully in 4644ms 4:05:10 PM
No type errors found
No lint errors found
Version: typescript 3.4.3, tslint 5.15.0
Time: 4235ms
App running at:
- Local: http://localhost:8080/
It seems you are running Vue CLI inside a container.
Access the dev server via http://localhost:<your container's external mapped port>/
Note that the development build is not optimized.
To create a production build, run npm run build.
I then try to access the application from my browser at http://localhost:5000/, but I just get a "The connection was reset" error.
I have also tried to inspect the port information on the running container with:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
755d2745bce2 myapp "npm run serve" 22 seconds ago Up 18 seconds 0.0.0.0:5000->5000/tcp confident_mirzakhani
$ docker port confident_mirzakhani
5000/tcp -> 0.0.0.0:5000
But that basically confirms the port info I passed to the run command.
Any suggestion on how to access the VueJS application in the container from the browser on my host?
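Worth noting from the output above: the dev server reports Local: http://localhost:8080/, i.e. it listens on port 8080 inside the container, while the run command publishes 5000:5000. A mapping that matches the container port would look like this (a sketch, assuming the vue-cli-service default port of 8080):
docker run -it -v ${PWD}:/usr/src/app -v /usr/src/app/node_modules -p 5000:8080 myapp
Alternatively, make the dev server itself listen on 5000 (and on all interfaces), e.g. vue-cli-service serve --host 0.0.0.0 --port 5000.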

How to fix docker container exiting with code 6?

Context
I am running a dockerized Ubuntu for a Meteor (JS) application with a docker-compose file on my personal computer. It is attached to a Mongo DB container, and an nginx proxy container is running to secure my development URL with SSL (https).
Docker Images
ubuntu : ubuntu:18.04
nginx proxy : jwilder/nginx-proxy:alpine
mongo : mongo:latest
Plus my own meteor app locally.
Other
Meteor version : 1.8.1
Docker version : 18.09.3, build 774a1f4
Docker Compose version : 1.23.2, build 1110ad01
Problem
Since I stopped my containers with docker-compose down, my webapp container exits (with exit code 6) every time I restart them. I didn't change the docker-compose files since my previous docker-compose -f docker-compose.dev.yml --verbose up.
Error displayed by docker-compose -f docker-compose.dev.yml --verbose up
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e')
compose.cli.verbose_proxy.proxy_callable: docker wait <- ('26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/26d90365fe9a4b0c0eb24cb2c040aa43cf8ec207764f350b6273ee7362d9fe0e/wait HTTP/1.1" 200 30
compose.cli.verbose_proxy.proxy_callable: docker wait -> {'Error': None, 'StatusCode': 6}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': 'docker-default',
'Args': [],
'Config': {'ArgsEscaped': True,
'AttachStderr': False,
'AttachStdin': False,
'AttachStdout': False,
'Cmd': ['meteor'],
'Domainname': '',
'Entrypoint': None,
'Env': ['ENV_APP_SERVER_USERNAME=app',
...webappContainer exited with code 6
Result of docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26d90365fe9a xxxxxxxxxx_webapp "meteor" 23 minutes ago Exited (6) 13 seconds ago webappContainer
4073bbfe37cf mongo "docker-entrypoint.s…" 39 minutes ago Up 14 seconds 0.0.0.0:27017->27017/tcp mongoDBContainer
201f0a99d1cf jwilder/nginx-proxy:alpine "/app/docker-entrypo…" 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy_nginx-proxy_1
Result of docker logs <container_id>
** none **
How? The container will not even start. When I do docker run, it works.
Source files
docker-compose.dev.yml
version: '3'
services:
  webapp:
    container_name: webappContainer
    env_file: .env
    environment:
      VIRTUAL_HOST: mysite.local
      VIRTUAL_PORT: ${PORT}
    build:
      context: ${CONTEXT}
      dockerfile: ${DOCKERFILE}
    volumes:
      - ${VOLUME}:/usr/src/app
    expose:
      - ${PORT}
    networks:
      - dbAppConnector
      - default
    depends_on:
      - mongodb
  mongodb:
    container_name: ${MONGO_CONTAINER_NAME}
    image: mongo
    restart: always
    env_file: .env
    ports:
      - "${MONGO_PORT}:${MONGO_PORT}"
    networks:
      dbAppConnector:
    volumes:
      - mongo_volume:/data/db
      - ./docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
volumes:
  mongo_volume:
networks:
  default:
    external:
      name: nginx-proxy
  dbAppConnector:
Note that when I do docker-compose config, every .env variable is correctly substituted.
Dockerfile.dev
FROM ubuntu:18.04
# Set environment dir & copy files
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
# Make host.docker.internal accessible from outside the container (dev only)
# RUN ip -4 route list match 0/0 | awk '{print $3 "host.docker.internal"}' >> /etc/hosts
# Add user to run localy (dev only) the app
RUN addgroup meteoruser && useradd -rm -d /home/moff -s /bin/bash -g root -G sudo -u 1000 meteoruser
# Update distribution
RUN apt-get update -q && apt-get clean \
# Install curl
&& apt-get install curl -y \
# Install Meteor
&& (curl https://install.meteor.com | sh) \
# Install node js
&& cd /usr/src/app \
# replace vx.x.x by meteor node -e 'console.log("I am Node.js %s!", process.version);' output from my project folder.
&& bash -c 'curl "https://nodejs.org/dist/v8.15.1/node-v8.15.1-linux-x64.tar.gz" > /usr/src/app/required-node-linux-x64.tar.gz' \
&& cd /usr/local && tar --strip-components 1 -xzf /usr/src/app/required-node-linux-x64.tar.gz \
&& rm /usr/src/app/required-node-linux-x64.tar.gz \
&& cd /usr/src/app \
&& npm install
RUN cd /usr/src/app && chown -Rh meteoruser .meteor/local
EXPOSE 80
ENV PORT 80
WORKDIR /usr/src/app
USER meteoruser
CMD ["meteor"]
What I currently tried to fix the problem
I stopped and removed all containers, removed all images, removed all networks, and rebuilt everything.
I checked my internet access (average ping: ~35 ms).
I tried different versions of Meteor JS (1.8.1, latest, 1.8.0.2) with the flag ?release=1.8.0.2 in the Dockerfile.dev part where I do && (curl https://install.meteor.com | sh) \.
I tried to find some documentation about code 6 when a container is exiting, but found nothing relevant.
I tried to look at logs, but none were available for the exited app container xxxxxx_webapp: result empty.
I tried to start the xxxxxx_webapp container separately and had a weird result with docker run xxxxxx_webapp.
This is your first time using Meteor!
Installing a Meteor distribution in your home directory.
Downloading Meteor distribution
It took 2+ minutes to finally show:
Retrying download in 5 seconds …
Retrying download in 5 seconds …
Retrying download in 5 seconds …
Retrying download in 5 seconds …
[[[[[ /usr/src/app ]]]]]
=> Started proxy.
=> Started MongoDB.
And so the container runs without exiting when started with the separate docker run command.
Reminder: everything was working fine for 4+ weeks, then it suddenly broke. I suspect my new internet connection is too slow (I moved out of my parents' house 1 week ago).
Could you guys please give some advice? Is there any additional information I could provide to help you understand what I am missing? Thanks in advance!
Edit
Other tries to make things work:
I removed and reinstalled everything from Docker.
I tried a previous version of the app, from 3 weeks ago, that was fully working.
I tried everything except using another Wi-Fi network. Could this work? I don't think so, but there is always hope.
Though I never found out what exit code 6 is either, I did find its cause in my setup.
It turns out my environment was invalid because I had accidentally pointed it at a branch that didn't exist.
docker-compose --verbose up set me on the right path
TL;DR: If your use-case involves a deleted directory that's part of a volume, delete the docker volume, restart docker, then run compose up again
I was having the same issue, and it seemed related to one of my volumes.
I had the following definition in my docker-compose file:
volumename:
  driver: local
  driver_opts:
    o: bind
    type: none
    device: "${PWD}/data/some/dir"
I had deleted and re-created the "${PWD}/data/some/dir" directory, and then started getting exit code 6 for the container that depended on the volume.
Things I tried:
Restart docker
Delete the volume (using the Docker Desktop UI) - this seemed to do the trick
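The CLI equivalent (a sketch; docker-compose prefixes volume names with the project name, so the exact name will vary):
docker volume ls | grep volumename        # find the real name, e.g. myproject_volumename
docker volume rm myproject_volumename     # hypothetical name; use the one grep printed
docker-compose up -d                      # compose re-creates the volume against the re-created directory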
Exit code 6 is thrown when you are pointing at the wrong branch. Check whether you are pointing at a valid branch: docker-compose --verbose up should put you on the right path.
Also, I would suggest you check the volumes section of your compose file.
SOLUTION
I uninstalled e-v-e-r-y-t-h-i-n-g from Docker on my computer, like this:
Warning: be aware that with this procedure you will lose your configuration, images, and containers.
First I removed unused data from the system with:
sudo docker system prune
Then removed all unused volumes and networks:
sudo docker volume prune
sudo docker network prune
Then a complete uninstall of Docker was necessary:
apt-get remove --purge docker*
Then I had to remove the Docker containers and images:
rm -rf /var/lib/docker
Then remove the config files:
rm -rf /etc/docker
rm /etc/systemd/system/docker.service
rm /etc/init.d/docker
Then removed obsolete and orphaned packages:
apt-get autoremove && apt-get autoclean
Then updated and upgraded the existing packages:
apt-get update && apt-get upgrade
Finally I reinstalled docker and docker-compose from:
Docker install procedure : https://linuxize.com/post/how-to-install-and-use-docker-on-ubuntu-18-04/
Docker-compose install procedure : https://linuxize.com/post/how-to-install-and-use-docker-compose-on-ubuntu-18-04/
And at the end I re-downloaded my project from git and reinstalled it with docker-compose up -d :)
So, I believe this was a config problem with Docker. Unfortunately I never found a clear answer on that "exit code 6".
Feel free to share your solution if you ever encounter the same "exit code 6" problem.
