I'm facing a problem: I can't access my Docker container from the browser at localhost:8000. There is no error message. Here is what the browser says:
This page isn’t working localhost didn’t send any data. ERR_EMPTY_RESPONSE
This is my docker-compose file:
version: "3.7"
services:
  postgres:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=postgres
  fastapi:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - ./backend/:/usr/src/backend/
    depends_on:
      - postgres
volumes:
  postgres_data:
And this is my Dockerfile:
# pull official base image
FROM python:3.8.3-slim-buster
# set work directory
WORKDIR /usr/src/backend
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# copy project
COPY . .
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
#Installing dependencies, remove those that are not needed after the installation
RUN pip install -r requirements.txt
CMD uvicorn main:app --reload
Here is the CLI output:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [6] using statreload
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
If anyone else has this problem with FastAPI: try adding --host 0.0.0.0 to your startup command. By default Uvicorn binds to 127.0.0.1, which inside a container is the container's own loopback interface; Docker's published port can only reach the server if it listens on 0.0.0.0.
Example: uvicorn main:app --reload --host 0.0.0.0 --port 8000
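Applied to the Dockerfile above, the fix is a one-line change to the CMD (a sketch; the exec form avoids an extra shell process, and --reload is typically dropped in production images):
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]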
I am trying to set up an nginx reverse proxy in front of a gunicorn app server serving my Flask app. The gunicorn container listens on port 5000, and nginx listens on port 80. The problem is that I can still access the app in the browser by visiting localhost:5000, even though I have set gunicorn to listen only on the docker container's localhost, and all requests should pass through the nginx container to the gunicorn container via port 80. This is my setup.
docker-compose.yml
version: "3.3"
services:
  web_app:
    build:
      context: .
      dockerfile: Dockerfile.web
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - data:/home/microblog
    networks:
      - web
  web_proxy:
    container_name: web_proxy
    image: nginx:alpine
    restart: always
    ports:
      - "80:80"
    volumes:
      - data:/flask:ro
      - ./nginx/config/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - web
networks:
  web:
volumes:
  data:
Dockerfile.web
FROM python:3.6-alpine
# Environment Variables
ENV FLASK_APP=microblog.py
ENV FLASK_ENVIRONMENT=production
ENV FLASK_RUN_PORT=5000
# Don't copy .pyc files to container
ENV PYTHONDONTWRITEBYTECODE=1
# Security / Permissions (1/2)
RUN adduser -D microblog
WORKDIR /home/microblog
# Virtual Environment
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -U pip
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
# Install App
COPY app app
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh
# Security / Permissions (2/2)
RUN chown -R microblog:microblog ./
USER microblog
# Start Application
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh
#!/bin/sh
source venv/bin/activate
flask db upgrade
exec gunicorn --bind 127.0.0.1:5000 --access-logfile - --error-logfile - microblog:app
Even though I have set gunicorn --bind 127.0.0.1:5000, in the stdout of docker-compose I see
web_app_1 | [2021-03-02 22:54:14 +0000] [1] [INFO] Starting gunicorn 20.0.4
web_app_1 | [2021-03-02 22:54:14 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
And I am still able to see the website from port 5000 in my browser. I'm not sure why it is listening on 0.0.0.0 when I have explicitly set it to 127.0.0.1.
Your docker-compose has
ports:
  - "5000:5000"
which tells the docker-proxy to listen on port 5000 on the host machine and forward requests to the container. If you don't want port 5000 to be externally reachable, remove this mapping.
Also, it's good that you didn't succeed in making gunicorn listen only on 127.0.0.1; if you had, the web_proxy container wouldn't be able to connect to it, since 127.0.0.1 inside the web_app container is unreachable from other containers. So you may as well undo that attempt.
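A sketch of the web_app service with the host mapping removed; nginx can still reach it over the shared web network by service name (so nginx.conf would proxy to something like http://web_app:5000, an upstream name assumed from the service name above):
web_app:
  build:
    context: .
    dockerfile: Dockerfile.web
  restart: always
  volumes:
    - data:/home/microblog
  networks:
    - web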
I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up, I can connect to it via localhost:9999/graphql. In the docker-compose file the port mapping is 9999:80. When I run docker container ls I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting "Open folder in remote container" gives me the option of selecting either the Dockerfile or the docker-compose file to build the container. I've tried both options, and neither lets me access the GraphQL playground from localhost. Building from docker-compose, I can see that the ports appear to be forwarded in the same manner as when I run docker compose up, but I can't access the site.
Where am I going wrong?
Update: if I run docker compose up on the container that was built by VS Code, I can connect to localhost and the GraphQL playground.
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
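One thing worth checking (a hedged suggestion, since the question above has no accepted answer in this thread): when VS Code builds the dev container itself, compose port mappings are not always republished on the host, so forwarding the port explicitly in .devcontainer/devcontainer.json can help. A minimal sketch, where the file location and the dockerComposeFile path are assumptions:
{
  "name": "removed-data-graph",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  "forwardPorts": [9999]
}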
I'm having problems building the container with MongoDB. When I run docker-compose up I get the following error:
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder367230859/entrypoint.sh: no such file or directory
I tried changing Mongo to PostgreSQL, but the error continues.
My files are below. Thanks in advance.
This is the docker-compose file:
version: '3'
services:
  web:
    image: nginx
    restart: always
    # volumes:
    #   - ${APPLICATION}:/var/www/html
    #   - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
    #   - ${NGINX_SITES_PATH}:/etc/nginx/conf.d
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
    ports:
      - "27017:27017"
    # volumes:
    #   - data:/data/db
    networks:
      - mongo
  app:
    build: .
    volumes:
      - .:/mm_api
    ports:
      - 3000:3000
    depends_on:
      - mongo
networks:
  web:
    driver: bridge
  mongo:
    driver: bridge
And this is the Dockerfile:
FROM ruby:2.7.0
RUN apt-get update -qq && apt-get install -y nodejs
RUN mkdir /mm_api
WORKDIR /mm_api
COPY Gemfile /mm_api/Gemfile
COPY Gemfile.lock /mm_api/Gemfile.lock
RUN bundle install
COPY . /mm_api
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma,rb"]
#CMD ["rails", "server", "-b", "0.0.0.0"]
And this is the entrypoint script:
#!/bin/bash
set -e
rm -f /mm_api/tmp/pids/server.pid
exec "$#"
I had a similar issue when working on a Rails 6 application using Docker.
When I ran docker-compose build, I got the error:
Step 10/16 : COPY Gemfile Gemfile.lock ./
ERROR: Service 'app' failed to build : COPY failed: stat /var/lib/docker/tmp/docker-builder408411426/Gemfile.lock: no such file or directory
Here's how I fixed it:
The issue was that the Gemfile.lock was missing in my project directory. I had deleted it when I was having some issues with my gem dependencies.
All I had to do was to run the command below to install the necessary gems and then re-create the Gemfile.lock:
bundle install
And then this time when I ran the command docker-compose build everything worked fine again.
So whenever you encounter this issue, check that the file is actually present in your directory and, most importantly, that the path you specified to the file is correct.
That's all. I hope this helps.
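For the entrypoint.sh error in the original question, the same check applies. A quick sanity check from the directory you build from (a sketch; the .dockerignore line only matters if that file exists):
ls -la entrypoint.sh    # confirm the file exists in the build context
cat .dockerignore       # confirm entrypoint.sh is not excluded from the context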
I need to install additional Node.js modules on the Bitnami Docker container.
I would like to install the body-parser module in the container. I've started the container with sudo docker-compose up and it runs fine. I tried to modify the Dockerfile and docker-compose.yml to install body-parser, but I get an EACCES: permission denied, access '/app/node_modules' error. Can you help?
TIA,
Thomas
*** UPDATE 4/23/2019 ***
This is the Dockerfile.
I added the body-parser line.
## Dockerfile for building production image
FROM bitnami/express:4.16.4-debian-9-r166
LABEL maintainer "John Smith <john.smith@acme.com>"
ENV DISABLE_WELCOME_MESSAGE=1
ENV NODE_ENV=production \
PORT=3000
# Skip fetching dependencies and database migrations for production image
ENV SKIP_DB_WAIT=0 \
SKIP_DB_MIGRATION=1 \
SKIP_NPM_INSTALL=1 \
SKIP_BOWER_INSTALL=1
COPY . /app
RUN sudo chown -R bitnami: /app
RUN npm install
RUN npm install --save body-parser
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '2'
services:
  mongodb:
    image: 'bitnami/mongodb:latest'
  express:
    tty: true # Enables debugging capabilities when attached to this container.
    image: 'bitnami/express:4'
    command: npm start
    environment:
      - PORT=3000
      - NODE_ENV=development
      - DATABASE_URL=mongodb://mongodb:27017/myapp
      - SKIP_DB_WAIT=0
      - SKIP_DB_MIGRATION=0
      - SKIP_NPM_INSTALL=0
      - SKIP_BOWER_INSTALL=0
    depends_on:
      - mongodb
    ports:
      - 3000:3000
    volumes:
      - .:/app
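A common cause of this particular EACCES error (a hedged suggestion, since there is no accepted answer in this thread): the bind mount .:/app hides the node_modules that npm install created in the image, so npm ends up writing into a directory owned by the host user. One workaround, the same pattern used in the GraphQL compose file earlier, is an anonymous volume that keeps node_modules inside the container:
express:
  image: 'bitnami/express:4'
  volumes:
    - .:/app
    - /app/node_modules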
I am currently working with Docker and a simple Flask website to which I want to send images. For this I'm working on port 8080, but the mapping from Docker to the host is not working properly, as I am unable to connect. Could someone explain what I am doing wrong?
docker-compose.yml
version: "2.3"
services:
  dev:
    container_name: xvision-dev
    build:
      context: ./
      dockerfile: docker/dev.dockerfile
    working_dir: /app
    volumes:
      - .:/app
      - /path/to/images:/app/images
    ports:
      - "127.0.0.1:8080:8080"
      - "8080"
      - "8080:8080"
dev.dockerfile
FROM tensorflow/tensorflow:latest
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN apt update && apt install -y python-tk
EXPOSE 8080
CMD ["python", "-u", "app.py"]
app.py
@APP.route('/test', methods=['GET'])
def hello():
return "Hello world!"
def main():
"""Start the script"""
APP.json_encoder = Float32Encoder
APP.run(host="127.0.0.1", port=os.getenv('PORT', 8080))
I start my container with docker-compose up, which gives the output: Running on http://127.0.0.1:8080/ (Press CTRL+C to quit).
But when I send a GET request to 127.0.0.1:8080/test I get no response.
I have also tried docker-compose run --service-ports dev, as some people have suggested online, but this says that there is no service dev.
Can someone tell me what I am doing wrong?
Use:
APP.run(host="0.0.0.0", port=os.getenv('PORT', 8080))
Flask binds to 127.0.0.1 by default, which inside the container is its own loopback interface and unreachable through the published port. Using only:
ports:
  - "8080:8080"
is enough.
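After rebuilding and restarting, a request from the host should get through (a quick check):
curl http://127.0.0.1:8080/test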