Flask app running in docker container cannot be accessed: ERR_CONNECTION_RESET - docker

I've found a lot of questions on this topic, and the answer was always to use the '0.0.0.0' IP address. But I'm already doing this, and I still get the error.
I'm running a Docker Compose file that starts a database and a Flask front end. It runs fine on my own computer, but on the server in the cloud I get this error.
This code launches the application:
app.run(host="0.0.0.0", port=5000, ssl_context="adhoc", debug=cfg.app_debug_mode)
This is my docker compose file:
version: "3.8"
services:
app:
image: app_image #the beginning is the unique uri of my amazon qccount. Then follows the repository name (ratio) and then the tag of the image (app) https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html
build: ./app
links:
- database
ports:
- "5000:5000"
environment:
AM_I_IN_A_DOCKER_CONTAINER: 'Yes'
CONFIG_NAME: 'config' #Name of the config file to use
database:
image: database_image
container_name: database
build: ./sql
restart: always
ports:
- "32000:32000"
environment:
MYSQL_ROOT_HOST: '%' #This allows the root user to access the database from any ip. For some reason amazon requires it.
MYSQL_ROOT_PASSWORD: 'zv2yRCt79AsGvz'
MYSQL_DATABASE: 'education'
volumes:
- ratio_volume:/var/lib/mysql #use a named volume for the database.
networks:
default:
name: my-network
volumes:
ratio_volume:
This is the Dockerfile:
FROM python:3
RUN pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["/app/run.py"]
What am I doing wrong? The error is not very helpful either: I'm not getting any errors in the console; only the browser is complaining.
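For context when reading the code above: ssl_context="adhoc" makes the Flask development server speak HTTPS with a self-signed certificate, so the app has to be reached via https://. A quick way to tell a TLS mismatch from a genuine networking problem (a suggested check, not part of the original question; <server-ip> is a placeholder):
# plain HTTP against an HTTPS listener typically fails or resets the connection
curl -v http://<server-ip>:5000/
# -k skips verification of the self-signed "adhoc" certificate
curl -vk https://<server-ip>:5000/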

Related

limit_req in NGINX in Docker using webdevops/php-nginx

I don't suppose anyone would be prepared to outline for me how I might go about setting values for limit_req in NGINX in a Docker container using webdevops/php-nginx. I'm getting very occasional rate limit issues with a React/Laravel application and would like to try adjusting some rate limit settings.
The customization section of the webdevops/php-nginx docs suggests that to set "global configuration options the directory /opt/docker/etc/nginx/conf.d can be used. For vhost configuration options the directory /opt/docker/etc/nginx/vhost.common.conf can be used". However, I'm not sure how I bring that into my Docker process:
docker build --file Dockerfile.prod -t ghcr.io/theotherdy/laravel-xmap-php-nginx:latest .
using Dockerfile.prod:
FROM webdevops/php-nginx:8.1-alpine
# These ENV variables refer to options in webdevops/php-nginx - https://dockerfile.readthedocs.io/en/latest/content/DockerImages/dockerfiles/php-nginx.html
ENV WEB_DOCUMENT_ROOT=/app/public
ENV PHP_DISMOD=bz2,calendar,exiif,ffi,intl,gettext,ldap,imap,pdo_pgsql,pgsql,soap,sockets,sysvmsg,sysvsm,sysvshm,shmop,xsl,zip,gd,apcu,vips,yaml,imagick,mongodb,amqp
# sets working directory for any future actions
WORKDIR /app
# ie copying from wherever docker is run to WORKDIR
COPY . .
COPY composer.lock composer.lock
COPY .env.prod .env
# recommended optimization from: https://laravel.com/docs/9.x/deployment
RUN composer install --no-interaction --optimize-autoloader --no-dev
RUN php artisan key:generate
RUN php artisan config:cache
RUN php artisan route:cache
RUN php artisan view:cache
# Ensure all of our files are owned by the same user and group.
RUN chown -R application:application .
and docker-compose.yml:
version: "3" services:
app:
image: ghcr.io/theotherdy/laravel-xmap-php-nginx:latest
ports:
- '9000:80'
volumes:
- ./storage:/app/storage
#env_file: '.env'
depends_on:
- db
restart: always
db:
image: 'mysql/mysql-server:8.0'
container_name: xmap-db
environment:
MYSQL_ROOT_PASSWORD: 'xx'
MYSQL_ROOT_HOST: "%"
MYSQL_DATABASE: 'xx'
MYSQL_USER: 'xx'
MYSQL_PASSWORD: 'xx'
MYSQL_ALLOW_EMPTY_PASSWORD: 0
volumes:
- db-data:/var/lib/mysql
restart: always
volumes:
db-data:
Any advice/pointers very gratefully received!
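For what it's worth, the customization mechanism quoted from the docs amounts to copying extra nginx config files into the image at build time. A minimal sketch (file names and rate-limit values here are hypothetical; this assumes conf.d files are included at the http level and vhost files at the server level, which is worth verifying against the image version):
# rate-limit.conf (hypothetical name) - an http-level option for conf.d
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
# rate-limit-vhost.conf (hypothetical name) - a server-level option for the vhost configuration
limit_req zone=api burst=20 nodelay;
with corresponding lines added to Dockerfile.prod:
COPY rate-limit.conf /opt/docker/etc/nginx/conf.d/rate-limit.conf
COPY rate-limit-vhost.conf /opt/docker/etc/nginx/vhost.common.d/rate-limit-vhost.conf
(The docs quoted above name the vhost directory /opt/docker/etc/nginx/vhost.common.conf; some image versions use vhost.common.d, so check which exists in the container.)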

docker-compose: how to automatically propagate changes (both frontend and backend)?

In Docker Compose, we have two services (a backend in Flask and a frontend in React) running at the same time in different directories. What are best practices for automatically updating the frontend service or backend service when a change to the respective code is made?
In our case, we have:
frontend/
  index.html
  docker-compose.yml
  Dockerfile
  src/
    App.js
    index.js
    ...
And our backend is:
backend/
  app.py
  Dockerfile
  docker-compose.yml
This is our docker-compose.yml file:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ../frontend
      dockerfile: ../frontend/Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    volumes:
      - .:/frontend
  app:
    image: python:3.9
    build:
      context: .
      dockerfile: ./Dockerfile
    command: app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if the container shuts down
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed more easily using docker-compose
Typically, we reload the app (on change) almost instantly via a bind mount in the volumes section. This approach correctly updates the backend service when the backend code changes, but not the frontend service. Also, we have two docker-compose files, one in frontend and one in backend, which we hope to somehow learn to consolidate.
Edit: These are the logs that work for the backend (app_1 is the backend) but do not work for the frontend:
app_1 | * Detected change in '/app/app.py', reloading
app_1 | environ({'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'HOSTNAME': '***', 'PGPASSWORD': '***', 'POSTGRESQL_PASSWORD': 'magical_password', 'POSTGRESQL_HOST': 'backend-database-1', 'POSTGRESQL_USER_NAME': '***', 'LOCAL_ENVIRONMENT': 'True', 'FLASK_ENV': 'development', 'LANG': 'C.UTF-8', 'GPG_KEY': '***', 'PYTHON_VERSION': '3.9.13', 'PYTHON_PIP_VERSION': '22.0.4', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/6ce3639da143c5d79b44f94b04080abf2531fd6e/public/get-pip.py', 'PYTHON_GET_PIP_SHA256': '***', 'HOST': '0.0.0.0', 'PORT': '8080', 'HOME': '/root', 'KMP_INIT_AT_FORK': 'FALSE', 'KMP_DUPLICATE_LIB_OK': 'True', 'WERKZEUG_SERVER_FD': '3', 'WERKZEUG_RUN_MAIN': 'true'})
app_1 | * Restarting with stat
app_1 | * Tip: There are .env or .flaskenv files present. Do "pip install python-dotenv" to use them.
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 203-417-897
Edit 2: We followed the link suggested in the comments. We attempted setting both WATCHPACK_POLLING and CHOKIDAR_USEPOLLING to "true", but no luck. We also refactored our docker-compose file to live outside the directories, like so:
docker-compose.yml
frontend/
  index.html
  Dockerfile
  src/
    App.js
    index.js
    ...
backend/
  app.py
  Dockerfile
Here is the new docker-compose.yml:
version: '3.8'
services:
  frontend:
    image: node:alpine
    build:
      context: ./frontend
      cache_from:
        - node:alpine
      dockerfile: ./Dockerfile
    command: npm start
    depends_on:
      - database # don't start until the database is up
      - app
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING="true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
  app:
    image: python:3.9
    build:
      context: ./backend
      cache_from:
        - python:3.9
      dockerfile: ./Dockerfile
    command: backend/app.py
    depends_on:
      - database # don't start until the database is up
    ports:
      - 8080:8080
    environment:
      - PGPASSWORD=magical_password
      - POSTGRESQL_PASSWORD=magical_password
      - POSTGRESQL_HOST=backend-database-1
      - POSTGRESQL_USER_NAME=unicorn_user
      - LOCAL_ENVIRONMENT=True
      - FLASK_ENV=development
      - REPLICATE_API_TOKEN
    volumes:
      - .:/app
  database:
    image: "postgres" # use latest official postgres version
    env_file:
      - backend/database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if the container shuts down
      - ./schema.sql:/backend/docker-entrypoint-initdb.d/schema.sql
    ports:
      - "5432:5432"
volumes:
  database-data: # named volumes can be managed more easily using docker-compose
  app:
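One thing worth double-checking in the compose file above (an observation, not from the original post): with the list syntax for environment, everything after = is taken literally, so CHOKIDAR_USEPOLLING="true" sets the variable to the string "true" including the quote characters. The mapping syntax avoids that ambiguity, and WATCHPACK_POLLING (mentioned in Edit 2) can be set the same way:
environment:
  CHOKIDAR_USEPOLLING: "true" # quotes here are YAML syntax, not part of the value
  WATCHPACK_POLLING: "true"   # used by webpack 5 / newer react-scripts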
And here is our Dockerfile for the frontend:
FROM node:alpine
RUN mkdir -p /frontend
WORKDIR /frontend
# We copy just the package.json first to leverage Docker cache
COPY package.json /frontend
RUN npm install --legacy-peer-deps
COPY . /frontend
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE ${PORT}
CMD ["npm", "start"]
and for the backend:
FROM python:3.9
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=8080
EXPOSE ${PORT}
# This runs the app in the container
CMD [ "app.py" ]
Still, the backend hot-reloads: every time we make a change, it is detected and reflected in docker-compose immediately. But the frontend requires a restart with docker-compose down --volumes && docker-compose build --no-cache && docker-compose up, and we get no logs from docker-compose. It's like docker-compose can't see the changes.
Edit 3: Any help would be much appreciated!

Dockerized flask not working in conjunction with dockerized neo4j

I'm trying to run neo4j in one container, and a flask app in another. I have a docker-compose.yml like so:
version: '3'
services:
  app1:
    container_name: app1
    image: python:3.7.3-slim
    build: ./APP1/
    volumes:
      - ./APP1/:/usr/src/app/
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 5000:5000
    tty: true
  neo4:
    container_name: neo4j
    image: neo4j:3.5
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=user/pwd
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
My app.py:
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    app.run(host="0.0.0.0",
            port=5000,
            debug=True)
And the Dockerfile for APP1:
FROM python:3.7.3-slim
RUN mkdir /usr/src/app/
COPY . /usr/src/app/
WORKDIR /usr/src/app/
EXPOSE 5000
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENTRYPOINT ["python3", "app.py"]
I then use docker-compose up to execute. When I access neo4j through my browser, it works as normal (http://localhost:7474/), but I cannot access the flask app (http://0.0.0.0:5000/). Where in my configuration am I going wrong?
http://0.0.0.0:5000 doesn't mean localhost; 0.0.0.0 means "all network interfaces" and is only meaningful as a listening address, not one you browse to. Try http://localhost:5000/ instead.
I solved it and it was actually a dumb mistake, but one that could happen to others I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'

docker-compose running 2 services with dockerfiles, The task "phx.server" could not be found, main.go: no such file or directory

I have an issue running my docker-compose.yml file with 4 services: my Go microservice, a Phoenix web server, and MongoDB and Redis images.
I specified in both my Phoenix and Go Dockerfiles to change the working directory before running each service. I currently get the following errors when I do docker-compose up:
The task "phx.server" could not be found
main.go: no such file or directory
Here is my Dockerfile.go.development:
# base golang image to start with
FROM golang:latest
# create app folder
RUN mkdir /goApp
COPY ./genesys-api /goApp
WORKDIR /goApp/cmd/genesys-server
# install dependencies
RUN go get gopkg.in/redis.v2
RUN go get github.com/gorilla/handlers
RUN go get github.com/dgrijalva/jwt-go
RUN go get github.com/gorilla/context
RUN go get github.com/gorilla/mux
RUN go get gopkg.in/mgo.v2/bson
RUN go get github.com/graphql-go/graphql
# run the go server on port 8080
CMD go run main.go
Here is my Dockerfile.phoenix.development:
# base elixir image to start with
FROM elixir:1.6
# install hex package manager
RUN mix local.hex --force
RUN mix local.rebar --force
# install the latest phoenix
RUN mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez --force
# create app folder
RUN mkdir /app
COPY ./my_app /app
WORKDIR /app
# install dependencies
RUN mix deps.get
# run phoenix in *dev* mode on port 4000
CMD mix phx.server
Here is my docker-compose.yml file:
version: '3.6'
services:
go:
build:
context: .
dockerfile: Dockerfile.go.development
ports:
- 8080:8080
volumes:
- .:/goApp
depends_on:
- db
- redis
phoenix:
# tell docker-compose which Dockerfile it needs to build
build:
context: .
dockerfile: Dockerfile.phoenix.development
# map the port of phoenix to the local dev port
ports:
- 4000:4000
# mount the code folder inside the running container for easy development
volumes:
- .:/app
# make sure we start mongodb when we start this service
# links:
# - db
depends_on:
- db
- redis
environment:
GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
db:
container_name: db
image: mongo:latest
volumes:
- ./data/db:/data/db
ports:
- 27017:27017
redis:
container_name: redis
image: redis:latest
ports:
- "6379:6379"
volumes:
- ./data/redis:/data/redis
entrypoint: redis-server
restart: always
For the error related to the Go microservice: since the go binary is not found in PATH, you may need to set the GOPATH environment variable via your Dockerfile for Go:
export GOPATH=
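For example (a sketch only; /goApp is an assumption based on the volume mount in the compose file above, so adjust it to wherever the sources actually live in the container):
# in Dockerfile.go.development, before the CMD
ENV GOPATH=/goApp
ENV PATH=$PATH:$GOPATH/bin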

Exposing localhost ports in several local services

I'm currently attempting to use Docker to make our local dev experience involving two services easier, but I'm struggling to use host and container ports in the right way. Here's the situation:
One repo containing a Rails API, running on 127.0.0.1:3000 (let's call this backend)
One repo containing an isomorphic React/Redux frontend app, running on 127.0.0.1:8080 (let's call this frontend)
Both have their own Dockerfile and docker-compose.yml files as they are in separate repos, and both start with docker-compose up fine.
Currently not using Docker at all for CI or deployment, planning to in the future.
The issue I'm having is that in local development the frontend app is looking for the API backend on 127.0.0.1:3000 from within the frontend container, which isn't there - it's only available to the host and the backend container actually running the Rails app.
Is it possible to forward the backend container's 3000 port to the frontend container? Or at the very least the host's 3000 port as I can see the Rails app on localhost on my computer. I've tried 127.0.0.1:3000:3000 within the frontend docker-compose but I can't do that while running the Rails app as the port is in use and fails to connect. I'm thinking maybe I've misunderstood the point or am missing something obvious?
Files:
frontend Dockerfile
FROM node:8.7.0
RUN npm install --global --silent webpack yarn
RUN mkdir /app
WORKDIR /app
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
RUN yarn install
COPY . /app
frontend docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000' # rails backend exposed to localhost within container
backend Dockerfile
FROM ruby:2.4.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
RUN bundle install
COPY . /app
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
You have to unite the containers in one network. Do it in your docker-compose.yml files. Check these docs to learn about networks in Docker.
frontend docker-compose.yml
version: '3'
services:
  gui:
    build: .
    command: yarn start:dev
    volumes:
      - .:/app
    ports:
      - '8080:8080'
      - '127.0.0.1:3000:3000'
    networks:
      - webnet
networks:
  webnet:
backend docker-compose.yml
version: '3'
volumes:
  postgres-data:
    driver: local
services:
  postgres:
    image: postgres:9.6
    volumes:
      - postgres-data:/var/lib/postgresql/data
  back:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    networks:
      - webnet
networks:
  webnet:
Docker has its own DNS resolution, so after you do this you will be able to connect to your backend by setting the address to: http://back:3000
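One caveat with this approach (an observation about Compose behaviour, not part of the original answer): each compose project prefixes its resources with the project name, so two separate files that each declare webnet create frontend_webnet and backend_webnet rather than one shared network. To genuinely share a network between the two projects, create it once on the host:
docker network create webnet
and mark it external in each docker-compose.yml:
networks:
  webnet:
    external: true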
Managed to solve this using external links in the frontend app to link to the default network of the backend app like so:
version: '3'
services:
  web:
    build: .
    command: yarn start:dev
    environment:
      - API_HOST=http://backend_web_1:3000
    external_links:
      - backend_default
    networks:
      - default
      - backend_default
    ports:
      - '8080:8080'
    volumes:
      - .:/app
networks:
  backend_default: # share with backend app
    external: true
