Flask and Frontend with Docker Compose - docker

I'm trying to get a basic Flask backend and a frontend framework in separate containers communicating with each other via docker-compose.
The caveat here is that I'm on Windows 10 Home, so I have to use Docker Toolbox, which meant adding a few networking rules for port forwarding. However, I can't seem to access http://localhost:5000 for my backend; I get ECONNREFUSED. I'm just trying to get basic communication between the frontend and backend to simulate frontend/API communication.
Given my port forwarding rules, I can access http://localhost:8080 and view the static portions of the app. However, I can't access the backend, nor can I tell if the two are communicating. I'm new to both Flask and Docker, so please forgive my ignorance. Coming from a .NET background, Windows seems to really make this a pain. Thank you for your help.
Here is my application.py:
# Start with a basic flask app webpage.
from flask_socketio import SocketIO, emit
from flask import Flask, render_template, url_for, copy_current_request_context

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
app.config['DEBUG'] = True

# turn the flask app into a socketio app
socketio = SocketIO(app)

@app.route('/')
def index():
    # only by sending this page first will the client be connected to the socketio instance
    return render_template('index.html')

if __name__ == '__main__':
    socketio.run(app)
Dockerfile for the backend:
FROM python:2.7
ADD ./requirements.txt /backend/requirements.txt
WORKDIR /backend
RUN pip install -r requirements.txt
ADD . /backend
ENTRYPOINT ["python"]
CMD ["/backend/application.py"]
EXPOSE 5000
Dockerfile for frontend:
FROM node:latest
COPY . /src
WORKDIR /src
RUN npm install --loglevel warn
RUN npm run production
EXPOSE 8080
CMD [ "node", "server.js" ]
And my docker-compose.yml:
version: '2'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    ports:
      - "5000:5000"
    env_file:
      - .env
  frontend:
    build: ./frontend
    ports:
      - "8080:8080"

Your issue is with the Flask configuration: as long as you get the error ECONNREFUSED, it means no service is listening on port 5000 at the IP you are trying to use. That's because socketio.run(app) defaults to 127.0.0.1, which is the localhost inside the container itself. To make your application accessible from outside the container (or through the container IP in general), you have to pass an additional host parameter with the value 0.0.0.0 so that it listens on every interface inside the container:
socketio.run(app, host='0.0.0.0')
Quoted from the documentation:
run(app, host=None, port=None, **kwargs)
Run the SocketIO web server.
Parameters:
app – The Flask application instance.
host – The hostname or IP address for the server to listen on. Defaults to 127.0.0.1.
port – The port number for the server to listen on. Defaults to 5000.
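Putting it together, the end of application.py would look like this (a minimal sketch; port 5000 is stated explicitly here but is already the default):

if __name__ == '__main__':
    # Listen on all interfaces inside the container so that the
    # published port mapping "5000:5000" is reachable from the host.
    socketio.run(app, host='0.0.0.0', port=5000)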

Related

running sbt project from inside docker container

I have a project that, when run locally, outputs server started at /127.0.0.1:5000, and I can access it locally on said port just fine.
I am trying to run it through Docker. I have the following:
Dockerfile:
FROM mozilla/sbt
ADD build.sbt /root/build/
RUN cd /root/build && sbt compile
EXPOSE 5000
WORKDIR /root/build
CMD sbt run
and the following docker-compose.yml:
version: '3.1'
services:
  sbt:
    build:
      context: ./
      dockerfile: ./Dockerfile
    image: sbt
    ports:
      - "8080:5000"
    volumes:
      - "./:/root/build"
I try running it through docker-compose up and I can see the logs about the server starting, but can't access the service through the specified port, namely 8080. Am I missing something?
FYI, the above setup is inspired by this post, where I changed the base image and also got rid of the external-network bit that I did not understand.
If the app starts on port 5000 by default but you need it on another port with Docker, you should use:
ports:
  - "8080:5000"
Internally your app continues using port 5000, but Docker binds that port to another one on the host; in the example, 8080. Note that this only helps if the server inside the container listens on 0.0.0.0: a server bound to 127.0.0.1 inside the container (as your startup log suggests) is unreachable through the published port.
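As a quick sanity check (hypothetical commands; the service name sbt comes from the compose file above, and ss/netstat may not be installed in the image):

# From the host: container port 5000 is published on host port 8080.
curl http://localhost:8080/

# From inside the container: show which address the server binds to.
# "127.0.0.1:5000" means it must be changed to listen on 0.0.0.0.
docker-compose exec sbt sh -c "ss -tln || netstat -tln"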

How does one container communicate with another via Python when there is no port exposed?

In the example here - https://docs.docker.com/compose/gettingstarted/
Flask:
import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
Compose:
version: "3.9"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- .:/code
environment:
FLASK_ENV: development
redis:
image: "redis:alpine"
The web app runs in a container on 0.0.0.0, port 5000.
Redis runs in a separate container on its default port, 6379.
The web app container is run with port 5000 published (5000:5000).
I am trying to understand how the web app container communicates with the redis container when there is no network specified, and when port 6379 of the other container is not exposed.
If there is no network information provided in the docker-compose.yml file, Docker Compose creates a default network for the project (unless explicitly named, it is called ${current_working_dir}_default). Since the two containers are on the same network, they can communicate with each other without publishing any ports. Publishing ports (the ports: mapping, together with EXPOSE) is generally for letting the host machine reach a container (i.e., opening the Flask app with a browser on your laptop).
For the Flask app to reach Redis, it simply uses the service name as the hostname - host='redis' in the code above. Compose registers each service name as a DNS alias on the default network, so no extra hostname entry or hard-coded IP address is needed.
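For context, the Flask side of that tutorial looks roughly like this (reconstructed from the linked getting-started guide; the retry loop is how the guide copes with Redis not being ready yet). The only thing connecting the two services is the hostname 'redis', resolved by Compose's DNS:

import time

import redis
from flask import Flask

app = Flask(__name__)
# 'redis' is the compose service name, resolved by Docker's internal DNS.
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)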

Nuxt Js SSR requests fail (only server side) + Docker + axios + Linux

I'm developing a Nuxt.js universal SSR app. The deployment config is as follows:
Nuxt.js universal app running in a Docker container (on port 3010)
Linux Caddy webserver
Laravel backend running in a separate Docker container
The server uses a reverse proxy to put the backend on a domain, and the /etc/hosts file maps the backend domain to 127.0.0.1.
The Problem:
Server-side requests change the axios base URL to http://localhost:3010 (instead of https://api.domain.com/) and result in 404 not found errors, timeouts, or even a 127.0.0.1 ECONNREFUSED Nuxt error. The same requests work fine on the client side, and this only happens during the initial (server-side rendered) load of the page.
Why is the URL changing? How should I prevent it from changing the base URL?
The Dockerfile includes:
RUN apt-get update
ENV APP_ROOT /src
ENV DOCKER_API https://api.domain.com
WORKDIR ${APP_ROOT}
COPY . ${APP_ROOT}
RUN npm ci
RUN npm run build
docker-compose file:
version: "3"
services:
nuxt:
build: ./app/
ports:
- "3010:3010"
environment:
ALI_URL: https://api.domain.com/
restart: always
command:
"npm run start"
nuxt server config:
server: {
  port: 3010
}
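One thing worth checking (a sketch, assuming the app uses the @nuxtjs/axios module, which falls back to the server's own host/port during SSR when no base URL is configured): pin the base URL explicitly in nuxt.config.js, for example from the ALI_URL variable the compose file above already passes in.

// nuxt.config.js (sketch; module and option names assume @nuxtjs/axios)
export default {
  modules: ['@nuxtjs/axios'],
  axios: {
    // Prevent the SSR fallback to http://localhost:3010 by always
    // providing an explicit base URL from the environment.
    baseURL: process.env.ALI_URL || 'https://api.domain.com/',
  },
  server: {
    port: 3010,
  },
}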

How to combine nginx socket with gunicorn app using that socket inside docker?

I have developed a small project using Flask/TensorFlow. It runs under the gunicorn app server.
I also have to include nginx in the project for serving static files. Without Docker the app runs fine; all parts (gunicorn, nginx, Flask) cooperate as intended. It's now time to move this project to an online server, and I need to do it via Docker.
nginx and the gunicorn->Flask app communicate via a unix socket. In my localhost environment I used a socket inside the app root folder, myapp/app.sock, and it all worked great.
The problem now is that I can't quite understand how to tell nginx inside Docker to use the same socket file and how to tell gunicorn to listen on it. I get the following error:
upstream: http://unix:/var/run/app.sock failed (No such file or directory) while connecting to upstream
I tried using different paths to the socket file, but no luck - same error.
docker-compose file:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 8000:8000
    build: .
    command: gunicorn --bind unix:/var/run/app.sock wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
Main Dockerfile (for the main app; it builds the app fine, all is working):
FROM python:3.8-slim
RUN pip install flask gunicorn flask_wtf boto3 tqdm
RUN pip install numpy==1.18.5
RUN pip install tensorflow==2.2.0 onnxruntime==1.4.0
COPY *.ipynb /temp/
COPY *.hdf5 /temp/
COPY *.onnx /temp/
COPY *.json /temp/
COPY *.py /temp/
WORKDIR /temp
nginx.conf is 99% the same as the default, with only the maximum upload file size increased to 8M.
proxy_params is just a preset of configuration params for making nginx proxy requests,
and remote-app is the config for my app (a simple one):
server {
    listen 8000;
    server_name localhost;
    location / {
        include proxy_params;
        proxy_pass http://unix:/var/run/app.sock;  # tried /temp/app.sock here, same issue
    }
}
So if I open localhost (even without port 8000) I get an answer from nginx. If I try to open localhost:8000 I get the socket error quoted above.
I would avoid using sockets for this, since containers/services can communicate with each other over IP, and you really should have a separate service for the app server.
Something like:
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/remote-app:/etc/nginx/sites-enabled/remote-app
      - /etc/nginx/proxy_params:/etc/nginx/proxy_params
    ports:
      - 80:80
      - 143:143
  app_server:
    build: .
    command: gunicorn --bind '0.0.0.0:5000' wsgi:app --reload --workers 1 --timeout 60
    environment:
      - FLASK_APP=prediction_service.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    restart: on-failure
Notice that instead of binding gunicorn to the socket, it is bound to all IP interfaces of the app_server container on port 5000.
With the separate app_server service alongside your current nginx service, the service names act as DNS aliases in each container. So in the nginx config:
proxy_pass http://app_server:5000/
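Applied to the remote-app config above, the full server block would look roughly like this (a sketch; app_server is the service name from the compose file):

server {
    listen 80;
    server_name localhost;

    location / {
        include proxy_params;
        # app_server resolves to the gunicorn container via Compose's DNS
        proxy_pass http://app_server:5000/;
    }
}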
So if I open localhost (even without port 8000) I get an answer from nginx.
That sounds like you mean connecting to localhost on port 80, which could be an nginx server running on the host machine. This is also suggested by this line in your compose file: /etc/nginx/proxy_params:/etc/nginx/proxy_params.
That's loading the file from a local installation of nginx on the host. You should be aware of this: having that server running could confuse you when debugging, and launching this compose file somewhere else would require /etc/nginx/proxy_params to be present on the host.
You should probably store this in the project directory, like the other files which are mounted, and mount it like:
- ./nginx/proxy_params:/etc/nginx/proxy_params

Beego - Make use of the port number from the docker-compose implementation instead of the port number from app.conf

I am trying to run a beego application using Docker with the help of docker-compose. I am able to access the demo application at http://localhost:8081 after running docker-compose up.
docker-compose.yml
version: "2"
services:
app:
build: .
volumes:
- .:/go/src/hello
ports:
- "8080:8080"
working_dir: /go/src/hello
command: bee run
Dockerfile
FROM golang:1.10
## Install beego and the bee dev tool
RUN go get github.com/astaxie/beego && go get github.com/beego/bee
app.conf from the beego framework:
appname = hello
httpport = 8081
runmode = dev
How can I override the httpport (8081) from app.conf with the port (8080) used for the app in docker-compose.yml? After running docker-compose up, the application responds on port 8081, not 8080. How can I solve this?
You shouldn't need to update app.conf to 8080 at all; use the ports mapping so the Docker container listens on 8081 internally and responds on 8080 externally.
Change - "8080:8080" to - "8080:8081"
The first port is the one the Docker container responds to from the host, and the second port is the port of the application within the container.
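Applied to the compose file above, the change looks like this:

version: "2"
services:
  app:
    build: .
    volumes:
      - .:/go/src/hello
    ports:
      # host port 8080 -> container port 8081 (beego's httpport)
      - "8080:8081"
    working_dir: /go/src/hello
    command: bee run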
