Flask/Docker - data not sent by 127.0.0.1 - docker

I have a Flask app. If I use "python app.py" to start the server, it runs perfectly and the browser client gets what I want.
However, it breaks when I run it in a Docker container. If I use the Dockerfile below:
FROM python:3.6.5-slim
RUN mkdir /opt/convosimUI
WORKDIR /opt/convosimUI
RUN pip install Flask
RUN pip install grpcio
RUN pip install grpcio-tools
ADD . .
EXPOSE 5000
ENV FLASK_APP=app.py
CMD ["python", "-u", "app.py"]
The browser (on Windows) cannot get a response from the server. Inside the Linux container everything works perfectly, and I can use wget to fetch the content of 127.0.0.1, but from outside the container, nothing in the container is accessible.
If I change the last line of the Dockerfile to:
CMD ["flask", "run", "--host", "0.0.0.0"]
instead of python app.py, then it works again.
Why does this happen? And what can I do if I want to keep using the python app.py command?
The reason is that app.py also runs some parallel processing that shares the client of another service, and that client must always be on while the web server runs, so I can't just put them in separate places.
Any ideas are welcome. Thanks!
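For anyone hitting the same wall: both flask run and app.run() default to host 127.0.0.1, so the server only listens on the container's loopback interface, which Docker's port mapping can't reach. Assuming app.py starts the server with app.run() (a minimal sketch; your routes and any background clients go in the same module as before):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # Bind to all interfaces so the port published by Docker is reachable
    # from the host; the default host, 127.0.0.1, is only visible inside
    # the container.
    app.run(host="0.0.0.0", port=5000)

With that one-line change, CMD ["python", "-u", "app.py"] behaves like CMD ["flask", "run", "--host", "0.0.0.0"], and the parallel processing you start in app.py is untouched.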

Related

Local Flask / Flask container can connect to remote MariaDB, but Flask container on remote Ubuntu host can't

What I'm trying to do
I'm working on a Flask API that is supposed to talk to a remote MariaDB.
The problem
My local, non-containerized Flask app can connect to the remote MariaDB no problem, as can the local Flask container, but I run into issues deploying it in a Docker container on a remote Ubuntu host. The container builds and is responsive, but it can't connect to the MariaDB.
Response when asking API to POST
<title>
sqlalchemy.exc.OperationalError: (mariadb.OperationalError) Can't connect to MySQL server on <mariadb_ip>; (115)
(Background on this error at: https://sqlalche.me/e/14/e3q8) // Werkzeug Debugger
</title>
All that changes from local Flask to remote Flask container is the remote Ubuntu host environment.
Here is my docker compose and dockerfile:
Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /app
RUN apt-get update && apt-get install -y libmariadb-dev
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV FLASK_APP=runner.py
ENV FLASK_ENV=production
ENV FLASK_DEBUG=0
ENV PROPAGATE_EXCEPTIONS=1
EXPOSE 5010
#COPY requirements.txt ./
#RUN pip install -r requirements.txt
RUN pip install gunicorn
COPY . .
CMD ["gunicorn", "--reload", "--bind", "0.0.0.0:5010", "runner:runner"]
docker-compose
version: "3.8"
services:
api:
build: .
restart: always
ports:
- "5010:5000"
What I tried
A lot, but mostly checking whether I set up Flask incorrectly, and that doesn't seem to be the case. I then focused on the Dockerfile and whether I overlooked something, but the Flask API builds and is responsive. I just can't figure out why it works fine locally but not remotely. I'm worried that it has something to do with the remote Ubuntu host, but I have no clue what that could be or how to resolve it.
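One way to narrow this down (a sketch, not from the original post): error 115 is EINPROGRESS on Linux, which from the MariaDB client usually means the TCP handshake to the database never completed, pointing at a firewall, a bind-address restriction on the MariaDB server, or the Docker network on the remote host rather than at Flask itself. A bare-socket probe run inside the container separates network problems from application problems; <mariadb_ip> is the same placeholder as in the error above:

import socket

def probe(host, port, timeout=5.0):
    # Attempt a plain TCP connection, bypassing SQLAlchemy and the
    # MariaDB driver entirely.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print("TCP connection to %s:%s succeeded" % (host, port))
    except OSError as exc:
        # A timeout or "no route to host" here means the problem is
        # network-level (firewall, bind-address, Docker networking),
        # not Flask or SQLAlchemy.
        print("TCP connection to %s:%s failed: %s" % (host, port, exc))

probe("<mariadb_ip>", 3306)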

Run server in background in Dockerfile

Unfortunately, at the moment, I cannot use docker-compose, and I have to get the Cloud SQL Proxy running in a Docker container. But it doesn't run in the container, and as a result MySQL is unable to connect to Google Cloud SQL.
Keep in mind, I was able to connect outside of the container on my machine. So that's how I know the connection works.
My Dockerfile looks like this:
FROM node:12-alpine
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy \
&& chmod +x cloud_sql_proxy
RUN ./cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json &
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 80
CMD ["npm", "run", "serve"
How can I configure it so the Cloud SQL Proxy runs?
The RUN directive executes at build time, so your CMD only starts the node process; that is why you cannot connect: the proxy process is not running at all.
One way is to start both processes from the entrypoint, but be aware that in that case, if the proxy goes down for some reason, your container will keep running, because the container's main process is Node.js.
Change the entrypoint to
ENTRYPOINT [ "sh", "-c", "/cloud_sql_proxy -instances=project_placeholder:region_placeholder:instance_placeholder=tcp:3306 -credential_file=service_account.json & npm start" ]

Expose Both Ports 8080 and 3000 For Cloud Run Deployment

TL;DR: I am trying to deploy my MERN stack application to GCP's Cloud Run, and I'm struggling with what I believe is a port issue.
My React application is in a client folder inside of my Node.js application.
Here is my one Dockerfile to run both the front-end and back-end:
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
# Installing components for the back-end connector
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
... and here is my entrypoint.sh file:
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
docker-compose up works locally, and docker run -p 8080:8080 -p 3000:3000 <image_id> runs the image I built. Port 8080 is for Node and port 3000 for the React app. However, on Cloud Run, the app does not work. When I visit the app deployed to Cloud Run, the frontend initially loads for a split second, but then the app crashes as it attempts to make requests to the API.
In the Advanced Settings, there is a container port which defaults to 8080. I've tried changing this to 3000, but neither works. I cannot enter 8080,3000, as the field takes valid integers only for the port. Is it possible to deploy React + Node at the same time to Cloud Run like this? How can I have Cloud Run listen on both 8080 and 3000, as opposed to just 1 of the 2?
Thanks!
It's not currently possible; a Cloud Run service receives external traffic on a single container port.
You can still run multiple processes inside Cloud Run, but you would use nginx to proxy requests between them depending on the URL, similar to what's recommended in this answer.
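A minimal sketch of that layout (illustrative only; it assumes the React app is served as a static production build and the Node API is moved to an internal port such as 3001): nginx listens on the one port Cloud Run routes to and fans requests out by path.

server {
    listen 8080;  # the single port Cloud Run forwards external traffic to

    # API calls go to the Node backend on an internal port.
    location /api/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # Everything else is served from the static React build.
    location / {
        root /app/client/build;
        try_files $uri /index.html;
    }
}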

How to run golang web app on a Docker container

I have a web app that uses Go as its back end. When I run my website I just do go build; ./projectName and it runs on local port 8000. How do I run this web app in a container? I can run stock images like nginx in a container, but how do I create my own images for my projects? I created a Dockerfile inside my project folder with the following contents:
FROM nginx:latest
WORKDIR static/html/
COPY . /usr/src/app
Then I made an image from the Dockerfile, but when I run it in a container and go to localhost:myPort/static/html/page.html it says 404 page not found. My other question: can Docker only serve static pages in a container? My site receives and sends data. Thanks
This is my Dockerfile (./todo is my project name and folder name), and this is my terminal (as you can see, the container exits immediately).
I guess you are not exposing the Docker port outside the container. That's why you cannot reach it from outside; it isn't specific to your Go program.
Try adding lines like these to your Dockerfile (use whichever ports your app actually listens on):
EXPOSE 80
EXPOSE 443
EXPOSE 3306
Together with publishing the ports when you run the container (e.g. -p 80:80), this makes it accessible from outside.
Here is what I did for my Go web app, which uses the Gin-gonic framework.
My Dockerfile:
FROM golang:latest
# Author
MAINTAINER dangminhtruong
# Create working folder
RUN mkdir /app
COPY . /app
RUN apt -y update && apt -y install git
RUN go get github.com/go-sql-driver/mysql
RUN go get github.com/gosimple/slug
RUN go get github.com/gin-gonic/gin
RUN go get gopkg.in/russross/blackfriday.v2
RUN go get github.com/gin-gonic/contrib/sessions
WORKDIR /app
Then build the Docker image:
docker build -t web-app:latest .
Finally, start the web app:
docker run -it -p 80:8080 -d web-app:latest go run main.go  # my web app listens on port 8080
Hope this is helpful.
You don't need Nginx to run a server in Go.
It's better to build a binary in the Dockerfile.
Here is how your Dockerfile might look:
FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
EXPOSE 8000
CMD ["/app/main"]

Flask running in Docker. Need to rebuild every time

I'm trying to get my Flask Docker build to be able to switch from running uWSGI + Nginx for production to simply running flask run for local development.
I have Flask running in development mode.
Here's my Dockerfile:
FROM python:3
RUN apt-get -qq update
RUN apt-get install -y supervisor nginx
RUN pip install uwsgi
# Expose dev port
EXPOSE 5000
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY nginx.conf /etc/nginx/sites-enabled/default
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN pip install --editable .
CMD [ "supervisord" ]
I expose port 5000, and I run the service from docker-compose.yml:
version: '3'
services:
  web:
    build: ./
    environment:
      DATABASE_URL: postgresql://postgres@db/postgres
      FLASK_APP: app.main
      FLASK_ENV: development
    command: flask run --host=0.0.0.0
    ports:
      - "5000:5000"
    links:
      - db
    volumes:
      - .:/app
  db:
    image: postgres
    restart: always
    ports:
      - 5433:5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    driver: local
I can build this and run it fine at http://0.0.0.0:5000/, but if I change any code in a view or a template, nothing gets refreshed/reloaded on the server and I'm forced to rebuild.
I thought that specifying FLASK_ENV: development in my docker-compose would enable auto-reloading of the Flask server. Any idea how to debug this?
Debug it on the host, outside Docker. If you need access to databases that are running in Docker, you can set environment variables like MYSQL_HOST or PGHOST to point at localhost. You don't need root privileges at all (any docker command implies unrestricted root access). Your IDE won't be upset at your code running "somewhere else" with a different filesystem layout. If you have a remote debugger, you won't need to traverse the Docker divide to get access to it.
Once you have it running reasonably on the host and your py.test tests pass, then run docker build.
(In the particular case of server-rendered views, you should be able to see almost exactly the same thing via the Flask debug server without having the nginx proxy in front of it, and you can control the library versions that get installed via requirements in your setup.py file. So the desktop environment will be much simpler, and you should be able to convince yourself it's very close to what would be running in the Docker world.)
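Concretely, that workflow might look like this (a sketch based on the compose file above, which publishes Postgres on host port 5433; adjust the names to your app):

# Start only the database in Docker, then run Flask on the host.
docker-compose up -d db
export DATABASE_URL=postgresql://postgres@localhost:5433/postgres
export FLASK_APP=app.main
export FLASK_ENV=development
flask run  # the reloader now watches your working tree directly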
When you run the flask app from docker-compose, it is running the version you installed with pip install --editable . in the Dockerfile. The problem is that this is NOT the version you're editing from outside Docker in the volume mounted to '/app' (rather it is a different version in /usr/src/app from the COPY . . command).
COPY and ADD happen at build time, not run time, so for the live version to be served, the flask run command can't be running from the pip-installed package.
You have a few choices:
Move the pip install command to the docker-compose command (possibly by calling a bash script that does the pip install as well as the flask run).
Run the flask run command without pip installing the package. If you're in the /app folder, I believe flask will recognize the current folder as holding the code (given the FLASK_APP variable), so the pip install is unnecessary. This works for me for a simple app where I encountered the same problem, but I imagine if you have complicated imports or other things relying on the installed package this won't work.
I suppose that if you reconcile the /usr/src/app and /app folders, so that you pip install --editable at build time and then mount the local volume over that same folder, Python would be looking in the same place but at the locally mounted code, and you could trick it into working. I haven't quite gotten this to work, though: when I do it, following the Flask logs (docker logs -f web) shows that it notices when a file changes, but the behavior doesn't actually change. I don't know exactly why, but I suspect pip is upset about the folder swap.
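For what it's worth, the simplest form of that reconciliation (a sketch, untested against the original setup) is to mount the source over the same path the Dockerfile copies it to, so the editable install, flask run, and the bind mount all agree on one directory:

services:
  web:
    volumes:
      # Mount the source over the image's WORKDIR (/usr/src/app) instead of
      # /app, so the reloader watches the files you are actually editing.
      - .:/usr/src/app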
