I'm building a Flask + React web app that runs correctly on my machine. I'm trying to dockerize it. I can build the image (stiko:demo), the container runs, and the server starts:
But when I try to open https://0.0.0.0:5000/ in my browser, the connection fails:
I've been searching for a while now, trying to start from various base images, trying ENTRYPOINT + CMD combinations, and using flask run --host=0.0.0.0, but I still get the same issue.
Here is Dockerfile:
FROM ubuntu:20.04
RUN useradd -ms /bin/bash ubuntu
RUN apt update
RUN apt install software-properties-common -y
RUN apt-get install libpq-dev -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install python3.9 -y
RUN apt install python3-pip -y
RUN pip3 install --upgrade pip
WORKDIR /app/build
COPY ./build ./
WORKDIR /app/server
COPY ./server ./
RUN pip3 install -r requirements.txt --no-cache-dir
RUN pip3 install python-dotenv
ENV APP_SETTINGS="config.DevelopmentConfig"
EXPOSE 5000
app.py:
import sys
import os
from flask import Flask, request, render_template
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS, cross_origin
from models import User, Project, Image, db
from api import blueprints
app = Flask(__name__,
            static_folder='../build/static',
            template_folder="../build"
            )
app.config.from_object(os.environ['APP_SETTINGS'])
db.init_app(app)
cors = CORS(app)
# Register the blueprints
for b in blueprints:
    app.register_blueprint(b)
@app.route('/', defaults={'u_path': ''})
@app.route('/<path:u_path>')
@cross_origin()
def index(u_path=None):
    return render_template("index.html")

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, ssl_context='adhoc')
project structure:
build
|__static
|__index.html
|__ ...
server
|__app.py
|__requirements.txt
|__ ...
Dockerfile
Any help would be appreciated, thanks!
Your Dockerfile doesn't appear to start your Flask API. Try adding a CMD instruction at the end.
FROM ubuntu:20.04
RUN useradd -ms /bin/bash ubuntu
RUN apt update
RUN apt install software-properties-common -y
RUN apt-get install libpq-dev -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install python3.9 -y
RUN apt install python3-pip -y
RUN pip3 install --upgrade pip
WORKDIR /app/build
COPY ./build ./
WORKDIR /app/server
COPY ./server ./
RUN pip3 install -r requirements.txt --no-cache-dir
RUN pip3 install python-dotenv
ENV APP_SETTINGS="config.DevelopmentConfig"
EXPOSE 5000
CMD [ "python3", "app.py" ]
Well, just in case someone else gets stuck like me, with probably zero networking skills:
The Dockerfile and app.py are just fine. But instead of trying to access https://0.0.0.0:5000/ in the browser, I should have tried https://127.0.0.1:5000/, which works perfectly fine!
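To see why 0.0.0.0 isn't reachable: 0.0.0.0 is the wildcard address a server *binds* to in order to listen on all interfaces, not an address you connect to. A minimal sketch of that distinction with plain Python sockets (no Flask or Docker involved):

```python
import socket

# Bind to the wildcard address: listen on every local interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))   # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# Clients reach the server through a concrete address, such as loopback.
client = socket.create_connection(("127.0.0.1", port))
conn, addr = server.accept()
print("connected via 127.0.0.1 on port", port)

conn.close()
client.close()
server.close()
```

The same logic applies to the Flask server in the container: it binds to 0.0.0.0, and the browser connects to 127.0.0.1 (or whatever address the published port is mapped to).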
Related
shortcut = tensorflow.keras.layers.Conv2D(filters, 1, strides=stride, use_bias=False, kernel_initializer='glorot_normal', name=name + '_0_conv')(x)
where filters is 64, stride is 2, and name is 'conv2_block1'.
This line works perfectly fine on my local machine but gets stuck in Docker.
Below is my Dockerfile:
FROM python:3.7.9-buster
RUN apt-get update \
&& apt-get install -y -qq \
&& apt install cmake -y \
&& apt-get install ffmpeg libsm6 libxext6 -y \
&& apt-get clean
RUN pip3 install --upgrade pip
# Install libraries
COPY ./requirements.txt ./
RUN pip install -r requirements.txt && \
rm ./requirements.txt
RUN pip install fire
# Setup container directories
RUN mkdir /app
# Copy local code to the container
COPY . /app
# launch server with gunicorn
WORKDIR /app
EXPOSE 8080
ENV PORT 8080
ENV FLASK_CONF config.ProductionConfig
# CMD ["gunicorn", "main:app", "--timeout=60", "--preload", \
# "--workers=1", "--threads=4", "--bind :$PORT"]
CMD exec gunicorn --bind :$PORT main:app --preload --workers 9 --threads 5 --timeout 120
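As an aside, the commented-out exec-form CMD above would not have worked as written: the JSON (exec) form does not run a shell, so $PORT is never expanded. If you prefer exec form, you have to invoke a shell explicitly (a sketch of the same command, otherwise unchanged):

```dockerfile
# Exec form bypasses the shell, so expand $PORT via sh -c:
CMD ["sh", "-c", "gunicorn --bind :$PORT main:app --workers 9 --threads 5 --timeout 120"]
```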
And this is my requirements.txt:
opencv-python
tensorflow==2.2.0
protobuf==3.20.*
cmake
dlib
numpy==1.16.*
The hang was due to the threads exhausting resources, so removing the --preload argument did the job, as the models are then loaded by each worker at runtime.
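Concretely, the fix is the same CMD with only --preload removed (a sketch of the changed line):

```dockerfile
# Without --preload, each worker imports main:app (and loads the models)
# after forking, instead of sharing one preloaded copy.
CMD exec gunicorn --bind :$PORT main:app --workers 9 --threads 5 --timeout 120
```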
So I have a test.py file that creates a DataFrame and outputs a CSV:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([[1,2],[3,4]])
df.write.csv("test", header=True)
I'm trying to get this to run via Docker but with zero luck.
I created the following Dockerfile:
FROM apache/spark-py:v3.3.0
COPY test.py test.py
# RUN python -m pip install --upgrade pip && pip3 install pyspark==3.3.0
RUN apt-get update -y &&\
apt-get install -y python3 &&\
pip3 install pyspark==3.3.0
CMD ["python3", "test.py"]
and run the following to run the test.py script in Docker:
docker build -t ch_image --rm . && docker run --rm --name ch_container ch_image
I'm getting the following error message:
Sending build context to Docker daemon 31.23 kB
Step 1/4 : FROM apache/spark-py:v3.3.0
---> d186e5bd67b6
Step 2/4 : COPY test.py test.py
---> Using cache
---> 460491fa3ac9
Step 3/4 : RUN apt-get update -y && apt-get install -y python3 && pip3 install pyspark==3.3.0
---> Running in 21d6432b8d7b
Reading package lists...
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
The command '/bin/sh -c apt-get update -y && apt-get install -y python3 && pip3 install pyspark==3.3.0' returned a non-zero code: 100
I've tried:
COPY test.py .
CMD ["/opt/spark/bin/pyspark", "test.py"]
RUN python -m pip install --upgrade pip && pip3 install pyspark==3.3.0
Any help to understand why I'm not able to run the script would be much appreciated.
You got a "permission denied" error. Could you please add USER root after the COPY step and before the apt-get RUN step in your Dockerfile, as shown below?
FROM apache/spark-py:v3.3.0
COPY test.py test.py
USER root
RUN apt-get update -y &&\
apt-get install -y python3 &&\
pip3 install pyspark==3.3.0
CMD ["python3", "test.py"]
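The apache/spark-py images run as an unprivileged user by default, which is why apt-get could not take the lock. If you don't want the container itself to keep running as root, you can drop back to an unprivileged user after the privileged steps. A sketch (the Spark images conventionally use UID 185, but that's an assumption here; verify it for your tag, e.g. with docker inspect):

```dockerfile
FROM apache/spark-py:v3.3.0
COPY test.py test.py
USER root
RUN apt-get update -y && \
    apt-get install -y python3 && \
    pip3 install pyspark==3.3.0
# Drop privileges again; UID 185 is assumed to be the image's default user.
USER 185
CMD ["python3", "test.py"]
```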
This is the Dockerfile that I am trying to build using Cloud Build.
FROM ubuntu:latest
LABEL MAINTAINER example
WORKDIR /app
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& pip3 install --upgrade pip
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["newrelic-admin","run-program","gunicorn","wsgi:app","--bind","0.0.0.0:5000","--workers","2","--threads","4","--worker-class=gthread","--log-level","info"]
Here is the requirements.txt file
numpy==1.18.2
pyarrow==0.17.0
lightgbm==2.3.1
scikit-learn==0.22.2.post1
pandas==1.0.3
scipy==1.4.1
Flask==2.1.0
tqdm==4.43.0
joblib==0.15.1
newrelic==6.2.0.156
google-cloud-storage==1.33.0
gunicorn==20.1.0
Once the build begins, it gets stuck at pyarrow and returns
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
How can I fix this?
I want to install rclone in a Docker image on Heroku, to be able to use rclone with a Python Telegram bot.
I made a heroku.yml file
build:
  docker:
    worker: Dockerfile
run:
  worker: bash start.sh
And start.sh as
python3 -m bot
And Dockerfile as
FROM ubuntu:18.04
WORKDIR /usr/src/app
RUN docker pull rclone/rclone:latest
RUN docker run rclone/rclone:latest version
RUN chmod 777 /usr/src/app
RUN apt -qq update
RUN apt -qq install -y python3 python3-pip locales
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
CMD ["bash","start.sh"]
I get the error The command '/bin/sh -c docker pull rclone/rclone:latest' returned a non-zero code: 127 in the Git Bash CLI.
What am I doing wrong? Or what is the procedure?
Thanks in advance!
FROM ubuntu:16.04
WORKDIR /app
# line number 12 - 15 in your Dockerfile
RUN echo "LC_ALL=en_US.UTF-8" >> /etc/environment
RUN echo "LANG=en_US.UTF-8" >> /etc/environment
RUN more "/etc/environment"
RUN apt-get update
#RUN apt-get upgrade -y
#RUN apt-get dist-upgrade -y
RUN apt-get install curl htop git zip nano ncdu build-essential chrpath libssl-dev libxft-dev pkg-config glib2.0-dev libexpat1-dev gobject-introspection python-gi-dev apt-transport-https libgirepository1.0-dev libtiff5-dev libjpeg-turbo8-dev libgsf-1-dev fail2ban nginx -y
# Install Rclone
RUN curl -sL https://rclone.org/install.sh | bash
RUN rclone version
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
Based on this answer.
You can try this. Also, don't try to use docker commands inside a Dockerfile: the docker client isn't available inside the build container, which is why your build fails with exit code 127 ("command not found").
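If you only need the rclone binary, another option is a multi-stage COPY from the official image instead of the install script. A sketch, assuming the static binary lives at /usr/local/bin/rclone inside the rclone/rclone image (worth verifying for your tag):

```dockerfile
FROM ubuntu:18.04
# Copy the static rclone binary straight out of the official image.
COPY --from=rclone/rclone:latest /usr/local/bin/rclone /usr/local/bin/rclone
RUN rclone version
```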
I'm trying to run a pretty simple Flask API in OpenShift Origin 3.6. I can build this container just fine locally and remotely, but when I go to deploy on OpenShift, I get permissions errors on the RUN chmod -R 777 ... lines. Has anyone found a way around this? I wonder if my corporate environment doesn't allow this type of copying, but it's all within a container...
Edit: providing a completely minimal example
Directory structure:
project
├── Dockerfile
└── app
└── api.py
Dockerfile to build base image:
FROM docker.io/ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
cmake curl git make gunicorn nginx python3 python3-pip python3-setuptools build-essential \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
RUN pip3 install pandas numpy scikit-learn scipy xgboost flask-restful nltk gunicorn
RUN mkdir -p /home/app
WORKDIR /
RUN python3 -c 'import nltk; nltk.download("punkt")'
RUN mv /root/nltk_data /home/app
I then run docker build . -t project:latest --no-cache. Next, the Dockerfile that uses the base image from above to deploy the actual application (I basically just comment out the "base image" lines from above and uncomment these ones from below, using the same Dockerfile file):
FROM project:latest
COPY app /home/app
RUN chmod -R 777 /home/app
WORKDIR /home/app
EXPOSE 5000
CMD ["python3", "api.py"]
I build the container to be deployed using docker build . -t project:app --no-cache.
api.py:
import time

if __name__ == '__main__':
    while True:
        print('This app is running')
        time.sleep(30)
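One pattern worth trying for the chmod errors: OpenShift typically runs containers with an arbitrary, non-root UID that belongs to the root group, so its image guidelines favor group permissions over chmod 777. A sketch of the second Dockerfile adapted that way (an assumption; whether your cluster's security policy requires exactly this should be checked with your administrators):

```dockerfile
FROM project:latest
COPY app /home/app
# Make files group-owned by the root group, with group permissions
# mirroring user permissions, so an arbitrary UID can read and execute them.
RUN chgrp -R 0 /home/app && chmod -R g=u /home/app
WORKDIR /home/app
EXPOSE 5000
CMD ["python3", "api.py"]
```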