Assigning port when building flask docker image - docker

I recently created an app using Flask and put the .py file in a Docker container. However, I am confused by the different ways people assign the port in examples online.
First of all, at the bottom of my .py file I wrote:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
In some cases I saw people specify the port in CMD when writing the Dockerfile:
CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
In my own experience, the port assigned in CMD didn't work in my case at all. I would like to learn the difference between the two approaches and when to use each.

Regarding this approach:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
__name__ is equal to "__main__" when the app is launched directly with the Python interpreter (i.e. executed with the command python app.py) - which is a Python technicality and has nothing to do with Flask. In that case the app.run function is called with the arguments shown, and it starts the Werkzeug development server.
This block will not run if you're executing the program with a production WSGI server like gunicorn, as __name__ will not be equal to "__main__" in that case, so the app.run call is bypassed.
In practice, putting the app.run call in this if block means you can launch the dev server with python app.py, and avoid launching it when the same code is imported by gunicorn or similar in production.
There are lots of older tutorials and posts which reference the above approach. Modern versions of Flask ship with the flask command, which is intended to replace this. So essentially, without that if block, you can launch the development server, which imports your app object in a similar manner to gunicorn:
flask run -h 0.0.0.0 -p 8000
This automatically looks for an object called app in app.py, and accepts the host and port options, as you can see from flask run --help:
Options:
  -h, --host TEXT     The interface to bind to.
  -p, --port INTEGER  The port to bind to.
One advantage of this method is that the development server won't crash if you're using the auto reloader and introduce syntax errors. And of course the same code will be compatible with a production server like gunicorn.
With the above in mind, regarding the command you pass:
python app.py --host=0.0.0.0 --port=8000
I'm not sure whether you've been confused by references to the flask command's supported options, but for this command to work you'd need to write code yourself to do something with those options. That could be done with a Python module like argparse, but it would probably be redundant, given that the flask command supports this out of the box.
To conclude: you should probably remove the if block, and your Dockerfile should contain:
CMD ["flask", "run", "--host=0.0.0.0", "--port=8000"]
You may also wish to check that the FLASK_ENV environment variable is set to development to use the auto reloader, and be aware that the CMD line would need to be changed within this Dockerfile to run with gunicorn or similar in production, but that's probably outside the scope of this question.
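For reference, a minimal sketch of what that production CMD might look like, assuming gunicorn is installed in the image and your Flask object is named app inside app.py:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]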

CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"] means: Python run application app.py and pass the --host and the --port parameters to that application. It is up to your app.py to do something with those parameters. If your app does not process those flags, then you do not need to add them to the CMD.
If in your code you have app.run(host='0.0.0.0',port=8000), then your app will always be listening to port 8000 inside the container. In this case you can just use CMD ["python3", "app.py"]
If you wanted to have the ability to change the port and host that your app is listening to, the you could add some code to read the values from the command line. Once you setup your app to look at values from the command line, then it would make sense to run CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
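A minimal sketch of what that command-line handling might look like with argparse (the defaults here are assumptions for illustration):
import argparse
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # Parse --host and --port so the flags in CMD actually have an effect
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8000)
    args = parser.parse_args()
    app.run(host=args.host, port=args.port)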

Related

Docker container name resolution inside and outside

I have a Flask app that uses RabbitMQ, where both are Docker containers (along with other components, such as Celery workers). I want to use a common .env environment file for both dev and container use in my docker-compose.
Example .env
RABBITMQ_DEFAULT_HOST=localhost
Now, if I use this with flask run it works fine, as the container's RabbitMQ port is mapped to the host. If I run this inside the Flask docker container, it fails, because localhost inside the Flask container is not the same as the host. If I change localhost to my container name, rabbitmq:
RABBITMQ_DEFAULT_HOST=rabbitmq
It will resolve nicely inside the Flask container via Docker to the dynamic IP of the rabbitmq container (a local port mapping isn't even necessary); however, flask run during development has no knowledge of this name/IP mapping and will fail.
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672.
Update
So far, this is the best I've come up with: in the program, I use a socket to check whether the container name resolves; if not, it falls back to the env var, with a default of localhost.
import os
import socket

def get_rabbitmq_host() -> str:
    try:
        # Resolves only when running inside the docker network
        return socket.gethostbyname("rabbitmq")  # container name
    except socket.gaierror:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "localhost")
Here is another method I tried that's a lot faster (no DNS timeout), but changes the order a bit.
def get_rabbitmq_host() -> str:
    # Probe the local port first, which fails fast instead of waiting on DNS
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    result = sock.connect_ex(("127.0.0.1", 5672))
    sock.close()
    if result == 0:
        return "127.0.0.1"
    elif os.getenv("RABBITMQ_DEFAULT_HOST") in ("localhost", "127.0.0.1"):
        return "rabbitmq"
    else:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "rabbitmq")
Well no, not really. Or yes, depending on how you view it.
Since you have now found out that localhost does not mean the same thing in every context, maybe you should split up the variables, even though in some situations they may have the same value.
So just something like
rabbit_mq_internal_host=localhost
rabbit_mq_external_host=rabbitmq #container name!
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
Well: that is the point of .env files. You have two different environments there, so make two different .env files. Or let everyone adjust the .env file according to his/her preferred way of running the app.
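For instance (the file names and the env_file wiring below are illustrative, not prescriptive):
# .env.local - used when running flask run directly on the host
RABBITMQ_DEFAULT_HOST=localhost
# .env.docker - referenced only from docker-compose.yml
RABBITMQ_DEFAULT_HOST=rabbitmq
docker-compose can then point the Flask service at the container-specific file:
web:
  env_file: .env.docker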
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672
If you connect from container to container within a docker network, you do not need to publish the port at all. Only ports that have to be accessed from outside the network need to be published.
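And when you do need the port on the host for local development, docker-compose accepts a host-IP prefix in the port mapping, which keeps the port off external interfaces; a sketch:
rabbitmq:
  image: rabbitmq
  ports:
    - "127.0.0.1:5672:5672"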
I am not sure if I completely understood your situation. I am assuming that you are developing the application and have environments you would like to keep separated, for example localhost, development, test, etc.
With that assumption, I would suggest having one env file per environment, like env_localhost and env_development, where each key=value is set according to that environment. Also keep an env.template file with empty key= entries, so that anyone who doesn't want a Docker-based run can set things up accordingly in a new file and call it .env.
Once the above is created, you can modify the Dockerfile for the app to use the following snippet. The important parts are the build argument called SETUP and the renaming of the env file to .env during the build:
# ... Other build commands follow
WORKDIR /usr/src/backend
COPY ./backend .
ARG SETUP=development  # build argument passed during docker-compose build; defaults to development
COPY ./backend/env_${SETUP} .env  # copies the matching env file into the image as .env
# ... Other build commands follow
After modifying the Dockerfile, you can run docker-compose build for a given environment by passing SETUP as a build argument, as follows:
docker-compose build --build-arg SETUP=localhost your_service_here
Additionally, once this process is stable you can create a Makefile with targets like make build-local, make build-dev and so on.
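A sketch of what such a Makefile might look like (your_service_here is the placeholder from the command above; recipe lines must be indented with a real tab):
build-local:
	docker-compose build --build-arg SETUP=localhost your_service_here

build-dev:
	docker-compose build --build-arg SETUP=development your_service_here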

How to use SSL key and cert files in docker?

I have a FastAPI app with uvicorn as the starter. The app is in a docker container and works fine.
Now I want to run uvicorn with SSL, so I need PEM and KEY files inside the container. On the dev machine these files are /ssl/localhost.pem and /ssl/localhost.key; on the prod server they are /ssl/certs/prod.pem and /ssl/private/prod.key.
My idea was to define SSL_KEY and SSL_PEM env vars holding the system paths to those files and use them in the Dockerfile:
RUN cp SSL_KEY /ssl.key
...
CMD uvicorn ... --ssl-keyfile=/ssl.key ...
But it doesn't work, and I do not know why.
Do you have any ideas about how to implement this case? Please :-)
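One observation on the snippet above: a Dockerfile build does not see host environment variables, and RUN cp SSL_KEY treats SSL_KEY as a literal filename, so the copy can never succeed as written. A minimal sketch of an alternative, assuming the files are mounted into the container at runtime (main:app is a placeholder module name); the shell form of CMD expands the variables when the container starts, and --ssl-keyfile/--ssl-certfile are uvicorn's flags for this:
# Dockerfile: paths are supplied at runtime, not baked in at build time
CMD uvicorn main:app --ssl-keyfile="$SSL_KEY" --ssl-certfile="$SSL_PEM"
docker run -v /ssl:/ssl -e SSL_KEY=/ssl/localhost.key -e SSL_PEM=/ssl/localhost.pem ...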

Cloud Run Deploy fails: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I have a project which I had previously deployed successfully to Google Cloud Run, set up with a trigger such that pushing to the repo's main branch on GitHub would automatically deploy it. It worked great.
Then I tried to rename the GitHub repo, which meant deleting the trigger and creating a new one, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT env var (8080) and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
# Copy files into the docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0:
ports: [
  0: {
    containerPort: 8080
    name: "http1"
  }
]
All that said, the logs show "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.
You have mixed things up. The flask command's default port is indeed 5000. If you want to change it, you need to add the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, your flask run command invokes the Flask runtime and totally ignores the standard Python entrypoint if __name__ == "__main__":. If you want to use that entrypoint, use the Python runtime:
CMD ["python", "<main file>.py"]

Docker Flask, stuck at "Waiting response from localhost"

I'm trying to run a test Flask app in Docker, but for some reason I can't connect to it from the host machine. The file layout is:
pyweb/
|   app.py
|   Dockerfile
|___app/
    |   __init__.py
The file contents:
Dockerfile:
FROM python:2.7
COPY . /pyweb
WORKDIR /pyweb
RUN pip install flask
RUN export FLASK_APP=app.py
ENTRYPOINT ["python"]
CMD ["app.py"]
app.py:
from flask import Flask
from app import app
app/__init__.py:
from flask import Flask
app = Flask(__name__)
@app.route('/')
@app.route('/index')
def index():
    return "Hello, World!"
app.run(host='0.0.0.0')
After building the image and starting the container with docker run -d -p 5000:5000, the output is:
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
But I can't connect from the host machine; it doesn't respond.
OK, so I had that same issue during an interview a bit more than a year ago, and it still puzzles me to this day.
I don't know why it's not working as expected when running the flask app with app.run().
Somehow it works fine when starting the app with the flask command line directly.
The Dockerfile would look like this:
FROM python:2.7
COPY . /pyweb
WORKDIR /pyweb
RUN pip install flask
ENV FLASK_APP=app.py
CMD ["flask", "run", "--host", "0.0.0.0"]
And you can drop app.run(host='0.0.0.0') from the __init__.py file.
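With that change, app/__init__.py reduces to something like:
from flask import Flask

app = Flask(__name__)

@app.route('/')
@app.route('/index')
def index():
    return "Hello, World!"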
I'll probably spend some time later trying to understand why your original implementation doesn't work as expected. I don't know much about Flask, but I don't see anything wrong in your code.
You should specify the port in the Flask app and set debug=True:
app.run(host='0.0.0.0', port=5000, debug=True)
Then connect to the host port you mapped to the container's port 5000:
docker run -p 3000:5000
Now you can access your app at http://localhost:3000/
This might not answer the OP's question, but it might help others who end up here.
For me, the Flask output showed that the application was running after launching the Docker container. It showed this line, for example:
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
So, I clicked this link and expected it to work.
However, 172.17.0.2 is the container's internal IP address, which generally isn't reachable from the host machine (particularly on Docker Desktop). You need to connect through the port published on the host instead: going to http://0.0.0.0:5000/ (equivalently http://localhost:5000/) reaches the published port on the host, which Docker forwards to the Flask app inside the container.
So going to http://0.0.0.0:5000/ worked.

Linode/lamp + docker-compose

I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial, which worked great (it's actually super simple).
Now I'd like to use docker-compose, because I find it more convenient to simply type docker-compose up and be good to go.
Here what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
build: .
ports:
- "80:80"
volumes:
- .:/var/www/example.com/public_html/
When I run docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml, but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
RUN commands in the Dockerfile execute while the image is being built, so any service started there is gone by the time a container starts from the image.
In fact, many base images, like the Debian ones, are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script starts both services and then blocks forever, so the container's main process stays alive.
You need to put it inside your container, which you do via two lines in the Dockerfile. Overall I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when the container fires up, so that the container stays running and the services actually get started.
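Equivalently, since you asked about a command argument in docker-compose.yml: compose has a command key that overrides CMD, so (keeping the COPY and chmod lines in the Dockerfile) a sketch would be:
web:
  build: .
  command: /run.sh
  ports:
    - "80:80"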
What you should also look out for is whether port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
If that's the case (or you're not sure), try changing the port line to something like 81:80 and try again.
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment:
https://github.com/sprintcube/docker-compose-lamp
