Kind of a newbie question, hopefully I'm on the right track :)
I have a FastAPI web server hosting some endpoint APIs for an iOS app I'm working on.
My webserver runs on an AWS EC2 machine using Gunicorn & Docker.
At first, I hosted my webserver over plain HTTP, with the following Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY . /app
# add files to Docker environment
...
# run app
RUN pip install -r requirements.txt
EXPOSE 80
CMD ["gunicorn", "-b", "0.0.0.0:80", "-k", "uvicorn.workers.UvicornWorker", "main:app"]
This worked perfectly, and I was able to access my APIs using http://<ec2-machine-public-ip>/...
However, I want to make sure all communications between a client (app-user) and my server are secure using HTTPS.
Since I only want to use my webserver for hosting APIs (and don't actually want anyone accessing the routes via browser), I figured a self-signed certificate would suffice (despite browser warnings).
To do that, I generated a self-signed certificate with OpenSSL using the following command:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365
This created cert.pem and key.pem files.
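(Side note: if you want to skip the interactive prompts and the key passphrase, I believe the same command also works with -nodes and -subj added; the CN value here is just a placeholder:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes -subj "/CN=my-api"
)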
Then I attempted to run over HTTPS with the following Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
WORKDIR /app
COPY . /app
# add files to Docker environment
...
# run app
RUN pip install -r requirements.txt
EXPOSE 433
CMD ["gunicorn", "-b", "0.0.0.0:433", "--keyfile", "key.pem", "--certfile", "cert.pem", "-k", "uvicorn.workers.UvicornWorker", "main:app"]
Unfortunately, when I try to access my APIs using https://<ec2-machine-public-ip>/..., I get an error: "This site can't be reached".
I should mention that everything looks normal in my container's logs:
[2020-11-13 17:02:56 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-11-13 17:02:56 +0000] [1] [INFO] Listening at: https://0.0.0.0:433 (1)
[2020-11-13 17:02:56 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2020-11-13 17:02:56 +0000] [8] [INFO] Booting worker with pid: 8
[2020-11-13 17:02:56 +0000] [8] [INFO] Started server process [8]
2020-11-13 17:02:56,490 Started server process [8]
[2020-11-13 17:02:56 +0000] [8] [INFO] Waiting for application startup.
2020-11-13 17:02:56,491 Waiting for application startup.
[2020-11-13 17:02:56 +0000] [8] [INFO] Application startup complete.
2020-11-13 17:02:56,491 Application startup complete.
BTW, my EC2 machine has port 443 open for HTTPS from all IP addresses in my security group's inbound rules.
What am I doing wrong?
Any help is appreciated!
On a DigitalOcean droplet running Ubuntu 21.10 (Impish), I am deploying a bare-bones Rails 7.0.0.alpha2 application to production. I am setting up nginx as the reverse proxy server to communicate with Puma acting as the Rails server.
I wish to run Puma as a service using systemctl, without sudo root privileges. To this effect I have a Puma service set up in the user's home folder at ~/.config/systemd/user; the service is enabled and runs as I would expect it to.
systemctl status --user puma_master_cms_production
reports the following
● puma_master_cms_production.service - Puma HTTP Server for master_cms (production)
Loaded: loaded (/home/comtechmaster/.config/systemd/user/puma_master_cms_production.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-18 22:31:02 UTC; 1h 18min ago
Main PID: 1577 (ruby)
Tasks: 10 (limit: 2338)
Memory: 125.1M
CPU: 2.873s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/puma_master_cms_production.service
└─1577 puma 5.5.2 (unix:///home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock)
Nov 18 22:31:02 master-cms systemd[749]: Started Puma HTTP Server for master_cms (production).
The rails production.log is empty.
The puma error log shows the following
cat log/puma_error.log
=== puma startup: 2021-11-18 22:31:05 +0000 ===
The pid files exist in the application root's shared/tmp/pids folder:
ls tmp/pids
puma.pid puma.state
and the socket that nginx needs, but is unable to connect to due to permission denied, exists:
ls -l ~/apps/master_cms/shared/tmp/sockets/
total 0
srwxrwxrwx 1 comtechmaster comtechmaster 0 Nov 18 22:31 puma_master_cms_production.sock
nginx is up and running and providing a 502 Bad Gateway response. The nginx error log reports the following error:
2021/11/18 23:18:43 [crit] 1500#1500: *25 connect() to unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock failed (13: Permission denied) while connecting to upstream, client: 86.160.191.54, server: 159.65.50.229, request: "GET / HTTP/2.0", upstream: "http://unix:/home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock:/500.html"
sudo nginx -t reports the following
sudo nginx -t
nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Just to be pedantic, both an ls and a sudo ls of the path reported in the error show
ls /home/comtechmaster/apps/master_cms/shared/tmp/sockets/
puma_master_cms_production.sock
as expected, so I am stumped as to why nginx, running as root via sudo service nginx start, is being denied access to a socket that exists and is owned by the local user rather than root.
I expect the solution is going to be something totally obvious, but I cannot see what it is.
This problem ended up being related to the folder permissions of the user's home folder, and specifically a change in the way Ubuntu 20.10 sets permissions differently from previous versions of Ubuntu, or at least a difference in the way the DigitalOcean setup scripts behave.
This was resolved with a simple chmod o=rx, run from /home against the user folder concerned, e.g.
cd /home
chmod o=rx the_home_folder_for_user
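For anyone debugging something similar: util-linux's namei shows the permissions of every directory along the socket path, which is a quick way to spot the component missing the o=rx bits, e.g.
namei -l /home/comtechmaster/apps/master_cms/shared/tmp/sockets/puma_master_cms_production.sock
Every directory in the chain needs execute permission for the nginx user to be able to traverse it.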
I know that (many) versions of this question have been asked, but none of them has solved my problem. I have a Dash app that is running with Docker on an AWS EC2 instance. I would like to access it from my browser (Firefox), but I keep getting the Firefox can’t establish a connection to the server at 17.67.12.567:8085 error (I have changed the public IPv4 address, 17.67.12.567, from the actual one I am using, but that shouldn't matter).
I run the app with docker run -t -i -p 80:80 app_name, which outputs:
[2021-10-27 08:15:00 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2021-10-27 08:15:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8085 (1)
[2021-10-27 08:15:00 +0000] [1] [INFO] Using worker: threads
[2021-10-27 08:15:00 +0000] [8] [INFO] Booting worker with pid: 8
[2021-10-27 08:15:00 +0000] [9] [INFO] Booting worker with pid: 9
[2021-10-27 08:15:00 +0000] [10] [INFO] Booting worker with pid: 10
[2021-10-27 08:15:00 +0000] [11] [INFO] Booting worker with pid: 11
So, I would expect to be able to access the app at http://17.67.12.567:8085, but when I do this, I get the Firefox can’t establish a connection to the server at 17.67.12.567:8085 error.
I have read a lot about firewalls and security group settings, and the EC2 instance's security group settings are as open as possible, I think (I know this is a bad idea, but I will narrow them down once I can access the app!).
That is all the information that I can think to give, thanks in advance for the help!
Well, I figured this out about five minutes after posting, sorry, but hopefully this helps some other poor soul that is working on a project that they do not totally understand!
The problem was with the port mapping; it needs to match exactly the port shown in Listening at: http://0.0.0.0:8085 (1). So, instead of using docker run -t -i -p 80:80 app_name I needed to use docker run -t -i -p 8085:8085 app_name.
Once I ran the app with docker run -t -i -p 8085:8085 app_name, I was able to access it at http://17.67.12.567:8085, as expected.
Thank you to this answer for guidance!
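As an aside, if you'd rather not type the port in the URL, mapping the container's 8085 onto the host's default HTTP port should also work, e.g.:
docker run -t -i -p 80:8085 app_name
and then the app should be reachable at http://17.67.12.567 directly (assuming port 80 is open in the security group).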
I am trying to host a very simple (Hello World) FastAPI app on AWS Lambda using a Docker image. The image works fine locally, but when I run it on Lambda it shows a port binding error. Below are the error details I get when I try to test the Lambda function with this image.
START RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7 Version: $LATEST
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
ERROR: [Errno 13] error while attempting to bind on address ('0.0.0.0', 80): permission denied
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
END RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7
REPORT RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7 Duration: 3034.14 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 20 MB
2021-11-01T00:23:59.807Z ae27e3b1-596d-41f3-a153-51cb9facc7a7 Task timed out after 3.03 seconds
This says that I can't bind port 80 on 0.0.0.0, so any idea what port and host I should use in the Dockerfile to make it work on AWS Lambda? Thanks. (Below is the Dockerfile I am using.)
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
When running FastAPI in AWS Lambda (assuming it's used with AWS API Gateway, which you need for Lambda to receive HTTP requests), you can't run it using uvicorn and bind to a port as you normally would.
Instead you need to use Mangum, which will create the Lambda handler and transform any incoming Lambda event into a Request object sent to FastAPI, and in my experience it all works pretty well.
So your code to create the handler might look like this:
import uvicorn
from mangum import Mangum

if __name__ == "__main__":
    uvicorn.run("myapp:app")
else:
    handler = Mangum(app)
Additionally, your Dockerfile would have an entrypoint something like this:
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "myapp.handler" ]
awslambdaric is the Python module provided by AWS to run Docker containers in AWS Lambda, as described here.
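Adapting the Dockerfile from your question to this pattern, an untested sketch might look like the following (it assumes handler = Mangum(app) has been added to main.py, and it installs awslambdaric and mangum explicitly in case they aren't in requirements.txt):
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
# awslambdaric and mangum added on the assumption they aren't pinned in requirements.txt
RUN pip install -r /code/requirements.txt awslambdaric mangum
COPY . /code
# Run the Lambda runtime interface client instead of binding uvicorn to a port
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "main.handler" ]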
Also note that the API Gateway resource needs a method configured using the Lambda Proxy Integration.
I haven't tested any of the above; it's just an idea of how to get going.
Being new to Docker, I am following a tutorial but using my own personal MERN-stack project. My project's folder structure consists of a frontend folder and a backend folder, with my Dockerfile placed in the root directory. My frontend uses localhost:3000 and my backend localhost:5000. I am trying to view my application in the browser; however, http://localhost:3000 and http://localhost:5000 take me to a page that states the site can't be reached, and http://172.17.0.3:3000 is just a blank, forever-loading page.
If it helps, I'm using macOS.
Steps I've taken:
docker build -t foodcore:1.0 .
docker run -p 3001:3000 -p 5001:5000 foodcore:1.0
Outcome in my terminal:
> server#1.0.0 dev
> concurrently "nodemon server.js" "npm run client"
[0] [nodemon] 2.0.6
[0] [nodemon] to restart at any time, enter `rs`
[0] [nodemon] watching path(s): *.*
[0] [nodemon] watching extensions: js,mjs,json
[0] [nodemon] starting `node server.js`
[1]
[1] > server#1.0.0 client
[1] > cd .. && cd client && npm start
[1]
[1]
[1] > client#0.1.0 start
[1] > react-scripts start
[1]
[0] Thu, 07 Jan 2021 01:15:15 GMT body-parser deprecated bodyParser: use individual json/urlencoded middlewares at server.js:12:9
[0] Thu, 07 Jan 2021 01:15:15 GMT body-parser deprecated undefined extended: provide extended option at node_modules/body-parser/index.js:105:29
[0] Listening at: http://localhost:5000
[1] ℹ 「wds」: Project is running at http://172.17.0.3/
[1] ℹ 「wds」: webpack output is served from
[1] ℹ 「wds」: Content not from webpack is served from /FoodCore/client/public
[1] ℹ 「wds」: 404s will fallback to /
[1] Starting the development server...
[1]
[1] Compiled successfully!
[1]
[1] You can now view client in the browser.
[1]
[1] Local: http://localhost:3000
[1] On Your Network: http://172.17.0.3:3000
[1]
[1] Note that the development build is not optimized.
[1] To create a production build, use npm run build.
docker container
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6c5abad55b1b foodcore:1.0 "npm run dev" 32 minutes ago Up 32 minutes 0.0.0.0:3001->3000/tcp, 0.0.0.0:5001->5000/tcp optimistic_chandrasekhar
Dockerfile
FROM node:latest
RUN mkdir -p /FoodCore
COPY . /FoodCore
WORKDIR /FoodCore/client
RUN npm install
WORKDIR /FoodCore/server
RUN npm install
EXPOSE 3000 5000
ENTRYPOINT [ "npm", "run", "dev" ]
Thank you very much for taking your time reading this.
UPDATE
Turns out I was trying to access http://localhost:3000, but I had set my application to run at 3001.
UPDATE
Turns out, going by this outline:
docker run -p <host_port>:<container_port>
I was initially setting my host port to 3001 instead of 3000, and was thus accessing the wrong port.
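So, with the command I used, the container ports are published on different host ports. A sketch of the mapping in play:
# -p <host_port>:<container_port>
docker run -p 3001:3000 -p 5001:5000 foodcore:1.0
# frontend: container port 3000 -> http://localhost:3001 on the host
# backend:  container port 5000 -> http://localhost:5001 on the host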
I have my Flask webapp up and running in Docker and am trying to implement some unit tests, but I'm having trouble executing them. While my containers are up and running, I run the following:
docker-compose run app python3 manage.py test
to try to execute my test function in manage.py:
import unittest
from flask.cli import FlaskGroup
from myapp import app, db

cli = FlaskGroup(app)

@cli.command()
def recreate_db():
    db.drop_all()
    db.create_all()
    db.session.commit()

@cli.command()
def test():
    """ Runs the tests without code coverage"""
    tests = unittest.TestLoader().discover('myapp/tests', pattern='test*.py')
    result = unittest.TextTestRunner(verbosity=2).run(tests)
    if result.wasSuccessful():
        return 0
    return 1

if __name__ == '__main__':
    cli()
But since I have a start.sh in my Dockerfile, it just executes my Gunicorn start.sh and doesn't run my test. I just see the following in the console, with no trace of my test() function:
[2018-12-08 01:21:08 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2018-12-08 01:21:08 +0000] [1] [DEBUG] Arbiter booted
[2018-12-08 01:21:08 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2018-12-08 01:21:08 +0000] [1] [INFO] Using worker: sync
[2018-12-08 01:21:08 +0000] [9] [INFO] Booting worker with pid: 9
[2018-12-08 01:21:08 +0000] [11] [INFO] Booting worker with pid: 11
[2018-12-08 01:21:08 +0000] [13] [INFO] Booting worker with pid: 13
[2018-12-08 01:21:08 +0000] [1] [DEBUG] 3 workers
start.sh:
#!/bin/sh
# Wait until MySQL is ready
while ! mysqladmin ping -h"db" -P"3306" --silent; do
echo "Waiting for MySQL to be up..."
sleep 1
done
source venv/bin/activate
# Start Gunicorn processes
echo Starting Gunicorn.
exec gunicorn -b 0.0.0.0:5000 wsgi --reload --chdir usb_app --timeout 9999 --workers 3 --access-logfile - --error-logfile - --capture-output --log-level debug
Does anyone know why or how I can execute the test function in an existing container without having to start the Gunicorn workers again?
I presume start.sh is your entrypoint. If not, you can make it the entrypoint instead of putting it in CMD.
We can override the entrypoint script using the --entrypoint argument; I do it as below:
docker-compose run --rm --entrypoint "python3 manage.py test" app
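Alternatively, since your containers are already up, docker-compose exec should run the tests inside the existing app container, without booting the Gunicorn workers a second time:
docker-compose exec app python3 manage.py test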