I am trying to host a very simple (Hello World) FastAPI app on AWS Lambda using a Docker image. The image works fine locally, but when I run it on Lambda I get a port binding error. Below are the error details I get when I test the Lambda function with this image.
START RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7 Version: $LATEST
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
ERROR: [Errno 13] error while attempting to bind on address ('0.0.0.0', 80): permission denied
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
END RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7
REPORT RequestId: ae27e3b1-596d-41f3-a153-51cb9facc7a7 Duration: 3034.14 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 20 MB
2021-11-01T00:23:59.807Z ae27e3b1-596d-41f3-a153-51cb9facc7a7 Task timed out after 3.03 seconds
This says that I can't bind port 80 on 0.0.0.0, so any idea what port and host I should use in the Dockerfile to make it work on AWS Lambda? Thanks. (Below is the Dockerfile I am using.)
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
When running FastAPI in AWS Lambda (assuming it is used with AWS API Gateway, which you need for Lambda to receive HTTP requests), you can't run it with uvicorn and bind to a port as you normally would.
Instead you need to use Mangum, which creates the Lambda handler and transforms any incoming Lambda event into a Request object that it hands to FastAPI; in my experience it all works pretty well.
So your code to create the handler might look like this:
import uvicorn
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()  # your existing FastAPI app and routes

if __name__ == "__main__":
    uvicorn.run("myapp:app")  # local development: run uvicorn directly
else:
    handler = Mangum(app)  # on Lambda: Mangum wraps the app as the handler
Additionally, your Dockerfile would have an entry point something like this:
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "myapp.handler" ]
Where awslambdaric is the Python module provided by AWS to run Docker containers in AWS Lambda, as described here.
Also note that API Gateway resource needs a method configured using the Lambda Proxy Integration.
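For completeness, here is an untested sketch of what the full Dockerfile could look like, assuming the handler above lives in myapp.py and that fastapi, mangum, and uvicorn are listed in requirements.txt (awslambdaric is installed explicitly here; AWS's public.ecr.aws/lambda/python base images already ship it):
FROM python:3.9
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt awslambdaric
COPY . .
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "myapp.handler" ]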
I haven't tested any of the above; it's just an idea of how to get going.
I'd like to shut down my docker app when the GCE VM stops.
I use a docker image on GCE:
FROM node:16-alpine
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
WORKDIR /usr/src/app
COPY dist/fetcher.js ./
CMD [ "node", "fetcher.js" ]
fetcher.js is:
console.log('## start');
setInterval(() => console.log('tick'), 10 * 1000);
for (const signal of ['SIGTERM', 'SIGINT', 'SIGHUP']) {
  process.on(signal, async (signal) => {
    console.info(`Got ${signal}. Graceful shutdown start at ${Date().toString()}`);
    process.exit();
  });
}
Locally I can see the log message when using docker kill -s 1 <container>:
## start
tick
tick
tick
Got SIGHUP. Graceful shutdown start at Mon Jan 03 2022 04:39:53 GMT+0000 (Coordinated Universal Time)
It works well when I SSH into the VM and run the same command (docker kill -s 1 <container>).
However, I cannot see the log when I stop the VM:
tick
...
tick
methodName: "v1.compute.instances.stop"
For some reason the signal handler does not seem to be executed.
I have tried different things:
with and without tini,
writing a file to Google Storage in the signal handler (in case the problem was only with logging).
But none of this works.
Any help would be appreciated.
I agree with @John Hanley and would like to add that stopping an instance causes Compute Engine to send the ACPI Power Off signal to the instance. I therefore believe it is intended behavior that log processing stops once the shutdown or stop signal is sent to a GCE VM. Getting logs out of the VM after the stop signal has been processed is effectively impossible, which could be why you were not receiving further logs.
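One possible workaround (untested here) is Compute Engine's shutdown scripts, which run on a best-effort basis when an instance is stopped: stop the container from the script so it receives its signal while the VM is still up. The container name my-fetcher is a placeholder:
#!/bin/bash
# shutdown.sh: run by GCE (best effort) when the instance stops
docker stop --time 30 my-fetcher  # sends SIGTERM, then SIGKILL after 30s
The script can then be attached to the VM with:
gcloud compute instances add-metadata my-vm --metadata-from-file shutdown-script=shutdown.sh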
I am using the GoLand IDE on macOS and I'm trying to debug an application running in a container. I'm attempting remote debugging, except that the container is on my local machine.
When I run the debugger in my IDE it does stop on the breakpoint, but what it is actually debugging is the application on my local machine, not the one in the container.
For background, my application is supposed to listen on port 8000 and return "Hello, visitor!".
If I compile and run this file through a docker container, map my port 8000, and make a request through the browser or through a .http file, I do receive this response.
However, when I run it through Delve in the container, it does not respond through the browser.
Also, once the container is up, when I start the debugger in my IDE it does not debug the application in the container; instead it complains:
2020/08/05 17:57:39 main.go:16: listen tcp :8000: bind: address already in use
I've tried following these 2 tutorials, both of which are mostly the same, except for the versions of the docker images they use.
Tutorial1
Tutorial2
I have gone through all the comments on these 2 posts as well but haven't found anything that would solve my problem.
Here is my main.go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Set the flags for the logging package to give us the filename in the logs
	log.SetFlags(log.LstdFlags | log.Lshortfile)
	log.Println("starting server...")
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = fmt.Fprintln(w, `Hello, visitor!`)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
Here is my Dockerfile:
# Compile stage
FROM golang AS build-env
# Build Delve
RUN git config --global http.sslVerify "false"
RUN git config --global http.proxy http://mycompanysproxy.com:80
RUN go get github.com/go-delve/delve/cmd/dlv
ADD . /dockerdev
WORKDIR /dockerdev
RUN go build -gcflags="all=-N -l" -o /server
# Final stage
FROM debian:buster
EXPOSE 8000 40000
WORKDIR /
COPY --from=build-env /go/bin/dlv /
COPY --from=build-env /server /
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/server"]
The container comes up successfully and the attached console's log says:
API server listening at: [::]:40000
However, it does not seem to be listening.
If I run
GET http://localhost:8000/
Accept: application/json
I expect it to stop on the breakpoint, but it doesn't. Rather, it complains:
org.apache.http.NoHttpResponseException: localhost:8000 failed to respond
Am I missing something?
Is this the way to invoke the debugger on a containerized app?
Some more information:
I figured out that I was using the wrong debug configuration: you need to press the debug button with the remote-debug configuration (top right) selected, not a local run configuration.
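For anyone else following the same tutorials, here is a sketch of how the pieces fit together (the image name debug-server is a placeholder): publish both the app port and the Delve port, then attach from GoLand with a Go Remote run configuration pointing at localhost:40000. Starting a local run/debug configuration instead launches a second copy of the app on the host, which is exactly what produces the "address already in use" error.
docker build -t debug-server .
docker run -p 8000:8000 -p 40000:40000 --cap-add=SYS_PTRACE debug-server
The --cap-add=SYS_PTRACE flag is often needed so Delve can trace the process inside the container.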
I am trying to run Botpress with Docker. I set up my Dockerfile as follows:
FROM botpress/server:v11_9_5
ADD . /botpress
WORKDIR /botpress
CMD ["./bp"]
After building the image, I run docker run my_image:latest to start Botpress. However, it cannot connect to the Duckling server.
According to the log,
03:20:32.917 Mod[nlu] Couldn't reach the Duckling server , so it will be disabled.
For more informations (or if you want to self-host it), please check the docs at
https://botpress.io/docs/build/nlu/#system-entities
[Error, connect ECONNREFUSED 127.0.0.1:8000]
STACK TRACE
Error: connect ECONNREFUSED 127.0.0.1:8000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1158:14)
My nlu.json settings are as follows:
{
"$schema": "../../assets/modules/nlu/config.schema.json",
"confidenceTreshold": 0.7,
"ducklingURL": "https://duckling.botpress.io",
"ducklingEnabled": true,
"autoTrainInterval": "30s",
"preloadModels": false,
"languageModel": "en",
"fastTextOverrides": {}
}
Duckling is bundled with Botpress when using the Docker image (and is expected to be started when you start Botpress). There is an environment variable which tells it to use the local version of duckling.
If you run the image directly, both processes are started at the same time.
There are a couple of examples on how to run both of them here: https://github.com/botpress/botpress/tree/master/examples/docker-compose
Basically:
command: bash -c "./duckling -p 8000 & ./bp"
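For reference, a minimal docker-compose.yml sketch along the lines of those examples; the 3000:3000 mapping assumes Botpress serves on its default port 3000:
version: "3"
services:
  botpress:
    image: botpress/server:v11_9_5
    command: bash -c "./duckling -p 8000 & ./bp"
    ports:
      - "3000:3000"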
I'm using docker compose to run a simple web server project I created. This configuration had been working fine for months but suddenly stopped working after I hadn't been to the office for two weeks.
It works when I map my ports like 8080:80, but I don't want to have to type out port 8080 every time. I used netstat -a -n -o | findstr /c:80 to find the process ID of the process listening on port 80, and tasklist /fi "pid eq 4" to find out the name of the process.
It turns out it's some system process, so I'm not sure what to do about that. I've uninstalled Skype and checked that the World Wide Web Publishing Service isn't running. Does anybody have an explanation or ideas on how to fix this?
Thanks in advance.
update
When I run net stop http and kill all dependent services with it, port 80 is free. Services being stopped: Windows Remote Management (WS-Management), SSDP Discovery, Print Spooler, BranchCache and, of course, HTTP. Which of these could be the culprit?
update 2
I have now stopped those services one by one, and it seems BranchCache is responsible for this. More testing ensues.
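For reference, the commands below are what I'm using from an elevated prompt to inspect the reserved port ranges and keep BranchCache stopped; that PeerDistSvc is the BranchCache service name is an assumption on my part:
netsh interface ipv4 show excludedportrange protocol=tcp
REM stop BranchCache and keep it from auto-starting on reboot (PeerDistSvc assumed)
net stop peerdistsvc
sc config peerdistsvc start= disabled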
docker-compose.yml
version: "3"
services:
vote-client:
build:
context: .
dockerfile: Dockerfile
ports:
- "80:80"
Dockerfile
FROM nginx
COPY ./html /usr/share/nginx/html
When I run docker-compose up, this is my output:
docker-compose up --build
Removing vote-client_vote-client_1
Building vote-client
Step 1/2 : FROM nginx
---> 42b4762643dc
Step 2/2 : COPY ./html /usr/share/nginx/html
---> Using cache
---> a1aade2a299e
Successfully built a1aade2a299e
Successfully tagged vote-client_vote-client:latest
Recreating c2654f31dcff_vote-client_vote-client_1 ... error
ERROR: for c2654f31dcff_vote-client_vote-client_1 Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: for vote-client Cannot start service vote-client: driver failed programming external connectivity on endpoint vote-client_vote-client_1 (2188c8607a04ba2388a661504601431d6b30825d595dafae0c318f2d2b5685b0): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
I have WildFly running in a Docker container.
Within WildFly the messaging-activemq subsystem is active.
The subsystem and extension defaults are taken from the standalone-full.xml file.
After starting WildFly, the following output is displayed:
[org.apache.activemq.artemis.jms.server] (ServerService Thread Pool -- 64)
AMQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector.
Switching to "eeb79399d447".
If this new address is incorrect please manually configure the connector to use the proper one.
The eeb79399d447 is the Docker container id.
It's also impossible to connect to JMS from my Java client. Connecting gives the following error:
AMQ214016: Failed to create netty connection: java.net.UnknownHostException: eeb79399d447
When I start WildFly on my local workstation (outside Docker) the problem does not occur, and I can connect to JMS and send my messages.
Here are a few options. Options 1 and 2 may be what you asked for, but in the end they didn't work for me. Option 3, however, I think will better address your intent.
Option 1) You can do this by adding some scripting to your docker image (and not touching your standalone-full.xml). The basic idea (credit goes to GitHub user kwart) is to make a docker entry point that can determine the IPv4 address of the docker container before calling standalone.sh.
See: https://github.com/kwart/dockerfiles/tree/master/wildfly-ext and check out the usage of WILDFLY_BIND_ADDR. I forked it.
Notes:
GetIp.java will print out the IPv4 address (and is copied into the container)
dockerentry-point.sh calls GetIp.java as needed
WILDFLY_BIND_ADDR=${WILDFLY_BIND_ADDR:-0.0.0.0}
if [ "${WILDFLY_BIND_ADDR}" = "auto" ]; then
  WILDFLY_BIND_ADDR=`java -cp /opt/jboss GetIp`
fi
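With that entry point in place, you would start the container with the bind address set to auto, for example (the image name wildfly-ext is a placeholder):
docker run -e WILDFLY_BIND_ADDR=auto wildfly-ext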
Option 2) Alternatively, using some script-fu, you may be able to do everything you need in a Dockerfile:
#CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
CMD ["sh", "-c", "DOCKER_IPADDR=$(hostname --ip-address) && echo IP Address was $DOCKER_IPADDR && /opt/jboss/wildfly/bin/standalone.sh -c standalone-full.xml -b=$DOCKER_IPADDR -bmanagement=$DOCKER_IPADDR"]
Your mileage may vary.
I was working with the helloworld-jms quickstart from the WildFly docs and had to jump through some extra hoops to get the JMS queue created. Even then, the sample Java code wasn't able to connect with either option 1 or option 2.
Option 3) (This worked for me, btw.) Start your container bound to 0.0.0.0, expose port 8080 for your JMS client running on the host, and add an entry to your host's /etc/hosts file:
Dockerfile:
FROM jboss/wildfly
# COPY foo.war /opt/jboss/wildfly/standalone/deployments/
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin --silent
RUN /opt/jboss/wildfly/bin/add-user.sh -a quickstartUser quickstartPwd1! --silent
RUN echo "quickstartUser=guest" >> /opt/jboss/wildfly/standalone/configuration/application-roles.properties
# use standalone-full.xml to enable the JMS feature
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
Build & run (expose 8080 if your client is on your host machine):
docker build -t mywildfly .
docker run -it --rm --name jboss -p 127.0.0.1:8080:8080 -p 127.0.0.1:9990:9990 mywildfly
Then on the host machine ( I'm running OSX; my jboss container's id was 46d04508b92b ) add an entry in your /etc/hosts for the docker-host-name that points to 127.0.0.1:
127.0.0.1 46d04508b92b # <-- replace with your container's id
Once the wildfly container is running, you create/configure the testQueue via scripts or in the management console. My config came from https://github.com/wildfly/quickstart.git under the helloworld-jms folder:
docker cp configure-jms.cli jboss:/tmp/
docker exec jboss /opt/jboss/wildfly/bin/jboss-cli.sh --connect --file=/tmp/configure-jms.cli
and SUCCESS from mvn clean compile exec:java on the host machine (from within the helloworld-jms folder):
Mar 28, 2018 9:03:15 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Found destination "jms/queue/test" in JNDI
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Sending 1 messages with content: Hello, World!
Mar 28, 2018 9:03:16 PM org.jboss.as.quickstarts.jms.HelloWorldJMSClient main
INFO: Received message with content Hello, World!
You need to edit the standalone-full.xml to cope with JMS behind NAT, and when you run the Docker container, pass through the IP and port that your JMS client can use to connect; in Docker's default config that is the IP of the machine running Docker.