Cloud Run Deploy fails: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I have a project which I had previously successfully deployed to Google Cloud Run, and set up with a trigger such that upon pushing to the repo's main branch on Github, it would automatically deploy. It worked great.
Then I tried to rename the github repo, which meant deleting and creating a new trigger, and now I cannot get it working again.
Every time, the build succeeds but deployment fails with this error in Cloud Build:
Step #2 - "Deploy": ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
I have not changed anything other than the repo name, leading me to believe the fix is not with my code, but I tried some changes there anyway.
I have looked into the solutions set forth in this post. However, I believe I am listening on the correct port.
My app is using Python and Flask, and contains this:
if __name__ == "__main__":
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
This should use the PORT env var and otherwise default to 8080. I also tried just using port=8080.
I tried explicitly exposing the port in the Dockerfile, which also did not work:
FROM python:3.7
#Copy files into docker image dir, and make that the current working dir
COPY . /docker-image
WORKDIR /docker-image
RUN pip install -r requirements.txt
CMD ["flask", "run", "--host", "0.0.0.0"]
EXPOSE 8080
Cloud Run does seem to be using port 8080 - if I dig into the response, I see this nested under Response.spec.container.0 :
ports: [
  0: {
    containerPort: 8080
    name: "http1"
  }
]
All that said, if I look at the logs, it shows "Now running on Port 5000".
I have no idea where that Port 5000 is coming from or being set, but trying to change the ports in Python/Flask and the Dockerfile to 5000 leads to the same errors.
How do I get it to run on Port 8080? It's very strange to me that this was working FINE prior to renaming the repo and creating a new trigger. How is this setup different? The Trigger does not give an option to set the port so I'm not sure how that caused this error.

You have mixed things up. The flask run command's default port is 5000. If you want to change it, you need to pass the --port parameter to your flask run command:
CMD ["flask", "run", "--host", "0.0.0.0","--port","8080"]
In addition, the flask run command uses Flask's own runner and completely ignores the standard Python entrypoint if __name__ == "__main__":. If you want to use that entrypoint, use the Python runtime:
CMD ["python", "<main file>.py"]

Related

ASP.NET Core Webapi startup url - Docker-Compose vs. Docker build

I am encountering an interesting difference in startup behaviour when running a simple net6.0 web api built with docker-compose in comparison to being built with docker. The application itself runs in a kubernetes cluster.
Environment
Minikube v1.26.1
Docker Desktop v4.12
Docker Compose v2.10.2
Building with docker-compose
docker-compose.yml
version: "3.8"
services:
  web.api:
    build:
      context: ./../
      dockerfile: ./web.API/Dockerfile
The context is set to the parent directory due to some files needed there.
Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /src
ENV ASPNETCORE_URLS=http://+:80
COPY Directory.Build.props ./Directory.Build.props
COPY .editorconfig ./.editorconfig
COPY ["webapi/web.API", "web.API/"]
RUN dotnet build "web.API/web.API.csproj" -c Release --self-contained true --runtime alpine-x64
RUN dotnet publish "webapi/web.API.csproj" -c Release -o /app/publish \
    --no-restore \
    --runtime alpine-x64 \
    --self-contained true \
    /p:PublishSingleFile=true
FROM mcr.microsoft.com/dotnet/runtime-deps:6.0-alpine
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["./web.API"]
This results in the app starting up within the kubernetes cluster with the following logs:
Now listening on: http://[::]:80
Building with docker build
Using the same Dockerfile mentioned earlier with the same build context you can see in the docker-compose.yml, a deployment to k8s results in the following log:
Now listening on: http://localhost:5000
Running the image locally
Running the exact same image from the k8s cluster locally however results in
Now listening on: http://[::]:80
Already tried
As suggested in many posts, I tried setting the environment variable ASPNETCORE_URLS via the Dockerfile or the k8s deployment.yml - neither of which had an impact on the startup url.
I can't seem to figure out why there is a difference between those 2 ways of building an image.
Update
The only thing that seems to work is to add
builder.WebHost.ConfigureKestrel(option => {
    option.ListenAnyIP(80);
});
to the Program.cs.
Still not sure about the reason behind the behaviour though.
A few things:
I assume that the container already running and working on port 80 (docker run) is stopped before attempting to run docker-compose?
Environment variables can be used in docker-compose.yml file
Ports most likely need to be exposed (published) correctly, which, from the Dockerfile and docker-compose.yml, seems not to be the case?
Environment Variables
First off, before ENV ASPNETCORE_URLS=http://+:80 is going to be of any use, your docker-compose file needs to define which ports to use; your docker-compose.yml (at least as trimmed here) does not show any ports.
Perhaps because the ports aren't published, the attempt to bind to 80 via that environment variable fails (already in use/not exposed) and the app somehow falls back to 5000.
Alternatively, and more likely: it does not actually see your ENV ASPNETCORE_URLS.
You can try environment variables directly in your docker-compose file:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Publishing ports
In the docker-compose file you need this to publish ports:
ports:
  - "80"
  - "443"
... or
ports:
  - "80:80"    # "host-port:container-port"
  - "443:1234"
Additional information
The keyword EXPOSE/expose in a Dockerfile/docker-compose.yml is just informative (comments, in a sense); functionally it does not do anything. A port needs to be exposed (published) to be usable.
So those EXPOSE 443 and EXPOSE 80 lines are not telling Docker to publish anything. Perhaps you are normally running your container like this, which is what actually publishes port 80 and makes it available:
docker run -p 127.0.0.1:80:80/tcp image command
In short, use ports keyword in docker-compose.yml.
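Applied to the compose file from the question, that could look roughly like this (a sketch; setting the environment here as well just makes the intent explicit):
services:
  web.api:
    build:
      context: ./../
      dockerfile: ./web.API/Dockerfile
    ports:
      - "80:80"
    environment:
      ASPNETCORE_URLS: http://+:80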
EDIT:
I read your comment above:
But the app is not accessible in k8s when listening to localhost:5000 even with correct service and container configuration
This points to what I am trying to say about ports being published or not. Your port 5000 is not published either, because nothing in your configuration shows that it is.

Assigning port when building flask docker image

I recently created an app using Flask and put the .py file in a Docker container. However, I am confused by online examples where people assign the port.
First of all, at the bottom of my .py file I wrote:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
In some cases I saw people specify the port in CMD when writing the Dockerfile:
CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
In my own experience, the port assigned in CMD didn't work in my case at all. I wish to learn the difference between the two approaches and when to use each.
Regarding this approach:
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8000, debug=True)
__name__ is equal to "__main__" when the app is launched directly with the Python interpreter (executed with the command python app.py) - which is a Python technicality and has nothing to do with Flask. In that case the app.run function is called, and it accepts the various arguments as stated. app.run causes the Werkzeug development server to run.
This block will not be run if you're executing the program with a production WSGI server like gunicorn, as __name__ will not be equal to "__main__" in that case, so the app.run call is bypassed.
In practice, putting the app.run call in this if block means you can run the dev server with python app.py and avoid running the dev server when the same code is imported by gunicorn or similar in production.
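For instance, a production WSGI server imports the same app object rather than executing that block; a sketch, assuming the module is app.py and the Flask object is named app:
gunicorn --bind 0.0.0.0:8000 app:app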
There are lots of older tutorials and posts which reference the if __name__ == "__main__" approach. Modern versions of Flask ship with the flask command, which is intended to replace it. So essentially, even without that if block, you can launch the development server, which imports your app object in a similar manner to gunicorn:
flask run -h 0.0.0.0 -p 8000
This automatically looks for an object called app in app.py, and accepts the host and port options, as you can see from flask run --help:
Options:
  -h, --host TEXT     The interface to bind to.
  -p, --port INTEGER  The port to bind to.
One advantage of this method is that the development server won't crash if you're using the auto reloader and introduce syntax errors. And of course the same code will be compatible with a production server like gunicorn.
With the above in mind, regarding the command you pass:
python app.py --host=0.0.0.0 --port=8000
I'm not sure if you've been confused by references to the flask command's supported options, but for this one to work you'd need to manually write some code to do something with those options. This could be done with a Python module like argparse, but that would probably be redundant given that the flask command supports this out of the box.
To conclude: you should probably remove the if block, and your Dockerfile should contain:
CMD ["flask", "run", "--host=0.0.0.0", "--port=8000"]
You may also wish to check that the FLASK_ENV environment variable is set to development to use the auto reloader, and be aware that the CMD line would need to be changed within this Dockerfile to run with gunicorn or similar in production, but that's probably outside the scope of this question.
CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"] means: Python run application app.py and pass the --host and the --port parameters to that application. It is up to your app.py to do something with those parameters. If your app does not process those flags, then you do not need to add them to the CMD.
If in your code you have app.run(host='0.0.0.0', port=8000), then your app will always be listening on port 8000 inside the container. In this case you can just use CMD ["python3", "app.py"]
If you wanted the ability to change the port and host that your app listens on, then you could add some code to read those values from the command line. Once you set up your app to look at values from the command line, it would make sense to run CMD ["python3", "app.py", "--host=0.0.0.0", "--port=8000"]
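A minimal sketch of what that command-line handling could look like with argparse, assuming app.py defines a Flask app as in the question and the flag names mirror the CMD above:
import argparse

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello"

if __name__ == "__main__":
    # Parse --host/--port so the values given in CMD actually reach app.run
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="0.0.0.0")
    parser.add_argument("--port", type=int, default=8000)
    args = parser.parse_args()
    app.run(host=args.host, port=args.port, debug=True)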

Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable

I built my container image, but when I try to deploy it from the gcloud command line or the Cloud Console, I get the following error: "Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable."
In your code, you probably aren't listening for incoming HTTP requests, or you're listening for incoming requests on the wrong port.
As documented in the Cloud Run container runtime contract, your container must listen for incoming HTTP requests on the port that is defined by Cloud Run and provided in the $PORT environment variable.
If your container fails to listen on the expected port, the revision health check will fail, the revision will be in an error state and the traffic will not be routed to it.
For example, in Node.js with Express, you should use:
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log('Hello world listening on port', port);
});
In Go:
port := os.Getenv("PORT")
if port == "" {
    port = "8080"
}
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
In Python:
app.run(port=int(os.environ.get("PORT", 8080)), host='0.0.0.0', debug=True)
Another reason may be the one I observed: the Docker image may not contain the code required to run the application.
I had a Node application written in TypeScript. To dockerize the application, all I needed to do was compile the code with tsc and run docker build, but I thought gcloud builds submit would take care of that - picking up the compiled code as the Dockerfile, in conjunction with the .dockerignore, suggested - building my source code and submitting it to the repository.
But all it actually did was copy my source code and submit it to Cloud Build, where, as per the Dockerfile, it dockerized my source code rather than the compiled code.
So remember to include a build step in the Dockerfile if your source code is in a language that requires compilation.
Keep in mind that adding the build step to the Dockerfile will increase the image size every time you push an image to the repository; it eats up storage there, and Google is going to charge you for it.
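One way to keep the compile step inside Docker without bloating the final image is a multi-stage build; a minimal sketch, assuming tsc runs via npm run build, emits to dist/, and the entry point is dist/index.js:
# Stage 1: install dev dependencies and compile the TypeScript sources
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: keep only production dependencies and the compiled output
FROM node:18-slim
WORKDIR /app
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]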
Another possibility is that the Docker image ends with a command that takes time to complete. By the time deployment starts, the server is not yet running and the health check hits a blank.
What kind of command would that be? Usually any command that runs the server in dev mode. For Scala/sbt it would be sbt run; in Node it would be something like npm run dev. In short, make sure to run only the packaged build.
I was exposing a PORT in the Dockerfile; removing that fixed my problem. Google injects the PORT environment variable, so the project should pick that variable up.
We can also specify the port number used by the image from the command line.
If we are using Cloud Run, we can use the following:
gcloud run deploy --image gcr.io/<PROJECT_ID>/<APP_NAME>:<APP_VERSION> --max-instances=3 --port <PORT_NO>
Where
<PROJECT_ID> is the project ID
<APP_NAME> is the app name
<APP_VERSION> is the app version
<PORT_NO> is the port number
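For instance, with hypothetical values my-project, my-app and v1, deploying on port 8080 would look like:
gcloud run deploy --image gcr.io/my-project/my-app:v1 --max-instances=3 --port 8080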
Cloud Run generates a default YAML file which has the default port hard-coded in it:
spec:
  containerConcurrency: 80
  timeoutSeconds: 300
  containers:
  - image: us.gcr.io/project-test/express-image:1.0
    ports:
    - name: http1
      containerPort: 8080
    resources:
      limits:
        memory: 256Mi
        cpu: 1000m
So we need to listen on that same port 8080, or change the containerPort in the YAML file and redeploy.
A possible solution could be:
build locally
push the image to Google Cloud
deploy on Cloud Run
With commands:
docker build -t gcr.io/project-name/image-name .
docker push gcr.io/project-name/image-name
gcloud run deploy tag-name --image gcr.io/project-name/image-name

node and react running with docker-compose.yml file

I have a sample application using Node.js and React, so my project folder consists of a client folder and a server folder. The client folder was created using create-react-app.
I have created a Dockerfile for each of the folders, and I am using a docker-compose.yml at the root of the project.
Everything is working fine. Now I just want to host this application. I am trying to use Jenkins.
Since I have little knowledge of the devops side, I have some doubts:
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
2) Also, when I do sudo docker-compose up, it runs perfectly but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run either npm start or npm run build depending on the environment?
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I am running the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will be used instead of running node index.js, while in production it will run only node index.js if I don't set any NODE_COMMAND.
5) Do I need the CMD in the Dockerfile of both client and server, since when I run docker-compose up it takes the command from docker-compose.yml? I think the command from the docker-compose.yml file takes precedence. Is that right?
6) What is the use of volumes? Are they required in the docker-compose.yml file?
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
8) When we are creating containers we have ports like 3000, 3001... How do the container port and our application port get matched? Is that taken care of by the exported environment variables and the ports in the docker-compose.yml file?
Please see the folder structure below:
movielisting
  client
    Dockerfile
    package.json
    package-lock.json
    ... other create-react-app folders like src ...
  server
    Dockerfile
    index.js
  docker-compose.yml
  .env
Dockerfile -- client
FROM node:10.15.1-alpine
#Create app directory and use it as the working directory
RUN mkdir -p /srv/app/client
WORKDIR /srv/app/client
COPY package.json /srv/app/client
COPY package-lock.json /srv/app/client
RUN npm install
COPY . /srv/app/client
CMD ["npm", "start"]
Dockerfile -- server
FROM node:10.15.1-alpine
#Create app directory
RUN mkdir -p /srv/app/server
WORKDIR /srv/app/server
COPY package.json /srv/app/server
COPY package-lock.json /srv/app/server
RUN npm install
COPY . /srv/app/server
CMD ["node", "index.js"]
docker-compose.yml -- root of project
version: "3"
services:
#########################
# Setup node container
#########################
server:
build: ./server
expose:
- ${APP_SERVER_PORT}
environment:
API_HOST: ${API_HOST}
APP_SERVER_PORT: ${APP_SERVER_PORT}
ports:
- ${APP_SERVER_PORT}:${APP_SERVER_PORT}
volumes:
- ./server:/srv/app/server
command: ${NODE_COMMAND:-node} index.js
##########################
# Setup client container
##########################
client:
build: ./client
environment:
- REACT_APP_PORT=${REACT_APP_PORT}
expose:
- ${REACT_APP_PORT}
ports:
- ${REACT_APP_PORT}:${REACT_APP_PORT}
volumes:
- ./client/src:/srv/app/client/src
- ./client/public:/srv/app/client/public
links:
- server
command: npm run start
.env
API_HOST="http://localhost:4000"
APP_SERVER_PORT=4000
REACT_APP_PORT=3000
package.json -- client
"proxy": "http://server:4000"
What all can I refactor?
Any help is appreciated.
1) If I use two Dockerfiles, one for the client and one for the server, and they are started by docker-compose.yml, will they run in two different containers or in a single container? From what I have read I think it will create two containers; that's the point of the docker-compose.yml file. I'm a little bit confused about this.
Each Dockerfile will build a Docker image, so in the end you will have two images: one for the React application and one for the Node.js backend. When docker-compose starts them, each image runs as its own container.
2) Also, when I do sudo docker-compose up, it runs perfectly but it shows "to create a production build use npm run build". How can I change this based on the environment? Do I need to create a different docker-compose.yml file for each environment? How can I use the same file but run either npm start or npm run build depending on the environment?
You need to build the React application within the steps in its Dockerfile in order to serve it as a production application. You can also use environment variables to customize the image during the build, using build args, for example to pass a custom path or anything else.
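For example, a build argument could be passed from docker-compose.yml into the client image; a rough sketch, assuming a hypothetical REACT_APP_API_URL value that the client Dockerfile declares with ARG (and re-exports with ENV) before its npm run build step:
services:
  client:
    build:
      context: ./client
      args:
        REACT_APP_API_URL: ${API_HOST}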
3) Can I use the docker-compose.yml file for building the pipeline in Jenkins, or do I need a Dockerfile at the root of the project? I have seen most projects having a single Dockerfile. Does that mean I am not able to use docker-compose.yml for hosting the application?
It would be better to use the Dockerfile(s) with Jenkins to build your images, and to keep the docker-compose.yml file(s) for deploying the application itself, without using the build keyword.
4) The reason I use NODE_COMMAND for the server in the command property of the docker-compose.yml file is that when I am running the application locally I need auto reloading, so if I set NODE_COMMAND=nodemon in the terminal it will be used instead of running node index.js, while in production it will run only node index.js if I don't set any NODE_COMMAND.
Using command inside the docker-compose.yml file overrides the CMD that was set in the Dockerfile during the build step.
5) Do I need the CMD in the Dockerfile of both client and server, since when I run docker-compose up it takes the command from docker-compose.yml? I think the command from the docker-compose.yml file takes precedence. Is that right?
Generally speaking, yes, you need it; however, as long as you intend to override it from the docker-compose file, you could set it to something simple like CMD ["node", "--help"].
6) What is the use of volumes? Are they required in the docker-compose.yml file?
Volumes are needed when you share files between containers or need to keep data persistent on the host.
7) In the .env file I am using API_HOST and APP_SERVER_PORT. How do they work internally with the package.json? Is it doing the proxy thing? When we need to hit Node.js we usually give "proxy": "http://localhost:4000", but here how will it take http://server:4000? How does this work?
server is an alias for the Node.js container inside the Docker network once you start your application. And why is it named server? Because you have it inside your docker-compose.yml file in this part:
services:
  server:
But of course you can change it by adding an alias to it under the networks keyword inside the docker-compose.yml file.
Note: React itself is client-side, which means it runs in the browser, so it won't be able to contact the Node.js application through the Docker network; you may use the host IP itself, or use localhost and make the Node.js app accessible through localhost.
8) When we are creating containers we have ports like 3000, 3001... How do the container port and our application port get matched? Is that taken care of by the exported environment variables and the ports in the docker-compose.yml file?
Docker itself does not know which port your application is using, so you have to make both of them use the same port; in Node.js this is achievable with an environment variable, as sketched below.
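For example, the server's index.js might read the same APP_SERVER_PORT that docker-compose publishes, so the container port and the application port always match (a sketch, assuming an Express app object):
// Use the port injected via docker-compose, falling back to 4000 locally
const port = process.env.APP_SERVER_PORT || 4000;
app.listen(port, () => console.log(`Server listening on port ${port}`));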
For more details:
https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#aliases
https://docs.docker.com/compose/compose-file/#command
https://facebook.github.io/create-react-app/docs/deployment
If anyone is facing issues connecting React and Express, make sure there is NO localhost attached to the server API address in the client code (e.g. http://localhost:5000/api should be changed to /api), since the proxy entry is there in the package.json file.
PS: if no entry is there, add
{
  "proxy": "http://server:5000"
}
to package.json ('server' is your Express app container name in the docker-compose file).
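A client-side call would then use the relative path and let the proxy forward it (a sketch, assuming a hypothetical /api/movies endpoint):
fetch("/api/movies")
  .then((res) => res.json())
  .then((movies) => console.log(movies));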
I finally made it work; thought of sharing this in case it helps anyone else.

Linode/lamp + docker-compose

I want to install the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose because I find it more convenient to simply type docker-compose up and be good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are to be run when building the image.
In fact many base images like the Debian ones are specifically designed to not allow starting any services during build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container though; you do this via two lines in the Dockerfile. Overall, I'd use this Dockerfile:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that the container stays running and those services actually get started.
What you should also look out for is that port 80 is actually available on your host machine. If you have anything bound to it already, this compose file will not work.
Should that be the case for you (or you're not sure), try changing the port line to something like 81:80 and try again.
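For example, the compose file from the question could map the container's port 80 to port 81 on the host (a sketch):
web:
  build: .
  ports:
    - "81:80"
  volumes:
    - .:/var/www/example.com/public_html/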
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it here:
https://github.com/sprintcube/docker-compose-lamp
