I've created a Dockerfile that successfully runs my Laravel 8 application locally. This is the content of the Dockerfile, located in my Laravel project root:
FROM webdevops/php-nginx:8.0-alpine
WORKDIR /app
COPY . .
RUN chmod 777 -R ./storage
ENV WEB_DOCUMENT_ROOT=/app/public
RUN composer install
I built the image and ran it locally:
docker build . -t gcr.io/my-project/my-image
docker run -p 5000:80 gcr.io/my-project/my-image
The container starts and the application runs as expected. No problems. If I shell into the container, I can see the ports that nginx is listening on:
> netstat -nlp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 49/nginx -g daemon
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 49/nginx -g daemon
As you can see, the container is listening on port 80 on all network interfaces (0.0.0.0). This conforms to the Cloud Run documentation and its troubleshooting guide:
https://cloud.google.com/run/docs/troubleshooting#port
A common issue is to forget to listen for incoming requests, or to
listen for incoming requests on the wrong port.
As documented in the container runtime contract, your container must
listen for incoming requests on the port that is defined by Cloud Run
and provided in the PORT environment variable.
If your container fails to listen on the expected port, the revision
health check will fail, the revision will be in an error state and the
traffic will not be routed to it.
https://cloud.google.com/run/docs/troubleshooting#listen_address
A common reason for Cloud Run services failing to start is that the
server process inside the container is configured to listen on the
localhost (127.0.0.1) address. This refers to the loopback network
interface, which is not accessible from outside the container and
therefore Cloud Run health check cannot be performed, causing the
service deployment failure.
To solve this, configure your application to start the HTTP server to
listen on all network interfaces, commonly denoted as 0.0.0.0.
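That contract can be exercised locally without deploying anything: set a PORT value and confirm something binds 0.0.0.0 on it. A minimal stand-in sketch, assuming python3 and curl are available (this is not the Laravel image, just the contract itself):

```shell
# Stand-in server honoring the Cloud Run contract: bind 0.0.0.0 on $PORT.
PORT="${PORT:-8080}"
python3 -m http.server "$PORT" --bind 0.0.0.0 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# The startup health check boils down to: does anything answer on that port?
curl -fsS "http://127.0.0.1:$PORT/" >/dev/null && echo "contract satisfied"
kill "$SERVER_PID"
```

The same probe can be pointed at a locally running container started with `-e PORT=... -p ...` to see whether the server inside actually follows the injected value.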
From what I can tell, I have a working Docker container listening on the correct port. When I deploy to Google Cloud Run, I receive an error:
> gcloud run deploy my-service --image gcr.io/my-project/my-image --project my-project --port 80
...
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
In the Cloud Run console I can see that the service is configured with a PORT of 80, as you would expect from the --port 80 included in the deployment command. I am having trouble figuring out why this isn't working; it seems like I've done everything right.
Does anybody have any idea what might be going wrong here?
This is what I see in the deployment log on Google Cloud:
Maybe the issue is related to the third line that says ln -f -s /var/lib/nginx/logs /var/log/nginx?
It looks like I'm not the only person to have this issue with this base image:
https://github.com/webdevops/Dockerfile/issues/358
I still don't know what the problem is, but it seems to affect other people trying to use this image specifically with Cloud Run.
Related
My system is composed of two parts: a Postgres local_db, and a Node.js Express server that communicates with it via the Prisma ORM. Whenever the Node.js server receives a GET request at localhost:4000/, it replies with a 200-code message, as shown in the code:
app.get("/", (_req, res) => res.send("Hello!"))
This behavior is later used for a health check.
The database is instantiated by the docker-compose.yml (I omit parts not related to networking):
services:
timescaledb:
image: timescale/timescaledb:2.8.1-pg14
container_name: timescale-db
ports:
- "5000:5432"
And a Node.js backend runs in a container, whose Dockerfile is (omitting the parts related to the Node.js build):
FROM node:18
# Declare and set environment variables
ENV TIMESCALE_DATABASE_URL="postgres://postgres:password@localhost:5000/postgres?connect_timeout=300"
# Build app
RUN npm run build
# Expose the listening port
EXPOSE 4000
# Run container as non-root (unprivileged) user
# The node user is provided in the Node.js base image
USER node
CMD npx prisma migrate deploy; node ./build/main.js
The container is made to run via:
docker run -it --network host --name backend my-backend-image
However, despite the container actually finding and successfully connecting to the database (and populating it), I cannot access localhost:4000 from the host machine; it tells me the connection was refused. Using curl I get the same reply:
$ curl -f http://localhost:4000
curl: (7) Failed to connect to localhost port 4000: Connection refused
I have even tried connecting to the actual loopback IP, 127.0.0.1:4000, but the connection is still refused, and to the Docker bridge address http://172.17.0.1:4000, but there the connection keeps hanging.
I do not understand why I cannot access it, given that I set the --network host flag when running the container, which should make the container share the host's network stack, so its ports are the host's ports.
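One detail worth noting when debugging this: "connection refused" and a hanging connection point at different layers. A quick sketch of the two failure modes at the socket level (plain Python via a heredoc, no Docker involved; port 4 is used only because nothing should be listening there):

```shell
# "Refused" means the host answered but nothing is bound there (app/bind problem).
# A hang/timeout means packets are being dropped before reaching any socket
# (firewall, wrong network, Docker Desktop's VM boundary, etc.).
python3 - <<'EOF'
import socket
probe = socket.socket()
probe.settimeout(3)
try:
    probe.connect(("127.0.0.1", 4))   # almost certainly no listener here
    print("unexpectedly connected")
except ConnectionRefusedError:
    print("refused: reachable host, but no server bound on that address/port")
except socket.timeout:
    print("timeout: traffic is being dropped before it reaches a socket")
finally:
    probe.close()
EOF
```

Seeing "refused" on localhost but a hang on 172.17.0.1 therefore suggests two different causes, not one.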
I have successfully built my web app image and run a container on my server, an EC2 instance, with no errors at all. But when I try to access the web page through the bound port of the host server, I get no connection. The build and run processes gave absolutely no errors, neither build errors nor connection errors. I'm new to both Docker and AWS, so I'm not sure what could be the problem. Any help from you guys is really appreciated. Thanks a lot!
Here is my Dockerfile
FROM ubuntu
WORKDIR /usr/src/app
# install dependencies, nothing wrong
RUN ...
COPY...
#
# exposed ports
EXPOSE 5000
EXPOSE 5100
CMD ...
Docker build
$ sudo docker build -t demo-app .
Docker run command
$ sudo docker run -it -p 8080:5000 -p 808:5100 --name demo-app-1 demo-app
You say you accessed it through the bound port of the host server; that means the application is running and you can reach it with curl localhost:8080.
If the application responds on localhost after SSHing into the EC2 instance, there are mainly two issues to check:
The security group may not allow connections on the desired port; allow 8080 and check again.
The instance may be in a private subnet; verify the instance's subnet and routing.
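For the first check, the group's ingress rules can be listed with the AWS CLI. A sketch only: the group ID below is a placeholder, so substitute the instance's actual security group, and AWS credentials must already be configured:

```shell
# Hypothetical group ID; find the real one with:
#   aws ec2 describe-instances --instance-ids <your-instance-id> \
#     --query 'Reservations[0].Instances[0].SecurityGroups'
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions'
```

If port 8080 does not appear in the returned permissions with a source that covers your client IP, that is the missing piece.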
I am trying to run a standard nginx container on one of my GCP VMs. When I run
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I get the following error:
Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
However, it is a clean VM instance I created. During VM creation I also checked the HTTP option to make sure port 80 is open (I need to add HTTPS later, but this is my first deployment test).
The image does work locally, so it seems to be a Google Cloud Platform configuration thing, I guess.
It was my own silly error; sorry for asking the SO community.
So what did I do wrong? I had connected through the web client, which means port 80 was already in use, causing all this havoc. :(
So just SSH in and try again, and it works.
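For anyone hitting the same message, a quick way to check whether host port 80 is already held before publishing onto it (ss is part of iproute2 on most Linux systems; run it as root with -p to also see the owning process):

```shell
# Look for an existing listener on :80 before running docker with -p 80:80.
if ss -ltn 2>/dev/null | grep -q ':80 '; then
  echo "port 80 is taken"
else
  echo "port 80 looks free"
fi
```

The same check works for any other host port you intend to publish.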
I tried to reproduce the issue on my end, but I did not find any error. Here are the steps I took.
First I spun up a Debian VM instance on Google Cloud Platform and allowed incoming HTTP traffic in the firewall for that VM instance so that I could access the site from outside.
Then I installed Docker in the VM instance. I followed this link.
After that, I made sure that the HTTP port was free in the VM instance, using the command below:
netstat -an | egrep 'Proto|LISTEN'
You may check the link here.
At this point, I issued the docker command you provided.
docker run -it --rm -p 80:80 tiangolo/uwsgi-nginx-flask:python3.6
I did not get any error, and I could access the nginx page:
“Hello World from Flask in a uWSGI Nginx Docker container with Python 3.6 (default)”
If you spin up a new VM with the same Docker version, do you have the same issue? What kind of image is your VM running?
I am trying to run this project - https://github.com/JumboInteractiveLimited/codetest
I've downloaded the Docker Toolbox, and I've executed the build and run commands as mentioned on the GitHub page, but when I try to access http://localhost:8080, the page is still unavailable.
When I try to execute run again, Docker says
"$ ./run.sh
Listening on http://localhost:8080
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint quirky_mcnulty (32af4359629669ee515cbc07d8bbe14cca3237979f37262882fb0288f5d6b6b8): Bind for 0.0.0.0:8080 failed: port is already allocated."
Edit: To clarify, I get that error only when I run it the second time. The first time I ran the run command, it didn't complain; I ran it another time just to confirm that it was running.
When I initially ran, I got the following:
$ ./run.sh
Listening on http://localhost:8080
2017/10/24 13:51:53 Waiting...
The issue seems quite clear
port is already allocated
which means that some other program is listening on port 8080.
If you are on a Linux system you can try to run
sudo lsof -i :8080
to find out what it is.
Otherwise, simply use another port.
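If you switch ports, the OS can also pick a free one for you. A small sketch, using plain Python via a heredoc as a portable probe:

```shell
# Ask the kernel for any unused port; use the printed number in run.sh.
python3 - <<'EOF'
import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))    # port 0 means "any free port"
print("free port:", s.getsockname()[1])
s.close()
EOF
```

The port is released again when the probe exits, so there is a small race, but for a one-off local setup it is a convenient way to avoid guessing.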
Change run.sh to replace port 8080 with 8082:
#!/bin/bash
echo "Listening on http://localhost:8082"
docker run -p 8082:80 codetest
Here the port is changed to 8082; if that one is also in use, change it again to some other available port.
If you are on Windows:
netsh interface portproxy add v4tov4 listenport=8082 listenaddress=localhost connectport=8082 connectaddress=192.168.99.100 (the IP of the Docker machine)
Here is a helpful discussion on port forwarding on Windows with Docker: Solution for Windows hosts
I'm using Docker for Mac. I have a container that runs a server on port 5000, for example, and I have exposed this port in the Dockerfile.
When my container is running, I connect to it and check whether the server is working by running the command below, and I see that it returns data (a bunch of HTML and JavaScript):
wget -d localhost:5000
Note: I started this container and also published the port with:
docker run -d -p 5000:5000 <docker_image_name>
But on the Docker host (my Mac, running El Capitan), when I open Chrome and go to localhost:5000, it doesn't work. Just a little note: if I go to an arbitrary unused port such as localhost:4000, I see an error message from Chrome such as:
This site can’t be reached
localhost refused to connect.
But error message for localhost:5000 is:
The localhost page isn’t working
localhost didn’t send any data.
So it seems my configuration works "a little", but something is wrong. Please tell me how to fix this.
Please check that the program in the container is listening on the 0.0.0.0 interface.
In the container, run this command:
ss -lntp
If the output looks like:
LISTEN 0 128 127.0.0.1:5000 *:*
that means your web app only listens on localhost, so the container host cannot reach it. You should make your server listen on the 0.0.0.0 interface by changing your web app's configuration.
For example, if your server is a Node.js app:
var app = connect().use(connect.static('public')).listen(5000, "0.0.0.0");
If your server uses webpack-dev-server:
"scripts": {
"dev": "webpack-dev-server --host 0.0.0.0 --port 5000 --progress"
}
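What ss -lntp reports is simply the address each socket was bound with, and the difference can be reproduced without Docker at all. A small sketch using Python sockets bound to ephemeral ports:

```shell
# A socket bound to 127.0.0.1 is reported with that address; one bound to
# 0.0.0.0 is reported with the wildcard. Only the latter is reachable
# through Docker's published-port forwarding.
python3 - <<'EOF'
import socket
for addr in ("127.0.0.1", "0.0.0.0"):
    s = socket.socket()
    s.bind((addr, 0))       # port 0 means "any free port"
    s.listen(1)
    print("listening on %s:%d" % s.getsockname())
    s.close()
EOF
```

So if ss shows 127.0.0.1:5000 rather than 0.0.0.0:5000 (or *:5000), the fix belongs in the app's listen address, not in the docker run flags.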
I had this problem using Docker for Mac, trying to run an Angular 2 app.
I fixed the issue by changing the start script in package.json to:
"scripts": {
"start": "ng serve --host 0.0.0.0"
}
Previously start was only ng serve.