I faced some teething issues when trying to deploy my Docker image, which contains a simple Streamlit app, to Heroku. My issue is that I am unable to access my app after deployment. On closer inspection, I discovered the following error:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
I researched this and understood that it happens because the port is unavailable, since Heroku assigns the port number dynamically through the $PORT environment variable.
I thought I had prevented this by putting the following in my Dockerfile.
Dockerfile:
FROM python:3.7
COPY . /app
WORKDIR /app
RUN pip install streamlit
ENTRYPOINT ["streamlit","run", "--server.enableCORS", "false" ,"--server.port", "$PORT"]
CMD ["app.py"]
I am now able to see that the Network URL and External URL port numbers are assigned by Heroku, as they are not the typical 8501.
What puzzled me, however, is why the container is unable to bind to the given dynamic port number. I thought the app would be using that number?
The problem is that $PORT does not get replaced with the value of the environment variable when the container is run on the Heroku Docker registry: the exec form of ENTRYPOINT does not invoke a shell, so no variable expansion takes place and Streamlit is handed the literal string "$PORT".
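A minimal sketch of the usual workaround (assuming the same app.py as above): use the shell form of the command, so that /bin/sh expands $PORT at container start.
FROM python:3.7
COPY . /app
WORKDIR /app
RUN pip install streamlit
# Shell form: Docker runs this through /bin/sh -c, so $PORT is expanded at runtime
CMD streamlit run --server.enableCORS false --server.port $PORT app.py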
An alternative is to create a Dockerfile which invokes a .sh script:
FROM python:3.7
COPY . /app
WORKDIR /app
RUN pip install streamlit
RUN chmod +x /app/startup.sh
ENTRYPOINT ["/app/startup.sh"]
and the startup.sh (the shebang ensures a shell expands $PORT at runtime):
#!/bin/sh
echo PORT $PORT
streamlit run --server.enableCORS false --server.port $PORT app.py
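To sanity-check this locally before pushing to Heroku, you can simulate the dynamically assigned port yourself (the image tag streamlit-app and the port value 8501 are arbitrary choices here):
docker build -t streamlit-app .
docker run -e PORT=8501 -p 8501:8501 streamlit-app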
Related
I cannot access a container with an ASP.NET Core 3.1 application running inside it.
The goal is to run the application in a container on port 5000. When I run it locally using the standard VS profile, I navigate to http://localhost:5000/swagger/index.html to load the Swagger UI. I would like to achieve the same thing using Docker.
Steps to reproduce my issue:
Add a Dockerfile with port 5000 exposed and the ASPNETCORE_URLS environment variable set:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["myapp/myapp.csproj", "myapp/"]
RUN dotnet restore "myapp/myapp.csproj"
COPY . .
WORKDIR "/src/myapp/"
RUN dotnet build "myapp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
Build the image:
docker build -t myapp .
Run the Docker image:
docker run myapp -p 5000:5000
Running the commands above with this Dockerfile results in the following output:
[21:28:42 INF] Starting host.
[21:28:42 INF] Now listening on: http://[::]:5000
[21:28:42 INF] Application started. Press Ctrl+C to shut down.
[21:28:42 INF] Hosting environment: Production
[21:28:42 INF] Content root path: /app
However, I can't access the container at http://localhost:5000/swagger/index.html; I get ERR_CONNECTION_REFUSED (This site can't be reached).
To check that the host really was running, I got into the container using:
docker exec -it containerId /bin/bash
cd /app
dotnet myapp.dll
which resulted in the following error:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:5000: address already in use.
My conclusion is that the port inside the container is in use and the application is alive; it's just not accessible from outside, and I don't know how to reach it.
Please point me in the right direction.
UPDATE
The issue is solved; the answer is posted below. However, an explanation of why it was needed and how it works would be nice!
To solve the issue I had to manually add "--server.urls" to the entrypoint, as shown below:
ENTRYPOINT ["dotnet", "myapp.dll", "--server.urls", "https://+:5000"]
I solved the same issue in the following way:
Added the following to appsettings.json to force Kestrel to listen on port 80:
"Kestrel": {
"EndPoints": {
"Http": {
"Url": "http://+:80"
}
}
}
Exposed the port in the Dockerfile:
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "EntryPoint.dll"]
Ran the container using the command below:
docker run -p 8080:80 <image-name>:<tag>
The app is then exposed at http://localhost:8080/.
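A quick way to verify the mapping from the host (using the Swagger path from the question above):
curl http://localhost:8080/swagger/index.html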
TL;DR: I am trying to deploy my MERN stack application to GCP's Cloud Run, and I am struggling with what I believe is a port issue.
My React application lives in a client folder inside my Node.js application.
Here is the single Dockerfile I use to run both the front-end and the back-end:
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
# Install back-end dependencies
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
... and here is my entrypoint.sh file:
#!/bin/sh
# Start the Node back-end in the background, then run the React app in the foreground
node /app/index.js &
cd /app/client
npm start
docker-compose up works locally, and docker run -p 8080:8080 -p 3000:3000 <image_id> runs the image I built. Port 8080 is for the Node API and port 3000 is for the React app. On Cloud Run, however, the app does not work: when I visit the deployed app, the frontend loads for a split second, but then the app crashes as it attempts to make requests to the API.
In the Advanced Settings there is a container port, which defaults to 8080. I've tried changing it to 3000, but neither value works, and I cannot enter "8080,3000", since the field accepts only a single integer. Is it possible to deploy React + Node to Cloud Run like this at all? How can I have Cloud Run listen on both 8080 and 3000, as opposed to just one of the two?
Thanks!
It's not currently possible: Cloud Run routes all traffic to a single container port.
You can still run multiple processes inside one Cloud Run container, but you need something like nginx in front to proxy requests between them depending on the URL, similar to what's recommended in this answer.
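A rough sketch of such an nginx configuration, with several assumptions about your app: nginx takes the container port 8080, the Node API is moved to 8081, the React app stays on 3000, and API routes live under /api:
server {
    # nginx owns the single port Cloud Run routes traffic to
    listen 8080;

    # assumed API prefix; the Node API is assumed to have been moved to 8081
    location /api/ {
        proxy_pass http://127.0.0.1:8081;
    }

    # everything else goes to the React app
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}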
I've created an image using this Dockerfile...
FROM node:8
# Create application directory
WORKDIR /usr/src/app
# Install application dependencies
# By only copying the package.json file here, we take advantage of cached Docker layers
COPY package.json ./
RUN npm install
# This will install dev dependencies as well.
# If dev dependencies have been set, use npm install --only=production when deploying to production.
# Bundle app source code
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
But when I run it using $ docker run -d --rm -p 3000:3000 62, I can't curl the API running inside the container from the Docker host (OS X) using curl http://localhost:3000/About.
If I exec into the container, I get a valid response from the API via curl. It looks like a Linux firewall inside the container, but I don't see one running.
Any ideas?
Your Node server is most likely not listening on all interfaces; make sure it binds to 0.0.0.0 instead of 127.0.0.1.
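For example, with Node's built-in http module (a minimal sketch; your server.js presumably uses a framework, but the listen call is the part that matters):
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// 0.0.0.0 listens on all interfaces, so the published port is reachable
// from the Docker host; '127.0.0.1' would only accept in-container traffic.
server.listen(3000, '0.0.0.0');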
I am totally new to Docker, and the client I am working for has sent me a Dockerfile and a .dockerignore file, presumably to set up the work environment.
So this is basically what he sent me:
FROM node:8
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY assets ./assets
COPY server ./server
COPY docs ./docs
COPY internals ./internals
COPY track ./track
RUN npm run build:dll
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
with the docker build and run commands (which he also provided):
docker build -t reponame:tag .
docker run -p 3000:3000 admin-web:v1
First, can someone tell me what COPY . . means?
He asked me to configure all the ports accordingly. From going through videos, I remember that we can map ports like this: -p 3000:3000. But what does configuring ports mean, and how can I do it? Any relevant article on this would also be helpful. Do I need to make a docker-compose file?
. is the current directory in Linux. So basically, COPY . . copies the current local directory (the build context) into the container's current working directory.
The -p switch is used to configure port mapping: -p 2900:3000 publishes your local port 2900 to the container's port 3000, so that the container can be reached from the outside (by your web browser, for instance). Without that mapping, the port would not be accessible from outside the container. It is still available to other containers inside the same Docker network, though.
You don't need a docker-compose.yml file, but it will certainly make your life easier, because then you can just run docker-compose up to start the container instead of having to run
docker run -p 3000:3000 admin-web:v1
every time you want to start your application.
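A minimal docker-compose.yml for this image could look like the following (the service name is my own arbitrary choice; the image tag and port are taken from the commands above):
version: "3"
services:
  admin-web:
    image: admin-web:v1
    ports:
      - "3000:3000"
With that file in place, docker-compose up starts the application and docker-compose down stops it.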
By the way, here is one of the ultimate Docker cheat sheets, which I got from a friend; it might help you: https://gist.github.com/ruanbekker/4e8e4ca9b82b103973eaaea4ac81aa5f
I have a default ASP.NET Core Dockerfile (as created by the VS Tools for Docker):
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "myapp.dll"]
When I run my image using docker run myimage, I get this message in an interactive console:
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
If I then press Ctrl+C and type docker start <containerid>, I no longer see this message and my bash console is not blocked.
How can I do docker run while bypassing this annoying message?
You can use the -d flag with docker run so that the container starts in detached mode. In this case you will not see any output, but the container will keep running in the background.
docker run -d myimage
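If you still want to see that startup output later, you can read it from the container's logs:
docker logs <container-id>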