I have a simple container created with the Dockerfile below:
FROM nginx:1.22-alpine
COPY src/html /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]
It's working fine on the local machine, but when it's deployed to GKE and I try to access the external IP, I get the error below in the log.
Can you please advise?
I'm trying to get the same webpage I get on the local machine.
My computer is behind a proxy
I built a Docker image with a Shiny app and with these lines in the Dockerfile:
ENV http_proxy=myproxy.fr:5555
ENV https_proxy=myproxy.fr:5555
When I run the container, my Shiny app starts well, but it stops 2 minutes later because it can't access the internet. In the log file there is this error:
Warning in file(file, "rt") :
unable to connect to 'www.openstreetmap.org' on port 80.
Warning: Error in file: cannot open the connection
And the Shiny app works well outside Docker, even behind the proxy.
It appears that the ENV variables are only set for the root user.
Any clue how to deal with this proxy issue in Docker?
Thanks
I added the proxy variables in
/etc/R/Renviron.site
while building the image and it works now ;-)
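For reference, a minimal sketch of that fix as Dockerfile lines (the proxy host and port are the ones from the question; adjust them for your environment):

```dockerfile
# Append the proxy settings to R's site-wide environment file so every
# R session picks them up, not just the root user's shell (hypothetical values)
RUN echo 'http_proxy=myproxy.fr:5555' >> /etc/R/Renviron.site \
 && echo 'https_proxy=myproxy.fr:5555' >> /etc/R/Renviron.site
```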
I am facing this issue:
(gcloud.run.deploy) Cloud Run error: Container failed to start. Failed
to start and then listen on the port defined by the PORT environment
variable. Logs for this revision might contain more information.
There are a few posts with this error, but I couldn't find my particular case.
I am running a background task with nothing to expose: it connects to Firebase, processes some data, and stores it back. I wanted this process to run in a container on Cloud Run, so I made it a container, which runs perfectly locally, but when I upload it to Cloud Run it fails with the above error.
I tried exposing 8080 in the Dockerfile and a few more things, but if you try to connect to the container there is no server running to connect to. It is a batch task.
Can anyone tell me if it is possible at all to run this type of task on Cloud Run? I do not know how to solve the issue. I can't believe Google requires a server running in the container to allow it; I saw some posts where devs pull nginx into the image so they can expose the port, but that would be totally unnecessary in my case.
Thanks for your advice
UPDATE
Cloud Logging: the error simply says the container failed to start, which is funny because the container does start and even shows some logs as if it were working, but then it stops.
Built on Mac, yes.
The Dockerfile is pretty simple:
FROM openjdk:11
ENV NOTIFIER_HOME /opt/app/
ENV NOTIFIER_LOGS /opt/notifications/logs/
RUN mkdir -p $NOTIFIER_HOME
RUN mkdir -p $NOTIFIER_LOGS
RUN apt update
#RUN apt install curl
COPY docker/* $NOTIFIER_HOME
EXPOSE 8080
ENV TMP_OPTS -Djava.io.tmpdir=/tmp
ENV LOG4j_OPTS -Dlog4j.configurationFile=$NOTIFIER_HOME/logback.xml
ENV NOTIFIER_OPTS $TMP_OPTS $LOG4j_OPTS
ENV JAVA_GC_OPTS -Xms1g -Xmx1g
WORKDIR $NOTIFIER_HOME
ENTRYPOINT ["sh", "-c", "/opt/app/entrypoint.sh"]
You can't run background jobs on Cloud Run. Wrap the task in a webserver as proposed by MBHA if the process takes less than 1 hour.
Otherwise you can use GKE Autopilot to run your container for a while: you pay only while your container runs, and the first cluster is free, so you can give it a try!
As a hack you can also run your container in Cloud Build, or in Vertex AI custom container training.
I've run into a similar issue building a custom image on Mac and deploying to Cloud Run. In my case, it turned out to be the Docker platform causing the problem. The way I isolated this was by building the same image in Cloud Shell, which worked perfectly fine in Cloud Run.
Now, if you need to build it locally on Mac, go ahead and test it by changing the Docker platform:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker build -t mytag:myver .
Once the image has been built, you can inspect the architecture:
docker image inspect mytag:myver | grep -i Architecture
Then deploy it to Cloud Run.
The explanation is in your question:
I am running a background task, nothing to expose
A Cloud Run application (so, your container) must listen for incoming HTTP requests, as stated in the Container runtime contract. That's why all Cloud Run examples (Java, in your case) use Spring Boot with @RestController. Another explanation can be found in this answer.
Update:
So the solution is either to
add a webserver to your code and wrap it with Spring Boot and controller logic, or
use Cloud Functions rather than Cloud Run, get rid of the Dockerfile, and at the same time have simpler code and less configuration.
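If Spring Boot feels heavy just to satisfy the contract, a minimal sketch of the "wrap it in a webserver" option can use only the JDK's built-in com.sun.net.httpserver; the class name and the batch stub below are illustrative, not from the question:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class BatchWrapper {

    // Start the HTTP listener that satisfies Cloud Run's container contract.
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // Placeholder for the existing batch work from the question.
    static void runBatchTask() {
        // ... connect to Firebase, process data, store it back ...
    }

    public static void main(String[] args) throws Exception {
        // Cloud Run tells the container which port to listen on via $PORT.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        new Thread(BatchWrapper::runBatchTask).start();
        start(port); // keep answering health checks while the batch runs
    }
}
```

With this, the revision passes the startup check while the batch thread does the real work in the background.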
I have a Java application (.tar) mounted to a container. The entrypoint of the container starts that application.
Dockerfile (the backend folder is mounted into the image as a volume)
FROM openjdk:11.0.7
ENTRYPOINT /backend/entrypoint.sh
entrypoint.sh
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -Xmx2048M -jar backend.jar
Now I want to debug that running application using VSCode's debugger. According to the official VSCode documentation (blog: inspecting containers) this can easily be done with the command palette and the command Debugger: attach to Node.js process.
But in their example they use a Node.js server. In my container, however, there is no Node.js process that I could attach the debugger to, and I can't find an appropriate command for a Java Spring application. So how can I attach VSCode's Java debugger to a Java application which is already running inside a Docker container?
At another place in their documentation (containers: debug common) they state the following:
The Docker extension currently supports debugging Node.js, Python, and .NET Core applications within Docker containers.
So no mention of Java there but then again at another place (remote: debugging in a container) they clearly talk about a Java application:
For example, adding this to .devcontainer/devcontainer.json will set the Java home path:
"settings": { "java.home": "/docker-java-home" }
I got a Spring Boot app to run in an openjdk:11-jre-slim container and was able to successfully debug it with the following configuration.
First, set the JVM args when running your container. You could do this via entrypoint.sh,
but I decided to override my container's entrypoint in docker-compose. I also expose the debug port:
ports:
- 5005:5005
entrypoint: ["java","-agentlib:jdwp=transport=dt_socket,address=*:5005,server=y,suspend=n","-jar","app.jar"]
Then add this configuration to your launch.json in vscode:
{
"type": "java",
"name": "Debug (Attach)",
"projectName": "MyProjectName",
"request": "attach",
"hostName": "127.0.0.1",
"port": 5005
}
You can now start your container and select "Debug (Attach)" under RUN AND DEBUG in VScode. This will begin your typical debug session with breakpoints, variables, etc...
If you set up your run command like this
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -jar App.jar
(or however you like to invoke it; the important bit is the options)
Then make your Docker container expose that port. I usually use a docker-compose file for that, so you can easily map the port however you like at run time.
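A minimal docker-compose fragment for that mapping might look like this (the service and image names are illustrative; the published port matches the address=8000 chosen above):

```yaml
services:
  app:
    image: my-java-app        # hypothetical image name
    ports:
      - "8000:8000"           # publish the JDWP debug port
```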
I'm running an HTTP Azure Function (V2) inside a Docker container. I used a Dockerfile to build the container and it's running, but I have a few doubts:
Why is the Azure Functions Dockerfile different from a .NET Core web project's Dockerfile? There is no ENTRYPOINT, so how is it running?
When using an HTTP trigger function in a Docker Linux container, is it running through some web server or self-hosted? I believe it is self-hosted; am I correct?
The relevant base Dockerfile should be this one: https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/alpine/amd64/dotnet.Dockerfile
As you can see there, the WebHost gets started, which also answers your second question: yes, it's self-hosted.
CMD [ "/azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost" ]
I'm trying to publish a Web API on Docker.
I'm using a Dockerfile with the following content:
FROM microsoft/dotnet
COPY . /dotnetapp
WORKDIR /dotnetapp
RUN dotnet restore
EXPOSE 5000
ENTRYPOINT dotnet run
I can build and run the image, but I'm not able to access the Web API.
It seems you have to specify which URL Kestrel will listen on; otherwise it won't accept any connections from outside the container.
So your ENTRYPOINT should be something like
ENTRYPOINT ["dotnet", "run", "--server.urls=http://0.0.0.0:5000"]
Including the --server.urls argument is vital to allow inbound connections from outside the container. Without it, Kestrel will reject any connection that is not coming from inside the container itself, which is not very useful.
Reference https://www.sesispla.net/blog/language/en/2016/05/running-asp-net-core-1-0-rc2-in-docker/
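Note that --server.urls dates from the RC2 era; on current ASP.NET Core versions the same binding is usually done with the --urls switch or the ASPNETCORE_URLS environment variable, for example in the Dockerfile:

```dockerfile
# Modern equivalent: bind Kestrel to all interfaces via environment
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
ENTRYPOINT ["dotnet", "run"]
```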