Cannot connect to Kestrel server running in a local Docker container

I have a Kestrel server that returns Hello World to any HTTP request.
using System.Text;
using System.Threading;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

static class Program
{
    static void Main(string[] args)
    {
        var webHostBuilder = WebHost.CreateDefaultBuilder(args)
            .UseSetting("applicationName", "Hello World")
            .Configure(builder => {
                builder.Run(async context =>
                {
                    var textBytes = Encoding.UTF8.GetBytes("Hello World!");
                    await context.Response.Body.WriteAsync(textBytes, 0, textBytes.Length, default(CancellationToken));
                });
            })
            .UseUrls("http://+:8000");
        var webHost = webHostBuilder.Build();
        webHost.Run();
    }
}
I've added the built assemblies to a Docker image, built with this Dockerfile:
FROM microsoft/dotnet
WORKDIR /app
ADD /application /app
EXPOSE 8000
ENTRYPOINT [ "dotnet", "hello-world-server.dll" ]
I've run it with this:
>docker run hello-world-server --publish-list 8000:8000
When I send a request to http://localhost:8000 I get a 502 returned.
I'm using Windows containers on Windows 10.
The full output from a build & run is below:
C:\...\hello-world-server-docker>docker build -t hello-world-server .
Sending build context to Docker daemon 84.99kB
Step 1/5 : FROM microsoft/dotnet
---> d08db1d19023
Step 2/5 : WORKDIR /app
Removing intermediate container 873dea47b78b
---> de4b80a52d54
Step 3/5 : ADD /application /app
---> ba75fe5b5efc
Step 4/5 : EXPOSE 8000
---> Running in 1ac9c977c9b4
Removing intermediate container 1ac9c977c9b4
---> 22cb3848d762
Step 5/5 : ENTRYPOINT [ "dotnet", "hello-world-server.dll" ]
---> Running in 17f3b01f6ed0
Removing intermediate container 17f3b01f6ed0
---> 82c7e3aadfc2
Successfully built 82c7e3aadfc2
Successfully tagged hello-world-server:latest
C:\...\hello-world-server-docker>docker run hello-world-server --publish-list 8000:8000
Hosting environment: Production
Content root path: C:\app
Now listening on: http://[::]:8000
Application started. Press Ctrl+C to shut down.
When a request is made to localhost:8000 there's no further output on the console from Kestrel, whereas there usually would be if this were a normal console application.
I've also tried running it with >docker run hello-world-server --publish 8000:8000.

I think the problem is in the docker run command. --publish-list 8000:8000, as written, is passed as a parameter to the container's entrypoint.
The command to run a container and expose a port is:
docker run -p 8000:8000 hello-world-server
Every command-line option for docker run must be placed before the image name; everything after the image name is treated as the command for the container itself.
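To make the distinction concrete, here is a sketch (assuming the hello-world-server image from the question has been built locally):

```shell
# Wrong: everything after the image name is handed to the container's
# entrypoint as arguments, so no port is ever published
docker run hello-world-server --publish-list 8000:8000

# Right: docker run options go before the image name
docker run -p 8000:8000 hello-world-server

# From the host, the mapped port should now answer
curl http://localhost:8000
```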

Related

Cannot access localhost with dockerized app

macOS Monterey
I have a simple Dockerfile
FROM denoland/deno:1.29.1
EXPOSE 4200
WORKDIR /app
ADD . /app
RUN deno cache ./src/index.ts
CMD ["run", "--allow-net", "--allow-read", "./src/index.ts"]
And the most simple deno code
const handler = (request: Request) => {
  return new Response("heyo", { status: 200 });
};

DenoServer.serve(handler, { hostname: HOST, port: PORT });
Running the application locally works fine and I can reach localhost:4200. However, when I run the app with Docker, the request fails.
I use
docker run --publish 4200:4200 frontend
└───> curl http://localhost:4200
curl: (52) Empty reply from server
I can see the container running and trying to hit the {{ .NetworkSettings.IPAddress }} doesn't work either
(screenshot: docker container running on localhost)
It appears that the necessary environment variables were not included in the docker run command. To specify the host and port, you can use the -e option in the docker command.
docker build -t deno-sample .
docker run -e HOST=localhost -e PORT=4200 -p 4200:4200 deno-sample
For more information, please refer to the Docker documentation at the following link: https://docs.docker.com/engine/reference/commandline/run/
FROM denoland/deno:1.29.1
EXPOSE 4200
WORKDIR /app
COPY . /app
# RUN deno cache ./src/index.ts
RUN ls
CMD ["deno", "run", "--allow-net", "--allow-read", "app.ts"]
Here is my app.ts in same path as of dockerfile
import { serve } from "https://deno.land/std/http/server.ts";

function requestHandler() {
  console.log(">>>>>");
  return new Response("Hey, I'm a server");
}

serve(requestHandler, { hostname: "0.0.0.0", port: 4200 });
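The detail that matters here is the bind address: inside a container, a server listening on localhost (127.0.0.1) is reachable only from within the container itself, while Docker's published ports forward to the container's external interface, so the server must bind 0.0.0.0 (as the app.ts above does). A minimal Python sketch of the distinction (Python used purely for illustration; the same rule applies to Deno):

```python
import socket

# Loopback-only bind: reachable solely from inside the same network
# namespace. In a container, a published port forwards to the container's
# eth0 address, so a server bound this way gives an empty reply / refusal.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))
print(loopback_only.getsockname()[0])  # 127.0.0.1

# Wildcard bind: accepts connections arriving on any interface,
# including the one Docker's port mapping forwards to.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```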

Detecting username in Dockerfile

I need to run a cmd that will create my home folder within a docker container. So, if my username in my linux box is josecz then I could use it from within a Dockerfile to run a cmd like:
RUN mkdir /home/${GetMyUsername}
and get the folder /home/josecz after the Dockerfile is processed.
The only way, as folks commented, is to use ARG; here is a workable minimal example:
Dockerfile:
FROM alpine:3.14.0
ARG GetMyUsername
RUN echo ${GetMyUsername}
RUN mkdir -p /home/${GetMyUsername}
Execution:
cake@cake:~/3$ docker build --build-arg GetMyUsername=`whoami` -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM alpine:3.14.0
---> d4ff818577bc
Step 2/4 : ARG GetMyUsername
---> Running in 4d87a0970dbd
Removing intermediate container 4d87a0970dbd
---> 8b67912b3788
Step 3/4 : RUN echo ${GetMyUsername}
---> Running in 2d68a7e93715
cake
Removing intermediate container 2d68a7e93715
---> 100428a1c526
Step 4/4 : RUN mkdir -p /home/${GetMyUsername}
---> Running in 938d10336daa
Removing intermediate container 938d10336daa
---> 939729b76f09
Successfully built 939729b76f09
Successfully tagged abc:1
Explanation:
When running docker build, you can use whoami to get the username of the user running the build and pass it to the GetMyUsername build argument. Then, in the Dockerfile, ${GetMyUsername} expands to that value.
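One caveat worth adding: ARG values are visible only while the image is being built. If the username should also be available to the running container, copy it into an ENV (a sketch; the variable name MY_USER is made up for illustration):

```dockerfile
FROM alpine:3.14.0
ARG GetMyUsername
# ARG is build-time only; persist it as ENV so the running container sees it
ENV MY_USER=${GetMyUsername}
RUN mkdir -p /home/${GetMyUsername}
```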

ASP.NET Core + Docker not accessible on specified port

I cannot access a container with an ASP.NET Core 3.1 application running inside.
The goal is to run the application in a container on port 5000. When I run it locally using the standard VS profile, I navigate to http://localhost:5000/swagger/index.html to load the Swagger UI. I would like to achieve the same thing using Docker.
Steps to reproduce my issue:
Add a Dockerfile exposing port 5000 and setting the ASPNETCORE_URLS environment variable:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["myapp/myapp.csproj", "myapp/"]
RUN dotnet restore "myapp/myapp.csproj"
COPY . .
WORKDIR "/src/myapp/"
RUN dotnet build "myapp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
Build image
docker build -t myapp .
Run docker image:
docker run myapp -p 5000:5000
Running the commands above with this Dockerfile results in:
[21:28:42 INF] Starting host.
[21:28:42 INF] Now listening on: http://[::]:5000
[21:28:42 INF] Application started. Press Ctrl+C to shut down.
[21:28:42 INF] Hosting environment: Production
[21:28:42 INF] Content root path: /app
However, I can't access container using http://localhost:5000/swagger/index.html because of ERR_CONNECTION_REFUSED -> This site can't be reached.
I went into the container to check whether the host is actually running:
docker exec -it containerId /bin/bash
cd /app
dotnet myapp.dll
what resulted in following error:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:5000: address already in use.
Conclusion: the port inside the container is in use and the application is alive; it's just not accessible from outside, and I don't know how to reach it.
Please point me into right direction.
UPDATE
The issue is solved; the answer is posted below. However, an explanation of why it was needed and how it works would be nice!
To solve the issue I had to manually add "--server.urls" to the entrypoint as shown below:
ENTRYPOINT ["dotnet", "myapp.dll", "--server.urls", "https://+:5000"]
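Regarding the requested explanation, one likely contributing factor (not confirmed in the question, so treat this as an educated guess) is the run command itself: in docker run myapp -p 5000:5000 the -p option comes after the image name, so Docker passes it to the application as arguments instead of publishing the port, exactly as in the Kestrel question above. The corrected invocation would be:

```shell
# Options for docker run must precede the image name;
# anything after it becomes arguments to the application
docker run -p 5000:5000 myapp
```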
I solved the same issue in the following way:
Added the following in appsettings.json to force Kestrel to listen to port 80.
"Kestrel": {
  "EndPoints": {
    "Http": {
      "Url": "http://+:80"
    }
  }
}
Exposed the port in the Dockerfile:
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "EntryPoint.dll"]
Ran the container using the below command.
docker run -p 8080:80 <image-name>:<tag>
The app is then exposed on http://localhost:8080/.

How to create a file using touch in Dockerfile or docker-compose?

I want to create an empty DB file using touch via a Dockerfile or docker-compose.yml and mount it as a volume. I'm able to create it manually within the container as follows:
docker exec -it <container-name> bash
# touch /app/model/modbus.db
However, when I use the following procedure, the container exits with code 0 and stops:
version: '3'
services:
  collector:
    build: .
    image: collector:2.0.0
    command: bash -c "touch /app/model/modbus.db"  # Note
    # command: bash /app/bashes/create_an_empty_db.sh
    volumes:
      - "./model/modbus.db:/app/model/modbus.db:rw"
    tty: true
I also tried it via a Dockerfile, without success:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /
ADD . /app
RUN touch /app/model/modbus.db # Note
CMD python app
[NOTE]:
Even without the command: bash -c "touch /app/model/modbus.db" line in the docker-compose.yml (which caused the exit with code 0), a directory named modbus.db is created instead of a file, due to the following section:
volumes:
  - "./model/modbus.db:/app/model/modbus.db:rw"
TL;DR:
How can a new file created inside the container be made available on the host when it does not already exist on the host? (In other words, the file is created inside the container, not on the host.)
I am not sure about the docker-compose.yml, but the Dockerfile you have seems to work for me.
The Dockerfile looks like this,
FROM python:3.6-slim
RUN mkdir /app
WORKDIR /
RUN touch /app/modbus.db
Build the dockerfile,
docker build -t test .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM python:3.6-slim
---> 903e8a0f0681
Step 2/4 : RUN mkdir /app
---> Using cache
---> c039967bf463
Step 3/4 : WORKDIR /
---> Using cache
---> c8c81ac01f50
Step 4/4 : RUN touch /app/modbus.db
---> Using cache
---> 785916fe4cea
Successfully built 785916fe4cea
Successfully tagged test:latest
Run the container,
docker run -dit test
52cde500cda015f170140ae9e7174a0367b29265a49a3742173946b686179fb3
I exec'ed into the container and was able to find the file.
docker exec -it 52cde500cda015f170140ae9e7174a0367b29265a49a3742173946b686179fb3 /bin/bash
root@52cde500cda0:/# ls
app bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@52cde500cda0:/# cd app
root@52cde500cda0:/app# ls
modbus.db
Put the following in your docker-compose.yml
volumes:
  - "./model:/app/model"
This will create a /app/model folder inside of your container. Its contents (which you will create inside the container) will be available on ./model on your host.
If you put the touch command in the CMD of your Dockerfile, that file will be created after starting the container when the volume is also initialized. So the following Dockerfile should work:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /
ADD . /app
CMD touch /app/model/modbus.db && python app
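Since the application is Python anyway, a further alternative (a sketch not taken from the answers above) is to create the file from application startup code, after the volume has been mounted:

```python
from pathlib import Path

# Ensure the (mounted) model directory holds an empty database file.
# Path names mirror the question; adjust them to the real layout.
model_dir = Path("model")
model_dir.mkdir(parents=True, exist_ok=True)

db_file = model_dir / "modbus.db"
db_file.touch(exist_ok=True)  # no-op if the file already exists

print(db_file.exists())  # True
```

Path.touch mirrors the shell touch command: it creates an empty file when missing and only updates timestamps otherwise, so it is safe to run on every startup.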

Successfully built Docker image does not run when the environment was built with Anaconda

I built a docker image using the following Dockerfile:
FROM continuumio/miniconda3
ENTRYPOINT [ “/bin/bash”, “-c” ]
ADD angular_restplus.yaml angular_restplus.yaml
RUN ["conda", "env", "create", "-f", "angular_restplus.yaml"]
RUN ["/bin/bash", "-c", "source activate work"]
COPY json_to_db.py json_to_db.py
CMD ["gunicorn", "-b", "0.0.0.0:3000", "json_to_db:app"]
and command to build it:
sudo docker build -t testimage:latest .
That runs through:
Step 5/7 : RUN ["/bin/bash", "-c", "source activate work"]
---> Running in 45c6492b1c67
Removing intermediate container 45c6492b1c67
---> 5b5604dc281d
Step 6/7 : COPY json_to_db.py json_to_db.py
---> e5d05858bed1
Step 7/7 : CMD ["gunicorn", "-b", "0.0.0.0:3000", "json_to_db:app"]
---> Running in 3ada6fd24d09
Removing intermediate container 3ada6fd24d09
---> 6ed934acb671
Successfully built 6ed934acb671
Successfully tagged testimage:latest
However, when I now try to use it, it does not work; I tried:
sudo docker run --name testimage -d -p 8000:3000 --rm testimage:latest
which seems to work fine as it prints
b963bdf97b01541ec93e1eb7
However, I cannot access the service in my browser and using
sudo docker ps -a
only shows the intermediate containers needed to create the image from above.
When I try to run it without the -d flag, I get
gunicorn: 1: [: “/bin/bash”,: unexpected operator
Does that mean that I have to change the ENTRYPOINT again? If so, to what?
The solution can be found in the following post. I had to use the
"/bin/bash", "-c"
part throughout. The following works fine now (also using @larsks' input, who has since deleted his answer):
FROM continuumio/miniconda3
COPY angular_restplus.yaml angular_restplus.yaml
SHELL ["/bin/bash", "-c"]
RUN ["conda", "env", "create", "-f", "angular_restplus.yaml"]
COPY json_to_db.py json_to_db.py
CMD source activate work; gunicorn -b 0.0.0.0:3000 json_to_db:app
Then one can run
docker build -t testimage:latest .
and finally
docker run --name testimage -d -p 3000:3000 --rm testimage:latest
If one now uses
docker ps -a
one will get the expected outcome:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
61df8ac0432c testimage:latest "/usr/bin/tini -- /b…" 16 seconds ago Up 15 seconds 0.0.0.0:3000->3000/tcp testimage
and can then access the service at
http://localhost:3000/
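An alternative that avoids sourcing activation scripts altogether is conda run, which executes a command inside a named environment (a sketch; it assumes the environment defined in angular_restplus.yaml is named work, as in the question):

```dockerfile
FROM continuumio/miniconda3
COPY angular_restplus.yaml angular_restplus.yaml
RUN ["conda", "env", "create", "-f", "angular_restplus.yaml"]
COPY json_to_db.py json_to_db.py
# conda run resolves the environment's bin/ for the child process,
# so gunicorn is found without `source activate`
CMD ["conda", "run", "-n", "work", "gunicorn", "-b", "0.0.0.0:3000", "json_to_db:app"]
```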
