How to access a Web API after deploying on Docker

I have my Dockerfile:
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /build
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o output
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /build/output .
ENTRYPOINT ["dotnet","TestDockerApi.dll"]
I am creating an image using:
docker build -t testdocker/api .
and then running a container from the image using:
docker run testdocker/api
I can see the following message on my console:
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
I am trying to access it at http://localhost/app/TestDockerApi/Values, but it does not work.
Do I need to use the Docker container's IP to access it?
I can see a few tutorials suggesting to do this in the ENTRYPOINT:
ENTRYPOINT ["dotnet","TestDockerApi.dll","--server.urls","http://0.0.0.0:5000"]
And then, when running the container, map the port:
docker run -p 80:5000 testdocker/api
Is there any way I could access the API without using port forwarding? I am just trying to get the basics right: why, and what should I do?

The Dockerfile does not manage network configuration outside of the container at all. If you want Docker to listen on host port 80, you need to bind it when you run your container:
docker run -p 80:80 testdocker/api
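With that mapping in place, the API should be reachable through the host port. As a quick check (just a sketch; the exact route depends on your controllers, and the default Web API template would serve something like api/values rather than the URL in the question):
curl http://localhost/api/values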
For more on mapping and exposing ports, you can read here:
- https://www.ctl.io/developers/blog/post/docker-networking-rules/
Alternatively, you can define your own service composition, specifying these details in a docker-compose.yml file:
api:
  image: testdocker/api
  ports:
    - "80:80"
And then you can simply run with
docker-compose up
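If you prefer to run it in the background, the usual compose commands apply:
docker-compose up -d
docker-compose ps     # shows the published mapping, e.g. 0.0.0.0:80->80/tcp
docker-compose down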
More information is at:
https://docs.docker.com/compose/reference/overview/#command-options-overview-and-help

Related

Run docker image without specifying port

I have a Node project and it has a Dockerfile and docker-compose.yml as well.
Dockerfile
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
docker-compose.yml
version: '3'
services:
  my-service-name:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    restart: unless-stopped
I uploaded the image to Docker Hub. When I tried to pull the image and run it, I needed to specify the port like this: docker run -p 8080:80 my-username/my-image-name, so I could open the project at localhost:8080, mapped to the port 80 that NGINX exposes.
What I want to do is run the image without specifying the port, since I already specified the port in the Dockerfile and docker-compose. I've been confused about how to achieve this. Does this mean my docker-compose file is not uploaded to Docker Hub and I should upload it? Or is my current way already correct?
When you use a docker-compose file, you have to run it with the docker-compose executable. What you are doing is bypassing the compose file altogether.
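For example (a sketch, assuming the compose file from the question is in the current directory), these two commands end up publishing the same port, because compose reads the ports mapping for you:
docker-compose up
docker run -p 8080:80 my-username/my-image-name
Note that the compose file itself is not pushed to Docker Hub; docker push only uploads the image.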
You are misinterpreting the meaning of EXPOSE in the Dockerfile. From the documentation:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
So feel free to run containers without specifying exposed ports on the docker command line, in Docker Compose, or anywhere else. The containers will run, but it's as if they are behind a firewall.
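If you do want Docker to choose the host ports for you, the -P flag publishes every EXPOSEd port to a random high port on the host, and docker port shows what was assigned (image name taken from the question):
docker run -d -P my-username/my-image-name
docker port <container-id>     # e.g. 80/tcp -> 0.0.0.0:32768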

ASP.NET Core + Docker not accessible on specified port

It is impossible for me to access the container with an ASP.NET Core 3.1 application running inside.
The goal is to run the application in a container on port 5000. When I run it locally using the standard VS profile, I navigate to http://localhost:5000/swagger/index.html to load the Swagger UI. I would like to achieve the same thing using Docker.
Steps to reproduce my issue:
Add a Dockerfile with port 5000 exposed and the ASPNETCORE_URLS environment variable set:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["myapp/myapp.csproj", "myapp/"]
RUN dotnet restore "myapp/myapp.csproj"
COPY . .
WORKDIR "/src/myapp/"
RUN dotnet build "myapp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myapp.dll"]
Build image
docker build -t myapp .
Run docker image:
docker run myapp -p 5000:5000
Running the commands above with this Dockerfile results in:
[21:28:42 INF] Starting host.
[21:28:42 INF] Now listening on: http://[::]:5000
[21:28:42 INF] Application started. Press Ctrl+C to shut down.
[21:28:42 INF] Hosting environment: Production
[21:28:42 INF] Content root path: /app
However, I can't access the container at http://localhost:5000/swagger/index.html; I get ERR_CONNECTION_REFUSED -> This site can't be reached.
I did get into the container to check that the host is really running, using:
docker exec -it containerId /bin/bash
cd /app
dotnet myapp.dll
which resulted in the following error:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:5000: address already in use.
The conclusion is that the port inside the container is in use and the application is alive; it's just not accessible from outside. I don't know how to reach it.
Please point me in the right direction.
UPDATE
The issue is solved and the answer is posted below. However, an explanation of why it was needed and how it works would be nice!
To solve the issue I had to manually add "--server.urls" to the entrypoint, as shown below:
ENTRYPOINT ["dotnet", "myapp.dll", "--server.urls", "https://+:5000"]
I solved the same issue in the following way:
Added the following to appsettings.json to force Kestrel to listen on port 80:
"Kestrel": {
"EndPoints": {
"Http": {
"Url": "http://+:80"
}
}
}
Exposed the port in the Dockerfile:
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
ENTRYPOINT ["dotnet", "EntryPoint.dll"]
Ran the container using the below command.
docker run -p 8080:80 <image-name>:<tag>
The app is then exposed at http://localhost:8080/.
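To verify the mapping from the host (using the Swagger path from the question; adjust the path if your routes differ):
curl http://localhost:8080/swagger/index.html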

How to run a Golang web app in a Docker container

I have a web app that uses Go as its back end. When I run my website I just do go build; ./projectName and it runs on local port 8000. How do I run this web app in a container? I can run images like nginx in a container, but how do I create my own images for my projects? I created a Dockerfile inside my project folder with the following contents:
FROM nginx:latest
WORKDIR static/html/
COPY . /usr/src/app
Then I built an image using the Dockerfile, but when I run it in a container and go to localhost:myPort/static/html/page.html it says 404 page not found. My other question is: can Docker only serve static pages in a container? My site can receive and send data. Thanks.
This is my Dockerfile (./todo is my project name and folder name).
This is my terminal (as you can see, the container exits immediately).
I guess you are not exposing the port outside the container.
That's why you are not seeing any output beyond what the Go program itself prints.
Try adding lines like these to your Dockerfile:
EXPOSE 80     # or whichever port you want it to be
EXPOSE 443
EXPOSE 3306
Together with publishing the ports when you run the container (docker run -p), this makes the container accessible from outside.
Here is what I did for my Golang web app, which uses the Gin-gonic framework.
My Dockerfile:
FROM golang:latest
# Author
MAINTAINER dangminhtruong
# Create working folder
RUN mkdir /app
COPY . /app
RUN apt -y update && apt -y install git
RUN go get github.com/go-sql-driver/mysql
RUN go get github.com/gosimple/slug
RUN go get github.com/gin-gonic/gin
RUN go get gopkg.in/russross/blackfriday.v2
RUN go get github.com/gin-gonic/contrib/sessions
WORKDIR /app
Then build the Docker image:
docker build -t web-app:latest .
Finally, start the web app:
docker run -it -p 80:8080 -d web-app:latest go run main.go     # my web app listens on port 8080
Hope this is helpful.
You don't need Nginx to run a server in Go.
It's better to build a binary in the Dockerfile.
Here is how your Dockerfile might look:
FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
EXPOSE 8000
CMD ["/app/main"]
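To build and run it (a sketch; the image name is just an example, and this assumes your app listens on port 8000 as described in the question):
docker build -t my-go-app .
docker run -p 8000:8000 my-go-app
Then open http://localhost:8000 in your browser.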

Configuring ports in Docker

I am totally new to Docker, and the client I am working for has sent me a Dockerfile and a .dockerignore file, probably to set up the work environment.
So this is basically what he sent me:
FROM node:8
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY assets ./assets
COPY server ./server
COPY docs ./docs
COPY internals ./internals
COPY track ./track
RUN npm run build:dll
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
along with the docker build and run commands (which he also provided):
docker build -t reponame:tag .
docker run -p 3000:3000 admin-web:v1
First, can someone tell me what COPY . . means?
He asked me to configure all the ports accordingly. From going through videos, I remember that we can map ports like this: -p 3000:3000. But what does configuring a port mean, and how can I do it? Any relevant article on this would also be helpful. Do I need to make a docker-compose file?
. is the current directory in Linux. So basically, COPY . . copies the current local directory (the build context) into the container's current working directory.
The -p switch is used to configure port mapping. -p 2900:3000 means publish your local port 2900 to the container's port 3000, so that the container is reachable from the outside (by your web browser, for instance). Without that mapping the port would not be accessible outside the container. The port is still available to other containers on the same Docker network, though.
You don't need to make a docker-compose.yml file, but it certainly will make your life easier if you have one, because then you can just run docker-compose up every time to run the container instead of having to run
docker run -p 3000:3000 admin-web:v1
every time you want to start your application.
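A minimal docker-compose.yml for that would look something like this (a sketch, reusing the image tag from the run command above; the service name is arbitrary):
version: '3'
services:
  admin-web:
    image: admin-web:v1
    ports:
      - "3000:3000"
Then docker-compose up (or docker-compose up -d to run in the background) starts the container with the port already published.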
Btw here is one of the ultimate docker cheatsheets that I got from a friend, might help you: https://gist.github.com/ruanbekker/4e8e4ca9b82b103973eaaea4ac81aa5f

How to make a docker compose file for an existing image?

What I am trying to do is use a Docker image I found online, timwiconsulting/ionic-v1.3, and run my Ionic project within Docker. I want to mount my Ionic project in Docker and forward my ports so I can run the emulator in a browser.
I want to ask: how do I create a docker-compose.yml file for an existing image?
I found a Docker image timwiconsulting/ionic-v1.3 that I want to run, which has the correct version of the tools that I want.
Now I want to create a compose file that forwards the ports to my computer and mounts the project files. I created this docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    ports:
      - "8100:8100"
      - "35729:35729"
    volumes:
      - /Users/leetcat/project/:/project
But every time I try to do docker-compose up I get the error:
~/user: docker-compose up
Building web
Step 1/6 : FROM timwiconsulting:ionic-v1.3
ERROR: Service 'web' failed to build: pull access denied for timwiconsulting, repository does not exist or may require 'docker login
I am doing something wrong. I think I want to be creating a docker-compose.yml file for the image timwiconsulting/ionic-v1.3. Feel free to tell me I am totally off the mark about what Docker is.
Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM timwiconsulting:ionic-v1.3
# Set the working directory to /app
WORKDIR /project
# Copy the current directory contents into the container at /app
ADD . /project
# Install any needed packages specified in requirements.txt
# RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 8100
EXPOSE 35729
# Define environment variable
ENV NAME World
# Run app.py when the container launches
# CMD ["python", "app.py"]
# docker exec -it <container_hash> /bin/bash/
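For reference, a compose file that uses an existing image directly (instead of building from a Dockerfile) would look roughly like this. This is only a sketch; note that the image reference needs the slash form timwiconsulting/ionic-v1.3, whereas the FROM line above uses a colon (timwiconsulting:ionic-v1.3), which is what produces the "pull access denied for timwiconsulting" error:
version: '3'
services:
  web:
    image: timwiconsulting/ionic-v1.3
    ports:
      - "8100:8100"
      - "35729:35729"
    volumes:
      - /Users/leetcat/project/:/project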
