I am trying to get a .NET 6 C# API to expose metrics to Prometheus, both running inside Docker containers.
When I visit http://localhost:9090/targets, it tells me Get "http://myapi:50505/metrics": dial tcp 192.168.65.2:50505: connect: connection refused, and the target state is Down.
Within my API container, I ran netstat --tcp --listen --numeric-ports, expecting port 50505 to be listened on, but it wasn't -- only 5005 (and 35605, which I also tried in the prometheus.yml).
I tried adding "50505:50505" to the ports on myapi.
One Google search suggested my C# code should do this:
var metricServer = new KestrelMetricServer(port: 1234);
metricServer.Start();
So I did that and changed the prometheus.yml to scrape port 1234; still no luck.
I've tried the suggestions out there, including using localhost:50505 in the prometheus.yml and exposing 50505 in my API Dockerfile.
No errors in either the myapi or the prometheus logs.
No luck.
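For what it's worth, one check I can run (assuming the prom/prometheus image still ships a shell and wget from its busybox base) is to request the metrics endpoint from inside the Prometheus container, which takes the host port mappings out of the equation:
# Run from the host; "prometheus" and "myapi" are the service names from my compose file below
docker compose exec prometheus wget -qO- http://myapi:5005/metrics
docker compose exec prometheus wget -qO- http://myapi:50505/metrics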
This is my prometheus.yml:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "myapi"
    scrape_interval: 5s
    static_configs:
      - targets: ["myapi:50505"]
This is my docker-compose.yml
version: '3.3'

services:
  myapi:
    image: myapi
    container_name: myapi
    ports:
      - "7109:5005"
    networks:
      - mynet

  prometheus:
    image: prometheus
    ports:
      - "9090:9090"
    depends_on:
      - myapi
    networks:
      - mynet

networks:
  mynet:
    driver: bridge
I built the prometheus container with this Dockerfile
FROM prom/prometheus
ADD ./prometheus.yml /etc/prometheus/
My API is built with this Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
ARG BUILDCONFIG=RELEASE
ARG VERSION=1.0.0
COPY ["myapi.csproj", "./src/myapi/"]
WORKDIR ./src/myapi/
RUN dotnet publish -c $BUILDCONFIG -o out /p:Version=$VERSION
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /src/myapi/out ./
EXPOSE 5005/tcp
EXPOSE 50505/tcp
ENTRYPOINT ["dotnet", "myapi.dll", "--urls", "http://+:5005"]
In my Program.cs I have this:
...
app.UseMetricServer(50505, "/prometheus");
app.UseRouting();
app.UseHttpMetrics();
app.Run();
And if it matters, in the endpoint method I have this:
Counter _eventCounter = Metrics.CreateCounter("API_Events", "Entry Attempts", new string[] { "result" });
_eventCounter.WithLabels("test").Inc();
Host OS: Mac M2 Ventura 13.2.1
Docker: Docker version 20.10.22, build 3a2c30b
Related
I'm using Docker Desktop on Windows and I'm trying to get 3 containers running inside Docker Desktop.
After some research and testing, I got the 3 containers running [WEB - API - DB]; everything seems to compile/run without issue in the logs, but I can't access my web container from outside.
Here are my Dockerfiles and my docker-compose file. What did I miss or get wrong?
[WEB] dockerfile
FROM node:16.17.0-bullseye-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
CMD ["npm", "run", "start"]
[API] dockerfile
FROM openjdk:17.0.1-jdk-slim
WORKDIR /app
COPY ./target/test-0.0.1-SNAPSHOT.jar /app
#EXPOSE 2022 (the issue is the same with or without this line)
CMD ["java", "-jar", "test-0.0.1-SNAPSHOT.jar"]
Docker-compose file
version: "3.8"
services:
### FRONTEND ###
web:
container_name: wallet-web
restart: always
build: ./frontend
ports:
- "80:4200"
depends_on:
- "api"
networks:
customnetwork:
ipv4_address: 172.20.0.12
#networks:
# - "api"
# - "web"
### BACKEND ###
api:
container_name: wallet-api
restart: always
build: ./backend
ports:
- "2022:2022"
depends_on:
- "db"
networks:
customnetwork:
ipv4_address: 172.20.0.11
#networks:
# - "api"
# - "web"
### DATABASE ###
db:
container_name: wallet-db
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
customnetwork:
ipv4_address: 172.20.0.10
#networks:
# - "api"
# - "web"
networks:
customnetwork:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
# api:
# web:
Listening on: (screenshot of the listening ports omitted)
I found several issues similar to mine, but the solutions didn't work for me.
If I understand correctly, you are trying to access it on port 80. To do that, you have to map your container port 4200 to host port 80 in the YAML file: 80:4200 instead of 4200:4200.
https://docs.docker.com/config/containers/container-networking/
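As a sketch, the web service's port mapping would then read (host port on the left, container port on the right):
web:
  ports:
    - "80:4200"   # the browser hits http://localhost:80 on the host, traffic is forwarded to port 4200 in the container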
Have you looked in the browser's development console to see whether any errors show up there? Your docker-compose doesn't seem to have any issue.
However, let's try to debug it:
docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED             STATUS             PORTS                  NAMES
6245eaffd67e   nginx     "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:4200->80/tcp   test-api-1
Copy the container ID, then execute:
docker exec -it 6245eaffd67e bin/bash
Now you are inside the container. Instead of the ID you can also use the container's name.
curl http://localhost:80
Note: in my case I just created a container from an nginx image.
In your case, use the port where your app is running. Check your code if you aren't sure. A lot of JavaScript frameworks default to port 3000.
If you get a curl: command not found error, install curl in your image:
FROM node:16.17.0-bullseye-slim
# To install packages you need root permissions, so we tell the image to run as root
USER root
RUN apt update -y && apt install curl -y
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
# We don't want to run the image as root, so we switch back to the node user (this user is defined in the node:16.17.0-bullseye-slim image)
USER node
CMD ["npm", "run", "start"]
Now curl should work (if it didn't already).
The same should work from your host.
Here is an important thing:
localhost always refers to the physical computer, or to the container itself, whichever one you are currently in. Every container and your PC each have their own localhost, and they are not the same.
In the docker-compose file you map ports as host:container, so your PC (the host where Docker is running) can reach a port inside the container through the host port you defined.
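For example (a sketch, assuming curl is available; see the note about installing it above), the same service answers on different addresses depending on where you stand:
# On the host: use the host port from the mapping
curl http://localhost:80
# Inside the web container: use the container port; localhost now means the container itself
docker exec -it wallet-web curl http://localhost:4200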
If you still can't access it from your host, try changing the host ports 2022, 4200, etc. It could be that something conflicts on your Windows machine.
It sometimes happens that the Docker networks create conflicts.
Execute a docker-compose down, so the network gets deleted and recreated.
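For example (the --build flag just forces the images to be rebuilt as well):
# Stop and remove the containers and the compose-created network, then recreate everything
docker-compose down
docker-compose up --build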
Still not working?
Reset Docker Desktop to factory settings and check that you have the latest version (this is always better).
If all this doesn't help, let me know so we can debug further.
For the sake of clarity, here is the docker-compose file I used to check. I just used nginx to test the ports, as I don't have your images.
version: "3.8"
services:
### FRONTEND ###
web:
restart: always
image: nginx
ports:
- "4200:80"
depends_on:
- "api"
networks:
- "web"
### BACKEND ###
api:
restart: always
image: nginx
ports:
- "2022:80"
depends_on:
- "db"
networks:
- "api"
- "web"
### DATABASE ###
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
- "api"
networks:
api:
web:
Update:
You can check what happens in the container's logs like so:
```
docker logs containerid/name
```
If you are using Visual Studio Code, there is an excellent Docker extension, also by Microsoft:
Just search for "docker" in the extensions. It has something like 20,000,000 downloads and can help you a lot with debugging containers etc. After installing it you will see the Docker icon in the left toolbar.
If you can see the errors directly in the logs, maybe you can post them (at least partially), so it would be possible to understand what is going on. Please also tell us something about your frontend app architecture (React app, Angular, ...). Some frameworks need to be started on 0.0.0.0 instead of 127.0.0.1 or they don't work.
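For example, if it is an Angular app (just a guess, since the framework isn't stated), the dev server can be bound to all interfaces with the --host flag in the package.json start script:
"scripts": {
  "start": "ng serve --host 0.0.0.0 --port 4200"
}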
I am trying to access the Flask app from the Docker Compose getting-started tutorial from my local host, but without making changes to this pruned Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
This is my docker-compose.yml:
version: "3.9"
services:
web:
build: .
command: flask run
volumes:
- type: bind
source: .
target: /code
environment:
- ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ports:
- target: 5000
published: 8000
networks:
- counter-net
redis:
image: "redis:alpine"
networks:
- counter-net
networks:
counter-net:
volumes:
volume-net:
When I use docker compose up I can see Running on http://127.0.0.1:5000, but I cannot access it at 127.0.0.1:8000 or localhost:8000.
I can see 2_counter-net when I list networks. If relevant, I earlier tried creating a volume but removed it when I changed the source to ., and the stack came up without error.
How can I correct my config please?
You are trying to use a bridge network so that ports opened in the container can be forwarded to ports on your host computer. It's true that you could remove the user-defined network and just rely on the default bridge network (by removing all the "networks" sections from the YAML file). That should solve your problem. However, Docker doesn't recommend this approach for production.
The other option is to add a bridge driver to your user-defined network specification:
networks:
  counter-net:
    driver: bridge
And David is right: you should fix the YAML in your environment section.
environment:
  - FLASK_APP=app.py
  - FLASK_RUN_HOST=0.0.0.0
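Putting both changes together, a sketch of the relevant parts of the compose file (only the pieces that change; everything else stays as in the question):
services:
  web:
    # ... unchanged ...
    environment:
      - FLASK_APP=app.py
      - FLASK_RUN_HOST=0.0.0.0
    networks:
      - counter-net

networks:
  counter-net:
    driver: bridge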
I'm currently trying to introduce docker compose to my project. It includes a golang backend using the redis in-memory database.
version: "3.9"
services:
frontend:
...
backend:
build:
context: ./backend
ports:
- "8080:8080"
environment:
- NODE_ENV=production
env_file:
- ./backend/.env
redis:
image: "redis"
ports:
- "6379:6379"
FROM golang:1.16-alpine
RUN mkdir -p /usr/src/app
ENV PORT 8080
WORKDIR /usr/src/app
COPY go.mod /usr/src/app
COPY . /usr/src/app
RUN go build -o main .
EXPOSE 8080
CMD [ "./main" ]
The build runs successfully, but after starting the services, the Go backend immediately exits, throwing the following error:
Error trying to ping redis: dial tcp 127.0.0.1:6379: connect: connection refused
The error is caught here:
_, err = client.Ping(ctx).Result()
if err != nil {
log.Fatalf("Error trying to ping redis: %v", err)
}
How come the backend docker service isn't able to connect to redis? Important note: when the redis service is running and I start my backend manually using go run *.go, there's no error and the backend starts successfully.
When you run your Go application inside a Docker container, the localhost IP 127.0.0.1 refers to that container itself. You should use the hostname of your Redis container to connect from your Go container, so your connection string would be:
redis://redis
I found I was having this same issue. Simply changing Addr: "localhost:6379" to Addr: "redis:6379" (in redis.NewClient(&redis.Options{...})) worked.
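A sketch of how that can look with the go-redis client, reading the address from an environment variable so the same code works both inside Compose and with a plain go run (the variable name REDIS_ADDR is my own choice, not something from the question):
package main

import (
	"context"
	"log"
	"os"

	"github.com/go-redis/redis/v8"
)

func main() {
	// Inside docker-compose, the service name "redis" is resolvable through Docker's DNS.
	// Outside of Compose (go run), fall back to localhost.
	addr := os.Getenv("REDIS_ADDR")
	if addr == "" {
		addr = "localhost:6379"
	}

	client := redis.NewClient(&redis.Options{Addr: addr})

	if _, err := client.Ping(context.Background()).Result(); err != nil {
		log.Fatalf("Error trying to ping redis: %v", err)
	}
	log.Printf("connected to redis at %s", addr)
}
In the compose file you would then set REDIS_ADDR=redis:6379 in the backend service's environment.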
I faced a similar issue with Golang and Redis.
version: '3.0'
services:
  redisdb:
    image: redis:6.0
    restart: always
    ports:
      - "6379:6379"
    container_name: redisdb-container
    command: ["redis-server", "--bind", "redisdb", "--port", "6379"]
  urlshortnerservice:
    depends_on:
      - redisdb
    ports:
      - "7777:7777"
    restart: always
    container_name: url-shortner-container
    image: url-shortner-service
In the Redis client configuration, use:
redisClient := redis.NewClient(&redis.Options{
Addr: "redisdb:6379",
Password: "",
DB: 0,
})
I have cross-browser tests that I have written with Selenium. Since I want to test multiple browsers on multiple platforms, I use Docker containers and Selenium Grid. I could execute my tests outside Docker via localhost:4444 with this docker-compose.yml:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
opera:
image: selenium/node-opera
container_name: web-automation_opera
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9003:5900"
I just executed my tests with Maven and they would succeed. Then I planned to also containerize my JUnit browser tests, and created this Dockerfile:
FROM openjdk:11 as build
WORKDIR /workspace/app
COPY .git .git
COPY mvnw .
COPY .mvn .mvn
COPY wait-for-it.sh .
RUN ["chmod", "+x", "wait-for-it.sh"]
COPY pom.xml .
COPY src src
RUN ./wait-for-it.sh hub:4444 -- ./mvnw clean package
FROM openjdk:11
VOLUME /tmp
COPY --from=build /workspace/app/target/*.jar app.jar
This should work fine as well, and I added this part to my docker-compose.yml:
app:
  build: .
  ports:
    - "80:8080"
  depends_on:
    - "hub"
As soon as I run docker-compose up, Maven builds the tests successfully and Selenium Grid is set up successfully, but I receive the following error:
[ERROR] loginUserNotExistentFirefox Time elapsed: 0.033 s <<< ERROR!
org.openqa.selenium.remote.UnreachableBrowserException:
Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: '7e67c412b3c0', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.19.121-linuxkit', java.version: '11.0.9.1'
Driver info: driver.version: RemoteWebDriver
at org.seleniumtests.frontendtests.tests.TestLogin.loginUserNotExistentFirefox(TestLogin.java:29)
Caused by: java.net.UnknownHostException: hub
at org.seleniumtests.frontendtests.tests.TestLogin.loginUserNotExistentFirefox(TestLogin.java:29)
This is how I plan to reach the service from my app container:
driver = new RemoteWebDriver(new URL("http://hub:4444/wd/hub"),
DesiredCapabilities.operaBlink());
First problem: you can't connect to localhost:4444
There's a bridge network (by default in Docker Compose) between your services, and you can reach another service via <service_name>:<service_port>, so you can reach the hub service at hub:4444.
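Concretely, inside the app container the driver URL should use the service name instead of localhost; a minimal sketch (mirroring the snippet from the question, Selenium 3.141 API assumed):
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        // "hub" is the compose service name; Docker's embedded DNS resolves it
        // to the hub container on the shared network.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://hub:4444/wd/hub"),
                DesiredCapabilities.firefox());
        driver.get("https://example.org");
        driver.quit();
    }
}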
Second problem: it may show up once you solve the first one.
From the official Docker Compose documentation, as you can read here:
You can control the order of service startup and shutdown with the depends_on option. Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode: "service:...".
Which is what you did with depends_on. But:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
Docker Compose official solution
Use a tool such as wait-for-it, dockerize, sh-compatible wait-for, or
RelayAndContainers template. These are small wrapper scripts which you
can include in your application’s image to poll a given host and port
until it’s accepting TCP connections.
They suggest you do something like this:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
app:
build: .
ports:
- "80:8080"
depends_on:
- "hub"
command: ["./wait-for-it.sh", "hub:4444", "--", "java", "-jar", "app.jar"]
I have tried several possible solutions on Stack Overflow but nothing seems to work for me.
I am developing microservices using RabbitMQ. The solution contains multiple projects and runs without any problem, but as soon as I use the docker-compose option to build the project, Visual Studio throws the following exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException ExtendedSocketException: Connection refused 127.0.0.1:5672
In my solution, I have three projects communicating with each other via RabbitMQ.
Below is the code for my YAML file.
My docker-compose.yaml:
version: '3.4'
services:
  rabbitmq:
    hostname: webnet
    image: rabbitmq:3.7.2-management
    ports:
      - "15672:15672"
      - "5672:5672"
    networks:
      - webnet
  sql-server-db:
    container_name: sql-server-db
    image: microsoft/mssql-server-linux:2017-latest
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "customerdbalten#123"
      ACCEPT_EULA: "Y"
    networks:
      - webnet
  myproject.simulation.api:
    image: ${DOCKER_REGISTRY-}myprojectsimulationapi
    build:
      context: .
      dockerfile: myproject.Simulation.Api/Dockerfile
    links:
      - rabbitmq
    ports:
      - '5000'
    networks:
      - webnet
  myproject.updateservice.api:
    image: ${DOCKER_REGISTRY-}myprojectupdateserviceapi
    build:
      context: .
      dockerfile: myproject.updateservice.Api/Dockerfile
    links:
      - rabbitmq
      - sql-server-db
    ports:
      - '5050'
    networks:
      - webnet
  myproject.web:
    image: ${DOCKER_REGISTRY-}myprojectweb
    build:
      context: .
      dockerfile: MyProject.Web/Dockerfile
    links:
      - rabbitmq
    ports:
      - '5001'
    networks:
      - webnet
networks:
  webnet:
    driver: bridge
My Dockerfile looks like the following:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY MyProject.UpdateService.Api/MyProject.UpdateService.Api.csproj MyProject.UpdateService.Api/
COPY MyProject.Common/MyProject.Common.csproj MyProject.Common/
RUN dotnet restore MyProject.UpdateService.Api/MyProject.UpdateService.Api.csproj
COPY . .
WORKDIR /src/MyProject.UpdateService.Api
RUN dotnet build MyProject.UpdateService.Api.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish MyProject.UpdateService.Api.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.UpdateService.Api.dll"]
I've also created another simple solution with nothing but two projects, a sender and a receiver, that use RabbitMQ. This solution throws the same exception under docker-compose; otherwise it just runs. Its YAML file has nothing but auto-generated code.
From the discussion, we found out that the RabbitMQ container is not running, because there is already a RabbitMQ service running on the host.
You have two options. Either stop the host RabbitMQ service and then connect to the RabbitMQ container:
Hostname: rabbitmq:5672
Or, if you want to connect to the host's RabbitMQ service, you can use:
Hostname: host.docker.internal
#or
Hostname: HOST_IP
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network
access). From 18.03 onwards our recommendation is to connect to the
special DNS name `host.docker.internal`, which resolves to the internal
IP address used by the host. This is for development purpose and will
not work in a production environment outside of Docker Desktop for
Windows.
The gateway is also reachable as gateway.docker.internal.
docker-for-windows-networking
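For the first option, a minimal sketch of the C# side with RabbitMQ.Client, assuming the compose service name rabbitmq is used as the host name (property names as in the 5.x/6.x client):
using RabbitMQ.Client;

public static class RabbitConnectionExample
{
    public static void Main()
    {
        // "rabbitmq" is the compose service name, resolvable through Docker's internal DNS;
        // 5672 is the default AMQP port published in the compose file.
        var factory = new ConnectionFactory
        {
            HostName = "rabbitmq",
            Port = 5672
        };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // declare queues / publish / consume here
        }
    }
}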