Prometheus targets [cadvisor, node application] not displayed - docker

I should see three targets in my Prometheus dashboard: one for Prometheus itself, which works, one for my self-written Node.js application called chat-api, and one for cAdvisor. For cAdvisor I get the following error when I run docker-compose up:
cadvisor | W0419 22:12:08.195849 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 22:12:08.196364 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 22:12:08.200398 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
I changed the parameters in my docker-compose file, but it didn't change anything. I'm a beginner with Docker.
docker-compose.yml:
version: '3.7'
services:
  chat-api:
    container_name: chat-api
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000:4000'
    networks:
      - cchat
    restart: 'on-failure'
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - cchat
  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    container_name: cadvisor
    privileged: true
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    devices:
      - /dev/kmsg:/dev/kmsg
    depends_on:
      - chat-api
    networks:
      - cchat
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
    depends_on:
      - chat-api
    networks:
      - cchat
volumes:
  userdb:
  prometheus-data:
networks:
  cchat:
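One thing to note: this compose file never mounts a local prometheus.yml into the prometheus container, so Prometheus most likely falls back to the default config shipped in the image, which only scrapes Prometheus itself; that would match seeing only the prometheus target. Assuming the config file lives in a prometheus/ folder next to docker-compose.yml (adjust the path to your layout), a bind mount like the one in the related question below would make --config.file pick it up:
  prometheus:
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus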
prometheus.yml:
global:
  scrape-interval: 5s
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
  - job_name: 'chat-api'
    static_configs:
      - targets: ['chat-api:4000']
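For comparison, a sketch of a corrected prometheus.yml: Prometheus expects the field scrape_interval (with an underscore, as the unmarshal error quoted in the related question below shows), and an explicit prometheus job can be added for the self-scrape target. The chat-api job assumes the Express app exposes metrics on the default /metrics path on port 4000:
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'chat-api'
    static_configs:
      - targets: ['chat-api:4000']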
Dockerfile:
FROM node:alpine
WORKDIR .
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4000
CMD ["node", "server.js"]
chat-api is a Node.js application using Express.
my folder structure:
(screenshot of the folder structure)

Related

Set up Prometheus and Cadvisor with docker-compose

I am new to Prometheus, cAdvisor, and docker-compose. I made a docker-compose file that includes my own application named chat, along with a Mongo container. Those work fine. Now I want to monitor my containers with Prometheus and cAdvisor. I'm getting the following errors:
cadvisor | W0419 11:41:00.576916 1 sysinfo.go:203] Nodes topology is not available, providing CPU topology
cadvisor | W0419 11:41:00.577437 1 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
cadvisor | E0419 11:41:00.582000 1 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
and
prometheus | ts=2022-04-19T11:54:19.051Z caller=main.go:438 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="parsing YAML file /etc/prometheus/prometheus.yml: yaml: unmarshal errors:\n line 2: field scrape-interval not found in type config.plain"
I tried changing the config parameter in my docker-compose file to the following, but it didn't change the error:
command:
  - '--config.file=./prometheus/prometheus.yml'
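Worth noting: --config.file is resolved inside the container, not on the host, so it should keep pointing at the target of the bind mount shown in the compose file below (/etc/prometheus/prometheus.yml). The error above also shows that file is already being found and parsed; the failure comes from the scrape-interval field name, not from the path. So the original flag can stay:
command:
  - '--config.file=/etc/prometheus/prometheus.yml'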
docker-compose.yml:
version: '3.7'
services:
  chat-api:
    container_name: chat-api
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000:4000'
    networks:
      - cchat
    restart: 'on-failure'
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - cchat
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9080:9080'
    networks:
      - cloudchat
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    devices:
      - /dev/kmsg:/dev/kmsg
    depends_on:
      - chat-api
    networks:
      - cchat
volumes:
  userdb:
networks:
  cchat:
prometheus.yml:
global:
  scrape-interval: 2s
scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']
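This scrape-interval line is exactly what the unmarshal error above complains about: the field Prometheus expects is scrape_interval, with an underscore. A corrected sketch of the same file:
global:
  scrape_interval: 2s

scrape_configs:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']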
project structure:
(picture of the project structure)
I guess it's quite late, but you can try mounting /etc/machine-id:/etc/machine-id:ro.
Running in privileged mode could help too. This is my configuration, which works without problems:
cadvisor:
  image: gcr.io/cadvisor/cadvisor:v0.47.0
  container_name: cadvisor
  restart: unless-stopped
  privileged: true
  ports:
    - "8080:8080"
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:ro
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    - /dev/disk/:/dev/disk:ro
One important note: don't use the latest tag; it seems it is not actually the latest version (source: https://github.com/google/cadvisor/issues/3066).
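The /etc/machine-id mount mentioned at the start of this answer is not in the block above; if the "Failed to get system UUID" error persists, it would be added to the cadvisor volumes list roughly like this:
  volumes:
    - /etc/machine-id:/etc/machine-id:ro
    - /:/rootfs:ro
    - /var/run:/var/run:ro
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    - /dev/disk/:/dev/disk:ro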

Docker compose container fails to reach the application running in a different container on the same network

I have a docker-compose.yml file which contains frontend, backend, testing, postgres, and pgadmin containers. All the containers except testing are able to communicate with each other, but the testing container fails to communicate with the backend and frontend containers in docker-compose.
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    build:
      context: ./frontend
      dockerfile: Dockerfile.local
    ports:
      - '3000:3000'
    networks:
      - test-network
    environment:
      # For the frontend these can only be applied during the build!
      # (they are baked in when the TS is compiled)
      # You have to rebuild manually without cache if one of these changes, at least for prod mode.
      - REACT_APP_BACKEND_API=http://localhost:8000/api/v1
      - REACT_APP_GOOGLE_CLIENT_ID=1234567dfghjjnfd
      - CI=true
      - CHOKIDAR_USEPOLLING=true
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      PGDATA: /data/postgres
    volumes:
      - postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - test-network
    restart: unless-stopped
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: "dev@dev.com"
      PGADMIN_DEFAULT_PASSWORD: dev
    volumes:
      - pgadmin:/root/.pgadmin
      - ./pgadmin-config/servers.json:/pgadmin4/servers.json
    ports:
      - "5050:80"
    networks:
      - test-network
    restart: unless-stopped
  backend:
    container_name: test-backend
    build:
      context: ./backend
      dockerfile: Dockerfile.local
    ports:
      - '8000:80'
    volumes:
      - ./backend:/app
    command: >
      bash -c "alembic upgrade head
      && exec /start-reload.sh"
    networks:
      - test-network
    depends_on:
      - postgres
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/app/.secret/secret.json
      - APP_DB_CONNECTION_STRING=postgresql+psycopg2://dev:dev@postgres:5432/postgres
      - LOG_LEVEL=debug
      - SQLALCHEMY_ECHO=True
      - AUTH_ENABLED=True
      - CORS=*
      - GCP_ALLOWED_DOMAINS=*
  testing:
    container_name: test-testing
    build:
      context: ./testing
      dockerfile: Dockerfile
    volumes:
      - ./testing:/isp-app
    command: >
      bash -c "/wait
      && robot ."
    networks:
      - test-network
    depends_on:
      - backend
      - frontend
    environment:
      - WAIT_HOSTS= frontend:3000, backend:8000
      - WAIT_TIMEOUT= 3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300
volumes:
  postgres:
  pgadmin:
networks:
  test-network:
    driver: bridge
All the containers are attached to test-network. When the testing container tries to connect to frontend:3000 or backend:8000, it throws "Host [ backend:8000] not yet available".
How can I fix it?
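One hedged observation from the port mappings above: ports: '8000:80' only publishes the backend on the host; on the shared test-network, other containers connect to the container port, so from inside testing the backend would be backend:80 rather than backend:8000 (frontend:3000 is fine because it maps 3000:3000). The stray spaces in the WAIT_* values may also get in the way. A sketch of the adjusted environment block for the testing service:
    environment:
      - WAIT_HOSTS=frontend:3000,backend:80
      - WAIT_TIMEOUT=3000
      - WAIT_SLEEP_INTERVAL=300
      - WAIT_HOST_CONNECT_TIMEOUT=300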

docker-compose: zipkin cannot connect to elasticsearch

I'm trying to set up Zipkin, Elasticsearch, Prometheus, and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS X with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I have this error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one:
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
When using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between Zipkin and ES, but it still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable. From the docs:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
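Applied to the compose file in the question, that would mean pointing both the zipkin and dependencies services at the Elasticsearch base URL, something like:
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=http://storage:9200"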
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the use of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies configuration, the added entrypoint:
entrypoint: crond -f
This one is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?

Docker compose 3.1, communication between two docker-compose.yml

For the past 3 days I have been trying to connect two Docker containers generated by two separate docker-compose files.
I tried a lot of approaches; none seem to be working. Currently, after adding a network to the compose files, the service with the added network doesn't start. My goal is to access an endpoint from another container.
Endpoint-API compose file:
version: "3.1"
networks:
test:
services:
mariadb:
image: mariadb:10.1
container_name: list-rest-api-mariadb
working_dir: /application
volumes:
- .:/application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=list-api
- MYSQL_USER=list-api
- MYSQL_PASSWORD=root
ports:
- "8003:3306"
webserver:
image: nginx:stable
container_name: list-rest-api-webserver
working_dir: /application
volumes:
- .:/application
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8005:80"
networks:
- test
php-fpm:
build: docker/php-fpm
container_name: list-rest-api-php-fpm
working_dir: /application
volumes:
- .:/application
- ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Consumer compose file:
version: "3.1"
networks:
test:
services:
consumer-mariadb:
image: mariadb:10.1
container_name: consumer-service-mariadb
working_dir: /consumer
volumes:
- .:/consumer
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=consumer
- MYSQL_USER=consumer
- MYSQL_PASSWORD=root
ports:
- "8006:3306"
consumer-webserver:
image: nginx:stable
container_name: consumer-service-webserver
working_dir: /consumer
volumes:
- .:/consumer
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8011:80"
networks:
- test
consumer-php-fpm:
build: docker/php-fpm
container_name: consumer-service-php-fpm
working_dir: /consumer
volumes:
- .:/consumer
- ./docker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
Could someone tell me the best way of accessing the API container from within the consumer? I'm losing my mind on this one.
Your two
networks:
  test:
blocks don't refer to the same network; they are independent and don't talk to each other.
What you could do is have an external, pre-existing network that both compose files can share. You can create it with docker network create, and then refer to it in your compose files with
networks:
  default:
    external:
      name: some-external-network-here
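A sketch of how that could fit the two files above (network name assumed): create the network once on the host with docker network create, then mark the existing test network as external in both compose files, so the services that already list networks: - test land on the same shared network and can reach each other by container name (for example list-rest-api-webserver on port 80 from the consumer side):
# created once on the host: docker network create some-external-network-here
networks:
  test:
    external:
      name: some-external-network-here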

Docker compose and volume

I have generated a docker-compose yml file with the necessary images. I placed index.php, which nginx needs to display the page, in the current directory, but when I run the docker-compose up command, no files from my working directory are detected in the /application directory of the container.
version: "2"
services:
memcached:
image: memcached:alpine
container_name: juzpay-docker-memcached
mysql:
image: mysql:8.0
container_name: juzpay-docker-mysql
working_dir: /application
volumes:
- .:/application
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=juzpay_db
- MYSQL_USER=root
- MYSQL_PASSWORD=root123
ports:
- "8082:3306"
webserver:
image: nginx:alpine
container_name: juzpay-docker-webserver
working_dir: /application
volumes:
- ./dockerapp:/application
- ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- "8080:80"
php-fpm:
build: phpdocker/php-fpm
container_name: juzpay-docker-php-fpm
working_dir: /application
volumes:
- ./dockerapp:/application
- ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
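A hedged guess based on the volumes above: index.php sits in the current directory, but the webserver and php-fpm services mount ./dockerapp into /application, so files from . never show up there. If the files really live in the project root, those two mounts would need to point at . instead, for example:
  webserver:
    volumes:
      - .:/application
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  php-fpm:
    volumes:
      - .:/application
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini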
