Grafana getting empty response from Prometheus - docker

I'm using the following Docker Compose File to startup Prometheus and Grafana:
version: '3.9'
services:
  prometheus:
    build: ~/programming/tools/prometheus-2.39.1.linux-amd64
    ports:
      - "9090:9090"
  alertmanager:
    build: ~/programming/tools/prometheus-2.39.1.linux-amd64/alertmanager
    ports:
      - "9093:9093"
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
I'm able to ping the prometheus container from within the grafana container.
But I'm unable to configure the Prometheus data source in the Grafana UI.
I always get an empty response in the logs of grafana.
grafana_1 | logger=context userId=1 orgId=1 uname=admin t=2022-12-02T06:33:08.707608032Z level=error msg="Internal server error" error="[plugin.downstreamError] failed to query data: received empty response from prometheus" remote_addr=172.18.0.1 traceID=
I'm thinking this could be because of the configured proxy, but I don't know how to set this in Grafana for the data source.
I have set the proxy settings in the ~/.docker/config.json file.
The strange thing is that I'm able to configure a different data source, such as MySQL, if I add a MySQL container to the docker-compose file.

You can try to define NO_PROXY environment variable in Grafana's configuration and add Prometheus' URL to it.
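One way to apply that suggestion is through the grafana service's environment in the compose file. A sketch (the proxy address is hypothetical; "prometheus" is the service name from the compose file above):

```yaml
grafana:
  image: grafana/grafana
  ports:
    - 3000:3000
  environment:
    # hypothetical proxy address - replace with your actual proxy
    - HTTP_PROXY=http://proxy.example.com:3128
    - HTTPS_PROXY=http://proxy.example.com:3128
    # bypass the proxy for in-network names
    - NO_PROXY=prometheus,localhost,127.0.0.1
```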

Related

Docker inter-container communication

I'm facing a relatively simple problem here but I'm starting to wonder why it doesn't work.
I want to start two Docker containers with Docker Compose: InfluxDB and Chronograf.
Unfortunately, Chronograf does not reach InfluxDB under the given hostname: "Unable to connect to InfluxDB Influx 1: Error contacting source"
What could be the reason for this?
Here is my docker-compose.yml:
version: "3.8"
services:
  influxdb:
    image: influxdb
    restart: unless-stopped
    ports:
      - 8086:8086
    volumes:
      - influxdb-volume:/var/lib/influxdb
    networks:
      - test
  chronograf:
    image: chronograf
    restart: unless-stopped
    ports:
      - 8888:8888
    volumes:
      - chronograf-volume:/var/lib/chronograf
    depends_on:
      - influxdb
    networks:
      - test
volumes:
  influxdb-volume:
  chronograf-volume:
networks:
  test:
    driver: bridge
I have also tried starting a shell inside the two containers and then pinging one container from the other, or using wget to reach the other container's HTTP API. Even this communication between the containers does not work; both ping and wget time out.
It must be said that I use a Banana Pi BPI-M1 here. Is it possible that container-to-container communication does not work because of the Linux distribution?
If not configured, chronograf will try to access influxdb on localhost:8086. To be able to reach the correct influxdb instance, you need to specify the URL accordingly, using either the --influxdb-url command line flag or (personal preference) an environment variable INFLUXDB_URL. Either should be set to http://influxdb:8086, which is the Docker DNS name derived from the service name in your compose file (the keys one level below services).
This should do the trick (snippet):
chronograf:
  image: chronograf
  restart: unless-stopped
  ports:
    - 8888:8888
  volumes:
    - chronograf-volume:/var/lib/chronograf
  environment:
    - INFLUXDB_URL=http://influxdb:8086
  depends_on:
    - influxdb
  networks:
    - test
Please check the chronograf readme (section "Using the container with InfluxDB") for details on configuring the image, and the Docker Compose networking docs for more information about networks and DNS naming.
The Docker service creates some iptables entries in the tables filter and nat. My OpenVPN Gateway script executed the following commands at startup:
iptables --flush -t filter
iptables --flush -t nat
This deletes Docker's entries, and communication between the containers and with the Internet is no longer possible.
I have rewritten the script and now everything works again.
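If Docker's chains have already been flushed, restarting the Docker daemon rebuilds them. A sketch, assuming a systemd-based host:

```shell
# Docker recreates its filter/nat chains on startup
sudo systemctl restart docker
```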

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with Keycloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO Client, and a third one for the Keycloak server.
As you can see in the following snippet, the configuration of the MinIO Client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as depicted in step 3 (Configure MinIO) of the documentation. In more detail, the command I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change the localhost in the config_url to the IP of the Keycloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will keep searching for something more robust than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file as the one below in order to overcome the issues without having to manually place the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
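With the compose file above, the Keycloak container is reachable from the client container by its service name, so the hardcoded IP from the temporary fix can be dropped. A sketch, reusing the realm and client_id from the question ("kcd" is the Keycloak service name defined above):

```shell
mc admin config set myminio identity_openid \
  config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" \
  client_id="account"
```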
Connection refused occurs when the port is not reachable on the hostname or IP you specified.
Try publishing the port with -p when using the docker CLI (--expose only makes a port available to other containers, not to the host). Once the port is published, you can access the service on localhost.

How do I run a docker compose file with modified telegraf config file?

I'm trying to run a docker compose file on MacOS to run Telegraf, Mosquitto (MQTT), Grafana and InfluxDB. I'm trying to run Telegraf with a modified config file. The ultimate aim is to store and display data being sent from an arduino muscle sensor.
The docker compose file currently looks like this:
version: '3'
services:
  influxdb:
    container_name: influxdb
    image: influxdb
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=sensors
      - INFLUXDB_ADMIN_USER=telegraf
      - INFLUXDB_ADMIN_PASSWORD=telegraf
    restart: always
  telegraf:
    image: telegraf
    restart: always
    ports:
      - "5050:5050"
    volumes:
      - $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro
  grafana:
    container_name: grafana
    image: grafana/grafana
    links:
      - influxdb
    hostname: grafana
    ports:
      - "3000:3000"
  mosquitto:
    container_name: mosquitto
    image: eclipse-mosquitto
    ports:
      - "1883:1883"
      - "9001:9001"
    depends_on:
      - influxdb
    restart: always
I can run the build command, and Mosquitto, Grafana and InfluxDB all seem to run. However, there are a number of issues with Telegraf. Regardless of what changes I make to the volumes in the compose file, Telegraf uses its default config file instead of my modified one, which seems to prevent data being sent to InfluxDB.
The Telegraf post to InfluxDB error looks as follows:
telegraf | 2020-03-03T11:40:49Z E! [outputs.influxdb] When writing to [http://localhost:8086]: Post http://localhost:8086/write?db=telegraf: dial tcp 127.0.0.1:8086: connect: connection refused
telegraf | 2020-03-03T11:40:49Z E! [agent] Error writing to outputs.influxdb: could not write any address
Mosquitto seems to work, as the MQTT.fx app is able to connect and publish/subscribe to the container. However, connections from an unknown client are regularly made and dropped.
The following connection error continuously logs:
mosquitto | 1583247033: New connection from 172.25.0.1 on port 1883.
mosquitto | 1583247033: Client <unknown> disconnected due to protocol error.
I've considered writing a Telegraf Dockerfile to set the config file, however this seems like overkill, as my understanding is that the volumes section of the compose file should allow this config file to be used.
My telegraf.conf file is in the same directory as the docker-compose.yml file.
The questions are:
a) Is my belief correct that the container is using the default telegraf config file?
b) How do I get the altered telegraf.conf file onto the container?
Without having your telegraf config file, it's not easy to tell whether it's failing to load correctly or whether there's a network issue...
I find instead of using: $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro, it makes more sense to include config files for docker in a local subdirectory:
.docker/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro.
If you want to use a full path with PWD, I suggest ${PWD}. Both forms reference the same environment variable; the braced form ${PWD} is just the unambiguous syntax (it cannot be misread as part of a longer variable name), and you can check what it expands to in a terminal with echo ${PWD}.
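Since Compose resolves relative host paths against the directory containing the compose file, the variable can also be avoided entirely. A sketch of the telegraf service's volumes, assuming telegraf.conf sits next to docker-compose.yml as the question states:

```yaml
telegraf:
  image: telegraf
  volumes:
    - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
```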
For completeness, this stack (based around a BME280 sensor and an Arduino) should be working (for security I've changed the credentials, so if it doesn't work, start there!). I've commented it for my own reference, so I hope it helps you:
version: '3'
# To Do:
# - Setup defined networks to protect influxdb and telegraf
# - Define a backup process for data
# - Monitor implications of version tags/docker container lifecycles
services:
  # MQTT Broker, handles data from sensors
  # https://hub.docker.com/_/eclipse-mosquitto
  mosquitto:
    # Name this container so other containers can find it easily
    # Name used in:
    # - Telegraf config
    container_name: mosquitto
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
      - "9001:9001"
    depends_on:
      - influxdb
    restart: always
    volumes:
      # Use a volume for storage
      # since influxdb stores data long term this might not be needed?
      - mosquitto-storage:/mosquitto/data
  # Data storage
  # https://hub.docker.com/_/influxdb
  influxdb:
    # Name this container so other containers can find it easily
    # Name used in:
    # - Grafana data source
    # - Telegraf config
    container_name: influxdb
    image: influxdb:1.5
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=sensors
      - INFLUXDB_ADMIN_USER=admin-user
      - INFLUXDB_ADMIN_PASSWORD=telegraf-admin-password
      - INFLUXDB_USER=telegraf-username
      - INFLUXDB_PASSWORD=telegraf-password
    restart: always
    volumes:
      # Data persistence (could also be a bind mount: /srv/docker/influxdb/data:/var/lib/influxdb)
      - influxdb-storage:/var/lib/influxdb
      # Backups...
      - ./influxdb-backup:/backup
      # Host can run the following on a crontab, then rsnapshot can pickup:
      # docker exec -it influxdb influxd backup -database sensors /backup
  # Message formatter (MQTT -> InfluxDB)
  # https://hub.docker.com/_/telegraf
  telegraf:
    image: telegraf:1.11
    restart: always
    ports:
      - "5050:5050"
    depends_on:
      - influxdb
    volumes:
      # This file needs to be edited with your MQTT topics, host, etc
      - .docker/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
  # Dashboard/graphing
  # https://hub.docker.com/r/grafana/grafana
  grafana:
    # Grafana tags are a mess! just use whatever...
    image: grafana/grafana
    links:
      - influxdb
    hostname: grafana
    ports:
      - "3000:3000"
    depends_on:
      - influxdb
    volumes:
      # Grafana gets grumpy over bind mount permissions, use a volume
      - grafana-storage:/var/lib/grafana
volumes:
  mosquitto-storage:
  influxdb-storage:
  grafana-storage:
And my config file for telegraf looks like this:
[[inputs.mqtt_consumer]]
  servers = ["tcp://mosquitto:1883"]
  topics = [
    "home/sensor/bme280/temperature",
  ]
  username = "mqttuser"
  password = "mqttpassword"
  data_format = "value"
  data_type = "float"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "sensors"
  skip_database_creation = true
  timeout = "5s"
  username = "telegraf-username"
  password = "telegraf-password"
  user_agent = "telegraf"
  udp_payload = "512B"
Note that I am using the container names instead of local IP addresses within Docker.

Setting up sunbird-telemetry Kafka DRUID and superset

I am trying to create an analytics dashboard based on mobile events. I want to dockerize all the components, deploy them on localhost, and create an analytics dashboard.
Sunbird telemetry https://github.com/project-sunbird/sunbird-telemetry-service
Kafka https://github.com/wurstmeister/kafka-docker
Druid https://github.com/apache/incubator-druid/tree/master/distribution/docker
Superset https://github.com/apache/incubator-superset
What I did
Druid
I executed the command docker build -t apache/incubator-druid:tag -f distribution/docker/Dockerfile .
I executed the command docker-compose -f distribution/docker/docker-compose.yml up
After everything is executed, I open http://localhost:4008/ and see Druid running.
It took 3.5 hours to complete both the build and the run.
Kafka
Navigate to kafka folder
I executed the command docker-compose up -d
Issue
When we execute Druid, a ZooKeeper instance starts running, and when we start Kafka its compose file starts another ZooKeeper, so I cannot establish a connection between Kafka and ZooKeeper.
After I start Sunbird telemetry and try to create a topic and connect to Kafka from Sunbird, it does not connect.
I don't understand what I am doing wrong.
Can we tell Kafka to share the ZooKeeper started by Druid? I am completely new to this environment and these stacks.
I am studying these stacks. Am I doing something wrong? Can anybody point out how to properly connect Kafka and Druid over Docker?
Note: I am running all this on my Mac.
My kafka compose file
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: **localhost ip**
      KAFKA_ZOOKEEPER_CONNECT: **localhost ip**:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Can we tell kafka to share the zookeeper started by DRUID
You would put all services in the same compose file.
Druid's Kafka connection is listed here:
https://github.com/apache/incubator-druid/blob/master/distribution/docker/environment#L31
You can set KAFKA_ZOOKEEPER_CONNECT to the same address, yes
For example, downloading the file above and adding Kafka to the Druid Compose file...
version: "2.2"
volumes:
  metadata_data: {}
  middle_var: {}
  historical_var: {}
  broker_var: {}
  coordinator_var: {}
  overlord_var: {}
  router_var: {}
services:
  # TODO: Add sunbird
  postgres:
    container_name: postgres
    image: postgres:latest
    volumes:
      - metadata_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=FoolishPassword
      - POSTGRES_USER=druid
      - POSTGRES_DB=druid
  # Need 3.5 or later for container nodes
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.5
    environment:
      - ZOO_MY_ID=1
  druid-coordinator:
    image: apache/incubator-druid
    container_name: druid-coordinator
    volumes:
      - coordinator_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
    ports:
      - "3001:8081"
    command:
      - coordinator
    env_file:
      - environment
  # renamed to druid-broker
  druid-broker:
    image: apache/incubator-druid
    container_name: druid-broker
    volumes:
      - broker_var:/opt/druid/var
    depends_on:
      - zookeeper
      - postgres
      - druid-coordinator
    ports:
      - "3002:8082"
    command:
      - broker
    env_file:
      - environment
  # TODO: add other Druid services
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181/kafka  # This is the same service that Druid is using
Can we tell kafka to share the zookeeper started by DRUID
Yes, as there's a zookeeper.connect setting for the Kafka broker that specifies the ZooKeeper address to which Kafka will try to connect. How to do it depends entirely on the Docker image you're using. For example, one of the popular images, wurstmeister/kafka-docker, does this by mapping all environment variables starting with KAFKA_ to broker settings and adding them to server.properties, so that KAFKA_ZOOKEEPER_CONNECT becomes the zookeeper.connect setting. I suggest taking a look at the official documentation to see what else you can configure.
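In compose terms, the mapping described above looks like this (a sketch; the KAFKA_ prefix is stripped and the remainder is lowercased with underscores turned into dots):

```yaml
kafka:
  image: wurstmeister/kafka
  environment:
    # becomes zookeeper.connect=zookeeper:2181 in server.properties,
    # pointing the broker at the existing "zookeeper" service
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```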
and when we start kafka the docker file starts another zookeeper
This is your issue. It's the docker-compose file that starts Zookeeper, Kafka, and configures Kafka to use the bundled Zookeeper. You need to modify it, by removing the bundled Zookeeper and configuring Kafka to use a different one. Ideally, you should have a single docker-compose file that starts the whole setup.

install influxdb in a docker with non standard port

I am new to Docker and I need some help, please.
I am trying to install TICK in Docker. InfluxDB, Kapacitor and Chronograf will run in Docker containers, but Telegraf will be installed on each machine where it is necessary.
Port 8086 on my host is in use, so I will use 8087 for InfluxDB. Is it possible to configure the InfluxDB Docker container with -p 8087:8086? If so, which port should I configure in the conf files?
Docker compose file will be:
version: '3'
networks:
TICK_network:
services:
influxdb:
image: influxdb
container_name: influxdb
networks:
TICK_network:
ports:
- "8087:8086"
- "8083:8083"
expose:
- "8087"
- "8083"
hostname: influxdb
volumes:
- /var/lib/influxdb:/var/lib/influxdb
- /etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro
restart: unless-stopped
kapacitor:
image: kapacitor
container_name: kapacitor
networks:
TICK_network:
links:
- influxdb
ports:
- "9092:9092"
expose:
- "9092"
hostname: kapacitor
volumes:
- /var/lib/kapacitor:/var/lib/kapacitor
- /etc/kapacitor/kapacitor.conf:/etc/kapacitor/kapacitor.conf:ro
restart: unless-stopped
chronograf:
image: chronograf
container_name: chronograf
networks:
TICK_network:
links:
- influxdb
- kapacitor
ports:
- "8888:8888"
expose:
- "8888"
hostname: chronograf
volumes:
- /var/lib/chronograf:/var/lib/chronograf
restart: unless-stopped
influxdb.conf is edited to point to port 8087
[http]
  enabled = true
  bind-address = ":8087"
  auth-enabled = true
kapacitor.conf and telegraf.conf also point to port 8087.
But I am receiving the following errors:
Telegraf log:
W! [outputs.influxdb] when writing to [http://localhost:8087]: database "telegraf" creation failed: Post http://localhost:8087/query: EOF
E! [outputs.influxdb] when writing to [http://localhost:8087]: Post http://localhost:8087/write?db=tick: EOF
E! [agent] Error writing to outputs.influxdb: could not write any address
kapacitor log:
vl=error msg="encountered error" service=run err="open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed"
run: open server: open service *influxdb.Service: failed to link subscription on startup: authorization failed
What you did is correct if you want to access those services from outside the Docker network, that is, from the host: localhost:8087, for example.
However, this is not what you need in your case. As you are using docker-compose, all the services are in the same network, and therefore you need to target the port on which Influx is listening inside the Docker network (the right-side port of the mapping), that is, 8086.
But even if you do so, it will still not work. Why? Because you are trying to access localhost from the Telegraf container. You need to configure the access to Influx as influxdb:8086, not as localhost:8087. influxdb here is the name of the container; if, for example, you name it ailb90, then it would be ailb90:8086.
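Concretely, in-network clients would point at the service name and the container-side port. A sketch of the relevant kapacitor.conf section, assuming InfluxDB keeps its default 8086 bind-address inside the container:

```toml
[[influxdb]]
  enabled = true
  name = "default"
  # service name + container-side port, not the host mapping 8087
  urls = ["http://influxdb:8086"]
```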
Thanks for your answer. But Telegraf is not installed in a container; this is why I access the database using urls = ["http://localhost:8087"].
On the other hand, Kapacitor is installed in a Docker container. The connection to InfluxDB is made using the string urls=["https://influxdb:8087"]. If I configure Kapacitor with port 8086 it gives a connection error (I think it is because influxdb.conf is pointing to port 8087):
lvl=error msg="failed to connect to InfluxDB, retrying..." service=influxdb cluster=default err="Get http://influxdb:8086/ping: dial tcp 172.0.0.2:8086: connect: connection refused"
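Putting the answer's point together: keep InfluxDB on its default port inside the container and remap only on the host, so host-side clients (like the non-containerized Telegraf) use 8087 while in-network containers use 8086. A sketch:

```yaml
# docker-compose.yml: remap only on the host; leave influxdb.conf at the
# default bind-address ":8086" inside the container
influxdb:
  image: influxdb
  ports:
    - "8087:8086"   # host clients (e.g. telegraf) use localhost:8087
# containers on the same network (kapacitor, chronograf) use influxdb:8086
```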
