I followed this guide (https://www.jeffgeerling.com/blog/2021/monitor-your-internet-raspberry-pi) by Jeff Geerling to install an internet monitoring dashboard using Prometheus and Grafana running in Docker containers.
Everything works great, but I noticed that the data gets deleted after 15 days. A quick search revealed that this is Prometheus's default storage retention setting.
I have tried a lot on my own, but I cannot find a way to change this setting.
I also found this tutorial (https://mkezz.wordpress.com/2017/11/13/prometheus-command-line-flags-in-docker-service/), which as far as I can tell should address exactly this problem, but it doesn't work: running the first command it mentions gives me the error `Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.`
I also found this question (Increasing Prometheus storage retention), but I cannot use the top answer because my Prometheus is running in a Docker container.
Is there an easy way to set a command-line flag for Prometheus, something like `--storage.tsdb.retention.time=30d`?
This is the README file I downloaded when I first installed it:
# Internet Monitoring Docker Stack with Prometheus + Grafana
> This repository is a fork from [maxandersen/internet-monitoring](https://github.com/maxandersen/internet-monitoring), tailored for use on a Raspberry Pi. It has only been tested on a Raspberry Pi 4 running Pi OS 64-bit beta.
Stand up a Docker [Prometheus](http://prometheus.io/) stack containing Prometheus, Grafana with [blackbox-exporter](https://github.com/prometheus/blackbox_exporter), and [speedtest-exporter](https://github.com/MiguelNdeCarvalho/speedtest-exporter) to collect and graph home Internet reliability and throughput.
## Pre-requisites
Make sure Docker and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your Docker host machine.
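You can verify both are available with:

```sh
docker --version
docker-compose --version
```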
## Quick Start
```sh
git clone https://github.com/geerlingguy/internet-monitoring
cd internet-monitoring
docker-compose up -d
```
Go to [http://localhost:3030/d/o9mIe_Aik/internet-connection](http://localhost:3030/d/o9mIe_Aik/internet-connection) (change `localhost` to your docker host ip/name).
## Configuration
To change which hosts you ping, edit the `targets` section in the [/prometheus/pinghosts.yaml](./prometheus/pinghosts.yaml) file.
For the speedtest, the only relevant configuration is how often you want the check to happen. It defaults to every 30 minutes, which might be too often if you have a download limit. This is changed by editing `scrape_interval` under `speedtest` in [/prometheus/prometheus.yml](./prometheus/prometheus.yml).
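For example, the relevant part of [/prometheus/prometheus.yml](./prometheus/prometheus.yml) might look roughly like this (exact job name and values may differ in your copy):

```yaml
scrape_configs:
  - job_name: 'speedtest'
    scrape_interval: 30m   # how often to run the speedtest
    scrape_timeout: 60s    # a speedtest run takes ~30s, so allow extra time
    static_configs:
      - targets: ['speedtest:9798']
```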
Once configuration is done, run the following command:

```sh
docker-compose up -d
```
That's it. docker-compose builds the entire Grafana and Prometheus stack automagically.
The Grafana dashboard is now accessible via `http://<Host IP Address>:3030`, for example http://localhost:3030
username - admin
password - wonka (Password is stored in the `config.monitoring` env file)
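That env file presumably holds the password via the standard Grafana environment variable, along these lines:

```sh
GF_SECURITY_ADMIN_PASSWORD=wonka
GF_USERS_ALLOW_SIGN_UP=false
```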
The DataSource and Dashboard for Grafana are automatically provisioned.
If all works, it should be available at http://localhost:3030/d/o9mIe_Aik/internet-connection - if no data shows up, try changing the time range to something smaller.
<center><img src="images/dashboard.png" width="4600" height="500"></center>
## Interesting URLs
http://localhost:9090/targets shows the status of monitored targets as seen from Prometheus - in this case, which hosts are being pinged, plus the speedtest. Note: the speedtest will take a while before it shows as UP, since it takes about 30s to respond.
http://localhost:9090/graph?g0.expr=probe_http_status_code&g0.tab=1 shows the Prometheus value of `probe_http_status_code` for each host. You can edit/play with additional values. Useful to check everything is okay in Prometheus (in case Grafana is not showing the data you expect).
http://localhost:9115 is the blackbox exporter endpoint. Lets you see what has failed/succeeded.
http://localhost:9798/metrics is the speedtest exporter endpoint. It takes about 30 seconds to show its result, as it runs an actual speedtest when requested.
## Thanks and a disclaimer
Thanks to @maxandersen for making the original project this fork is based on.
Thanks to @vegasbrianc for his work on making a [super easy Docker](https://github.com/vegasbrianc/github-monitoring) stack for running Prometheus and Grafana.
This setup is not secured in any way, so please only use on non-public networks, or find a way to secure it on your own.
After further tinkering, I found the docker-compose.yml file, and I simply added `--storage.tsdb.retention.time=30d` under the `command:` section of the prometheus service, as shown here:
version: "3.1"
volumes:
prometheus_data: {}
grafana_data: {}
networks:
front-tier:
back-tier:
services:
prometheus:
image: prom/prometheus:v2.25.2
restart: always
volumes:
- ./prometheus/:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
ports:
- 9090:9090
links:
- ping:ping
- speedtest:speedtest
networks:
- back-tier
grafana:
image: grafana/grafana
restart: always
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
depends_on:
- prometheus
ports:
- 3030:3000
env_file:
- ./grafana/config.monitoring
networks:
- back-tier
- front-tier
ping:
tty: true
stdin_open: true
expose:
- 9115
ports:
- 9115:9115
image: prom/blackbox-exporter
restart: always
volumes:
- ./blackbox/config:/config
command:
- '--config.file=/config/blackbox.yml'
networks:
- back-tier
speedtest:
tty: true
stdin_open: true
expose:
- 9798
ports:
- 9798:9798
image: miguelndecarvalho/speedtest-exporter
restart: always
networks:
- back-tier
nodeexp:
privileged: true
image: prom/node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
ports:
- 9100:9100
restart: always
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- --collector.filesystem.ignored-mount-points
- "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
networks:
- back-tier
By then running `docker-compose create` followed by `docker start internet-monitoring_prometheus_1`, I can see a storage retention of 30 days under [Hostname of Server]:9090/status.
Is that the way it should be done? I think I have found my solution.
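For what it's worth, I believe the more common way to apply a compose file change is to let Compose recreate the affected container itself:

```sh
cd internet-monitoring
# Compose detects the changed command: section and recreates only the prometheus container
docker-compose up -d
```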
Related
I would like to configure load balancing in the docker-compose.yml file for a NiFi cluster deployed via Docker containers.
The current docker-compose parameters for LB are as follows (for each of the three NiFi nodes):
```yaml
# load balancing
- NIFI_CLUSTER_LOAD_BALANCE_PORT=6342
- NIFI_CLUSTER_LOAD_BALANCE_HOST=node.name
- NIFI_CLUSTER_LOAD_BALANCE_CONNECTIONS_PER_NODE=4
- NIFI_CLUSTER_LOAD_BALANCE_MAX_THREADS=8
```
But when I try to use load balancing on queues, I can choose all the parameters there and get no errors, yet LB is not working: everything is done on the primary node (I run GetSFTP on the primary node only, but want to then process the data on all three nodes). The NiFi cluster is also configured to work with SSL.
Thanks in advance!
I had to open the load balance port in my compose file, and I also had to specify the hostname in each node's compose file.
Here is my compose file for basic clustering:
version: "3.3"
services:
nifi_service:
container_name: "nifi_service"
image: "apache/nifi:1.11.4"
hostname: "APPTHLP7"
environment:
- TZ=Europe/Istanbul
- NIFI_CLUSTER_IS_NODE=true
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8088
- NIFI_ZK_CONNECT_STRING=172.16.2.238:2181,172.16.2.240:2181,172.16.2.241:2181
ports:
- "8080:8080"
- "8088:8088"
- "6342:6342"
volumes:
- /home/my/nifi-conf:/opt/nifi/nifi-current/conf
networks:
- my_network
restart: unless-stopped
networks:
my_network:
external: true
Please note that you have to configure the load balance strategy on the downstream connection in your flow.
I'm facing a relatively simple problem here, but I'm starting to wonder why it doesn't work.
I want to start two Docker containers with Docker Compose: InfluxDB and Chronograf.
Unfortunately, Chronograf does not reach InfluxDB under the given hostname: "Unable to connect to InfluxDB Influx 1: Error contacting source".
What could be the reason for this?
Here is my docker-compose.yml:
version: "3.8"
services:
influxdb:
image: influxdb
restart: unless-stopped
ports:
- 8086:8086
volumes:
- influxdb-volume:/var/lib/influxdb
networks:
- test
chronograf:
image: chronograf
restart: unless-stopped
ports:
- 8888:8888
volumes:
- chronograf-volume:/var/lib/chronograf
depends_on:
- influxdb
networks:
- test
volumes:
influxdb-volume:
chronograf-volume:
networks:
test:
driver: bridge
I have also tried starting a shell inside the two containers and then pinging one container from the other, or using wget to reach the HTTP API of the other container. Even this communication between the containers does not work; with both wget and ping I get timeouts.
I should mention that I am using a Banana Pi BPI-M1 here. Is it possible that container-to-container communication fails because of the Linux it runs?
If not configured otherwise, Chronograf will try to access InfluxDB on localhost:8086. To reach the correct InfluxDB instance, you need to specify the URL accordingly, using either the --influxdb-url command-line flag or (personal preference) the environment variable INFLUXDB_URL. Either should be set to http://influxdb:8086, which uses the Docker DNS name derived from the service name in your compose file (the keys one level below services:).
This should do the trick (snippet):
```yaml
chronograf:
  image: chronograf
  restart: unless-stopped
  ports:
    - 8888:8888
  volumes:
    - chronograf-volume:/var/lib/chronograf
  environment:
    - INFLUXDB_URL=http://influxdb:8086
  depends_on:
    - influxdb
  networks:
    - test
```
Please check the Chronograf readme (section "Using the container with InfluxDB") for details on configuring the image, and the Docker Compose networking docs for more info about networks and DNS naming.
The Docker service creates some iptables entries in the filter and nat tables. My OpenVPN gateway script executed the following commands at startup:
```sh
iptables --flush -t filter
iptables --flush -t nat
```
This deletes Docker's entries, and communication between the containers and the Internet is no longer possible.
I have rewritten the script, and now everything works again.
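A minimal sketch of one way to fix this (assuming your setup tolerates a daemon restart): let Docker re-create its own rules after the flush:

```sh
# flush as before (this also wipes Docker's filter/nat entries)
iptables --flush -t filter
iptables --flush -t nat

# restarting the Docker daemon makes it re-create its iptables rules
systemctl restart docker
```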
I have the following compose file, in which I share generated HTML data from a Jenkins container to the host drive, and an Nginx container reads this data from the host drive. I'm using Ubuntu Server 18.04 on AWS.
The problem is that I can read the contents of jenkins/workspace/allure-report only once. After the HTML data is updated, it becomes inaccessible to Nginx, which then throws a 403 status code.
I tried all the possible solutions, but nothing works. The only ugly workaround is to restart the Nginx container after every update of the HTML data. I don't like this approach and am looking for some built-in Docker feature to resolve this.
What didn't help: sharing a volume directly between the containers without using the docker host drive, using the rslave option, using a separate Docker volume as a buffer between the two containers... I believe it should be much easier!
```yaml
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: "jenkins/jenkins"
    ports:
      - "8088:8080"
      - "50000:50000"
    env_file:
      - variables.env
    volumes:
      - ./jenkins:/var/jenkins_home

  selenoid:
    container_name: selenoid
    network_mode: bridge
    image: "aerokube/selenoid"
    # default directory for browsers.json is /etc/selenoid/
    command: -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -timeout 3m
    ports:
      - "4444:4444"
    env_file:
      - variables.env
    volumes:
      - $PWD:/etc/selenoid/ # assumed current dir contains browsers.json
      - /var/run/docker.sock:/var/run/docker.sock

  selenoid-ui:
    container_name: selenoid-ui
    network_mode: bridge
    image: "aerokube/selenoid-ui"
    links:
      - selenoid
    ports:
      - "8080:8080"
    env_file:
      - variables.env
    command: ["--selenoid-uri", "http://selenoid:4444"]

  nginx:
    container_name: nginx
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./jenkins/workspace/allure-report:/usr/share/nginx/html:ro,rslave
```
Found the solution: the easiest way to get access to the dynamic data is to use volumes_from in the container you want to read from.
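A minimal sketch of the nginx service using volumes_from (this mounts the jenkins service's volumes at the same paths inside nginx, so the nginx config would then have to serve the report directory under /var/jenkins_home; that path mapping is my assumption):

```yaml
nginx:
  container_name: nginx
  image: "nginx"
  ports:
    - "80:80"
  # mount all volumes of the jenkins service, read-only
  volumes_from:
    - jenkins:ro
```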
When I configured my compose file like that, I faced another issue: the 403 status was gone, but the data was static. That was my fault, though; I wasn't using the cp -r command correctly, so my data had been copied only once.
I'm trying to pass a Redis URL to a Docker container, but so far I couldn't get it to work. I did a little research, and none of the answers I found worked for me.
```yaml
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'
    container_name: redis
    hostname: redis
    expose:
      - 6379
    links:
      - api

  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - proxy
    environment:
      - REDIS_URL=redis
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=proxy'

networks:
  proxy:
```
Error: Redis connection to redis failed - connect ENOENT redis
You can only communicate between containers on the same Docker network. Docker Compose creates a default network for you, and absent any specific declaration your redis container is on that network. But you also declare a separate proxy network, and only attach the api container to that other network.
The single simplest solution to this is to delete all of the networks: blocks everywhere and just use the default network Docker Compose creates for you. You may need to format the REDIS_URL variable as an actual URL, maybe like redis://redis:6379.
If you have a non-technical requirement to have separate networks, add - default to the networks listing for the api container.
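That would look something like this for the api service (other keys omitted):

```yaml
api:
  networks:
    - default
    - proxy
```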
You have a number of other settings in your docker-compose.yml that aren't especially useful. expose: does almost nothing at all, and is usually also provided in a Dockerfile. links: is an outdated way to make cross-container calls, and as you've declared it, it would make calls from Redis to your API server (the wrong direction). hostname: has no effect outside the container itself and is usually totally unnecessary. container_name: does have some visible effects, but usually the container name Docker Compose picks is just fine.
This would leave you with:
```yaml
version: '3.2'
services:
  redis:
    image: 'bitnami/redis:latest'

  api:
    image: tufanmeric/api:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
    deploy:
      mode: global
      labels:
        - 'traefik.port=3002'
        - 'traefik.frontend.rule=PathPrefix:/'
        - 'traefik.frontend.rule=Host:api.example.com'
        - 'traefik.docker.network=default'
```
I have a Docker-based system that comprises three containers:
1. The official PHP container, modified with some additional PEAR libs
2. mysql:5.7
3. alterrebe/postfix-relay (a Postfix container)
The official PHP container has a volume that is linked to the host system's code repository, which should in theory allow me to work on this application the same as I would if it were hosted "locally".
However, every time the system is brought up, I have to run
```sh
docker-compose stop && docker-compose up -d
```
in order to see the changes I just made to the system. It's possible that I don't understand Docker correctly and this is by design, but stopping and starting the containers after every code change slows down development substantially. Can anyone tell me what I am doing wrong (if anything)? Thanks in advance.
My docker-compose.yml is below (with variables and whatnot hidden, of course):
```yaml
web:
  build: .
  links:
    - mysql
    - mailrelay
  environment:
    - HIDDEN_VAR=placeholder
    - ABC_ENV=development
  volumes:
    - ./html/:/var/www/html/
  ports:
    - "0.0.0.0:80:80"

mysql:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=abcdefg
    - MYSQL_DATABASE=thedatabase
  volumes:
    - .:/db/:ro

mailrelay:
  hostname: mailrelay
  image: alterrebe/postfix-relay
  ports:
    - "25:25"
  environment:
    - EXT_RELAY_HOST=relay.relay.com
    - EXT_RELAY_PORT=25
    - SMTP_LOGIN=CLASSIFIED
    - SMTP_PASSWORD=ABCDEFGHIK
    - ACCEPTED_NETWORKS=172.0.0.0/8
```
Eventually I just started running
```sh
docker stop {{ container name }} && docker start {{ container name }}
```
every time instead of docker-compose. Using Docker directly instead of docker-compose is super fast (< 1 second as opposed to over a minute), so it stopped being a big deal.
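Note that `docker restart` combines the two steps into one command:
```sh
docker restart {{ container name }}
```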