Prometheus is running inside a Docker container (Docker version 18.09.2, build 6247962; docker-compose.yml below), and the scrape target is on localhost:8000, which is served by a Python 3 script.
The error shown for the failed scrape target (at localhost:9090/targets) is:
Get http://127.0.0.1:8000/metrics: dial tcp 127.0.0.1:8000: getsockopt: connection refused
Question: Why is Prometheus in the Docker container unable to scrape the target running on the host computer (Mac OS X)? How can we get Prometheus, running in a Docker container, to scrape a target running on the host?
Failed attempt: we tried replacing this in docker-compose.yml
networks:
  - back-tier
  - front-tier
with
network_mode: "host"
but then we are unable to access the Prometheus admin page at localhost:9090.
Unable to find a solution in similar questions:
Getting error "Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused"
docker-compose.yml
version: '3.3'

networks:
  front-tier:
  back-tier:

services:
  prometheus:
    image: prom/prometheus:v2.1.0
    volumes:
      - ./prometheus/prometheus:/etc/prometheus/
      - ./prometheus/prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - 9090:9090
    networks:
      - back-tier
    restart: always

  grafana:
    image: grafana/grafana
    user: "104"
    depends_on:
      - prometheus
    ports:
      - 3000:3000
    volumes:
      - ./grafana/grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    networks:
      - back-tier
      - front-tier
    restart: always
prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'my-project'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'rigs-portal'
    scrape_interval: 5s
    static_configs:
      - targets: ['127.0.0.1:8000']
Output at http://localhost:8000/metrics
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 65.0
python_gc_objects_collected_total{generation="1"} 281.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 37.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="3",version="3.7.3"} 1.0
# HELP request_processing_seconds Time spend processing request
# TYPE request_processing_seconds summary
request_processing_seconds_count 2545.0
request_processing_seconds_sum 1290.4869346540017
# TYPE request_processing_seconds_created gauge
request_processing_seconds_created 1.562364777766845e+09
# HELP my_inprorgress_requests CPU Load
# TYPE my_inprorgress_requests gauge
my_inprorgress_requests 65.0
Python 3 script

from prometheus_client import start_http_server, Summary, Gauge
import random
import time

# Create a metric to track time spent and requests made
REQUEST_TIME = Summary("request_processing_seconds", 'Time spend processing request')

@REQUEST_TIME.time()  # time each call and count it in the Summary
def process_request(t):
    time.sleep(t)

if __name__ == "__main__":
    # Expose the metrics endpoint on port 8000 and keep generating samples
    start_http_server(8000)
    g = Gauge('my_inprorgress_requests', 'CPU Load')
    g.set(65)
    while True:
        process_request(random.random())
While not a very common use case, you can indeed connect from your container to your host.
From https://docs.docker.com/docker-for-mac/networking/
I want to connect from a container to a service on the host
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
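Applied to this question, only the target for the Python app needs to change; a minimal sketch of the scrape config, assuming the same job names as in the prometheus.yml above:

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']               # Prometheus scraping itself inside the container
  - job_name: 'rigs-portal'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:8000']    # resolves to the macOS host from inside the container

After editing the file, restart the Prometheus container so the new configuration is picked up.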
For reference, for people who find this question through search: this is now supported as of Docker 20.10. See the following links:
How to access host port from docker container
and:
https://github.com/docker/for-linux/issues/264#issuecomment-823528103
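On Linux with Docker Engine 20.10+, the name is not defined automatically; a minimal sketch (assuming a Compose service like the prometheus one above) maps it to the host gateway with extra_hosts, after which host.docker.internal:8000 works as a target there too:

services:
  prometheus:
    image: prom/prometheus
    extra_hosts:
      - "host.docker.internal:host-gateway"   # Docker 20.10+: resolves to the host from inside the container
    ports:
      - 9090:9090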
Below is an example of running Prometheus on Docker for macOS that scrapes a simple Spring Boot application running on localhost:8080:
Bash
docker run --rm --name prometheus -p 9090:9090 -v /Users/YourName/conf/prometheus.yml:/etc/prometheus/prometheus.yml -d prom/prometheus
/Users/YourName/conf/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:8080']
In this case it is the special hostname host.docker.internal, rather than localhost, that lets the host be resolved from the container on macOS; the config file itself is simply bind-mounted into the Prometheus container.
Environment
MacBook Pro, Apple M1 Pro
Docker version 20.10.17, build 100c701
Prometheus 2.38
Related
I was referring to the example given in the Elasticsearch documentation for starting the Elastic Stack (Elasticsearch and Kibana) on Docker using docker compose. It gives an example docker compose version 2.2 file, so I tried to convert it to a docker compose version 3.8 file. It also creates three Elasticsearch nodes and has security enabled. I want to keep it minimal to start with, so I tried to turn off security and also reduce the number of Elasticsearch nodes to 2. This is how my current compose file looks:
version: "3.8"
services:
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64
volumes:
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
environment:
- node.name=es01
- cluster.name=docker-cluster
- cluster.initial_master_nodes=es01
- bootstrap.memory_lock=true
- xpack.security.enabled=false
deploy:
resources:
limits:
memory: 1g
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
# [
# "CMD-SHELL",
# # "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
# ]
# Changed to:
test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
- es01
image: docker.elastic.co/kibana/kibana:8.0.0-amd64
volumes:
- kibanadata:/usr/share/kibana/data
ports:
- 5601:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://localhost:9200
deploy:
resources:
limits:
memory: 1g
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
esdata01:
driver: local
kibanadata:
driver: local
Then, I tried to run it:
docker stack deploy -c docker-compose.nosec.noenv.yml elk
Creating network elk_default
Creating service elk_es01
Creating service elk_kibana
When I tried to check their status, it displayed the following:
$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3dcd08134e38 docker.elastic.co/kibana/kibana:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (health: starting) 5601/tcp elk_kibana.1.ng8aspz9krfnejfpsnqzl2sci
7b548a43c45c docker.elastic.co/elasticsearch/elasticsearch:8.0.0-amd64 "/bin/tini -- /usr/l…" 3 minutes ago Up 3 minutes (healthy) 9200/tcp, 9300/tcp elk_es01.1.d9a107j6wkz42shti3n6kpfmx
I noticed that Kibana's status gets stuck at (health: starting). When I checked Kibana's logs with the command docker service logs -f elk_kibana, they had the following WARN and ERROR lines:
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 127.0.0.1:9200
It seems that Kibana is not able to connect to Elasticsearch, but why? Is it because security is disabled, and security cannot be disabled?
PS-1: Earlier, when I set the Elasticsearch host as follows in Kibana's environment in the docker compose file:
ELASTICSEARCH_HOSTS=https://es01:9200 # that is 'es01' instead of `localhost`
it gave me the following error:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND es01
So, after checking this question, I changed es01 to localhost as shown earlier (that is, in the complete docker compose file content before PS-1).
PS-2: Replacing localhost with 192.168.0.104 gives the following errors:
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 192.168.0.104:9200
[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. write EPROTO 140274197346240:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
Try this:
ELASTICSEARCH_HOSTS=http://es01:9200
I don't know why it runs on my PC, since Elasticsearch is supposed to use SSL, but in your case using http should work just fine.
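A sketch of the relevant Kibana settings in the compose file above (es01 is the Elasticsearch service name from that file; only the scheme and host change):

kibana:
  environment:
    - SERVERNAME=kibana
    - ELASTICSEARCH_HOSTS=http://es01:9200   # plain http since xpack.security.enabled=false, and the service name instead of localhost

Inside the Kibana container, localhost is the Kibana container itself, which is why the original setting produced ECONNREFUSED 127.0.0.1:9200; the service name resolves over the stack's network instead.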
I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog transform and send these messages to Elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup: it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitely
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not (the rsyslog service has no networks entry). Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 is reachable.
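A sketch of the missing piece in the compose file above: attach the rsyslog service to the same network as elasticsearch (the service and network names are the ones already defined in that file):

rsyslog:
  image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
  networks:
    - logging-network   # without this, the container sits on the default network and cannot reach port 9200

Since omelasticsearch targets localhost:9200 by default (as the comment in rsyslog.conf notes), the action would then also need to point at the elasticsearch container, e.g. by adding server="elasticsearch" to the omelasticsearch action.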
I'm trying to configure Prometheus and Grafana with my Hyperledger Fabric v1.4 network to analyze the peer and chaincode metrics. I've mapped the peer container's port 9443 to my host machine's port 9443 after following this documentation. I've also changed the provider entry to prometheus under the metrics section in the peer's core.yml. I've configured Prometheus and Grafana in docker-compose.yml in the following way.
prometheus:
  image: prom/prometheus:v2.6.1
  container_name: prometheus
  volumes:
    - ./prometheus/:/etc/prometheus/
    - prometheus_data:/prometheus
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--storage.tsdb.path=/prometheus'
    - '--web.console.libraries=/etc/prometheus/console_libraries'
    - '--web.console.templates=/etc/prometheus/consoles'
    - '--storage.tsdb.retention=200h'
    - '--web.enable-lifecycle'
  restart: unless-stopped
  ports:
    - 9090:9090
  networks:
    - basic
  labels:
    org.label-schema.group: "monitoring"

grafana:
  image: grafana/grafana:5.4.3
  container_name: grafana
  volumes:
    - grafana_data:/var/lib/grafana
    - ./grafana/datasources:/etc/grafana/datasources
    - ./grafana/dashboards:/etc/grafana/dashboards
    - ./grafana/setup.sh:/setup.sh
  entrypoint: /setup.sh
  environment:
    - GF_SECURITY_ADMIN_USER={ADMIN_USER}
    - GF_SECURITY_ADMIN_PASSWORD={ADMIN_PASS}
    - GF_USERS_ALLOW_SIGN_UP=false
  restart: unless-stopped
  ports:
    - 3000:3000
  networks:
    - basic
  labels:
    org.label-schema.group: "monitoring"
When I curl 0.0.0.0:9443/metrics on my remote CentOS machine, I get the full list of metrics. However, when I run Prometheus with the above configuration, it throws the error Get http://localhost:9443/metrics: dial tcp 127.0.0.1:9443: connect: connection refused. This is what my prometheus.yml looks like.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'peer_metrics'
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:9443']
Even when I go to the endpoint http://localhost:9443/metrics in my browser, I get all the metrics. What am I doing wrong here? How come Prometheus' own metrics show up in its interface but the peer's don't?
Since the targets are not running inside the prometheus container, they cannot be accessed through localhost. You need to access them through the host's private IP, or by replacing localhost with docker.for.mac.localhost or host.docker.internal.
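For example, a sketch of the peer_metrics job from the prometheus.yml above, with only the target changed (use host.docker.internal on Docker Desktop, or the host's private IP elsewhere):

  - job_name: 'peer_metrics'
    scrape_interval: 10s
    static_configs:
      - targets: ['host.docker.internal:9443']   # instead of localhost:9443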
The problem: you added a scrape job to Prometheus, but on http://localhost:9090/targets the endpoint state is Down, with an error:
Get http://localhost:9091/metrics: dial tcp 127.0.0.1:9091: connect: connection refused
Solution: in prometheus.yml you need to verify that:
the scrape details point to the right endpoint;
the yml indentation is correct;
curl -v http://<serviceip>:<port>/metrics prints the metrics in plain text in your terminal.
Note: if you are pointing to a service in another Docker container, your localhost might not be reachable as localhost; use the service name (the name shown in docker ps) or host.docker.internal (which resolves to the host running the Docker container) instead.
For this example I'll be working with two Docker containers, prometheus and myService.
sudo docker ps
CONTAINER ID IMAGE CREATED PORTS NAMES
abc123 prom/prometheus:latest 2 hours ago 0.0.0.0:9090->9090/tcp prometheus
def456 myService/myService:latest 2 hours ago 0.0.0.0:9091->9091/tcp myService
and then edit the file prometheus.yml (and rerun prometheus)
- job_name: myService
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  static_configs:
    - targets:                         # presenting three options
        - localhost:9091               # plain localhost
        - host.docker.internal:9091    # the host that runs the Docker container
        - myService:9091               # docker container name (worked in my case)
Your prometheus container isn't running on the host network. It's running on its own bridge (the one created by docker-compose). Therefore the scrape config for the peer should point at the IP of the peer container.
Recommended way of solving this:
Run prometheus and grafana in the same network as the fabric network.
In your docker-compose for the prometheus stack you can reference it like this:
networks:
  default:
    external:
      name: <your-hyperledger-network>
(use docker network ls to find the network name)
Then you can use http://<peer_container_name>:9443 in your scrape config
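Putting the two pieces together, a sketch of the scrape job (the peer container name is a placeholder for whatever docker ps shows for your peer):

  - job_name: 'peer_metrics'
    scrape_interval: 10s
    static_configs:
      - targets: ['<peer_container_name>:9443']   # the container name resolves on the shared fabric network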
NOTE
This solution is not for Docker Swarm. It is for standalone containers (multi-container) meant to run on an overlay network.
We get the same error when using an overlay network, and here is the solution (static, NOT dynamic).
this config does not work:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'promswarm'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: [ 'localhost:9100' ]
Nor does the following, even though http://docker.for.mac.localhost:9100/ is reachable; Prometheus still cannot find node-exporter:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'promswarm'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: [ 'docker.for.mac.localhost:9100' ]
But simply by using its container ID we can reach that service via its port number.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a58264faa1a4 prom/prometheus "/bin/prometheus --c…" 5 minutes ago Up 5 minutes 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp unruffled_solomon
62310f56f64a grafana/grafana:latest "/run.sh" 42 minutes ago Up 42 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp wonderful_goldberg
7f1da9796af3 prom/node-exporter "/bin/node_exporter …" 48 minutes ago Up 48 minutes 0.0.0.0:9100->9100/tcp, :::9100->9100/tcp intelligent_panini
So we take the prom/node-exporter container ID, 7f1da9796af3, and update our yml file to:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'promswarm'

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: [ '7f1da9796af3:9100' ]
(Screenshots: the targets page shows the endpoint down with the earlier configs and up with this one.)
UPDATE
I myself was not happy with this hard-coded solution, so after some more searching I found a more reliable approach using --network-alias NAME, which makes the container routable by that name within the overlay network. So the yml looks like this:
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      - targets: [ 'node_exporter:9100' ]
Here the name node_exporter is an alias created with the docker run subcommand, e.g.:
docker run --rm -d -v "/:/host:ro,rslave" --network cloud --network-alias node_exporter --pid host -p 9100:9100 prom/node-exporter --path.rootfs=/host
In a nutshell: on the overlay cloud network you can reach node-exporter as node_exporter:<PORT>.
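For reference, a sketch of the docker-compose equivalent of that run command (the cloud network and the node_exporter alias are the ones used above; this assumes the network already exists as an external overlay network):

services:
  node-exporter:
    image: prom/node-exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    volumes:
      - '/:/host:ro,rslave'
    networks:
      cloud:
        aliases:
          - node_exporter   # the name the scrape config targets
networks:
  cloud:
    external: true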
Well, I remember I resolved the problem by downloading the Prometheus node exporter for Windows.
Check out this link: https://medium.com/@facundofarias/setting-up-a-prometheus-exporter-on-windows-b3e45f1235a5
If you are pointing to a service in another Docker container, your localhost might not be reachable as localhost; use the service name (the name shown in docker ps) or the internal IP of the host running the Docker container instead.
prometheus.yml
job_name: "node-exporter"
static_configs:
targets: ["nodeexporter:9100"] // docker container name
I have a Prometheus setup that monitors metrics exposed by my own services. This works fine for a single instance, but once I start scaling them, Prometheus gets completely confused and starts tracking incorrect values.
All services are running on a single node, through docker-compose.
This is the job in the scrape_configs:
- job_name: 'wowanalyzer'
  static_configs:
    - targets: ['prod:8000']
Each instance of prod tracks metrics in its memory and serves them at /metrics. I'm guessing Prometheus picks a random container each time it scrapes, which leads to the huge increase in counts recorded, building up over time. Instead I'd like Prometheus to read /metrics on all instances simultaneously, regardless of the number of instances active at that time.
docker-gen (https://github.com/jwilder/docker-gen) was developed for this purpose.
You would need to create a sidecar container running docker-gen that generates a new set of targets.
If I remember correctly, the host names generated are prod_1, prod_2, prod_X, etc.
I tried hard to find something to help us with this issue, but it looks like an unsolved problem.
So, I decided to create this tool that helps with this service discovery:
https://github.com/juliofalbo/docker-compose-prometheus-service-discovery
Feel free to contribute and open issues!
You can use the DNS service discovery feature. For example:
docker-compose.yml:
version: "3"
services:
myapp:
image: appimage:v1
restart: always
networks:
- back
prometheus:
image: "prom/prometheus:v2.32.1"
container_name: "prometheus"
restart: "always"
ports: [ "9090:9090" ]
volumes:
- "./prometheus.yml:/etc/prometheus/prometheus.yml"
- "prometheus_data:/prometheus"
networks:
- back
prometheus.yml sample:
global:
  scrape_interval: 15s
  evaluation_interval: 60s

scrape_configs:
  - job_name: 'monitoringjob'
    dns_sd_configs:
      - names: [ 'myapp' ]   # <-- service name from docker-compose
        type: 'A'
        port: 8080
    metrics_path: '/actuator/prometheus'
You can check your DNS records using the nslookup utility from any container in this network:
docker exec -it myapp bash
bash-4.2# yum install bind-utils
bash-4.2# nslookup myapp
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: myapp
Address: 172.22.0.2
Name: myapp
Address: 172.22.0.7
When I run this command it creates the Docker container, but it shows an exited status and I am not able to get it started.
My goal is to be able to replace the prometheus.yml file with a custom prometheus.yml to monitor nginx running at http://localhost:70/nginx_status.
docker run -it -d --name prometheus3 -p 9090:9090 -v /opt/docker/prometheus:/etc/prometheus prom/prometheus --config.file=/etc/prometheus/prometheus.yml
Here is my prometheus.yml file:
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
scrape_timeout: 5s
static_configs:
- targets: ['localhost: 9090']
- job_name: 'node'
static_configs:
- targets: ['localhost: 70/nginx_status']
You should be able to see the logs of the stopped container by running:
docker logs prometheus3
Anyway, there are (at least) two issues with your configuration:
1.) The prometheus.yml file is invalid, so the prometheus process immediately exits. The scrape_interval and scrape_timeout need to be in a global section, and the indentation was off. See below for an example of a correctly formatted yml file.
2.) You can't just scrape the /nginx_status endpoint; you need to use an nginx exporter which extracts the metrics for you. The Prometheus server then scrapes the nginx exporter to retrieve the metrics. You can find a list of exporters here and pick one that suits you.
Once you have the exporter running, you need to point Prometheus to the address of the exporter so it can be scraped.
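For example, a sketch using the nginx/nginx-prometheus-exporter image (the flag spelling and the stub_status URL are assumptions to adapt to your setup; the exporter listens on 9113 by default):

services:
  nginx-exporter:
    image: nginx/nginx-prometheus-exporter
    command:
      - '-nginx.scrape-uri=http://<your-nginx-host>:70/nginx_status'   # must point at an nginx stub_status endpoint
    ports:
      - 9113:9113

Prometheus then scrapes the exporter on port 9113 rather than nginx itself, as in the working prometheus.yml below.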
Working prometheus.yml:
global:
  scrape_interval: 5s
  scrape_timeout: 5s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['<< host name and port of nginx exporter >>']