RabbitMQ Metrics Not found - docker

I'm running the latest RabbitMQ on Docker, and I am trying to get RabbitMQ's metrics to use with Prometheus, but with no success.
When I access host.docker.internal:15672/metrics I get the following response:
{"Error":"Object Not Found","reason":"Not Found"}

Not sure why my answer was deleted by Jean-François Fabre, but let me quote the official RabbitMQ site, which says that by default Prometheus metrics are exposed on port 15692:
Notice that RabbitMQ exposes the metrics on a dedicated TCP port, 15692 by default.
Source: https://www.rabbitmq.com/prometheus.html#rabbitmq-configuration
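As a minimal sketch with the official Docker image (the container name and port mappings here are assumptions, not taken from the question), enabling the plugin and hitting 15692 instead of 15672 would look like this:

# Publish the Prometheus port (15692) alongside the management port (15672)
docker run -d --name rabbitmq -p 15672:15672 -p 15692:15692 rabbitmq:3-management
# The rabbitmq_prometheus plugin ships with RabbitMQ 3.8+ but may need to be enabled
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_prometheus
# Metrics are served on the dedicated port, not the management API port
curl http://localhost:15692/metrics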

Related

Apache Nutch doesn't expose its API

I'm trying to use the Apache Nutch 1.x REST API. I use Docker images to set up Nutch and Solr. You can see the demo repo here.
Apache Nutch uses Solr as a dependency. Solr works great; I'm able to reach its GUI at localhost:8983.
However, I cannot reach Apache Nutch's API at localhost:8081. The problem starts here. The Apache Nutch 1.x REST API doc indicates that I can start the server like this:
:~$ bin/nutch startserver -port <port_number> [If the port option is not mentioned then by default the server starts on port 8081]
Which I am doing in the docker-compose.yml file.
I'm also exposing the ports to the outside.
ports:
  - "8080:8080"
  - "8081:8081"
But I wasn't able to successfully call the API from my computer.
The REST API documentation says that if I send a GET request to the /admin endpoint, I should get a response.
GET /admin
When I try this with Postman or from the browser, it cannot reach the server and gives me back a 500 error.
However, when I get inside the container with docker exec -it and curl localhost:8081/admin, I get the correct response. So within the container the API is up and running, but it is not exposed to the outside.
In one of my attempts, I added a frontend application in another container and sent REST requests to the Solr and Nutch containers. Solr worked; Nutch failed with 500. This tells me the Nutch container is not only unreachable from the outside world, it is also unreachable from other containers within the same network.
Any idea how to work around this problem?
By default, Nutch only replies to requests from localhost:
bash-5.1# /root/nutch/bin/nutch startserver -help
usage: NutchServer [-help] [-host <host>] [-port <port>]
-help Show this help
-host <host> The host to bind the Nutch Server to. Default is
localhost.
So you need to start it with -host 0.0.0.0 to be able to reach it from the host machine or another container:
services:
  nutch:
    image: 'apache/nutch:latest'
    command: '/root/nutch/bin/nutch startserver -port 8081 -host 0.0.0.0'
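With the server bound to 0.0.0.0 and the 8081 port mapping from the question in place, a quick check from the host machine (a sketch; the exact response depends on the Nutch version) would be:

curl http://localhost:8081/admin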

Cannot capture client metrics with Prometheus

I have a newbie question.
I'm using dockprom (github.com/stefanprodan/dockprom) to capture metrics from a docker-compose setup successfully.
Now I'm trying to monitor specific metrics from my applications using Golang's Prometheus client library, but Prometheus shows my endpoint as down (0), with the following message in the targets section:
Get http://localhost:8090/metrics: dial tcp 127.0.0.1:8090: connect: connection refused
However, if I navigate to http://localhost:8090/metrics I can see the metrics being exposed.
Prometheus is running in a docker-compose set of containers, while my application is running in another.
The declaration of my endpoint in prometheus/prometheus.yml is:
- job_name: 'cloud_server_auth'
  scrape_interval: 10s
  static_configs:
    - targets: ['localhost:8090']
I noticed that cAdvisor was failing when not running in privileged mode, but even after fixing that, I still can't get Prometheus to consume my metrics.
Any thoughts?
Thanks in advance to any who might shed some light on this issue, and please let me know if you need any further information.
Adolfo
If you're running Prometheus in a Docker container, then when Prometheus makes calls to other places to collect metrics, localhost is interpreted relative to the Prometheus container, which is to say, Prometheus is trying to collect metrics from itself.
If this is all running within the same docker-compose.yml file then you can use the Docker Compose services: name of the other container(s) as hostname(s) when configuring metric target(s). The target containers don't necessarily need to have published ports:, and you need to use the port number the process inside the container is running on – if your ports: remap a container port to a different host port, use the second (container) port number, not the first (host).
This is the same setup as other service-to-service calls within the same docker-compose.yml file. Networking in Compose has more details on the container network environment.
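As a sketch, assuming the application's Compose service is named cloud_server_auth and its process listens on 8090 inside the container (both assumptions), the scrape config would target the service name rather than localhost:

scrape_configs:
  - job_name: 'cloud_server_auth'
    scrape_interval: 10s
    static_configs:
      - targets: ['cloud_server_auth:8090']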

Concourse not showing up on 8080 when running on GCP

I'm setting up a Concourse Docker container by following https://concoursetutorial.com/, but on GCP Compute Engine. The tutorial says to access the UI at http://127.0.0.1:8080/ in your browser.
Since I am running on GCP, I entered :8080, but I am getting "This site can’t be reached".
Note: I have enabled port 8080 in GCP and also on the Compute Engine instance.
You need to create a firewall rule that allows port 8080 on GCP and add the instance's network tag as the target.
See detailed instructions here:
https://cloud.google.com/vpc/docs/using-firewalls#creating_firewall_rules
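As a rough sketch with the gcloud CLI (the rule name, the "concourse" tag, INSTANCE_NAME, and ZONE are placeholders, not taken from the question):

# Allow inbound TCP 8080 to instances tagged "concourse"
gcloud compute firewall-rules create allow-concourse-8080 \
    --direction=INGRESS --allow=tcp:8080 --target-tags=concourse
# Attach the tag to the VM running Concourse
gcloud compute instances add-tags INSTANCE_NAME --tags=concourse --zone=ZONE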

Sporadic 503s from specified ports

I've been working on using Rancher to manage our dashboard applications. Part of this has involved exposing multiple Kibana containers from the same port, and one Kibana 3 container exposed on port 80.
I therefore want to send requests on specific ports (5602, 5603, 5604) to specific containers, so I set up the following docker-compose.yml config:
kibana:
  image: rancher/load-balancer-service
  ports:
    - 5602:5602
    - 5603:5603
    - 5604:5604
  links:
    - kibana3:kibana3
    - kibana4-logging:kibana4-logging
    - kibana4-metrics:kibana4-metrics
  labels:
    io.rancher.loadbalancer.target.kibana3: 5602=80
    io.rancher.loadbalancer.target.kibana4-logging: 5603=5601
    io.rancher.loadbalancer.target.kibana4-metrics: 5604=5601
Everything works as expected, but I get sporadic 503s. When I go into the container and look at the haproxy.cfg, I see:
frontend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_frontend
        bind *:5603
        mode http
        default_backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend

backend d898fb95-ec51-4c73-bdaa-cc0435d8572a_5603_2_backend
        mode http
        timeout check 2000
        option httpchk GET /status HTTP/1.1
        server cbc23ed9-a13a-4546-9001-a82220221513 10.42.60.179:5603 check port 5601 inter 2000 rise 2 fall 3
        server 851bdb7d-1f6b-4f61-b454-1e910d5d1490 10.42.113.167:5603
        server 215403bb-8cbb-4ff0-b868-6586a8941267 10.42.85.7:5601
The IPs listed are all three Kibana containers. The first server has a health check against it, but the others do not (Kibana 3 and Kibana 4.1 don't have a status endpoint). My understanding of the docker-compose config is that there should be only one server per backend, but all three appear to be listed. I assume this is at least partly behind the sporadic 503s, and removing the extra servers manually and restarting the haproxy service does seem to solve the problem.
Am I configuring the load balancer incorrectly, or is this worth raising as a GitHub issue with Rancher?
I posted on the Rancher forums as that was suggested by Rancher Labs on Twitter: https://forums.rancher.com/t/load-balancer-sporadic-503s-with-multiple-port-bindings/2358
Someone from Rancher posted a link to a GitHub issue similar to what I was experiencing: https://github.com/rancher/rancher/issues/2475
In summary, the load balancer will rotate through all matching backends. There is a workaround involving "dummy" domains, which I've confirmed does work with my configuration, even if it is slightly inelegant.
labels:
  # Create a rule that forces all traffic to redis at port 3000 to have a hostname of bogus.com
  # This eliminates any traffic from port 3000 to be directed to redis
  io.rancher.loadbalancer.target.conf/redis: bogus.com:3000
  # Create a rule that forces all traffic to api at port 6379 to have a hostname of bogus.com
  # This eliminates any traffic from port 6379 to be directed to api
  io.rancher.loadbalancer.target.conf/api: bogus.com:6379
(^^ Copied from the Rancher GitHub issue, not my workaround)
I'm going to see how easy it would be to route by port and raise a PR/GitHub issue, as I think it's a valid use case for an LB in this scenario.
Make sure that you are using the port initially exposed on the Docker container. For some reason, if you bind it to a different port, HAProxy fails to work. If you are using a container from Docker Hub that uses a port already taken on your system, you may have to rebuild that container to use a different port by routing it through a proxy like nginx.

Can't access Heapster's InfluxDB port 8083

I followed this guide to deploy my Kubernetes cluster and this guide to launch Heapster.
However, when I open Grafana's web UI, it always says "Dashboard init failed: Template variables could not be initialized: undefined". Moreover, I can't access InfluxDB via port 8083. Is there anything I missed?
I've tried several versions of Kubernetes. I can't deploy DNS with some of them; for now I'm using 1.1.4, but I need to create the "kube-system" namespace manually. The Docker version is 1.7.1.
Edit:
I can curl ports 8083 and 8086 from inside the influxdb pod. However, I get connection refused if I do that on the node running the container. This is my services status:
