I've been struggling with this for a while; maybe someone has some insight. I've created a Docker bridge network:
docker network create -d bridge prometheus-network
in which I'm running both Prometheus and cAdvisor containers, created as such:
Prometheus:
docker run -d --net=prometheus-network --name=prometheus-server -p 9090:9090 -v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
cAdvisor:
docker run -d --net=prometheus-network --name=cadvisor -p 8080:8080 --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --volume=/dev/disk/:/dev/disk:ro --privileged --device=/dev/kmsg gcr.io/cadvisor/cadvisor:latest
I can see both up and running and can connect to the Prometheus web UI. From there, though, it shows the cAdvisor connection being refused (I have one other container in the network as well, which is also refused).
Here is the prometheus.yml file, with the two targets defined:
global:
  evaluation_interval: 15s
  scrape_interval: 15s

scrape_configs:
  - job_name: adapters
    static_configs:
      - labels:
          namespace: adapters
        targets:
          - tiingo:9080
  - job_name: cadvisor
    static_configs:
      - targets: ['localhost:8080']
Is there some other networking step here that I might be missing? Thanks in advance for any help.
When you connect to localhost you will connect to the localhost of the current container. Within user-defined docker networks you can use container names as hostnames thanks to Docker service discovery.
So use cadvisor:8080 instead of localhost:8080 in your configuration file.
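For example, the cadvisor job from the prometheus.yml above would become (a sketch, keeping the container names from the question):

```yaml
scrape_configs:
  - job_name: cadvisor
    static_configs:
      # "cadvisor" resolves via Docker's embedded DNS on the
      # user-defined prometheus-network bridge network
      - targets: ['cadvisor:8080']
```

The same applies to any other container on that network: scrape it by its container name, not by localhost.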
Related
I'm trying to build a Jenkins docker container by following this page so I can test locally. The problem is that once I've run docker run -it -p 8080:8080 jenkins/jenkins:lts, it seems I cannot use the same port in my docker-compose.yml:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: root
    privileged: true
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - .jenkins/jenkins_configuration:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
The error shown in PowerShell (I'm on windows 10 if that's relevant) is:
Error response from daemon: driver failed programming external connectivity on endpoint jenkins (xxxx): Bind for 0.0.0.0:8080 failed: port is already allocated
I've made sure it's not caused by another container, image, or volume, and have deleted everything apart from this.
I wish to use Jenkins locally, but how can I get around this? I'm not familiar with networking, and what I've googled so far hasn't worked for me. I would still like to reach the Jenkins UI on localhost:8080.
If port 8080 is already allocated on your host machine, you can just map a different host port to port 8080 of the container instead. Two things can't be bound to the same port on the host machine. To map 8081, for example, change your compose file to the following:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    user: root
    privileged: true
    ports:
      - 8081:8080 # a different port is mapped here
      - 50000:50000
    volumes:
      - .jenkins/jenkins_configuration:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
Then you just need to access the container started by docker-compose at localhost:8081 rather than localhost:8080.
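Alternatively, you can free up port 8080 by stopping the container you started earlier with docker run. A quick way to find it (assuming Docker's publish filter is available) is:

```shell
# List containers publishing host port 8080
docker ps --filter "publish=8080"

# Stop the one started by the earlier `docker run`, using the
# ID or name reported above (placeholder, fill in your own)
docker stop <container-id>
```

After that, the original 8080:8080 mapping in the compose file will bind successfully.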
I'm setting up Prometheus & Grafana on my local Ubuntu machine as Docker containers.
My steps were:
running Prometheus: docker run -t -d -p 9090:9090 prom/prometheus
running Grafana: docker run -t -d --name grafana -p 3000:3000 grafana/grafana
As you can see, Prometheus runs on the mapped port 9090, and Grafana on 3000.
Now, when configuring a Grafana dashboard for Prometheus in Grafana, I need to indicate the URL of Prometheus, since both of them are running in local containers.
What address do I give Grafana to make it point at Prometheus?
For an easy setup, you can use docker-compose as commented. An example of docker-compose.yaml file with prometheus and grafana:
docker-compose.yaml
version: "3"
services:
prometheus:
image: prom/prometheus:latest
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yml
ports:
- 9090:9090
grafana:
image: grafana/grafana:latest
volumes:
- ./grafana-storage:/var/lib/grafana
- ./grafana/config.ini:/etc/grafana/config.ini
- ./grafana/provisioning:/etc/grafana/provisioning
- ./grafana/dashboards:/var/lib/grafana/dashboards
ports:
- 3001:3000
prometheus.yaml
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'your-app'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['your-app:3000']
config.ini
[paths]
provisioning = /etc/grafana/provisioning
[server]
enable_gzip = true
[users]
default_theme = light
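Since the compose file mounts ./grafana/provisioning, Grafana can also be pointed at Prometheus automatically with a datasource provisioning file. A sketch (the filename datasources/prometheus.yaml is an assumption; the URL uses the prometheus service name from the compose file):

```yaml
# ./grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # the compose service name resolves inside the compose network
    url: http://prometheus:9090
    isDefault: true
```

With this in place, the Prometheus datasource appears in Grafana on startup without any manual configuration in the UI.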
Your Grafana container will not be able to contact (discover) your Prometheus container by name, because when Docker starts each container on the default bridge network, it creates a virtual interface on the host system with a unique name like vethef766ac and provides no name-based discovery between the containers.
If you don't want to use docker-compose AND you want to access your Grafana container via the host, you can run your Grafana container on the host network using the --network option.
You can then run Grafana like so:
docker run -t -d --name grafana --network="host" grafana/grafana
Note: --network="host" gives the container full access to local
system services such as D-bus and is therefore considered insecure.
The URL you will want to specify in Grafana would be http://localhost:9090.
If you need to look up the address of a running Docker container, the following command can be helpful:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
This will return the IP address(es) associated with the container.
The "docker ps" command does not show the traefik port 8080 and other ports used from the service
The traefik service is defined in a docker-compose.yml as follows:
...
  traefik:
    image: traefik:1.7-alpine
    command: --docker --docker.swarmMode --docker.domain=mylocal.swarm --docker.watch --api --logLevel=INFO \
      --entryPoints='Name:http Address::80' \
      --entryPoints='Name:https Address::443 TLS'
    networks:
      - net-traefik
    deploy: *default-deploy
    ports:
      - "8091:80"
      - "8093:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    logging: *default-logging
The "docker ps" command output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
...
635a4e017bb5 traefik:1.7-alpine "/entrypoint.sh o…" … … 80/tcp
...
Why is only port 80 displayed? I expected to also find 443 and 8080.
This port 80 is just the one that was specified under EXPOSE in the Dockerfile, and therefore shows up there. The other ports you define in your file are mapped to the service you create with Docker Swarm, not to a single container. If they were mapped per container, using Swarm at all would be pointless: if you have more than one container, which is kind of the point of using Swarm, they cannot all be mapped to the same host port.
This blog post seems to contain some good information about how the publishing of services work with Docker Swarm.
With swarm mode, the port is not published on the individual containers. Instead the port is published via the service, which then uses a VIP to send the request over the ingress network to one of the replicas of the service. You can use the following to see these ports:
docker service ls
Note that in the docker ps output, the ports you see include the "exposed ports" defined in the image. These serve as documentation from the image creator to the user of the image, saying to expect something listening inside the container on this port; they in no way affect published ports on the host or container-to-container networking.
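To see the ports actually published by the Swarm service, something like the following should work (the service name traefik is an assumption; use whatever docker service ls reports):

```shell
# Published ports live on the service, not on individual containers
docker service ls

# Inspect just the port section of the traefik service
docker service inspect --format '{{json .Endpoint.Ports}}' traefik
```

The inspect output lists each PublishedPort/TargetPort pair on the ingress network, which is where 8091, 8093, and 8080 will show up.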
I have downloaded the Docker Consul image and it is running, but I am not able to access its web UI. Does anyone have an idea how to get started? I am running this on my local machine in developer mode.
I am running:
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
See documentation:
The Web UI can be enabled by adding the -ui-dir flag:
$ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap -ui-dir /ui
We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of node1. Setting the container hostname is the intended way to name the Consul Agent node.
You can try to activate the UI by setting the -ui-dir flag.
First, set experimental to true in Docker Desktop if you're using Windows containers.
The command below will work, because you need to publish port 8500:
docker run -d -e CONSUL_BIND_INTERFACE=eth0 -p 8500:8500 consul
You will be able to access consul at http://localhost:8500/
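Once the container is up, a quick check against Consul's HTTP API confirms the published port is reachable before opening the UI:

```shell
# Returns the address of the current cluster leader,
# e.g. "172.17.0.2:8300", if the agent is healthy
curl http://localhost:8500/v1/status/leader
```

An empty or refused response here means the agent is not up yet or the port mapping is wrong.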
You could also use a compose file like this:
version: "3.7"
services:
consul:
image: consul
ports:
- "8500:8500"
environment:
- CONSUL_BIND_INTERFACE=eth0
networks:
nat:
aliases:
- consul
I am quite new to Docker and Consul and am now trying to set up a local Consul cluster consisting of 3 dockerized nodes. I am using the progrium/consul Docker image and went through the whole tutorial and the examples described.
The cluster works fine until it comes to restarting / rebooting.
Here is my docker-compose.yml:
---
node1:
  command: "-server -bootstrap-expect 3 -ui-dir /ui -advertise 10.67.203.217"
  image: progrium/consul
  ports:
    - "10.67.203.217:8300:8300"
    - "10.67.203.217:8400:8400"
    - "10.67.203.217:8500:8500"
    - "10.67.203.217:8301:8301"
    - "10.67.203.217:8302:8302"
    - "10.67.203.217:8301:8301/udp"
    - "10.67.203.217:8302:8302/udp"
    - "172.17.42.1:53:53/udp"
  restart: always
node2:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
node3:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  restart: always
registrator:
  command: "consul://10.67.203.217:8500"
  image: "progrium/registrator:latest"
  restart: always
I get messages like:
[ERR] raft: Failed to make RequestVote RPC to 172.17.0.103:8300: dial tcp 172.17.0.103:8300: no route to host
which is obviously because of the new IPs my nodes 2 and 3 get after the restart. So is it possible to prevent this? I read about linking and environment variables, but it seems those variables are also not updated after a reboot.
I had the same problem until I read that there is an ARP table caching problem when you restart a containerized Consul node.
As far as I know, there are 2 workarounds:
Run your container using --net=host
Clear ARP table before you restart your container: docker run --net=host --privileged --rm cap10morgan/conntrack -F
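The first workaround can be expressed directly in the compose file from the question; a sketch in the same v1 compose format, applied to one of the restart-sensitive nodes:

```yaml
node2:
  command: "-server -join 10.67.203.217"
  image: progrium/consul
  net: "host"    # workaround 1: host networking, so the node's IP survives restarts
  restart: always
```

Note that with host networking the node binds Consul's ports directly on the host, so each node then needs to run on a separate host (or use non-conflicting ports).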
The owner (Jeff Lindsay) told me that they are redesigning the entire container with this fix built in; no timelines, unfortunately.
Source: https://github.com/progrium/docker-consul/issues/26