I have a newbie question.
I'm using dockprom (github.com/stefanprodan/dockprom) to capture metrics from a docker-compose successfully.
Now I'm trying to monitor specific metrics from my applications using Golang's Prometheus client library, but Prometheus shows my endpoint as down (0) in the Targets section, with this message:
Get http://localhost:8090/metrics: dial tcp 127.0.0.1:8090: connect: connection refused
However, if I navigate to http://localhost:8090/metrics I can see the metrics being exposed.
Prometheus is running in a docker-compose set of containers, while my application is running in another.
The declaration of my endpoint in prometheus/prometheus.yml is:
- job_name: 'cloud_server_auth'
  scrape_interval: 10s
  static_configs:
    - targets: ['localhost:8090']
I noticed that cAdvisor was failing when not running in privileged mode, but even after fixing that, I still can't get Prometheus to consume my metrics.
Any thoughts?
Thanks in advance to any who might shed some light on this issue, and please let me know if you need any further information.
Adolfo
If you're running Prometheus in a Docker container, then when Prometheus makes calls to other places to collect metrics, localhost is interpreted relative to the Prometheus container, which is to say, Prometheus is trying to collect metrics from itself.
If this is all running within the same docker-compose.yml file, then you can use the Docker Compose services: names of the other container(s) as hostname(s) when configuring metric target(s). The target containers don't necessarily need to have published ports:, and you need to use the port number the process inside the container listens on: if your ports: remap a container port to a different host port, use the second (container) port number, not the first (host) one.
This is the same setup as other service-to-service calls within the same docker-compose.yml file. Networking in Compose has more details on the container network environment.
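For example, a minimal sketch of the scrape config, assuming the application's Compose service is named auth (substitute your real service name) and the process listens on 8090 inside its container:

scrape_configs:
  - job_name: 'cloud_server_auth'
    scrape_interval: 10s
    static_configs:
      # Use the Compose service name instead of localhost; 8090 is the
      # container-side port, regardless of any ports: remapping.
      - targets: ['auth:8090']

If Prometheus and the application are started from separate Compose projects, they won't share a network by default; you'd need to attach both to a common (external) Docker network for the service name to resolve.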
Related
In a scenario where one service connects to the other one's network (as in the example .yml below), is there a way to use --scale to scale both and make them connect to the correct network?
As in app1 uses vpn1 network, app2 uses vpn2 network etc.
services:
  vpn:
    image: myvpn
  app:
    image: myapp
    depends_on:
      - vpn
    network_mode: service:vpn
I want to be able to run the container with docker-compose up -d --scale app=5 --scale vpn=5
The issue is that if I scale both containers the way it's currently set up (let's say two instances of each service, to simplify), both "app" instances connect to the first "vpn" service.
I can confirm it by inspecting both app-1 and app-2. In "HostConfig" they both show "NetworkMode": "container:ef37426bec3dbd9c182187d87faf5fe8c92c1e1fa26066f57d163f301af2574e", which is the first vpn container.
I understand this is the expected behavior, as the .yml file indicates network_mode: service:vpn and not something like network_mode: service:vpn-${container ID here}
I want to find a way to set app-1 to use vpn-1 as NetworkMode, app-2 to use vpn-2 as NetworkMode, etc.
Out of the box, docker-compose allows for two communication methods:
VIP - a single virtual IP and DNS record that load balances between the replicas using Docker's built-in configuration
DNSRR - one DNS record resolving to multiple IPs (one per replica), letting you implement your own load balancing
As far as I've managed to research, the configuration you're asking about can be achieved with docker-compose in only one way: in each client service, set the backend DNS name to the specific container name.
For example, if I want to connect to the second instance, I'll connect to docker_backend_2.
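A minimal sketch of that idea, assuming a Compose project named docker, a backend service scaled to two replicas, and a made-up BACKEND_HOST variable that the client reads:

services:
  backend:
    image: mybackend
  client:
    image: myclient
    environment:
      # Point the client at one specific replica by its generated
      # container name (<project>_<service>_<index> with classic
      # docker-compose naming; newer Compose releases use hyphens,
      # e.g. docker-backend-2, so check docker ps for the exact names).
      - BACKEND_HOST=docker_backend_2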
Another, more hands-on approach would be to have the services talk among themselves and decide which instance connects to which of the IPs returned by an nslookup or some other probing method.
If this connectivity configuration is important to you, you should look into Kubernetes, where the basic working unit is a pod and you can run multiple container images per pod. You would then run one instance of each service inside the pod and have them connect to each other directly over the pod's network.
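As a rough illustration of that pod layout, reusing the image names from the question (everything else is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-vpn
spec:
  containers:
    # Both containers share the pod's network namespace, so the app
    # reaches the VPN sidecar over localhost.
    - name: vpn
      image: myvpn
    - name: app
      image: myapp

In practice you would wrap this template in a Deployment and scale it with replicas:, which gives you exactly the app-N/vpn-N pairing the question asks for.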
I set up a Cloud Run instance with gRPC and HTTP2. It works well. But, I'd like to open a new port externally and route that traffic to gRPC over HTTPS.
This is the current YAML for the container:
- image: asia.gcr.io/assets-320007/server5c2b87a8444cb42e566e130a907015df7dd841b4
  ports:
    - name: h2c
      containerPort: 5000
  resources:
    limits:
      cpu: 1000m
      memory: 512Mi
I cannot add new ports because if I do, I get:
spec.template.spec.containers[0].ports should contain 0 or 1 port (field: spec.template.spec.containers[0].ports)
Also, the YAML doesn't specify a forwarding port. It seems to be just assuming that you would only ever set up one port which automatically routes to the one open port on the Docker container. Is that true?
Note: it would be really nice if the YAML came with reference documentation or a schema. That way we could tell what all the possible permutations could be.
Yes, you can only expose one port for a Cloud Run service.
I also find this a curious limitation.
I'm deploying services that use gRPC and expose Prometheus metrics, and I have been able to multiplex HTTP/2 and HTTP/1 on a single port, but it requires additional work and is inconsistent with the Kubernetes concepts that underlie Cloud Run.
An excellent feature of GCP is its comprehensive and current documentation. Here's the Cloud Run Service reference.
NOTE: Found using the APIs Explorer (https://developers.google.com/apis-explorer) and then locating the Cloud Run Admin API.
There are some differences between these Knative types and the similar Kubernetes types. One approach I've used is to deploy a known-good service using e.g. gcloud and then use the YAML it produces as a reference.
For example, off the top of my head, container ports can't have arbitrary names; they must be h2c or http1 (see link).
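For reference, a minimal sketch of a Cloud Run Service manifest with its single named port (the service name and image are placeholders):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-grpc-service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-image
          ports:
            # Exactly one port, and its name must be h2c (end-to-end
            # HTTP/2) or http1; all external traffic arrives over HTTPS
            # on the service URL and is forwarded to this single
            # containerPort.
            - name: h2c
              containerPort: 5000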
I have node exporter running in a Docker container on Amazon ECS, and I want to scrape its metrics from a different machine on the same network that runs Prometheus (i.e. not locally). How do I expose the port on Docker and ECS to do so? Or is there a better way to do this?
Edit: ECS access needs authentication, so just adding the IP to the YAML file doesn't work.
Since Prometheus and Amazon ECS were on the same network, mapping the container's port to the host allowed me to connect. There was no need to authenticate, and just adding the ip:port to the Prometheus config file worked.
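A rough sketch of the resulting Prometheus configuration, where 10.0.1.23 stands in for the ECS container instance's private IP and 9100 for the host port published in the task definition's portMappings (both placeholders):

scrape_configs:
  - job_name: 'node_exporter_ecs'
    static_configs:
      # Private IP of the ECS container instance plus the published
      # host port of the node exporter container.
      - targets: ['10.0.1.23:9100']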
If you tell docker-compose to scale a service, and do NOT expose its ports,
docker-compose scale dataservice=2
then there will be two IPs on the network that the DNS name dataservice resolves to, so services that reach it by hostname will load balance across them.
I would like to do this for the edge proxy as well. The point would be that
docker-compose scale edgeproxy=2
Would cause edgeproxy to resolve to one of 2 possible IP Addresses.
But the semantics of exposing ports is wrong for this. If I expose:
8443:8443
then it will try to bind each edgeproxy replica to host port 8443. What I want is more like:
0.0.0.0:8443:edgeproxy:8443
where, when you come into the Docker network via host port 8443, it randomly selects one of the edgeproxy:8443 IPs to hand the incoming TCP connection to.
Is there an alternative to just doing a port-forward? I want a port that can get me in to talk to any IP that resolves as edgeproxy.
This is provided by swarm mode. You can enable a single node swarm cluster with:
docker swarm init
And then deploy your compose file as a stack with:
docker stack deploy -c docker-compose.yml $stack_name
There are quite a few differences from docker compose including:
Swarm doesn't build images
You manage the target state with docker service commands; trying to stop a container with docker stop won't work, since swarm will restart it
The compose file needs to be in a v3 syntax
Networks will be overlay networks and, by default, not attachable by containers outside of swarm
One of the main changes is that exposed ports are published on an ingress network managed by swarm mode, and connections are round-robin load balanced across your containers. You can also define a replica count inside the compose file, eliminating the need to run a scale command.
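For example, a minimal sketch of a v3 compose file for swarm mode, reusing the edgeproxy name from the question (the image is a placeholder):

version: '3.8'
services:
  edgeproxy:
    image: myedgeproxy
    ports:
      # Published on swarm's ingress network on every node; incoming
      # connections are load balanced across the replicas.
      - "8443:8443"
    deploy:
      replicas: 2

Deploying this with docker stack deploy starts both replicas behind the single published port, with no separate scale step.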
See more at: https://docs.docker.com/engine/swarm/
I've got a swarm set up with two nodes, one manager and one worker. I'd like to have a port published in the swarm so I can access my applications, and I wonder how I achieve this.
version: '2'
services:
  server:
    build: .
    image: my-hub.company.com/application/server:latest
    ports:
      - "80:80"
This exposes port 80 when I run docker-compose up and it works just fine; however, when I run a bundled deploy
docker deploy my-service
This won't publish the port, so it just says 80/tcp in docker ps instead of showing a host port mapping. Maybe this is because I need to attach a load balancer, run some fancy command, or add another layer of config to actually expose this port in a multi-host swarm.
Can someone help me understand what I need to configure/do to make this expose a port?
My best case scenario would be that port 80 is exposed, and if I access it from different hostnames it will send me to different applications.
Update:
It seems to work if I run the following commands after deploying the application
docker service update -p 80:80 my-service_server
docker kill <my-service_server id>
I found this repository for running an HA proxy; it seems great and is supported by Docker themselves. However, I cannot seem to apply it separately to my services using the new swarm mode.
https://github.com/docker/dockercloud-haproxy
There's a nice description at the bottom of how the network should look:
Internet -> HAProxy -> Service_A -> Container A
However, I cannot find a way to link services through the docker service create command. Ideally there would be a way to set up a network so that, when I attach it to a service, HAProxy picks that service up.
-- Marcus
As far as I understand, for the moment you can only publish ports by updating the service after creation, like this:
docker service update my-service --publish-add 80:80
Swarm mode publishes ports in a different way. It won't show up in docker ps because the port isn't published on a single host; it's published on every node so that swarm can load balance between service replicas.
You should see the port from docker service inspect my-service.
Any other service should be able to connect to my-service:80
docker service ls will display the port mappings.
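If you prefer to declare this in the compose file rather than with --publish-add, the v3 long-form port syntax (compose file version 3.2 or later) makes the ingress publishing explicit; a sketch based on the service from the question:

version: '3.8'
services:
  server:
    image: my-hub.company.com/application/server:latest
    ports:
      # Published on the swarm ingress network on every node and
      # round-robin load balanced to the service's tasks.
      - target: 80
        published: 80
        protocol: tcp
        mode: ingress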