Prometheus scraping from another machine - docker

I have node exporter running in a Docker container on Amazon ECS, and I want to scrape its metrics from a different machine on the same network that runs Prometheus (i.e., not locally). How do I expose the port on Docker and ECS to do so? Or is there a better way to do this?
Edit: ECS access needs authentication, so just adding the IP to the yml file doesn't work.

Since Prometheus and Amazon ECS were on the same network, port forwarding on the container allowed me to connect. There was no need to authenticate; just adding the ip:port to the Prometheus config file worked.
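For reference, the resulting scrape job can be as simple as the following sketch; the job name, IP and port are placeholders (9100 is node exporter's default port):
# prometheus.yml (sketch) - scrape node exporter running in the ECS container
scrape_configs:
  - job_name: 'ecs-node-exporter'     # hypothetical job name
    static_configs:
      - targets: ['10.0.1.23:9100']   # ECS host IP and published node exporter port (placeholders)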

Related

How can I connect two Docker containers with Nomad

I built two Docker applications that communicate with each other over a Docker network, but I ran into a problem when I tried to run them with Nomad: Nomad gives each container a random name and the container name is not configurable, so I can't put the containers on a Docker network and have them reach each other by their specific names.
So how can I run two or more Docker containers in the same Docker network using Nomad?
I'm aware of a few approaches. The first one works with Nomad only; the others assume that Consul is deployed as well.
1. Place both containers in the same task group. Nomad will then always place them on the same node, and you can get the address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
2. Register the server application (Docker container) in the Consul service registry with the Nomad service stanza. You can then use the Nomad template stanza in the "client" application to render its config (see the job-file sketch after this list). Example/doc is here.
3. Set up Consul Connect (service mesh) in your deployment.
4. Use the Consul DNS interface. Consul can act as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc).
Approach 1 is the easiest but has a big limitation (everything has to be on the same node). Approach 2 worked well for me for several years. Nomad is smart enough to re-render the template and reload/restart your client should the server IP/port change.
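A rough sketch of approach 2 as a Nomad job file (HCL), assuming Consul is wired into Nomad; the job, image and service names here are made up:
# Sketch of approach 2: register the server in Consul, render its address into the client task
job "demo" {
  group "server" {
    network {
      port "http" {}                  # dynamically allocated port
    }
    task "server" {
      driver = "docker"
      config {
        image = "my-server:latest"    # placeholder image
        ports = ["http"]
      }
      service {
        name = "my-server"            # Consul service name the client will look up
        port = "http"
      }
    }
  }
  group "client" {
    task "client" {
      driver = "docker"
      config {
        image = "my-client:latest"    # placeholder image
      }
      template {
        destination = "local/app.env"
        env         = true
        data        = <<EOH
{{ range service "my-server" }}
SERVER_ADDR={{ .Address }}:{{ .Port }}
{{ end }}
EOH
      }
    }
  }
}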

Why do I need to specify hostname when deploying MinIO in Docker Swarm?

In the official doc, a hostname is specified for each MinIO service when running in Docker Swarm. But according to the official doc, we don't need to specify a hostname in Docker Compose. I'd like to know the reason. Thank you in advance.
From the docker-compose networking documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
So when you launch MinIO with docker-compose, the hostname is set for you automatically by docker-compose.
The hostname is needed in order to connect to MinIO. For instance, this is the general usage for adding a storage service with the MinIO client:
mc alias set <ALIAS> <YOUR-S3-ENDPOINT> [YOUR-ACCESS-KEY] [YOUR-SECRET-KEY] [--api API-SIGNATURE]
When using docker-compose, you can simply use the MinIO service name from the compose file as <YOUR-S3-ENDPOINT>.
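As a rough illustration (service names, credentials and the command are placeholders), in a compose file like the one below the mc container can reach MinIO at http://minio:9000 simply because the MinIO service is named minio:
# docker-compose.yml (sketch) - the service name doubles as a hostname on the default network
services:
  minio:
    image: minio/minio
    command: server /data
    environment:
      MINIO_ROOT_USER: minioadmin       # placeholder credentials
      MINIO_ROOT_PASSWORD: minioadmin
  mc:
    image: minio/mc
    depends_on:
      - minio
    entrypoint: ["sh", "-c", "mc alias set local http://minio:9000 minioadmin minioadmin && mc ls local"]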
Docker Swarm behaves differently.

Monitor ECS docker instance with Prometheus

I've deployed several microservices as Docker containers with dynamic port mapping, but now my issue is finding the ports. How can Prometheus detect these dynamic ports?
I was able to find a solution, described here. I wrote a Lambda that calls the ECS cluster and its services and reads all tasks from each ECS service; each task has an IP and port. It then lists all the ip:port pairs and supplies them to Prometheus as a target.json file.
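On the Prometheus side, a hedged sketch of how such a target.json can be consumed with file-based service discovery (the file path, job name and labels are placeholders); Prometheus re-reads the file, so the Lambda can update it without a Prometheus restart:
# prometheus.yml (sketch) - read dynamically discovered ECS targets from a file
scrape_configs:
  - job_name: 'ecs-services'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/target.json   # written and refreshed by the Lambda (placeholder path)
        refresh_interval: 1m
# target.json would contain entries like:
# [ { "targets": ["10.0.1.23:32768", "10.0.1.24:32791"], "labels": { "cluster": "my-ecs" } } ]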

Local Docker connection to Kubernetes Cluster

I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed a service by reserving some static IP addresses.
I have also saved those IP addresses locally, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those hosts so that all the traffic is routed to those addresses and can be processed by the cluster. I am using the link feature of the Docker container, but it isn't working.
Should I connect directly using the IP? How should I do this?
It makes no difference whether or not the client is in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host as from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe as in the question 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
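For reference, the Kubernetes side of such an example could be a NodePort service roughly like this sketch; only the 31496 node port matches the example above, the other names are made up:
# service.yaml (sketch) - exposes the app on port 31496 of every cluster node
apiVersion: v1
kind: Service
metadata:
  name: the-service              # hypothetical service name
spec:
  type: NodePort
  selector:
    app: the-app                 # hypothetical pod label
  ports:
    - port: 80                   # port inside the cluster
      targetPort: 8080           # container port (placeholder)
      nodePort: 31496            # port reachable on every node's IP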

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in Docker containers. On every Swarm node, I have installed a Consul agent, bound to the host IP, with the client address set to 0.0.0.0.
All microservices are in a docker-compose file which I am running from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I have to specify the Consul agent endpoint. Possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the Consul agent is running on the host, not in the container.
- ${HOSTIP}: I have to supply this env var in the compose file, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a Consul cluster: 3 managers and 3 nodes. On each manager and node I have a Consul agent started (as a Docker container). No matter what type of networking I use, I am not able to start up the microservices. I started Consul with --net=host and --net=bridge, but this is not working.
Does anyone have an idea?
Thanks in advance.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? You could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can set the client address to 0.0.0.0; this should make the Consul API available to the other containers running on the host.
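If you go the linked-container route, the bootstrap.yml can then point at the link alias instead of localhost or a host IP. A minimal sketch, assuming the standard Spring Cloud Consul properties and the "consul" alias from above:
# bootstrap.yml (sketch) - point Spring Cloud Consul at the linked agent container
spring:
  application:
    name: my-service             # placeholder service name
  cloud:
    consul:
      host: consul               # link alias / hosts entry of the Consul container
      port: 8500
      discovery:
        register: true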
Let me answer my own question: this is not the way we want to do this. We cannot put some things inside Swarm and some things outside Swarm and expect it to work; it will not. Consul as service discovery cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you are using Swarm, everything should be on overlay networks (RabbitMQ, Redis, ELK and so on)...
