I've deployed several microservices to Docker containers with dynamic port mapping, but now my issue is finding the ports. How can Prometheus discover these dynamic ports?
I was able to work out a solution. I wrote a Lambda that queries the ECS cluster and its services, reads all the tasks from each ECS service (each task has an IP and port), and then writes the full list of ip:port pairs to a target.json file that is supplied to Prometheus as its targets.
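For reference, here is a minimal sketch of how such a target file can be wired into Prometheus with file_sd_configs; the job name and file path are assumptions, and the commented JSON shows the shape of the entries the Lambda would write:

```yaml
# prometheus.yml (snippet) -- assumes the Lambda drops its output at the path below
scrape_configs:
  - job_name: 'ecs-tasks'                     # hypothetical job name
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/target.json
        refresh_interval: 1m                  # re-read the file so new tasks are picked up

# target.json written by the Lambda would look roughly like:
# [
#   { "targets": ["10.0.1.15:32768", "10.0.1.16:32771"],
#     "labels":  { "cluster": "my-ecs-cluster" } }
# ]
```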
I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping ports of one of these services in the docker-compose file with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000 and via the peers' IPs(?).
When I inspect the network created, I can see the local IPs of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
I am running these Docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I run docker-machine ip I get "host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in VirtualBox. Now it works!]
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, when communicating between containers you should use the container/service name.
And for your other problem you probably want a reverse proxy like nginx or Traefik.
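For example, a minimal compose sketch (the image names and the network name are made up) where one service reaches the other purely by its service name over the shared overlay network:

```yaml
version: "3.7"
services:
  api:
    image: myorg/api:latest            # hypothetical image
    networks: [appnet]
  worker:
    image: myorg/worker:latest         # hypothetical image
    networks: [appnet]
    environment:
      # inside this container the api service resolves by name via swarm DNS
      - API_URL=http://api:8000
networks:
  appnet:
    driver: overlay
```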
I have node exporter running in a Docker container on Amazon ECS, and I want to be able to scrape its metrics from a different machine on the same network that is running Prometheus (i.e. not locally). How do I expose the port on Docker and ECS to do so? Or is there a better way to do this?
Edit: ECS access needs authentication, so just adding the IP to the yml file doesn't work.
Since Prometheus and Amazon ECS were on the same network, port forwarding on the container allowed me to connect. There was no need to authenticate, and just adding the ip:port to the Prometheus config file worked.
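In other words, the Prometheus side was just a static target entry along these lines (the IP and port are placeholders):

```yaml
scrape_configs:
  - job_name: 'node-exporter-ecs'       # hypothetical job name
    static_configs:
      - targets: ['10.0.2.34:9100']     # ECS host IP + the port published for node exporter
```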
So I've got a Plex server running on my Docker swarm!! If I kill a node, it'll magically start Plex somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server that was running Plex and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could use HAProxy bound to a bridge interface and run it on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services
you want to make available externally to the swarm. The swarm manager
can automatically assign the service a PublishedPort or you can
configure a PublishedPort for the service in the 30000-32767 range.
External components, such as cloud load balancers, can access the
service on the PublishedPort of any node in the cluster whether or not
the node is currently running the task for the service. All nodes in
the swarm cluster route ingress connections to a running task
instance.
Swarm mode has an internal DNS component that automatically assigns
each service in the swarm a DNS entry. The swarm manager uses internal
load balancing to distribute requests among services within the
cluster based upon the DNS name of the service.
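As a sketch, publishing the Plex port through the ingress routing mesh in a version 3.2+ compose file could look like this (the image name and the exact ports are assumptions based on the question above):

```yaml
services:
  plex:
    image: plexinc/pms-docker           # assumed image
    ports:
      - target: 32400                   # port inside the container
        published: 32400                # port exposed on every swarm node
        protocol: tcp
        mode: ingress                   # routed by the swarm ingress load balancer
```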
Update
The following article discusses how to integrate a proxy load balancer into the docker engine
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/
I have 6 microservices packed in Docker containers. On every swarm node I have installed a Consul agent, bound to the host IP, with the client listening on 0.0.0.0.
All microservices are in a docker-compose file that I run from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the Consul agent endpoint (see the sketch after this list). The possible choices are:
localhost
${HOSTIP} environment variable
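For context, the relevant part of the Spring Cloud Consul bootstrap.yml would look roughly like this; the HOSTIP variable is the one from the list above, and the localhost fallback is just an assumption:

```yaml
spring:
  cloud:
    consul:
      host: ${HOSTIP:localhost}   # falls back to localhost if HOSTIP is not set
      port: 8500
```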
Problems:
- localhost is not the host's localhost but the container's localhost, and the Consul agent runs on the host, not inside the container.
- ${HOSTIP}: I have to supply this environment variable in the compose file, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried exporting the host IP address on each node, but since I am running the compose file from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a Consul cluster with 3 managers and 3 nodes. On each manager and node I have a Consul agent started (as a Docker container). No matter what type of networking I use, I am not able to start the microservices. I started Consul with --net=host and --net=bridge, but this is not working.
Does anyone have an idea?
Thanks in advance.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? You could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at consul:8500 from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can set the client address to 0.0.0.0, which makes the Consul HTTP API available to the other containers running on the host.
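For instance, a rough compose sketch of running the official image with the client address opened up (the join address and the published port are assumptions for your setup):

```yaml
services:
  consul:
    image: consul                        # official HashiCorp image
    command: agent -client=0.0.0.0 -retry-join=<consul-server-ip>
    ports:
      - "8500:8500"                      # Consul HTTP API reachable on the host
```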
Let me answer my own question: this is not the way to do it. We cannot put some things inside Swarm and some things outside Swarm and expect it to work; it will not. Consul as a service discovery mechanism cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you are using Swarm, everything should live on overlay networks (RabbitMQ, Redis, ELK, and so on)...
What I want to do is run Kubernetes within Docker and expose the Kubernetes services externally. I followed the docs on getting Kubernetes running within Docker. As long as I connect from localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a Docker image directly, I can access it; only things running within Kubernetes aren't exposed. Is this possible?
Ensure your nodes have externally reachable IP addresses.
Then create a service of type NodePort:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
And direct traffic to nodes at the allocated port.
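A minimal NodePort Service sketch (the name, ports and selector label are placeholders) would look something like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app             # must match your pod labels
  ports:
    - port: 80              # cluster-internal service port
      targetPort: 8080      # container port
      nodePort: 30080       # optional; must fall in the 30000-32767 range
```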