If I have a microservice for Eureka service discovery with 5 replicas in docker-compose.yml, those 5 Eureka containers would be spread across the nodes available in the swarm cluster.
My questions are: when a microservice wants to register itself with Eureka,
Would it specify the IP address of the master node of the swarm cluster in its config for the Eureka server?
When a microservice registers itself with Eureka, whichever way, does this registry get replicated across all the Eureka containers in the swarm cluster? After all, who knows which Eureka node in the cluster will serve a particular microservice.
When a microservice registers itself with Eureka, whichever way, does this registry get replicated across all the Eureka containers
I've never tested it myself, but I think that's what happens: the registries contact each other and share all currently registered microservices.
who knows which Eureka node in the swarm cluster will serve a particular microservice
Your microservice has a unique name in its application.properties that allows other microservices to send it HTTP requests. You don't control that mechanism from Java, so I would hope it contacts the most suitable instance.
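For reference, peer replication is typically something you enable in the Eureka server's own configuration: each server instance lists the other peers and they copy registrations to one another. A rough sketch of what that could look like in application.yml, assuming two peers reachable as eureka-peer1 and eureka-peer2 (hostnames and port are assumptions, not from the question):

```yaml
# application.yml for a Eureka server replica (sketch; hostnames/port assumed).
# Each replica lists the other peers in defaultZone, so a registration made
# against one instance is replicated to the others.
spring:
  application:
    name: eureka-server
server:
  port: 8761
eureka:
  client:
    register-with-eureka: true     # register this server with its peers
    fetch-registry: true           # pull the peers' registry copies
    service-url:
      defaultZone: http://eureka-peer1:8761/eureka/,http://eureka-peer2:8761/eureka/
```

Note that with a single 5-replica swarm service the replicas all share one service name, so one common option is to run one Eureka service per peer (or resolve the individual task IPs via swarm's tasks.&lt;service&gt; DNS name) so that each replica can address the others.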
Would it specify the IP address of the master node of the swarm cluster in its config for the Eureka server?
I tried several times to set up a hostname so the microservices could contact Eureka via a container-based hostname, but that didn't work. I use frolvlad/alpine-oraclejdk8 as the base image, plus bash and wait-for-it.sh, and I run everything with docker-compose.
Keep in mind that it is the configuration server that tells the microservices where Eureka is. I ended up adding a bash script to the configuration server that extracts application.properties from the jar file, inserts the IP of the registry, and writes the updated application.properties back into the jar.
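That said, an alternative to patching the jar that may be worth trying is overriding the Eureka URL from the compose file itself: Spring Boot's relaxed binding maps environment variables onto properties, and on a shared overlay network the compose/swarm service name is resolvable through Docker's DNS. A sketch only; the service names and image below are assumptions:

```yaml
# docker-compose.yml fragment (sketch): override the Eureka URL via an
# environment variable instead of rewriting application.properties.
# "eureka" is the assumed name of the Eureka service on the same network.
services:
  my-service:                     # hypothetical microservice
    image: my-service:latest      # assumed image
    environment:
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka:8761/eureka/
```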
I built two Docker applications that communicate with each other over a Docker network, but I ran into a problem when I tried to run them with Nomad: the container name is not configurable, and Nomad gives each container a random name. So I can't add those containers to the Docker network and have them reach each other by their specific names.
So how can I run two or more Docker containers in the same Docker network using Nomad?
I'm aware of a few approaches. The first one works with Nomad only; the others assume that Consul is deployed as well.
1. Place both containers in the same task group. Nomad will always co-locate them on the same node, and you can get the address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
2. Register the server application (Docker container) in the Consul service registry with the Nomad service stanza. You can then use the Nomad template stanza in the "client" application to render its config. Example/doc is here.
3. Set up Consul Connect (service mesh) in your deployment.
4. Use the Consul DNS interface. Consul can act as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc).
Approach 1 is the easiest but has a big limitation (everything lands on the same node). Approach 2 has worked well for me for several years; Nomad is smart enough to re-render the config and reload/restart your client should the server IP/port change.
I am new to Docker Swarm. I deployed a Jenkins service on a swarm cluster with 3 manager and 2 worker nodes. I can access the service using the node port, but I want to access it from an outside network through an external load balancer. If anyone has a reference, please help me with this.
Since you specified an external load balancer, you would do something like this:
- Deploy HashiCorp Consul to your swarm, either as part of your app stack or as a swarm service.
- Integrate your services with Consul so they publish their external IPs and ports to it. The services would be set up with host-mode networking rather than Docker's ingress networking (see the sketch after this list).
- Integrate your external load balancer with Consul so it can deliver traffic to the service.
- Point your external DNS at your external load balancer.
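For the host-mode networking part, a rough compose sketch of what publishing Jenkins that way could look like (service name, image and ports are placeholders); each task then listens on the IP of the node it runs on, which is the address you register in Consul and target from the external load balancer:

```yaml
# docker-compose.yml (v3 long port syntax), a sketch only.
# mode: host bypasses the ingress routing mesh, so the task is reachable
# on its node's IP rather than on every node in the swarm.
version: "3.7"
services:
  jenkins:
    image: jenkins/jenkins:lts    # assumed image
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    deploy:
      replicas: 1
```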
I have created a Kubernetes cluster with one master node and one worker node and deployed containers. How can I access the containers through an external IP?
I have tried assigning an IP address to the containers using type=LoadBalancer in the Docker Compose file.
I would suggest you go through the tutorials for Kubernetes.
In general: (1) you would need 3 master nodes, and (2) you should set up an ingress controller (HTTP load balancer) exposed as a type=LoadBalancer service and then configure an Ingress with a domain for routing, instead of using an IP to access those containers directly. A minimal manifest sketch follows the links below.
https://kubernetes.io/docs/tutorials/
https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
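To make (2) a bit more concrete, here is a minimal sketch of the pieces involved. Names, ports and the host are placeholders; the ingress controller itself is installed separately and is usually what receives the type=LoadBalancer Service:

```yaml
# Sketch only: a ClusterIP Service in front of the pods, plus an Ingress
# that routes a domain to it through the ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # assumed name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080      # assumed container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: app.example.com          # assumed domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```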
So I've got a Plex server running on my Docker swarm!! If I kill a node, it'll magically start Plex somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server that was running Plex and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could use HAProxy bound to some bridge interface and run it on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service in the 30000-32767 range. External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm cluster route ingress connections to a running task instance.

Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
Update
The following article discusses how to integrate a proxy load balancer with the Docker engine:
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/
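In practice, a compose sketch of that for Plex could look like the following (image name and single replica are assumptions): publish 32400 through the routing mesh and you can point the router's port forward at any node's IP (or several, for failover), since every node routes the connection to the running task.

```yaml
# docker-compose.yml (v3) sketch. Publishing the port in the default
# (ingress) mode means every swarm node listens on 32400 and forwards the
# connection to whichever node is currently running the Plex task.
version: "3.7"
services:
  plex:
    image: plexinc/pms-docker    # assumed image
    ports:
      - "32400:32400"
    deploy:
      replicas: 1
```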
I have 6 microservices packed in Docker containers. On every swarm node I have installed a Consul agent, bound to the host IP, with the client listening on 0.0.0.0.
All the microservices are in a docker-compose file which I am running from the swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the Consul agent endpoint. The possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the Consul agent is not on the container's localhost; it is on the host.
- ${HOSTIP}: I have to supply this env var in the compose file, but I don't know where the swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a Consul cluster with 3 managers and 3 nodes; on each manager and node a Consul agent is started (as a Docker container). No matter what type of networking I use, I am not able to start the microservices. I started Consul with --net=host and --net=bridge, but neither works.
Does anyone have an idea?
Thanks in advance.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? Then you could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can set the client address to 0.0.0.0; this should make the Consul API available to the other containers running on the host.
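If linking (or simply sharing a network) works for you, the bootstrap.yml side could look roughly like this; a sketch assuming the agent is reachable under the name consul on port 8500 and the service name is a placeholder:

```yaml
# bootstrap.yml sketch for a Spring Cloud Consul client.
# "consul" is the assumed link/service name of the local agent,
# reachable because its client address is bound to 0.0.0.0.
spring:
  application:
    name: my-service              # assumed service name
  cloud:
    consul:
      host: consul
      port: 8500
      discovery:
        prefer-ip-address: true   # register an IP other containers can reach
```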
Let me answer my own question: this is not the way we want to do it. I mean, we cannot put some things inside Swarm and some things outside Swarm and expect it to work. It will not. Consul as service discovery cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you are using Swarm, everything should live on overlay networks (RabbitMQ, Redis, ELK and so on)...