I have created a Kubernetes cluster with one master node and one worker node and deployed containers. How can I access the containers through an external IP?
I have tried assigning an IP address to the containers by setting type=LoadBalancer in the Compose file.
I would suggest you go through the tutorials for Kubernetes.
In general: (1) for production you would want 3 master nodes; (2) set up an Ingress controller (an HTTP load balancer) exposed as a type=LoadBalancer Service, and then configure an Ingress with a domain for routing, instead of using IPs to access the containers directly.
https://kubernetes.io/docs/tutorials/
https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
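As a minimal sketch of point (2) above — assuming a Deployment already exists with pods labelled app: my-app; the name, domain, and port numbers are placeholders, not part of the original question:

```yaml
# Hypothetical Service; a cloud provider assigns the external IP.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app        # must match your Deployment's pod labels
  ports:
    - port: 80
      targetPort: 8080
---
# With an Ingress controller installed, route by domain instead of raw IPs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com   # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```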
I have two VMs on GCP in the same network and the same subnet, VM-A and VM-B. VM-A hosts a Jenkins master container and VM-B hosts a Jenkins child container. I need to SSH directly to the child container from the Jenkins master. Again, the two Docker containers are on different machines. Any idea how I can do this? Thanks
I'm not 100% sure, but since both the containers are on VMs which are in the same network and subnet, wouldn't the container internal IPs also be in the same subnet? At least that's how I believe GCP networking works. If they indeed are in the same subnet, then you can SSH to the child container by specifying the internal IP of the child container.
When you create a container it sits in its own Docker network.
Only containers on the same network can communicate directly using their network names.
Docker does offer multi-host overlay networking via Swarm, but that requires both VMs to join a swarm.
As a Jenkins-level solution, I use the master's external IP to set up master-agent communication. Of course, it is less efficient.
Here are some docs which can be helpful. SSH Credentials Management with Jenkins and Using Jenkins agents. Do let us know if it was useful.
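One concrete way to do this is to publish the child container's SSH port on VM-B itself, so the master reaches it via VM-B's internal IP rather than the non-routable container IP. A hedged sketch (the host port 2222 is a placeholder choice; jenkins/ssh-agent is one commonly used agent image):

```yaml
# Hypothetical compose fragment for VM-B. Publishing 22 as 2222 means the
# master can run: ssh -p 2222 jenkins@<VM-B internal IP>
services:
  jenkins-agent:
    image: jenkins/ssh-agent
    ports:
      - "2222:22"   # host port 2222 -> container's sshd on 22
```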
I am new to Docker Swarm. I deployed a Jenkins service on a Docker Swarm cluster with 3 manager and 2 worker nodes. I can access the service using the node port, but I want to access the service from an outside network using an external load balancer. If anyone has a reference, please help me with this.
You specified an external load balancer, so you would do something like:
1. Deploy HashiCorp Consul as part of your app stack, or as a swarm service, to your swarm.
2. Integrate your services with Consul so they publish their external IPs and ports to it. The services would be set up with host-mode networking rather than Docker's ingress networking.
3. Integrate your external load balancer with Consul so it can deliver traffic to the service.
4. Point your external DNS at your external load balancer.
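A minimal sketch of the host-mode publishing mentioned above (image and ports are illustrative, not a tested stack): with mode: host, each task binds directly to its node's IP and port, bypassing the ingress mesh, which is what lets Consul register real node addresses.

```yaml
# Hypothetical stack fragment: each task exposes its actual node IP:8080,
# which a local Consul agent can then register for the external LB.
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host     # bypass ingress mesh; traffic hits the node running the task
    deploy:
      mode: global     # one task per node, so every node answers locally
```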
Say I have a swarm of 3 nodes on my local system, and I create a service, say Drupal, with a replication of 3 in this swarm. Now each of the nodes is running one Drupal container. When I want to access Drupal in my browser, I have to use the IP address of one of the nodes, <IP Address>:8080.
Is there a way I can set a DNS name for this service and access it using DNS name instead of having to use IP Address and port number?
You need to configure the DNS server used by the host making the query. So if your laptop queries public DNS, you need to create a public DNS entry that resolves from the internet (on a domain you own). It should resolve to the IPs of the Docker hosts running the containers, or to an LB in front of those hosts. Then you publish the port on the host to the container you want to access.
You should not be trying to talk directly to the container IP; container IPs are not routable from outside the Docker host. And the Docker DNS used for service discovery is for container-to-container communication, which is separate from communication from outside of Docker that goes through a published port.
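As an illustration — the service name, image, and port mapping are placeholders — the published port is the piece your DNS record ultimately points at:

```yaml
# Hypothetical stack fragment: publish container port 80 as host port 8080
# on the swarm nodes, then create a public DNS A record (e.g.
# drupal.example.com) resolving to the node IPs or to an LB in front of them.
services:
  drupal:
    image: drupal:latest
    ports:
      - "8080:80"   # host port 8080 -> container port 80
    deploy:
      replicas: 3
```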
If I have a microservice for Eureka service discovery and I give it 5 replicas in docker-compose.yml, those 5 Eureka containers would be spread across the nodes in the swarm cluster.
My questions are, when a microservice wants to register itself with Eureka:
Would it specify the IP address of the master node of the swarm cluster in its config for the Eureka server?
When a microservice registers itself with Eureka, whichever way, does the registry get replicated across all the Eureka containers in the swarm cluster? After all, who knows which Eureka node would serve a particular microservice.
When a microservice registers itself with Eureka, does the registry get replicated across all the Eureka containers?
I have never tested this myself, but I believe so: the registries contact each other and share all currently registered microservices.
Who knows which Eureka node in the swarm cluster would serve a particular microservice?
Your microservice has a unique name in its application.properties that allows other microservices to send it HTTP requests. You don't control that mechanism in Java, so presumably it contacts the most suitable instance.
Would it specify the IP address of the master node of the swarm cluster in its config for the Eureka server?
I tried several times to set up a hostname to let microservices contact Eureka using a container-based hostname, but that didn't work. I use frolvlad/alpine-oraclejdk8 as the base image, plus bash and wait-for-it.sh, and I run everything with docker-compose.
Keep in mind that it is the configuration server that tells microservices where Eureka is. I ended up adding a bash script to the configuration server that extracts application.properties from the jar file, puts in the IP of the registry, and writes the updated application.properties back into the jar.
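One alternative sketch that avoids hard-coding IPs, assuming Spring Cloud Netflix clients (the service and image names here are placeholders): Docker's built-in DNS resolves the Compose service name, and the standard eureka.client.serviceUrl.defaultZone property can be supplied as an environment variable via relaxed binding:

```yaml
# Hypothetical compose fragment: clients find Eureka by service name,
# so no IP rewriting inside the jar is needed.
services:
  eureka:
    image: my-eureka-server    # assumed image name
    ports:
      - "8761:8761"
  my-service:
    image: my-service          # assumed image name
    environment:
      # Spring Boot relaxed binding for eureka.client.serviceUrl.defaultZone;
      # "eureka" resolves through Docker's internal DNS.
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka:8761/eureka/
```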
So I've got a Plex server running on my Docker swarm!! If I kill a node, Plex magically starts somewhere else. This is great! Now comes the fun part...
With old-school containers I would just forward port 32400 on my router to the server running Plex and it would work fine. Now that Plex can run in multiple different places, I need to figure out how to forward the port to some static resource. I could run HAProxy bound to a bridge interface on every node to provide failover... but I'd like to see if there's an easier way to accomplish this.
What's the best way to forward ports to services in Docker Swarm?
Port forwarding is built into the new swarm mode. There's a section on load balancing in the documentation:
The swarm manager uses ingress load balancing to expose the services
you want to make available externally to the swarm. The swarm manager
can automatically assign the service a PublishedPort or you can
configure a PublishedPort for the service in the 30000-32767 range.
External components, such as cloud load balancers, can access the
service on the PublishedPort of any node in the cluster whether or not
the node is currently running the task for the service. All nodes in
the swarm cluster route ingress connections to a running task
instance.
Swarm mode has an internal DNS component that automatically assigns
each service in the swarm a DNS entry. The swarm manager uses internal
load balancing to distribute requests among services within the
cluster based upon the DNS name of the service.
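A minimal compose-file sketch of that ingress publishing for Plex (the image and replica count are illustrative): with the default ingress mode, port 32400 answers on every swarm node, so the router can forward to any node's IP.

```yaml
# Hypothetical stack fragment: the routing mesh forwards connections on
# 32400 from any node to the node currently running the Plex task.
services:
  plex:
    image: plexinc/pms-docker
    ports:
      - "32400:32400"   # ingress-mode PublishedPort, reachable on all nodes
    deploy:
      replicas: 1
```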
Update
The following article discusses how to integrate a proxy load balancer with the Docker engine:
https://technologyconversations.com/2016/08/01/integrating-proxy-with-docker-swarm-tour-around-docker-1-12-series/