I have a Kubernetes cluster running successfully. I have tried deploying applications to the cluster, like a simple nginx. My setup has a combined master-minion node and a separate minion node.
The problem is that I'm able to launch applications from the master-minion node, but when I bring up applications on the other minion I get a "no route to host" error.
After some exploration I saw that the container IP is not pingable from the master-minion node.
Can someone point out what to do to fix this communication between containers in the cluster?
Have you set up networking for Kubernetes? You can refer to:
http://kubernetes.io/v1.0/docs/admin/networking.html
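For what it's worth, here is a quick way to check whether cross-node pod networking is in place (a sketch only; pod IPs and CIDRs will differ on your cluster):

# List nodes and pods together with their IPs
kubectl get nodes -o wide
kubectl get pods -o wide

# On the master-minion node, check that there is a route for the other
# minion's pod subnet; with flannel/bridge setups you should see one
# route per node's pod CIDR
ip route

# If the other minion's pod CIDR has no route here, pinging its
# containers fails with exactly the "no route to host" error above
ping <pod-ip-on-other-minion>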
Related
I have two VMs on GCP in the same network and same subnet: VM-A and VM-B. VM-A hosts a master Jenkins container and VM-B hosts a child Jenkins container. I need to SSH directly to the child container from the master Jenkins. Again, both Docker containers are on different machines. Any idea how I can do this? Thanks
I'm not 100% sure, but since both containers are on VMs in the same network and subnet, wouldn't the containers' internal IPs also be in the same subnet? At least that's how I believe GCP networking works. If they are indeed in the same subnet, then you can SSH to the child container by specifying its internal IP.
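If the container IPs really are routable between the VMs, the connection itself would just be plain SSH (the IP and user below are hypothetical, and the child container must be running an SSH server):

ssh jenkins@10.128.0.5   # internal IP of the child container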
When you create a container it sits in its own Docker network.
Only containers in the same network can communicate directly using their container names.
Docker has overlay networking using Swarm, but it is for non-prod environments.
As a Jenkins-level solution, I use the external IP of the master to set up master-agent communication. Of course, it is less efficient.
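As a rough sketch of that workaround (the image name, port, and host details are illustrative, not from the thread): publish the child container's SSH port on VM-B's host and point the Jenkins SSH agent configuration at that host and port:

# On VM-B: expose the agent container's SSH daemon on host port 2222
docker run -d --name jenkins-child -p 2222:22 my-ssh-agent-image

# On VM-A (in Jenkins): add an SSH build agent with host = VM-B's IP,
# port = 2222, and the SSH credentials configured for the container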
Here are some docs which can be helpful. SSH Credentials Management with Jenkins and Using Jenkins agents. Do let us know if it was useful.
I understand basic management and operation of containers on bare metal running the Docker engine with Kubernetes orchestration. I'm wondering how management and orchestration of containers works on a virtual machine (VM). For anyone familiar with running containers on VMs: is it more difficult to manage and orchestrate compared to containers on bare metal?
Regarding resources, as I understand it the VM instance itself is already mapped to a specific flavor (e.g. 2 vCPU and 8G memory); does that mean containers on the VM will be limited by the defined VM flavor?
How will K8s manage containers on a VM: does it see the VM as a VM or as a pod?
Thanks for sharing your comments and input. Please advise and enlighten me.
There is no difference if you want to use a VM as a worker node of the Kubernetes cluster and manage pods (containers) on it. Kubernetes considers and manages the VM as a node.
If you want to run standalone containers on top of a VM using plain Docker without any orchestration tool, the following will be hard to manage:
Deployment options
Scaling containers
Resource management
Load balancing the traffic across containers
Handling the routing
Monitor the health of containers and hosts
If you still want to run containers on top of plain VMs, there are a few managed services from AWS & GCP:
Cloud Run
ECS
The above are managed container orchestration services; using them you can manage containers on top of VMs.
If you want to run the containers yourself, you can also do it with plain docker or docker-compose, but the very first issue you will face is routing traffic across multiple containers.
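As a minimal sketch of that pain point (image names are placeholders): plain Docker gives you name resolution on a user-defined network, but the routing layer is yours to build:

# Containers on a user-defined bridge can reach each other by name
docker network create appnet
docker run -d --network appnet --name app1 my-app-image
docker run -d --network appnet --name app2 my-app-image

# Spreading external traffic across app1 and app2 still needs something
# hand-rolled, e.g. an nginx container with its own upstream config --
# exactly the work an orchestrator would do for you
docker run -d --network appnet -p 80:80 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx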
How will K8s manage containers on a VM: does it see the VM as a VM or as a pod?
It sees the VM as a node: it first runs the necessary node services on top of the VM and then manages pods on it.
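You can see this directly with kubectl (the node name below is a placeholder):

# The VMs show up as ordinary nodes
kubectl get nodes -o wide

# "Allocatable" in the node description reflects the VM flavor
# (e.g. ~2 vCPU / 8Gi), which also answers the resource question above:
# pods on that node are scheduled against what the VM provides
kubectl describe node <node-name>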
I am currently developing Docker containers using Visual Studio, and these container images are supposed to run in a Kubernetes cluster that I am also running locally.
Currently, the Docker container running via Visual Studio is not deployed to the Kubernetes cluster, yet for some reason I am able to ping a Kubernetes pod's IP address from the Docker container, which I don't quite understand; shouldn't they be separated and unable to reach each other?
Also, the container cannot be located on the Kubernetes dashboard.
And since they are connected, why can't I use the Kubernetes service to connect to my pod from my Docker container?
The Docker container is capable of pinging the cluster IP, meaning that it is reachable. However, an nslookup of the service name fails to resolve the hostname.
So, as I already stated in the comment:
When Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network, unless a custom network is specified.
That means you are able to ping containers by their respective IPs, but you are not able to resolve the DNS names of cluster objects: your VM knows nothing about the cluster's internal DNS server.
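In other words, the symptom looks like this from the Docker container (IPs and names are illustrative):

ping 10.244.1.23                                 # pod IP: reachable
nslookup my-service.default.svc.cluster.local    # fails: no cluster DNS configured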
A few options for what you can do:
1) Explicitly add records for the cluster DNS names to /etc/hosts inside the VM.
2) Add nameserver and search records to /etc/resolv.conf inside the VM. See one of my answers related to DNS resolution on Stack Overflow: nslookup does not resolve Kubernetes.default.
3) Use dnsmasq as described in the article Configuring your Linux host to resolve a local Kubernetes cluster’s service URLs. By the way, I highly recommend reading it from beginning to end; it describes very well how to work with DNS and what workarounds you can use.
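For option 2, a minimal sketch (the DNS service IP varies per cluster, so verify it first):

# Find the cluster DNS service IP (commonly 10.96.0.10)
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'

# Then add it to /etc/resolv.conf inside the VM:
#   nameserver 10.96.0.10
#   search default.svc.cluster.local svc.cluster.local cluster.local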
Hope it helps.
I've got a Docker swarm cluster where the manager nodes are permanently in 'drain' mode, i.e. no container will ever run on them.
Now I'm running Jenkins in a container on a worker node and I'd like Jenkins to be able to deploy images to the swarm cluster.
My reasoning so far:
Mounting /var/run/docker.sock is obviously not an option, as the Docker manager and the Jenkins container are on different hosts and the local Docker daemon is not a swarm manager.
Connecting from the Jenkins container to the local Docker host over TCP has the same issue.
Adding the Jenkins container to the host network (--network host) seems not to be possible: a container cannot be in an overlay network at the same time.
I assume this is quite a common use case, yet I haven't found a solution and maybe someone here has an idea.
Thanks!
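One possible approach, sketched under the assumption that you can expose the manager's Docker API over TLS (hostnames, certificate files, and service names below are illustrative; see Docker's "Protect the Docker daemon socket" docs for the certificate setup):

# On a manager node: have dockerd also listen on a TLS-protected TCP port
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
  --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem

# In the Jenkins container: target the manager instead of the local daemon
export DOCKER_HOST=tcp://manager.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/var/jenkins_home/.docker
docker service update --image myregistry/myapp:latest my_service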
I would like my Jenkins master (not containerized) to create slaves within containers. So I have installed the Docker plugin into Jenkins, created a Docker server, configured it, and Jenkins does indeed spin up a slave container fine after job creation.
However, after I created another Docker server and formed a swarm out of the two of them, running Jenkins jobs again continued to deploy containers only on the original server (which is now also a manager). I'd expect the swarm to balance the load and distribute the newly created containers evenly across the swarm. What am I missing?
Do I have to use a service perhaps?
Containers by themselves are not load balanced, even when deployed in a swarm. What you're looking for is indeed a service definition. Just be careful with port allocation: if you deploy your Jenkins slaves to listen on port 80, all swarm hosts will listen on port 80 and mesh-route requests to the containers.
That basically means you couldn't deploy anything else to port 80 on those hosts. Once the service is created, however, any request to any of the hosts will be load balanced across the containers.
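A minimal sketch of such a service (the image name and ports are hypothetical):

docker service create --name JenkinsService --replicas 2 \
  --publish 80:8080 my-jenkins-agent-image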
The other nice thing is that you can dynamically change the number of replicas with service update:
docker service update JenkinsService --replicas 42
While 42 may be extreme, you could obviously change it :)
At the time, there was nothing I could find in Swarm that would let me control container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of this as well.
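For the record, the kind of control I was after looks roughly like this in Kubernetes (a sketch; the image and labels are illustrative): pod anti-affinity forces the replicas onto different nodes.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-agent
spec:
  replicas: 2
  selector:
    matchLabels: { app: jenkins-agent }
  template:
    metadata:
      labels: { app: jenkins-agent }
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels: { app: jenkins-agent }
            topologyKey: kubernetes.io/hostname
      containers:
      - name: agent
        image: jenkins/inbound-agent   # illustrative; agent env/config not shown
EOF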