I am running into a peculiar problem.
I have a Kubernetes cluster, and I set no_proxy for the cluster's master node (in Docker's systemd environment) so that I can run docker build/push against a registry that runs in Docker on the master node.
Now my containers cannot access the outside network (because, I presume, their traffic goes through the k8s master node).
If I instead choose not to set no_proxy for the master node in Docker, I cannot push images to my registry through the master's external IP and have to use localhost as the push destination, which breaks my app later on.
I use Weave as my CNI plugin.
The network communication of containers running on your nodes has nothing to do with how your master talks to the outside world, whether directly or through a proxy.
Basically, the network communication for your containers running on a node goes through its own network interface, etc.
That said, are you running your workloads on your master? If so, setting no_proxy for some hostnames could be affecting the communication of the containers on your master. It could also be affecting the communication of kube-controller-manager, kube-apiserver, CoreDNS, the kubelet, and the network overlay on the master.
Are you configuring your Docker client proxy correctly, as described here?
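In case it helps, here is a minimal sketch of the daemon-side drop-in the Docker docs describe; the proxy address and the 10.0.0.10 master IP are placeholders that would need to match your environment:

# Docker daemon proxy settings via a systemd drop-in (all values are placeholders).
# Putting the master's external IP (and the registry host:port) in NO_PROXY lets pushes
# to the local registry bypass the proxy while other traffic still goes through it.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF' >/dev/null
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.10"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker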
I have two VMs on GCP in the same network and subnet: VM-A and VM-B. VM-A hosts a master Jenkins container and VM-B hosts a child Jenkins container. I need to SSH directly to the child container from the master Jenkins. Again, the two Docker containers are on different machines. Any idea how I can do this? Thanks.
I'm not 100% sure, but since both containers are on VMs in the same network and subnet, wouldn't the containers' internal IPs also be in the same subnet? At least that's how I believe GCP networking works. If they are indeed in the same subnet, you can SSH to the child container by specifying its internal IP.
When you create a container, it sits in its own Docker network.
Only containers sitting in the same network can communicate directly using their network names.
Docker has overlay networking using swarm but it is for non-prod environments.
As a Jenkins-level solution, I use the master's external IP to set up master-agent communication. Of course, it is less efficient.
Here are some docs that may be helpful: SSH Credentials Management with Jenkins and Using Jenkins agents. Do let us know if they help.
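If it helps, one rough way to reach the child over the VM's address is to publish the child container's SSH port on VM-B and connect to VM-B's internal IP from the master. A minimal sketch, assuming the child container runs sshd (image name, user, and port are placeholders):

# On VM-B: run the child Jenkins agent with its SSH port published on the host
docker run -d --name jenkins-agent -p 2222:22 my-jenkins-ssh-agent
# From the master Jenkins on VM-A: connect via VM-B's internal IP and the published port
ssh -p 2222 jenkins@<VM-B-internal-IP>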
We are implementing a CI infrastructure as Docker stacks.
Some of the containers in the stacks now need to access external services, only available through an OpenVPN connection, let's say on the 192.168.2.0/24 subnet.
In order to keep containers as "single-purpose" as possible, we would ideally like to add a Docker container acting as a VPN gateway, through which other containers could talk to the 192.168.2.0/24 subnet.
This first raises a complication: a VPN client container needs the NET_ADMIN capability (cap-add), which is not available in the swarm mode we use to deploy stacks. Is there a workaround, apart from starting the VPN client container standalone through docker run?
And more importantly, once we have the vpnclient container running and connected, how can we configure other containers in the swarm to actually use it as a gateway to reach all IPs on the 192.168.2.0/24 subnet?
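For concreteness, here is a rough sketch of what we have in mind, assuming a generic OpenVPN client image and an overlay network created with --attachable so that a standalone container can join it (image, network name, and IPs are placeholders):

# Standalone gateway container: --cap-add and --device work with docker run, unlike docker stack deploy
docker network create --driver overlay --attachable ci_net
docker run -d --name vpn-gateway --cap-add NET_ADMIN --device /dev/net/tun \
  --network ci_net -v "$PWD/client.ovpn:/vpn/client.ovpn" some/openvpn-client
# In each container that needs the VPN, route the subnet via the gateway's overlay IP.
# Note that this itself requires NET_ADMIN, which is exactly the swarm-mode limitation above.
ip route add 192.168.2.0/24 via <vpn-gateway-ip-on-ci_net>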
I've got a Docker swarm cluster where the manager nodes are in permanent 'drain' mode, i.e. no container will ever run on them.
Now I'm running Jenkins in a container on a worker node and I'd like Jenkins to be able to deploy images to the swarm cluster.
My reasoning so far:
Mounting /var/run/docker.sock is obviously not an option, as the swarm manager and the Jenkins container are on different hosts and the local Docker daemon is not a swarm manager (see the sketch after this list).
Connecting from the Jenkins container to the local Docker daemon over TCP has the same issue.
Adding the Jenkins container to --network host does not seem possible: a container cannot be in the host network and an overlay network at the same time.
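For reference, roughly what the two ruled-out approaches look like (image, port, and file names are placeholders):

# Bind-mounting the local socket: the Jenkins container only ever talks to the worker's daemon
docker run -d -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts
# Pointing the client at the local daemon over TCP has the same problem: that daemon is not a manager
DOCKER_HOST=tcp://localhost:2375 docker stack deploy -c stack.yml ci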
I assume this is quite a common use case, yet I haven't found a solution and maybe someone here has an idea.
Thanks!
I would like my Jenkins master (not containerized) to create slaves inside containers. So I installed the Docker plugin in Jenkins, created and configured a Docker server, and Jenkins does indeed spin up a slave container fine once a job is created.
However, after I created another Docker server, formed a swarm out of the two, and tried running Jenkins jobs again, it continued to deploy containers only on the original server (which is now also a manager). I was expecting the swarm to balance the load and distribute the newly created containers evenly across the nodes. What am I missing?
Do I have to use a service perhaps?
Containers by themselves are not load balanced, even if they run on swarm nodes. What you're looking for is indeed a Service definition. Just be careful about port allocation: if you deploy your Jenkins slaves to listen on port 80, for example, every swarm host will listen on port 80 and mesh-route requests to the containers.
That basically means you couldn't deploy anything else on port 80 on those hosts. Once that's done, however, any requests to the hosts are load balanced across the containers.
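For example, a sketch of such a service (the name matches the update command below; image and ports are placeholders):

docker service create --name JenkinsService --replicas 2 --publish 80:8080 some/jenkins-agent-image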
The other nice thing is that you can dynamically change the number of replicas with service update:
docker service update --replicas 42 JenkinsService
While 42 may be extreme, you could obviously change it :)
At that point there was nothing I could find in Swarm that would let me control how containers are distributed across the nodes.
I ended up using Kubernetes, which is more flexible for that purpose. I think Marathon is capable of that as well.
I have a Kubernetes cluster running successfully, and I have tried bringing up applications in the cluster, such as a simple nginx. My setup consists of a master-minion node and a separate minion node.
The problem is that I can launch applications on the master-minion node, but when I bring up applications on the other minion I get a 'no route to host' error.
After some exploration I saw that the container IP is not pingable from the master-minion node.
Can someone point out what to do to fix communication between containers in the cluster?
Have you set up networking for Kubernetes? You can refer to:
http://kubernetes.io/v1.0/docs/admin/networking.html
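A few quick checks that may help narrow it down (the pod IP is a placeholder):

kubectl get pods -o wide            # note each pod's IP and the node it runs on
ip route                            # on each node: is there a route covering the pod/overlay CIDR?
ping <pod-IP-on-the-other-minion>   # should succeed once the overlay network is set up correctly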