Understanding Minikube and the local IP address - docker

I am very new to Kubernetes. I'm using minikube to run a k8s cluster on my local machine and deploy a Redis-based Docker image to it. I have the following question:
After starting minikube, I get the following output when I run kubectl cluster-info. I don't understand what this address 192.168.49.2:8443 is or where it is coming from.
I have read up on minikube online and understood that it spins up a cluster by creating VMs out of the local machine. So I'm guessing the IP address I get here is a virtual IP address.

192.168.49.0/24 is the default subnet used by the Minikube Docker driver. In greater detail, Minikube creates a Docker bridge network on which the cluster runs; this network gets the default subnet 192.168.49.0/24 with NAT, and the Minikube container itself is assigned the IP 192.168.49.2.
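You can verify this by inspecting the network the Docker driver creates (named minikube by default) and the minikube container itself; both commands use standard docker inspect templating:

docker network inspect minikube --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
# typically prints: 192.168.49.0/24 192.168.49.1

docker inspect minikube --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# typically prints: 192.168.49.2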

Related

Access to a kubernetes cluster from inside a docker container

I have some Docker containers running with Docker Compose (node.js, databases, nginx...). I also have a minikube Kubernetes cluster.
I am trying to communicate from the node.js container to Kubernetes to manage some nodes (using the Kubernetes API and the generated config file). But I can't get access to Kubernetes: I tried to ping the minikube IP from a Docker container and get no connection, yet from my local machine it works without problems.
Can someone help? What is wrong?
My machine runs Ubuntu Linux 20.04 and minikube uses the docker driver.
For Kubernetes to work, you need two network interfaces configured: one on a private network, and another on DHCP so that you get an internet connection. Take into account that the cluster is bound to the IP it had at initialization time, so if that address comes from DHCP you must pin it as a static IP.
Check your etcd.yaml file.
Maybe this netplan network configuration can help you:
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s3:
      addresses: [10.0.0.1/24]
      dhcp4: no
      dhcp6: no
    enp0s8:
      addresses: [192.168.XX.XX/24]
      dhcp4: true
      dhcp6: no
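After editing the corresponding file under /etc/netplan/ (the exact filename varies from system to system), the configuration can be applied and checked with:

sudo netplan apply
ip addr show enp0s8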

Can't start minikube inside docker network

I'm trying to start minikube on Ubuntu 18.04 inside the nginx proxy manager Docker network, in order to set up some Kubernetes services and manage the domain names and proxy hosts in the nginx proxy manager platform.
So I have the nginxproxymanager_default Docker network, and when I run minikube start --network=nginxproxymanager_default I get:
Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
What might I be doing wrong?
A similar error was reported in kubernetes/minikube issue 12894:
please check whether there are other services using that IP address, and try starting minikube again.
Considering the minikube start man page:
--network string
    network to run minikube with. Now it is used by docker/podman and KVM drivers. If left empty, minikube will create a new network.
Using an existing NGiNX network (as opposed to a plain docker/podman one) might not be supported.
I have seen NGiNX set up as an ingress, not directly as a "network".
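To follow up on the "address already in use" hint, one way to see which addresses are already taken on that network (the network name here matches the question) is:

docker network inspect nginxproxymanager_default --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'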

Access host resources from a kubernetes cluster service when minikube started with DOCKER vm driver

I have been able to successfully provision services inside a minikube Kubernetes cluster that connect to services such as Cassandra, Kafka, etc. installed on the host machine, as long as I started the minikube cluster with the virtualbox VM driver:
minikube start --driver=virtualbox
For this I had to define a k8s manifest, say for a Cassandra endpoint, with an IP address of the host machine as identified from within the minikube cluster via the following command, as suggested here:
minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
But when I start the minikube cluster with docker as the VM driver, I am not able to figure out the IP address of the host machine as seen from inside the cluster, because the command to fetch it doesn't work:
minikube start --driver=docker
minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
bash: route: command not found
Please suggest how this IP address could be retrieved.
I think you are overcomplicating things when it is not needed.
Since you want to run a testing environment (minikube suggests that), you should just start minikube with:
minikube start --driver=docker
Now the IP address of the Kubernetes cluster can be found simply by typing:
minikube ip
Or:
kubectl cluster-info | grep master
This command will show you the IP address that Kubernetes is running from, and the port (usually 6443).
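With the docker driver this typically prints the bridge address discussed in the first question, for example:

minikube ip
# typically prints: 192.168.49.2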
Unfortunately, getting the host machine's IP that way is not possible while using Docker as the driver, due to the limitations of the Docker VM.
There is a workaround listed here, saying:
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
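As a quick check of that special name from a plain container (Docker Desktop only, per the quote above; alpine is just a convenient image that ships ping):

docker run --rm alpine ping -c 1 host.docker.internal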
But there is no official analogous way to do so with Minikube.
I know that it might not be the exact thing you wanted, but I see three options as a workaround:
1) Use a different driver.
2) Try using Telepresence as a proxy.
3) Try routing an internal Kubernetes IP address to the host system.
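As an aside, a hedged sketch: if the minikube container image happens to ship the iproute2 tools (an assumption; route itself is clearly missing), the original gateway lookup can be rewritten as:

minikube ssh "ip route show default | awk '{ print \$3 }'"

Whether services on the host are actually reachable through that gateway address still depends on the host's firewall and on which addresses those services bind to.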
I hope it helps.

Visual studio docker container capable of seeing kubernetes pods outside?

I am currently developing Docker containers using Visual Studio, and these container images are supposed to run in a Kubernetes cluster that I am also running locally.
Currently, the Docker container run via Visual Studio is not deployed to the Kubernetes cluster, yet for some reason I am able to ping a Kubernetes pod's IP address from the Docker container, which I don't quite understand; should they not be separate, and unable to reach each other?
Also, the container cannot be located on the Kubernetes dashboard.
And since they are connected, why can't I use the Kubernetes service to connect to my pod from my Docker container?
The Docker container is capable of pinging the cluster IP, meaning that it is reachable.
nslookup on the service name is not able to resolve the hostname.
So, as I already stated in the comment:
When Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network, unless a custom network is specified.
That means you are able to ping containers by their respective IPs, but you are not able to resolve the DNS names of cluster objects: your VM knows nothing about the internal cluster DNS server.
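A minimal illustration of that split, with a hypothetical pod IP and service name:

ping -c 1 172.17.0.5
# succeeds: raw IPs are reachable over the bridge network

nslookup my-service.default.svc.cluster.local
# fails: the host resolver knows nothing about the cluster DNS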
A few options for what you can do:
1) Explicitly add records for the cluster DNS names to /etc/hosts inside the VM.
2) Add a record to /etc/resolv.conf with the nameserver and search entries inside the VM (see the sketch after this list). See one of my answers related to DNS resolution on Stack Overflow: nslookup does not resolve Kubernetes.default.
3) Use dnsmasq as described in the article Configuring your Linux host to resolve a local Kubernetes cluster's service URLs. By the way, I highly recommend reading it from beginning to end: it describes in detail how to work with DNS and what workarounds you can use.
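A minimal sketch of option 2, assuming the conventional kube-dns/CoreDNS ClusterIP of 10.96.0.10 (verify yours with: kubectl -n kube-system get svc kube-dns):

# /etc/resolv.conf (the address and search domains below are assumptions)
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local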
Hope it helps.

Host unreachable after docker swarm init

I have Windows Server 2016 Core (Hyper-V VM). Docker is installed and working, and I want to create a swarm.
IP config at the beginning:
1. Ethernet - 192.168.0.1
2. vEthernet (HNS Internal NIC) - 172.30.208.1
Then I run
docker swarm init --advertise-addr 192.168.0.1
Swarm is created, but I have lost my main IP address. IP config:
1. vEthernet (HNS internal NIC) - 172.30.208.1
2. vEthernet (HNS Transparent) - 169.254.225.229
The created swarm manager node is not reachable on the main address 192.168.0.1. I can't connect to it, and swarm workers are not able to join with this IP. Where is the problem?
A little late answering this, but... Docker is going to take over your network card when you bring up the swarm. What I did was use two network cards: one I left alone for Docker to use, and the second I used for everything else, including virtual machines.
Currently, you cannot use Docker for Mac or Docker for Windows alone to test a multi-node swarm. For a single-node swarm cluster:
If you are using Docker for Mac or Docker for Windows to test single-node swarm, simply run docker swarm init with no arguments
However, you can use the included version of Docker Machine to create the swarm nodes (see Get started with Docker Machine and a local VM), then follow the tutorial for all multi-node features.
For further info read this.
Edit:
Also refer to this
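For reference, the single-node path those docs describe comes down to the following (run on the manager host; the join token is only needed once workers enter the picture):

docker swarm init
docker node ls                  # should list this machine as the single manager
docker swarm join-token worker  # prints the command a worker would use to join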
