Macvlan network with Kubernetes - Docker

I set up a Kubernetes cluster using k3s, with one master and two nodes. I created a Docker macvlan network on one of the nodes.
I want to achieve the scenario below:
Assign a user-defined IP (not a cluster IP) to a container/pod.
q1. Is there any alternative to Docker macvlan?
q2. Can we run a command on the node itself (not in a pod/container) while deploying a pod/service?
q3. Can we create a Kubernetes network with user-defined IPs? (I don't think LB/NodePort/Ingress will help with user-defined IPs; correct me if I am wrong!)

Kubernetes has its own very specialized network implementation. It can't easily assign a unique externally accessible IP address to each process the way the Docker macvlan setup can. Kubernetes also can't reuse the Docker networking infrastructure. Generally the cluster takes responsibility for assigning IP addresses to pods and services, and you can't specify them yourself.
So, in Kubernetes:
You can't manually assign IP addresses to things;
The cluster-internal IP addresses aren't directly accessible from outside the cluster;
The Kubernetes constructs can only launch containers on arbitrarily chosen nodes (possibly with some constraints; possibly on every node), but you don't usually launch a container on a single specific node, and you can't run a non-container command on a node.
Given what you're describing, a more general-purpose cluster automation tool like Salt Stack, Ansible, or Chef might meet your needs better. This will let you launch processes directly on managed nodes, and if those are server-type processes, they'll be accessible using the host's IP address as normal.
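As a sketch of that approach, a minimal Ansible playbook could start a container directly on a chosen node with a fixed IP on the existing Docker macvlan network. This assumes the community.docker collection is installed; the host group, network name, image, and IP below are placeholders, not anything from the question:

```yaml
# Hypothetical playbook: run a container on a specific node with a
# user-defined IP on a pre-existing Docker macvlan network.
# "macvlan_nodes" and "my-macvlan-net" are placeholder names.
- hosts: macvlan_nodes
  tasks:
    - name: Start container with a fixed IP on the macvlan network
      community.docker.docker_container:
        name: my-service
        image: nginx:alpine
        networks:
          - name: my-macvlan-net
            ipv4_address: 192.168.1.50
```

Because the macvlan network owns the IP assignment, the container is then reachable at that address from the rest of the LAN, which is what Kubernetes itself won't give you.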

You can look into MetalLB, specifically the Layer 2 mode and the Local traffic policy
(https://metallb.universe.tf/usage/).
You cannot assign IPs to the pods, but when you create a Service of type LoadBalancer (for example an HTTP routing service like Traefik), MetalLB will bind that service to the IP of a node.
As an example, you can see that the external IP of the Traefik service is reported as my node's address, 192.168.1.201:
NAME     STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node02   Ready    <none>                 8d    v1.20.2+k3s1   192.168.1.202   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
node01   Ready    control-plane,master   8d    v1.20.2+k3s1   192.168.1.201   <none>        Alpine Linux v3.13   5.10.10-0-virt   containerd://1.4.3-k3s1
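For reference, a minimal Layer 2 MetalLB configuration of the kind used with MetalLB releases of that era (the legacy ConfigMap format; newer releases use CRDs such as IPAddressPool instead) might look like the sketch below. The address range is a placeholder you would adapt to a free range on your LAN:

```yaml
# Hypothetical MetalLB Layer 2 config (legacy ConfigMap format).
# The address range below is a placeholder, not from the question.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```

With a pool like this in place, any Service of type LoadBalancer gets an address from the pool, announced from one of the nodes via ARP.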
For q2:
Of course you can; k8s doesn't take over the node. You can SSH into it and run whatever you like.
For q1:
No.
NAMESPACE     NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
default       service/kubernetes       ClusterIP      10.43.0.1      <none>          443/TCP
kube-system   service/kube-dns         ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP
kube-system   service/metrics-server   ClusterIP      10.43.254.20   <none>          443/TCP
kube-system   service/traefik          LoadBalancer   10.43.130.1    192.168.1.201   80:31666/TCP,443:31194/TCP,8080:31199/TCP
default       service/whoami           ClusterIP      10.43.61.10    <none>          80/TCP

Related

Minikube M1 - minikube service not working

I am trying to follow the instructions in the link to deploy a hello world app in Minikube. So far I have created the deployment and exposed it.
Deployment:
NAMESPACE   NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
default     deployment.apps/web   1/1     1            1           10m
Service:
NAMESPACE   NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
default     service/web   NodePort   10.107.89.59   <none>        8080:30841/TCP   10m
When I run "minikube service web --url" as shown below, I only get:
minikube service web --url
🏃 Starting tunnel for service web.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
If I run it without the "--url" option, my browser says that the server is not responding.
Has anyone had a similar problem? Why is the hello world app not loading in the browser as it does in the guide?
Thank you in advance
The tunnel is created between your Mac and the cluster IP instead of the node IP.
Here is a workaround for your case.
Get the service's cluster IP:
kubectl get svc -o wide
Get the tunnel's port on your Mac:
ps -ef | grep ssh
Access it from localhost:
curl http://127.0.0.1:PORT
If you are using an Ingress, please refer to #12089.
Remember: the bridge network on macOS is different from the one on Linux; see #7332 (comment).

AKS Load Balancer IP Address Not Accessible

I created a load balancer for a service in AKS, and the load balancer was approved to be accessible from the AKS network subnet group. The load balancer has an external IP address corresponding to an internal service, but I'm not able to access the IP address provided by the load balancer.
Please perform a kubectl get service -n <namespace> on the AKS cluster:
If you see something like the following, where the External-IP is a Public IP address:
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
my-service   LoadBalancer   10.0.192.89   20.69.145.115   80:31541/TCP   6s
then the Service was allocated a Public IP address from the Frontend IP addresses of the AKS public Load Balancer. Please ensure that all Network Security Groups associated with the AKS cluster subnet or the node virtual machines' network interfaces effectively Allow Inbound traffic from the Internet or the Public IP address (range) from which you are trying to connect. Please also ensure that there are no Firewalls, Network Virtual Appliances etc. which blocks inbound traffic to the AKS cluster subnet and node virtual machines.
If you see something like the following where the External-IP is a private IP address:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.0.184.168   10.240.0.25   80:30225/TCP   4m
then you have created an Azure Internal Load Balancer Service on the AKS cluster and a private IP address from the associated virtual network was associated to the Service. Please ensure that you are connecting to the Service from a device inside the AKS cluster's virtual network or a connected network(like peered virtual networks, virtual networks connected over a VPN gateway, on-premise network connected to the Azure Virtual network). Default Network Security Group rules allow connectivity inside the virtual network and connected networks, however if custom rules are added please ensure that the effective rules allow traffic between the source and the AKS cluster subnet and node Virtual machines.
In a third scenario, you might see the External-IP <pending> for a very long time as in the following:
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
my-service   LoadBalancer   10.0.192.89   <pending>     80:31541/TCP   45m
In this case, please describe the Service using kubectl describe service <service-name>. Under the Events section of the output, you might find errors during EnsuringLoadBalancer. Please ensure that annotations are correctly set in the Service manifest and that the correct permissions are granted to the AKS cluster's managed identity or service principal, as described in:
https://learn.microsoft.com/en-us/azure/aks/internal-lb and/or https://learn.microsoft.com/en-us/azure/aks/static-ip
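As a sketch of the internal load balancer case from the linked docs, the annotation below requests a private frontend IP from the cluster's virtual network. The service name, selector, and ports are placeholders, not from the question:

```yaml
# Hypothetical Service requesting an Azure *internal* load balancer,
# per the AKS internal-lb documentation linked above.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Without the annotation, the same manifest would instead allocate a public frontend IP on the AKS public load balancer, which is the first scenario above.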

How to access services exposed via ClusterIP when using Docker For Windows:

I installed Docker for Windows and its built-in k8s single-node cluster for dev purposes on a local workstation (Windows 10 Pro).
I'd like to know how to access services hosted on this cluster; it's not documented very well.
I don't have a load balancer installed and don't need a K8s Ingress. How can I access a service hosted at 10.105.245.65:80? localhost and 127.0.0.1 don't work, and 10.105.245.65 has no meaning on the host Windows machine.
I could use NodePort (that works), but I'd like to understand how to access it via ClusterIP.
C:\Users\balamuvi>kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   52m
webserver    ClusterIP   10.105.245.65   <none>        80/TCP    48m   ===> how do I access this service?
C:\Users\balamuvi>kubectl cluster-info
Kubernetes master is running at https://kubernetes.docker.internal:6443 =======> resolves to localhost
KubeDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
ClusterIP is not accessible from outside the cluster. You will have to exec into another pod and use curl to access it.
ClusterIP services cannot be accessed from outside the cluster.
Another method, besides exec'ing into a pod (as mentioned by Arghya), would be to use the kubectl port-forward command.
Just run:
kubectl port-forward pods/<pod-name> <local-port>:<pod-port>
and then you can access the pod under localhost:<local-port>
Refer to the Kubernetes documentation for more information about port forwarding.

Accessing a k8s service with cluster IP in default namespace from a docker container

I have a server that is orchestrated using k8s; its service looks like this:
➜ installations ✗ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
oxd-server   ClusterIP   10.96.124.25   <none>        8444/TCP,8443/TCP   3h32m
and its pod:
➜ helm git:(helm-rc1) ✗ kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
sam-test-oxd-server-6b8f456cb6-5gwwd   1/1     Running   0          3h2m
Now, I have a Docker image with an env variable that requires the URL of this server.
I have 2 questions from here.
How can the Docker container get or access that URL?
How can I access the same URL from my terminal so I can make some curl requests to it?
I hope I am clear in the explanation.
If your Docker container is outside the Kubernetes cluster, then it's not possible to access your ClusterIP service.
As you could guess by its name, ClusterIP type services are only accessible from within the cluster.
By within the cluster I mean any resource managed by Kubernetes.
A standalone docker container running inside a VM which is part of your K8S cluster is not a resource managed by K8S.
So, in order to achieve what you want, you'll have those possibilities :
Set a hostPort inside your pod. This is not recommended and is listed as a bad practice in the docs; keep this usage for very specific cases.
Switch your service to NodePort instead of ClusterIP. This way, you'll be able to access it using a node IP + the node port.
Use a LoadBalancer type of service, but this solution needs some configuration and is not straightforward.
Use an Ingress along with an IngressController but just like the load balancer, this solution needs some configuration and is not that straightforward.
Depending on what you are doing and how critical it is, you'll have to choose one of these solutions:
1 & 2 for debug/dev;
3 & 4 for prod, but you'll have to work with your k8s admin.
You can use the name of the service oxd-server from any other pod in the same namespace to access it i.e., if the service is backed by pods that are serving HTTPS, you can access the service at https://oxd-server:8443/.
If the client pod that wants to access this service is in a different namespace, then you can use oxd-server.<namespace> name. In your case that would be oxd-server.default since your service is in default namespace.
To access this service from outside the cluster (from your terminal) for local debugging, you can use port forwarding:
kubectl port-forward svc/oxd-server 8443:8443
Then you can use the URL localhost:8443 to make requests, and they will be forwarded to the service.
If you want to access this service from outside the cluster for production use, you can make the Service type: NodePort or type: LoadBalancer. See service types here.
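As a minimal sketch of the NodePort option for this service, the manifest below uses the port numbers from the output above; the selector and the nodePort value are assumptions you would match to your actual pod labels and port policy:

```yaml
# Hypothetical NodePort variant of the oxd-server Service.
# The selector below is a guess; use your actual pod labels.
apiVersion: v1
kind: Service
metadata:
  name: oxd-server
spec:
  type: NodePort
  selector:
    app: oxd-server
  ports:
    - name: https
      port: 8443
      targetPort: 8443
      nodePort: 30443   # must fall within the cluster's NodePort range (default 30000-32767)
```

With this in place, the service is reachable from outside the cluster at https://<any-node-ip>:30443/.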

Make Kubernetes Service Accessible Externally

We have a private kubernetes cluster running on a baremetal CoreOS cluster (with Flannel for network overlay) with private addresses.
On top of this cluster we run a kubernetes ReplicationController and Service for elasticsearch. To enable load-balancing, this service has a ClusterIP defined - which is also a private IP address: 10.99.44.10 (but in a different range to node IP addresses).
The issue that we face is that we wish to be able to connect to this ClusterIP from outside the cluster. As far as we can tell this private IP is not contactable from other machines in our private network...
How can we achieve this?
The IP addresses of the nodes are:
node 1 - 192.168.77.102
node 2 - 192.168.77.103
.
and this is how the Service, RC and Pod appear with kubectl:
NAME            LABELS   SELECTOR            IP(S)         PORT(S)
elasticsearch   <none>   app=elasticsearch   10.99.44.10   9200/TCP
CONTROLLER      CONTAINER(S)    IMAGE(S)        SELECTOR            REPLICAS
elasticsearch   elasticsearch   elasticsearch   app=elasticsearch   1
NAME                  READY   STATUS    RESTARTS   AGE
elasticsearch-swpy1   1/1     Running   0          26m
You need to set the type of your Service.
http://docs.k8s.io/v1.0/user-guide/services.html#external-services
If you are on bare metal, you don't have an integrated LoadBalancer. You can use a NodePort to get a port on each node, and then point whatever you use for load balancing at that port on any node.
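For example, a NodePort version of the elasticsearch Service (a sketch; the label selector is taken from the output above, the nodePort value is my own choice) would expose port 9200 on every node's 192.168.77.x address:

```yaml
# Hypothetical NodePort Service for elasticsearch; exposes the same
# port on every node, e.g. 192.168.77.102:30920 and 192.168.77.103:30920.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 30920   # within the default NodePort range 30000-32767
```

An external load balancer or DNS round-robin can then target that port on any subset of the nodes.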
You can use NodePort, but you can also use hostPort for some DaemonSets and Deployments, and hostNetwork to give a pod full access to the node's network.
IIRC, with a recent enough Kubernetes, each node can forward traffic into the internal network, so if you create the correct routing on your clients/switch, you can reach the internal network by delivering those TCP/IP packets to one node. The node will then receive the packets and SNAT+forward them to the ClusterIP or pod IP.
Finally, bare-metal clusters can now use MetalLB as a Kubernetes load balancer, which mostly uses this last feature in a more automatic and redundant way.
