How to fix service PORT for minikube - skaffold local environment? - docker

Currently I am using a local environment with Skaffold + Minikube and every time I start the cluster like this:
skaffold dev -f='./skaffold-cluster.yaml' --no-prune=false --cache-artifacts=false --status-check=false
I get a bunch of services that belong to my Skaffold manifests, but each of these services is exposed on a random port. The IP stays the same because minikube is already running.
If I do minikube service nice-service --url I get the service URL with a random port.
I want to pin this port, but I don't know whether this is something that should be configured in the Kubernetes configuration, Skaffold, minikube, or Docker.
Typical use case:
I want to access MySQL from Sequel Pro / Workbench or any other tool. These tools save their connection settings locally, including the port, so it would be great not to have to change the port in them every time just to reach the MySQL service in minikube.
Current setup: VirtualBox on the host OS, with minikube and Skaffold. Services are exposed as Kubernetes NodePort services.
Is it possible to fix these service ports?

By changing the nodePort option:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
nodePort is the port exposed by minikube service my-service --url. By setting this field, the port is no longer random but the one you need.
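For the MySQL use case from the question, a minimal sketch could pin the port like this (the service name and the app: mysql label are illustrative assumptions, not taken from the question's manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service   # hypothetical name
spec:
  type: NodePort
  selector:
    app: mysql          # assumes your MySQL pods carry this label
  ports:
    - port: 3306        # standard MySQL port
      targetPort: 3306
      nodePort: 30306   # fixed port within the default 30000-32767 range
```

Sequel Pro / Workbench can then be configured once with the minikube IP (from minikube ip) and port 30306, and the saved connection keeps working across cluster restarts.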

Related

How can I access a Kubernetes service via DNS name from a localhost running Docker Desktop/Kubernetes?

My configuration is a localhost with Docker Desktop and its built-in Kubernetes installed.
I deployed, for example, a Cassandra server as a StatefulSet, and created the following Service to expose individual pods from the StatefulSet. I cannot use a single Cassandra service, since for some Cassandra operations the default load balancing is toxic. I need to connect to all pods of a headless service, or to selected seed pods; by design, a proxyless option is the best choice in this case.
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
    - name: cql
      protocol: TCP
      port: 9042
      targetPort: 9042
I want to expose it as cassandra-0 on localhost.
What is the easiest way to do this?
What I imagine now (though maybe it is not needed):
Configure some local DNS so that cassandra-0:9042 points to port 9042.
Use nginx to route traffic to the specific exposed ports (9042, 9043, ...) for (cassandra-0, cassandra-1, ...).
I plan to run tests from localhost against pods in Kubernetes.
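Since Docker Desktop publishes LoadBalancer services on localhost, the per-pod idea from the question could be extended by giving each pod's service a distinct local port. A sketch for the second pod, assuming the same per-pod selector pattern as the cassandra-0 service above (the 9043 mapping is an illustrative choice, not tested configuration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra-1
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-1
  ports:
    - name: cql
      protocol: TCP
      port: 9043        # distinct local port per pod
      targetPort: 9042  # Cassandra's CQL port inside the pod
```

Adding a line such as `127.0.0.1 cassandra-0 cassandra-1` to /etc/hosts would then let local clients resolve the pod names, each reachable on its own port.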

How to route traffic from a physical server's port to a minikube cluster?

I want to send traffic from one physical port through a Kubernetes cluster (using minikube) to another physical port. I don't know how to route traffic from the physical port to the cluster and from the cluster to the second physical port. I'm exposing the cluster via an Ingress (and I have also tested a Service-based solution); I have one service to send external traffic to a pod and another to send traffic from the first pod to the second pod. But I really don't know how to send this traffic from the port to the cluster and from the cluster to the receiving port.
My cluster is described in there: How to route test traffic through kubernetes cluster (minikube)?
Assuming that:
Traffic needs to enter through a physical enp0s6 port on Ubuntu Server and be sent to Pod
Pod is configured with some software capable of routing traffic.
Pod from the image is routing traffic received to a physical enp0s5 port on the same Ubuntu Server machine (or further down the line).
This answer does not acknowledge:
Software used to route the traffic from Pod to a physical port enp0s5.
A side note!
Please consider following each link included in this answer, as they contain a lot of useful information.
Minikube is a tool that spawns a single-node Kubernetes cluster for development purposes on your machine (PC, laptop, server, etc.).
It uses different drivers to run Kubernetes (it can be deployed as bare metal, in Docker, in VirtualBox, in KVM, etc.). This allows for isolation from the host (Ubuntu Server). It also means that there are differences when it comes to the networking part of this setup.
With a minikube setup using the kvm2 driver, you will need to make some additional changes to be able to route traffic from 192.168.0.150 to your Deployment (set of Pods).
Let's assume that the Deployment manifest is the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Also, let's assume that the Service manifest is the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx # <-- this needs to match the Deployment matchLabels
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000
The Service of type NodePort from the above example will expose your Deployment on the minikube instance (its IP) on port 30000.
In this particular example, the Service (an abstract way to expose an application running on a set of Pods as a network service) will expose the Pod within the minikube instance and to your host, but not for external access (e.g. from another machine in the 192.168.0.0/24 network).
Options to allow external traffic are either:
Run on your host (Ubuntu Server):
$ kubectl port-forward --address 192.168.0.150 service/nginx-deployment 8000:80
kubectl will allow connections on your Ubuntu Server on port 8000 to be forwarded directly to the nginx-deployment service and inherently to your Pod.
Side notes!
You can also use kubectl port-forward on your PC/Laptop and by that you can direct traffic from the PC/Laptop port to your Pod.
--address 192.168.0.150 is set to target specifically enp0s6.
Use OS built-in port forwarding.
You can read more about it by following this answer:
Serverfault: Setup bridge for existing virtual bridge that minikube is running on
The above explanation should help you direct traffic to your Pod directly from enp0s6. Sending traffic from the Pod to your enp0s5 interface is pretty straightforward. You can run (from your Pod):
curl 10.0.0.150 (enp0s5)
curl 10.0.0.X (device in enp0s5 network)
Alternative
As an alternative, you can try to provision your own Kubernetes cluster without using minikube. This inherently eliminates the isolation layer and allows more direct access. There are a lot of options, for example:
Kubeadm
Kubespray
MicroK8S
I encourage you to check the additional resources as Kubernetes is a complex solution and there is a lot to discover:
Kubernetes.io: Docs: Home

How to access a container set up in a host with another machine?

I have deployed a mosquitto broker with kubernetes in my Linux machine. Now I want to connect this container with a MQTT client running on my smartphone. How could I do that? Which IP should I connect to?
I have connected to the mosquitto broker with a client inside my machine and it works perfectly.
EDIT: I'm using NodePort:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/mosquitto-entrypoint NodePort 10.152.183.235 <none> 8080:30001/TCP 24h
If your mobile app is on the same network, NodePort should ideally do the job: you should be able to reach your service at the node's IP on port 30001 (the NodePort in the output above; the cluster IP 10.152.183.235 is only reachable from inside the cluster). But I believe this might not be your scenario.
Otherwise, run your service with the LoadBalancer type, which will provision an externally facing IP for your cluster. An example is given below:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
Define a YAML file for your service and apply it via kubectl apply -f <yourfile>.
If you have a DNS server of your own, you may prefer using an Ingress controller and exposing your service to the outside network that way.
If the host where your service runs is accessible from your smartphone, you could map the service to a NodePort.
For example, if your machine IP is 192.168.x.y, you map your service to host port / NodePort 5000, and the machine allows incoming connections from your phone while it is on an allowed network, you can reach the service at 192.168.x.y:5000.
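A sketch of such a NodePort mapping for the mosquitto service from the question (the service name matches the kubectl get svc output above; the app: mosquitto label is an assumption, and note that nodePort must fall in the default 30000-32767 range, so 30001 is used here as in the question's output rather than 5000):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-entrypoint
spec:
  type: NodePort
  selector:
    app: mosquitto      # assumes the mosquitto pods carry this label
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30001   # matches the 8080:30001/TCP mapping shown above
```

The MQTT client on the phone would then connect to the host machine's LAN IP (e.g. 192.168.x.y) on port 30001.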

How to get browsable url from Docker-for-mac or Docker-for-Windows?

In minikube I can get a service's url via minikube service kubedemo-service --url. How do I get the URL for a type: LoadBalancer service in Docker for Mac or Docker for Windows in Kubernetes mode?
service.yml is:
apiVersion: v1
kind: Service
metadata:
name: kubedemo-service
spec:
type: LoadBalancer
selector:
app: kubedemo
ports:
- port: 80
targetPort: 80
When I switch to type: NodePort and run kubectl describe svc/kubedemo-service I see:
...
Type: NodePort
LoadBalancer Ingress: localhost
...
NodePort: <unset> 31838/TCP
...
and I can browse to http://localhost:31838/ to see the content. Switching to type: LoadBalancer, I see localhost ingress lines in kubectl describe svc/kubedemo-service but I get ERR_CONNECTION_REFUSED browsing to it.
(I'm familiar with http://localhost:8080/api/v1/namespaces/kube-system/services/kubedemo-service/proxy/ though this changes the root directory of the site, breaking css and js references that assume a root directory. I'm also familiar with kubectl port-forward pods/pod-name though this only connects to pods until k8s 1.10.)
How do I browse to a type: LoadBalancer service in Docker for Win or Docker for Mac?
LoadBalancer will work on Docker-for-Mac and Docker-for-Windows as long as you're running a recent build. Flip the type back to LoadBalancer and update. When you check the describe command output, look for the Port: <unset> 80/TCP line, and try hitting http://localhost:80.
How do I browse to a type: ClusterIP service or type: LoadBalancer service in Docker for Win or Docker for Mac?
This is a common confusion about the scope of Kubernetes networking levels and service-level exposure. Here is a quick overview of the types and their scope:
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access. To access it from outside the cluster, you would need to run kube proxy (as in the standard dashboard example).
A LoadBalancer service is the standard way to expose a service to the internet. Load balancer access and setup depend on the cloud provider.
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the nodes (the VMs), and any traffic sent to this port is forwarded to the service.
That said, the only way to access your service while it is of type ClusterIP is from within one of the cluster's containers or with the help of a proxy, and for LoadBalancer you need a cloud provider. You can also mimic a LoadBalancer with an ingress of your own (an upstream proxy such as nginx sitting in front of a ClusterIP-type service).
Useful link with more in-depth explanation and nice images: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
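As a sketch of the "ingress of your own" idea above, a standard Ingress resource in front of a ClusterIP service could look like the following (the host name is hypothetical, and this assumes an nginx ingress controller is installed in the cluster; kubedemo-service is the service from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubedemo-ingress
spec:
  rules:
    - host: kubedemo.local        # hypothetical host, e.g. mapped in /etc/hosts
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubedemo-service  # the ClusterIP service to front
                port:
                  number: 80
```

The ingress controller then terminates external traffic and proxies it to the ClusterIP service, which is roughly what a cloud LoadBalancer would otherwise do for you.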
Update on the LoadBalancer discussion:
As for using LoadBalancer, here is a useful reference from the documentation (https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/):
The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service.
On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
minikube service name-of-the-service
This automatically opens up a browser window using a local IP address that serves your app on service port.

Simplest approach to expose a HAProxy (port 80) Docker in IBM Cloud Kubernetes

I need to deploy a Docker container running HAProxy, which I already have working on on-premise Docker hosts, into the IBM Cloud (Bluemix) Kubernetes service. I am a bit lost on how to expose ports 80 and 443. In plain Docker that is very straightforward, but it seems complicated in Kubernetes, or at least in IBM Cloud.
I don't need load balancing, virtual hosts, or any extra configuration, as HAProxy will take care of that. I just need to replicate (move) my on-premise HAProxy, exposing ports 80 and 443, into Bluemix. (For multiple reasons I want to use HAProxy, so the request here is very specific: the simplest way to expose HAProxy ports 443 and 80 on a permanent IP address in the IBM Cloud Kubernetes service.)
Could I have a basic example YAML file for kubectl for that? Thanks.
NodePort
To keep the same image running in both environments, you can define a Deployment for the HAProxy containers and a Service to access them via a NodePort on the node IP or cluster IP. A NodePort is similar in concept to running docker run -p n:n.
The IP:NodePort would need to be accessible externally, and HAProxy will take over from there. Here's a sample HAProxy setup that uses an AWS ELB to get external users to a node. Most people don't recommend running services via NodePort because Kubernetes offers alternative methods that provide more integration.
LoadBalancer
A LoadBalancer is specifically for automatic configuration of a cloud provider's load balancer service. I don't believe IBM Cloud's load balancer has any support in Kubernetes, though maybe IBM has added something? If so, you could use this instead of a NodePort to get to your Service.
Ingress
If you are running Docker locally and Kubernetes externally, you've kind of thrown consistency out the window already, so you could set up Ingress with an Ingress controller based on HAProxy; there are a few available:
https://github.com/appscode/voyager
https://github.com/jcmoraisjr/haproxy-ingress
This gives you the standard Kubernetes abstraction for managing ingress to a service, but using HAProxy underneath. This will not be your HAProxy image, though; you can likely configure the same things for the HAProxy Ingress as you do in your HAProxy image.
Voyager's documentation is pretty good:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
    - host: appscode.example.com
      http:
        paths:
          - path: '/test'
            backend:
              serviceName: test-service
              servicePort: '80'
              backendRules:
                - 'acl add_url capture.req.uri -m beg /test-second'
                - 'http-response set-header X-Added-From-Proxy added-from-proxy if add_url'
If you are fine with running this HAProxy on each node that is supposed to expose ports 80/443, consider running a DaemonSet with hostNetwork: true. That will allow you to create pods that open 80 and 443 directly on the node network. If you have load balancer support in your cluster, you can instead use a Service of type LoadBalancer. It will forward from high node ports (e.g. 32080) to your backing HAProxy pods, and also automatically configure an LB in front of them to give you an external IP, forwarding 80/443 from that IP to the high node ports (again, assuming your Kubernetes deployment supports LB services).
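A minimal sketch of the DaemonSet-with-hostNetwork idea (the image name and labels are illustrative; this assumes your HAProxy image listens on 80 and 443):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: haproxy
spec:
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      hostNetwork: true        # pods bind 80/443 directly on each node
      containers:
        - name: haproxy
          image: my-haproxy:latest   # hypothetical: your on-premise HAProxy image
          ports:
            - containerPort: 80
            - containerPort: 443
```

With hostNetwork: true there is no Service or port translation in between: whatever node the DaemonSet pod runs on answers on ports 80/443 at its own IP, which is the closest Kubernetes equivalent to plain docker run -p 80:80 -p 443:443.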
IBM Cloud has built-in solutions for load balancer and Ingress. The docs include sample YAMLs for both.
Load Balancer: https://console.bluemix.net/docs/containers/cs_loadbalancer.html#loadbalancer
Ingress: https://console.bluemix.net/docs/containers/cs_ingress.html#ingress
If you need TLS termination or want to use a route rather than an IP address for accessing your HAProxy, then Ingress would be the best choice. If those options don't matter, I'd suggest starting with the provided load balancer to see if it meets your needs.
Note: both the load balancer and Ingress require a paid cluster. For lite clusters, only NodePort is available.
Here's a sample YAML that deploys IBM Liberty and exposes it via a load balancer service.
# If you are not logged into the US-South (https://api.ng.bluemix.net) region,
# change the image registry location to match your region.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ibmliberty-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ibmliberty
    spec:
      containers:
        - name: ibmliberty
          image: registry.ng.bluemix.net/ibmliberty
---
apiVersion: v1
kind: Service
metadata:
  name: ibmliberty-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: ibmliberty
  ports:
    - protocol: TCP
      port: 9080
