I have a non-dockerised application that needs to connect to a dockerised application running inside a Kubernetes pod.
Given that pods may die and come back with a different IP address, how can my application detect this? Is there any way to assign a hostname that redirects to whatever pods currently exist?
You will have to use a Kubernetes Service. A Service gives you a way to talk to your pods with a static IP and DNS name (if your client app is inside the cluster).
https://kubernetes.io/docs/concepts/services-networking/service/
You can do it in several ways:
Easiest: use a Kubernetes Service with type: NodePort. Then you can access the pod using http://[nodehost]:[nodeport]
Use a Kubernetes Ingress. See this link for more details (https://kubernetes.io/docs/concepts/services-networking/ingress/)
If you are running in a cloud like AWS, Azure, or GCE, you can use a Kubernetes Service of type LoadBalancer.
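For reference, a minimal sketch of such a NodePort Service (the name and the app: my-app selector label are placeholders - they must match your own pod labels and container port):

```yaml
# Hypothetical NodePort Service: exposes all pods labelled app: my-app
# on a port from the node-port range (30000-32767) on every node.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app           # must match your pod labels
  ports:
    - port: 8080          # stable in-cluster port behind a stable DNS name
      targetPort: 8080    # port the container actually listens on
```

Clients inside the cluster can then use the stable DNS name (my-app.&lt;namespace&gt;.svc.cluster.local) regardless of pod restarts; clients outside can use any node's IP plus the assigned node port.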
In addition to Bal Chua's work and the suggestions from silverfox, I would like to show you the method I used to expose an application in Kubernetes and manage incoming traffic from outside:
Step 1: Deploy an application
In this example, the Kubernetes sample hello application will run on port 8080/tcp:
kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
Step 2: Expose your Deployment as a Service internally
This command tells Kubernetes to expose port 8080/tcp to the outside world:
kubectl expose deployment web --target-port=8080 --type=NodePort
Afterwards, check that it was exposed by running:
kubectl get service web
Step 3: Manage Ingress resource
Ingress sends traffic to the proper Service running inside Kubernetes.
Open a text editor and create a file basic-ingress.yaml with the following content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
Apply the configuration:
kubectl apply -f basic-ingress.yaml
and that's all. It is time to test. Get the external IP address of your Kubernetes installation:
kubectl get ingress basic-ingress
and open that address in a web browser to see the hello application working.
The context
Let me know if I've gone down a rabbit hole here.
I have a simple web app with a frontend and a backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is served via nginx, and the backend component runs a NodeJS microservice.
I had been thinking of having both run in the same pod, but ran into some problems getting both nginx and Node to run in the background. I could try a startup script that runs both, but the Internet says it's a best practice to have each container be responsible for running only one service - so one container to run nginx and another to run the microservice.
The problem
That's fine, but then say the nginx server's HTML pages need to know where to send a POST request in the backend - how can the HTML pages know what IP to hit for the backend's Docker container? Articles like this one come up talking about manually creating a Docker network for the two containers to speak to one another. But how can I configure this with Helm so that the frontend container knows how to reach the backend container each time a new container is deployed, without having to manually configure any network service each time? I want the deployments to be automated.
You mention that your frontend is based on Nginx.
Accordingly, the frontend must hit the public URL of the backend.
Thus, the backend must be exposed by choosing a Service type, either:
NodePort -> the frontend will communicate with the backend at http://<any-node-ip>:<node-port>
or LoadBalancer -> the frontend will communicate with the backend at http://<loadbalancer-external-ip>:<service-port> of the service.
or keep it ClusterIP, but add an Ingress resource on top of it -> the frontend will communicate with the backend at its ingress host, http://ingress.host.com.
We recommend the last option, but it requires an ingress controller.
Once you have tested one of them and it works, you can extend your Helm chart to update the Service and add the Ingress resource if needed.
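As a sketch of the third option (hypothetical names: the backend Service is assumed to be called backend-service listening on port 3000, and the host is the example one from above - adjust both), the Ingress resource could look like:

```yaml
# Hypothetical Ingress: routes HTTP traffic for ingress.host.com to a
# ClusterIP Service named backend-service (assumed name and port).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
    - host: ingress.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service   # assumed Service name
                port:
                  number: 3000          # assumed Service port
```

An ingress controller (for example ingress-nginx) must be running in the cluster for this resource to have any effect.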
You may try to setup two containers in one pod and then communicate between containers via localhost (but on different ports!). Good example is here - Kubernetes multi-container pods and container communication.
Another option is to create two separate Deployments and a Service for each. Instead of using IP addresses (which won't be the same across re-deployments of your app), use a DNS name for connecting to them.
Example - two NGINX services communication.
First create two NGINX deployments:
kubectl create deployment nginx-one --image=nginx --replicas=3
kubectl create deployment nginx-two --image=nginx --replicas=3
Let's expose them using the kubectl expose command. It's the same as if I had created the services from YAML files:
kubectl expose deployment nginx-one --name=my-service-one --port=80
kubectl expose deployment nginx-two --name=my-service-two --port=80
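For reference, the first expose command above should be roughly equivalent to a manifest like this (a sketch; kubectl create deployment labels the pods app: nginx-one, which the selector relies on):

```yaml
# Rough YAML equivalent of:
#   kubectl expose deployment nginx-one --name=my-service-one --port=80
apiVersion: v1
kind: Service
metadata:
  name: my-service-one
spec:
  type: ClusterIP          # the default when no --type is given
  selector:
    app: nginx-one         # label set by "kubectl create deployment nginx-one"
  ports:
    - port: 80
      targetPort: 80
```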
Now let's check the services - as you can see, both of them are of type ClusterIP:
user#shell:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.36.0.1 <none> 443/TCP 66d
my-service-one ClusterIP 10.36.6.59 <none> 80/TCP 60s
my-service-two ClusterIP 10.36.15.120 <none> 80/TCP 59s
I will exec into a pod from the nginx-one deployment and curl the second service:
user#shell:~$ kubectl exec -it nginx-one-5869965455-44cwm -- sh
# curl my-service-two
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
If you have problems, make sure you have a proper CNI plugin installed for your cluster - also check this article - Cluster Networking for more details.
Also check these:
My similar answer, but with a wider explanation and an example of communication between two namespaces.
Access Services Running on Clusters | Kubernetes
Service | Kubernetes
Debug Services | Kubernetes
DNS for Services and Pods | Kubernetes
I want to send traffic from one physical port through a Kubernetes cluster (using minikube) to another physical port. I don't know how to route traffic from the physical port to the cluster, or from the cluster to the second physical port. I'm exposing the cluster via Ingress (I tested a Service-based solution as well); I have one Service to send external traffic to a pod, and another to send traffic from the first pod to the second pod. But I really don't know how to send this traffic from the port to the cluster, and from the cluster to the receiving port...
My cluster is described in there: How to route test traffic through kubernetes cluster (minikube)?
Assuming that:
Traffic needs to enter through a physical enp0s6 port on the Ubuntu Server and be sent to a Pod.
The Pod is configured with some software capable of routing traffic.
The Pod from the image routes the received traffic to a physical enp0s5 port on the same Ubuntu Server machine (or further down the line).
This answer does not acknowledge:
The software used to route the traffic from the Pod to the physical port enp0s5.
A side note!
Please consider opening each link included in the answer, as they contain a lot of useful information.
Minikube is a tool that spawns a single-node Kubernetes cluster for development purposes on your machine (PC, laptop, server, etc.).
It uses different drivers to run Kubernetes (it can be deployed as bare-metal, in Docker, in VirtualBox, in KVM, etc.). This allows for isolation from the host (Ubuntu Server). It also means that there are differences when it comes to the networking part of this setup.
With a minikube setup using the kvm2 driver, you will need to make some additional changes to be able to route traffic from 192.168.0.150 to your Deployment (set of Pods).
Let's assume that the Deployment manifest is the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Also let's assume that the Service manifest is following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx # <-- this needs to match the Deployment matchLabels
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30000
The Service of type NodePort from the above example will expose your Deployment on the minikube instance (its IP) on port 30000.
In this particular example, the Service (an abstract way to expose an application running on a set of Pods as a network service) will expose the Pod to the minikube instance and to your host, but not for external access (like another machine in the 192.168.0.0/24 network).
Options to allow external traffic are either:
Run on your host (Ubuntu Server):
$ kubectl port-forward --address 192.168.0.150 service/nginx-deployment 8000:80
kubectl will allow connections to your Ubuntu Server on port 8000 to be forwarded directly to the nginx-deployment Service, and hence to your Pod.
Side notes!
You can also use kubectl port-forward on your PC/laptop, and thereby direct traffic from a PC/laptop port to your Pod.
--address 192.168.0.150 is set to target enp0s6 specifically.
Use OS built-in port forwarding.
You can read more about it by following this answer:
Serverfault: Setup bridge for existing virtual bridge that minikube is running on
The above explanation should help you direct traffic to your Pod straight from enp0s6. Sending traffic from the Pod to your enp0s5 interface is pretty straightforward. You can run (from your Pod):
curl 10.0.0.150 (enp0s5)
curl 10.0.0.X (device in enp0s5 network)
Alternative
As an alternative, you can try to provision your own Kubernetes cluster without using minikube. This will eliminate the isolation layer and allow you more direct access. There are a lot of options, for example:
Kubeadm
Kubespray
MicroK8S
I encourage you to check the additional resources, as Kubernetes is a complex solution and there is a lot to discover:
Kubernetes.io: Docs: Home
I just tried setting up Kubernetes on my bare server.
Previously I had successfully created my docker-compose setup.
There are several apps :
Apps A (docker image name : a-service)
Apps B (docker image name : b-service)
Inside applications A and B there are configs (actually there are apps A, B, C, D, etc. - lots of them).
The config file is something like this
IPFORSERVICEA=http://a-service:port-number/path/to/something
IPFORSERVICEB=http://b-service:port-number/path/to/something
At least the above config works in docker-compose (the config is at the app level, since the apps need to access one another). Is there any chance for me to access one Kubernetes Service from another service? I am planning to create one app per Deployment, and one Service for each Deployment.
Something like:
App -> Deployment -> Service(i.e: NodePort,ClusterIP)
Thanks !
Is there any chance for me to access another Kubernetes Service from
another service ?
Yes, you just need to specify the DNS name of the service you want to connect to (type: ClusterIP works fine for this), in the form:
<service_name>.<namespace>.svc.cluster.local
Such a domain name will be correctly resolved into the internal IP address of the service you need to connect to, using the built-in DNS.
For example:
nginx-service.web.svc.cluster.local
where nginx-service is the name of your service and web is the app's namespace, so the service YAML definition can look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: web
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
  selector:
    app: nginx
  type: ClusterIP
See official docs to get more information.
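Applied to the config file from the earlier docker-compose setup, the hostnames simply become cluster DNS names (a sketch - the service names a-service/b-service and the default namespace are assumptions, and the port placeholder is kept as in the question):

```
IPFORSERVICEA=http://a-service.default.svc.cluster.local:port-number/path/to/something
IPFORSERVICEB=http://b-service.default.svc.cluster.local:port-number/path/to/something
```

Within the same namespace the short form http://a-service:port-number/... also resolves, so a docker-compose style config can often be reused unchanged.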
Use Kubernetes service discovery.
Service discovery is the process of figuring out how to connect to a
service. While there is a service discovery option based on
environment variables available, the DNS-based service discovery is
preferable. Note that DNS is a cluster add-on so make sure your
Kubernetes distribution provides for one or install it yourself.
Service discovery by example
I have successfully connected my Kubernetes cluster with GitLab. I was also able to install Helm through the GitLab UI (Operations -> Kubernetes).
My problem is that if I click on the "Install" button for Ingress, GitLab will create all the necessary stuff that is needed for the Ingress controller. But one thing is missing: the external IP. The external IP is marked as "?".
And If I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
It shows nothing - as if I don't have a LoadBalancer that exposes an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using Flannel as the CNI.
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something that I have to configure before installing Ingress? Do I need an external LoadBalancer (my thought: GitLab will create that service for me)?
One more hint: after installation, the state of the Nginx Ingress controller Service stays pending. The reason is that it is not able to detect an external IP. I also modified the YAML file of the service and manually put in the "externalIPs: -External-IP" line. After this it was no longer pending, but I still couldn't find an external IP by typing the above command, and GitLab also couldn't find any external IP.
EDIT:
This happens after installation:
see picture
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
see picture
In the event log you will see that I switched the type to "NodePort" once and then back to "LoadBalancer", and that I added the "externalIPs: -192.168.50.235" line to the YAML file. As you can see there is an external IP, but GitLab is not detecting it.
Btw, I'm not using any of those cloud providers like AWS or GCE, and I found out that LoadBalancer does not work that way outside of them. But there must be a solution for this without a LoadBalancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use any cloud provider to obtain the entry point (external IP) for the Ingress resource, bare-metal environments have the option of switching to MetalLB, which creates Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider; it can therefore also be used with the NGINX Ingress Controller.
Generally, MetalLB can be installed via Kubernetes manifest file or using Helm package manager as described here.
MetalLB deploys its own services across the Kubernetes cluster, and it may require a reserved pool of IP addresses in order to be able to take ownership of the ingress-nginx service. This pool can be defined in a ConfigMap called config, located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer once the ingress service obtains an IP address from this address pool.
Find more details about MetalLB implementation for NGINX Ingress Controller in official documentation.
After some research I found out that this is a GitLab issue. As I said above, I successfully built a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use the type "LoadBalancer". Therefore you need to add an external IP or change the type to "NodePort". This way you can make your Ingress controller accessible from outside.
Check this out: kubernetes service external ip pending
I just continued with the GitLab tutorial and it worked.
In minikube I can get a service's url via minikube service kubedemo-service --url. How do I get the URL for a type: LoadBalancer service in Docker for Mac or Docker for Windows in Kubernetes mode?
service.yml is:
apiVersion: v1
kind: Service
metadata:
  name: kubedemo-service
spec:
  type: LoadBalancer
  selector:
    app: kubedemo
  ports:
  - port: 80
    targetPort: 80
When I switch to type: NodePort and run kubectl describe svc/kubedemo-service I see:
...
Type: NodePort
LoadBalancer Ingress: localhost
...
NodePort: <unset> 31838/TCP
...
and I can browse to http://localhost:31838/ to see the content. Switching to type: LoadBalancer, I see localhost ingress lines in kubectl describe svc/kubedemo-service but I get ERR_CONNECTION_REFUSED browsing to it.
(I'm familiar with http://localhost:8080/api/v1/namespaces/kube-system/services/kubedemo-service/proxy/ though this changes the root directory of the site, breaking css and js references that assume a root directory. I'm also familiar with kubectl port-forward pods/pod-name though this only connects to pods until k8s 1.10.)
How do I browse to a type: LoadBalancer service in Docker for Win or Docker for Mac?
LoadBalancer will work on Docker for Mac and Docker for Windows as long as you're running a recent build. Flip the type back to LoadBalancer and update. When you check the describe command output, look for the Port: <unset> 80/TCP line, and try hitting http://localhost:80.
How do I browse to a type: ClusterIP service or type: LoadBalancer service in Docker for Win or Docker for Mac?
This is a common point of confusion around the scope of Kubernetes network levels and exposure at the service level. Here is a quick overview of the types and their scope:
A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access. To access it from outside the cluster, you would need to run kube proxy (as in the standard dashboard example).
A LoadBalancer service is the standard way to expose a service to the internet. Load balancer access and setup is dependent on cloud provider.
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
This said, the only way to access your service while it is of type ClusterIP is from within one of the containers in the cluster or with the help of a proxy, and for LoadBalancer you need a cloud provider. You can also mimic a LoadBalancer with an ingress of your own (an upstream proxy such as nginx, sitting in front of a ClusterIP type of service).
A useful link with a more in-depth explanation and nice images: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Updated for the LoadBalancer discussion:
As for using LoadBalancer, here is a useful reference from the documentation (https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/):
The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service.
On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
minikube service name-of-the-service
This automatically opens up a browser window using a local IP address that serves your app on service port.