How to access host's localhost from inside kubernetes cluster - docker

In this application, Node.js pods run inside Kubernetes, while MongoDB itself sits outside the cluster, on the host, listening on localhost.
This is admittedly not a good design, but it is only for the dev environment. In production there will be a separate MongoDB server, so the endpoint can use a non-loopback IP and this will not be a problem.
I have considered the following options for the dev environment:
Use a localhost connection string to connect to MongoDB, but that refers to the pod's own localhost, not the host's localhost.
Use a headless service and provide the localhost IP and port in an Endpoints object. However, Endpoints do not allow loopback addresses.
Please suggest a way to access the MongoDB database on the host's localhost from inside the cluster (pod / Node.js application).

I'm running on Docker for Windows, and for me just using host.docker.internal instead of localhost seems to work fine.
For example, my mongodb connection string looks like this:
mongodb://host.docker.internal:27017/mydb
As an aside, my hosts file includes the following lines (which I didn't add; I assume the Docker Desktop installation did):
# Added by Docker Desktop
192.168.1.164 host.docker.internal
192.168.1.164 gateway.docker.internal
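If it helps, here is a minimal sketch of how that connection string could be handed to the Node.js pod as an environment variable (the deployment name, image name and MONGO_URL variable are just illustrative assumptions, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app            # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: my-nodejs-image   # hypothetical image
          env:
            - name: MONGO_URL      # hypothetical variable read by the app
              value: mongodb://host.docker.internal:27017/mydb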

127.0.0.1 is the IP address of the localhost (lo0) loopback interface. The host, the nodes and the pods each have their own localhost interfaces, and they are not connected to each other.
Your MongoDB is running on the host machine and cannot be reached via localhost (or its IP range) from inside a cluster pod or from inside the VM.
In your case, create a headless Service and an Endpoints object for it inside the cluster:
Your mongodb-service.yaml file should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  # no selector, so the Endpoints object below is not overwritten by the endpoints controller
  ports:
    - protocol: TCP
      port: <multipass-port-you-are-using>
      targetPort: <multipass-port-you-are-using>
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb-service
subsets:
  - addresses:
      - ip: 10.62.176.1
    ports:
      - port: <multipass-port-you-are-using>
I have used the IP you mentioned in the comment section.
After creating the Service and Endpoints you can use the mongodb-service name and port <multipass-port-you-are-using> from inside any pod of this cluster as the destination.
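Once both objects exist, the service name resolves inside the cluster. As a rough sketch (assuming the MongoDB default port 27017 is the one behind the placeholder, and mydb is just an example database name):
# quick DNS sanity check from a throwaway pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mongodb-service

# example connection string for the Node.js app
mongodb://mongodb-service:27017/mydb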
Take a look: mysql-localhost, mongodb-localhost.

If you are using minikube to deploy a local Kubernetes cluster, you can reach your local environment using the hostname host.minikube.internal.
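As a sketch, the MongoDB connection string from the question would then look like this (assuming the default port 27017 and an example database name):
mongodb://host.minikube.internal:27017/mydb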

I can add one more solution using an Ingress and an ExternalName service, which may help some of you.
I deploy my complete system locally with a special Kustomize overlay.
When I want to replace one of the deployments with a service running locally in my IDE, I do the following:
I add an ExternalName service which forwards to host.docker.internal:
kind: Service
apiVersion: v1
metadata:
  name: backend-ide
spec:
  type: ExternalName
  externalName: host.docker.internal
and reconfigure my Ingress to forward certain requests from my web app to this ExternalName service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: url.used.by.webapp.com
      http:
        paths:
          - path: /customerportal/api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: backend-ide
                port:
                  number: 8080
In the same way, I can access all other ports on my host.
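As a rough check (the hostname and path prefix are the ones from the manifests above; /health and the ingress address are illustrative placeholders), a request through the ingress should end up on the service running in the IDE:
# nginx ingress forwards this to backend-ide, i.e. host.docker.internal:8080
curl -H "Host: url.used.by.webapp.com" http://<ingress-address>/customerportal/api/health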

Related

How can I correctly forward traffic from a container to a NodePort service with Kubernetes?

I am running Minikube on an M1 Mac with the Docker daemon. I have a container in a pod serving HTTP on port 7777; according to the documentation, I can use a combination of a NodePort and the minikube service command to expose it to the local machine. My configuration YAML file is pretty simple as well:
apiVersion: v1
kind: Pod
metadata:
  name: door-controls
  labels:
    type: door-controls
spec:
  containers:
    - image: door_controls
      name: door-controls
      imagePullPolicy: Never
      ports:
        - containerPort: 7777
          name: httpz
---
apiVersion: v1
kind: Service
metadata:
  name: door-control-service
spec:
  type: NodePort
  selector:
    type: door-controls
  ports:
    - name: svc-http
      protocol: TCP
      port: 80
      targetPort: httpz
Running this in minikube and then attempting to use minikube service exposes the running process on a random port. From a machine inside the network, I can wget the pod IP on port 7777 and get data back, so I know the pod is serving traffic correctly. I can also wget the door-control-service NodePort service from inside the network on port 80 and get traffic back, so I know the door-control-service configuration is working. But no amount of futzing will let me access door-control-service via the NodePort (which is randomly generated in the ~30k port range), and the browser launched by minikube service never returns data, so I can't access it from outside that way either.
What am I doing wrong? Or more generally, how can I debug this issue? I am new to Kubernetes and not sure where in the logs I should be looking for errors in the first place.
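For reference, these are roughly the checks described above written out as commands (the pod and service names come from the manifests; the IP placeholders are illustrative):
# pod-level check: note the pod IP, then fetch directly on 7777
kubectl get pod door-controls -o wide
wget -qO- http://<pod-ip>:7777

# service-level check: note the cluster IP and the assigned NodePort
kubectl get svc door-control-service
wget -qO- http://<cluster-ip>:80

# on the docker driver, minikube service opens a tunnel and prints a local URL
minikube service door-control-service --url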

How to expose two apps/services over unique ports with k3d?

*Cross-posted from k3d GitHub Discussion: https://github.com/rancher/k3d/discussions/690
I am attempting to expose two services over two ports. As an alternative, I'd also love to know how to expose them over the same port and use different routes. I've followed a few articles and tried a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s / Kubernetes + Traefik (+ Klipper?)...
I posted an example:
https://github.com/ericis/k3d-networking
The goal:
Reach "app-1" on host over port 8080
Reach "app-2" on host over port 8091
Steps
*See: files in repo
Configure k3d cluster and expose app ports to load balancer
ports:
  # map localhost to loadbalancer
  - port: 8080:80
    nodeFilters:
      - loadbalancer
  # map localhost to loadbalancer
  - port: 8091:80
    nodeFilters:
      - loadbalancer
Deploy apps with "deployment.yaml" in Kubernetes and expose container ports
ports:
  - containerPort: 80
Expose services within kubernetes. Here, I've tried two methods.
Using CLI
$ kubectl create service clusterip app-1 --tcp=8080:80
$ kubectl create service clusterip app-2 --tcp=8091:80
Using "service.yaml"
spec:
  ports:
    - protocol: TCP
      # expose internally
      port: 8080
      # map to app
      targetPort: 80
  selector:
    run: app-1
Expose the services outside of kubernetes using "ingress.yaml"
backend:
  service:
    name: app-1
    port:
      # expose from kubernetes
      number: 8080
You either have to use an ingress, or you have to open ports on each individual node (k3d runs on Docker, so you have to expose the Docker ports).
Without opening a port during the creation of the k3d cluster, a NodePort service will not expose your app.
k3d cluster create mycluster -p 8080:30080#agent[0]
For example, this would open an "outside" port (on your localhost) 8080 and map it to 30080 on the node - then you can use a NodePort service to actually connect the traffic from that port to your app:
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: some-port
      nodePort: 30080
  selector:
    app: pgadmin
  type: NodePort
You can also open ports on the server node like:
k3d cluster create mycluster -p 8080:30080#server[0]
Your apps can get scheduled to run on any node. If you force a pod to appear on a specific node (let's say you open a certain port on agent[0] and set up your .yaml files to work with that port), for some reason the local-path Rancher storage class just breaks and will not create a persistent volume for your claim. You kind of have to get lucky and have your pod scheduled where you need it. (If you find a way to schedule pods on specific nodes without the storage provisioner breaking, let me know.)
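For what it's worth, pinning a pod to a specific node is normally done with a nodeSelector on the pod spec. A minimal sketch (the node name follows k3d's naming convention and is illustrative, and this does not address the local-path provisioner issue mentioned above):
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-mycluster-agent-0   # illustrative node name
  containers:
    - name: pgadmin
      image: dpage/pgadmin4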
You also can map a whole range of ports, like:
k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100#server[0]"
but be careful with the number of ports you open; if you open too many, k3d will crash.
Using a load balancer is similar: you just have to open one port and map it to the load balancer.
k3d cluster create my-cluster --port 8080:80#loadbalancer
You then have to use an Ingress (or the traffic won't reach your app):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
I also think that Ingress will only route HTTP and HTTPS traffic. HTTPS should be served on port 443; supposedly you can map both port 80 and port 443, but I haven't been able to get that to work (I think certificates need to be set up as well).
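For completeness, mapping both ports at cluster creation would look roughly like this (an untested sketch using the same node-filter syntax as the commands above; TLS termination still needs certificates configured in the ingress):
k3d cluster create my-cluster -p 80:80#loadbalancer -p 443:443#loadbalancer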

Access Kubernetes ingress remotely

I've set up a Kubernetes ingress with minikube on a CentOS 7.6 virtual machine.
It finally works well on that machine, as described below:
Name: my-ingress
Namespace: default
Address: 172.17.0.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host       Path             Backends
  ----       ----             --------
  localhost
             /route1/?(.*)    service1 (172.18.0.4:80)
             /route2/?(.*)    service2 (172.18.0.4:80)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /$1
And I made my /etc/hosts as follows:
172.17.0.2 localhost
172.17.0.2 0.0.0.0
This works fine on the virtual machine: I can successfully access my API through curl localhost/route1/api/values.
But I would like to access this from another machine for development. My thought was to get the same successful result through curl 192.168.2.21/route1/api/values on another machine, where 192.168.2.21 is the IP address of the virtual machine running Kubernetes. But it failed with the message "empty reply from server".
Is there another method to make this happen, i.e. accessing the result of the ingress from another machine?
What I tried was to install local-dev-with-docker-for-mac-kubernetes, but it didn't help.
I also saw some suggestions to work around this with Services, but since I would have to work with a lot of services, I am afraid that would be hard to manage if I have to avoid duplicate ports. So I am mainly looking for a solution based on Ingress.
Your config specifies the host as localhost, so only incoming traffic with the Host header localhost gets handled. You can verify this with curl 172.17.0.2/route1/api/values from the same machine; you should get the same empty-reply message.
To fix this, you can omit the host setting so the ingress controller will handle all incoming HTTP traffic. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
UPDATE
minimal ingress example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
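Applied to the question, a sketch of the same rules with the host field omitted (reusing the paths, services and annotations from the question's ingress) would look something like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /route1/?(.*)
            backend:
              serviceName: service1
              servicePort: 80
          - path: /route2/?(.*)
            backend:
              serviceName: service2
              servicePort: 80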

How can we access ubuntu container image from outside the host?

We access the container through its cluster IP, and even the web application containers we deploy can be accessed. The issue is how we can access a container from outside the host.
We tried giving an external IP to the containers.
You can create a Service and bind it to a node port; from outside your cluster you can then access that service using node_ip:port.
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  ports:
    - port: 80
      name: http
      targetPort: api-http
      nodePort: 30004
    - port: 443
      name: https
      targetPort: api-http
  type: LoadBalancer
  selector:
    run: api-server
If you run kubectl get service you can get the external IP.
The best approach would be to expose your pods with ClusterIP type services, and then use an Ingress resource along with an Ingress controller to expose HTTP and/or HTTPS routes so you can access your app from outside the cluster.
For testing purposes it's OK to use NodePort or LoadBalancer type services. You can use NodePort whether you are running on your own infrastructure or using a managed solution, while using LoadBalancer requires a cloud provider's load balancer.
Source: Official docs
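A minimal sketch of that ClusterIP-plus-Ingress approach, reusing the api-server selector and ports from above (the hostname is illustrative, and an ingress controller is assumed to be installed):
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  type: ClusterIP
  selector:
    run: api-server
  ports:
    - port: 80
      name: http
      targetPort: api-http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-server-ingress
spec:
  rules:
    - host: api.example.com   # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-server
                port:
                  number: 80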

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd
  externalIPs:
    - 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests arrive at the httpd pods with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
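A sketch of what the annotation mentioned above might look like on the Service from the question (whether it has any effect for an externalIPs-based Service is not something I've verified; on newer Kubernetes the usual mechanism for NodePort/LoadBalancer Services is the spec field externalTrafficPolicy: Local):
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    # beta annotation named above; the GA equivalent is spec.externalTrafficPolicy: Local
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd
  externalIPs:
    - 10.128.0.2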
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You still can access services by their cluster IP, which are injected by environment variables.
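For example (a sketch; it assumes a Service named static exists in the same namespace and was created before the pod started), the injected variables follow the {SERVICE_NAME}_SERVICE_HOST / {SERVICE_NAME}_SERVICE_PORT convention:
# inside the caddy container: reach the static service via its cluster IP
echo "$STATIC_SERVICE_HOST $STATIC_SERVICE_PORT"
curl "http://$STATIC_SERVICE_HOST:$STATIC_SERVICE_PORT/"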
