I'm learning Kubernetes and trying to set up minikube on an Ubuntu server to manage and deploy applications.
I'm using the Nginx Proxy Manager application to manage the proxies on the server.
I've followed this tutorial to set up ingress with the NGINX Ingress Controller, and everything works fine: when I run curl *minikube_ip*:service_port I get
Hello, world!
Version: 1.0.0
Hostname: web-746c8679d4-zhtll
Now, the problem is that I'm trying to expose this to the outside world by adding a proxy host in Nginx Proxy Manager that proxies domain_name.com to *minikube_ip*:service_port, but it just keeps giving me 504 Gateway Time-out.
Here's the ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: web2
                port:
                  number: 8080
When I run kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          2d12h
web          NodePort    10.104.186.135   <none>        8080:31806/TCP   2d12h
In my hosts file
*minikube_ip* hello-world.info
I suspect it might be related to the minikube Docker container not being on the same network as the Nginx Proxy Manager container, but I really don't know how to solve this. Help, please.
A NodePort type service can be accessed via the IP address of any node in your cluster. Kubernetes routes the request to the proper pod based on the IP and port provided, bypassing the Ingress entirely.
If you want to use the host name defined in your Ingress resource instead of the IP address and port, send the traffic through the ingress controller and change your service to type=ClusterIP.
Try running the following command to change your service to type ClusterIP:
kubectl patch svc web -p '{"spec": {"type": "ClusterIP"}}'
With the service as ClusterIP it is no longer reachable from outside the cluster directly, so make sure the ingress addon is enabled (minikube addons enable ingress).
Keep your hosts file entry pointing hello-world.info at the minikube IP, so requests reach the ingress controller on port 80 and match the host rule.
Finally, try visiting http://hello-world.info in the browser.
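For reference, after the patch the Service manifest would look roughly like this. This is only a sketch: the selector label is an assumption, since the question doesn't show the web Deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP      # changed from NodePort by the kubectl patch above
  selector:
    app: web           # assumption: whatever label the web Deployment actually uses
  ports:
    - port: 8080
      targetPort: 8080
```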
I am trying to assign a static external IP to the GKE load balancer.
apiVersion: v1
kind: Service
metadata:
  name: onesg
  labels:
    app: onesg
spec:
  selector:
    app: onesg
  ports:
    - port: 80
      targetPort: 5000
  type: LoadBalancer
  loadBalancerIP: "my regional IP"
But after deployment, I cannot access my app from the regional IP. Any idea?
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.104.0.1      <none>           443/TCP        23h
onesg        LoadBalancer   10.104.15.191   my regional IP   80:31293/TCP   7m18s
If I use ephemeral IP assigned by GKE LB, I can access my app.
You have to check whether your Service is actually pointing to the proper pods.
Run the following:
kubectl describe service onesg
In the output there should be a field called Endpoints.
Then run:
kubectl get pods -o wide
and make sure the list of IPs in the Endpoints field from the first command matches the IPs of the pods from the second one.
Next, to make sure your app itself works, you can use the kubectl port-forward command:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
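That endpoints-versus-pods comparison can be sketched programmatically. This is only an illustration: the dictionaries below are made-up sample data standing in for the JSON you'd get from kubectl get endpoints onesg -o json and kubectl get pods -o json; the IPs and pod names are hypothetical.

```python
# Sample data standing in for `kubectl get endpoints onesg -o json`
# and `kubectl get pods -o json` (IPs and names made up for illustration).
endpoints = {"subsets": [{"addresses": [{"ip": "10.8.0.12"}, {"ip": "10.8.0.13"}]}]}
pods = [
    {"name": "onesg-5b4f-abcde", "podIP": "10.8.0.12"},
    {"name": "onesg-5b4f-fghij", "podIP": "10.8.0.13"},
]

# Collect the IPs the Service routes to, and the IPs of the running pods.
endpoint_ips = {addr["ip"]
                for subset in endpoints.get("subsets", [])
                for addr in subset.get("addresses", [])}
pod_ips = {pod["podIP"] for pod in pods}

# If these sets differ, the Service selector does not match the pod labels.
print(sorted(endpoint_ips - pod_ips))  # endpoints not backed by any pod
print(sorted(pod_ips - endpoint_ips))  # pods the Service is not routing to
```

If either set difference is non-empty, the Service selector and the pod labels disagree, which is the most common cause of a LoadBalancer that never answers.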
I'm trying a simple microservices app on a cloud Kubernetes cluster. This is the Ingress yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: auth-svc
      port:
        number: 5000
  rules:
    - host: "somehostname.xyz"
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: auth-svc
                port:
                  number: 5000
The problem:
When I use this URL, I'm able to access the auth service: http://somehostname.xyz:31840. However, if I use http://somehostname.xyz, I get a "This site can’t be reached somehostname.xyz refused to connect." error.
The auth service sends GET requests to other services too, and I'm able to see the response from those services if I use:
http://somehostname.xyz:31840/go or http://somehostname.xyz:31840/express. But again, these work only if the nodeport 31840 is used.
My questions:
What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
Is there a method to test this in a different way to figure out where the problem is?
Is it a problem with the Ingress or Auth namespace? Is it a problem with the hostname in Flask? Is it a problem with the Ingress controller? How do I debug this?
These are the results of kubectl get all and other commands.
NAME READY STATUS RESTARTS
pod/auth-flask-58ccd5c94c-g257t 1/1 Running 0
pod/ingress-nginx-nginx-ingress-6677d54459-gtr42 1/1 Running 0
NAME TYPE EXTERNAL-IP PORT(S)
service/auth-svc ClusterIP <none> 5000/TCP
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
NAME READY UP-TO-DATE AVAILABLE
deployment.apps/auth-flask 1/1 1 1
deployment.apps/ingress-nginx-nginx-ingress 1/1 1 1
NAME DESIRED CURRENT READY
replicaset.apps/auth-flask-58ccd5c94c 1 1 1
replicaset.apps/ingress-nginx-nginx-ingress-6677d54459 1 1 1
NAME CLASS HOSTS ADDRESS PORTS
ingress-nginx-nginx-ingress <none> somehostname.xyz 172.xxx.xx.130 80
Describing the ingress also looks normal:
kubectl describe ingress ingress-nginx-nginx-ingress
Name: ingress-nginx-nginx-ingress
Namespace: default
Address: 172.xxx.xx.130
Default backend: auth-svc:5000 (10.x.xx.xxx:5000)
Rules:
Host Path Backends
---- ---- --------
somehostname.xyz
/ auth-svc:5000 (10.x.xx.xxx:5000)
Annotations: kubernetes.io/ingress.class: nginx
This is the code of Auth.
import requests
from flask import Flask

app = Flask(__name__)

@app.route('/')
def indexPage():
    return '<!DOCTYPE html><html><head><meta charset="UTF-8" />' \
           '<title>Microservice</title></head>' \
           '<body><div style="text-align: center;">Welcome to the Auth page</div></body></html>'

@app.route('/go')
def getGoJson():
    return requests.get('http://analytics-svc:8082/info').content

@app.route('/express')
def getNodeResponse():
    return requests.get('http://node-svc:8085/express').content

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0")
and Auth's Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
The part of docker-compose yaml for auth:
version: "3.3"
services:
  auth:
    build: ./auth/
    image: nav9/auth-flask:v1
    ports:
      - "5000:5000"
Auth's Kubernetes manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-flask
spec:
  selector:
    matchLabels:
      any-name: auth-flask
  template:
    metadata:
      labels:
        any-name: auth-flask
    spec:
      containers:
        - name: auth-name
          image: nav9/auth-flask:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  # type: ClusterIP
  ports:
    - targetPort: 5000
      port: 5000
  selector:
    any-name: auth-flask
What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
If the URL works when the nodeport is supplied but not without it, the ingress is not configured properly for what you want to do.
Is there a method to test this in a different way to figure out where the problem is?
Steps for troubleshooting are:
The first step is to determine whether the error comes from the ingress or from your back-end service.
In your case, the error This site can't be reached somehostname.xyz refused to connect sounds like the Ingress found the service to map to, used port 5000 to connect to it, and either the connection was refused or nothing was listening on port 5000 for that service.
I'd next look at the auth-svc logs to see whether the request reached the service and why it was refused.
My guess is that the auth service is listening on port 31840 but your ingress says to connect to port 5000 based on the configuration.
You might try adding a port mapping from 80 to 31840 as a hack/test to see if you get a different error.
Something like:
spec:
  rules:
    - host: "somehostname.xyz"
      http:
        paths:
          - path: "/"
            backend:
              service:
                port:
                  number: 31840
I've only included the part needed to show the indentation properly.
Another way to test this is to create additional URLs that map to different ports, for example:
/try1 => auth-svc:5000
/try2 => auth-svc:31840
/try3 => auth-svc:443
One other part that might be an issue is that you are using http: I don't know of any auth service that would use plain http, and connecting over http to an app that expects https will get the connection refused or produce a strange error, so that might also be related to the error you are seeing.
Hope this gives you some ideas to try.
The solution has three parts:
Use kubectl get all to find out the running ingress service:
NAME TYPE EXTERNAL-IP PORT(S)
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
Copy the EXTERNAL-IP of the service (in this case 172.xxx.xx.130).
Add a DNS A record named *.somehostname.xyz for the cloud cluster, and use the IP address 172.xxx.xx.130.
When accessing the hostname via the browser, make sure that http is used instead of https.
In this application, Node.js pods run inside Kubernetes, while MongoDB itself sits outside, on the host as localhost.
This is indeed not a good design, but it's only for the dev environment. In production there will be a separate MongoDB server, and hence a non-loopback IP for the endpoint, so it won't be a problem there.
I have considered the following options for the dev environment:
Use a localhost connection string to connect to MongoDB, but it will refer to the pod's own localhost, not the host's localhost.
Use a headless service and provide the localhost IP and port in the Endpoints. However, Endpoints doesn't allow a loopback address.
Please suggest if there is a way to access a MongoDB database on the host's localhost from inside the cluster (pod / Node.js application).
I'm running on Docker for Windows, and for me just using host.docker.internal instead of localhost works fine.
For example, my mongodb connection string looks like this:
mongodb://host.docker.internal:27017/mydb
As an aside, my hosts file includes the following lines (which I didn't add, I guess the docker desktop installation did that):
# Added by Docker Desktop
192.168.1.164 host.docker.internal
192.168.1.164 gateway.docker.internal
127.0.0.1 is the localhost (lo0) interface IP address. Hosts, nodes and pods each have their own localhost interfaces, and they are not connected to each other.
Your MongoDB is running on the host machine and cannot be reached via localhost (or its IP range) from inside a cluster pod or from inside the VM.
In your case, create a headless Service and an Endpoints object for it inside the cluster.
Your mongodb-service.yaml file should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  clusterIP: None
  ports:
    - protocol: TCP
      port: <multipass-port-you-are-using>
      targetPort: <multipass-port-you-are-using>
  selector:
    name: example
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb-service
subsets:
  - addresses:
      - ip: 10.62.176.1
    ports:
      - port: <multipass-port-you-are-using>
I have added the IP you mentioned in the comment section.
After creating the Service and Endpoints, you can use the mongodb-service name and port <multipass-port-you-are-using> from inside any pod of this cluster as the destination.
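For example, the connection string used by the Node.js app inside a pod would then look something like this (a sketch; the database name mydb is a placeholder, and the port placeholder is the one from the manifests above):

```
mongodb://mongodb-service:<multipass-port-you-are-using>/mydb
```

The cluster DNS resolves mongodb-service to the headless Service, which in turn forwards to the host IP listed in the Endpoints object.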
Take a look: mysql-localhost, mongodb-localhost.
If you are using minikube to deploy a local Kubernetes cluster, you can reach your local environment using the host name host.minikube.internal.
I can add one more solution, using an Ingress and an ExternalName service, which may help some of you.
I deploy my complete system locally with a special Kustomize overlay.
When I want to replace one of the deployments with a service running locally in my IDE, I do the following:
I add an ExternalName service which forwards to host.docker.internal:
kind: Service
apiVersion: v1
metadata:
  name: backend-ide
spec:
  type: ExternalName
  externalName: host.docker.internal
and reconfigure my ingress to forward certain requests from my web app to this ExternalName service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: url.used.by.webapp.com
      http:
        paths:
          - path: /customerportal/api(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: backend-ide
                port:
                  number: 8080
In the same way, I can reach all other ports on my host.
I've set up a Kubernetes ingress with minikube on a CentOS 7.6 virtual machine.
It finally works well on that machine, as described below:
Name: my-ingress
Namespace: default
Address: 172.17.0.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
localhost
/route1/?(.*) service1 (172.18.0.4:80)
/route2/?(.*) service2 (172.18.0.4:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
And I made my /etc/hosts as follows:
172.17.0.2 localhost
172.17.0.2 0.0.0.0
This works fine on the virtual machine: I can successfully access my API through curl localhost/route1/api/values.
But now I would like to access this from another machine for development. My idea was to get the same successful result through curl 192.168.2.21/route1/api/values on the other machine, with 192.168.2.21 being the IP address of the virtual machine running Kubernetes. But it failed with the message "Empty reply from server".
Is there another method by which I can make this happen, i.e. access the result of the ingress from another machine?
What I tried was to install local-dev-with-docker-for-mac-kubernetes, but it didn't help.
I also saw suggestions to work around this with services, but since I would have to work with a lot of services, I'm afraid that would be hard to manage while avoiding duplicated ports. So I'm mainly looking for a workaround based on Ingress.
Your config specifies the host as localhost, so only incoming traffic with the Host header localhost gets handled. You can verify this with curl 172.17.0.2/route1/api/values from the same machine; you should get the same empty-reply message.
To fix this, you can omit the host setting, so the ingress controller will handle all incoming HTTP traffic. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules
UPDATE
minimal ingress example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
I'm using minikube to test Kubernetes on the latest macOS.
Here are my relevant YAMLs:
namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: micro
  labels:
    name: micro
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: adderservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: adderservice
    spec:
      containers:
        - name: adderservice
          image: jeromesoung/adderservice:0.0.1
          ports:
            - containerPort: 8080
service.yml
apiVersion: v1
kind: Service
metadata:
  name: adderservice
  labels:
    run: adderservice
spec:
  ports:
    - port: 8080
      name: main
      protocol: TCP
      targetPort: 8080
  selector:
    run: adderservice
  type: NodePort
After running minikube start, the steps I took to deploy is as follows:
kubectl create -f namespace.yml to create the namespace
kubectl config set-context minikube --namespace=micro
kubectl create -f deployment.yml
kubectl create -f service.yml
Then, I get the NodeIP and NodePort with the commands below:
kubectl get services to get the NodePort
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adderservice NodePort 10.99.155.255 <none> 8080:30981/TCP 21h
minikube ip to get the nodeIP
$ minikube ip
192.168.99.103
But when I do curl, I always get Connection Refused like this:
$ curl http://192.168.99.103:30981/add/1/2
curl: (7) Failed to connect to 192.168.99.103 port 30981: Connection refused
So I checked node, pod, deployment and endpoint as follows:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 23h v1.13.3
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
adderservice-5b567df95f-9rrln 1/1 Running 0 23h
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
adderservice 1 1 1 1 23h
$ kubectl get endpoints
NAME ENDPOINTS AGE
adderservice 172.17.0.5:8080 21h
I also checked service list from minikube with:
$ minikube service -n micro adderservice --url
http://192.168.99.103:30981
I've read many posts regarding accessing k8s services via NodePorts. To my knowledge, I should be able to access the app with no problem. The only thing I suspect is that I'm using a custom namespace. Will this cause the access issue?
I know namespace will change the DNS, so, to be complete, I ran below commands also:
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec -ti adderservice-5b567df95f-9rrln -- nslookup kubernetes.micro
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: kubernetes.micro
Address: 198.105.244.130
Name: kubernetes.micro
Address: 104.239.207.44
Could anyone help me out? Thank you.
The error Connection Refused usually means that the application inside the container does not accept requests on the targeted interface or is not mapped through the expected ports.
Things you need to be aware of:
Make sure that your application binds to 0.0.0.0 so it can receive requests from outside the container, whether externally (as in public) or from other containers.
Make sure that your application is actually listening on the containerPort and targetPort as expected.
In your case you have to make sure that ADDERSERVICE_SERVICE_HOST equals 0.0.0.0 and ADDERSERVICE_SERVICE_PORT equals 8080, which should be the same value as targetPort in service.yml and containerPort in deployment.yml.
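To illustrate the bind-address point, here is a minimal Python socket sketch (purely illustrative, no Kubernetes involved): a server bound to 127.0.0.1 answers only loopback traffic, so connections forwarded to the pod IP by the NodePort are refused, while a server bound to 0.0.0.0 accepts connections on every interface.

```python
import socket

# A server bound to 127.0.0.1 is reachable only via the loopback interface;
# traffic arriving on the pod's own IP (what the NodePort forwards to) is refused.
loop_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loop_only.bind(("127.0.0.1", 0))

# Binding to 0.0.0.0 accepts connections on every interface, including the pod IP.
all_ifaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_ifaces.bind(("0.0.0.0", 0))

print(loop_only.getsockname()[0])   # 127.0.0.1
print(all_ifaces.getsockname()[0])  # 0.0.0.0

loop_only.close()
all_ifaces.close()
```

This is the same distinction as Flask's app.run(host="0.0.0.0") versus its default of binding to 127.0.0.1 only.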
This doesn't answer the question, but if someone who googled the same issue lands here like me, here is my solution to the same problem.
My Mac's system IP and the minikube IP are different.
So localhost:port didn't work; instead, get the IP with:
minikube ip
Then use that IP:port to access the app, and it works.
Check whether the service is really listening on 8080. Try telnet from within the container:
telnet 127.0.0.1 8080
and also against the pod IP:
telnet 172.17.0.5 8080
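If telnet isn't available in the image (it often isn't in slim containers), a tiny Python check from within the pod does the same job. This is just a sketch using the standard library; the IPs and port in the commented lines are the ones from the question.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run inside the pod:
# print(port_open("127.0.0.1", 8080))   # is the app listening locally?
# print(port_open("172.17.0.5", 8080))  # is it reachable on the pod IP?
```

If the first check succeeds but the second fails, the app is bound to 127.0.0.1 only, which matches the Connection Refused symptom above.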