I am following a tutorial to access a pod running inside a Kubernetes cluster behind a service. The cluster is running on Windows 10 using Docker Desktop (with the Kubernetes option enabled).
I am unable to access it via https://local.ticket.dev/api/users/currentuser; it always says "Site can't be reached: local.ticket.dev unexpectedly closed the connection."
I have disabled the redirect, but it still redirects HTTP to HTTPS:
Request URL: http://local.ticket.dev/api/users/currentuser
Request Method: GET
Status Code: 307 Internal Redirect
Referrer Policy: strict-origin-when-cross-origin
Location: https://local.ticket.dev/api/users/currentuser
Non-Authoritative-Reason: HSTS
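For reference, curl does not apply HSTS, so it can help tell a browser-generated redirect apart from one coming from the ingress itself; a plain request against the HTTP endpoint would be:
curl -v http://local.ticket.dev/api/users/currentuser
It is also worth knowing that the whole .dev TLD is on the browsers' HSTS preload list, so Chrome will upgrade http://local.ticket.dev to HTTPS regardless of what the ingress does.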
Here is the current state of the cluster:
kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service <none> local.ticket.dev 80 29s
kubectl get services
Please note it's running on a local Windows 10 machine with Docker Desktop, and the LoadBalancer external IP has remained pending even after 6 hours.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.96.254.94 <none> 3000/TCP 45s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h17m
nginx-ingress-1629401528-controller LoadBalancer 10.110.199.210 <pending> 80:31430/TCP,443:32346/TCP 5h13m
nginx-ingress-1629401528-default-backend ClusterIP 10.108.79.252 <none> 80/TCP 5h13m
kubectl get pods
NAME READY STATUS RESTARTS AGE
auth-depl-c98cdf66f-txqxt 1/1 Running 0 54s
nginx-ingress-1629401528-controller-569576ddbd-2htxz 1/1 Running 0 5h13m
nginx-ingress-1629401528-default-backend-69c7fc6549-xxf8w 1/1 Running 0 5h13m
How I configured it is as follows:
1 - Installed the NGINX ingress controller with the following command
helm install stable/nginx-ingress --generate-name
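(Note: the stable/nginx-ingress chart has since been deprecated; if reproducing this setup today, the roughly equivalent install from the maintained ingress-nginx repository would be the following, with the release name chosen freely:)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx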
2 - Ran skaffold dev:
Listing files to watch...
- billo/ticket_auth
Generating tags...
- billo/ticket_auth -> billo/ticket_auth:latest
Some taggers failed. Rerun with -vdebug for errors.
Checking cache...
- billo/ticket_auth: Found Locally
Starting test...
Tags used in deployment:
- billo/ticket_auth -> billo/ticket_auth:d869228....
Starting deploy...
- deployment.apps/auth-depl created
- service/auth-srv created
- ingress.networking.k8s.io/ingress-service created
Waiting for deployments to stabilize...
- deployment/auth-depl is ready.
Deployments stabilized in 2.302 seconds
Waiting for deployments to stabilize...
Deployments stabilized in 6.9904ms
Press Ctrl+C to exit
Watching for changes...
[auth]
[auth] > auth#1.0.0 start
[auth] > ts-node-dev --poll src/index.ts
[auth]
[auth] [INFO] 00:59:23 ts-node-dev ver. 1.1.8 (using ts-node ver. 9.1.1, typescript ver. 4.3.5)
[auth] Auth!!!! listen to 3000 port
Looking at the last line, it seems that my Auth pod is listening on port 3000.
auth-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: billo/ticket_auth
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: local.ticket.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
Configuration in the Host file
# Added by Docker Desktop
127.0.0.1 host.docker.internal
127.0.0.1 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
First, disable the HTTPS redirect by adding the following annotation to the ingress:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
The full ingress manifest with the annotation added:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: local.ticket.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
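After re-applying the manifest, it is worth confirming that the controller picked up the ingress and that the service actually has a pod behind it; for example (resource names taken from the manifests above):
kubectl describe ingress ingress-service
kubectl get endpoints auth-srv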
Did you get an external IP for the Nginx controller service? It shows pending because you are running on a local system.
You might also need to add your ingress hostnames to the hosts file manually (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows):
127.0.0.1 ingress.local
127.0.0.1 local.ticket.dev
or, more generally:
<host-IP> local.ticket.dev
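If you prefer not to touch the hosts file, curl can map the hostname to 127.0.0.1 for a single request with --resolve, which also sidesteps the browser's HSTS handling of the .dev TLD:
curl --resolve local.ticket.dev:80:127.0.0.1 http://local.ticket.dev/api/users/currentuser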
Related
I am fairly new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build up and teardown of objects within the cluster. When I run skaffold dev the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request handling logic is simple and so I feel the issue is not in the route handling but the configuration of my cluster, specifically with the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated, as I have struggled with this bug for some time.
ingress yaml config :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
Note that I have an entry in my /etc/hosts file mapping ticketing.dev to 127.0.0.1.
Auth service yaml config :
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
skaffold yaml config :
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: conorl47/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
For installing the ingress nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/ , namely the Docker desktop installation instruction.
After running that command I see the following two Docker containers running in Docker Desktop.
The two services created in the ingress-nginx namespace are :
❯ k get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.6.146 <pending> 80:30036/TCP,443:30465/TCP 22m
ingress-nginx-controller-admission ClusterIP 10.108.8.26 <none> 443/TCP 22m
When I kubectl describe both of these services I see the following :
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.6.146
IPs: 10.103.6.146
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30036/TCP
Endpoints: 10.1.0.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30465/TCP
Endpoints: 10.1.0.10:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32485
Events: <none>
and :
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.8.26
IPs: 10.108.8.26
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.0.10:8443
Session Affinity: None
Events: <none>
As it seems, you have made the ingress service of type LoadBalancer. This will usually provision an external load balancer from your cloud provider of choice. That's also why it's still pending: it's waiting for the load balancer to be ready, but that will never happen on a local cluster.
If you want to have that ingress service reachable outside your cluster, you need to use type NodePort.
Their docs are not great on this point, and LoadBalancer seems to be the default in that manifest. You could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying, or, if you use Helm, you can probably configure this through the chart values.
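If the controller was installed with Helm, the service type can be set through the chart values instead of editing rendered manifests; assuming the current ingress-nginx chart (which exposes controller.service.type), something like:
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort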
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit the controller service in place:
kubectl edit svc ingress-nginx-controller -n ingress-nginx
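With Docker Desktop the Kubernetes node is the local machine, so once the controller service is a NodePort it should be reachable on the mapped HTTP node port (30036 in the service listing above), for example with the ticketing.dev hosts-file entry in place:
curl -v http://ticketing.dev:30036/api/users/
The exact path is only illustrative; anything matching the /api/users/?(.*) rule should be routed to auth-srv.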
I'm trying to execute an application inside a kubernetes cluster.
I used to launch the application with docker-compose without problems, but when I created my Kubernetes deployment files, I was not able to access the service inside the cluster, even after exposing it. Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 3
selector:
matchLabels:
app: myapp
# type: LoadBalancer
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: jksun12/vdsaipro
# command: ["/run.sh"]
ports:
- containerPort: 80
- containerPort: 3306
# volumeMounts:
# - name: myapp-pv-claim
# mountPath: /var/lib/mysql
# volumes:
# - name: myapp-pv-claim
# persistentVolumeClaim:
# claimName: myapp-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myapp-pv-claim
labels:
app: myapp
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 4Gi
Here is the result of kubectl describe service myapp-service:
Name: myapp-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP: 10.109.12.113
Port: port-1 80/TCP
TargetPort: 80/TCP
NodePort: port-1 31892/TCP
Endpoints: 172.18.0.5:80,172.18.0.8:80,172.18.0.9:80
Port: port-2 3306/TCP
TargetPort: 3306/TCP
NodePort: port-2 32393/TCP
Endpoints: 172.18.0.5:3306,172.18.0.8:3306,172.18.0.9:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
And here are the errors that I get when I try to access them:
curl 172.17.0.2:32393
curl: (1) Received HTTP/0.9 when not allowed
And here is the next result when I try to access the other port
curl 172.17.0.2:31892
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
I'm running Ubuntu Server 20.04.1 LTS. The setup is running on top of minikube.
Thanks for your help.
If you are accessing the service from inside the cluster, use the ClusterIP as the IP, so curl should target 10.109.12.113:80 and 10.109.12.113:3306.
If you are accessing it from outside the cluster, use the node IP and NodePort, so curl should target <NODEIP>:32393 and <NODEIP>:31892.
From inside the cluster I would also use POD IPs directly to understand if the issue is at service level or pod level.
You need to make sure that the application is listening on port 80 and port 3306. Only mentioning containerPort as 80 and 3306 does not make the application listen on those ports.
Also make sure that the application code inside the pod is listening on 0.0.0.0 instead of 127.0.0.1
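Since this cluster runs on top of minikube, the node IP and a ready-made URL for a NodePort service can be looked up directly; assuming the service is named myapp-service, as in the describe output above:
minikube ip
kubectl get nodes -o wide
minikube service myapp-service --url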
Trying to do something that should be pretty simple: starting up an Express pod and fetching localhost:5000/, which should respond with Hello World!.
I've installed ingress-nginx for Docker for Mac and minikube
Mandatory: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Docker for Mac: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
minikube: minikube addons enable ingress
I run skaffold dev --tail
It prints out Example app listening on port 5000, so apparently it is running
Navigate to localhost and localhost:5000 and get a "Could not get any response" error
Also, I tried the minikube ip, which is 192.168.99.100, and experienced the same results
Not quite sure what I am doing wrong here. Code and configs are below. Suggestions?
index.js
// Import dependencies
const express = require('express');
// Set the ExpressJS application
const app = express();
// Set the listening port
// Web front-end is running on port 3000
const port = 5000;
// Set root route
app.get('/', (req, res) => res.send('Hello World!'));
// Listen on the port
app.listen(port, () => console.log(`Example app listening on port ${port}`));
skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
local:
push: false
artifacts:
- image: sockpuppet/server
context: server
docker:
dockerfile: Dockerfile.dev
sync:
manual:
- src: '**/*.js'
dest: .
deploy:
kubectl:
manifests:
- k8s/ingress-service.yaml
- k8s/server-deployment.yaml
- k8s/server-cluster-ip-service.yaml
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: server-cluster-ip-service
servicePort: 5000
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
spec:
replicas: 3
selector:
matchLabels:
component: server
template:
metadata:
labels:
component: server
spec:
containers:
- name: server
image: sockpuppet/server
ports:
- containerPort: 5000
server-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
name: server-cluster-ip-service
spec:
type: ClusterIP
selector:
component: server
ports:
- port: 5000
targetPort: 5000
Dockerfile.dev
FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
Output from describe
$ kubectl describe ingress ingress-service
Name: ingress-service
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
localhost
/ server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16h nginx-ingress-controller Ingress default/ingress-service
Normal CREATE 21s nginx-ingress-controller Ingress default/ingress-service
Output from kubectl get po -l component=server
$ kubectl get po -l component=server
NAME READY STATUS RESTARTS AGE
server-deployment-cf6dd5744-2rnh9 1/1 Running 0 11s
server-deployment-cf6dd5744-j9qvn 1/1 Running 0 11s
server-deployment-cf6dd5744-nz4nj 1/1 Running 0 11s
Output from kubectl describe pods server-deployment. I noticed that Host Port is 0/TCP. Possibly the issue?
Name: server-deployment-6b78885779-zttns
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Tue, 08 Oct 2019 19:54:03 -0700
Labels: app.kubernetes.io/managed-by=skaffold-v0.39.0
component=server
pod-template-hash=6b78885779
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.39
skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/server-deployment-6b78885779
Containers:
server:
Container ID: docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
Image: sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Image ID: docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 08 Oct 2019 19:54:05 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-qz5kr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qz5kr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/server-deployment-6b78885779-zttns to minikube
Normal Pulled 7s kubelet, minikube Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
Normal Created 7s kubelet, minikube Created container server
Normal Started 6s kubelet, minikube Started container server
OK, got this sorted out now.
It boils down to the kind of Service being used: ClusterIP.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
If I am wanting to connect to a Pod or Deployment directly from outside of the cluster (something like Postman, pgAdmin, etc.) and I want to do it using a Service, I should be using NodePort:
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in my case, if I want to continue using a Service, I'd change my Service manifest to:
apiVersion: v1
kind: Service
metadata:
name: server-cluster-ip-service
spec:
type: NodePort
selector:
component: server
ports:
- port: 5000
targetPort: 5000
nodePort: 31515
Make sure to manually set nodePort: <port>, otherwise it is assigned randomly and is a pain to use.
Then I'd get the minikube IP with minikube ip and connect to the Pod with 192.168.99.100:31515.
At that point, everything worked as expected.
But that means having separate sets of development (NodePort) and production (ClusterIP) manifests, which is probably fine, although I want my manifests to stay as close as possible to the production version (i.e. ClusterIP).
There are a couple ways to get around this:
Using something like Kustomize, where you set a base.yaml and then have overlays for each environment that change only the relevant info, avoiding manifests that are mostly duplicative (a sketch is shown below, after the port-forward discussion).
Using kubectl port-forward. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:
kubectl port-forward services/postgres-cluster-ip-service 5432:5432
Or for the back-end and Postman:
kubectl port-forward services/server-cluster-ip-service 5000:5000
I'm playing with doing this through the ingress-service.yaml using nginx-ingress, but don't have that working quite yet. Will update when I do. But for me, port-forward seems the way to go since I can just have one set of production manifests that I don't have to alter.
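For the Kustomize option mentioned above, a minimal sketch could look like this (directory and file names are made up for illustration; depending on the kustomize version the base may need to be listed under bases: instead of resources:):
# base/kustomization.yaml
resources:
- server-cluster-ip-service.yaml

# overlays/dev/kustomization.yaml
resources:
- ../../base
patchesStrategicMerge:
- service-nodeport-patch.yaml

# overlays/dev/service-nodeport-patch.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
Then kubectl apply -k overlays/dev deploys the NodePort variant for development, while production keeps applying the base.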
Skaffold Port-Forwarding
This is even better for my needs. Appending this to the bottom of the skaffold.yaml is basically the same thing as kubectl port-forward, without tying up a terminal or two:
portForward:
- resourceType: service
resourceName: server-cluster-ip-service
port: 5000
localPort: 5000
- resourceType: service
resourceName: postgres-cluster-ip-service
port: 5432
localPort: 5432
Then run skaffold dev --port-forward.
I've developed a containerized Flask application and I want to deploy it with Kubernetes. However, I can't connect the ports of the Container with the Service correctly.
Here is my Deployment file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: <my-app-name>
spec:
replicas: 1
template:
metadata:
labels:
app: flaskapp
spec:
containers:
- name: <container-name>
image: <container-image>
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5000
name: http-port
---
apiVersion: v1
kind: Service
metadata:
name: <service-name>
spec:
selector:
app: flaskapp
ports:
- name: http
protocol: TCP
targetPort: 5000
port: 5000
nodePort: 30013
type: NodePort
When I run kubectl get pods, everything seems to work fine:
NAME READY STATUS RESTARTS AGE
<pod-id> 1/1 Running 0 7m
When I run kubectl get services, I get the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
<service-name> NodePort 10.105.247.63 <none> 5000:30013/TCP
...
However, when I give the following URL to the browser: 10.105.247.63:30013, the browser keeps loading but never returns the data from the application.
Does anyone know where the problem could be? It seems that the service is not connected to the container's port.
30013 is the port on the node, not on the cluster IP. To get a reply you would have to connect to <IP-address-of-the-node>:30013. To get the list of nodes you can run:
kubectl get nodes -o=wide
You can also go through the CLUSTER-IP, but then you have to use the exposed port 5000: 10.105.247.63:5000.
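If the goal is just to reach the Flask app from the local machine for testing, port-forwarding the service avoids having to look up node IPs at all (substitute the real service name):
kubectl port-forward service/<service-name> 5000:5000
and then browse to http://localhost:5000.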
I basically want to access the Nginx hello page externally by URL. I've made a (working) A record pointing a subdomain at my v-server running Kubernetes and Nginx ingress: vps.my-domain.com
I installed Kubernetes via kubeadm on CoreOS as a single-node cluster using these tutorials: https://kubernetes.io/docs/setup/independent/install-kubeadm/, https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, and nginx-ingress using https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal.
I also added the following entry to the /etc/hosts file:
31.214.xxx.xxx vps.my-domain.com
(xxx was replaced with the last three digits of the server IP)
I used the following file to create the deployment, service, and ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- name: http
containerPort: 80
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
run: my-nginx
spec:
type: ClusterIP
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
run: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "False"
spec:
rules:
- host: vps.my-domain.com
http:
paths:
- backend:
serviceName: my-nginx
servicePort: 80
Output of describe ing:
core#vps ~/k8 $ kubectl describe ing
Name: my-nginx
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
vps.my-domain.com
my-nginx:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1",...}
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/ssl-redirect: False
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UPDATE 49m (x2 over 56m) nginx-ingress-controller Ingress default/my-nginx
While I can curl the Nginx hello page using the node IP and port 80, it doesn't work from outside the VM: Failed to connect to vps.my-domain.com port 80: Connection refused
Did I forget something, or is the configuration just wrong? Any help or tips would be appreciated!
Thank you
EDIT:
Visiting "vps.my-domain.com:30519` gives me the nginx welcome page. But in the config I specified port :80.
I got the port from the output of get services:
core#vps ~/k8 $ kubectl get services --all-namespaces | grep "my-nginx"
default my-nginx ClusterIP 10.107.5.14 <none> 80/TCP 1h
I also got it to work on port :80 by adding
externalIPs:
- 31.214.xxx.xxx
to the my-nginx service. But this is not how it's supposed to work, right? In the tutorials and examples for Kubernetes and ingress-nginx, it always worked without externalIPs. Also, the ingress rules don't work now (e.g. if I set the path to /test).
So apparently I was missing one part: the load balancer. I'm not sure why this wasn't mentioned in those instructions as a requirement, but I followed this tutorial: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb and now everything works.
Since MetalLB expects a range of IP addresses, you have to list your single IP address with a /32 subnet mask: 31.214.xxx.xxx/32.
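For reference, with the ConfigMap-based configuration that MetalLB used at the time of the linked guide, a single-address layer2 pool would look roughly like this (pool name assumed):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 31.214.xxx.xxx/32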