FTPS server doesn't work properly using Kubernetes / Docker

I have had a problem with FTPS (FileZilla) and Kubernetes for weeks.
CONTEXT:
I have a school project with Kubernetes and FTPS.
I need to create an FTPS server in Kubernetes on port 21, and it needs to run on Alpine Linux.
So I created an image of my FTPS Alpine server using a Docker container.
I tested whether it works properly on its own, using: docker run --name test-alpine -itp 21:21 test_alpine
I get this output in FileZilla:
Status: Connecting to 192.168.99.100:21...
Status: Connection established, waiting for welcome message...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Status: Calculating timezone offset of server...
Status: Timezone offset of server is 0 seconds.
Status: Directory listing of "/" successful
It works: FileZilla sees the file that is inside my FTPS directory.
I am good so far (it works in active mode).
PROBLEM:
What I wanted was to use my image in my Kubernetes cluster (I use Minikube).
When I connect my Docker image to an ingress/service/deployment in Kubernetes, I get this:
Status: Connecting to 192.168.99.100:30894...
Status: Connection established, waiting for welcome message...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PORT 192,168,99,1,227,247
Response: 500 Illegal PORT command.
Command: PASV
Response: 227 Entering Passive Mode (172,17,0,5,117,69).
Command: LIST
Error: The data connection could not be established: EHOSTUNREACH - No route to host
Error: Connection timed out after 20 seconds of inactivity
Error: Failed to retrieve directory listing
SETUP:
ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  namespace: default
  name: ingress-controller
spec:
  backend:
    serviceName: my-nginx
    servicePort: 80
  backend:
    serviceName: ftps-alpine
    servicePort: 21
ftps-alpine.yml:
apiVersion: v1
kind: Service
metadata:
  name: ftps-alpine
  labels:
    run: ftps-alpine
spec:
  type: NodePort
  ports:
  - port: 21
    targetPort: 21
    protocol: TCP
    name: ftp21
  - port: 20
    targetPort: 20
    protocol: TCP
    name: ftp20
  selector:
    run: ftps-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ftps-alpine
spec:
  selector:
    matchLabels:
      run: ftps-alpine
  replicas: 1
  template:
    metadata:
      labels:
        run: ftps-alpine
    spec:
      containers:
      - name: ftps-alpine
        image: test_alpine
        imagePullPolicy: Never
        ports:
        - containerPort: 21
        - containerPort: 20
WHAT DID I TRY:
When I saw the error message "Error: The data connection could not be established: EHOSTUNREACH - No route to host", I googled it and found this thread: FTP in passive mode : EHOSTUNREACH - No route to host. But I already run my FTPS server in active mode.
I changed the vsftpd.conf file and my Service.
vsftpd.conf:
seccomp_sandbox=NO
pasv_promiscuous=NO
listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
#secure_chroot_dir=/vsftpd/empty
pam_service_name=vsftpd
pasv_enable=YES
pasv_min_port=30020
pasv_max_port=30021
user_sub_token=$USER
local_root=/home/$USER/ftp
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
allow_writeable_chroot=YES
#listen_port=21
I changed the NodePorts of my Kubernetes Service to 30020 and 30021 and added them to the container ports.
I changed the pasv_min_port and pasv_max_port.
I added the pasv_address of my Minikube IP.
Nothing worked.
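For reference, here is roughly how I plugged in the Minikube IP (minikube ip is the real command; the sed line just illustrates the substitution I did):
$ minikube ip
192.168.99.100
$ sed -i 's|^pasv_address=.*|pasv_address=192.168.99.100|' vsftpd.conf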
QUESTION:
How can I get that first, successful output, but for my Kubernetes cluster?
If you have any questions to clarify, no problem.
UPDATE:
Thanks to coderanger I have made progress, and now there is this problem:
Status: Connecting to 192.168.99.100:30894...
Status: Connection established, waiting for welcome message...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PASV
Response: 227 Entering Passive Mode (192,168,99,100,178,35).
Command: LIST
Error: The data connection could not be established: ECONNREFUSED - Connection refused by server

It works with the following change:
apiVersion: v1
kind: Service
metadata:
  name: ftps-alpine
  labels:
    run: ftps-alpine
spec:
  type: NodePort
  ports:
  - port: 21
    targetPort: 21
    nodePort: 30025
    protocol: TCP
    name: ftp21
  - port: 20
    targetPort: 20
    protocol: TCP
    nodePort: 30026
    name: ftp20
  - port: 30020
    targetPort: 30020
    nodePort: 30020
    protocol: TCP
    name: ftp30020
  - port: 30021
    targetPort: 30021
    nodePort: 30021
    protocol: TCP
    name: ftp30021
  selector:
    run: ftps-alpine
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ftps-alpine
spec:
  selector:
    matchLabels:
      run: ftps-alpine
  replicas: 1
  template:
    metadata:
      labels:
        run: ftps-alpine
    spec:
      containers:
      - name: ftps-alpine
        image: test_alpine
        imagePullPolicy: Never
        ports:
        - containerPort: 21
        - containerPort: 20
        - containerPort: 30020
        - containerPort: 30021
and for the vsftpd.conf:
seccomp_sandbox=NO
pasv_promiscuous=NO
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
#secure_chroot_dir=/vsftpd/empty
pam_service_name=vsftpd
pasv_enable=YES
pasv_min_port=30020
pasv_max_port=30021
user_sub_token=$USER
local_root=/home/$USER/ftp
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
allow_writeable_chroot=YES
#listen_port=21
pasv_address=#minikube_ip#

First you need to fix your passive port range to actually be port 20, like you set in your Service:
pasv_min_port=20
pasv_max_port=20
Then you need to override pasv_address to match whatever IP the user should be connecting to; pick one of your node IPs.
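A minimal sketch of the two changes together in vsftpd.conf (the IP is illustrative; use one of your node IPs, e.g. the output of minikube ip):
# pin the passive data port to 20 so it matches the port the Service exposes
pasv_min_port=20
pasv_max_port=20
# advertise an address the FTPS client can actually reach, not the pod IP
pasv_address=192.168.99.100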

Related

How to expose MariaDB in Kubernetes?

I have a Docker container with MariaDB running in MicroK8s (on a single Unix machine).
# Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:latest
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: sa
        ports:
        - containerPort: 3306
These are the logs:
(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
Now:
connecting to port 3306 on the machine does not work;
connecting after exposing the pod with a Service (of any type) on port 8081 does not work either.
How can I get the connection through?
The answer has been written in the comments section, but to clarify it I am posting the solution here as a Community Wiki.
In this case the connection problem was resolved by setting spec.selector correctly.
The .spec.selector field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (app: mariadb).
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
You need to use a Service with the proper label.
Example Service:
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  type: ClusterIP
You can use the service name to connect, or else change the service type to LoadBalancer to expose it with an IP.
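For example, a quick way to test the connection by service name from inside the cluster (a sketch; it assumes the mysql client shipped in the mariadb image and the root password sa from the Deployment above):
kubectl run mysql-client --rm -it --image=mariadb:latest --restart=Never -- \
  mysql -h mariadb -P 3306 -u root -psa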
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  type: LoadBalancer
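Note that on a bare MicroK8s install (as in this question) the EXTERNAL-IP of a LoadBalancer Service stays <pending> unless a load-balancer implementation is enabled. MicroK8s ships MetalLB as an addon; the IP range below is illustrative:
# enable MetalLB and give it a small range of IPs to hand out
microk8s enable metallb:10.64.140.43-10.64.140.49
# then check the assigned external IP
microk8s kubectl get svc mariadb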

Why does the ingress always return a 502 error for one specific service NodePort only?

I have an Alpine Docker image that runs my raw PHP website on an Apache server (PHP 7.4), with EXPOSE 80.
I want to run the image on Kubernetes (GKE) with an ingress controller.
I'm pushing the image with the gcloud command to the Google Container Registry.
Both the Deployment and the Service have no errors and were created successfully as NodePort.
The Ingress I deployed is from the Google tutorial (https://cloud.google.com/community/tutorials/nginx-ingress-gke).
In my Ingress there are now:
34.68.78.46.xip.io/
34.68.78.46.xip.io/hello
34.68.78.46.xip.io/jb(/|$)(.*)
/hello has the same configuration as the tutorial, and it is working fine.
/jb has the configuration shown below and always returns a 502 error.
The Ingress details in the GCP console show no warnings or errors.
I have checked:
Kubernetes GKE Ingress : 502 Server Error
GKE Ingress: 502 error when downloading file
502 Server Error Google kubernetes
Here is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jomlahbazar-deployment
spec:
  selector:
    matchLabels:
      greeting: jomlah
      department: bazar
  replicas: 1
  template:
    metadata:
      labels:
        greeting: jomlah
        department: bazar
    spec:
      containers:
      - name: jomlah
        image: "us.gcr.io/third-nature-273904/jb-img-1-0:v1"
        ports:
        - containerPort: 80
        env:
        - name: "PORT"
          value: "80"
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  name: jomlahbazar-service
spec:
  type: NodePort
  selector:
    greeting: jomlah
    department: bazar
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Here is the ingress file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: 34.68.78.46.xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: jomlahbazar-service
          servicePort: 80
      - path: /hello
        backend:
          serviceName: hello-app
          servicePort: 8080
      - path: /jb(/|$)(.*)
        backend:
          serviceName: jomlahbazar-service
          servicePort: 80
Here is the ingress description:
Name: ingress-resource
Namespace: default
Address: 34.68.78.46
Default backend: default-http-backend:80 (10.20.1.6:8080)
Rules:
Host Path Backends
---- ---- --------
34.68.78.46.xip.io
/ jomlahbazar-service:80 (<none>)
/hello hello-app:8080 (10.20.2.61:8080)
/jb(/|$)(.*) jomlahbazar-service:80 (<none>)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/add-base-url: true
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: false
nginx.ingress.kubernetes.io/use-regex: true
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/add-base-url":"true","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.ingress.kubernetes.io/use-regex":"true"},"name":"ingress-resource","namespace":"default"},"spec":{"rules":[{"host":"34.68.78.46.xip.io","http":{"paths":[{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/"},{"backend":{"serviceName":"hello-app","servicePort":8080},"path":"/hello"},{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/jb(/|$)(.*)"}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 36m (x6 over 132m) nginx-ingress-controller Configuration for default/ingress-resource was added or updated
The output of kubectl get ing ingress-resource -o yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/add-base-url":"true","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.ingress.kubernetes.io/use-regex":"true"},"name":"ingress-resource","namespace":"default"},"spec":{"rules":[{"host":"34.68.78.46.xip.io","http":{"paths":[{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/"},{"backend":{"serviceName":"hello-app","servicePort":8080},"path":"/hello"},{"backend":{"serviceName":"jomlahbazar-service","servicePort":80},"path":"/jb(/|$)(.*)"}]}}]}}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
  creationTimestamp: "2021-02-11T06:00:07Z"
  generation: 5
  name: ingress-resource
  namespace: default
  resourceVersion: "2195351"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/ingress-resource
  uid: 74dc822f-91cb-4902-991b-1ad298f44ae6
spec:
  rules:
  - host: 34.68.78.46.xip.io
    http:
      paths:
      - backend:
          serviceName: jomlahbazar-service
          servicePort: 80
        path: /
      - backend:
          serviceName: hello-app
          servicePort: 8080
        path: /hello
      - backend:
          serviceName: jomlahbazar-service
          servicePort: 80
        path: /jb(/|$)(.*)
status:
  loadBalancer:
    ingress:
    - ip: 34.68.78.46
I've run some tests on my GKE cluster and replicated your behavior using two hello-world applications, v1 and v2.
Scenario 1
HW 1
spec:
  containers:
  - name: hello1
    image: gcr.io/google-samples/hello-app:1.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
SVC HW1
spec:
  type: NodePort
  selector:
    key: app
  ports:
  - port: 80
    targetPort: 8080
HW 2
spec:
  containers:
  - name: hello2
    image: gcr.io/google-samples/hello-app:2.0
    env:
    - name: "PORT"
      value: "80"
SVC HW2
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello2
Ingress
spec:
  rules:
  - http:
      paths:
      - path: /hello2
        backend:
          serviceName: h2
          servicePort: 80
      - path: /hello
        backend:
          serviceName: fs
          servicePort: 80
Outputs:
$ curl 34.117.70.75/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-7rmmd
$ curl 34.117.70.75/hello2
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
In this scenario, the Deployment was configured (via the PORT environment variable) to create a pod that listens on port 80, and containerPort was skipped in the Deployment. You can verify what the pod is actually listening on using the netstat command.
$ kubectl exec -ti h2-deploy-6dbf5b7899-g7rbj -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::80 :::* LISTEN 1/hello-app
/ #
In your Service you set targetPort: 8080, so the Service expects traffic to go through port 8080. As the pod is listening only on port 80 while the traffic goes to 8080, you get the 502 error.
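In other words, the three port fields have to line up. A minimal sketch (the numbers mirror this scenario):
# Deployment: the container actually listens on 8080
ports:
- containerPort: 8080
---
# Service: targetPort must match what the container listens on
ports:
- port: 80          # port the Service itself exposes
  targetPort: 8080  # port the pod is listening on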
Scenario 2
After changing the value from "80" to "8080" and applying the new configuration:
$ curl 34.117.70.75/hello
Hello, world!
Version: 1.0.0
Hostname: fd-c6d79cdf8-7rmmd
$ curl 34.117.70.75/hello2
Hello, world!
Version: 2.0.0
Hostname: h2-deploy-5f5ccfbf9f-fjhrb
Netstat:
$ kubectl exec -ti f2-deploy-5f5ccfbf9f-fjhrb -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/hello-app
Solution
Solution 1
You should change your application deployment to:
env:
- name: "PORT"
  value: "8080"
and apply the new configuration.
Solution 2
Use containerPort:
spec:
  containers:
  - name: jb
    image: "us.gcr.io/third-nature-273904/jb-img-1-0:v3"
    ports:
    - containerPort: 8080
Note
Please note that if you created your own image and used EXPOSE in your Dockerfile, you should configure your Deployment to use that specific port.
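For example (a sketch), if the Dockerfile contains EXPOSE 80, the matching pieces would be:
# Dockerfile
EXPOSE 80
# Deployment
ports:
- containerPort: 80
# Service
ports:
- port: 80
  targetPort: 80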
Let me know if you still have issues.

Why can't I access my web service from the Kubernetes cluster?

I'm trying to execute an application inside a Kubernetes cluster.
I used to launch the application with docker-compose without problems, but when I created my Kubernetes deployment files, I was not able to access the service inside the cluster, even after exposing it. Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  # type: LoadBalancer
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: jksun12/vdsaipro
        # command: ["/run.sh"]
        ports:
        - containerPort: 80
        - containerPort: 3306
        # volumeMounts:
        # - name: myapp-pv-claim
        #   mountPath: /var/lib/mysql
      # volumes:
      # - name: myapp-pv-claim
      #   persistentVolumeClaim:
      #     claimName: myapp-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pv-claim
  labels:
    app: myapp
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
Here is the result of kubectl describe service myapp-service:
Name: myapp-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP: 10.109.12.113
Port: port-1 80/TCP
TargetPort: 80/TCP
NodePort: port-1 31892/TCP
Endpoints: 172.18.0.5:80,172.18.0.8:80,172.18.0.9:80
Port: port-2 3306/TCP
TargetPort: 3306/TCP
NodePort: port-2 32393/TCP
Endpoints: 172.18.0.5:3306,172.18.0.8:3306,172.18.0.9:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
And here are the errors that I get when I try to access them:
curl 172.17.0.2:32393
curl: (1) Received HTTP/0.9 when not allowed
And here is the result when I try to access the other port:
curl 172.17.0.2:31892
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
I'm running Ubuntu Server 20.04.1 LTS. The setup runs on top of Minikube.
Thanks for your help.
If you are accessing the service from inside the cluster, use the ClusterIP as the IP. So the curl commands should be against 10.109.12.113:80 and 10.109.12.113:3306.
In case you are accessing it from outside the cluster, use a NODEIP and NODEPORT. So the curl commands should be against <NODEIP>:32393 and <NODEIP>:31892.
From inside the cluster I would also use the pod IPs directly, to understand whether the issue is at the service level or at the pod level.
You need to make sure that the application is listening on port 80 and port 3306. Merely declaring containerPort as 80 and 3306 does not make the application listen on those ports.
Also make sure that the application code inside the pod is listening on 0.0.0.0 instead of 127.0.0.1.
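A few commands that help narrow this down (the pod name is illustrative; substitute one from kubectl get pods):
# which address and port is the process actually listening on?
kubectl exec -it myapp-xxxxx -- netstat -plnt
# does the Service have endpoints backing it?
kubectl get endpoints myapp-service
# for access from outside, find a node IP and use the NodePort
kubectl get nodes -o wide
curl <NODEIP>:31892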

Connection refused error when deploying Couchbase in Kubernetes (failed to connect to 127.0.0.1 port 8091: Connection refused)

I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running and nothing seemed to have an issue at first glance. But when I tried to hit the host URL, it gave me a bad gateway, and when I looked into the logs of the master's pod it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because a 15-second sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but sleep is usually not the best option. (Actually, this whole image is full of bad practices.)
A better approach would be to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done
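As a complementary, Kubernetes-side safeguard, a readiness probe keeps the Service from routing traffic to the pod before port 8091 accepts connections (it does not fix the script's internal curl). A sketch for the master pod spec, with illustrative probe values:
containers:
- name: couchbase-master
  image: arungupta/couchbase:k8s
  ports:
  - containerPort: 8091
  readinessProbe:
    # mark the pod ready only once port 8091 accepts TCP connections
    tcpSocket:
      port: 8091
    initialDelaySeconds: 15
    periodSeconds: 5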

Kubernetes Nginx Ingress Controller expose Nginx Webserver

I basically want to access the Nginx hello page externally by URL. I've made a (working) A record for a subdomain pointing to my v-server running Kubernetes and Nginx ingress: vps.my-domain.com
I installed Kubernetes via kubeadm on CoreOS as a single-node cluster using these tutorials: https://kubernetes.io/docs/setup/independent/install-kubeadm/, https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, and nginx-ingress using https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal.
I also added the following entry to the /etc/hosts file:
31.214.xxx.xxx vps.my-domain.com
(xxx was replaced with the last three digits of the server IP)
I used the following file to create the deployment, service, and ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    run: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "False"
spec:
  rules:
  - host: vps.my-domain.com
    http:
      paths:
      - backend:
          serviceName: my-nginx
          servicePort: 80
Output of describe ing:
core#vps ~/k8 $ kubectl describe ing
Name: my-nginx
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
vps.my-domain.com
my-nginx:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1",...}
kubernetes.io/ingress.class: nginx
ingress.kubernetes.io/ssl-redirect: False
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UPDATE 49m (x2 over 56m) nginx-ingress-controller Ingress default/my-nginx
While I can curl the Nginx hello page using the node IP and port 80, it doesn't work from outside the VM: Failed to connect to vps.my-domain.com port 80: Connection refused
Did I forget something, or is the configuration just wrong? Any help or tips would be appreciated!
Thank you
EDIT:
Visiting "vps.my-domain.com:30519" gives me the nginx welcome page. But in the config I specified port 80.
I got the port from the output of get services:
core#vps ~/k8 $ kubectl get services --all-namespaces | grep "my-nginx"
default my-nginx ClusterIP 10.107.5.14 <none> 80/TCP 1h
I also got it to work on port 80 by adding
externalIPs:
- 31.214.xxx.xxx
to the my-nginx service. But this is not how it's supposed to work, right? In the tutorials and examples for Kubernetes and ingress-nginx, it always worked without externalIPs. Also, the ingress rules don't work now (e.g. if I set the path to /test).
So apparently I was missing one part: the load balancer. I'm not sure why this wasn't mentioned in those instructions as a requirement, but I followed this tutorial: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb and now everything works.
Since MetalLB requires a range of IP addresses, you have to list your single IP address with the /32 subnet mask: 31.214.xxx.xxx/32
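For reference, the address pool then looks roughly like this, using the legacy MetalLB ConfigMap format from that tutorial's era (newer MetalLB versions configure this with CRDs instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # a single address, expressed as a /32 range
      - 31.214.xxx.xxx/32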
