Related:
Kubernetes service external ip pending
Kubernetes (Minikube) external ip does not work
Initial state:
$ kubectl.exe get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
mongo-express-service LoadBalancer 10.102.123.226 <pending> 8081:30000/TCP 14m app=mongo-express
mongodb-service ClusterIP 10.104.217.138 <none> 27017/TCP 29m app=mongodb
After patching the service with an external IP:
$ kubectl patch svc mongo-express-service -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
service/mongo-express-service patched
The service now shows an external IP:
$ kubectl.exe get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
mongo-express-service LoadBalancer 10.102.123.226 172.31.71.218 8081:30000/TCP 14m app=mongo-express
mongodb-service ClusterIP 10.104.217.138 <none> 27017/TCP 29m app=mongodb
However, it's not reachable:
$ wget 172.31.71.218:30000
--2022-05-05 00:23:11-- http://172.31.71.218:30000/
Connecting to 172.31.71.218:30000... failed: Connection timed out.
Retrying.
--2022-05-05 00:23:33-- (try: 2) http://172.31.71.218:30000/
Connecting to 172.31.71.218:30000...
The service looks alright:
$ kubectl describe svc mongo-express-service
Name: mongo-express-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mongo-express
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.102.123.226
IPs: 10.102.123.226
External IPs: 172.31.71.218
Port: <unset> 8081/TCP
TargetPort: 8081/TCP
NodePort: <unset> 30000/TCP
Endpoints: 172.17.0.4:8081
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalIP 34m service-controller Count: 0 -> 1
Launching the service with minikube:
$ minikube.exe service mongo-express-service
|-----------|-----------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------------|-------------|---------------------------|
| default | mongo-express-service | 8081 | http://192.168.49.2:30000 |
|-----------|-----------------------|-------------|---------------------------|
* Starting tunnel for service mongo-express-service.
|-----------|-----------------------|-------------|-----------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------------------|-------------|-----------------------|
| default | mongo-express-service | | http://127.0.0.1:1298 |
|-----------|-----------------------|-------------|-----------------------|
* Opening service default/mongo-express-service in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
* Stopping tunnel for service mongo-express-service.
This works for the URL http://127.0.0.1:1298 but not for the external IP.
minikube tunnel also fails:
$ minikube tunnel
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service mongo-express-service.
When the tunnel is started, only the internal address is reachable; it was reachable even before the tunnel was started.
Setup: Windows 10, minikube started with the Docker driver (minikube start --driver=docker).
Is it possible to expose the internal address on Windows?
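For reference, one generic way to reach a service from the Windows host without a tunnel is kubectl port-forward. This is only a sketch of that approach, not part of the original setup; it assumes the mongo-express-service name and port 8081 from the manifests below:
# Forward local port 8081 to the service port 8081; --address 0.0.0.0 also binds
# the host's LAN address (an assumption about what "expose" should mean here).
kubectl port-forward --address 0.0.0.0 service/mongo-express-service 8081:8081
# Then browse http://localhost:8081 (or http://<windows-host-ip>:8081 from another machine).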
mongo-express.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
Looking at:
https://github.com/kubernetes/minikube/issues/7344#issuecomment-607318525
the Docker network does not seem to work on Windows.
List of Windows 10 editions on which Hyper-V can or cannot be enabled:
Operating System Requirements
The Hyper-V role can be enabled on these versions of Windows 10:
Windows 10 Enterprise
Windows 10 Professional
Windows 10 Education
The Hyper-V role cannot be installed on:
Windows 10 Home
Windows 10 Mobile
Windows 10 Mobile Enterprise
List all of the features available in the operating system:
DISM /Online /Get-Feature
See what name the Hyper-V feature is listed under there.
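If the feature is listed (on supported editions it typically appears as Microsoft-Hyper-V), it can be enabled with DISM as well; a sketch assuming that feature name:
REM Enable Hyper-V and its sub-features, then reboot when prompted.
DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V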
Download and install Windows 10 Client Hyper-V
Hyper-V on Windows 10 - Document links:
https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/windows_welcome
Requirements:
https://msdn.microsoft.com/virtualization/hyperv_on_windows/quick_start/walkthrough_compatibility
taken from:
https://social.technet.microsoft.com/Forums/ie/en-US/c3d8faaa-2e5a-4cfb-a681-9dfdf8bc5310/cant-install-hyperv-on-windows-10-version-1001024016384-feature-name-microsofthyperv-is?forum=win10itprovirt
Eventually I had to use the Docker VM. After reinstalling the latest versions of VirtualBox and minikube, the existing host could not be loaded; this is already mentioned here:
https://github.com/kubernetes/minikube/issues/9130
Windows version:
systeminfo /fo csv | ConvertFrom-Csv | select OS*, System*, Hotfix* | Format-List
OS Name : Microsoft Windows 10 Home
OS Version : 10.0.19044 N/A Build 19044
OS Manufacturer : Microsoft Corporation
OS Configuration : Standalone Workstation
OS Build Type : Multiprocessor Free
System Boot Time : 05/05/2022, 21:27:10
System Manufacturer : --
System Model : --
System Type : x64-based PC
System Directory : C:\WINDOWS\system32
System Locale : en-us;English (United States)
Hotfix(s) : 13 Hotfix(s) Installed.,[01]: KB5012117,[02]: KB4562830,[03]: KB4577586,[04]: KB4580325,[05]:
KB4598481,[06]: KB5000736,[07]: KB5003791,[08]: KB5012599,[09]: KB5006753,[10]: KB5007273,[11]:
KB5011352,[12]: KB5011651,[13]: KB5005699
I'm following the steps below to launch a multi-container app (a database and a web app). The steps are based on this.
--- THE STEPS BELOW ARE COPIED FROM THE ANSWER PROVIDED BY THIS USER: docker mysql in kuberneted ERROR 2005 (HY000): Unknown MySQL server host '' (-3) ---
First, use your favorite editor to start an eramba-cm.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: eramba
  namespace: eramba-1
data:
  c2.8.1.sql: |
    CREATE DATABASE IF NOT EXISTS erambadb;
    USE erambadb;
    ## IMPORTANT: MUST BE INDENT 2 SPACES AFTER c2.8.1.sql ##
    <copy & paste content from here: https://raw.githubusercontent.com/markz0r/eramba-community-docker/master/sql/c2.8.1.sql>
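The manifest references the eramba-1 namespace; kubectl create -f will fail if that namespace doesn't exist yet, so create it first (a step not spelled out in the copied answer):
kubectl create namespace eramba-1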
kubectl create -f eramba-cm.yaml
Create the storage for MariaDB:
cat << EOF > eramba-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eramba-storage
spec:
  storageClassName: eramba-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/osboxes/eramba/erambadb
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eramba-storage
  namespace: eramba-1
spec:
  storageClassName: eramba-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
...
EOF
kubectl create -f eramba-storage.yaml
Install bitnami/mariadb using Helm
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade -i eramba bitnami/mariadb --set auth.rootPassword=eramba,auth.database=erambadb,initdbScriptsConfigMap=eramba,volumePermissions.enabled=true,primary.persistence.existingClaim=eramba-storage --namespace eramba-1 --set mariadb.volumePermissions.enabled=true
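Before deploying the web app, it may help to confirm the MariaDB pod is ready; a generic check (not from the original answer), using the instance label the chart applies to its resources:
kubectl get pods -n eramba-1 -l app.kubernetes.io/instance=eramba -w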
Run eramba web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eramba-web
  template:
    metadata:
      labels:
        app: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_HOSTNAME
          value: eramba-mariadb
        - name: MYSQL_DATABASE
          value: erambadb
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: eramba
        - name: DATABASE_PREFIX
          value: ""
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  ports:
  - name: http
    nodePort: 30045
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: eramba-web
  type: NodePort
...
Now browse eramba-web via port-forward or http://<node ip>:30045.
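For the port-forward route, a minimal sketch assuming the eramba-web Service and namespace defined above:
# Forward local port 8080 to the eramba-web service, then browse http://localhost:8080
kubectl port-forward -n eramba-1 svc/eramba-web 8080:8080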
The kubectl get cm,pvc,pv,svc,pods output is:
root@osboxes:~# kubectl get cm,pvc,pv,svc,pods -o wide -n eramba-1
NAME DATA AGE
configmap/eramba 1 134m
configmap/eramba-mariadb 1 131m
configmap/kube-root-ca.crt 1 29h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/eramba-storage Bound eramba-storage 5Gi RWO eramba-storage 133m Filesystem
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/eramba-storage 5Gi RWO Retain Bound eramba-1/eramba-storage eramba-storage 133m Filesystem
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/eramba-mariadb ClusterIP 10.104.161.85 <none> 3306/TCP 131m app.kubernetes.io/component=primary,app.kubernetes.io/instance=eramba,app.kubernetes.io/name=mariadb
service/eramba-web NodePort 10.100.185.75 <none> 8080:30045/TCP 129m app.kubernetes.io/name=eramba-web
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/eramba-mariadb-0 1/1 Running 0 131m 10.20.0.6 osboxes <none> <none>
pod/eramba-web-6cc9c687d8-k6r9j 1/1 Running 0 129m 10.20.0.7 osboxes <none> <none>
When I tried to access 10.100.185.75:30045, the browser says it is not reachable.
root@osboxes:/home/osboxes/eramba# kubectl describe service/eramba-web -n eramba-1
Name: eramba-web
Namespace: eramba-1
Labels: app.kubernetes.io/name=eramba-web
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.185.75
IPs: 10.100.185.75
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 30045/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
The logs for the web-app pod:
root@osboxes:~# kubectl logs eramba-web-6cc9c687d8-k6r9j -n eramba-1
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.20.0.7. Set the 'ServerName' directive globally to suppress this message
root@osboxes:~#
I've noticed the lack of an endpoint for the eramba-web service. When I changed the selector to app: eramba-web, the endpoint got an IP, but the browser still can't reach the app.
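For context, the Service above selects app.kubernetes.io/name: eramba-web while the Deployment's pod template only carries app: eramba-web, so the Service matches no pods. A hedged sketch of the corrected part of the Service spec (adding the app.kubernetes.io/name label to the pod template would work just as well):
spec:
  selector:
    app: eramba-web   # matches the label the pod template actually sets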
This is a community wiki answer posted for better visibility. Feel free to expand it.
The requester uses the NodePort type for the eramba-web service. To access the application, it is necessary to use the IP addresses of the nodes in the cluster instead of the internal IP address 10.100.x.y.
From Kubernetes documentation:
NodePort: Exposes the Service on each Node's IP at a static port (the
NodePort). A ClusterIP Service, to which the NodePort Service routes,
is automatically created. You'll be able to contact the NodePort
Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
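In practice that means finding a node's IP and combining it with the node port from the Service output above (30045); for example:
# Look up the node IP (INTERNAL-IP / EXTERNAL-IP columns):
kubectl get nodes -o wide
# Then reach the service on <NodeIP>:<NodePort>:
curl http://<node-ip>:30045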
I have a Docker container which runs a basic front-end Angular app. I have verified it runs with no issues, and I can successfully access the web app in the browser with docker run -p 5901:80 formbuilder-stand-alone-form.
I am able to successfully deploy it with minikube and Kubernetes on my cloud dev server:
apiVersion: v1
kind: Service
metadata:
  name: stand-alone-service
spec:
  selector:
    app: stand-alone-form
  ports:
    - protocol: TCP
      port: 5901
      targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stand-alone-form-app
  labels:
    app: stand-alone-form
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stand-alone-form
  template:
    metadata:
      labels:
        app: stand-alone-form
    spec:
      containers:
        - name: stand-alone-form-pod
          image: formbuilder-stand-alone-form
          imagePullPolicy: Never
          ports:
            - containerPort: 80
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get pods
NAME READY STATUS RESTARTS AGE
stand-alone-form-app-6d4669f569-vsffc 1/1 Running 0 6s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
stand-alone-form-app 1/1 1 1 8s
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d7h
stand-alone-service LoadBalancer 10.96.197.197 <pending> 5901:30443/TCP 21s
However, I am not able to access it at the URL:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh
% minikube service stand-alone-service
|-----------|---------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------------|-------------|---------------------------|
| default | stand-alone-service | 5901 | http://192.168.49.2:30443 |
|-----------|---------------------|-------------|---------------------------|
In this example, http://192.168.49.2:30443/ gives me a dead web page.
I disabled all my iptables for troubleshooting.
Any idea how to access the front-end web app? I was thinking I might have the selectors wrong, but I'm not sure.
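One quick, generic way to rule out a selector mismatch (not part of the original post) is to check whether the Service has endpoints; an empty ENDPOINTS column would mean no pods match:
kubectl get endpoints stand-alone-service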
UPDATE: Here are the requested outputs:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl describe service stand-alone-service
Name: stand-alone-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=stand-alone-form
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.197.197
IPs: 10.96.197.197
LoadBalancer Ingress: 10.96.197.197
Port: <unset> 5901/TCP
TargetPort: 80/TCP
NodePort: <unset> 30443/TCP
Endpoints: 172.17.0.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% minikube tunnel
Password:
Status:
machine: minikube
pid: 237498
route: 10.96.0.0/12 -> 192.168.49.2
minikube: Running
services: [stand-alone-service]
errors:
minikube: no errors
router: no errors
loadbalancer emulator: no errors
Note: I noticed that with the tunnel I now have an external IP for the LoadBalancer:
one@work ...github/stand-alone-form-builder-hhh/form-builder-hhh (main)
% kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d11h
stand-alone-service LoadBalancer 10.98.162.179 10.98.162.179 5901:31596/TCP 3m10s
It looks like your LoadBalancer hasn't quite resolved correctly, as the External-IP is still marked as <pending>.
According to Minikube, this happens when the tunnel is missing:
https://minikube.sigs.k8s.io/docs/handbook/accessing/#check-external-ip
Have you tried running minikube tunnel in a separate command window?
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
https://minikube.sigs.k8s.io/docs/commands/tunnel/
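A sketch of that workflow, keeping the tunnel running in its own terminal (the exact external IP comes from your own kubectl output):
# Terminal 1: keep this running
minikube tunnel
# Terminal 2: the Service should now show an EXTERNAL-IP instead of <pending>
kubectl get service stand-alone-service
curl http://<external-ip>:5901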
On my server running microk8s I created a Kubernetes service which is exposed via NodePort, but it refuses the connection and I am not sure why. Whenever I try to telnet to the NodePort (31000), the connection is refused. A similar service is provided by the microk8s registry addon, which listens on port 32000; telnetting to that port works fine both from the host itself and from outside. No firewall is running, and ufw is disabled.
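One detail worth noting up front: the Service below ends up exposing 1194:31000/UDP, and telnet only speaks TCP, so a TCP connection attempt against a UDP-only node port will be refused regardless of the service's health. A hedged sketch of a UDP-level probe instead, assuming netcat is installed:
# -u selects UDP, -z only probes, -v is verbose; UDP has no handshake, so the
# result is only indicative, not proof the OpenVPN server is reachable.
nc -vzu <node-ip> 31000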
This is the service:
apiVersion: v1
kind: Service
metadata:
  namespace: openvpn
  name: openvpn
  labels:
    app: openvpn
spec:
  selector:
    app: openvpn
  type: NodePort
  ports:
    - name: openvpn
      nodePort: 31000
      port: 1194
      targetPort: 1194
status:
  loadBalancer: {}
This is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: openvpn
  name: openvpn
  labels:
    app: openvpn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openvpn
  template:
    metadata:
      labels:
        app: openvpn
    spec:
      containers:
        - image: private.registry.com/myovpn:1
          name: openvpn-server
          imagePullPolicy: Always
          ports:
            - containerPort: 1194
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
This is the created service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openvpn NodePort 10.152.183.80 <none> 1194:31000/UDP 9m19s
And this is its description:
Name: openvpn
Namespace: openvpn
Labels: app=openvpn
app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: openvpn
meta.helm.sh/release-namespace: openvpn
Selector: app=openvpn
Type: NodePort
IP Families: <none>
IP: 10.152.183.80
IPs: 10.152.183.80
Port: openvpn 1194/UDP
TargetPort: 1194/TCP
NodePort: openvpn 31000/UDP
Endpoints: 10.1.246.217:1194
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The endpoint is present:
NAME ENDPOINTS AGE
openvpn 10.1.246.228:1194 110m
kubectl get nodes -o wide output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hostname Ready <none> 24d v1.20.7-34+df7df22a741dbc 194.xxx.xxx.xxx <none> Ubuntu 20.04.2 LTS 5.4.0-73-generic containerd://1.3.7
The Dockerfile is mega simple. Just basics:
FROM alpine:3
ENV HOST=""
RUN apk add openvpn
RUN mkdir -p /opt/openvpn/sec
COPY ./run.sh /opt/openvpn
RUN chmod +x /opt/openvpn/run.sh
COPY ./openvpn.conf /opt/openvpn
COPY ./sec/srv.key /opt/openvpn/sec
COPY ./sec/srv.crt /opt/openvpn/sec
COPY ./sec/ca.crt /opt/openvpn/sec
COPY ./sec/dh2048.pem /opt/openvpn/sec
ENTRYPOINT ["/bin/sh", "/opt/openvpn/run.sh"]
And the run script:
#!/bin/sh
mkdir -p /dev/net
mknod /dev/net/tun c 10 200
openvpn --config /opt/openvpn/openvpn.conf --local 0.0.0.0
Nothing special. Any ideas why it does not work?
I'm new to Kubernetes and I'm having some problems.
I have an Ubuntu server that I'm working on. I created pods and services, and I also have an API gateway pod and service. I want to reach this pod from my PC using the Ubuntu server's IP address.
But I cannot reach this pod from outside of the server.
The app in my Docker image is running on port 80.
My api-gateway.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: myapi/api-gateway
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  selector:
    app: api-gateway
  ports:
    - name: api-gateway
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007
  type: NodePort
  externalIPs:
    - <My Ubuntu Server IP Adress>
When I type kubectl get services api-gateway, I get:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-gateway NodePort 10.104.42.32 <MyUbuntuS IP> 80:30007/TCP 131m
Also, when I type kubectl describe services api-gateway, I get:
Name: api-gateway
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=api-gateway
Type: NodePort
IP Families: <none>
IP: 10.104.42.32
IPs: 10.104.42.32
External IPs: <My Ubuntu Server IP Adress>
Port: api-gateway 80/TCP
TargetPort: 80/TCP
NodePort: api-gateway 30007/TCP
Endpoints: 172.17.0.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 30m service-controller ClusterIP -> LoadBalancer
Normal Type 6m10s service-controller NodePort -> LoadBalancer
Normal Type 77s (x2 over 9m59s) service-controller LoadBalancer -> NodePort
So, how can I reach this pod from my PC's browser or Postman?
I have a problem with accessing my service from outside.
First of all, here are my config YAML files:
nginx-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: development
spec:
  selector:
    matchLabels:
      app: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: development
spec:
  type: LoadBalancer
  selector:
    app: my-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 51.15.41.227-51.15.41.227
Then I created the cluster. The command kubectl get all -o wide prints:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/my-nginx-5796dcf6c4-rxl6k 1/1 Running 1 20h 10.244.0.16 scw-7d6c86
pod/my-nginx-5796dcf6c4-zf7vd 1/1 Running 0 20h 10.244.1.4 scw-7a7908
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nginx-service LoadBalancer 10.100.63.177 51.15.41.227 80:30883/TCP 54m app=my-nginx
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/my-nginx 2 2 2 2 20h my-nginx nginx:1.7.9 app=my-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/my-nginx-5796dcf6c4 2 2 2 20h my-nginx nginx:1.7.9 app=my-nginx,pod-template-hash=5796dcf6c4
Everything looks fine; kubectl describe service/nginx-service also prints:
Name: nginx-service
Namespace: development
Labels:
Annotations:
Selector: app=my-nginx
Type: LoadBalancer
IP: 10.100.63.177
LoadBalancer Ingress: 51.15.41.227
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30883/TCP
Endpoints: 10.244.0.16:80,10.244.1.4:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 56m metallb-controller Assigned IP "51.15.41.227"
Running curl 51.15.41.227 on the master server prints "Welcome to nginx" and so on. Next I tried to open it from another network; it doesn't work. However, when I add the node port it works: curl 51.15.41.227:30883. All of this is on bare metal. I expected curl 51.15.41.227 from an external host to reach the service.
What did I do wrong?
It will definitely work with http://51.15.41.227 or 51.15.41.227:80.
You should definitely use the node port 30883 (the randomly assigned port) when accessing from an external network; otherwise it doesn't know where to route the request.
curl http://51.15.41.227:30883