On my server, in MicroK8s, I created a Kubernetes Service that is exposed via NodePort, but it refuses the connection and I am not sure why. Whenever I telnet to the NodePort (31000), the connection is refused. A similar service is provided by the MicroK8s registry addon, which listens on port 32000; telnetting to that port works fine both from the host itself and from outside. No firewall is running, ufw is disabled.
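One detail worth noting from the kubectl output further down is that the Service port is 1194:31000/UDP, and telnet only speaks TCP, so a refused TCP connection does not by itself prove the UDP service is unreachable. A minimal sketch of a UDP-capable probe instead, assuming netcat (nc) is installed on the host:
# probe the NodePort with UDP; -u selects UDP, -z only probes, -w1 sets a 1s timeout
nc -vz -u -w1 127.0.0.1 31000
# for comparison, the registry addon on 32000 is plain TCP
nc -vz -w1 127.0.0.1 32000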
This is the service:
apiVersion: v1
kind: Service
metadata:
  namespace: openvpn
  name: openvpn
  labels:
    app: openvpn
spec:
  selector:
    app: openvpn
  type: NodePort
  ports:
    - name: openvpn
      nodePort: 31000
      port: 1194
      targetPort: 1194
status:
  loadBalancer: {}
This is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: openvpn
  name: openvpn
  labels:
    app: openvpn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openvpn
  template:
    metadata:
      labels:
        app: openvpn
    spec:
      containers:
        - image: private.registry.com/myovpn:1
          name: openvpn-server
          imagePullPolicy: Always
          ports:
            - containerPort: 1194
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
This is the created service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
openvpn NodePort 10.152.183.80 <none> 1194:31000/UDP 9m19s
And this is its description:
Name: openvpn
Namespace: openvpn
Labels: app=openvpn
app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: openvpn
meta.helm.sh/release-namespace: openvpn
Selector: app=openvpn
Type: NodePort
IP Families: <none>
IP: 10.152.183.80
IPs: 10.152.183.80
Port: openvpn 1194/UDP
TargetPort: 1194/TCP
NodePort: openvpn 31000/UDP
Endpoints: 10.1.246.217:1194
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The endpoint is present:
NAME ENDPOINTS AGE
openvpn 10.1.246.228:1194 110m
kubectl get nodes -o wide output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hostname Ready <none> 24d v1.20.7-34+df7df22a741dbc 194.xxx.xxx.xxx <none> Ubuntu 20.04.2 LTS 5.4.0-73-generic containerd://1.3.7
The Dockerfile is mega simple. Just basics:
FROM alpine:3
ENV HOST=""
RUN apk add openvpn
RUN mkdir -p /opt/openvpn/sec
COPY ./run.sh /opt/openvpn
RUN chmod +x /opt/openvpn/run.sh
COPY ./openvpn.conf /opt/openvpn
COPY ./sec/srv.key /opt/openvpn/sec
COPY ./sec/srv.crt /opt/openvpn/sec
COPY ./sec/ca.crt /opt/openvpn/sec
COPY ./sec/dh2048.pem /opt/openvpn/sec
ENTRYPOINT ["/bin/sh", "/opt/openvpn/run.sh"]
And the run script:
#!/bin/sh
mkdir -p /dev/net
mknod /dev/net/tun c 10 200
openvpn --config /opt/openvpn/openvpn.conf --local 0.0.0.0
Nothing special. Any ideas why it does not work?
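For completeness, one way to confirm the process inside the pod is actually bound to UDP 1194 (a sketch, assuming the busybox netstat that ships with Alpine is available in the image):
# list listening UDP sockets inside the running pod
kubectl -n openvpn exec deploy/openvpn -- netstat -lun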
Related
I am using Minikube and here is my configuration:
kubectl describe deployment mysql
the output:
Name: mysql
Namespace: default
CreationTimestamp: Sat, 12 Nov 2022 02:20:54 +0200
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=mysql
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'mysql-pass'> Optional: false
Mounts:
/docker-entrypoint-initdb.d from mysql-init (rw)
Volumes:
mysql-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mysql-init
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-77fd55bbd9 (1/1 replicas created)
When I try to connect to it using MySQL Workbench, it shows me an error (screenshot not included).
However, when I execute this line to create a mysql client and try to connect to the MySQL server:
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p
and then enter the password, it works well! But I would still prefer to use Workbench.
Any help please?
Edit 1:
Here is the yaml file for the deployment and the service:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-init
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: mysql-init
          configMap:
            name: mysql-init
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    app: mysql
First make sure your service is running, so
kubectl get service
should return something like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.99.140.115 <none> 3306/TCP 2d6h
From that point onwards, I'd try running a port-forward first:
kubectl port-forward service/mysql 3306:3306
This should allow you to connect even when using a ClusterIP service.
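With that port-forward running in one terminal, Workbench (or the mysql CLI) can point at 127.0.0.1 instead of the cluster. A quick sketch, reusing the skaffold user from the question:
# in a second terminal, while kubectl port-forward is still running
mysql -h 127.0.0.1 -P 3306 -u skaffold -p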
If you want to connect directly to your mysql Deployment's Pod via localhost, first, you have to forward a Pod's container port to the localhost.
kubectl port-forward <pod-name> <local-port>:<container-port>
Then your mysql will be accessible on localhost:<local-port>.
The other way to communicate with your Pod is to create a Service object that will pass your requests directly to the Pod. There are a couple of Service types for different kinds of usage. Check the documentation to learn more.
The reason the following command
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u skaffold -p
connects to the database correctly is that the mysql client it starts runs in a Pod inside the cluster, where the Service name mysql resolves.
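If you want to see that the Service name only resolves from inside the cluster, a throwaway pod makes that easy to check (a sketch; the pod name dns-test is arbitrary):
# resolves to the ClusterIP when run in-cluster; the same lookup fails from your workstation
kubectl run -it --rm --image=busybox:1.36 --restart=Never dns-test -- nslookup mysql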
Edit 1
If you do not specify the type of a Service, the default is ClusterIP, which does not expose the port outside the cluster.
Because Minikube doesn't provision LoadBalancers, use the NodePort Service type instead.
Your Service YAML manifest should look like this:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    app: mysql
Finally, since your cluster is provisioned via Minikube, you still need to run the command below to fetch the Minikube IP and the Service’s NodePort:
minikube service <service-name> --url
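Whatever NodePort Minikube assigns, the printed host and port are what Workbench should point at. A sketch of the flow (the IP and port shown here are illustrative; use whatever the command actually prints):
# prints something like http://192.168.49.2:3XXXX
minikube service mysql --url
# then connect with the printed host and port
mysql -h 192.168.49.2 -P 3XXXX -u skaffold -p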
I am working on an API that I will deploy with Kubernetes and I want to test it locally.
I created the Docker image, successfully tested it locally, and pushed it to a public Docker registry. Now I would like to deploy it to a Kubernetes cluster. No errors are thrown, however I am not able to make a request to the endpoint exposed by the Minikube tunnel.
Steps to reproduce:
Start Minikube container: minikube start --ports=127.0.0.1:30000:30000
Create deployment and service: kubectl apply -f fastapi.yaml
Start minikube tunnel: minikube service fastapi-server
Encountered the following error: 192.168.49.2 took too long to respond.
requirements.txt:
anyio==3.6.1
asgiref==3.5.2
click==8.1.3
colorama==0.4.4
fastapi==0.78.0
h11==0.13.0
httptools==0.4.0
idna==3.3
pydantic==1.9.1
python-dotenv==0.20.0
PyYAML==6.0
sniffio==1.2.0
starlette==0.19.1
typing_extensions==4.2.0
uvicorn==0.17.6
watchgod==0.8.2
websockets==10.3
main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"status": "OK"}
Dockerfile:
FROM python:3.9
WORKDIR /
COPY . .
RUN pip install --no-cache-dir --upgrade -r ./requirements.txt
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
fastapi.yaml:
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
replicas: 1
selector:
matchLabels:
app: fastapi-server
template:
metadata:
labels:
app: fastapi-server
spec:
containers:
- name: fastapi-server
image: smdf/fastapi-test
ports:
- containerPort: 8000
name: http
protocol: TCP
---
# service
apiVersion: v1
kind: Service
metadata:
labels:
app: fastapi-server
name: fastapi-server
spec:
type: LoadBalancer
ports:
- port: 8000
targetPort: 8000
protocol: TCP
nodePort: 30000
Your problem is that you did not set the service selector:
# service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastapi-server
  name: fastapi-server
spec:
  selector:              # <------------- Missing part
    app: fastapi-server  # <-------------
  type: NodePort         # <------------- Set the type to NodePort
  ports:
    - port: 8000
      targetPort: 8000
      protocol: TCP
      nodePort: 30000
How do you check whether your service is defined properly?
I checked to see if there are any endpoints, and there weren't any, since you did not "attach" the service to your deployment:
kubectl get endpoints -A
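For reference, this is roughly what that check looks like once the selector is in place (a sketch; the endpoint IP shown is illustrative):
# after adding the selector, the Service should list a backend for the pod
kubectl get endpoints fastapi-server
# NAME             ENDPOINTS        AGE
# fastapi-server   10.1.0.12:8000   5m
# with the selector missing, no such endpoint shows up for fastapi-server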
For more info you can read this section of my GitHub repo:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services
I am recently new to Kubernetes and Docker in general and am experiencing issues.
I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build up and teardown of objects within the cluster. When I run skaffold dev the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request handling logic is simple and so I feel the issue is not in the route handling but the configuration of my cluster, specifically with the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.
Any help is greatly appreciated, as I have struggled with this bug for some time.
ingress yaml config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
Note that I have a redirect in my /etc/hosts file from ticketing.dev to 127.0.0.1
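For reference, that redirect is just a hosts entry; a sketch of how such a line can be added, assuming a standard /etc/hosts:
# map the Ingress host to the local machine
echo '127.0.0.1 ticketing.dev' | sudo tee -a /etc/hosts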
Auth service yaml config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: conorl47/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
skaffold yaml config:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: conorl47/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
For installing the ingress-nginx controller I followed the installation instructions at https://kubernetes.github.io/ingress-nginx/deploy/, namely the Docker Desktop installation instructions.
After running that command I see two Docker containers running in Docker Desktop (screenshot not included).
The two services created in the ingress-nginx namespace are:
❯ k get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.6.146 <pending> 80:30036/TCP,443:30465/TCP 22m
ingress-nginx-controller-admission ClusterIP 10.108.8.26 <none> 443/TCP 22m
When I kubectl describe both of these services I see the following:
❯ kubectl describe service ingress-nginx-controller -n ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.103.6.146
IPs: 10.103.6.146
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30036/TCP
Endpoints: 10.1.0.10:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30465/TCP
Endpoints: 10.1.0.10:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32485
Events: <none>
and :
❯ kubectl describe service ingress-nginx-controller-admission -n ingress-nginx
Name: ingress-nginx-controller-admission
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=1.0.0
helm.sh/chart=ingress-nginx-4.0.1
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.8.26
IPs: 10.108.8.26
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 10.1.0.10:8443
Session Affinity: None
Events: <none>
As it seems, you have made the ingress service of type LoadBalancer. This will usually provision an external load balancer from your cloud provider of choice, which is also why it is still pending: it is waiting for the load balancer to be ready, but that will never happen here.
If you want to have that ingress service reachable outside your cluster, you need to use type NodePort.
Their docs are not great on this point, and it seems to be like this by default, so you could download the content of https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml and modify it before applying. Or you can use Helm, which lets you configure this.
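If you go the Helm route, the service type can be set through a chart value rather than by editing the rendered manifest (a sketch, assuming the official ingress-nginx chart; the release and namespace names are just examples):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort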
You could also do it in this dirty fashion.
kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \
| sed s/LoadBalancer/NodePort/g \
| kubectl apply -f -
You could also edit in place.
kubectl edit svc ingress-nginx-controller -n ingress-nginx
I'm trying to run an application inside a Kubernetes cluster.
I used to launch the application with docker-compose without problems, but with my Kubernetes deployment files I am not able to access the service inside the cluster, even after exposing it. Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  # type: LoadBalancer
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: jksun12/vdsaipro
          # command: ["/run.sh"]
          ports:
            - containerPort: 80
            - containerPort: 3306
          # volumeMounts:
          #   - name: myapp-pv-claim
          #     mountPath: /var/lib/mysql
      # volumes:
      #   - name: myapp-pv-claim
      #     persistentVolumeClaim:
      #       claimName: myapp-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pv-claim
  labels:
    app: myapp
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
Here is the result of kubectl describe service myapp-service:
Name: myapp-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: NodePort
IP: 10.109.12.113
Port: port-1 80/TCP
TargetPort: 80/TCP
NodePort: port-1 31892/TCP
Endpoints: 172.18.0.5:80,172.18.0.8:80,172.18.0.9:80
Port: port-2 3306/TCP
TargetPort: 3306/TCP
NodePort: port-2 32393/TCP
Endpoints: 172.18.0.5:3306,172.18.0.8:3306,172.18.0.9:3306
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
And here are the errors that I get when I try to access them:
curl 172.17.0.2:32393
curl: (1) Received HTTP/0.9 when not allowed
And here is the result when I try to access the other port:
curl 172.17.0.2:31892
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
curl: (7) Failed to connect to 172.17.0.2 port 31892: Connection refused
I'm running Ubuntu Server 20.04.1 LTS. The setup runs on top of Minikube.
Thanks for your help.
If you are accessing the service from inside the cluster, use the ClusterIP as the IP, so curl should target 10.109.12.113:80 and 10.109.12.113:3306.
If you are accessing it from outside the cluster, use the node IP and NodePort, so curl should target <NODEIP>:32393 and <NODEIP>:31892.
From inside the cluster I would also hit the Pod IPs directly, to understand whether the issue is at the Service level or the Pod level.
You need to make sure that the application is listening on port 80 and port 3306. Declaring containerPort 80 and 3306 does not by itself make the application listen on those ports.
Also make sure that the application code inside the pod listens on 0.0.0.0 instead of 127.0.0.1.
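One quick way to check what the processes inside a pod are actually bound to (a sketch, assuming netstat or a similar tool exists in the image; many slim images ship without it):
# list listening TCP sockets inside one of the deployment's pods
kubectl exec deploy/myapp -- netstat -tln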
I have created a simple PHP application and am trying to access it from outside the Kubernetes cluster, but I am unable to access the application.
my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: phpapp
  template:
    metadata:
      labels:
        app: phpapp
    spec:
      containers:
        - image: rajendar38/myhtmlapp
          name: myhtmlapp
          ports:
            - containerPort: 80
my service.yml
apiVersion: v1
kind: Service
metadata:
  name: php-service
spec:
  selector:
    app: myhtmlapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
rajendar@HP-EliteBook:~/Desktop/work$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
my-service ClusterIP 10.102.235.244 <none> 4000/TCP 24h
php-service NodePort 10.110.73.30 <none> 80:31000/TCP 22m
I am using Minikube for this application.
When I try to connect to http://127.0.0.1:31000/test.html,
I am unable to connect to the application.
Thanks
Rajendar
Minikube is using a virtual machine to provide the single node cluster.
A NodePort service is exposed on the VM's network, which is usually not the same as your local machine.
Use minikube ip to determine the IP of the machine and use that IP instead of localhost or 127.0.0.1 to access NodePort services on the minikube cluster.
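Putting that together for this service, a sketch of the check (the path /test.html and NodePort 31000 come from the question):
# hit the NodePort on the Minikube VM rather than on localhost
curl http://$(minikube ip):31000/test.html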
The targetPort and port are both 80, so the service is using port 80 for both.
Try:
  port: 8000
  targetPort: 80
As @Thomas says, you must find the IP of the VM with:
minikube ip
and then access the service with that IP and, in your case, port 31000.