Minikube Ingress Controller (with custom domain) Not Loading - docker

I'm trying to run a VueJS app on my local machine with minikube and Kubernetes.
I applied the YAML file and added the IP address of the ingress to /etc/hosts on my macOS (M1) machine.
neu.com does not load, nor does the IP address of the ingress controller.
What I've tried:
- Running the service with a NodePort (it loads up fine)
- Removing everything and redoing the whole setup from scratch
(The minikube ingress addon is enabled.)
Here is the version info for all the tools I'm using.
kubectl version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.1
-------------------------
minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
And this is the YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleo
  labels:
    app: cleo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cleo
  template:
    metadata:
      labels:
        app: cleo
    spec:
      containers:
      - name: cleo
        image: image-name-of-my-vuejs-app
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cleo-service
spec:
  selector:
    app: cleo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cleo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: neu.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: cleo-service
            port:
              number: 80
The service and ingress are running:
Can somebody see what the problem is?

Direct access to the ingress IP only works on Linux; the docker network is not reachable from the host on macOS or Windows.
https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds
Reference: https://github.com/kubernetes/minikube/issues/13951
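A workaround that is often suggested for the docker driver on macOS is to map the hostname to 127.0.0.1 and keep minikube tunnel running, so the ingress controller is reached through the tunnel instead of the cluster IP (a sketch; it assumes ports 80/443 are free on the host, and the tunnel will ask for sudo to bind them):

# /etc/hosts on the Mac
127.0.0.1 neu.com

# in a separate terminal, leave this running
minikube tunnel

After that, http://neu.com should reach the NGINX ingress controller and be routed to cleo-service.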

Related

error when creating "STDIN": Internal error occurred while running skaffold dev

So, I'm using minikube v1.19.0 on Ubuntu with nginx-ingress and Kubernetes. I have two Node apps, auth and client, each built into its own Docker image.
I have 4 Kubernetes config files, which are as follows:
auth-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: xyz/auth
        env:
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
auth-mongo-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
      - name: auth-mongo
        image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
  - name: db
    protocol: TCP
    port: 27017
    targetPort: 27017
client-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: xyz/client
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
  - name: client
    protocol: TCP
    port: 3000
    targetPort: 3000
ingress-srv.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3000
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 3000
skaffold.yaml:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
    - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
  - image: xyz/auth
    context: auth
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: 'src/**/*.ts'
        dest: .
  - image: xyz/client
    context: client
    docker:
      dockerfile: Dockerfile
    sync:
      manual:
      - src: '**/*.js'
        dest: .
Now, when I run skaffold dev, the following error comes up:
Listing files to watch...
- xyz/auth
- xyz/client
Generating tags...
- xyz/auth -> xyz/auth:abcb6e4
- xyz/client -> xyz/client:abcb6e4
Checking cache...
- xyz/auth: Found Locally
- xyz/client: Found Locally
Starting test...
Tags used in deployment:
- xyz/auth -> xyz/auth:370487d5c0136906178e602b3548ddba9db75106b22a1af238e02ed950ec3f21
- xyz/client -> xyz/client:a56ea90769d6f31e983a42e1c52275b8ea2480cb8905bf19b08738e0c34eafd3
Starting deploy...
- deployment.apps/auth-depl configured
- service/auth-srv configured
- deployment.apps/auth-mongo-depl configured
- service/auth-mongo-srv configured
- deployment.apps/client-depl configured
- service/client-srv configured
- Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
exiting dev mode because first deploy failed: kubectl apply: exit status 1
Everything was working fine until I reinstalled minikube; since then I have been getting this problem.
Need some help here.
I just found out the issue: when reinstalling minikube, the validating webhook was not deleted, and that is what causes the error. It should be removed with the following command.
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
While reinstalling, I had forgotten to remove this webhook, which is installed by the manifests, and that is what created the problem.
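To double-check, you can list the webhook configurations and, after removing the stale one, re-enable the addon so the webhook and the admission service it points to are recreated together (a sketch; the resource name comes from the default ingress-nginx manifests):

kubectl get validatingwebhookconfigurations
minikube addons disable ingress
minikube addons enable ingress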
Additional links related to this problem:
Nginx Ingress: service "ingress-nginx-controller-admission" not found
Nginx Ingress Controller - Failed Calling Webhook
Your logs are clearly pointing at an issue with the Ingress API version:
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+,
unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Your cluster version is probably above 1.14; minikube may have been updated to a newer Kubernetes version when you reinstalled it.
Example of an Ingress using the current API:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: external-lb
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
The latest Ingress apiVersion is networking.k8s.io/v1; you can check the version of your Kubernetes cluster and verify which API it supports.
https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
To check which API versions your Kubernetes cluster supports, you can run:
kubectl api-versions
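For example, on a recent cluster you would typically see something like the following (output varies with the cluster version):

kubectl api-versions | grep networking.k8s.io
# networking.k8s.io/v1

kubectl explain ingress.spec.rules.http.paths.backend
# shows the service name/port fields used by networking.k8s.io/v1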

EKS help needed

Need some basic help with EKS. Not sure what I am doing wrong.
I have a Java Spring Boot application as a Docker container in ECR.
I created a simple deployment manifest:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: java-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-microservice
  template:
    metadata:
      labels:
        app: java-microservice
    spec:
      containers:
      - name: java-microservice-container
        image: xxxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/yyyyyyy
        ports:
        - containerPort: 80
I created a LoadBalancer Service to expose this externally:
loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The pods got created, and I can see they are running.
When I run kubectl get service java-microservice-service, I see the load balancer is running.
When I go to the browser and try to access the application via http://loadbalancer-address, I cannot reach it.
What am I missing? How do I go about debugging this?
Thanks in advance.
OK, so I changed the port in my YAML files to 8080 and it seems to be working fine.
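For reference, a minimal sketch of the Service with the port mapping spelled out, assuming the Spring Boot container listens on its default port 8080 (names taken from the question); the Deployment's containerPort would be changed to 8080 to match:

apiVersion: v1
kind: Service
metadata:
  name: java-microservice-service
spec:
  type: LoadBalancer
  selector:
    app: java-microservice
  ports:
  - protocol: TCP
    port: 80          # port the load balancer exposes
    targetPort: 8080  # port the Spring Boot app actually listens on

You can verify the Service actually selects the pods with kubectl get endpoints java-microservice-service; an empty ENDPOINTS column usually means the selector or the target port does not match.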

Kubernetes container ports setup similar to docker-compose?

I'm having trouble setting up my k8s pods exactly how I want. The problem is that I have multiple containers which listen on the same ports (80, 443). On a remote machine, I would normally use docker-compose with 'ports - 12345:80' to set this up. With k8s, it appears from all of the examples I have found that the only option for a container is to expose a port, not to proxy it. I know I can use reverse proxies to forward to multiple ports, but that would require the images to use different ports rather than using the same port and having the container forward the requests. Is there a way to do this in k8s?
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: xxx.xxx.xxx.xxx
  selector:
    app: app
    tier: backend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: app
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        tier: backend
        track: stable
    spec:
      containers:
      - name: app
        image: image:example
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: xxxxxxx
Ideally, I would be able to have the containers on a Node listening to different ports, which the applications running in those containers continue to listen to 80/443, and my services would route to the correct container as necessary.
My load balancer is working correctly, as is my first container. Adding a second container succeeds, but the second container can't be reached. The second container uses a similar script with different names and a different image for deployment.
The answer here is to add a Service for the pod, where the ports are declared. Using Kompose to convert a docker-compose file, this is the result:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 80
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}
as well as
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: pathToKompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: pathToKompose.exe convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      containers:
      - image: image:example
        imagePullPolicy: ""
        name: app
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
Some of the fluff from Kompose could be removed, but the relevant answer to this question is declaring the port and target port for the pod in a service, and exposing the targetPort as a containerPort in the deployment for the container.
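Stripped of the Kompose fluff, the same mapping can be written as the following sketch (names kept from the converted files):

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    io.kompose.service: app
  ports:
  - name: "5000"
    port: 5000      # port other pods (or a load balancer) use
    targetPort: 80  # port the container keeps listening on
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    io.kompose.service: app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: app
  template:
    metadata:
      labels:
        io.kompose.service: app
    spec:
      containers:
      - name: app
        image: image:example
        ports:
        - containerPort: 80

Each container can keep listening on 80/443 internally; it is the Service's port/targetPort pair that plays the role of docker-compose's '12345:80' mapping.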
Thanks to David Maze and GintsGints for the help!

Connection refused error when deploying couchbase in kubernetes {failed to connect to 127.0.0.1 port 8091: Connection refused}

I used the following YAML files to deploy Couchbase in Kubernetes.
Master:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-master-rc
spec:
  replicas: 1
  selector:
    app: master-pod
  template:
    metadata:
      labels:
        app: master-pod
    spec:
      containers:
      - name: couchbase-master
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: MASTER
        ports:
        - containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: couchbase-master-service
  labels:
    app: couchbase-master-service
spec:
  ports:
  - port: 8091
  selector:
    app: master-pod
  type: LoadBalancer
Worker:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-worker-rc
spec:
  replicas: 1
  selector:
    app: couchbase-worker-pod
  template:
    metadata:
      labels:
        app: couchbase-worker-pod
    spec:
      containers:
      - name: couchbase-worker
        image: arungupta/couchbase:k8s
        env:
        - name: TYPE
          value: "WORKER"
        - name: COUCHBASE_MASTER
          value: "couchbase-master-service"
        - name: AUTO_REBALANCE
          value: "false"
        ports:
        - containerPort: 8091
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: couchbase
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: xxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: couchbase-master-service
          servicePort: 8091
The pods started running and nothing seemed to have an issue at first glance. But when I tried to hit the host URL it gave me a bad gateway, and when I looked into the logs of the master's pod it showed connection refused at 127.0.0.1:8091. I tried to exec into the pod and run the curl statements from entrypoint.sh manually, but that also gave me the error "failed to connect to 127.0.0.1 port 8091: Connection refused".
I have found that the master image is using this entrypoint script.
I ran this container image, and it looks like the curl is failing because the 15s sleep is not enough time for couchbase-server to start and open port 8091.
The easiest thing you could do is set this sleep to a higher value, but a fixed sleep is usually not the best option. (Actually, this whole image is full of bad practices.)
A better approach is to replace the sleep with the following lines, which wait until port 8091 is open:
while ! nc -z localhost 8091; do
  sleep 1
done
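For context, a rough sketch of how the wait loop would slot into an entrypoint script of this kind (the actual startup command and curl setup calls in arungupta/couchbase:k8s differ; this is only illustrative):

#!/bin/bash
# start the Couchbase server process in the background
# (however the base image's entrypoint starts it)
/entrypoint.sh couchbase-server &

# wait until the REST port is actually accepting connections,
# instead of a fixed 'sleep 15'
while ! nc -z localhost 8091; do
  sleep 1
done

# ...now run the cluster-setup curl calls against http://127.0.0.1:8091...

wait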

Google Kubernetes Engine Ingress UNHEALTHY backend service

Kind note: I have googled a lot and looked at many questions related to this issue on Stack Overflow, but I couldn't solve it, so please don't mark this as a duplicate!
I'm trying to deploy 2 services (one is Python Flask and the other is NodeJS) on Google Kubernetes Engine. I have created two Kubernetes Deployments, one for each service, and two Kubernetes Services of type NodePort, one for each service. Then I created an Ingress and mentioned my endpoints, but the Ingress says that one backend service is UNHEALTHY.
Here are my Deployments YAML definitions:
# Pyservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pyservice
  labels:
    app: pyservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: pyservice
  template:
    metadata:
      labels:
        app: pyservice
    spec:
      containers:
      - name: pyservice
        image: docker.io/arycloud/docker_web_app:pyservice
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: docksecret
# Nodeservice deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodeservice
  labels:
    app: nodeservice
  namespace: default
spec:
  selector:
    matchLabels:
      app: nodeservice
  template:
    metadata:
      labels:
        app: nodeservice
        tier: web
    spec:
      containers:
      - name: nodeservice
        image: docker.io/arycloud/docker_web_app:nodeservice
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: docksecret
And, here are my services and Ingress YAML definitions:
# pyservcie service
kind: Service
apiVersion: v1
metadata:
  name: pyservice
spec:
  type: NodePort
  selector:
    app: pyservice
  ports:
  - protocol: TCP
    port: 5000
    nodePort: 30001
---
# nodeservcie service
kind: Service
apiVersion: v1
metadata:
  name: nodeservcie
spec:
  type: NodePort
  selector:
    app: nodeservcie
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /*
        backend:
          serviceName: pyservice
          servicePort: 5000
      - path: /node/svc/
        backend:
          serviceName: nodeservcie
          servicePort: 8080
The pyservice is working fine, but the nodeservice shows as an UNHEALTHY backend. Here's a screenshot:
I have even edited the firewall rules for all gke-.... and allowed all ports just to get past this issue, but it still shows the UNHEALTHY status for the nodeservice.
What's wrong here?
Thanks in advance!
Why are you using a GCE ingress class and then specifying a nginx rewrite annotation? In case you haven't realised, the annotation won't do anything to the GCE ingress.
You have also got 'nodeservcie' as your selector instead of 'nodeservice'.
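A corrected version of the second Service might look like this sketch (assuming the pods should carry the label app: nodeservice, as in the Deployment):

# nodeservice service
kind: Service
apiVersion: v1
metadata:
  name: nodeservice
spec:
  type: NodePort
  selector:
    app: nodeservice   # must match the pod label from the nodeservice Deployment
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 30002

If you rename the Service as well, remember to update the Ingress backend serviceName to whichever name you keep.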
