Getting error while creating NGINX Ingress rules, failed calling webhook "validate.nginx.ingress.kubernetes.io" - azure-aks

I have deployed the NGINX Ingress Controller on Azure AKS and it is running well.
helm upgrade --install nginx-ingress ingress-nginx-3.10.1.tgz -n ingress-nginx
When I try to deploy the Ingress rule file below, I get the following error:
Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://nginx-ingress-ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
What does this error mean, and how do I fix it?

You have to add an inbound rule allowing access to port 8443 on your AKS cluster nodes.
Look at the Helm chart values and you will see the port used here:
controller.admissionWebhooks.port
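If opening port 8443 between the control plane and the nodes is not an option, a common workaround is to disable the validating webhook entirely. A sketch, assuming the same chart release as above (the values file name is hypothetical):

```yaml
# values-override.yaml (hypothetical name)
# Disables the admission webhook so Ingress objects are no longer validated
# by the controller's admission service (the call that is timing out).
controller:
  admissionWebhooks:
    enabled: false
```

Apply it with: helm upgrade --install nginx-ingress ingress-nginx-3.10.1.tgz -n ingress-nginx -f values-override.yaml. The trade-off is that invalid Ingress configurations are no longer rejected at apply time.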

Related

response 404 (backend NotFound), service rules for the path non-existent for GKE ingress for micro-frontend deployment and service

I have created the Deployment and Service for the micro-frontend app, and also created the GKE Ingress. There are two Services for two Deployments, and in the GKE Ingress YAML I have referenced both backend services.
I get a 404 if I curl the Ingress external IP.
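For reference, a GKE Ingress with two backends generally looks like the sketch below (all names, paths, and ports here are hypothetical). A 404 with "backend NotFound" usually means a path rule references a Service name or port that does not exist or does not match the Service definition:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microfrontend-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service   # must match an existing Service name
            port:
              number: 80         # must match a port declared on that Service
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
```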

Kubernetes - Ingress ERR_EMPTY_RESPONSE everytime

I'm trying to build my first Kubernetes project, but I may have some configuration issues.
For example, I wanted to run this project:
https://gitlab.com/codeching/kubernetes-multicontainer-application-react-nodejs-postgres-nginx
I did:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
Then
kubectl apply -f k8s
But when I browse to http://localhost I just get ERR_EMPTY_RESPONSE.
Does anyone know why? I have a fresh install of Docker Desktop and Kubernetes, everything is green and working, but somehow I can't run even this simple project.
The ingress-nginx controller Service is deployed as type LoadBalancer. If no load balancer is attached, you can port-forward the Service to reach applications in the cluster, e.g.:
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80

Not able to wget an external service through DNS, although it is possible through the IP address, from an OpenShift pod

We are configuring our app in OCP (OpenShift) through a Docker container. In the DeploymentConfig we have specified the hostAliases field under spec.template in the following way:
template:
  hostAliases:
  - ip: 123.123.123.123
    hostnames:
    - sample.example.com
After we create a deployment from this DeploymentConfig, it starts, but we are not able to do the following operation:
wget http://sample.example.com:1234/example/rest
It gives a "bad address" error, although the same call works through the IP:
wget http://123.123.123.123/example/rest
When we import the DeploymentConfig, the deployment starts, but afterwards we cannot find the hostAliases field in the YAML.
Please tell me how we can make the external service call through its DNS name.
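A likely cause, assuming standard Kubernetes pod-spec semantics (which OpenShift DeploymentConfigs follow for the pod template): hostAliases is a field of the pod spec, so it belongs under spec.template.spec, not directly under spec.template. A field in the wrong place is silently dropped, which would explain why it disappears after import. A sketch of the corrected placement:

```yaml
template:
  spec:
    hostAliases:
    - ip: "123.123.123.123"     # values taken from the question
      hostnames:
      - "sample.example.com"
```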

Kubernetes using Gitlab installing Ingress returns "?" as external IP

I have successfully connected my Kubernetes cluster with GitLab, and I was able to install Helm through the GitLab UI (Operations -> Kubernetes).
My problem is that when I click the "Install" button for Ingress, GitLab creates everything needed for the ingress controller, but one thing is missing: the external IP, which is shown as "?".
And if I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
it shows nothing, as if I don't have a load balancer that exposes an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using flannel as CNI
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something that I have to configure before installing Ingress? Do I need an external load balancer (my thought: GitLab will create that service for me)?
One more hint: after installation, the nginx ingress controller Service stays in the pending state because it cannot detect an external IP. I also modified the Service's YAML and manually added an "externalIPs:" entry; after that it was no longer pending, but I still couldn't find an external IP with the command above, and GitLab couldn't find one either.
EDIT:
This happens after installation:
see picture
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
see picture
In the event log you can see that I switched the type to "NodePort" once and then back to "LoadBalancer", and that I added the "externalIPs: 192.168.50.235" line to the YAML file. As you can see, there is an external IP, but GitLab is not detecting it.
Btw, I'm not using any cloud provider like AWS or GCE, and I found out that type LoadBalancer does not work that way without one. But there must be a solution for this without a cloud load balancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use a cloud provider to obtain the entry point (external IP) for the Ingress resource, bare-metal environments can switch to MetalLB, which implements Kubernetes Services of type LoadBalancer in clusters that don't run on a cloud provider; it can therefore also be used with the NGINX Ingress Controller.
Generally, MetalLB can be installed via a Kubernetes manifest file or with the Helm package manager, as described here.
MetalLB deploys its own components across the Kubernetes cluster and requires a reserved pool of IP addresses in order to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap called config, located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer once the ingress Service obtains an address from this pool.
You can find more details about using MetalLB with the NGINX Ingress Controller in the official documentation.
After some research I found out that this is a GitLab issue. As I said above, I successfully established a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use type "LoadBalancer"; therefore you need to add an external IP or change the type to "NodePort". This way you can make your ingress controller accessible from outside.
Check this out: kubernetes service external ip pending
I just continued the Gitlab tutorial and it worked.
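For illustration, the manual change described in the question and this answer amounts to a Service spec like the following sketch (type and IP taken from the question; this makes the controller reachable, even though GitLab may still fail to detect the IP):

```yaml
spec:
  type: NodePort          # LoadBalancer stays pending without a cloud provider
  externalIPs:
  - 192.168.50.235        # a node IP reachable from outside the cluster
```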

Not able to access service on minikube cluster| Istio

I am not able to access a Spring Boot service on my minikube cluster (pod startup logs attached).
On my local machine I configured a minikube cluster and built the Docker image of my service. The service exposes some simple REST endpoints.
I configured minikube to use (pull) my local Docker image. But now when I do
kubectl get services -n istio-system
I get the services shown in the attached screenshots (services list in the minikube cluster, kubectl get pods --all-namespaces, and kubectl describe service output).
I am trying to access my service with the command below:
minikube service producer-service --url
which gives http://192.168.99.100:30696
I have a ping URL in my service, so ideally I should get a response by hitting http://192.168.99.100:30696/ping, but I am not getting any response. Can you please let me know what I am missing here?
The behaviour you describe suggests a port-mapping problem. Is your Spring Boot service on the default port of 8080? Does the internal port of your Service match the port the Spring Boot app is running on (it will be in your app startup logs)? The port in your screenshot seems to be 8899. It's also possible your pod is in a different namespace from your Service. It would be useful to include your app startup logs and the output of kubectl get pods --all-namespaces and kubectl describe service producer-service.
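To illustrate the port-mapping check above: in a NodePort Service, targetPort must equal the container port the Spring Boot app actually listens on (server.port). A hypothetical sketch, assuming the app listens on 8899 as the screenshot suggests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: producer-service
spec:
  type: NodePort
  selector:
    app: producer          # must match the pod's labels
  ports:
  - port: 80
    targetPort: 8899       # must equal the port from the app startup logs
    nodePort: 30696        # the port shown by 'minikube service --url'
```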
