I am looking for options on creating an NGINX ingress controller for a private AKS cluster.
The Microsoft documentation only covers the public cluster.
Thanks
Raam
A private AKS cluster is a cluster where the API endpoint (the control plane) is private, but not the node pools running the actual workloads (https://learn.microsoft.com/en-us/azure/aks/private-clusters).
To install an NGINX ingress, just follow the normal installation flow and it will work - but of course you will have to connect to your cluster using a valid method, such as a VM in the same VNET.
If what you want to do is to create an ingress that is accessible only from inside your VNET, what you need is an ingress associated with an internal load balancer (https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip).
For NGINX with Helm
Create a file called internal-ingress.yaml with this content:
controller:
  service:
    loadBalancerIP: YOUR_PRIVATE_IP_HERE
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Then install NGINX with Helm, applying the file in question:
# Create a namespace for your ingress resources
kubectl create namespace ingress-basic

# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    -f internal-ingress.yaml \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
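Once the chart is deployed, you can check that the controller service actually received the internal IP. A quick sketch (the exact service name depends on the chart version and release name, so verify it with the first command):

# List the services created by the chart, then read the assigned internal IP
kubectl get services -n ingress-basic
kubectl get service nginx-ingress-ingress-nginx-controller -n ingress-basic \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'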
I have been trying to port over some infrastructure to K8S from a VM docker setup.
In a traditional VM docker setup I run 2 docker containers: 1 being a proxy node service, and another utilizing the proxy container through an .env file via:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' proxy-container
172.17.0.2
Then within the .env file:
URL=ws://172.17.0.2:4000/
This is what I am trying to setup within a cluster in K8S but failing to reference the proxy-service correctly. I have tried using the proxy-service pod name and/or the service name with no luck.
My env-configmap.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  URL: "ws://$(proxy-service):4000/"
Containers that run in the same pod can connect to each other via localhost. Try URL: "ws://localhost:4000/" in your ConfigMap. Otherwise, you need to specify the service name like URL: "ws://proxy-service.<namespace>:4000".
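For the separate-Service case, a minimal sketch of the ConfigMap using the in-cluster DNS name (assuming a Service named proxy-service in the default namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  # Fully qualified in-cluster DNS name of the proxy Service
  URL: "ws://proxy-service.default.svc.cluster.local:4000/"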
I am using Lua code inside the Nginx ingress controller in Minikube to write some logs to a file. I would like this file to be available on the host.
Is there a way to map a volume from the ingress-controller pod to the host? I did not create the Nginx ingress controller pod using a YAML config, but merely enabled the ingress addon in Minikube, so I do not have a YAML that I can add a volume mapping to.
You should be able to kubectl get whatever is running in your cluster and save it to a file.
kubectl get pod nginx -oyaml > mynginxpod.yaml
Then you could edit the file, adding your volume, and apply it with:
kubectl apply -f mynginxpod.yaml
This is just an example.
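As an illustration of the kind of volume mapping you could add, a hedged sketch using a hostPath volume (the container name and paths are assumptions, not taken from the Minikube addon):

spec:
  containers:
  - name: nginx-ingress-controller
    # ...existing container fields...
    volumeMounts:
    - name: lua-logs
      mountPath: /var/log/lua        # where the Lua code writes its log file
  volumes:
  - name: lua-logs
    hostPath:
      path: /var/log/lua             # directory on the Minikube node
      type: DirectoryOrCreate

Note that in Minikube the "host" here is the Minikube VM, so you may need minikube ssh or minikube mount to reach the files from your actual machine.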
The context
Let me know if I've gone down a rabbit hole here.
I have a simple web app with a frontend and backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is servable via nginx, and the backend component will be running a NodeJS microservice.
I had been thinking of running both in the same container, but ran into some problems getting both nginx and Node to run in the background. I could try a startup script that runs both, but the Internet says it's a best practice to have each container be responsible for running only one service - so one container to run nginx and another to run the microservice.
The problem
That's fine, but then say the nginx server's HTML pages need to know what to send a POST request to in the backend - how can the HTML pages know what IP to hit for the backend's Docker container? Articles like this one come up talking about manually creating a Docker network for the two containers to speak to one another, but how can I configure this with Helm so that the frontend container knows how to hit the backend container each time a new container is deployed, without having to manually configure any network service each time? I want the deployments to be automated.
You mention that your frontend is based on Nginx.
Accordingly, the frontend (i.e. the user's browser) must hit a public URL of the backend.
Thus, the backend must be exposed by choosing a service type, whether:
NodePort -> the frontend will communicate with the backend via http://<any-node-ip>:<node-port>
or LoadBalancer -> the frontend will communicate with the backend via http://<loadbalancer-external-ip>:<service-port>.
or keep it ClusterIP, but add an Ingress resource on top of it -> the frontend will communicate with the backend via its ingress host, e.g. http://ingress.host.com.
We recommend the last option, but it requires an ingress controller.
Once you have tested one of them and it works, you can extend your Helm chart to update the service and add the Ingress resource if needed, as sketched below.
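A minimal sketch of such an Ingress resource, assuming a backend Service named backend-service listening on port 8080 and an NGINX ingress controller in the cluster (all names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
  annotations:
    # assumes the NGINX ingress controller handles this Ingress
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: ingress.host.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8080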
You may try to set up two containers in one pod and then communicate between the containers via localhost (but on different ports!). A good example is here - Kubernetes multi-container pods and container communication.
Another option is to create two separate deployments and a service for each. Instead of using IP addresses (they won't stay the same across re-deployments of your app), use the DNS names of the services to connect to them.
Example - two NGINX services communication.
First create two NGINX deployments:
kubectl create deployment nginx-one --image=nginx --replicas=3
kubectl create deployment nginx-two --image=nginx --replicas=3
Let's expose them using the kubectl expose command. It's the same as if I had created the services from a YAML file:
kubectl expose deployment nginx-one --name=my-service-one --port=80
kubectl expose deployment nginx-two --name=my-service-two --port=80
Now let's check services - as you can see both of them are ClusterIP type:
user@shell:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.36.0.1 <none> 443/TCP 66d
my-service-one ClusterIP 10.36.6.59 <none> 80/TCP 60s
my-service-two ClusterIP 10.36.15.120 <none> 80/TCP 59s
I will exec into pod from nginx-one deployment and curl the second service:
user@shell:~$ kubectl exec -it nginx-one-5869965455-44cwm -- sh
# curl my-service-two
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
If you have problems, make sure you have a proper CNI plugin installed in your cluster - also check this article - Cluster Networking - for more details.
Also check these:
My similar answer but with a wider explanation + example of communication between two namespaces.
Access Services Running on Clusters | Kubernetes
Service | Kubernetes
Debug Services | Kubernetes
DNS for Services and Pods | Kubernetes
I have successfully connected my Kubernetes cluster with GitLab. I was also able to install Helm through the GitLab UI (Operations -> Kubernetes).
My problem is that if I click the "Install" button for Ingress, GitLab will create all the necessary resources for the ingress controller. But one thing is missing: the external IP. The external IP is marked as "?".
And if I run this command:
kubectl get svc --namespace=gitlab-managed-apps ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo
it shows nothing, as if I don't have a load balancer that exposes an external IP.
Kubernetes Cluster
I installed Kubernetes through kubeadm, using flannel as CNI
kubectl version:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2"}
Is there something I have to configure before installing Ingress? Do I need an external load balancer (my thought: GitLab will create that service for me)?
One more hint: after installation, the Nginx ingress controller service stays in the pending state because it is not able to detect an external IP. I also modified the YAML file of the service and manually added the externalIPs entry. After that it was no longer pending, but I still couldn't find an external IP by typing the above command, and GitLab also couldn't find any external IP.
EDIT:
This happens after installation:
(screenshot omitted)
EDIT2:
By running the following command:
kubectl describe svc ingress-nginx-ingress-controller -n gitlab-managed-apps
I get the following result:
(screenshot omitted)
In the event log you can see that I switched the type to "NodePort" once and then back to "LoadBalancer", and that I added the externalIPs: - 192.168.50.235 entry in the YAML file. As you can see, there is an external IP, but GitLab is not detecting it.
By the way, I'm not using any cloud provider like AWS or GCE, and I found out that the LoadBalancer type does not work without one. But there must be a solution for this without a cloud load balancer.
I would consider looking at MetalLB as the main provisioner of load-balancing services in your cluster. If you don't use any cloud provider to obtain the entry point (external IP) for the Ingress resource, MetalLB is the option for bare-metal environments: it implements Kubernetes services of type LoadBalancer in clusters that don't run on a cloud provider, so it can also be used for the NGINX Ingress Controller.
Generally, MetalLB can be installed via a Kubernetes manifest file or using the Helm package manager as described here.
MetalLB deploys its own components across the Kubernetes cluster and requires a reserved pool of IP addresses in order to be able to take ownership of the ingress-nginx service. This pool can be defined in a ConfigMap called config located in the same namespace as the MetalLB controller:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
An external IP will be assigned to your LoadBalancer once the ingress service obtains an address from this pool.
Find more details about the MetalLB implementation for the NGINX Ingress Controller in the official documentation.
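As a sketch of how to verify the result (the manifest filename here is an assumption; the service name and namespace are taken from the question):

# Apply the address pool, then read the IP assigned to the ingress service
kubectl apply -f metallb-config.yaml
kubectl get svc ingress-nginx-ingress-controller -n gitlab-managed-apps \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo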
After some research I found out that this is a GitLab issue. As I said above, I successfully established a connection to my cluster. Since I'm using Kubernetes without a cloud provider, it is not possible to use the LoadBalancer type. Therefore you need to add an external IP or change the type to NodePort. This way you can make your ingress controller accessible from outside.
Check this out: kubernetes service external ip pending
I just continued the GitLab tutorial and it worked.
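For reference, the externalIPs edit described above can also be applied non-interactively; a sketch using kubectl patch with the IP and service name from the question:

# Manually assign an external IP to the ingress controller service
kubectl patch svc ingress-nginx-ingress-controller -n gitlab-managed-apps \
    -p '{"spec":{"externalIPs":["192.168.50.235"]}}'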
I have an HTTP service running on a Google Container Engine cluster (behind a kubernetes service).
My goal is to access that service from a Dataflow job running in the same GCP project using a fixed name (in the same way services can be reached from inside GKE using DNS). Any ideas?
Most solutions I have read on Stack Overflow rely on having kube-proxy installed on the machines trying to reach the service. As far as I know, it is not possible to reliably set up that service on every worker instance created by Dataflow.
One option is to create an external load balancer and create an A record in the public DNS. Although that works, I would rather not have an entry in my public DNS records pointing to that service.
EDIT:
This is now supported on GKE (now known as Kubernetes Engine): https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
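With that support, an internal load balancer can be requested directly on the Service via an annotation; a minimal sketch (service and app names are placeholders, the annotation is the one from the linked GKE docs):

apiVersion: v1
kind: Service
metadata:
  name: my-http-service
  annotations:
    # Ask GKE for an internal (VPC-only) load balancer
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-http-app
  ports:
  - port: 80
    targetPort: 8080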
I have implemented this in a pretty smooth way, IMHO. I will try to briefly walk through how it works:
Remember that when you create a container cluster (or node pool), it will consist of a set of GCE instances in an instance group that is part of the default network. NB: add specific GCE network tags so that later you can add only those instances to the firewall rule that lets the load balancer check instance health.
This instance group is just a regular instance group.
Now, remember that Kubernetes has something called NodePort, which will expose the service at that port on all nodes, i.e. on all GCE instances in your cluster. This is what we want!
Since we know we have a set of GCE instances in an instance group, we can add this instance group to an internal load balancer in the default network, without it needing to know anything about Kubernetes internals or DNS.
The guide you can follow, skipping many of the initial steps, is here: https://cloud.google.com/compute/docs/load-balancing/internal/
Remember that this is regional, so Dataflow and everything else must be in the same region.
See this spec for the service:
kind: Service
apiVersion: v1
metadata:
  name: name
  labels:
    app: app
spec:
  selector:
    name: name
    app: app
    tier: backend
  ports:
  - name: health
    protocol: TCP
    port: 8081
    nodePort: 30081
  - name: api
    protocol: TCP
    port: 8080
    nodePort: 30080
  type: NodePort
This is the code for setting up the load balancer with health checks, forwarding rules and firewall that it needs to work:
_region=<THE_REGION>
_instance_group=<THE_NODE_POOL_INSTANCE_GROUP_NAME>
# Can be different for your case
_healthcheck_path=/liveness
_healthcheck_port=30081
_healthcheck_name=<THE_HEALTHCHECK_NAME>
_port=30080
_tags=<TAGS>
_loadbalancer_name=internal-loadbalancer-$_region
_loadbalancer_ip=10.240.0.200

gcloud compute health-checks create http $_healthcheck_name \
    --port $_healthcheck_port \
    --request-path $_healthcheck_path

gcloud compute backend-services create $_loadbalancer_name \
    --load-balancing-scheme internal \
    --region $_region \
    --health-checks $_healthcheck_name

gcloud compute backend-services add-backend $_loadbalancer_name \
    --instance-group $_instance_group \
    --instance-group-zone $_region-a \
    --region $_region

gcloud compute forwarding-rules create $_loadbalancer_name-forwarding-rule \
    --load-balancing-scheme internal \
    --ports $_port \
    --region $_region \
    --backend-service $_loadbalancer_name \
    --address $_loadbalancer_ip

# Allow Google Cloud to health-check your instances
gcloud compute firewall-rules create allow-$_healthcheck_name \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags $_tags \
    --allow tcp
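After this, a client in the same region and network (for example a Dataflow worker) should be able to reach the service at the load balancer's fixed IP; the IP and port below come from the variables above:

# The internal load balancer forwards to the NodePort on the instances
curl http://10.240.0.200:30080/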
Lukasz's answer is probably the most straightforward way to expose your service to Dataflow. But if you really don't want a public IP and DNS record, you can use a GCE route to deliver traffic to your cluster's private IP range (something like option 1 in this answer).
This would let you hit your service's stable IP. I'm not sure how to get Kubernetes' internal DNS to resolve from Dataflow.
The Dataflow job running on GCP will not be part of the Google Container Engine cluster, so it will not have access to the internal cluster DNS by default.
Try setting up a load balancer for the service you want to expose, one that knows how to route the "external" traffic to it. This will allow you to connect to the IP address directly from a Dataflow job executing on GCP; see the sketch below.
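As a minimal sketch, such a load balancer can be created by exposing the service with the LoadBalancer type (names are placeholders; on Container Engine this provisions a Google Cloud load balancer with a stable IP):

apiVersion: v1
kind: Service
metadata:
  name: my-http-service
spec:
  type: LoadBalancer
  selector:
    app: my-http-app
  ports:
  - port: 80
    targetPort: 8080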