I created an AKS cluster with Terraform. I want the cluster to have a LoadBalancer and a static public IP, and I want those to exist before my Ingress Controller / LoadBalancer Service definitions, as I don't want them created/deleted dynamically by Kubernetes manifests.
So I also created with Terraform a LoadBalancer and a static public IP in the node resource group, with the Basic SKU, according to the documentation's recommendations, and attached the public IP to the LB.
Then I created a service of type LoadBalancer:
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 8.8.8.8  # the public static IP allocated by Terraform
  selector:
    name: my-pods-selector
  ports:
    - name: my-port
      protocol: TCP
      port: 1234
      targetPort: 1234
The service is then stuck in the PENDING state, and a describe gives me this:
$ kubectl describe svc my-service
[...]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 5s (x9 over 15m) service-controller Ensuring load balancer
Warning CreatingLoadBalancerFailed 4s (x9 over 15m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service my-service: timed out waiting for the condition
I can't find any more information about the error than what the describe command outputs:
Error creating load balancer (will retry): failed to ensure load balancer for service my-service: timed out waiting for the condition
Also, if I pre-create only the public IP and not the LoadBalancer, the LoadBalancer is created dynamically and everything works fine.
The question is: how do I make Azure use the pre-existing LB (which configuration parameter am I missing)?
Kubernetes version: 1.13.5
I want the cluster to have a LoadBalancer and a static public IP, and
I want those to exist before my Ingress Controller / LoadBalancer
Service definitions, as I don't want them created/deleted dynamically
by Kubernetes manifests.
Unfortunately, you cannot use a pre-existing load balancer with a static public IP for a service in an AKS cluster. You can take a look at the same issue on GitHub. As the suggestion there says:
You'd need to let AKS create the load balancer resources in Azure for
your services rather than trying to manually create them ahead of time
and then use them in AKS. Just create the service through the
Kubernetes API, and let the networking plugin create and configure the
appropriate Azure resources.
I suggest that you just create the public IP with the static allocation method yourself, and then create the service with the LoadBalancer type and that static public IP.
Create the static IP with --sku Standard. Without --sku Standard, the IP is created with the Basic SKU, and a Basic static IP cannot be used with a Standard SKU load balancer.
If you take a look at the activity log, you'll see a warning like this:
Standard sku load balancer
/subscriptions/55aa..../resourceGroups/MC_kubernetes-dev-kubernetes-dev-cluster_northeurope/providers/Microsoft.Network/loadBalancers/kubernetes
cannot reference Basic sku publicIP
/subscriptions/55aa..../resourceGroups/MC_kubernetes-dev_kubernetes-dev-cluster_northeurope/providers/Microsoft.Network/publicIPAddresses/kubernetes-dev-public-ip.
STATICIP=$(az network public-ip create --resource-group <MC_your-RG> --name Your-public-ip-name --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv)
Now assign this IP to your LoadBalancer service.
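For example, here is a minimal sketch of such a Service, reusing the names from the question above. The service.beta.kubernetes.io/azure-load-balancer-resource-group annotation is only needed if the public IP lives outside the node (MC_*) resource group; my-ip-resource-group is a placeholder name:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Only needed when the static IP is not in the node (MC_*) resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 8.8.8.8  # the Standard SKU static IP created above
  selector:
    name: my-pods-selector
  ports:
    - name: my-port
      protocol: TCP
      port: 1234
      targetPort: 1234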
I want to block outgoing traffic to an IP (e.g. a DB) using iptables in K8s.
I know that in K8s, iptables rules exist only at the node level,
and I'm not sure in which file the changes should be made, or what command or changes are required.
Please help me with this query.
Thanks.
You could deploy Istio, and specifically the Istio egress gateway.
This way you will be able to manage outgoing traffic within the Istio manifests.
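As a rough sketch of the Istio approach (assuming Istio is already installed; istio-system is the conventional root namespace for mesh-wide defaults), a Sidecar resource with outboundTrafficPolicy set to REGISTRY_ONLY blocks egress to any destination not registered in the mesh:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system  # mesh-wide default when placed in the root namespace
spec:
  egress:
    - hosts:
        - "./*"            # still allow services in the workload's own namespace
        - "istio-system/*" # and the control-plane namespace
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY    # reject traffic to unregistered destinations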
You can directly run the iptables command (e.g. iptables -A OUTPUT -j REJECT) on the node itself, if that's acceptable.
However, the file for persisting rules depends on the OS: /etc/sysconfig/iptables is the IPv4 file on RHEL-based systems, for example.
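For instance, a node-level sketch that blocks outgoing traffic to a single DB IP (10.0.0.5 is a placeholder address):

# Reject all outgoing traffic from this node to the DB IP (placeholder)
iptables -A OUTPUT -d 10.0.0.5 -j REJECT
# On RHEL-based systems with the iptables-services package, persist the rule:
service iptables save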
I would suggest checking out NetworkPolicy in Kubernetes; with it you can block outgoing traffic.
https://kubernetes.io/docs/concepts/services-networking/network-policies/
No extra setup like Istio is required.
You can handle cluster security using NetworkPolicy, which in the backend uses iptables anyway.
For example, the following YAML restricts the selected pods' egress to a specific CIDR and port, blocking traffic to everything else:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
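Conversely, to block only the DB IP while still allowing all other egress, a sketch using ipBlock.except (10.0.0.5/32 stands in for the DB address):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-db-egress
  namespace: default
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0  # allow egress everywhere...
            except:
              - 10.0.0.5/32  # ...except the DB IP (placeholder)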
I was using minikube, and when I created a load balancer it would always give me a different IP as the external endpoint, and I was able to access my app.
But now I've changed to Docker's Kubernetes, and when I create a load balancer, it always adds localhost:8181 as the external endpoint.
Here is my YAML:
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    app: app1
spec:
  #externalIPs:
  #  - 172.29.0.0
  ports:
    - protocol: TCP
      name: http
      port: 8181
      targetPort: 8181
  type: LoadBalancer
  selector:
    app: app1
It's the same as: kubectl expose deployment app1 --port=8181 --target-port=8181 --name=app1 --type=LoadBalancer
As you can see, I tried to add externalIPs; when I do that, both localhost and the external IP appear in the dashboard, but using the external IP doesn't work...
I would like it to generate an IP when I create a load balancer, so I can access my app from there like I did with minikube.
Thanks for your time.
Official documentation says that:
Type values and their behaviors are:
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
That is why with Docker's Kubernetes you have to have a cloud provider enabled (otherwise no external IP will be provisioned):
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app1 LoadBalancer 10.0.2.46 <pending> 8181:30257/TCP 18s
While in minikube, a reachable URL is provided for you by minikube service <service_name>:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app1 LoadBalancer 10.103.51.13 <pending> 8181:30129/TCP 68s
$ minikube service app1
|-----------|------|-------------|-----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|-----------------------------|
| default | app1 | http/8181 | http://192.168.99.100:30129 |
|-----------|------|-------------|-----------------------------|
I would like it to generate an IP when I create a load balancer, so I can access my app from there like I did with minikube.
There is an awesome post by Ales Nosek on this topic.
In short:
In order to be able to create a service of type LoadBalancer, a cloud provider has to be enabled in the configuration of the Kubernetes cluster. As of version 1.6, Kubernetes can provision load balancers on AWS, Azure, CloudStack, GCE and OpenStack.
It highly depends on what you'd like to achieve, but I believe you may be interested in Ingress.
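As a side note on the localhost behaviour: Docker's built-in Kubernetes (Docker Desktop) deliberately publishes LoadBalancer services on localhost, so the service should already be reachable there. A quick check, assuming the app1 service above:

# Docker Desktop publishes the LoadBalancer's service port on localhost,
# so the app should answer here:
curl http://localhost:8181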
I'm running a .war application on Apache Tomcat 8.5.56 in a Docker container and everything works well, but when I deploy the container on Kubernetes I can't access my application's welcome page; I get the error message:
HTTP Status 404 – Not Found
Type Status Report
Message The requested resource [/SmartClass] is not available
Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.56
Does anyone know how to solve this?
For the deployment, I have just copied the .war file into /opt/apache-tomcat/webapps/ and copied my server.xml file into /opt/apache-tomcat/conf/.
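Roughly sketched, the image build looks like this (the base image tag and the SmartClass.war name are assumptions from the description; the file name matters because it becomes the context path, i.e. SmartClass.war is served under /SmartClass):

# Base image is an assumption; the Tomcat here lives in /opt/apache-tomcat
FROM tomcat:8.5.56
# The .war file name becomes the context path (/SmartClass)
COPY SmartClass.war /opt/apache-tomcat/webapps/
COPY server.xml /opt/apache-tomcat/conf/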
It looks like the problem is related to the connection to the application.
Create a Service object that exposes your Tomcat deployment:
kubectl expose deployment tomcat-example --type=NodePort --name=example-service
Display information about the Service:
kubectl describe services example-service
The output is similar to this:
Name: example-service
Namespace: default
Labels: run=example
Annotations: <none>
Selector: run=example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30000/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 30000.
List the pods that are running the Tomcat application:
kubectl get pods --selector="run=example" --output=wide
The output is similar to this:
NAME READY STATUS ... IP NODE
tomcat-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
tomcat-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
Get the public IP address of one of your nodes that is running a Tomcat pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see the node address by running kubectl cluster-info. If you are using Google Compute Engine instances, you can use the gcloud compute instances list command to see the public addresses of your nodes.
On your chosen node, create a firewall rule that allows TCP traffic on your node port. For example, if your Service has a NodePort value of 30000, create a firewall rule that allows TCP traffic on port 30000. Different cloud providers offer different ways of configuring firewall rules.
Use the node address and node port to access your application:
curl http://<public-node-ip>:<node-port>
where <public-node-ip> is the public IP address of your node, and <node-port> is the NodePort value for your service.
Please adjust the above commands according to the names and values you have used.
Here is my service.yaml code:
kind: Service
apiVersion: v1
metadata:
  name: login
spec:
  selector:
    app: login
  ports:
    - protocol: TCP
      name: http
      port: 5555
      targetPort: login-http
  type: NodePort
I set the service type as
type: NodePort
but when I run the command below, it does not show the external IP as 'nodes':
kubectl get svc
Here is the output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 7h
login NodePort 10.100.70.98 <none> 5555:32436/TCP 5m
Please help me understand the mistake.
There is nothing wrong with your service; you should be able to access it using <your_vm_ip>:32436.
NodePort, as the name implies, opens a specific port on all the nodes (the VMs), and any traffic that is sent to this port is forwarded to the service. So, on your node, port 32436 is open and will receive all the external traffic on this port and forward it to the login service.
EDIT:
nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).
nodePort is unique, so 2 different services cannot have the same nodePort assigned. Once declared, the k8s master reserves that nodePort for that service. nodePort is then opened on EVERY node (master and worker) - also the nodes that do not run a pod of that service - k8s iptables magic takes care of the routing. That way you can make your service request from outside your k8s cluster to any node on nodePort without worrying whether a pod is scheduled there or not.
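A quick way to test this from outside the cluster, assuming the login service above (the node IP is whatever kubectl reports for your nodes):

# Find a node IP (see the INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide
# Hit the service through its nodePort on any node
curl http://<node-ip>:32436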
See the following article, it shows different ways to expose your services:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
I have the following setup:
Private OpenStack cloud - only the Web UI (Horizon) is accessible
(the API is restricted, but maybe I could get access)
I have used CoreOS with a setup of one master and three nodes
Resources are standardized (as per the OpenStack defaults)
I followed the getting-started guide for CoreOS on GitHub (i.e. I'm using the default YAMLs for cloud-config provided)
As I read, extensions such as the Web UI (kube-ui) can be added as add-ons, which I have done (only kube-ui).
Now if I run a test such as simple-nginx, I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx run=my-nginx run=my-nginx 80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service via a subdomain, and how can I set up this configuration (for example with domain.com as the given domain)? Or could it be printed out on which node IP I have to access my service (although I have only two replicas(?!))?
To describe my thoughts more understandably, I mean the following:
given domain: domain.com (pointing to master)
start service simple-nginx
service can be accessed with simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
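If you go the Ingress route, here is a hedged sketch (written against the current networking.k8s.io/v1 schema; it assumes an ingress controller such as nginx is deployed in the cluster and that a DNS record for simple-nginx.domain.com, or a wildcard *.domain.com, points at the node or load balancer in front of it):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  rules:
    - host: simple-nginx.domain.com  # the subdomain from the question
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx       # the service created above
                port:
                  number: 80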