I walked through the code on a 3-node Kubernetes cluster, and it doesn't seem like I am able to block the flow of traffic to a deployment's pods using a NetworkPolicy.
Here is the output from the exercise.
user@myk8master:~$ kubectl get deployment,svc,networkpolicy
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   X.X.X.X      <none>        443/TCP   20d
user@myk8master:~$
user@myk8master:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
user@myk8master:~$ kubectl expose deployment nginx --port=80
service/nginx exposed
user@myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (X.X.X.X:80)
remote file exists
/ # exit
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running
pod "busybox" deleted
user@myk8master:~$
user@myk8master:~$ vi network-policy.yaml
user@myk8master:~$ cat network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
user@myk8master:~$
user@myk8master:~$ kubectl apply -f network-policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
user@myk8master:~$
user@myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.100.97.229:80)
remote file exists. <<<< THIS SHOULD NOT WORK
I followed all the steps as-is, but it seems like I am unable to block the traffic even with the NetworkPolicy defined.
Can someone please help and let me know if I am doing something dumb here?
As described in the documentation, restricting client access with a NetworkPolicy only takes effect when a network plugin that enforces network policies is installed. Because of some conflict or glitch it may not be restricting access, so try reinstalling or reconfiguring the plugin.
You can also try another method, such as blocking clients in NGINX itself.
You can restrict access by IP address: NGINX can allow or deny access based on a particular client IP address or a range of IP addresses. To allow or deny access, use the allow and deny directives inside the stream context or a server block:
stream {
    #...
    server {
        listen 12345;
        deny   192.168.1.2;
        allow  192.168.1.1/24;
        allow  2001:0db8::/32;
        deny   all;
    }
}
Limiting the number of TCP connections: you can limit the number of simultaneous TCP connections from one IP address by defining a shared memory zone and applying it with limit_conn inside a server block:
stream {
    #...
    limit_conn_zone $binary_remote_addr zone=ip_addr:10m;
    server {
        #...
        limit_conn ip_addr 1;
    }
}
You can also limit bandwidth, restrict IP ranges, and so on; using NGINX for this is more flexible.
Refer to the link for more information about network plugins.
My bad. I forgot to set up one of the supported network plugins, as was indicated in the documentation. It worked flawlessly after that.
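For anyone following along: once a NetworkPolicy-capable plugin (for example Calico) is installed, a quick re-test along the lines of the official tutorial should show the difference. The access=true label below matches the policy's podSelector; the output shown is only the behaviour I would expect, not a verbatim transcript:

# Pod without the access=true label: the wget should now time out
kubectl run busybox --rm -ti --image=busybox -- /bin/sh
/ # wget --spider --timeout=1 nginx
wget: download timed out

# Pod carrying the label that the ingress rule allows: the wget should succeed
kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh
/ # wget --spider --timeout=1 nginx
remote file exists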
Related
I'm trying to create a simple microservice, where a jQuery app in one Docker container uses the following code to get a JSON object from another (analytics) app that runs in a different container:
<script type="text/javascript">
  $(document).ready(function(){
    $('#get-info-btn').click(function(){
      $.get("http://localhost:8084/productinfo", function(data, status){
        $.each(data, function(i, obj) {
          //some code
        });
      });
    });
  });
</script>
The other app uses this for the Deployment containerPort.
ports:
- containerPort: 8082
and these for the Service ports.
type: ClusterIP
ports:
- targetPort: 8082
  port: 8084
The 'analytics' app is a golang program that listens on 8082.
func main() {
    http.HandleFunc("/productinfo", getInfoJSON)
    log.Fatal(http.ListenAndServe(":8082", nil))
}
When running this on Minikube, I encountered issues with CORS, which was resolved by using this in the golang code when returning a JSON object as a response:
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
All this worked fine on Minikube (though in Minikube I was using localhost:8082). The first app would send a GET request to http://localhost:8084/productinfo and the second app would return a JSON object.
But when I tried it on a cloud Kubernetes setup, accessing the first app via <node-ip>:<port>, I keep getting this error in the browser console: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8084/productinfo.
Question:
Why does it work on Minikube but not on the cloud Kubernetes worker nodes? Is using localhost the right way to access another container? How can I get this to work? How do people who implement microservices make GET and POST requests across containers? All the microservice examples I found are simple demos built for Minikube, so it's difficult to get a handle on this nuance.
@P.... is absolutely right; I just want to provide some more details about DNS for Services and communication between containers in the same Pod.
DNS for Services
As we can find in the documentation, Kubernetes Services are assigned a DNS A (or AAAA) record, for a name of the form <serviceName>.<namespaceName>.svc.<cluster-domain>. This resolves to the cluster IP of the Service.
"Normal" (not headless) Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.
Let's break down the form <serviceName>.<namespaceName>.svc.<cluster-domain> into individual parts:
<serviceName> - The name of the Service you want to connect to.
<namespaceName> - The name of the Namespace in which the Service to which you want to connect resides.
svc - This should not be changed - svc stands for Service.
<cluster-domain> - cluster domain, by default it's cluster.local.
We can use <serviceName> to access a Service in the same Namespace, however we can also use <serviceName>.<namespaceName> or <serviceName>.<namespaceName>.svc or FQDN <serviceName>.<namespaceName>.svc.<cluster-domain>.
If the Service is in a different Namespace, a single <serviceName> is not enough and we need to use <serviceName>.<namespaceName> (we can also use: <serviceName>.<namespaceName>.svc or <serviceName>.<namespaceName>.svc.<cluster-domain>).
In the following example, app-1 and app-2 are in the same Namespace and app-2 is exposed with ClusterIP on port 8084 (as in your case):
$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl run app-2 --image=nginx
pod/app-2 created
$ kubectl expose pod app-2 --target-port=80 --port=8084
service/app-2 exposed
$ kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/app-1 1/1 Running 0 45s
pod/app-2 1/1 Running 0 41s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/app-2 ClusterIP 10.8.12.83 <none> 8084/TCP 36s
NOTE: app-2 is in the same Namespace as app-1, so we can use <serviceName> to access it from app-1; notice also that we got the FQDN for app-2 (app-2.default.svc.cluster.local):
$ kubectl exec -it app-1 -- bash
root@app-1:/# nslookup app-2
Server: 10.8.0.10
Address: 10.8.0.10#53
Name: app-2.default.svc.cluster.local
Address: 10.8.12.83
NOTE: We need to provide the port number because the app-2 Service is exposed on port 8084:
root@app-1:/# curl app-2.default.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Let's create app-3 in a different Namespace and see how to connect to it from app-1:
$ kubectl create ns test-namespace
namespace/test-namespace created
$ kubectl run app-3 --image=nginx -n test-namespace
pod/app-3 created
$ kubectl expose pod app-3 --target-port=80 --port=8084 -n test-namespace
service/app-3 exposed
NOTE: Using app-3 (<serviceName>) is not enough, we also need to provide the name of the Namespace in which app-3 resides (<serviceName>.<namespaceName>):
# nslookup app-3
Server: 10.8.0.10
Address: 10.8.0.10#53
** server can't find app-3: NXDOMAIN
# nslookup app-3.test-namespace
Server: 10.8.0.10
Address: 10.8.0.10#53
Name: app-3.test-namespace.svc.cluster.local
Address: 10.8.12.250
# curl app-3.test-namespace.svc.cluster.local:8084
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Communication Between Containers in the Same Pod
We can use localhost to communicate with other containers, but only within the same Pod (Multi-container pods).
I've created a simple multi-container Pod with two containers: nginx-container and alpine-container:
$ cat multi-container-app.yml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-app
spec:
  containers:
  - image: nginx
    name: nginx-container
  - image: alpine
    name: alpine-container
    command: ["sleep", "3600"]
$ kubectl apply -f multi-container-app.yml
pod/multi-container-app created
We can exec into the alpine-container and check whether we can reach the nginx web server running in the nginx-container via localhost:
$ kubectl exec -it multi-container-app -c alpine-container -- sh
/ # netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 :::80 :::* LISTEN -
/ # curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
More information on communication between containers in the same Pod can be found here.
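Applied to the original question: instead of localhost, the frontend should target the analytics app's Service DNS name and Service port. As a sketch (assuming the analytics Service is called analytics and lives in the default namespace; substitute your actual Service name), any pod in the cluster should be able to reach it like this:

curl http://analytics.default.svc.cluster.local:8084/productinfo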
[root@kubemaster ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
pod1deployment-c8b9c74cb-hkxmq   1/1     Running   0          12s   192.168.90.1   kubeworker1   <none>           <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   13m   <none>
pod1service   ClusterIP   10.101.174.159   <none>        80/TCP    16s   creator=sai
Curl on master node:
[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):
[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue, so this is what I did and it worked:
Brief: I am running 2 VMs for a 2-node cluster: 1 master node and 1 worker node. A Deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a Service, which then exposed that set of pods inside the cluster.
Issue: After deploying the Service and doing kubectl get service, it gave me the ClusterIP of that Service and a port (by the way, I used NodePort instead of ClusterIP when writing the service.yaml). But when curling that IP address and port, it just hung and eventually timed out.
Solution: Then I looked at the hierarchy. I first need to contact a node, on the port assigned by the NodePort (i.e. the one between 30000-32767). So I did kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the NodePort (the one between 30000-32767), and curled that. My curl command was curl http://10.0.1.4:30669 and I was able to get the output.
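In short, the commands involved were roughly as follows (the node IP 10.0.1.4 and NodePort 30669 are the values from this particular setup; yours will differ):

kubectl get nodes -o wide       # note the node's INTERNAL-IP (here 10.0.1.4)
kubectl get service -o wide     # note the NodePort in the 30000-32767 range (here 30669)
curl http://10.0.1.4:30669      # curl <node-internal-ip>:<node-port>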
First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access a deployed application. The Service DNS name has the form <service-name>.<service-namespace>.svc.cluster.local, where cluster.local is the default cluster domain unless it has been changed.
Now, coming to service accessibility: it may be a DNS issue. Check the kube-dns pod logs in the kube-system namespace, and also try to reach the Service from a standalone pod (busybox ships wget rather than curl):
kubectl run bastion --rm -ti --image=busybox -- /bin/sh
/ # wget -qO- pod1service.default.svc.cluster.local
If that doesn't work, the next questions would be: where is the cluster running, and how was it created?
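As a sketch of the DNS-log check mentioned above (k8s-app=kube-dns is the conventional label carried by the kube-dns/CoreDNS pods; adjust it if your cluster labels them differently):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns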
I have the following setup:
A private OpenStack cloud - only the Web UI (Horizon) is accessible
(the API is restricted, but maybe I could get access)
CoreOS with a setup of one master and three nodes
Standardized resources (the OpenStack defaults)
I followed the getting-started guide for CoreOS on GitHub (i.e. I'm using the default cloud-config YAMLs provided)
As I read, extensions such as the Web UI (kube-ui) can be added as add-ons, which I have done (kube-ui only).
Now if I run a test such as simple-nginx, I get the following output:
creating pods:
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
creating service:
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME LABELS SELECTOR IP(S) PORT(S)
my-nginx run=my-nginx run=my-nginx 80/TCP
get service info:
$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.161.90
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31170/TCP
Endpoints: 10.244.19.2:80,10.244.44.3:80
Session Affinity: None
No events.
I can access my service from every(!) external IP of the nodes.
My question now is as follows:
How can I access any started service via a subdomain, and how do I set up this configuration (for example, I have domain.com)? Or could it be printed out on which node IP I have to access my service (although I have only two replicas(?!))?
To describe my thoughts more understandably, I mean the following:
given domain: domain.com (pointing to master)
start service simple-nginx
service can be accessed with simple-nginx.domain.com
Does your OpenStack cloud provider implementation support services of type LoadBalancer?
If so, the service controller should assign an ingress IP or hostname to the service, which should eventually show up in kubectl describe svc output. You could then set up external DNS for it.
If not, just use type=NodePort, and you'll still get a NodePort on each node. You can then follow the advice in the comment to create an Ingress resource, which can do the port and host remapping.
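For illustration, a host-based Ingress along those lines might look roughly like this. This is only a sketch: it assumes an ingress controller is already running in the cluster and that simple-nginx.domain.com resolves to it, and it uses the current networking.k8s.io/v1 schema:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-nginx
spec:
  rules:
  - host: simple-nginx.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80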
I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
kubernetes services addresses: --service-cluster-ip-range=172.16.0.1/16
flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}
docker subnet setting: --bip=10.0.0.1/24
Hostnode IP: 192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-px6uy   1/1     Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME LABELS SELECTOR IP(S) PORT(S) AGE
nginx run=nginx run=nginx 9000/TCP 292y
I'm expecting now to be able to reach the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?
So the issue here was that there's a missing piece of the puzzle when you use nodePort.
I was also making a mistake with the commands.
Firstly, you need to make sure you expose the right ports, in this case 80 for nginx:
kubectl expose rc nginx --port=80 --type=NodePort
Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:
[root@kubemaster ~]# kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: run=nginx
Selector: run=nginx
Type: NodePort
IP: 172.16.92.8
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32033/TCP
Endpoints: 10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity: None
No events.
You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
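For completeness, a sketch of a Service manifest that pins the NodePort explicitly instead of relying on a random assignment (30033 here is just an example value in the default 30000-32767 range, matching the run=nginx selector used above):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30033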
Yes, you would need to use NodePort.
When you hit the service, the destination port should be equal to the NodePort.
The destination IP should be one the nodes consider local, e.g. the host IP of one of the nodes.
A load balancer helps because it would handle the situation where your node goes down while other nodes can still serve the service.
If you're running a cluster on bare metal, or at a provider that doesn't supply a load balancer, you can also define the port as a hostPort on your pod.
You define your container and its ports:
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
    hostPort: 80
    name: http
This will bind the defined port on the host directly to the container.
The two limitations here are obviously:
1) You can only have one of these pods on each host, maximum.
2) The IP is the host IP of the node it binds to.
This is essentially how the cloud provider load balancers work, in a way.
Using the newer DaemonSet features, it's possible to define which node the pod will land on and thus fix the IP. However, that necessarily impairs the high-availability aspect, but at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
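As a rough sketch of that DaemonSet-plus-nodeSelector idea (the node name kubeworker1 is only an assumed example; substitute one of your own node names):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-hostport
spec:
  selector:
    matchLabels:
      app: nginx-hostport
  template:
    metadata:
      labels:
        app: nginx-hostport
    spec:
      nodeSelector:
        kubernetes.io/hostname: kubeworker1
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80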
I am trying to create a Kubernetes cluster using three VMs (Master - 10.x.x.4, Node1 - 10.x.x.150, Node2 - 10.x.x.160).
I was able to create the guestbook application successfully following this link: http://kubernetes.io/v1.0/examples/guestbook/. The only change I made was to frontend-service.yaml, to use NodePort. I can access the frontend service using the nodes' IPs and the port number (10.x.x.150:30724 or 10.x.x.160:30724). So everything is working as expected, but I am not able to access the frontend service using the ClusterIP address (in my case 10.x.x.79).
My understanding of NodePort is that the service can be accessed through the cluster IP and also on a port on each node of the cluster. How can I access the service through the ClusterIP so that I don't have to go through each node? Am I missing something here?
service and pod details
$sudo kubectl describe service frontend
Name: frontend
Namespace: default
Labels: name=frontend
Selector: name=frontend
Type: NodePort
IP: 10.x.x.79
Port: <unnamed> 80/TCP
NodePort: <unnamed> 30724/TCP
Endpoints: 172.x.x.13:80,172.x.x.14:80,172.x.x.11:80
Session Affinity: None
No events.
$sudo kubectl describe pod frontend-2b5us
Name: frontend-2b5us
Namespace: default
Image(s): gcr.io/google_samples/gb-frontend:v3
Node: 10.x.x.150/10.x.x.150
Labels: name=frontend
Status: Running
Reason:
Message:
IP: 172.x.x.11
Replication Controllers: frontend (3/3 replicas created)
Containers:
php-redis:
Image: gcr.io/google_samples/gb-frontend:v3
State: Running
Started: Fri, 30 Oct 2015 04:00:40 -0500
Ready: True
Restart Count: 0
I tried to search but could not find any solution for my exact problem; I did find a similar problem, but it looks like it is for GCE:
Why can't I access my Kubernetes service via its IP?
You do not have a ClusterIP service; you have a NodePort service. To access it, you connect to the NodePort on any of your nodes in the cluster, as you've already discovered. You do get load balancing here: even though you connect to a particular cluster node, the pod you reach does not necessarily run on that node.
Read the relevant section in the documentation at https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types to learn about additional service types. You probably do not want NodePort on GCP.
Talking about ClusterIP: to access a ClusterIP service for debugging purposes, you can run kubectl port-forward. You will not actually go through the Service, but you will connect directly to one of the pods.
For example
kubectl port-forward frontend-2b5us 8080:80
Now connect to localhost:8080
A more sophisticated command, which discovers the pod name on its own given a namespace (-n weave) and a selector. Taken from https://www.weave.works/docs/scope/latest/installing/
kubectl port-forward -n weave \
"$(kubectl get -n weave pod \
--selector=weave-scope-component=app \
-o jsonpath='{.items..metadata.name}')" \
4040
From where are you trying to access the ClusterIP? The ClusterIP (by default) only works from within the cluster; it is a virtual IP and is not routed outside of it.
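As a quick check (just a sketch, reusing the masked ClusterIP from the question), you can confirm this by hitting the ClusterIP from a throwaway pod inside the cluster, where it should respond:

kubectl run tmp --rm -ti --image=busybox -- /bin/sh
/ # wget -qO- --timeout=2 http://10.x.x.79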