How to block outgoing traffic to an IP in iptables in K8s - Docker

I want to block outgoing traffic to a specific IP (e.g. a DB) in iptables in K8s.
I know that in K8s, iptables rules exist only at the node level,
and I'm not sure in which file the changes should be made, or what command or changes are required.
Please help me with this query.
Thanks.

You could deploy Istio, and specifically the Istio egress gateway.
This way you will be able to manage outgoing traffic within the Istio manifests.
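A setting that is often combined with the egress gateway is switching the mesh to a blocking-by-default outbound policy, so that only destinations explicitly declared (e.g. via a ServiceEntry, or routed through the egress gateway) stay reachable. A minimal sketch, assuming Istio is installed through the IstioOperator API:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  meshConfig:
    # Block traffic to anything outside the mesh's service registry;
    # hosts must be declared with a ServiceEntry (or go through the
    # egress gateway) to remain reachable.
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY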

You can directly run an iptables command (e.g. iptables -A OUTPUT -j REJECT) on the node itself, if that's acceptable.
However, the file where rules are persisted depends on the OS; /etc/sysconfig/iptables is the IPv4 rules file on RHEL-family systems.
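For the original question (blocking egress to a single database IP), a more targeted rule matches only that destination address. A minimal sketch, run on each node, with 10.0.0.5 standing in for the DB IP:
# Reject locally generated traffic to the DB IP
iptables -A OUTPUT -d 10.0.0.5 -j REJECT
# Pod traffic leaving the node is forwarded rather than locally generated,
# so the FORWARD chain is usually the one that matters
iptables -I FORWARD -d 10.0.0.5 -j REJECT
Keep in mind that kube-proxy and the CNI plugin also rewrite iptables rules, so hand-written rules can be fragile; the NetworkPolicy approach below is usually preferable.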
I would suggest checking out NetworkPolicy in Kubernetes; using that, you can restrict outgoing traffic.
https://kubernetes.io/docs/concepts/services-networking/network-policies/
No extra setup like Istio is required.
You can handle cluster security using network policies; in the backend, most CNI plugins implement them with iptables.
For example, applying the following YAML allows egress from the selected pods only to a specific CIDR and port, which blocks all other outgoing traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
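If the goal is the opposite, i.e. to block only one specific IP (such as the DB) while leaving the rest of the egress traffic open, an ipBlock with an except list can express that. A minimal sketch, assuming 10.0.0.5/32 is the DB address and the pods to restrict are labeled role=app (both placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-db-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    # allow egress anywhere except the DB IP
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.5/32
Note that NetworkPolicy enforcement requires a CNI plugin that supports it (e.g. Calico or Cilium); with a plugin that doesn't, the policy is silently ignored.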

Related

Ingress NGINX does not listen on URL with specified port

I am running Azure AKS with Kubenet networking, in which I have deployed several services, exposed on several ports.
I have configured URL-based routing, and it seems to work for the services I could test.
I found out the following:
Sending URL or URL:80 returns the desired web page, but if I include the port, the browser's address bar drops it and just shows http://URL/.
When I try accessing other web pages or services, I get a strange phenomenon: calling the URL with the port number hangs until the browser says it's unreachable. Fiddler returns "time out".
When I access the service (1 of 3 I could check visibly) without providing the port, the Ingress rules I applied answer the request and I get the resulting web page, which is exposed on the internal service port.
I'm using this YAML for the RabbitMQ management page:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbit-admin-on-ingress
  namespace: mynamespace
spec:
  rules:
  - host: rabbit.my.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672
  ingressClassName: nginx
and I also apply this config (using kubectl apply -f config.file.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  15672: "mynamespace/rabbitmq:15672"
What happens is:
http://rabbit.my.local gets the rabbit admin page
http://rabbit.my.local:15672 gets a time out and I get frustrated
It seems this is also happening on another service I have running on port 8085, and perhaps even the DB running on the usual SQL port (which might be a TCP-only connection).
Both are configured the same as the rabbitmq service in the YAML rules and config file, with their respective service names, namespaces and ports.
Please help me figure out how I can make Ingress accept URLs with the :PORT attached to them and answer them. Save me.
A quick reminder: :80 works fine, perhaps because it's one of the defaults for Ingress.
Thank you so much in advance.
Moshe

Can't get UDP packets to my container in kubernetes [MiniKube]

I'm new to Kubernetes, but have managed to build myself a container that runs perfectly directly under Docker and also seems to run up fine in a k8s deployment.
The container is an implementation of a couple of UDP packet replicators from github installed on Ubuntu.
When it's running directly under Docker on my machine, I can send UDP packets to the container and have them replicated back to different ports on my machine proving that the replication works. (Sending and receiving the packets with netcat).
However, I think I am missing something in the k8s networking side of things.
I am using MiniKube on my machine and I am using the following k8s manifest to create the deployment with just one container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samplicator-deployment
  labels:
    app: samplicator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: samplicator
  template:
    metadata:
      labels:
        app: samplicator
    spec:
      containers:
      - name: samplicator-container-01
        image: dgbes/udp-fan-out-tools:latest
        command: ["samplicate"]
        args: ["-p3160","192.168.1.159/3161","192.168.1.159/3162"]
        ports:
        - name: receiver
          protocol: UDP
          containerPort: 3160
I then create the deployment with: kubectl apply -f create-samplicator-deployment.yaml
I then set up a couple of UDP listeners with netcat on my host machine with nc -ulk -p 3161 and nc -ulk -p 3162.
If I then connect to the running container with kubectl exec --stdin --tty samplicator-deployment-{randomPodName} -- /bin/bash and manually use netcat to send packets to my host machine, I can see those arriving no problem.
I find the container/pod IP address with kubectl get pod -o wide.
When I try to send a packet to the samplicator process in the pod, though, I see nothing coming back to my host machine.
So, I then spawned a shell in the container/pod, checked that the samplicator process was running correctly (it was), and installed netcat in the container instance.
Using netcat -u {my host machine IP} 3161 I can send packets from the container to my host machine and they are received no problem.
So, it seems that the issue is getting the packets TO the container.
I confirmed this by running nc -ulk -p 3600 in the container shell and sending a packet from my host to that port in the container - nothing is received in the container.
I am aware that the ports need to be exposed on the container and that 'services' are used for this, but I thought that that was what the ports: section in the template spec of the deployment was doing.
That didn't create a service to expose the port, so I added a service definition to the end of my deployment manifest YAML as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: samplicator-service
spec:
  selector:
    app: samplicator
  type: LoadBalancer
  ports:
  - name: receiver-service
    protocol: UDP
    port: 3160
    targetPort: 3160
I'm obviously missing something here, and my apologies if my k8s terminology is a bit mangled - as I say I'm completely new to k8s.
Any pointers to how to correctly make that UDP port reachable?
I figured it out...
Service type=LoadBalancer doesn't work out of the box with MiniKube, as it relies on an external cloud provider's load balancer.
I noticed that my service was stuck in a pending state with the EXTERNAL-IP never being allocated, which led me to this answer.
The magic is to simply run the following command in a separate terminal window, which then allows the external IP to be allocated to the service:
minikube tunnel
Once that was running and the EXTERNAL-IP was allocated, sending packets to that EXTERNAL-IP works perfectly.
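A quick way to verify, assuming the Service above was applied as samplicator-service:
# EXTERNAL-IP should no longer show <pending> while minikube tunnel is running
kubectl get service samplicator-service
# fire a test datagram at the external IP; it should be replicated to
# ports 3161/3162 on the host configured in the Deployment args
echo test | nc -u <EXTERNAL-IP> 3160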

Kubernetes: Microservices running on the same port?

I am building a microservices full-stack web application, which (so far) consists of:
ReactJS (client microservice): listens on 3000
Authentication (Auth microservice): listens on 3000 // accidentally assigned the same port
Technically, what I have heard/learned so far is that we cannot have two Pods running on the same port.
I am really confused: how am I able to run the application (perfectly) like this, with the same port on different applications/pods?
ingress-nginx config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  ## our custom routing rules
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3000
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 3000
I am really curious, am I missing something here?
Each Pod has its own network namespace and its own IP address, though the Pod-specific IP addresses aren't reachable from outside the cluster and aren't really discoverable inside the cluster. Since each Pod has its own IP address, you can have as many Pods as you want all listening to the same port.
Each Service also has its own IP address; again, not reachable from outside the cluster, though they have DNS names so applications can find them. Since each Service has its own IP address, you can have as many Services as you want all listening to the same port. The Service ports can be the same or different from the Pod ports.
The Ingress controller is reachable from outside the cluster via HTTP. The Ingress specification you show defines HTTP routing rules. If I set up a DNS service with a .dev TLD and define an A record for ticketing.dev that points at the ingress controller, then http://ticketing.dev/api/users/anything gets forwarded to http://auth-srv.default.svc.cluster.local:3000/ within the cluster, and http://ticketing.dev/otherwise goes to http://client-srv.default.svc.cluster.local:3000/. Those in turn will get forwarded to whatever Pods they're connected to.
There's no particular prohibition against multiple Pods or Services having the same port. I tend to like setting all of my HTTP Services to listen on port 80 since it's the standard HTTP port, even if the individual Pods are listening on port 3000 or 8000 or 8080 or whatever else.
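To make that concrete, a Service like the following sketch (the selector label is an assumption; adjust it to whatever your Deployment uses) listens on port 80 inside the cluster while the pods keep listening on 3000:
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth        # assumed pod label
  ports:
  - port: 80         # port the Ingress and other workloads talk to
    targetPort: 3000 # port the container actually listens on
The Ingress rule would then use servicePort 80 instead of 3000.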
You have two different Services in the backend: auth-srv and client-srv. Therefore, you have two different addresses and can use any port you want in each of them. That means you can use the same port in the two different Services.

GKE - Bypass Pod LoadBalancer (Pod's external IP) to Pod's container's IP at runtime for WebSocket purpose

I have the following situation:
I have a couple of microservices, only 2 are relevant right now.
- Web Socket Service API
- Dispatcher Service
We have 3 users that we'll call 1, 2, and 3. These users connect to the WebSocket endpoint of our backend. Our microservices are running on Kubernetes, and each service can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the WebSocket API. Each pod has its load balancer, and this is each time the entry point.
In our situation, we will then have the following "schema":
Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.
As we can see, the load balancer of our pod forwarded the WebSocket connections of our users across the different containers. Each container, once it gets a new connection, lets the Dispatcher Service know, and the dispatcher saves it in its own database.
Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.
User 1 wants to message user 2. Container A will then get a message and tell the Dispatcher Service: please send this to user 2.
As the dispatcher knows which container user 2 is connected to, I would like to send a request directly to my container instead of sending it to the Pod. Sending it to the Pod results in sending a request to a load balancer, which dispatches the request to the most available container instance...
How could I manage to get the container IP? Can it be accessed by another container from another Pod?
To me, the best approach would be that, once the app starts, it gets the current container's IP and then sends it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP.
Thanks!
edit 1
Here is my web-socket-service-api.yaml:
apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8081
    targetPort: 8081
    protocol: TCP
    name: rest
  # Port that accepts WebSockets.
  - port: 8082
    targetPort: 8082
    protocol: TCP
    name: websocket
  selector:
    app: web-socket-service-api
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-socket-service-api
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-socket-service-api
    spec:
      containers:
      - name: web-socket-service-api
        image: gcr.io/[PROJECT]/web-socket-service-api:latest
        ports:
        - containerPort: 8080
        - containerPort: 8081
        - containerPort: 8082
Dispatcher ≈ Message broker
As I understand your design, your Dispatcher is essentially a message broker for the pods of your WebSocket Service. Let all WebSocket pods connect to the broker and let the broker route messages. This is a stateful service, and you should use a StatefulSet for it in Kubernetes. Depending on your requirements, a possible solution could be to use an MQTT broker for this, e.g. mosquitto. Most MQTT brokers have support for WebSockets.
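A minimal sketch of what that could look like, assuming mosquitto as the broker (the names, image and ports are illustrative, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: dispatcher-broker
spec:
  clusterIP: None          # headless Service required by the StatefulSet
  selector:
    app: dispatcher-broker
  ports:
  - name: mqtt
    port: 1883
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dispatcher-broker
spec:
  serviceName: dispatcher-broker
  replicas: 1
  selector:
    matchLabels:
      app: dispatcher-broker
  template:
    metadata:
      labels:
        app: dispatcher-broker
    spec:
      containers:
      - name: mosquitto
        image: eclipse-mosquitto:2
        ports:
        - containerPort: 1883   # MQTT
        - containerPort: 9001   # MQTT over WebSockets, if enabled in mosquitto.conf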
Scale out: Multiple replicas of pods
each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api.
This is not how Kubernetes is intended to be used. Use multiple replicas of pods instead of multiple containers in the pod. I recommend that you create a Deployment for your WebSocket Service with as many replicas as you want.
Service as Load balancer
Each pod has its Load Balancer and this will be each time the entry point.
In Kubernetes you should create a Service that load balance traffic to a set of pods.
Your solution
To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP
Yes, I mostly agree. That is similar to what I have described here. But I would let the Websocket Service establish a connection to the Broker/Dispatcher.
Any pod has some information about itself, and one piece of that information is its own IP address. As an example:
apiVersion: v1
kind: Pod
metadata:
  name: envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_IP;
        sleep 10;
      done;
    env:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
Within the container, MY_POD_IP would contain the IP address of the pod. You can let the dispatcher know about it.
$ kubectl logs envars-fieldref
10.52.0.3
$ kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
envars-fieldref 1/1 Running 0 31s 10.52.0.3 gke-klusta-lemmy-3ce02acd-djhm <none> <none>
Note that it is not a good idea to rely on the pod IP address. But this should do the trick.
Also, it is exactly the same thing to send a request to the pod or to the container.
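As a concrete illustration of the registration idea, the pod could push its own IP to the dispatcher when it starts. This is only a sketch: the dispatcher-service name and the /register endpoint are assumptions, not part of the original setup.
apiVersion: v1
kind: Pod
metadata:
  name: register-demo
spec:
  containers:
  - name: app
    image: curlimages/curl
    env:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command: ["sh", "-c"]
    args:
    # POST this pod's IP to the (hypothetical) dispatcher, then stay alive
    - curl -s -X POST "http://dispatcher-service:8080/register" -d "ip=$MY_POD_IP";
      while true; do sleep 3600; done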

Google Compute Kubernetes only accessible when targetPort on NodePort Service is 80

Somehow, I am trying to start a Kubernetes project on Google Compute (not GKE). After all the installation (read: docker-ce, kubelet, kubeadm), I created a Service and a Deployment as follows:
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
  - port: 90
    targetPort: 80
    nodePort: 31515
  selector:
    component: web
It was working until I changed the targetPort in the Service to any port besides 80 (along with the Deployment's containerPort).
I already tried enabling the port on the instance with firewall-cmd --permanent --add-port=(any port besides 80)/tcp.
Besides that, I also already enabled the firewall rule in the Google firewall settings.
Is there anything that I missed? Why can I only access the service when the targetPort setting in the Service is 80?
Thanks
PS: If it is relevant, I am using the flannel network plugin.
May I know why you are trying to change targetPort?
targetPort is the port on the Pod where the application is actually running.
nodePort is the port on which the service can be accessed by external users via nodeIP:nodePort.
port is the same service exposed to cluster users via clusterIP:port.
Again, in your case targetPort: 80 means the application in the Pod is actually listening on port 80.
You should change targetPort only if the application in the Pod is listening on a different port (and keep the Deployment's containerPort in sync with it).
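As a sketch, if the container were changed to listen on 8080, the Service would look like this; note that external traffic still enters through the nodePort (31515), so that is the port the GCP firewall rule needs to allow, not the targetPort:
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
  - port: 90          # clusterIP:90 inside the cluster
    targetPort: 8080  # must match the containerPort the app listens on
    nodePort: 31515   # external access via nodeIP:31515; open this port in the firewall
  selector:
    component: web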
Review this question for more details.
