I am developing a game development platform that allows users to run their own game servers inside my Kubernetes cluster. What do I need to restrict or configure to prevent malicious users from gaining access to resources they should not be able to reach, such as internal pods, Kubernetes access keys, image pull secrets, etc.?
I'm currently looking at NetworkPolicies to restrict access to internal IP addresses, but I'm not sure whether users could still enumerate DNS names and discover sensitive internal architecture. Would they still be able to somehow find out how my MongoDB, Redis, and Kafka pods are configured?
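To give a concrete idea of the direction I'm heading, this is roughly the default-deny egress policy I have in mind for the game-server pods (only a sketch, untested; the game-servers namespace is a placeholder). Note that it still has to allow kube-dns, which is exactly why I'm worried about DNS enumeration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: game-server-deny-egress
  namespace: game-servers
spec:
  podSelector:
    matchLabels:
      app: game-server
  policyTypes:
    - Egress
  egress:
    # Deny all egress except DNS lookups against kube-dns.
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP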
Also, I'm aware that Kubernetes mounts an API token at /var/run/secrets/kubernetes.io/serviceaccount/token. How do I stop this token from being created and mounted? Are there other sensitive files I need to remove or disable?
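From what I've read so far, automountServiceAccountToken looks like the relevant knob, along the lines of the sketch below (the game-server ServiceAccount name is just an example), but I'm not sure it covers everything:
# Per pod: stop the token from being mounted into the game-server pods.
spec:
  template:
    spec:
      automountServiceAccountToken: false
---
# Or per ServiceAccount: applies to every pod that uses this account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: game-server
automountServiceAccountToken: false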
I've been researching everything I can think of, but I want to make sure that I'm not missing anything.
Pods are defined within a Deployment with a Service, and exposed via Nginx Ingress TCP / UDP ConfigMap. Example Configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: game-server
  name: game-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-server
  template:
    metadata:
      labels:
        app: game-server
    spec:
      containers:
        - image: game-server
          name: game-server
          ports:
            - containerPort: 7777
          resources:
            requests:
              cpu: 500m
              memory: 500M
      imagePullSecrets:
        - name: docker-registry-image-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: game-server
    service: game-server
  name: game-server
spec:
  ports:
    - name: tcp
      port: 7777
  selector:
    app: game-server
TL;DR: How do I safely run untrusted, end-user-defined Pods inside my Kubernetes cluster?
Related
I have a pod for which I want to restrict most outbound/egress traffic, except to one other k8s service and to DataDog. I am doing this with a k8s NetworkPolicy in AKS and it seems to work fine.
I'd like to move the pod to running in Azure Container Instances/ACI via an AKS virtual node, but ACI doesn't support Kubernetes NetworkPolicies.
It's unclear to me how I could implement the same NetworkPolicy some other way, perhaps using Network Security Groups (but can I whitelist what I need to?) or Azure Firewall, or perhaps it's just not possible.
The network policy I want to implement is:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      "app.kubernetes.io/name": foo
  policyTypes:
    - Egress
  egress:
    - to: # Allow access to bar pods.
        - podSelector:
            matchLabels:
              "app.kubernetes.io/name": bar
    - to: # Allow access to DataDog for reporting to the agent.
        - namespaceSelector:
            matchLabels:
              name: datadog
    - to: # Allow access for kube-dns - needed for the pod to work.
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
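If NSGs end up being the only option, my rough (and untested) understanding is that I could only approximate the policy above at the subnet or NIC level, not per pod: something like the rule below for the DataDog endpoints, plus a lower-priority deny-all outbound rule. The resource names and the DataDog IP ranges are placeholders I would have to fill in:
# Hypothetical NSG rule allowing outbound HTTPS to DataDog's published IP ranges.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name aci-subnet-nsg \
  --name allow-datadog-egress \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes <datadog-ip-ranges>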
I am new to Kubernetes. I have created two EC2 Ubuntu 20 instances in AWS and opened the required ports via security groups. Both nodes, the master node and the worker node, are working fine, and I deployed a web app using the YAML below; the pod and service are also working fine.
However, when I put master-node-ip:port into a browser, I cannot reach the application via the master node, whereas it is reachable via the worker node.
Any suggestions would be very helpful.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 5
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: janaid/demoreactjs
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 32001
Hello, hope you are enjoying your Kubernetes journey!
This is most likely because, by default, your master node is reserved for the Kubernetes control plane and does not run application workloads.
That means your webapp pods will only be scheduled on your worker node.
However, to allow the master node to accept workloads, you have to remove the built-in control-plane taint from it (not a best practice). Here is a guide you can follow:
https://computingforgeeks.com/how-to-schedule-pods-on-kubernetes-control-plane-node/#:~:text=If%20you%20want%20to%20be,taint%20on%20the%20master%20nodes.&text=This%20will%20remove%20the%20node,able%20to%20schedule%20pods%20everywhere
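In short, my understanding is that it comes down to a kubectl taint command roughly like this (the exact taint key depends on your Kubernetes version, <master-node-name> is a placeholder, and the trailing dash removes the taint):
# Older clusters taint the master with node-role.kubernetes.io/master ...
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-
# ... newer clusters use node-role.kubernetes.io/control-plane instead.
kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane:NoSchedule-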
(Unless you have already configured it to accept pod deployments? By the way, correct me if I am wrong, but your kubectl get pod -o wide output shows three IPs while you only have two nodes, right?)
Keep in touch.
I have an on-prem kubernetes cluster and I want to deploy to it a docker registry from which the cluster nodes can download images. In my attempts to do this, I've tried several methods of identifying the service: a NodePort, a LoadBalancer provided by MetalLB in Layer2 mode, its Flannel network IP (referring to the IP that, by default, would be on the 10.244.0.0/16 network), and its cluster IP (referring to the IP that, by default, would be on the 10.96.0.0/16 network). In every case, connecting to the registry via docker failed.
I performed a cURL against the IP and realized that while the requests were resolving as expected, the tcp dial step was consistently taking 63.15 +/- 0.05 seconds, followed by the HTTP(s) request itself completing in an amount of time that is within margin of error for the tcp dial. This is consistent across deployments with firewall rules varying from a relatively strict set to nothing except the rules added directly by kubernetes. It is also consistent across network architectures ranging from a single physical server with VMs for all cluster nodes to distinct physical hardware for each node and a physical switch. As mentioned previously, it is also consistent across the means by which the service is exposed. It is also consistent regardless of whether I use an ingress-nginx service to expose it or expose the docker registry directly.
Further, when I deploy another pod to the cluster, I am able to reach the pod at its cluster IP without any delays, but I do encounter an identical delay when trying to reach it at its external LoadBalancer IP or at a NodePort. No delays besides expected network latency are encountered when trying to reach the registry from a machine that is not a node on the cluster, e.g., using the LoadBalancer or NodePort.
As a practical matter, my main question is: what is the "correct" way to do what I am attempting? And as an academic matter, I would also like to know the source of the very long, very consistent delay I have been seeing.
My deployment yaml file has been included below for reference. The ingress handler is ingress-nginx.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pv-claim
  namespace: docker-registry
  labels:
    app: registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  namespace: docker-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
        - name: docker-registry
          image: registry:2.7.1
          env:
            - name: REGISTRY_HTTP_ADDR
              value: ":5000"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: "/var/lib/registry"
          ports:
            - name: http
              containerPort: 5000
          volumeMounts:
            - name: image-store
              mountPath: "/var/lib/registry"
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: registry-pv-claim
---
kind: Service
apiVersion: v1
metadata:
  name: docker-registry
  namespace: docker-registry
  labels:
    app: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
    - name: http
      port: 5000
      targetPort: 5000
---
apiVersion: v1
kind: List
items:
  - apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
        kubernetes.io/ingress.class: docker-registry
      name: docker-registry
      namespace: docker-registry
    spec:
      rules:
        - host: example-registry.com
          http:
            paths:
              - backend:
                  serviceName: docker-registry
                  servicePort: 5000
                path: /
      tls:
        - hosts:
            - example-registry.com
          secretName: tls-secret
For future visitors: it seems your issue is related to Flannel.
The whole problem is described here:
https://github.com/kubernetes/kubernetes/issues/88986
https://github.com/coreos/flannel/issues/1268
including a workaround:
https://github.com/kubernetes/kubernetes/issues/86507#issuecomment-595227060
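If I remember the linked threads correctly, the workaround boils down to disabling TX checksum offloading on the flannel.1 VXLAN interface on each node; please double-check against the issue, but it should be something like:
# Run on every node; bad VXLAN checksums from offloading cause the ~63s delays.
ethtool -K flannel.1 tx-checksum-ip-generic off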
I am trying to expose one of my applications running on minikube to the outside world. I have already used a NodePort, and I can access the application from the same host machine using a web browser.
But I need to expose this application to one of my friends who lives far away, so he can see it in his browser too.
This is what my deployment.yaml file looks like. Should I use an Ingress, and if so, how do I do this with an Ingress?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
        - name: node-web-app
          # image must be the same as you built before (name:tag)
          image: banuka/node-web-app
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
How can I expose this Deployment, which runs a Node.js server, to the outside world?
You generally can’t. The networking is set up only for the host machine. You could probably use ngrok or something though?
You can use ngrok. For example:
ngrok http 8000
This will generate a publicly accessible URL.
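For this particular Deployment, one way to wire it up (an untested sketch; the names are taken from the manifest above) is to port-forward the app locally and point ngrok at that local port:
# Forward local port 8080 to the node-web-app pods.
kubectl port-forward deployment/node-web-app 8080:8080
# In a second terminal, expose the forwarded port publicly.
ngrok http 8080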
I am pretty new to Kubernetes and cloud computing. I am working with bare-metal servers at home (actually virtual servers in VirtualBox) and trying to run a stateful app with a StatefulSet. I have 1 master and 2 worker nodes, and I am trying to run a database application on this cluster. Each worker node runs 1 pod, and I am very confused about volumes. I use hostPath volumes (code below), but the volumes work separately (they do not synchronize), so when I reach them, my 2 pods behave like 2 different servers: the same app, but with different data.
How can I run that app in 2 synchronized pods?
I've tried to synchronize the volume files between the two workers. I also tried to synchronize the volume files using a Deployment, and I've tried volume provisioning (PersistentVolumes and dynamic provisioning).
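The provisioning-based direction I was trying looked roughly like this (only a sketch: it assumes a ReadWriteMany-capable storage class, here called nfs-client, which is what I could not get working). The idea is to replace the hostPath volume in the StatefulSet below with a single shared claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloud-shared-pvc
spec:
  accessModes:
    - ReadWriteMany            # both replicas must mount the same volume
  storageClassName: nfs-client # placeholder; needs an RWX-capable provisioner
  resources:
    requests:
      storage: 5Gi
In the StatefulSet's pod spec, the hostPath volume would then be replaced with a persistentVolumeClaim entry pointing at cloud-shared-pvc.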
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cloud
spec:
  selector:
    matchLabels:
      app: cloud
  serviceName: "cloud"
  replicas: 2
  template:
    metadata:
      labels:
        app: cloud
    spec:
      containers:
        - name: cloud
          image: owncloud:v2
          imagePullPolicy: Never
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: cloud-volume
              mountPath: /var/www/html/
      volumes:
        - name: cloud-volume
          hostPath:
            path: /volumes/cloud/
---
kind: Service
apiVersion: v1
metadata:
  name: cloud
spec:
  selector:
    app: cloud
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80