How to connect to minikube exposed Nodeport from other computers - docker

I am trying to expose one of my applications running on minikube to the outside world. I have already used a NodePort, and I can access the application from the same host machine using a web browser.
But I need to expose this application to one of my friends who lives far away, so he can see it in his browser too.
This is how my deployment.yaml file looks. Should I use an Ingress, and if so, how can I do this with an Ingress?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
      - name: node-web-app
        # image must be the same as you built before (name:tag)
        image: banuka/node-web-app
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
How can I expose this deployment, which runs a Node.js server, to the outside world?

You generally can’t. The networking is set up only for the host machine. You could probably use ngrok or something though?

You can use ngrok. For example:
ngrok http 8000
This will generate a publicly accessible URL.
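Note that with minikube the NodePort usually listens on the minikube VM/container IP rather than on the host itself, so you would point ngrok at the address minikube reports. A rough sketch, assuming a Service named node-web-app-service in front of the deployment above (the Service name and port shown are assumptions):
# print the reachable URL of the NodePort Service (Service name is an assumption)
minikube service node-web-app-service --url
# suppose it prints http://192.168.49.2:30080
# tunnel that address to a public URL your friend can open
ngrok http 192.168.49.2:30080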

Related

Kubernetes master node not able to access application using master node IP

I am new to Kubernetes. I have created two EC2 Ubuntu 20 instances in AWS and enabled the required ports using security groups. The two nodes, i.e. the master node and the worker node, are working fine, and I deployed the web app using the YAML file below; the pod and svc are working fine.
However, when I copy and paste master-node-ip:port into the browser, I can't access the application via the master node, but I can access it via the worker node.
Any suggestions would be helpful.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  selector:
    matchLabels:
      app: webapp
  replicas: 5
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: janaid/demoreactjs
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  type: NodePort
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 32001
Hello, hope you are enjoying your Kubernetes journey!
This is probably because, by default, your master node is only used for the Kubernetes control plane and not for application workloads.
That means your webapp pods will only be deployed on your worker node.
However, to allow your master node to accept workloads, you have to remove the native taint on the master node (not a best practice). Here is a guide you can follow, with the command sketched right after the link:
https://computingforgeeks.com/how-to-schedule-pods-on-kubernetes-control-plane-node/#:~:text=If%20you%20want%20to%20be,taint%20on%20the%20master%20nodes.&text=This%20will%20remove%20the%20node,able%20to%20schedule%20pods%20everywhere
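Roughly, removing the taint looks like this (the exact taint key depends on your Kubernetes version, and the trailing dash means "remove"):
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master-
# on newer versions the key is node-role.kubernetes.io/control-plane instead:
kubectl taint nodes <master-node-name> node-role.kubernetes.io/control-plane-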
(Unless you have already configured it to accept pod scheduling? By the way, correct me if I am wrong, but your kubectl get pod -o wide shows us 3 IPs while you only have two nodes, right?)
Keep in touch.

Can't access Kubernetes NodePort service

I don't think I've missed anything, but my Angular app doesn't seem to be able to contact the service I exposed through Kubernetes.
Whenever I try to call the exposed NodePort on my localhost, I get a connection refused.
The deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: society-api-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: society-api-gateway-deployment
  template:
    metadata:
      labels:
        app: society-api-gateway-deployment
    spec:
      containers:
      - name: society-api-gateway-deployment
        image: tbusschaert/society-api-gateway:latest
        ports:
        - containerPort: 80
The service file
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
I double-checked: the call doesn't reach my pod, and the call itself fails with connection refused.
I'm using minikube and kubectl on my local machine.
I'm out of options; I've tried everything I thought it could be. Thanks in advance.
EDIT 1:
So after following the suggestions, I used the node IP to call the service.
I changed the IP in my Angular project, and now I get a connection timeout.
As for the port forward, I get a permission error.
So, as I thought, the problem was related to minikube not opening up to my localhost.
First of all, I didn't need a NodePort; a LoadBalancer also fits my needs, so my API gateway Service became a LoadBalancer.
Second, when using minikube, to achieve what I wanted (running Kubernetes on my local machine with my Angular client also on my local machine), you have to create a minikube tunnel, exactly as explained here: https://minikube.sigs.k8s.io/docs/handbook/accessing/#run-tunnel-in-a-separate-terminal
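A minimal sketch of that change, keeping the selector and ports from the Service above:
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: LoadBalancer   # was NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
Then run minikube tunnel in a separate terminal, and the Service gets an external IP that shows up in kubectl get svc.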
From the docs, you can see that the URL template is <NodeIP>:<NodePort>.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So first, take the NodeIP from the kubectl get node -o wide command.
Then try <NodeIP>:<NodePort>. For example, if the NodeIP is 172.19.0.2, then try 172.19.0.2:30001 with your sub URL.
Another way is port-forwarding: in a terminal, first start port-forwarding with kubectl port-forward svc/society-api-gateway-service 80:80, then use the URL you tried with localhost.
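If binding local port 80 gives a permission error (ports below 1024 usually require elevated privileges), forwarding to a non-privileged local port works the same way, for example:
kubectl port-forward svc/society-api-gateway-service 8080:80
# then call http://localhost:8080/<your sub URL>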

Multi Container ASP.NET Core app in a Kubernetes Pod gives error address already in use

I have an ASP.NET Core multi-container Docker app which I am now trying to host on a Kubernetes cluster on my local PC. Unfortunately, one container starts but the other gives the error "address already in use".
The Deployment file is given below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: multiapp
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: multiapi
        imagePullPolicy: Never
        ports:
        - containerPort: 81
The full log of the failing container is:
Unable to start Kestrel.
System.IO.IOException: Failed to bind to address http://[::]:80: address already in use.
---> Microsoft.AspNetCore.Connections.AddressInUseException: Address already in use
---> System.Net.Sockets.SocketException (98): Address already in use
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
--- End of inner exception stack trace ---
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.<Bind>g__BindSocket|13_0(<>c__DisplayClass13_0& )
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.TransportManager.BindAsync(EndPoint endPoint, ConnectionDelegate connectionDelegate, EndpointConfig endpointConfig)
Note that I already tried giving that container another port in the YAML file:
ports:
- containerPort: 81
But that does not seem to work. How do I fix it?
To quote this answer: https://stackoverflow.com/a/62057548/12201084
containerPort as part of the pod definition is for informational purposes only.
This means that setting containerPort does not have any influence on which port the application opens. You can even skip it and not set it at all.
If you want your application to open a specific port, you need to tell the application itself; this is usually done with flags, environment variables, or config files. Setting a port in the pod/container YAML definition won't change a thing.
You have to remember that the k8s network model is different from Docker's and Docker Compose's model.
So why does the containerPort field exist if it doesn't do anything? - you may ask.
Well, actually that is not completely true. Its main purpose is indeed informational/documentation, but it may also be used with Services: you can name a port in the pod definition and then use that name to reference the port in the Service definition YAML (this only applies to the targetPort field), as in the sketch below.
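For example, a sketch of a named port wired to a Service targetPort (the names here are illustrative, not taken from the question):
# in the pod template of the Deployment
ports:
- name: api-port
  containerPort: 81
# in the Service, targetPort can reference the port name instead of the number
apiVersion: v1
kind: Service
metadata:
  name: multiapi-svc
spec:
  selector:
    component: multi-container
  ports:
  - port: 81
    targetPort: api-port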
Check whether your images expose the same port, or whether they both try to use the same port (see the images' Dockerfiles).
I suppose both of your images try to start something on the same port, so the first container comes up fine, but when the second container starts it tries to use the same port and gets the bind: address already in use error.
You can check the logs of each container (with kubectl logs <pod_name> <container_name>) to confirm this.
I tried applying your YAML with one of my Docker images (which starts a server on port 8080), and after applying the YAML below I got the same error you got.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
I looked at the first container's log, which ran successfully, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapp, and the result is:
int port : :8080
start called
Then I looked at the second container's log, which crashed, with kubectl logs pod/multi-container-dep-854c78cfd4-7jd6n cmultiapi, and saw the error below:
int port : :8080
start called
2021/03/20 13:49:24 listen tcp :8080: bind: address already in use # this is the reason for the error
So, I suppose your images also do something like that.
What works
The YAMLs below ran both containers successfully:
1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
2.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-container-dep
  labels:
    app: aspnet-core-multi-container-app
spec:
  replicas: 1
  selector:
    matchLabels:
      component: multi-container
  template:
    metadata:
      labels:
        component: multi-container
    spec:
      containers:
      - name: cmultiapp
        image: shahincsejnu/httpapiserver:v1.0.5
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      - name: cmultiapi
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
If you have a Docker Compose YAML, please use the Kompose tool to convert it into Kubernetes objects.
Below is the documentation link:
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
Please use kubectl explain to understand every field of your deployment YAML.
As can be seen in the explanation of ports below, the ports list in the deployment YAML is primarily informational.
Since both the containers in the Pod share the same Network Namespace, the processes running inside the containers cannot use the same ports.
kubectl explain deployment.spec.template.spec.containers.ports

KIND:     Deployment
VERSION:  apps/v1

RESOURCE: ports <[]Object>

DESCRIPTION:
     List of ports to expose from the container. Exposing a port here gives the
     system additional information about the network connections a container
     uses, but is primarily informational. Not specifying a port here DOES NOT
     prevent that port from being exposed. Any port which is listening on the
     default "0.0.0.0" address inside a container will be accessible from the
     network. Cannot be updated.

     ContainerPort represents a network port in a single container.

FIELDS:
   containerPort <integer> -required-
     Number of port to expose on the pod's IP address. This must be a valid port
     number, 0 < x < 65536.

   hostIP <string>
     What host IP to bind the external port to.

   hostPort <integer>
     Number of port to expose on the host. If specified, this must be a valid
     port number, 0 < x < 65536. If HostNetwork is specified, this must match
     ContainerPort. Most containers do not need this.

   name <string>
     If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
     named port in a pod must have a unique name. Name for the port that can be
     referred to by services.

   protocol <string>
     Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".
Please provide the Dockerfiles for both images, plus the Docker Compose files or docker run / docker service create commands for the existing multi-container Docker application, for further help.
I solved this by using environment variables and assigning the ASP.NET Core URL to port 81:
- name: cmultiapi
  image: multiapi
  imagePullPolicy: Never
  ports:
  - containerPort: 81
  env:
  - name: ASPNETCORE_URLS
    value: http://+:81
I would also like to mention the url where I got the necessary help. Link is here.

Securing End User-Defined Kubernetes Pods

I am developing a game development platform that allows users to run their game servers within my Kubernetes cluster. What is everything that I need to restrict / configure to prevent malicious users from gaining access to resources they should not be allowed to access such as internal pods, Kubernetes access keys, image pull secrets, etc?
I'm currently looking at Network Policies to restrict access to internal IP addresses, but I'm not sure if they would still be able to enumerate DNS addresses to sensitive internal architecture. Would they still be able to somehow find out how my MongoDB, Redis, Kafka pods are configured?
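The kind of policy I have been sketching looks roughly like this (the namespace, labels, and CIDR ranges are placeholders for my setup):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-game-server-egress
  namespace: game-servers
spec:
  podSelector:
    matchLabels:
      app: game-server
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8        # internal pod/service/VM ranges in my cluster
        - 172.16.0.0/12
        - 192.168.0.0/16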
Also, I'm aware Kubernetes puts an API token at the /var/run/secrets/kubernetes.io/serviceaccount/token path. How do I disable this token from being created? Are there other sensitive files I need to remove / disable?
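From what I have found so far, that mount can apparently be turned off per pod (or on the ServiceAccount itself) with automountServiceAccountToken; a sketch of what I have in mind for the pod template:
spec:
  automountServiceAccountToken: false   # no token mounted at /var/run/secrets/kubernetes.io/serviceaccount
  containers:
  - name: game-server
    image: game-server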
I've been researching everything I can think of, but I want to make sure that I'm not missing anything.
Pods are defined within a Deployment with a Service, and exposed via Nginx Ingress TCP / UDP ConfigMap. Example Configuration:
---
metadata:
  labels:
    app: game-server
  name: game-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-server
  template:
    metadata:
      labels:
        app: game-server
    spec:
      containers:
      - image: game-server
        name: game-server
        ports:
        - containerPort: 7777
        resources:
          requests:
            cpu: 500m
            memory: 500M
      imagePullSecrets:
      - name: docker-registry-image-pull-secret
---
metadata:
  labels:
    app: game-server
    service: game-server
  name: game-server
spec:
  ports:
  - name: tcp
    port: 7777
  selector:
    app: game-server
TL;DR: How do I run insecure, end user-defined Pods within my Kubernetes cluster safely?

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP
  - port: 443
    targetPort: 443
    name: https
    protocol: TCP
  selector:
    app: httpd
  externalIPs:
  - 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests are received by the HTTP server with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
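A sketch of what the annotated Service would look like (per the linked docs this mechanism targets NodePort/LoadBalancer Services, and on current Kubernetes it is the spec.externalTrafficPolicy: Local field rather than the beta annotation):
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - name: http
    port: 80
    targetPort: 80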
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
      - name: caddy
        image: your_image
        env:
        - name: STATIC_BACKEND # example env in my custom image
          value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
