How to expose two apps/services over unique ports with k3d? - docker

*Cross-posted from k3d GitHub Discussion: https://github.com/rancher/k3d/discussions/690
I am attempting to expose two services over two ports. As an alternative, I'd also love to know how to expose them over the same port and use different routes. I've followed a few articles and tried a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s / kubernetes + traefik (+ klipper?)...
I posted an example:
https://github.com/ericis/k3d-networking
The goal:
Reach "app-1" on host over port 8080
Reach "app-2" on host over port 8091
Steps
*See: files in repo
Configure k3d cluster and expose app ports to load balancer
ports:
  # map localhost to loadbalancer
  - port: 8080:80
    nodeFilters:
      - loadbalancer
  # map localhost to loadbalancer
  - port: 8091:80
    nodeFilters:
      - loadbalancer
Deploy apps with "deployment.yaml" in Kubernetes and expose container ports
ports:
  - containerPort: 80
Expose services within kubernetes. Here, I've tried two methods.
Using CLI
$ kubectl create service clusterip app-1 --tcp=8080:80
$ kubectl create service clusterip app-2 --tcp=8091:80
Using "service.yaml"
spec:
  ports:
    - protocol: TCP
      # expose internally
      port: 8080
      # map to app
      targetPort: 80
  selector:
    run: app-1
Expose the services outside of kubernetes using "ingress.yaml"
backend:
  service:
    name: app-1
    port:
      # expose from kubernetes
      number: 8080

You either have to use an ingress, or you have to open ports on each individual node (k3d runs in Docker, so you have to expose the Docker ports).
Without opening a port during the creation of the k3d cluster, a NodePort service will not expose your app.
k3d cluster create mycluster -p 8080:30080#agent[0]
For example, this would open an "outside" port 8080 (on your localhost) and map it to port 30080 on the node - then you can use a NodePort service to actually connect the traffic from that port to your app:
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: some-port
      nodePort: 30080
  selector:
    app: pgadmin
  type: NodePort
You can also open ports on the server node like:
k3d cluster create mycluster -p 8080:30080#server[0]
Your apps can get scheduled to run on any node. If you force a pod onto a specific node (say you open a certain port on agent[0] and set up your .yaml files to work with that port), for some reason the local-path Rancher storage class just breaks and will not create a persistent volume for your claim. You kind of have to get lucky and have your pod scheduled where you need it. (If you find a way to schedule pods on specific nodes without the storage provisioner breaking, let me know.)
You also can map a whole range of ports, like:
k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100#server[0]"
but be careful with the number of ports you open; if you open too many, k3d will crash.
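As a side note, the same port mappings can be kept in a k3d config file instead of CLI flags. This is a sketch assuming a k3d version that supports the v1alpha4 config schema; field names and the node-filter syntax may differ on older releases:

```yaml
# hypothetical k3d config-file equivalent of the -p flags above
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: mycluster
servers: 1
agents: 1
ports:
  - port: 8080:30080     # host 8080 -> node port 30080
    nodeFilters:
      - server:0
```

You would then create the cluster with `k3d cluster create --config config.yaml`.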
Using a load balancer is similar; you just have to open one port and map it to the load balancer.
k3d cluster create my-cluster --port 8080:80#loadbalancer
You then have to use an ingress (or the traffic won't reach your app):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
I also think that ingress will only route HTTP and HTTPS traffic. HTTPS should be done on port 443; supposedly you can map both port 80 and port 443, but I haven't been able to get that to work (I think certificates need to be set up as well).
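Coming back to the original question, the "same port, different routes" variant could look roughly like this. It is a sketch assuming the app-1 and app-2 ClusterIP services from the question exist (on ports 8080 and 8091) and that a single loadbalancer port (e.g. 8080:80) was opened at cluster creation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /app-1
            pathType: Prefix
            backend:
              service:
                name: app-1
                port:
                  number: 8080
          - path: /app-2
            pathType: Prefix
            backend:
              service:
                name: app-2
                port:
                  number: 8091
```

With this, http://localhost:8080/app-1 and http://localhost:8080/app-2 would reach the respective services, assuming the apps can handle the path prefix (or a Traefik middleware strips it).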

Related

How can I correctly forward traffic from a container to a NodePort service with Kubernetes?

I am running Minikube on an m1 mac with the docker daemon. I have a container in a pod serving HTTP on port 7777; according to the documentation, I can use a combination of a nodeport and the minikube service command to expose it to the local machine. My configuration yaml file is pretty simple as well:
apiVersion: v1
kind: Pod
metadata:
  name: door-controls
  labels:
    type: door-controls
spec:
  containers:
    - image: door_controls
      name: door-controls
      imagePullPolicy: Never
      ports:
        - containerPort: 7777
          name: httpz
---
apiVersion: v1
kind: Service
metadata:
  name: door-control-service
spec:
  type: NodePort
  selector:
    type: door-controls
  ports:
    - name: svc-http
      protocol: TCP
      port: 80
      targetPort: httpz
Running this in minikube and then attempting to use minikube service will expose the running process on a random port. From a machine inside the network, I can wget the pod IP on port 7777 and get data back, so I know the pod is correctly serving traffic. I can also wget the door-control-service NodePort service from inside the network on port 80 and get traffic back, so I know that the door-control-service configuration is working. But no amount of futzing will let me access door-control-service inside the network via the NodePort (which is randomly generated in the ~30k port range), and the browser launched by minikube service never returns data, so I can't access it that way either.
What am I doing wrong? Or more generally, how can I debug this issue? I am new to kubernetes and not sure where in the logs I should be looking for errors in the first place.

Which ports are supposed to be exposed in a Helm Chart when TWO ports are exposed in the docker image?

When working with helm charts (generated by helm create <name>) and specifying a docker image in values.yaml such as the image "kubernetesui/dashboard:v2.4.0" in which the exposed ports are written as EXPOSE 8443 9090 I found it hard to know how to properly specify these ports in the actual helm chart-files and was wondering if anyone could explain a bit further on the topic.
By my understanding, EXPOSE 8443 9090 means that hostPort "8443" maps to containerPort "9090". In that case it seems clear that service.yaml should specify the ports in a manner similar to the following:
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 8443
      targetPort: 9090
The deployment.yaml file, however, only comes with the field "containerPort" and no port field for the 8443 port (as you can see below). Should I add some field here in deployment.yaml to include port 8443?
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
As of now, when I try to install the helm charts, I get an error message: "Container image "kubernetesui/dashboard:v2.4.0" already present on machine", and I've heard that it means the ports in service.yaml are not configured to match the docker image's exposed ports. I have tested this with a simpler docker image which only exposes one port, and just adding that port everywhere makes the error message go away, so it seems to be true, but I am still confused about how to do it with two exposed ports.
I would really appreciate some help; thank you in advance if you have any experience with this and are willing to share.
A Docker image never gets to specify any host resources it will use. If the Dockerfile has EXPOSE with two port numbers, then both ports are exposed (where "expose" means almost nothing in modern Docker). That is: this line says the container listens on both port 8443 and 9090 without requiring any specific external behavior.
In your Kubernetes Pod spec (usually nested inside a Deployment spec), you'd then generally list both ports as containerPorts:. Again, this doesn't really say anything about how a Service uses it.
# inside templates/deployment.yaml
ports:
  - name: http
    containerPort: 9090
    protocol: TCP
  - name: https
    containerPort: 8443
    protocol: TCP
Then in the corresponding Service, you'd republish either or both ports.
# inside templates/service.yaml
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 80          # default HTTP port
      targetPort: http  # matching name: in pod, could also use 9090
    - port: 443         # default HTTP/TLS port
      targetPort: https # matching name: in pod, could also use 8443
I've chosen to publish the unencrypted and TLS-secured ports on their "normal" HTTP ports, and to bind the service to the pod using the port names.
None of this setup is Helm-specific; the only Helm template reference here is the Service type: (in case the operator needs to publish a NodePort or LoadBalancer service).
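For completeness, the values.yaml side of that template might look like the following. This is a sketch; the service.type key is an assumption matching the {{ .Values.service.type }} reference above:

```yaml
# hypothetical values.yaml fragment backing the templates above
service:
  type: ClusterIP   # operator may override with NodePort or LoadBalancer
```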

How do i translate a docker command with -p 80:80 to kubernetes yaml

docker run -it -p 80:80 con-1
docker run -it -p hostport:containerport
Let's say I have this yaml definition. Is the part below where it says ports -> containerPort: 80 sufficient? In other words, how do I account for the host port and container port of -p 80:80 in a kubernetes yaml definition?
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
Exposing ports of applications with k8s is different from exposing them with docker.
For pods, the spec.containers.ports field isn't used to expose ports. It is mostly used for documentation purposes and also to name ports so that you can reference them later in a service object's targetPort by name instead of by number (https://stackoverflow.com/a/65270688/12201084).
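To illustrate the port-naming point, here is a minimal sketch (the names web and http are placeholders) of a pod port being referenced by name from a service's targetPort:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    run: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: http          # the name is what matters here
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    run: web
  ports:
    - port: 80
      targetPort: http        # refers to the named containerPort above
```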
So how do we expose pods to the outside?
It's done with service objects. There are 4 types of service: ClusterIP, NodePort, LoadBalancer and ExternalName.
They are all well explained in the k8s documentation, so I am not going to explain them here. Check out the K8s docs on types of services.
Assuming you know what type you want to use, you can now use kubectl to create the service:
kubectl expose pod <pod-name> --port <port> --target-port <target-port> --type <type>
kubectl expose deployment <deployment-name> --port <port> --target-port <target-port> --type <type>
Where:
--port - used to specify the port on which you want to expose the application
--target-port - used to specify the port on which the application is running
--type - used to specify the type of service
With docker you would use -p <port>:<target-port>
OK, but maybe you don't want to use kubectl to create a service, and you would like to keep the service in git or wherever as a yaml file. You can check out the examples in the k8s docs, copy one and write your own yaml, or do the following:
$ kubectl expose pod my-svc --port 80 --dry-run=client -oyaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: my-svc
  name: my-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-svc
status:
  loadBalancer: {}
Note: if you don't pass a value for --target-port, it defaults to the same value as --port.
Also notice the selector field, which has the same values as the labels on the pod. The service will forward traffic to every pod with these labels (within the namespace).
Now, if you don't pass a value for --type, it defaults to ClusterIP, which means the service will be accessible only from within the cluster.
If you want to access the pod/application from the outside, you need to use either NodePort or LoadBalancer.
NodePort opens some random port on every node, and connecting to this port forwards the packets to the pod. The problem is that you can't just pick any port to open, and often you don't get to pick the port at all (it's randomly assigned).
With type LoadBalancer you can use whatever port you'd like, but you need to run in a cloud and use the cloud provisioner to create and configure an external load balancer for you and point it at your pod. If you are running on bare metal, you can use projects like MetalLB to make use of the LoadBalancer type.
To summarize, exposing containers with docker is totally different from exposing them with kubernetes. Don't assume k8s will work the same way docker works just with different notation, because it won't.
Read the docs and blogs about k8s services and learn how they work.
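Putting it together, the closest declarative translation of docker run -p 80:80 for the task-pv-pod from the question would be a NodePort (or LoadBalancer) service. This is a sketch; note that NodePort cannot give you host port 80 itself, since node ports are restricted to a range (30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: task-pv-service
spec:
  type: NodePort
  selector:
    # assumes the pod carries this label; the example pod in the
    # question defines no labels, so one would need to be added
    app: task-pv-pod
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 80    # the containerPort of nginx
      nodePort: 30080   # reachable as <node-ip>:30080 from outside
```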

How to route test traffic through kubernetes cluster (minikube)?

I have a minikube cluster with two pods (with ubuntu containers). What I need to do is route test traffic from one port to another through this minikube cluster. This traffic should be sent through these two pods like in the picture. I am a beginner in this Kubernetes stuff so I really don't know how to do this and which way to go... Please, help me or give me some hints.
I am working on ubuntu server ver. 18.04.
I agree with the answer provided by @Harsh Manvar, and I would also like to expand a little bit on this topic.
There already is an answer with a similar setup. I encourage you to check it out:
Stackoverflow.com: Questions: How to access a service from other machine in LAN
There are different drivers that could be used to run your minikube, and they differ in how they deal with inbound traffic. I missed the part telling which driver is used in this setup (comment). If it's Docker, as shown in the tags, you can follow the example below.
Example
Steps:
Spawn nginx-one and nginx-two Deployments to imitate Pods from the image
Create a service that will be used to send traffic from nginx-one to nginx-two
Create a service that will allow you to connect to nginx-one from LAN
Test the setup
Spawn nginx-one and nginx-two Deployments to imitate Pods from the image
You can use following definitions to spawn two Deployments where each one will have a single Pod:
nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-one
spec:
  selector:
    matchLabels:
      app: nginx-one
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-one
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
nginx-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-two
spec:
  selector:
    matchLabels:
      app: nginx-two
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-two
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Create a service that will be used to send traffic from nginx-one to nginx-two
You will need to use a Service to send traffic from nginx-one to nginx-two. An example of such a Service could be the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-two-service
spec:
  type: ClusterIP # could be changed to NodePort
  selector:
    app: nginx-two # IMPORTANT
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
After applying this definition, you will be able to send traffic to nginx-two by using the service name (nginx-two-service).
A side note!
You can use the IP of the Pod without the Service but this is not a recommended way.
Create a service that will allow you to connect to nginx-one from LAN
Assuming that you want to expose your minikube instance to the LAN with the Docker driver, you will need to create a service and expose it. An example of such a setup could be the following:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one-service
spec:
  type: ClusterIP # could be changed to NodePort
  selector:
    app: nginx-one # IMPORTANT
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
You will also need to run:
$ kubectl port-forward --address 0.0.0.0 service/nginx-one-service 8000:80
The above command (run on your minikube host!) will expose your nginx-one-service on the LAN. It maps port 8000 on the machine that ran this command to port 80 of the service. You can check it by executing, from another machine on the LAN:
curl IP_ADDRESS_OF_MINIKUBE_HOST:8000
A side note!
You will need root access for inbound traffic to enter on ports lower than 1024.
Test the setup
You will need to check whether there is communication between the objects, as shown in the "connection diagram" below.
PC -> nginx-one -> nginx-two -> example.com
The testing methodology could be following:
PC -> nginx-one:
Run on a machine in your LAN:
curl MINIKUBE_IP_ADDRESS:8000
nginx-one -> nginx-two:
Exec into your nginx-one Pod and run command:
$ kubectl exec -it NGINX_POD_ONE_NAME -- /bin/bash
$ curl nginx-two-service
nginx-two -> example.com:
Exec into your nginx-two Pod and run command:
$ kubectl exec -it NGINX_POD_TWO_NAME -- /bin/bash
$ curl example.com
If you completed above steps you can swap nginx Pods for your own software.
Additional notes and resources:
I encourage you to check kubeadm as it's the tool to create your own Kubernetes clusters:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
As you said:
I am a beginner in this Kubernetes stuff so I really don't know how to do this and which way to go... Please, help me or give me some hints.
You could check following links for more resources:
Kubernetes.io
Kubernetes: Docs: Concepts: Workloads: Controllers: Deployment
Kubernetes.io: Docs: Concepts: Services networking: Service
There are multiple options you can follow:
As you have two Pods, you can expose one via a service,
so service-1 is exposed and sends traffic to Pod-1.
Pod-1 will send a request to service-2.
This way traffic gets forwarded to Pod-2, and from there it goes out of the cluster.
There is also the possibility of container-to-container communication if you can run both applications in a single Pod.
For Pod-1 to Pod-2 communication you can use the service option or the Pod URI.
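The single-Pod option mentioned above could be sketched like this. Two containers in one Pod share a network namespace, so they can reach each other on localhost; the images and ports here are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-apps
spec:
  containers:
    - name: app-1
      image: nginx            # placeholder image, listens on 80
      ports:
        - containerPort: 80
    - name: app-2
      image: nginx            # placeholder; it would need to be configured to
      ports:                  # listen on a different port than app-1, because
        - containerPort: 8080 # both containers share the pod's network namespace
```

Inside app-1, the second application would then be reachable as localhost:8080, with no Service needed for the in-pod hop.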

How can we access ubuntu container image from outside the host?

We access the container through the cluster IP, and even the web application containers we deploy can be accessed. The issue is how we can access a container from outside the host.
I tried giving an external IP to the containers.
You can create a service and bind it to a node port; from outside your cluster, you can then access that service using node_ip:port.
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  ports:
    - port: 80
      name: http
      targetPort: api-http
      nodePort: 30004
    - port: 443
      name: https
      targetPort: api-http
  type: LoadBalancer
  selector:
    run: api-server
If you run kubectl get service, you can get the external IP.
The best approach would be to expose your pods with ClusterIP type services, and then use an Ingress resource along with Ingress Controller to expose HTTP and/or HTTPS routes so you can access your app outside of the cluster.
For testing purposes it's OK to use NodePort or LoadBalancer type services. You can use NodePort whether you are running on your own infrastructure or using a managed solution, while using LoadBalancer requires a cloud provider's load balancer.
Source: Official docs
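The ClusterIP-plus-Ingress approach described above could be sketched as follows. This assumes an ingress controller (e.g. ingress-nginx or Traefik) is installed and that the api-server service from the earlier example is switched to type ClusterIP; the host name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-server-ingress
spec:
  rules:
    - host: api.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-server
                port:
                  number: 80
```

With DNS (or an /etc/hosts entry) pointing api.example.com at the ingress controller, HTTP traffic reaches the pods without exposing any node ports.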
