I am new to Prometheus and relatively new to Kubernetes, so bear with me, please. I am trying out Prometheus and have taken two different approaches.
The first: run Prometheus as a Docker container outside of Kubernetes. To accomplish this I created this Dockerfile:
FROM prom/prometheus
ADD prometheus.yml /etc/prometheus/
and this yaml file:
global:
scrape_interval: 15s
external_labels:
monitor: 'codelab-monitor'
scrape_configs:
- job_name: 'kubernetes-apiservers'
scheme: http
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: endpoints
api_server: localhost:443
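For reference, I build and run the image like this (the image tag is just what I picked):
$ docker build -t my-prometheus .
$ docker run -p 9090:9090 my-prometheus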
When I run this I get:
Failed to list *v1.Pod: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Service: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
on a loop. Prometheus will load when I go to localhost:9090 but there is no data.
The second: I thought deploying Prometheus as a Kubernetes Deployment might help, so I made this yaml and deployed it.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: prometheus-monitor
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus-monitor
image: prom/prometheus
# args:
# - '-config.file=/etc/prometheus/prometheus.yaml'
imagePullPolicy: IfNotPresent
ports:
- name: webui
containerPort: 9090
The deployment was successful, but if I go to localhost:9090 I get 'ERR_SOCKET_NOT_CONNECTED'. (my port is forwarded)
Can anyone tell me the advantages of running Prometheus inside vs. outside of Kubernetes, and how to fix at least one of these issues?
Also, my config-file argument is commented out because it was giving an error; I will look into that once I am able to get Prometheus loaded.
Kubernetes does not map the port outside its cluster when you deploy your container.
You also have to create a Service (it can be in the same file) to make it available from your workstation. Append this to your Prometheus yaml:
---
apiVersion: v1
kind: Service
metadata:
name: prometheus-web
labels:
app: prometheus
spec:
type: NodePort
ports:
- port: 9090
protocol: TCP
targetPort: 9090
nodePort: 30090
name: webui
selector:
app: prometheus
NodePort opens the given port on all of your nodes. You should be able to see the frontend at http://localhost:30090/
By default, Kubernetes allows ports 30000 to 32767 for the NodePort type (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport).
Please consider reading the documentation in general for more information on Services in Kubernetes: https://kubernetes.io/docs/concepts/services-networking/service/
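You can verify the mapping with kubectl; the PORT(S) column should show 9090:30090/TCP:
$ kubectl get service prometheus-web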
So, there are two different issues here. On the first:
You are trying to connect to localhost:443, where Prometheus expects to find a Kubernetes API server, but apparently nothing is listening there. Are you port-forwarding to your kube-apiserver?
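If not, a quick way to get an API endpoint on localhost is kubectl proxy; a sketch, assuming your kubeconfig works from that machine (note the proxy serves plain HTTP on a high port, not 443):
$ kubectl proxy --port=8001
# kube-apiserver is now reachable without auth headers at http://localhost:8001,
# so the kubernetes_sd_configs could point there instead:
# api_server: http://localhost:8001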
On the second: you need to expose your deployment's port, with something like:
kubectl expose deployment prometheus-monitor --type=LoadBalancer # or
kubectl expose deployment prometheus-monitor --type=NodePort
depending on how you want to expose your service. NodePort exposes it as a service that maps to a port on your Kubernetes nodes (IPAddress:Port), while LoadBalancer exposes the deployment using an external load balancer, which varies depending on what cloud you are using (AWS, GCP, OpenStack, Azure, etc.). See the Kubernetes documentation for more about exposing Deployments, DaemonSets, or StatefulSets, and about Services.
Hope it helps.
Related
I am running Minikube on an M1 Mac with the Docker driver. I have a container in a pod serving HTTP on port 7777; according to the documentation, I can use a combination of a NodePort and the minikube service command to expose it to the local machine. My configuration yaml file is pretty simple as well:
apiVersion: v1
kind: Pod
metadata:
name: door-controls
labels:
type: door-controls
spec:
containers:
- image: door_controls
name: door-controls
imagePullPolicy: Never
ports:
- containerPort: 7777
name: httpz
---
apiVersion: v1
kind: Service
metadata:
name: door-control-service
spec:
type: NodePort
selector:
type: door-controls
ports:
- name: svc-http
protocol: TCP
port: 80
targetPort: httpz
Running this in minikube and then attempting to use minikube service will expose the running process on a random port. From a machine inside the network, I can wget the pod IP on port 7777 and get data back, so I know the pod is serving traffic correctly. I can also wget the door-control-service NodePort service from inside the network on port 80 and get traffic back, so I know the door-control-service configuration is working. But no amount of futzing will let me access door-control-service via the NodePort (which is randomly generated in the ~30k port range), and the browser launched by minikube service never returns data, so I can't access it from outside the cluster either.
What am I doing wrong? Or more generally, how can I debug this issue? I am new to kubernetes and not sure where in the logs I should be looking for errors in the first place.
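For reference, here is how I have been inspecting things so far (service name from my yaml above):
$ kubectl get svc door-control-service          # shows the assigned NodePort, e.g. 80:3xxxx/TCP
$ kubectl describe svc door-control-service     # Endpoints should list the pod IP on port 7777
$ minikube service door-control-service --url   # prints the URL of the tunnel minikube opens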
*Cross-posted from k3d GitHub Discussion: https://github.com/rancher/k3d/discussions/690
I am attempting to expose two services over two ports. As an alternative, I'd also love to know how to expose them over the same port and use different routes. I've followed a few articles and tried a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s/Kubernetes + Traefik (+ Klipper?)...
I posted an example:
https://github.com/ericis/k3d-networking
The goal:
Reach "app-1" on host over port 8080
Reach "app-2" on host over port 8091
Steps
*See: files in repo
Configure k3d cluster and expose app ports to load balancer
ports:
# map localhost to loadbalancer
- port: 8080:80
nodeFilters:
- loadbalancer
# map localhost to loadbalancer
- port: 8091:80
nodeFilters:
- loadbalancer
Deploy apps with "deployment.yaml" in Kubernetes and expose container ports
ports:
- containerPort: 80
Expose services within kubernetes. Here, I've tried two methods.
Using CLI
$ kubectl create service clusterip app-1 --tcp=8080:80
$ kubectl create service clusterip app-2 --tcp=8091:80
Using "service.yaml"
spec:
ports:
- protocol: TCP
# expose internally
port: 8080
# map to app
targetPort: 80
selector:
run: app-1
Expose the services outside of kubernetes using "ingress.yaml"
backend:
service:
name: app-1
port:
# expose from kubernetes
number: 8080
You either have to use an ingress, or you have to open ports on each individual node (k3d runs on Docker, so you have to expose the Docker ports).
Without opening a port during the creation of the k3d cluster, a NodePort service will not expose your app:
k3d cluster create mycluster -p 8080:30080#agent[0]
For example, this would open an "outside" port 8080 (on your localhost) and map it to 30080 on the node; then you can use a NodePort service to actually connect the traffic from that port to your app:
apiVersion: v1
kind: Service
metadata:
name: some-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: some-port
nodePort: 30080
selector:
app: pgadmin
type: NodePort
You can also open ports on the server node like:
k3d cluster create mycluster -p 8080:30080#server[0]
Your apps can get scheduled to run on any node. If you force a pod onto a specific node (let's say you open a certain port on agent[0] and set up your .yaml files to use that port), then for some reason the local-path Rancher storage class just breaks and will not create a persistent volume for your claim. You kind of have to get lucky and have your pod scheduled where you need it. (If you find a way to schedule pods on specific nodes without breaking the storage provisioner, let me know.)
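For what it's worth, the usual way to pin a pod to a node is a nodeSelector; a minimal sketch (the node name is hypothetical, take yours from kubectl get nodes), though whether the local-path provisioner then cooperates is another matter:
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    # hypothetical node name; list yours with: kubectl get nodes
    kubernetes.io/hostname: k3d-mycluster-agent-0
  containers:
  - name: app
    image: nginx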
You also can map a whole range of ports, like:
k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100#server[0]"
but be careful with the number of ports you open; if you open too many, k3d will crash.
Using a load balancer is similar; you just have to open one port and map it to the load balancer.
k3d cluster create my-cluster --port 8080:80#loadbalancer
You then have to use an ingress (or the traffic won't reach your services):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello
port:
number: 80
I also think that ingress will only route HTTP and HTTPS traffic. HTTPS should be served on port 443; supposedly you can map both port 80 and port 443, but I haven't been able to get that to work (I think certificates need to be set up as well).
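For completeness, the TLS side of an Ingress is declared roughly like this; a sketch assuming a hypothetical hostname and a TLS secret created beforehand:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-tls
spec:
  tls:
  - hosts:
    - example.local           # hypothetical hostname
    # assumes: kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
    secretName: example-tls
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80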
I don't think I'm missing anything, but my Angular app doesn't seem to be able to contact the service I exposed through Kubernetes.
Whenever I try to call the exposed NodePort on my localhost, I get a connection refused.
The deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
name: society-api-gateway-deployment
spec:
replicas: 1
selector:
matchLabels:
app: society-api-gateway-deployment
template:
metadata:
labels:
app: society-api-gateway-deployment
spec:
containers:
- name: society-api-gateway-deployment
image: tbusschaert/society-api-gateway:latest
ports:
- containerPort: 80
The service file
apiVersion: v1
kind: Service
metadata:
name: society-api-gateway-service
spec:
type: NodePort
selector:
app: society-api-gateway-deployment
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30001
I double-checked: the call doesn't reach my pod, and the call fails with the connection-refused error mentioned above.
I'm using minikube and kubectl on my local machine.
I'm out of options; I've tried everything I thought it could be. Thanks in advance.
EDIT 1:
So after following the suggestions, I used the node IP to call the service.
I changed the IP in my Angular project, and now I get a connection timeout.
As for the port-forward, I get a permission error.
So, as I thought, the problem was related to minikube not opening up to my localhost.
First of all, I didn't need a NodePort; a LoadBalancer also fit my need, so my API gateway became a LoadBalancer.
Second, when using minikube, to achieve what I wanted (running Kubernetes on my local machine with my Angular client also on my local machine), you have to create a minikube tunnel, exactly as they explain here: https://minikube.sigs.k8s.io/docs/handbook/accessing/#run-tunnel-in-a-separate-terminal
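In practice that boils down to something like this (run in a separate terminal and leave it open):
$ minikube tunnel
# in another terminal, the LoadBalancer service should now show an external IP:
$ kubectl get svc society-api-gateway-service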
From the docs, you can see that the template is <NodeIP>:<NodePort>.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So first, take the NodeIP from the kubectl get node -o wide command.
Then try <NodeIP>:<NodePort>. For example, if the NodeIP is 172.19.0.2, then try 172.19.0.2:30001 with your sub-URL.
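Putting that together (the IP is the example one above; the path is hypothetical, whatever your API expects):
$ kubectl get node -o wide             # take the INTERNAL-IP column, e.g. 172.19.0.2
$ curl 172.19.0.2:30001/your-sub-url   # hypothetical path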
Or, another way is port-forwarding. In a terminal, first try port-forwarding with kubectl port-forward svc/society-api-gateway-service 80:80. Then use the URL you tried before, with localhost.
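A note on that: binding local port 80 usually needs elevated privileges (which would explain the permission error above), so a high local port may be easier; a sketch:
$ kubectl port-forward svc/society-api-gateway-service 8080:80
# then call http://localhost:8080 instead of http://localhost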
I have a minikube cluster with two pods (with Ubuntu containers). What I need to do is route test traffic from one port to another through this minikube cluster; the traffic should be sent through these two pods, as in the picture. I am a beginner at this Kubernetes stuff, so I really don't know how to do this or which way to go... Please help me or give me some hints.
I am working on Ubuntu Server 18.04.
(diagram: PC -> pod one -> pod two -> example.com)
I agree with the answer provided by @Harsh Manvar, and I would also like to expand a little bit on this topic.
There is already an answer with a similar setup; I encourage you to check it out:
Stackoverflow.com: Questions: How to access a service from other machine in LAN
There are different drivers that can be used to run your minikube, and they differ in how they handle inbound traffic. I missed the part telling which driver is used in this setup (see the comments). If it's the Docker driver shown in the tags, you can follow the example below.
Example
Steps:
Spawn nginx-one and nginx-two Deployments to imitate Pods from the image
Create a service that will be used to send traffic from nginx-one to nginx-two
Create a service that will allow you to connect to nginx-one from LAN
Test the setup
Spawn nginx-one and nginx-two Deployments to imitate Pods from the image
You can use the following definitions to spawn two Deployments, where each one will have a single Pod:
nginx-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-one
spec:
selector:
matchLabels:
app: nginx-one
replicas: 1
template:
metadata:
labels:
app: nginx-one
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
nginx-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-two
spec:
selector:
matchLabels:
app: nginx-two
replicas: 1
template:
metadata:
labels:
app: nginx-two
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
Create a service that will be used to send traffic from nginx-one to nginx-two
You will need to use a Service to send the traffic from nginx-one to nginx-two. An example of such a Service could be the following:
apiVersion: v1
kind: Service
metadata:
name: nginx-two-service
spec:
type: ClusterIP # could be changed to NodePort
selector:
app: nginx-two # IMPORTANT
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
After applying this definition, you will be able to send traffic to nginx-two by using the service name (nginx-two-service).
A side note!
You can use the IP of the Pod without the Service, but this is not recommended.
Create a service that will allow you to connect to nginx-one from LAN
Assuming that you want to expose your minikube instance to the LAN with the Docker driver, you will need to create a service and expose it. An example of such a setup could be the following:
apiVersion: v1
kind: Service
metadata:
name: nginx-one-service
spec:
type: ClusterIP # could be changed to NodePort
selector:
app: nginx-one # IMPORTANT
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
You will also need to run:
$ kubectl port-forward --address 0.0.0.0 service/nginx-one-service 8000:80
The above command (run on your minikube host!) will expose your nginx-one-service on the LAN. It maps port 8000 on the machine that ran this command to port 80 of this service. You can check it by executing, from another machine on the LAN:
curl IP_ADDRESS_OF_MINIKUBE_HOST:8000
A side note!
You will need root access to have your inbound traffic enter on ports lower than 1024.
Test the setup
You will need to check if there is communication between the objects, as shown in the "connection diagram" below.
PC -> nginx-one -> nginx-two -> example.com
The testing methodology could be the following:
PC -> nginx-one:
Run on a machine in your LAN:
curl MINIKUBE_IP_ADDRESS:8000
nginx-one -> nginx-two:
Exec into your nginx-one Pod and run:
$ kubectl exec -it NGINX_POD_ONE_NAME -- /bin/bash
$ curl nginx-two-service
nginx-two -> example.com:
Exec into your nginx-two Pod and run:
$ kubectl exec -it NGINX_POD_TWO_NAME -- /bin/bash
$ curl example.com
If you completed the above steps, you can swap the nginx Pods for your own software.
Additional notes and resources:
I encourage you to check kubeadm as it's the tool to create your own Kubernetes clusters:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
As you said:
I am a beginner in this Kubernetes stuff so I really don't know how to do this and which way to go... Please, help me or give me some hints.
You could check following links for more resources:
Kubernetes.io
Kubernetes: Docs: Concepts: Workloads: Controllers: Deployment
Kubernetes.io: Docs: Concepts: Services networking: Service
There are multiple options you can follow:
As you have two Pods, you can expose one via a service,
so service-1 is exposed and sends traffic to Pod-1.
Pod-1 will then send a request to service-2 inside Kubernetes.
This way the traffic gets forwarded to Pod-2, and from there it goes out of the cluster.
There is also the possibility of container-to-container communication if you can run both applications in a single Pod (a minimal sketch follows after this list).
For Pod-1 to Pod-2 communication you can use the service option or the Pod URI.
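A minimal sketch of the single-Pod option, assuming two placeholder images that listen on different ports; containers in one Pod share a network namespace, so they reach each other on localhost:
apiVersion: v1
kind: Pod
metadata:
  name: two-apps
spec:
  containers:
  - name: app-1
    image: nginx        # placeholder image, listens on 80
    ports:
    - containerPort: 80
  - name: app-2
    image: redis        # placeholder image, listens on 6379
    ports:
    - containerPort: 6379
# app-1 can reach app-2 at localhost:6379, and app-2 can reach app-1 at localhost:80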
I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
name: httpd
spec:
ports:
- port: 80
targetPort: 80
name: http
protocol: TCP
- port: 443
targetPort: 443
name: https
protocol: TCP
selector:
app: httpd
externalIPs:
- 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests are received by httpd with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
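Applied to the Service from the question, that would look something like this (annotation name taken from the linked docs; note that newer clusters use the spec.externalTrafficPolicy: Local field instead):
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    # beta annotation from the docs linked above
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  selector:
    app: httpd
  ports:
  - port: 80
    targetPort: 80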
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: caddy
spec:
replicas: 1
template:
metadata:
labels:
app: caddy
spec:
hostNetwork: true # <---------
containers:
- name: caddy
image: your_image
env:
- name: STATIC_BACKEND # example env in my custom image
value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod inherits the host's DNS resolver rather than Kubernetes'. That means you can no longer resolve cluster services by DNS name; for example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected via environment variables.