PushProx setup in Kubernetes for metrics from federate endpoint of Prometheus deployed using kube prometheus stack helm chart - devops

I need insight into how to set up a PushProx client in Kubernetes that will communicate with kube-prometheus and fetch all the metrics from /federate.
For reference I used the helm chart https://artifacthub.io/packages/helm/devopstales/pushprox. I created a pod template using this helm chart as a reference, since the chart deploys a DaemonSet, which is not required in my case. I only want my client to interact with the internal Prometheus so it can get all the metrics and send them to my external Prometheus. Currently the proxy is able to connect, but I see a 404 response on the proxy side when it interacts with the client.
Any help is appreciated.
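Not an answer, but for context: a federation scrape needs `match[]` selectors, and the scrape must hit the exact `/federate` path, so a 404 often means the path or target is wrong. A minimal sketch of what the external Prometheus scrape config might look like when routed through the PushProx proxy (the service names, namespace, and port here are assumptions, not taken from the chart):

```yaml
# Sketch only: external Prometheus scraping the internal one via PushProx.
# "pushprox-proxy" and "internal-prometheus.monitoring.svc" are assumed names.
scrape_configs:
  - job_name: "federate-via-pushprox"
    metrics_path: /federate
    params:
      "match[]":
        - '{job!=""}'          # pull everything; narrow this in practice
    proxy_url: http://pushprox-proxy:8080/
    static_configs:
      - targets:
          - internal-prometheus.monitoring.svc:9090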

Related

Collect metrics from a Prometheus server to telegraf

I have a Prometheus server running on a K8s instance and Telegraf on a different cluster. Is there some way to pull metrics from the Prometheus server using Telegraf? I know Telegraf supports scraping metrics from Prometheus clients, but I am looking to get these metrics from the Prometheus server.
Thanks
There is a Scrapers tab inside Data Sources; you just need to put in the URL of the server.
I am trying to configure this using the CLI, but I can only do it with the GUI.
There is a Prometheus remote write parser (https://github.com/influxdata/telegraf/tree/master/plugins/parsers/prometheusremotewrite); I think it will be included in the 1.19.0 release of Telegraf. If you want to try it out now, you can use a nightly build (https://github.com/influxdata/telegraf#nightly-builds).
Configure your Prometheus remote write to point at Telegraf, and configure the input plugin to listen for traffic on the port that you configured. For convenience's sake, configure the output plugin to write to a file so you can see the metrics almost immediately.
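A rough sketch of that Telegraf configuration, assuming the remote write parser is paired with the `http_listener_v2` input (the port, path, and output file here are placeholders):

```toml
# Sketch: receive Prometheus remote write and dump metrics to a file.
# Port 1234 and the /receive path are assumptions; match them to your
# Prometheus remote_write url.
[[inputs.http_listener_v2]]
  service_address = ":1234"
  paths = ["/receive"]
  data_format = "prometheusremotewrite"

# File output for quick inspection while testing.
[[outputs.file]]
  files = ["/tmp/metrics.out"]
```

On the Prometheus side, `remote_write` would then point at `http://<telegraf-host>:1234/receive`.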

Dask Gateway Setup on Azure

I am trying to set up Dask Gateway in AKS. Following the documentation, I was able to start the Dask Gateway server in AKS. We have also hosted a separate Jupyter Notebook instance within the same cluster. When I try to access the gateway server from this Jupyter Notebook instance, it fails with the error below:
In the Dask Gateway documentation, the gateway server is accessed using an IP address. But in an actual setup we would be using a URL, right? How can I configure the Dask Gateway helm chart for this?

How to connect Google Cloud CDN to the Cloud Run for Anthos default setup?

I have Cloud Run for Anthos set up with the default configuration, istio-ingress as a gateway, and a couple of services. I cannot find any docs on how to connect Cloud CDN with this setup.
Does anybody have some experience with that?
Cloud CDN works with HTTP(S) Load Balancing to deliver content to your users. Istio ingress works with a Network Load Balancer instead, so it cannot be used with Cloud CDN.
Alternatively, when you create an Ingress object, the GKE ingress controller creates a Google Cloud HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
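For the Ingress route, Cloud CDN can then be switched on per backend via a `BackendConfig` attached to the Service. A minimal sketch (all names here are placeholders, not from your setup):

```yaml
# Sketch: enable Cloud CDN on a GKE Ingress backend via BackendConfig.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cdn-backendconfig        # hypothetical name
spec:
  cdn:
    enabled: true
---
apiVersion: v1
kind: Service
metadata:
  name: my-service               # hypothetical name
  annotations:
    # Associates the Service's backends with the BackendConfig above.
    cloud.google.com/backend-config: '{"default": "cdn-backendconfig"}'
spec:
  type: NodePort                 # GKE Ingress backends are typically NodePort
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 8080
```

Whether this can front Cloud Run for Anthos services directly depends on how your Knative/istio routing is exposed, so treat this as a starting point rather than a verified recipe.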

How can I integrate my application with Kubernetes cluster running Docker containers?

This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have a few basic questions. I have read a lot of doc on the internet and was hoping someone can help answer few basic questions.
I am trying to create an integration with Kubernetes (user applications running inside Docker containers to be precise) and my application that would act as a backup for certain data in the containers.
1. My application currently runs in AWS. Would the Kube cluster need to run in AWS as well? Or can it run in any cloud service, or even on-prem, as long as the APIs are available?
2. Does my application need to know the IP of the master node API server to do POST/GET requests, and nothing else?
3. For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kube TokenReview API for authentication?
4. Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5. Would my application use POST/GET to communicate with the Kube master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
1. Your application needn't exist on the same server as k8s. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or node port.
2. You would only need to know the IP of the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only be using this API to administer your cluster internally. The preferred way of accessing k8s pods is to expose them via a load balancer, which allows you to access a service on any node from a single IP. k8s also allows you to access a service with a nodePort from any k8s node (except the master) through a preassigned port.
3. TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID Connect tokens seem to support it. You should review whether or not you need to allow users direct access to the k8s API at all; consider exposing services via LoadBalancer instead.
4. I'm not sure what you mean by this, but if you deploy your APIs as k8s Deployments, you can expose their endpoints through Services to communicate with your external application however you like.
5. Again, the preferred way to communicate with k8s pods from external applications is through Services exposed as load balancers, not through the built-in API on the k8s master. In the case of Services, it's up to the underlying API to decide which kinds of requests it wants to accept.
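To make the LoadBalancer suggestion concrete, a minimal Service sketch (the app label, name, and ports are hypothetical, substitute your own):

```yaml
# Sketch: expose a pod's API to external callers via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: backup-api               # hypothetical name
spec:
  type: LoadBalancer             # the cloud provider provisions an external IP
  selector:
    app: backup-api              # must match your pods' labels
  ports:
    - port: 443                  # port exposed on the load balancer
      targetPort: 8443           # port your container listens on
```

Your external AWS application would then POST/GET against the load balancer's address, never touching the k8s API server.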

How to use https to call the kubernetes api server

I have a two-node Kubernetes cluster on a Linux server, and I use the Kubernetes API to pull stats about them over an HTTP API through kube-proxy. However, I haven't found any good documentation on how to use HTTPS. I am kind of new to setting up environments, so a lot of the high-level documentation goes over my head.
You need to configure HTTPS on the Kubernetes API server. You can check Kelsey Hightower's Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way
Also look at this doc: https://kubernetes.io/docs/admin/accessing-the-api/
Configuring Kubernetes SSL manually may be hard. If you have trouble, try the kubeadm utility: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
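Once the API server is serving HTTPS, the call itself looks roughly like this (the CA path, port, and token are assumptions that depend on how your cluster was provisioned):

```shell
# Sketch: query the API server over HTTPS with a bearer token.
# The CA certificate path below is typical for kubeadm clusters; yours may differ.
APISERVER="https://<master-ip>:6443"
TOKEN="<service-account-bearer-token>"

curl --cacert /etc/kubernetes/pki/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     "${APISERVER}/api/v1/nodes"
```

The `--cacert` flag makes curl verify the API server's certificate against the cluster CA instead of skipping TLS verification.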