URL check with Prometheus using a username/password

I am looking for a way for Prometheus to check whether any given URL is working, by logging into that particular application automatically.

The best way to achieve this is using the Blackbox exporter. It allows probing of endpoints over HTTP, HTTPS, DNS, TCP, and ICMP.
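The Blackbox exporter's HTTP prober can supply credentials as part of the probe. A minimal sketch, assuming the target uses HTTP basic auth and the exporter runs at `blackbox-exporter:9115` (module name, credentials, and target URL are all placeholders):

```yaml
# blackbox.yml — exporter module that probes with basic auth
modules:
  http_2xx_with_auth:
    prober: http
    timeout: 5s
    http:
      method: GET
      basic_auth:
        username: "probe-user"
        password: "probe-password"
```

And the matching Prometheus scrape config, which rewrites the target into a `/probe` request against the exporter:

```yaml
# prometheus.yml — send the URL through the exporter
scrape_configs:
  - job_name: blackbox
    metrics_path: /probe
    params:
      module: [http_2xx_with_auth]
    static_configs:
      - targets:
          - https://app.example.com/health
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
```

Note this covers HTTP-level authentication; if the application uses a multi-step form-based login flow, the Blackbox exporter cannot script that, and a dedicated synthetic-monitoring tool would be needed.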


AKS proxy. What for?

Now you can add a proxy to your AKS cluster:
https://learn.microsoft.com/en-us/azure/aks/http-proxy
Reading the article, it sounds as if you are not going to use any other outbound system for your AKS cluster.
But when you deploy with the az CLI, you find that you still have to decide which "--outbound-type" you need. Even if you don't set it, it gets its default value.
My question is: the proxy? What for? I thought it was an alternative to outbound-type.
It doesn't matter whether I use a proxy, because I still need other outbound traffic for the whole cluster.
Am I wrong?
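To illustrate, the deployment command I mean looks roughly like this (resource names and the proxy-config file are placeholders; the JSON file carries the httpProxy/httpsProxy/noProxy settings described in the linked article):

```shell
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --http-proxy-config aks-proxy-config.json \
  --outbound-type loadBalancer   # still present (or defaulted) even with a proxy configured
```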

Elasticsearch Securing the connection

I am (desperately) new to Elasticsearch (7.9.0) and I currently have a cluster with two nodes running.
After a lot of effort it is performing as I would like it to.
It is running on Docker and also has an nginx in front of it to route traffic to it, since it is accessed directly from my website (Angular 10).
Elasticsearch is also used from my Laravel backend directly through the Docker container name, so that is secure (I guess).
My problem now is that I cannot find or understand a way to secure HTTP access from outside Docker (e.g. the normal website).
Going via Laravel is an option, but this is too slow for my purpose.
Is there a way I can securely have HTTP access to Elasticsearch from the web?
Also, is there a way I can restrict the actions to read-only actions?
If you need more info to help out, please let me know, as I am not knowledgeable about what is important here and what is not.
Thanks
Angular is a front end and runs in your user's web browser. If Angular can somehow reach your Elasticsearch instance, everyone can, no matter what. You can try to obscure it as much as you want, but if Elasticsearch is directly exposed, it will be reachable.
So you have to either accept this fact, or go the slow way and proxy the requests through Laravel, so it can verify that the requested information is actually available to the user performing the request.
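If you do decide to expose it, the nginx you already have in front can at least narrow the surface to read-only endpoints. A rough sketch; the index name, hostname, and upstream container name are assumptions, not taken from the question:

```nginx
server {
    listen 443 ssl;
    server_name search.example.com;

    # Allow only search requests against one index. POST is included
    # because _search queries with a body are sent via POST, but it is
    # still a read-only operation on this endpoint.
    location /myindex/_search {
        limit_except GET POST {
            deny all;
        }
        proxy_pass http://elasticsearch:9200;
    }

    # Block everything else: writes, deletes, cluster and index APIs.
    location / {
        return 403;
    }
}
```

This is defense at the proxy layer only; combine it with Elasticsearch's own security features (users/roles) rather than relying on it alone.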

Is there a reverse proxy for Solace Message Router?

IBM has MQIPT (IBM MQ Internet Pass-Thru), which acts as an MQ forwarder/reverse proxy to implement messaging solutions between remote sites across the internet. Is there an equivalent for Solace?
Solace has all kinds of fancy advanced features for load balancing and hybrid/multi-site deployments like bridges and dynamic message routing, but I don't really know those, and where's the fun in having everything ready-made and pre-solved for you anyway? :-)
So here I am going to assume you want to roll your own solution and use an actual reverse proxy:
You can switch to HTTP-based protocols and use any regular HTTP reverse proxy. Solace message brokers have a REST message interface, and if your application already uses the Solace API for messaging (or needs its advanced features), you can switch over to HTTP streaming or WebSockets as a transport by modifying the scheme portion of the broker URL in your application configuration (http:// or ws:// instead of tcp://). Note that this only lets you balance sessions, not individual messages within a single elephant flow.
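For the WebSocket route, any off-the-shelf reverse proxy works. A minimal nginx sketch, assuming the broker's web-transport port is the default 8008 (hostname and port are assumptions; check your broker's configuration):

```nginx
server {
    listen 80;
    server_name solace-proxy.example.com;

    location / {
        proxy_pass http://solace-broker:8008;
        # Required so the WebSocket upgrade handshake passes through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The client then points its broker URL at the proxy (ws://solace-proxy.example.com) instead of the broker itself.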

How to set traefik with OAuth2 authentication

I'm using Traefik as a reverse proxy. I want to set up OAuth2 authentication for an entry point.
In the documentation I found Forward Authentication, which I think may be useful for this. But the documentation is just too brief:
This configuration will first forward the request to http://authserver.com/auth.
If the response code is 2XX, access is granted and the original request is performed. Otherwise, the response from the authentication server is returned.
I have no idea how to achieve OAuth2 authentication within a forward.
I've tried oauth2_proxy but didn't find a solution.
In this issue/comment guybrush provided a solution, but that was in fact a double reverse proxy.
I've recently built an app for this: https://github.com/thomseddon/traefik-forward-auth
It uses Forward Authentication, as you mentioned, and uses Google OAuth to authenticate users.
There's an example Docker Compose setup here: https://github.com/thomseddon/traefik-forward-auth/blob/master/examples/docker-compose.yml. See the traefik.toml file to see how Traefik is configured to point at the app.
Let me know if it is helpful!
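For reference, the relevant traefik.toml fragment (Traefik v1 syntax, as used in the example repo) looks roughly like this; the auth service address is an assumption based on the app's default port:

```toml
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.auth.forward]
      address = "http://traefik-forward-auth:4181"
      trustForwardHeader = true
```

With this in place, Traefik sends every request on the entry point to the auth service first and only proxies it to the backend on a 2XX response, which is exactly the Forward Authentication flow quoted in the question.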
Instead of trying to make Traefik support your case, let Traefik do what it does best and instead use Keycloak Gatekeeper for authentication (and potentially authorization).
This would change your setup from
Client -- Traefik -- Service
to
Client -- Traefik -- Gatekeeper -- Service
This means that both Traefik and Gatekeeper act as reverse proxy.
It's incredibly simple to model complex auth setups with this approach. One potential drawback, however, is the additional reverse-proxy layer, so for high-performance setups this may not be an ideal solution.
Note that Gatekeeper can work with any OIDC compatible IdP, so you don't have to run Keycloak to use it.

Specifying an IP for a Domain Name

I am calling a number of apis of a web service hosted on a number of servers. Requests get routed to these servers at random through a load balancer.
All these servers reside on my local network and I want one particular api call to go to one particular server.
Since I don't want other requests to get affected, I am unwilling to put a host entry on the server hosting my app.
Can this be achieved through code?
I am coding in ruby and using net-http gem to make api calls.
Any implementation using curb gem is also welcome.
Thanks
-Azitabh
I think the best way to achieve what you want is to use a proxy with DNS spoofing.
Charles Proxy does that, but there may be other tools as well.
One way (along the same lines as suggested by systho) I can think of is to make the API call directly using the IP and create a vhost on the server which listens directly on a separate port.
This will work for me purely because I have access to the servers hosting the web service.
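The direct-IP approach can be sketched with net/http: connect to the server's address but keep the original hostname in the Host header so the vhost still matches. All hostnames, IPs, and paths below are placeholders:

```ruby
require 'net/http'

# Placeholders: the service's public hostname and the specific
# backend server's IP address.
TARGET_HOST = 'api.internal.example'
SERVER_IP   = '10.0.0.5'

# Build a request whose Host header names the vhost, independent of
# the address we actually connect to.
def build_pinned_request(path)
  req = Net::HTTP::Get.new(path)
  req['Host'] = TARGET_HOST
  req
end

req = build_pinned_request('/v1/status')

# To actually send it (needs network access to the backend):
#   res = Net::HTTP.start(SERVER_IP, 80) { |http| http.request(req) }
# On Ruby 2.5+ you can instead keep the hostname and pin the address:
#   Net::HTTP.start(TARGET_HOST, 80, ipaddr: SERVER_IP) { |http| http.request(req) }
```

Because net/http preserves a Host header you set yourself, the request reaches the chosen server while still matching the intended vhost, and no host-file entry is needed.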
