I'm wondering: is it possible to set an alternative "timeout server" for a specific action (URL path)? For example, something like
timeout server 1000
timeout server /something-that-takes-long-time-to-respond 10000
?
This may be solved using separate backends.
frontend www-http
    bind 10.0.0.1:80
    acl long_url path_beg -i /long_url
    use_backend app-extended if long_url
    default_backend app

backend app
    server web-1 10.0.0.2:80 check

backend app-extended
    timeout server 10m
    server web-1 10.0.0.2:80 track app/web-1
Such a configuration allows executing time-consuming requests while configuring the longer timeout only for specific URLs. Please note the track setting, which makes this backend reuse the health check of app/web-1 instead of running a separate one against the same server.
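For completeness, a minimal sketch of where the two timeouts would sit, assuming the short default lives in the defaults section (timeouts other than timeout server are placeholders):

```
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 1s       # short default, applies to backend "app"

backend app-extended
    timeout server 10m      # overrides the default only for the slow URLs
    server web-1 10.0.0.2:80 track app/web-1
```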
I have a client that can only make requests without authentication information.
I would like to use HAProxy or a similar proxy solution to add OAuth authentication to these client requests.
I already succeeded to add a Bearer token to the client requests. See below for the haproxy.cfg with some placeholders.
frontend front
    mode http
    bind *:8080
    default_backend servers
    http-request add-header Authorization "Bearer {{ .Env.ACCESS_TOKEN}}"

backend servers
    mode http
    server server1 myserver.com:443 ssl
The problem is that the access tokens have a TTL of 24 hours. So I need to refresh them or get a new token periodically.
Does HAProxy support this already?
I can write a script to get a new access token periodically, update the config and restart HAProxy. Is this a good approach when running HAProxy in Docker? Are there better solutions?
You could try creating/testing your script in Lua, which is now supported in the latest HAProxy versions; check "How Lua runs in HAProxy".
An example of this approach, but using Nginx + Lua, can be found in this project: https://github.com/jirutka/ngx-oauth
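The script-plus-reload approach from the question can also work without Lua. Below is a sketch only: TOKEN_URL, CLIENT_ID/CLIENT_SECRET, the template marker @ACCESS_TOKEN@ and the container name "my-haproxy" are all assumptions to adapt to your OAuth provider and deployment; the response parsing assumes a standard JSON token response.

```shell
#!/bin/sh
# Sketch: fetch a fresh OAuth token and reload HAProxy with it.
# All endpoints, credentials and names below are placeholders.

TOKEN_URL="${TOKEN_URL:-https://auth.example.com/oauth/token}"
TEMPLATE="${TEMPLATE:-/etc/haproxy/haproxy.cfg.tmpl}"
CONFIG="${CONFIG:-/etc/haproxy/haproxy.cfg}"

# Pull "access_token" out of a JSON token response read from stdin.
extract_token() {
    sed -n 's/.*"access_token" *: *"\([^"]*\)".*/\1/p'
}

# Fetch a new token, render the config from the template, reload HAProxy.
refresh_once() {
    token=$(curl -s -u "$CLIENT_ID:$CLIENT_SECRET" \
                 -d grant_type=client_credentials "$TOKEN_URL" | extract_token)
    [ -n "$token" ] || { echo "token fetch failed" >&2; return 1; }
    sed "s/@ACCESS_TOKEN@/$token/" "$TEMPLATE" > "$CONFIG"
    # The official haproxy Docker image reloads its config on SIGHUP.
    docker kill -s HUP my-haproxy
}

# Run well within the 24h TTL, e.g. from cron:
#   0 */12 * * * /usr/local/bin/refresh-token.sh run
if [ "$1" = "run" ]; then
    refresh_once
fi
```

With this, the template would contain `http-request add-header Authorization "Bearer @ACCESS_TOKEN@"` and the rendered haproxy.cfg gets a literal token, so no restart of the container is needed, only a reload signal.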
I'm facing a problem when trying to use Redis as a session server for the configuration below:
More than one Windows server hosting the same application with https://github.com/Azure/aspnet-redis-providers
An Elastic Load Balancer with weighted routing redirects requests to all IIS servers
Redis is hosted in AWS ElastiCache and is accessible from both servers
Redis serves as a session server for one server at a time without any issue
For each session, 3 keys are created:
"{/_ktffpxxxxxxg2xixdnhe}_Write_Lock"
"{/_ktffpxxxxxxg2xixdnhe}_Data"
"{/_ktffpxxxxxxg2xixdnhe}_Internal"
Issue: when more than one server tries to serve the same user by accessing the session from Redis at the same instant, if server1 has placed the _Write_Lock then server2 fails to read+update-timeout or write the data, and after that it clears the key.
--> Result: the user's next request to any server fails to verify his/her session.
Am I the only one who gets this issue? Please help...
Note: with session stickiness enabled in the ELB the sign-out is not intermittent, but that prevents us from taking a server out for an upgrade without losing all user sessions on that server.
My Rails app listens on a single port for both API calls and browser requests. To increase security I would like to open another port for the API and make web page URLs unavailable on that port.
How can I do this in Rails (possibly without losing current app integrity)?
I use WEBrick or Puma during development and Apache+Passenger in production.
P.S.
Currently I'm thinking about making an HTTP proxy which will forward API calls.
Unicorn binds to all interfaces on TCP port 8080 by default. You may use the -l switch to bind to a different address:port. Each worker process can also bind to a private port via the after_fork hook, but I think that is not useful if you have nginx on the top layer.
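If nginx is the top layer, the port split can be done there instead of in Rails. A minimal sketch, assuming the app listens on 127.0.0.1:8080 and the API routes live under /api/ (both assumptions):

```nginx
# Port 80: browser traffic only -- API paths are hidden here.
server {
    listen 80;
    location /api/ { return 404; }
    location /     { proxy_pass http://127.0.0.1:8080; }
}

# Port 8081: API only -- web page URLs are unavailable on this port.
server {
    listen 8081;
    location /api/ { proxy_pass http://127.0.0.1:8080; }
    location /     { return 404; }
}
```

This keeps the Rails app untouched; only the proxy decides which paths each port exposes.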
I just noticed an issue on our production server whereby the apache balancer was configured thusly:
<Proxy balancer://thin_cluster>
    BalancerMember http://127.0.0.1:6000
    BalancerMember http://127.0.0.1:6001
    BalancerMember http://127.0.0.1:6002
    ProxySet lbmethod=bybusyness maxattempts=1 timeout=30
</Proxy>
But the thin config file only specified 2 servers:
thin.yml (condensed for brevity)
address: 127.0.0.1
port: 6000
servers: 2 # <-- wrong!!
The number of thin servers was increased from 2 to 3 about 6 months ago, but whoever increased it forgot to update the servers count in the thin.yml file (they only did it in the Apache config file). The reason I started looking into this is that it had been noticed that every third request to the application was slow. I'm assuming this is why.
The question I have is: what would thin actually do under these conditions? Why did the application still work? Surely every third request would have died outright rather than "coped with the situation".
Thanks in advance.
Thin doesn't know or care about the Apache configuration. It only adheres to its own config and will only spawn 2 servers as a result.
The reason every third request was slow is probably Apache rerouting the request. Since the two thin servers use ports 6000 and 6001, the reference from Apache to port 6002 cannot reach a server: the port is simply not in use.
Apache still tries sending the request there, because it also doesn't know whether there's a server behind that address/port. It then waits for a timeout (a few seconds) since no response is given, and then reroutes the request to one of the other ports (6000 or 6001).
Apache doesn't permanently mark the unreachable server as down, because it might just be a temporary outage. You can probably change this behavior with some settings (at least that is possible in Nginx).
You should either remove the third port definition in Apache or add another thin server.
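In other words, the fix is to make the two files agree. Assuming you keep the third BalancerMember, the corrected thin.yml (condensed as above) would be:

```yaml
# thin.yml -- the servers count must match the BalancerMember entries in Apache
address: 127.0.0.1
port: 6000
servers: 3   # thin starts instances on ports 6000, 6001 and 6002
```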
I have a web server which is protected behind http-basic-auth. I've read through the monit docs and it doesn't seem like there's a clear way to pass credentials in order to test that the test page on the server is being returned correctly.
Any thoughts?
Thanks!
(Please don't confuse this with monit's own httpd for showing monit status in a web page)
PS: this is monit 4.8.1 -- the one that comes with Ubuntu Hardy 8.04.
It seems to be possible to include the credentials in the URL; have you tried this?
(from http://mmonit.com/monit/documentation/monit.html#connection_testing )
[...] Where URL-spec is an URL on the standard form as specified in RFC 2396:
<protocol>://<authority><path>?<query>
Here is an example of an URL where all components are used:
http://user:password@www.foo.bar:8080/document/?querystring#ref
If a username and password is included in the URL, Monit will attempt to login at the server using Basic Authentication.
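Putting that together, a sketch of such a check for monit 4.x (the pidfile path, test page and credentials below are placeholders, not from your setup):

```
check process apache with pidfile /var/run/apache2.pid
    if failed url http://user:password@localhost/test.html
    then restart
```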
Try this if you just want to check that your web server is listening on port 80 (and you don't care what page or data it returns):
if failed port 80 type TCP then restart