I want to allow Bitbucket access to my Jenkins webhook callback at [Private IP]:[PORT]/bitbucket-hook via HAProxy, without exposing the entire service (/login, /jobs, ...) publicly.
Found it: you can use path_end to forward the request, or use reqirep to rewrite the path when the backend path differs from the frontend path.
frontend http-in
    bind :444
    acl is-jenkin-callback path_end -i /bitbucket-hook
    use_backend jenkin-bitbucket-webhook if is-jenkin-callback
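For the second case, a backend that rewrites the path might look like this (a sketch: the /jenkins prefix and the server address are placeholders I made up, and reqirep is HAProxy 1.x syntax):

```
backend jenkin-bitbucket-webhook
    # Rewrite the request line when the backend expects a different path
    # than the one exposed on the frontend.
    reqirep ^([^\ :]*)\ /bitbucket-hook(.*) \1\ /jenkins/bitbucket-hook\2
    server jenkins 192.0.2.10:8080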
I have configured an Azure load balancer that points my public IP at HTTP, and I can reach my website; it works fine.
Now I want a routing rule that redirects application traffic from HTTP to the secure HTTPS protocol with the help of an Azure application gateway.
Can we simply add our HTTPS services to the health probe after installing an SSL certificate on the load balancer? I don't have much knowledge of networking; any help is highly appreciated.
I tried to reproduce the same in my environment and it works successfully.
You can configure your public IP address for HTTPS using an Azure application gateway. First, create a self-signed certificate like below:
New-SelfSignedCertificate `
    -CertStoreLocation cert:\localmachine\my `
    -DnsName www.contoso.com

# To create the .pfx file
$pwd = ConvertTo-SecureString -String "Azure" -Force -AsPlainText
Export-PfxCertificate `
    -Cert cert:\localMachine\my\<YOURTHUMBPRINT> `
    -FilePath c:\appgwcert.pfx `
    -Password $pwd
Then create an application gateway; you can use your existing public IP address.
In the routing rule, add your frontend IP as public, set the protocol to HTTPS on port 443, and upload the .pfx certificate.
Then create a listener on port 8080 with HTTP, add the same routing rule, and verify that the backend status is healthy.
When I test the HTTP protocol with the IP address, it redirects to HTTPS successfully.
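If you prefer scripting it, the HTTP-to-HTTPS redirect rule can also be created with the Azure CLI (a sketch; the gateway, resource group, and listener names are placeholders of mine, not from the original setup):

```
az network application-gateway redirect-config create \
    --gateway-name myAppGateway \
    --resource-group myResourceGroup \
    --name httpToHttps \
    --redirect-type Permanent \
    --target-listener httpsListener
```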
I have a service that exposes an endpoint whose main purpose is to perform a custom HTTP call based on the JSON data provided (URL, port, body, headers, etc.).
The service is running inside a Docker container on a cloud environment.
How do I prevent a user from making circular calls to the server itself, e.g. POST http://myservice.com/invoke with body {'url': 'localhost:8080/invoke', 'method': 'get'}?
And how do I deny 'localhost', '0.0.0.0' or '127.0.0.1' URLs, so the server cannot resolve and call them inside the container and execute requests against endpoints that are not exposed? (For example, my service also has a '/stats' endpoint that is not accessible from the public VPC but is accessible inside the private VPC, so calling myservice.com/invoke with {'url': 'localhost:8080/stats', 'method': 'get'} would make this endpoint accessible to users outside the private VPC.)
My simple Dockerfile:
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY build/my-service-1.0-SNAPSHOT-runner my-service
RUN chmod +x my-service
EXPOSE 8080
CMD ["./my-service"]
Thank you
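One common way to guard against this kind of SSRF is to resolve the target hostname before dispatching the request and reject any loopback, private, link-local, or unspecified address. A minimal sketch in Python (the function name and the validation policy are my assumptions, not part of the original service):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to loopback, private, link-local,
    or unspecified addresses (localhost, 127.0.0.1, 0.0.0.0, ...)."""
    # urlparse only finds the host when the URL has a scheme or '//' prefix
    parsed = urlparse(url if "//" in url else "//" + url)
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve every address the hostname maps to; one bad address fails all
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_loopback or addr.is_private
                or addr.is_link_local or addr.is_unspecified):
            return False
    return True

print(is_safe_url("http://localhost:8080/stats"))  # False
print(is_safe_url("http://127.0.0.1/invoke"))      # False
print(is_safe_url("http://0.0.0.0/"))              # False
```

Note that checking the URL string alone is not enough, since an attacker can register a public DNS name that resolves to 127.0.0.1; resolving first and checking the resulting addresses closes that hole.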
I want to use the same host to reach my 2 different Kibana instances:
https://test.com/kibana1/app/kibana
and
https://test.com/kibana2/app/kibana
Each Kibana is accessible at https://aaa/app/kibana
Here is my HAProxy configuration:
acl k1 path_beg -m sub -i /kibana1
acl k2 path_beg -m sub -i /kibana2
use_backend KIBANA1 if k1
use_backend KIBANA2 if k2
redirect location /kibana1/app/kibana if k1
redirect location /kibana2/app/kibana if k2
But after the redirect, Kibana doesn't know the URL.
How can I ignore /kibana1 in the URL?
How can I do this with HAProxy?
I'd recommend looking into using the roundrobin setting on your backend; that will load-balance between your two Kibana instances. You would also be better off simply routing by DNS name and using DNS load balancing on your Kibana instances. I'm assuming you are loading from the same Elastic instance/data set.
The reason it's not loading is that your ACL isn't selecting the backend.
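If the goal is specifically to ignore /kibana1 in the URL, one approach (a sketch: the server addresses are placeholders, and reqrep is HAProxy 1.x syntax) is to strip the prefix in each backend:

```
backend KIBANA1
    # Strip the /kibana1 prefix before forwarding to Kibana
    reqrep ^([^\ :]*)\ /kibana1(.*) \1\ \2
    server kb1 192.0.2.11:5601

backend KIBANA2
    # Strip the /kibana2 prefix before forwarding to Kibana
    reqrep ^([^\ :]*)\ /kibana2(.*) \1\ \2
    server kb2 192.0.2.12:5601
```

Kibana generates absolute links, so you will usually also need to set server.basePath (to /kibana1 and /kibana2 respectively) in each instance's kibana.yml for assets to load correctly.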
I have a server with 2 domain names (let's say domain1.com and domain2.com).
I can SSH into the server with ssh user@domain1.com and ssh user@domain2.com. I would like to allow only ssh user@domain1.com and disable SSH access via domain2.com.
Is that possible?
It does not seem possible to allow SSH connections only for a specific domain name. The domain name is resolved by DNS, and there is no way for the SSH server to know which domain you used. See also this answer to the same question.
One thing you might try is configuring a firewall (for example iptables) to drop connections to domain2.com on port 22.
A similar problem was discussed here, where they were trying to block a domain in iptables so that visitors could not access the HTTP server using it.
Adjusting the iptables rule to your case (and assuming that your SSH server is running on port 22), I would try this:
iptables -I INPUT -p tcp --dport 22 -m string --string "Host: domain2.com" --algo bm -j DROP
UPDATE:
As Dusan Bajic commented, the rule above would only work for HTTP traffic, because it takes advantage of the HTTP Host header field. It would not work for SSH traffic.
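If the two domains happen to resolve to different IP addresses on the same machine, another option is to bind sshd to only one of them via ListenAddress (a sketch; the address is a placeholder for whatever domain1.com resolves to):

```
# /etc/ssh/sshd_config
# Only accept SSH on the address that domain1.com resolves to;
# connections to the other address will be refused.
ListenAddress 192.0.2.10
```

This only helps when each domain has its own local address; if both names point at the same IP, the DNS-level distinction is invisible to sshd, as noted above.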
I have 3 virtual hosts on a single IP address: Host_a, Host_b and Host_c, all mapping to 192.168.1.10.
My HAProxy configuration is as follows:
frontend http
    .
    .
    .
    acl host_one path_end -i /ABC/application
    acl host_two path_end -i /XYZ/application
    acl host_three path_end -i /PQR/application
    use_backend be_host1 if host_one
    use_backend be_host2 if host_two
    use_backend be_host3 if host_three

backend be_host1
    server channel Host_a

backend be_host2
    server channel Host_b

backend be_host3
    server channel Host_c
Now, for example, HAProxy forwards a request to 192.168.1.10/ABC/application when the incoming URL ends with /ABC/application. Is there a way I could forward it to http://Host_a/ABC/application instead? It is important for me that they use the hostname instead of the corresponding IP address.
The hostname is part of the HTTP request, which means you can use HAProxy's reqirep option to set it to whatever you want.
reqirep ^Host: Host:\ Host_a
You can use this type of rule in all three of your backends.
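Note that the reqrep/reqirep directives were removed in newer HAProxy 2.x releases; on those versions the equivalent, to the best of my knowledge, is http-request set-header:

```
backend be_host1
    # HAProxy 2.x equivalent of the reqirep Host rewrite above
    http-request set-header Host Host_a
    server channel Host_a
```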