Application Gateway path-based routing is not working - azure-application-gateway

I'm trying to set up Application Gateway in front of my two App Services, which are APIs sitting in their own separate subnet.
Let's say API1 and API2.
The App Services are exposed only through a private endpoint within a VNet.
The following is my Application Gateway setup:
Have created two backend pools as below,
i) API1 -> App services pointing to API1
ii) API2 -> App services pointing to API2
Have mapped the front-end IP to the public IP of the gateway
Have created an HTTPS inbound listener
i) Port: 443
ii) FrontendIP: Public IP of the gateway
iii) Have added my wildcard certificate
iv) Listener type: Multi site
v) Host type: Single
vi) Domain name: MyDomain.com
Have associated a path-based routing rule with my listener to map the backend targets as below.
To test, I have added my public IP and the hostname from my listener to my Windows hosts file:
13.xx.xx.xx MyDomain.com
Now, to test my routing logic: my default backend target is API1, so if I use mydomain.com/api2/ I expect it to load the API2 response. But the following is happening for me:
MyDomain.com/ -> Loads the default API response
MyDomain.com/API1/ -> Gives HTTP 404 error response (Expected: Loads API1 response)
MyDomain.com/API2/ -> Gives HTTP 404 error response (Expected: Loads API2 response)
Please share any input on what I am missing here to get this sorted.
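For reference, here is a rough Azure CLI sketch of the kind of path-based rule described above; the gateway, resource group, path-map, and HTTP-settings names are placeholders, not taken from the question:

az network application-gateway url-path-map create \
  --gateway-name MyAppGateway \
  --resource-group MyResourceGroup \
  --name api-path-map \
  --rule-name api1-rule \
  --paths "/api1/*" \
  --address-pool API1 \
  --http-settings MyHttpSettings \
  --default-address-pool API1 \
  --default-http-settings MyHttpSettings

az network application-gateway url-path-map rule create \
  --gateway-name MyAppGateway \
  --resource-group MyResourceGroup \
  --path-map-name api-path-map \
  --name api2-rule \
  --paths "/api2/*" \
  --address-pool API2 \
  --http-settings MyHttpSettings

Note that the matched path (for example /api2/...) is forwarded to the backend unchanged unless "Override backend path" is set in the backend HTTP settings, so a 404 from the App Service can simply mean the app has no route at that path.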

Related

How can I access services via subdomain instead of ip + port?

I have a little server in my local network that provides several web services. Each service can be accessed by entering the ip of the server followed by the respective port.
Now I always have to remember which service is behind which port and it would be nicer to have specific subdomains forwarded to these ports. For example
ip:1234 -> foo.server.local
ip:4321 -> bar.server.local
How can this be done? I have Pi-hole running on the server and had hoped to get this done using Pi-hole, but I was not successful.
What you are looking for is to set up a DNS server. This guide should help: phoenixnap.com/kb/raspberry-pi-dns-server
DNS only maps a name to an IP address, so on its own it cannot hide the port. The usual approach is to point foo.server.local and bar.server.local at the server's IP in DNS and run a reverse proxy on port 80/443 that forwards each host name to the matching port (1234 or 4321).
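A minimal sketch of that combination, assuming dnsmasq (which Pi-hole uses internally) for the local names and nginx as the reverse proxy; the server IP 192.168.1.10 is a placeholder, the names and ports come from the example above:

# dnsmasq (e.g. /etc/dnsmasq.d/local.conf): resolve both names to the server's IP
address=/foo.server.local/192.168.1.10
address=/bar.server.local/192.168.1.10

# nginx: choose the backend port based on the requested host name
server {
    listen 80;
    server_name foo.server.local;
    location / { proxy_pass http://127.0.0.1:1234; }
}
server {
    listen 80;
    server_name bar.server.local;
    location / { proxy_pass http://127.0.0.1:4321; }
}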

Feign Client + Consul + Ribbon - HTTPS

I have the following setup (everything as docker containers):
Two web services running in HTTPS mode (self-signed certificate).
The web services are registered in consul.
Web service 1 calls web service 2 using feign client.
web service 2 is named authentication-service.
The Docker containers' cacerts were updated to include the self-signed certificate; however, the certificate does not include the container IP addresses, because those are dynamically assigned by Docker.
@FeignClient(name = "authentication-service")
public interface AuthenticationClient extends AuthenticationApi {
}
When web service 1 calls web service 2, Ribbon internally uses Docker's IP address (the problem).
Moreover, it is not clear to me why Feign is using the HTTP protocol instead of HTTPS.
feign.RetryableException: No subject alternative names matching IP address 172.20.0.10 found executing POST http://authentication-service/api/auth/authenticate
What am I missing?
How should I overcome this situation?
Thank you in advance.
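For reference, a minimal sketch of the Spring Cloud Consul discovery settings that influence both points, placed in the application.yml of authentication-service. Whether Ribbon honours the secure tag depends on the Spring Cloud version in use, and the hostname value is an assumption that must match a subject alternative name in the certificate:

spring:
  cloud:
    consul:
      discovery:
        prefer-ip-address: false         # register a host name, not the dynamic container IP
        hostname: authentication-service # assumed resolvable name that appears in the certificate's SAN
        tags: secure=true                # marks the instance as secure so clients build https:// URLs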

Docker - use the same URL inside the container and outside

I have a docker container running an open source identity server UI. This consists of several web applications, and I am running them from the same docker container.
One web application, calls an API endpoint in another web application to get a config file. It then does a redirect to a URL found in that config file. The config file is dynamically generated using the domain name in the request.
I make a call from my local host to the exposed port. This page then calls another web API using the docker-compose service name in the URL, e.g. https://webapi2/.well-known/openid-configuration. This returns a config file with URLs that use webapi2 as the domain.
This causes a browser redirect to https://webapi2/signin. This fails because my localhost does not know about webapi2; it needs to use localhost:44310.
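One common workaround is to make the compose service name resolvable from the host as well, so the same URL works in both places. A rough sketch, where the service name comes from the question but the ports are assumptions:

# docker-compose.yml
services:
  webapi2:
    ports:
      - "443:443"   # publish the container's HTTPS port on the same port on the host

# and on the host, map the service name to loopback
# (Windows: C:\Windows\System32\drivers\etc\hosts):
127.0.0.1 webapi2

With that in place, https://webapi2/... resolves the same way from the browser and from inside the compose network.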

Traefik Allow Custom Domain

Using Docker as the backend and Traefik as the proxy, I'm using these labels under the service in docker-compose.yml:
traefik.enable=true
traefik.frontend.rule=Host:sub.example.com
traefik.backend.port=80
traefik.docker.network=http_network
How can I allow our users to use their own domain or subdomain via a CNAME record, such as
sub.usera.com CNAME sub.example.com
I have already made my web app handle the host redirect, but I can't get it to work. It always results in "404 page not found", and the request never reaches our apps. The Traefik log also shows a 404 because it doesn't contain a frontend rule for sub.usera.com. Does this mean it is not possible to serve a CNAME redirection using Traefik?
Change the frontend rule to traefik.frontend.rule=Host:sub.example.com,sub.usera.com
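Put together, the labels under the service would look roughly like this (Traefik v1 syntax, as in the question):

labels:
  - traefik.enable=true
  - traefik.frontend.rule=Host:sub.example.com,sub.usera.com   # one frontend matching both host names
  - traefik.backend.port=80
  - traefik.docker.network=http_network

If arbitrary customer domains need to match without listing each one, Traefik v1 also offers HostRegexp rules, but every host you want to serve still has to be covered by some rule.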

Dropbox OAuth callback to Mule using https

Dropbox requires the callback URL to be over HTTPS (when not using localhost).
Using Mule 3.6.0 with the latest dropbox connector, the callback defaults to http - thus only working with localhost. For production I need to use https for the OAuth dance.
What is the correct way to specify a https callback URL?
I've tried:
<https:connector name="connector.http.mule.default">
<https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="production.mydomain.com" path="callback" />
</dropbox:config>
But it errors:
Endpoint scheme must be compatible with the connector scheme. Connector is: "https", endpoint is "http://production.mydomain.com:8052/callback"
Here's what I ended up with that solved the problem:
<https:connector name="connector.http.mule.default" doc:name="HTTP-HTTPS">
<https:tls-key-store path="${ssl.certfile}" keyPassword="${ssl.keyPass}" storePassword="${ssl.storePass}"/>
</https:connector>
<dropbox:config name="Dropbox" appKey="${dropbox.appKey}" appSecret="${dropbox.appSecret}" doc:name="Dropbox">
<dropbox:oauth-callback-config domain="myserver.domain.com" path="callback" connector-ref="connector.http.mule.default" localPort="8052" remotePort="8052"/>
</dropbox:config>
This works great for localhost, but not if you need the callback to go to something other than localhost (e.g. myserver.domain.com)
Reviewing mule.log, you can see that the connector binds to localhost (127.0.0.1) despite the config pointing to:
domain="myserver.domain.com"
Log Entry:
INFO ... Attempting to register service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Endpoint Service with name: Mule.Ops:type=Endpoint,service="DynamicFlow-https://localhost:8052/callback",connector=connector.http.mule.default,name="endpoint.https.localhost.8052.callback"
INFO ... Registered Connector Service with name Mule.Ops:type=Connector,name="connector.http.mule.default.1"
The workaround is to force Mule to listen to 0.0.0.0 for connectors which define localhost as the endpoint.
In wrapper.conf set
wrapper.java.additional.x=-Dmule.tcp.bindlocalhosttoalllocalinterfaces=TRUE
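For example, if three additional properties are already defined in wrapper.conf, the next free index would be used (the index 4 here is only an illustration):

wrapper.java.additional.4=-Dmule.tcp.bindlocalhosttoalllocalinterfaces=TRUE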
