Can the App Proxy be used to pass through various IDs for a separate backend Rails app?
I have the case where we've implemented a subscription system in a separate Rails app, but we want to show users their subscriptions from Shopify. To do this, I would like to add an app proxy on Shopify, such as:
Proxy URL: subscriptions.com/api/customers/subscriptions
Proxy Path: /a/customers
But I'd like to be able to proxy /a/customers/:customer_id/subscriptions, and maybe even /a/customers/:customer_id/subscriptions/:id (for a show-subscription Liquid response), so concatenating the IDs into the URL is my main goal.
On the Rails side I can easily extract the path_prefix from the params; it's a matter of how Shopify matches the Proxy Paths, I guess.
Is this at all possible? Or is there another way around this problem?
The extra path components get appended to the Proxy URL. The Shopify Application Proxy documentation even provides an example showing this in the Proxy Request section.
So for your example, where the proxy URL is http://subscriptions.com/api/customers/subscriptions and the proxy path is /a/customers, a request to /a/customers/:customer_id/subscriptions will be proxied to http://subscriptions.com/api/customers/subscriptions/:customer_id/subscriptions.
So it sounds like the proxy request is already exactly what you want.
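On the Rails side, a routes sketch along these lines could then pick the IDs out of the appended path (the controller and action names here are hypothetical):

# config/routes.rb
scope "api/customers/subscriptions" do
  get ":customer_id/subscriptions", to: "subscriptions#index"
  get ":customer_id/subscriptions/:id", to: "subscriptions#show"
end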
Context:
I have Keycloak running inside a Docker container. I understand that there is a reverse proxy doing something like transforming a URL, for example turning "http://example.com" into "http://171.20.2.97:8082" (the actual place where Keycloak is deployed, or "up"). This is just an example; when my clients need to consume an endpoint from one of my microservices, they don't use the IP address, they use example.com.
So in Keycloak, when you want to see the realm metadata for SAML 2.0, you can do it by following this link, which is in the realm settings section:
https://example.com/auth/realms/REALM-NAME/protocol/saml/descriptor
As you can see, I am using "example.com", not "171.20.2.97:8082", to access the metadata link.
The problem is that inside the metadata, the endpoints for SingleSignOnService, SingleLogoutService, etc. are all configured as "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml" (notice it uses the IP address, not example.com). So when clients that want to use SAML send a "Destination" attribute of "http://example.com/auth/realms/REALM-NAME/protocol/saml" inside their SAML request, it causes an invalid request error with reason invalid_destination, because the Destination attribute was expected to be "http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml", as it is inside the metadata.
So my question is: how can I edit the metadata to change the endpoint IP addresses to example.com? Or, if that is not possible, how can I make example.com get translated to 171.20.2.97:8082 inside my Keycloak server? If you know another way to solve this, it is very welcome.
I feel like a BEAST after finding out how to achieve what I needed, after about 3 weeks of searching about Keycloak and SAML (I overcame many obstacles; this was the last one). I finally managed to fix this by using the "Frontend URL" setting in my realm settings. There I can put anything I want, and it will replace "http://171.20.2.97:8082/auth/" (inside the metadata URLs) with whatever I configure there. So for example, if I set Frontend URL to:
https://example.com/auth/
then all my metadata endpoints will look like this:
https://example.com/auth/realms/REALM-NAME/protocol/saml
instead of:
http://171.20.2.97:8082/auth/realms/REALM-NAME/protocol/saml
Now my client is able to properly log in with SAML 2.0 using Keycloak.
How did I manage to find this out? Well, there is not much info, so this is what gave me the hint: Keycloak behind nginx reverse proxy: SAML Integration invalid_destination
The person asking said that they had configured frontend-url, so I wanted to give that a try, and after checking whether it changed the metadata URLs: surprise, it did =)
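(If you prefer scripting it over clicking through the admin console: as far as I can tell the same setting is exposed as a realm attribute, so something like the following via kcadm should work too, but treat the attribute name as an assumption to verify:)

# assumption: Frontend URL is stored as the realm attribute "frontendUrl"
./kcadm.sh update realms/REALM-NAME -s attributes.frontendUrl=https://example.com/auth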
We're breaking our monolithic Rails application into microservices. Our services are hosted on AWS and sit behind ALBs. We cannot use host-based routing, as we are multi-tenant via subdomain and it would be an SSL nightmare to maintain the required certs for each tenant/environment/service combination. So we are using path-based API routing with rules on the load balancer. A request looks like this:
Client -> www.example.com/api/:service_name/the_rest_of_the_path -> ALB -> route to rails service by name of :service_name
Because the ALB cannot modify the path of a request before it sends it on to the service, when it reaches the Rails service the path is still /api/:service_name/the_rest_of_the_path. This means that in order to route to the proper controllers/actions, we'd need to create a Rails scope or namespace of /api/:service_name. This would work in theory, but it has two drawbacks.
Firstly, it means local developers have to deal with ALB/client-specific concerns: the path used for external service/cluster routing by the ALB.
The second is that it couples the application to that path. If the load balancer decided the path should be /:service_name/the_rest_of_the_path instead then it would mean changing the application code in conjunction with the load balancer rules to accommodate it. It's not optimal and I'd prefer to avoid it if at all possible.
I thought then that perhaps we could introduce a webserver in between the load balancer and the application layer. I worked on a proof of concept and had it stripping out /api/:service_name before the request got to the service, leaving the Rails app with just /the_rest_of_the_path, which is all it cares about. Great! Perfect! Or so I thought.
It works well enough for routing initial requests. It falls flat, however, as soon as any redirects or links are generated that take the current path (as Rails sees it) into consideration.
When /api/:service_name is stripped off before it hits the service, any subsequent links or redirects made by the Rails server itself naturally no longer include it. You may be on www.example.com/api/:service_name/foo/bar, but Rails thinks you're only at /foo/bar. When it tries to tack something onto the path for a redirect or link, like /foo/bar/baz, it loses the thing that identifies which service to send it to, so the route dies at the load balancer.
This has been a particular issue with OmniAuth/OAuth2 flows for us. OmniAuth wants to live at /auth/:provider by default. If the request path is actually /api/:service_name/auth/:provider, then it won't match and the OAuth flow won't initiate. Further, if there is a failure in the OAuth flow, OmniAuth will hard-redirect to www.example.com/auth/failure, which of course does not resolve, as the LB does not know where to route the request.
If we give OmniAuth a path_prefix of /api/:service_name/auth, then it won't match when testing locally at /auth and won't initiate the flow there.
We don't have control over all of the gems we use and where they redirect to, so my question is: is there a proper way of hanging Rails API microservices off a path on a load balancer without having to pull teeth to preserve the necessary prefix in every route, link, and redirect? Something that is essentially a global base href we can set there, but not set locally, so that we can continue to develop at localhost:3000/path instead of remembering to use (and coupling to) an LB path like localhost:3000/api/:service_name/path?
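For concreteness, the closest thing I've found to that "global base href" idea is Rack's SCRIPT_NAME: URL helpers prepend it when generating paths. A rough, untested sketch of a middleware (name and wiring hypothetical) that moves the prefix there instead of stripping it outright:

# lib/service_prefix.rb
class ServicePrefix
  def initialize(app, prefix)
    @app = app
    @prefix = prefix
  end

  def call(env)
    if env["PATH_INFO"].start_with?(@prefix)
      # Route on the remainder, but keep the prefix in SCRIPT_NAME so
      # url_for/redirect_to still generate the full external path.
      env["SCRIPT_NAME"] = @prefix
      env["PATH_INFO"] = env["PATH_INFO"].sub(@prefix, "")
    end
    @app.call(env)
  end
end

But I don't know whether that covers every gem's redirect behavior, hence the question.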
I have two domains: one is flexha.com and the second is flexha.co.uk. I want that whenever a user visits flexha.com, they are sent to flexha.co.uk. How can I do that? What is the best way to achieve it?
Assuming that flexha.co.uk uses Apache or Nginx to serve content, you can point the flexha.com DNS at this same server and set up a redirect in the web server config.
Other than this, some DNS providers offer redirection, or you can use something like AWS S3-based redirection.
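For example, a minimal Nginx server block for the .com vhost could look like this (a sketch; adjust the listen/SSL details to your setup):

server {
    listen 80;
    server_name flexha.com www.flexha.com;
    return 301 https://flexha.co.uk$request_uri;
}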
ParseHub provides a webhook feature, but currently I'm testing my Rails app locally. So how can I provide a webhook URL for a ParseHub project that points to my local server, or to a specific method in my controller?
ParseHub webhook docs:
https://www.parsehub.com/docs/ref/api/v2/#webhooks
In order to use a webhook, you need to provide a publicly visible IP address for ParseHub to make requests to. You can get one by registering for a cheap VPS host (e.g. DigitalOcean for $5/month).
On that host, you want to run a webserver, and put the endpoint that the webserver listens on into the webhook textbox in ParseHub. To inspect the details of what ParseHub sends, you can just make your webserver log all the request data. You can also check out our API docs which have a description of all the fields: https://www.parsehub.com/docs/ref/api/v2/#run
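For instance, a minimal Rack app that just logs whatever ParseHub posts could look like this (a sketch, not an official example):

# config.ru: run with `rackup` on the host and point the webhook at it
run lambda { |env|
  req = Rack::Request.new(env)
  puts "#{req.request_method} #{req.fullpath}"
  puts req.body.read
  [200, { "Content-Type" => "text/plain" }, ["ok"]]
}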
The way I usually test webhooks is by using Request Bin http://requestb.in/
Essentially, you get a URL from them, you give this as your webhook address, and anything that is posted to this URL will be caught by the site for further inspection!
You can then get these parameters and post them to your application manually, thus mimicking the entire process.
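For example, replaying a captured payload against a local endpoint could look like this (the local path and field names are hypothetical; use whatever Request Bin actually captured):

require "net/http"

# Re-post the captured webhook parameters to the local Rails app
uri = URI("http://localhost:3000/webhooks/parsehub")
Net::HTTP.post_form(uri, "run_token" => "abc123", "status" => "complete")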
I am trying to stream a file from a remote storage service (not S3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated with a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the file's URL cloaked from the client.
Up until now I have been using a gem called ZipLine, but this does not work either, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in one of those two that prevents proper streaming.
As per the Rails docs' instructions, I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that. This would make some things easier, but would also create new headaches, like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb but none inspire a complete solution.
Thanks!
I am assuming the third-party storage service is accessible over HTTP. Since you considered using redirect_to, I assume the service also provides a means of per-download authorization, like a unique key in a header that expires and does not expose your secret API keys, or an HMAC-signed URL with the expiration time as a param.
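For illustration, generating an HMAC-signed, expiring URL could look roughly like this (host, param names, and signing scheme are hypothetical; the storage service would have to verify the same signature):

require "openssl"

# Sign the path plus an expiry timestamp with a secret shared with the service
def signed_url(path, secret, expires_in: 300)
  expires = Time.now.to_i + expires_in
  data = "#{path}?expires=#{expires}"
  signature = OpenSSL::HMAC.hexdigest("SHA256", secret, data)
  "https://storage.example.com#{data}&signature=#{signature}"
end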
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails guides.
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check (authorize the current user before handing out the file)
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up server resources unnecessarily by streaming all those downloads through them. We are paying the cloud services to do that, after all :)