Access to mirage internals (db) in development mode? - ember-cli-mirage

I know how to use 'db' to work with the internal Mirage database in tests, but I have not found out whether it is possible to access internals like 'db' in standard routes. I understand that my app has no knowledge that Mirage is used, so it might not be possible.

Mirage's db is passed into route handlers as the first parameter, so you can access it there.
If you're talking about accessing it within your Ember app's routes (e.g. Ember.Route.extend), this is not really appropriate because Mirage is just a mock for your API and, as you say, your Ember app should have no knowledge of its data other than via XHR requests.

Related

Rails APIs and path based load balancer routing

We're breaking our monolithic Rails application into microservices. Our services are hosted on AWS and are behind ALBs. We cannot use host-based routing as we are multi-tenant via subdomain, and it would be an SSL nightmare to maintain the required certs for each tenant/environment/service combination. So we are using path-based API routing with rules on the load balancer. A request looks like this:
Client -> www.example.com/api/:service_name/the_rest_of_the_path -> ALB -> route to rails service by name of :service_name
Because the ALB cannot modify the path of a request before it sends it on to the service, when it reaches the Rails services the path is still /api/:service_name/the_rest_of_the_path. This means that in order to route to the proper controllers/actions, we'd need to actually create a Rails scope or namespace of /api/:service_name (sketched after the drawbacks below). This would work in theory, but it has two drawbacks.
Firstly, it means local developers have to deal with ALB/client-specific concerns -- the path used for external service/cluster routing at the ALB.
The second is that it couples the application to that path. If the load balancer decided the path should be /:service_name/the_rest_of_the_path instead then it would mean changing the application code in conjunction with the load balancer rules to accommodate it. It's not optimal and I'd prefer to avoid it if at all possible.
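For illustration, a scope of that shape might look something like the sketch below in config/routes.rb; the literal "orders" service name and the widgets resource are placeholders, not part of the real app:

```ruby
# config/routes.rb -- hypothetical sketch of mounting everything under the
# ALB path prefix. "orders" stands in for this service's :service_name and
# :widgets stands in for the app's real resources.
Rails.application.routes.draw do
  scope "/api/orders" do
    resources :widgets, only: [:index, :show, :create]
  end
end
```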
I then thought perhaps we could introduce a web server into the mix, between the load balancer and the application layer. I worked on a proof of concept for this and had it stripping out /api/:service_name before the request got to the service -- leaving the Rails app with just "the_rest_of_the_path", which is all it cares about. Great! Perfect! Or so I thought.
It works well enough for routing initial requests. However, it falls flat when any sort of redirect or link takes the current path (as Rails sees it) into consideration.
When /api/:service_name is stripped off before the request hits the service, any subsequent links or redirects made by the Rails server itself naturally no longer include it. You may be at www.example.com/api/:service_name/foo/bar, but Rails only thinks you're at /foo/bar. When it tries to tack something onto the path for a redirect or link, like /foo/bar/baz, it loses the thing that identifies which service to send it to, so the route dies at the load balancer.
This has particularly been an issue with OmniAuth/OAuth2 flows for us. OmniAuth wants to live at /auth/:provider by default. If the request path is actually /api/:service_name/auth/:provider, then it won't match and the OAuth flow won't initiate. Further, if there is a failure in the OAuth flow, OmniAuth will hard-redirect to www.example.com/auth/failure -- which of course does not resolve, as the LB does not know where to route the request.
If we provide a path_prefix to OmniAuth of /api/:service_name/auth, then it won't match when testing locally at /auth and it won't initiate the flow there.
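For context, the coupling described above would look roughly like the sketch below; SERVICE_PATH_PREFIX is an invented environment variable, and OmniAuth's path_prefix setting defaults to /auth:

```ruby
# config/initializers/omniauth.rb -- hypothetical sketch of the coupling
# described above. SERVICE_PATH_PREFIX is a made-up variable: set to
# "/api/orders" behind the ALB, left unset in local development.
OmniAuth.config.path_prefix = "#{ENV.fetch('SERVICE_PATH_PREFIX', '')}/auth"
```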
We won't have control over all of the gems we use and where they redirect to, so my question is: is there a proper way of hanging Rails API microservices off a path on a load balancer without having to pull teeth to preserve the necessary prefix in every route, link and redirect? Something that is essentially a global base href that we can set there, but not set locally, so that we can continue to develop at localhost:3000/path instead of remembering to use (and coupling with) an LB path like localhost:3000/api/:service_name/path?

Rspec receive post from tested application

I am testing a running application using rspec/capybara. I have a route I want to test that is supposed to talk to a secondary service via a provided url.
Since the tests don't encapsulate the application (they just talk to it), I can't use the normal methods of stubbing out API calls to make sure it's calling the service properly.
What I would like is to be able to give the route a url, then have rspec receive a post back from the application. Is there a way to do this?
To be clear, I do NOT want rspec to mock/stub the request, because this isn't running as a wrapper to the application.
I will suppose that the secondary service's response is somehow exposed back to you.
So hitting https://not-my-service.com?secondary-service=http://service-i-control.com results in something that contains the response (partial or complete) from http://service-i-control.com.
If this service is up and running in production, your secondary-service must also be something exposed to the internet. You can consider using something like ngrok to expose a local Rack application that your testing environment spins up and that returns a specific response.
If you don't mind using external services, you could also consider httpbin.org. For example, https://not-my-service.com?secondary-service=https://httpbin.org/ip will return a 200 OK with the IP of the origin that hit the server, so you could match that IP to https://not-my-service.com.
If you don't get any information besides the fact that it calls the secondary-service, then I would suggest the following as part of the spec (a rough sketch follows the list):
Spin up a rack application and expose it to the internet.
Hit the service passing your local application as parameter.
Wait until you get the request you are expecting, then stop the application and the test has succeeded.
Or it times out (say 30 seconds) and your test has failed (service was never called).
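A rough sketch of that approach, assuming plain WEBrick for the local listener; the tested service URL, the port and the 30-second timeout are illustrative only:

```ruby
# spec/integration/secondary_service_spec.rb -- sketch only; the service
# URL, port and timeout are placeholder values.
require "webrick"
require "net/http"
require "timeout"

RSpec.describe "secondary service callback" do
  it "POSTs back to the URL we provide" do
    received = Queue.new

    # Minimal local HTTP listener that records any POST body it receives.
    server = WEBrick::HTTPServer.new(Port: 9292, AccessLog: [],
                                     Logger: WEBrick::Log.new(File::NULL))
    server.mount_proc("/") do |req, res|
      received << req.body if req.request_method == "POST"
      res.status = 200
    end
    listener_thread = Thread.new { server.start }

    # Hit the application under test, passing the listener's URL
    # (expose it with ngrok or similar if the app cannot reach localhost).
    Net::HTTP.get(URI("https://not-my-service.com/?secondary-service=http://localhost:9292/"))

    # Wait for the callback; Timeout::Error fails the test if nothing
    # arrives within 30 seconds.
    body = Timeout.timeout(30) { received.pop }
    expect(body).not_to be_nil
  ensure
    server&.shutdown
    listener_thread&.join
  end
end
```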

Partial migration from Rails to Phoenix

I have a Rails app, and I want to gradually move it to Phoenix. While I implement functionality, I want Phoenix to intercept the requests that are already implemented while passing unknown requests down to the Rails app. What would be the best strategy in this case?
1) If I'm ready to accept some overhead, I could create a plug and route all unknown requests there (last route /*path). But how do I pass the request through intact and return the response? Parsing it and then building it again with HTTPoison would be unnecessary work; any better ideas?
2) I'm not sure if it's possible with HAProxy, but the old app could be a fallback, to which requests would be passed if the main backend responds with some error. Would this introduce less overhead?
3) Finally, I could just split requests by mask in HAProxy, but that seems like too much work, because I'm planning on using Rails for the create/update/delete actions and Phoenix for reads for some resources.
Any other options? Examples of how someone has done this? Thank you!
Read an excellent post about your exact problem here.
The basic idea is to use the rails-reverse-proxy gem to define a proxy to your Phoenix application.
Then, develop your feature in Phoenix and add the necessary routes. Keep the Rails conventions (that's the way the Phoenix router works anyway).
Wire up your Rails app with a 'dummy' controller and set it to use rails-reverse-proxy.
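Based on the gem's documented usage, the dummy controller could look roughly like this; the Phoenix host and the proxied path are placeholders, so check the rails-reverse-proxy README for the exact options:

```ruby
# app/controllers/phoenix_proxy_controller.rb -- sketch of the 'dummy'
# controller; http://localhost:4000 stands in for wherever Phoenix runs.
class PhoenixProxyController < ApplicationController
  include ReverseProxy::Controller

  def index
    reverse_proxy "http://localhost:4000"
  end
end

# config/routes.rb -- send the migrated paths to the proxy ("reports" is
# a placeholder for a feature now owned by Phoenix).
# match "/reports/*path" => "phoenix_proxy#index", via: :all
```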
It is recommended that you make all the ActiveRecord models that are owned by the Phoenix app read-only, by adding an after_initialize :readonly! hook to the models owned by Phoenix. This way you can use the models in Rails without compromising Phoenix's ownership: only the Phoenix app can change the model state.
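For instance, the read-only hook is just one line on each Phoenix-owned model (Report is a placeholder name):

```ruby
# app/models/report.rb -- placeholder for a model whose data is now owned
# by Phoenix. Rails can still read records, but saving or destroying one
# raises ActiveRecord::ReadOnlyRecord.
class Report < ApplicationRecord
  after_initialize :readonly!
end
```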

Angular dart bookmarking views

In my experience, Angular Dart is agnostic to your backend server implementation: it doesn't care if your server is in Java, Ruby or whatever. Angular Dart has the concept of views and a module that deals with routing between them; these routes also modify the address bar of the browser when the app changes views.
I have come across this issue: though the Angular router module will change the address bar, the route doesn't actually exist as far as the backend server is concerned, so requesting that URL directly will always produce a 404 response.
If that is the case, then I find the ability to route to different pages via Angular to be pointless; I might as well code page transitions in a more traditional, server-oriented fashion than use Angular.
Is there something I am missing? Is there a way to get your server to resolve to the correct Angular page?
You can use usePushState: false; then only the local (client-side) part of the URL is changed.
see https://github.com/angular/angular.dart.tutorial/blob/master/Chapter_06/web/main.dart#L27
The part after the hash is never sent to the server, for example:
http://example.com/index.html#someRoutePath/anotherRoutePath
This might cause some additional work for SEO.
Or you can configure your server in such a way that each request is handled independently of the path in the request, and use the route package server-side too.
see also https://stackoverflow.com/a/17909743/217408
You can configure your backend server to point all routes to the same file (using some kind of wildcard route which all decent servers should support). So app/some/page and app/another/page would both be served app.html. Then on your app startup you could have Angular parse the URL of the page, and manually route to that page.
I have used this approach with a Polymer app (with the Route library) and it works great. It should work similarly for Angular.
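Since the question notes the backend could be in Java, Ruby or whatever, here is a hedged sketch of that wildcard in a Rails backend; the controller name, the app.html file and the excluded prefixes are assumptions:

```ruby
# config/routes.rb -- hypothetical catch-all: anything that isn't an API
# or asset request is served the same single-page shell, and the Angular
# router resolves the path on the client.
Rails.application.routes.draw do
  get "*path", to: "pages#app",
      constraints: ->(req) { !req.path.start_with?("/api", "/assets") }
end

# app/controllers/pages_controller.rb
class PagesController < ActionController::Base
  def app
    send_file Rails.public_path.join("app.html"),
              type: "text/html", disposition: "inline"
  end
end
```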

How to get domain/subdomains of an app outside of controllers/views?

How can I get the domain and subdomains of my application without being in a controller/view?
The reason for needing this is that I am subscribing to process_action.action_controller notifications within an initializer, and I need my application's URL from within that initializer.
If the host part of the URL (domain, subdomain) is dynamic ... which is to say "depends on the request" then, of course, the only way to get it is within the context of the request itself, which the controllers and views know about.
So I am assuming the application has a known host, perhaps dependent upon the runtime environment (production, test, development, etc.), or maybe based on the server environment, but otherwise static. In this case, you could define a custom config variable containing the host name, as noted in the more recent answer from Jack Pratt on this SO question: How to define custom configuration variables in Rails.
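A minimal sketch of such a variable using Rails' config.x namespace; the app_host name and the host values are made up for illustration:

```ruby
# config/environments/production.rb (repeat per environment as needed)
Rails.application.configure do
  config.x.app_host = "www.example.com" # hypothetical per-environment host
end

# config/initializers/instrumentation.rb -- the initializer can then read
# the host when handling the notification.
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  host  = Rails.configuration.x.app_host
  # ... use host and event.payload here ...
end
```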
