How to secure or set a Rails-style before_filter for all Angular controllers?

I'm using AngularJS for the front end and Rails + Devise for authentication on the back end.
On the front end I have added a response interceptor that redirects to the /#/sign_in page upon any 401 response from any xhr request and displays a growl-style pop-up message using toastr.
App.config(['$httpProvider', function ($httpProvider) {
  $httpProvider.responseInterceptors.push('securityInterceptor');
}]);

App.factory('securityInterceptor', ['$injector', '$location', '$cookieStore', function ($injector, $location, $cookieStore) {
  return function (promise) {
    var $http = $injector.get('$http');
    return promise.then(null, function (response) {
      if (response.status === 401) {
        $cookieStore.remove('_angular_devise_user');
        toastr.warning('You are logged out');
        $location.path('/#/sign_in');
      }
    });
  };
}]);
My problem is, when I click on a page that loads several xhr requests during the controller's initialization, for example:
var products = Product.query();
var categories = Category.query();
var variations = Variation.query();
These are needed for various navigation components and they all fire off in parallel, resulting in several duplicate growl-style messages.
Is there a way to make Angular quit on the first 401 and stop execution of the rest of the controller from within the interceptor? In a traditional Rails app, there would be a "before_filter" that halts regular execution, preventing the page and queries from loading... what's the best way to do this in Angular?

I've been pondering this problem for my own apps too. Here is a sketch of my thoughts (NOT A REAL IMPLEMENTATION, SO BEWARE):
A userData service keeps track of whether the user is logged in, plus other information (e.g. user name, real name, etc.):
App.service("userData", function() {
var currentData = {
loggedIn: false
};
function getCurrent() {
return currentData;
}
// called when the user logs in with the user data
function loggedIn(userData) {
// the object is REPLACED to avoid race conditions, see explanation below
currentData = angular.extend({loggedIn: true}, userData);
}
return {
getCurrent: getCurrent,
loggedIn: loggedIn
};
});
The interceptors keep track of the currentData. If an interceptor receives HTTP 401 and the loggedIn flag is true, it changes the flag to false and redirects to the login view. If an interceptor receives HTTP 401 and the loggedIn flag is false, it does nothing besides rejecting the request, because another interceptor has done the view redirection.
When the user logs in, the currentData object is replaced, so as to avoid situations with delayed responses (e.g. call1 and call2 are initiated and call1 responds 401; call2 also results in 401, but the delivery of its response is delayed; then the user logs in again; then call2 finally receives its 401; this second 401 should not overwrite the new state).
App.config(["$provide", "$httpProvider", function($provide, $httpProvider) {
$provide.factory("myHttpInterceptor", ["$q", "userData", "$cookieStore", "toastr", "$location",
function($q, userData, $cookieStore, toastr, $location) {
return {
request: function(config) {
config.currentUserData = userData.getCurrent();
return config;
},
responseError: function(rejection) {
if( rejection && rejection.status === 401 && rejection.config && rejection.config.currentUserData && rejection.config.currentUserData.loggedIn ) {
rejection.config.currentUserData.loggedIn = false;
$cookieStore.remove('_angular_devise_user');
toastr.warning('You are logged out');
$location.path('/#/sign_in');
}
return $q.reject(rejection);
}
};
}
]);
$httpProvider.interceptors.push("myHttpInterceptor");
});
Also note I am using the newer way to register interceptors, as $httpProvider.responseInterceptors seems to be deprecated.
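For completeness, here is a hedged sketch of how a sign-in controller might flip the flag so the interceptor has something to reset later. This is not part of the original answer: the /users/sign_in endpoint and the response shape are assumptions based on a default Devise JSON setup.

App.controller("SignInCtrl", ["$scope", "$http", "userData", "$location",
  function ($scope, $http, userData, $location) {
    $scope.signIn = function (credentials) {
      // POST to Devise's default JSON sign-in endpoint (assumed route)
      $http.post("/users/sign_in", { user: credentials }).then(function (response) {
        // replace the current user data, flipping loggedIn to true
        userData.loggedIn(response.data);
        $location.path("/");
      });
    };
  }
]);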

Related

Manifest v3 extension: asynchronous event listener does not keep the service worker alive

I am trying to pass messages between a content script and the extension.
Here is what I have in the content script:
chrome.runtime.sendMessage({type: "getUrls"}, function (response) {
  console.log(response);
});
And in the background script I have
chrome.runtime.onMessage.addListener(
  function (request, sender, sendResponse) {
    if (request.type == "getUrls") {
      getUrls(request, sender, sendResponse);
    }
  });

function getUrls(request, sender, sendResponse) {
  var resp = sendResponse;
  $.ajax({
    url: "http://localhost:3000/urls",
    method: 'GET',
    success: function (d) {
      resp({urls: d});
    }
  });
}
If I send the response before the ajax call in the getUrls function, the response is sent successfully, but when I send it from the success callback of the ajax call it is not sent. When I debug, I can see that the port is null inside the code of the sendResponse function.
From the documentation for chrome.runtime.onMessage.addListener:
This function becomes invalid when the event listener returns, unless you return true from the event listener to indicate you wish to send a response asynchronously (this will keep the message channel open to the other end until sendResponse is called).
So you just need to add return true; after the call to getUrls to indicate that you'll call the response function asynchronously.
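Applied to the background script from the question, a minimal sketch:

chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.type === "getUrls") {
    getUrls(request, sender, sendResponse);
    // keep the message channel open until sendResponse is called
    // in the ajax success callback
    return true;
  }
});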
The accepted answer is correct; I just wanted to add sample code that simplifies this.
The problem is that the API (in my view) is not well designed, because it forces us developers to know whether a particular message will be handled asynchronously or not. If you handle many different messages, this becomes an impossible task, because you never know whether, deep down in some function, a passed-in sendResponse will be called asynchronously.
Consider this:
chrome.extension.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.method == "method1") {
    handleMethod1(sendResponse);
  }
});
How can I know whether, deep down in handleMethod1, the call will be async or not? How can someone who modifies handleMethod1 know that introducing something async will break a caller?
My solution is this:
chrome.extension.onMessage.addListener(function (request, sender, sendResponseParam) {
  var responseStatus = { bCalled: false };

  // dummy wrapper to deal with exceptions and detect async
  function sendResponse(obj) {
    try {
      sendResponseParam(obj);
    } catch (e) {
      // error handling
    }
    responseStatus.bCalled = true;
  }

  if (request.method == "method1") {
    handleMethod1(sendResponse);
  }
  else if (request.method == "method2") {
    handleMethod2(sendResponse);
  }
  // ...

  if (!responseStatus.bCalled) { // if it's set, the call wasn't async; else it is
    return true;
  }
});
This automatically handles the return value, regardless of how you choose to handle the message. Note that this assumes you never forget to call the response function. Also note that Chromium could have automated this for us; I don't see why they didn't.
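For illustration, here is what an async handler used with that wrapper could look like. handleMethod1 and its body are made up; setTimeout stands in for any asynchronous work:

function handleMethod1(sendResponse) {
  // async work: bCalled is still false when the listener finishes,
  // so the listener returns true and the message channel stays open
  setTimeout(function () {
    sendResponse({ result: "done" });
  }, 1000);
}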
You can use my library https://github.com/lawlietmester/webextension to make this work in both Chrome and Firefox, using the Firefox approach (promises instead of callbacks).
Your code will look like:
Browser.runtime.onMessage.addListener(request => new Promise(resolve => {
  if (!request || typeof request !== 'object' || request.type !== "getUrls") return;

  $.ajax({
    'url': "http://localhost:3000/urls",
    'method': 'GET'
  }).then(urls => { resolve({ urls }); });
}));

How can I handle 301 redirects in a service worker?

On my Persian-language website, there are problems handling 301 redirects when the service worker fetches requests that contain Persian/Farsi characters.
Normally, when a user enters a keyword, for example رامسر (Ramsar in English), and submits an Ajax search form, a request like the one below is sent to the server (Apache/Laravel):
https://www.example.com/s?search=%D8%B1%D8%A7%D9%85%D8%B3%D8%B1&gstnum=1
Note that %D8%B1%D8%A7%D9%85%D8%B3%D8%B1 is the percent-encoded form of رامسر.
At the back end, this request is redirected (301) to another URL:
https://www.example.com/s/ramsar/%D8%A7%D8%AC%D8%A7%D8%B1%D9%87-%D9%88%DB%8C%D9%84%D8%A7-%D9%88-%D8%B3%D9%88%D8%A6%DB%8C%D8%AA-%D8%AF%D8%B1-%D8%B1%D8%A7%D9%85%D8%B3%D8%B1?gstnum=
The above is the percent-encoded form of this URL:
https://www.example.com/s/ramsar/اجاره-ویلا-و-سوئیت-در-رامسر
But when the service worker is running, it does not apply the correct character encoding to the response URL; it returns:
https://www.example.com/s/ramsar/اجارÙ-ÙÛÙا-Ù-سÙئÛت-در-راÙسر
This is the fetch handler of my service worker:
self.addEventListener("fetch", function (event) {
if (event.request.method === "GET" && event.request.mode === "navigate") {
event.respondWith(async function () {
try {
var networkResponse = await fetch(event.request);
return networkResponse;
} catch (error) {
...
}
}());
}
});
What can I do?

Getting a null email value in the response during Apple authentication

I'm implementing Apple authentication in React Native using the expo-apple-authentication package.
Below is the code I'm calling in the button's onPress.
async handleSocialLogin() {
  const { mutate, BB, onSuccess, navigation } = this.props;
  try {
    const result = await AppleAuthentication.signInAsync({
      requestedScopes: [
        AppleAuthentication.AppleAuthenticationScope.FULL_NAME,
        AppleAuthentication.AppleAuthenticationScope.EMAIL,
      ],
    });
    Alert.alert(JSON.stringify(result));
    // signed in
  } catch (e) {
    Alert.alert(e);
    if (e.code === 'ERR_CANCELED') {
      // handle that the user canceled the sign-in flow
    } else {
      // handle other errors
    }
  }
}
It should return an authentication token plus the full name and email that I requested in the scopes, but it is giving me null for the full name and email.
As per the documentation:
requestedScopes (AppleAuthenticationScope[]) (optional) - Array of user information scopes to which your app is requesting access. Note that the user can choose to deny your app access to any scope at the time of logging in. You will still need to handle null values for any scopes you request. Additionally, note that the requested scopes will only be provided to you the first time each user signs into your app; in subsequent requests they will be null.
You have probably already logged in once and didn't catch the logs. Subsequent log-ins will result in this data being null.
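Since the profile fields are only delivered once per user, a common pattern is to persist them immediately on the first sign-in. A minimal sketch, assuming @react-native-async-storage/async-storage is installed; the helper name and storage key are made up for illustration:

import AsyncStorage from '@react-native-async-storage/async-storage';

// Hypothetical helper: store the one-time profile fields as soon as Apple
// returns them, because subsequent sign-ins report them as null.
async function rememberAppleProfile(credential) {
  if (credential.fullName || credential.email) {
    await AsyncStorage.setItem(
      'appleProfile', // arbitrary key
      JSON.stringify({
        fullName: credential.fullName,
        email: credential.email,
      })
    );
  }
}

During development you can usually make the next sign-in behave like the first one again by revoking the app's Sign in with Apple access in the device's Apple ID settings.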

Ignore ajax requests in service worker

I have an app with a basic 'shell' of HTML, CSS and JS. The main content of the page is loaded via multiple ajax calls to an API at a different URL from the one my app is running on. I have set up a service worker to cache the main 'shell' of the application:
var urlsToCache = [
  '/',
  'styles/main.css',
  'scripts/app.js',
  'scripts/apiService.js',
  'third_party/handlebars.min.js',
  'third_party/handlebars-intl.min.js'
];
and to respond with the cached version when requested. The problem I am having is that the responses of my ajax calls are also being cached. I'm pretty sure I need to add some code to the fetch event of the service worker that always gets them from the network rather than looking in the cache.
Here is my fetch event:
self.addEventListener('fetch', function (event) {
  // ignore anything other than GET requests
  var request = event.request;
  if (request.method !== 'GET') {
    event.respondWith(fetch(request));
    return;
  }

  // handle other requests
  event.respondWith(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.match(event.request).then(function (response) {
        return response || fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone());
          return response;
        });
      });
    })
  );
});
I'm not sure how I can ignore the requests to the API. I've tried doing this:
if (request.url.indexOf(myAPIUrl) !== -1) {
  event.respondWith(fetch(request));
  return;
}
but according to the network tab in Chrome Dev Tools, all of these responses are still coming from the service worker.
You do not have to use event.respondWith(fetch(request)) for requests that you want to ignore. If you return without calling event.respondWith, the browser will fetch the resource for you.
You can do something like:
if (request.method !== 'GET') { return; }
if (request.url.indexOf(myAPIUrl) !== -1) { return; }

// handle all other requests
event.respondWith(/* return promise here */);
In other words, as long as you can determine synchronously that you don't want to handle the request, you can just return from the handler and let the default request processing take over.
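Putting that together with the handler from the question, a minimal sketch (myAPIUrl and CACHE_NAME are assumed to be defined elsewhere, as in the question):

self.addEventListener('fetch', function (event) {
  var request = event.request;

  // returning early (without respondWith) lets the browser handle the request
  if (request.method !== 'GET') { return; }
  if (request.url.indexOf(myAPIUrl) !== -1) { return; }

  // cache-first strategy for everything else, as in the question
  event.respondWith(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.match(request).then(function (cached) {
        return cached || fetch(request).then(function (response) {
          cache.put(request, response.clone());
          return response;
        });
      });
    })
  );
});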

SignalR and Kendo UI Scheduler

I'm working on an implementation using SignalR and the Kendo Scheduler. When a new task is created (for example), the SchedulerDataSource transport sends the hub connection id to the server as an additional parameter:
transport: {
  read: { url: global.web_path + 'Home/Tasks' },
  update: { url: global.web_path + 'Home/UpdateTask', type: 'PUT', contentType: 'application/json' },
  create: { url: global.web_path + 'Home/CreateTask', type: 'POST', contentType: 'application/json' },
  destroy: { url: global.web_path + 'Home/DeleteTask', type: 'DELETE', contentType: 'application/json' },
  parameterMap: function (options, operation) {
    if (operation == "destroy" && options.models) {
      return JSON.stringify({ taskId: options.models[0].Id, callerId: $.connection.hub.id });
    }
    if (operation !== "read" && options.models) {
      return JSON.stringify({ tasks: options.models, callerId: $.connection.hub.id });
    }
  }
},
The server does whatever it has to do and sends a notification to every other user except the caller:
[HttpPost]
public JsonResult CreateTask(List<ScheduledEvent> tasks, string callerId)
{
    // ...create the task and other stuff

    // broadcast the newly created object to everyone except the caller
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<Notebooks.Hubs.SchedulerHub>();
    hubContext.Clients.AllExcept(callerId).UpdateSchedule(task);

    // return the object to the caller
    return Json(task);
}
Once the other clients receive a new task from the hub, it is added to the SchedulerDataSource:
hub.client.updateSchedule = function (scheduledEvent) {
  schedulerDataSource.add(scheduledEvent);
};
Everything seems to work fine, and it really took me some time to notice this behavior: if a client has the scheduler edit window open, the window is closed once the schedulerDataSource is updated. Is this expected, or am I doing something wrong?
Edit: I just realized how old this question is, so you have probably moved on to other things by now, or the pushCreate method may not have existed back then.
I think this may be how it works, but it seems like it should be able to add those events behind the scenes without having to close the edit window. Have you tried the pushCreate method (see the sketch below)? If that doesn't work, since add automatically closes the edit dialog, you could store the incoming events while the dialog is open and add them once the user closes it.
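A hedged sketch of that suggestion, assuming a Kendo UI version that ships pushCreate (which inserts a remote item as already-persisted data instead of going through add/sync):

hub.client.updateSchedule = function (scheduledEvent) {
  // pushCreate does not mark the item dirty, so it should not trigger
  // the transport logic that closes the open edit window
  schedulerDataSource.pushCreate(scheduledEvent);
};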
My answer is now even older ;) but I faced this very same issue today.
First, I'm quite sure this is indeed the expected behavior. You can see in the Kendo sources that the editor window's close method is called in the scheduler's transport update and create methods.
Below is what I've done to bypass the issue.
The idea is simply to prevent the edit window from closing when an appointment modification comes from another hub client.
Server-side: modify the hub methods (example with the Update method):
public MyAppointmentViewModel Update(MyAppointmentViewModel appointment)
{
    if (!appointmentService.Update(appointment))
        throw new InvalidOperationException("Something went wrong");
    else
    {
        Clients.Others.PrepareBeforeAddOrUpdateSignal(appointment.Id);
        Clients.Others.Update(appointment);
        return appointment;
    }
}
Here you see we inform every other client (through PrepareBeforeAddOrUpdateSignal) that we're about to update an appointment.
Now the client side (in index.cshtml, for instance):
schedulerHub.client.prepareBeforeAddOrUpdateSignal = function (id) { lastModifiedRdvId = id; };
schedulerHub.client.create = function (appointment) { lastModifiedRdvId = 0; }; /* reset the variable for next time */
schedulerHub.client.update = function (appointment) { lastModifiedRdvId = 0; }; /* reset the variable for next time */

function SchedulerEditor() {
  return $(".k-scheduler-edit-form").data("kendoWindow");
}

var eventBeforeChanges = null;
var lastModifiedRdvId = 0;

function onEditorClose(e) {
  if (eventBeforeChanges != null) {
    if (lastModifiedRdvId > 0 && eventBeforeChanges.Id != lastModifiedRdvId)
      e.preventDefault();
    else {
      /* unbind this callback and use default behavior again */
      var editWin = SchedulerEditor();
      editWin.unbind('close', onEditorClose);
    }
  }
}

function onEditRdv(e) {
  var editWin = SchedulerEditor();
  if (editWin != null) /* bind the close event */
    editWin.unbind('close', onEditorClose).bind('close', onEditorClose);
  eventBeforeChanges = e.event;
  /* continue onEditRdv */
}
Here you see that the close event is prevented when the appointment id is not the one being updated by the current client.
And fortunately, preventing the close event also prevents the annoying behavior of a sort of ghost appointment appearing after one has been changed by another hub client.
I'm sorry if I'm not clear or if the code isn't clear enough. I can provide more information if necessary.
Bye
