How to populate Adal8Service's configOptions using config settings in Angular 7 and load them when the application loads - angular7

I am using import { Adal8Service, Adal8HTTPService } from 'adal-angular8'; for Azure authentication, with the following in app.module.ts:
export function appInit(appConfigService: AppInitService) {
  return (): any => {
    appConfigService.getApplicationConfig().subscribe((res) => {
      sessionStorage.setItem("appConfig", JSON.stringify(res));
      timeout(500);
    });
  };
}
My getApplicationConfig() is below:
public getApplicationConfig() {
  return this.http.get('assets/config.json');
}
and the following in the providers [] array:
AuthenticationService,
AppInitService,
{
  provide: APP_INITIALIZER,
  useFactory: appInit,
  deps: [AppInitService],
  multi: true
},
Adal8Service,
{
  provide: Adal8HTTPService,
  useFactory: Adal8HTTPService.factory,
  deps: [HttpClient, Adal8Service],
  multi: true
},
The problem here is that the appInit function does not block the application from loading (even after removing the timeout()), and execution proceeds to
this.adalService.init(this.adalConfig);
this.adalService.handleWindowCallback();
(where this.adalConfig = sessionStorage.getItem("appConfig")).
If I refresh the page, I get redirected to the Azure AD login page properly, and if I hardcode the configOptions in this.adalService.init("HARDOCDE all values") it works fine. How do I make the application block until the configuration is loaded? I am storing the config values under /assets/config.json, along with other config values for the application. I am not sure what I am doing wrong here. I did try reading the JSON file directly, but then I would have to change that again before going to production. Is the way I use APP_INITIALIZER correct? Please point me in the right direction.

The problem is not related to ADAL but to how asynchronous functions work in JavaScript.
In order to block the execution of the function, you can either write a function which waits until the response is returned by the HTTP request, or you can use a library like waitfor-ES6 which can help you do that.
Change needs to be done at
export function appInit(appConfigService: AppInitService) {
  return (): any => {
    response = yield wait.for(appConfigService.getApplicationConfig);
    sessionStorage.setItem("appConfig", JSON.stringify(response));
  }
}
Please note this is not the exact change but the direction of the change you will need to perform. Hope this helps.
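As a point of reference, another common way to make Angular wait is to return a Promise from the APP_INITIALIZER factory; bootstrap is then delayed until it resolves. A minimal sketch, assuming the same AppInitService as in the question:
// app.module.ts (sketch): Angular waits for the returned Promise before bootstrapping,
// so the config is in sessionStorage before adalService.init() runs.
export function appInit(appConfigService: AppInitService) {
  return (): Promise<void> =>
    appConfigService
      .getApplicationConfig()
      .toPromise()
      .then((res) => {
        sessionStorage.setItem('appConfig', JSON.stringify(res));
      });
}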

Related

Downloading whole websites with k6

I'm currently evaluating whether k6 fits our load testing needs. We have a fairly traditional website architecture that uses Apache webservers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks simple enough and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session - you can either export a HAR directly from the browser or use a browser extension made for this purpose (there are ones for Firefox and Chrome). Both should be usable without a k6 cloud account; you just need to set them to download the HAR, and they will do so automatically (and somewhat silently) when you hit stop. Then use either the built-in k6 HAR converter (which is deprecated, but still works) or the newer har-to-k6 converter.
This method is particularly good if you have a lot of pages and/or resources, and it even works if you have a single-page style of application, as it just captures what the browser requested as a HAR and then transforms it into a script. If there is nothing dynamic that needs to be entered (username/password), the final script can usually be used as is.
The biggest problem with this approach is that if you add a CSS file you need to redo this whole exercise. This is even more problematic if your CSS/JS file names change on every build or something like that. That is what the next method is good for:
Use parseHTML and then find the elements you care about and make a request for them.
import http from "k6/http";
import { parseHTML } from "k6/html";

export default function () {
  const res = http.get("https://stackoverflow.com");
  const doc = parseHTML(res.body);
  doc.find("link").toArray().forEach(function (item) {
    console.log(item.attr("href"));
    // make an http.get for it,
    // or add them to an array and make one batch request
  });
}
will produce output like:
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle this. And in this example only some of them are CSS, so probably more filtering is needed.
The problem here is that you need to write the code, and if you add a relative link or something else you need to handle it. Luckily k6 is scriptable, so you can reuse the code :D.
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can set the way resources are loaded (batch or sequential gets with options.concurrentResourceLoading).
import http from 'k6/http';
import { check } from 'k6';

// resolveUrl(), createHeader() and options are helpers/config defined elsewhere in the script.

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
  const resources = [];
  response
    .html()
    .find('*[href]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().href.value);
    });
  response
    .html()
    .find('*[src]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().src.value);
    });
  if (options.concurrentResourceLoading) {
    const responses = http.batch(
      resources.map((r) => {
        return ['GET', resolveUrl(r, response.url), null, {
          headers: createHeader()
        }];
      })
    );
    responses.forEach((res) => {
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  } else {
    resources.forEach((r) => {
      const res = http.get(resolveUrl(r, response.url), {
        headers: createHeader(),
      });
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  }
}
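For context, a minimal usage sketch for the function above (assuming getResources and its helpers are defined in the same script):
import http from 'k6/http';

export default function () {
  const res = http.get('https://example.com/');
  // Fetch the stylesheets, scripts and images referenced in the page, so the
  // test also makes the requests a real browser would make.
  getResources(res);
}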

Passing arguments to a running electron app

I have found some search results about using app.makeSingleInstance and CLI arguments, but it seems that this API has been removed.
Is there any other way to send a string to an already started electron app?
One strategy is to have your external program write to a file that your electron app knows about. Then, your electron app can listen for changes to that file and can read it to get the string:
import * as fs from "fs";

fs.watch("shared/path.txt", { persistent: false }, (eventType: string, fileName: string) => {
  if (eventType === "change") {
    const myString: string = fs.readFileSync(fileName, { encoding: "utf8" });
    // do something with myString
  }
});
I used the synchronous readFileSync for simplicity, but you might want to consider the async version.
Second, you'll need to consider the case where this external app is writing so quickly that maybe the fs.watch callback is triggered only once for two writes. Could you miss a change?
Otherwise, I don't believe there's an Electron-native way of getting this information from an external app. If you were able to start the external app from your Electron app, then you could just do cp.spawn(...) and use its stdout pipe to listen for messages.
If shared memory were a thing in Node, then you could use that, but unfortunately it's not.
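To illustrate that spawn approach, a rough sketch (the child program name is a placeholder):
import { spawn } from "child_process";

// Start the external program from the Electron main process and listen on its stdout.
// "external-tool" stands in for whatever program produces the strings.
const child = spawn("external-tool", []);
child.stdout.setEncoding("utf8");
child.stdout.on("data", (chunk: string) => {
  console.log("string from external app:", chunk.trim());
});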
Ultimately, the most elegant solution to my particular problem was to add a http api endpoint for the Electron app using koa.
const Koa = require("koa");
const koa = new Koa();

let mainWindow;

function createWindow() {
  // ... create mainWindow (new BrowserWindow(...)) as usual ...

  const startServer = function () {
    koa.use(async (ctx) => {
      mainWindow.show();
      console.log("text received", ctx.request.query.text);
      ctx.body = ctx.request.query.text;
    });
    koa.listen(3456);
  };
  startServer();
}
Now I can easily send texts to Electron from outside the app using the following url:
localhost:3456?text=myText
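For example, a trivial sender from outside the app could look like this (Node.js sketch; the text value is arbitrary):
const http = require("http");

// Push a string to the running Electron app via its local HTTP endpoint.
http.get("http://localhost:3456?text=" + encodeURIComponent("myText"), (res) => {
  res.resume(); // the body is just the echoed text; we don't need it
});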

Setting service worker to exclude certain urls only

I built an app using Create React App, which by default includes a service worker. I want the app to run any time someone enters the given URL, except when they go to /blog/, which serves a set of static content. I use react router in the app to catch different URLs.
I have nginx set up to serve /blog/ and it works fine if someone visits /blog/ without visiting the react app first. However, because the service worker has a scope of ./, any time someone visits a URL other than /blog/, the app loads the service worker. From that point on, the service worker bypasses the connection to the server and /blog/ loads the react app instead of the static content.
Is there a way to have the service worker load on all urls except /blog/?
Considering you have not posted any code relevant to the service worker, you might consider adding a simple if conditional inside the fetch handler.
This code block should already be there inside your service worker; just add the conditionals:
self.addEventListener('fetch', function (event) {
  if (event.request.url.match('^.*(\/blog\/).*$')) {
    return false;
  }
  // OR
  if (event.request.url.indexOf('/blog/') !== -1) {
    return false;
  }

  // **** rest of your service worker code ****
});
Note that you can use either the regex or the prototype method indexOf, whichever you prefer.
The above directs your service worker to do nothing (fall through to the network) when the URL matches /blog/.
Another way to blacklist URLs (i.e., exclude them from being served from the cache) when you're using Workbox is workbox.routing.registerNavigationRoute:
workbox.routing.registerNavigationRoute("/index.html", {
  blacklist: [/^\/api/, /^\/admin/],
});
The example above demonstrates this for a SPA where all routes are cached and mapped into index.html except for any URL starting with /api or /admin.
Here's what's working for us in the latest CRA version:
// serviceWorker.js
window.addEventListener('load', () => {
  if (isAdminRoute()) {
    console.info('unregistering service worker for admin route')
    unregister()
    console.info('reloading')
    window.location.reload()
    return false
  }
  // ... the existing registration logic continues here ...
})
We exclude all routes under /admin from the service worker, since we are using a different app for our admin area. You can of course change it to anything you like; here's our function at the bottom of the file:
function isAdminRoute() {
  return window.location.pathname.startsWith('/admin')
}
Here's how you do it in 2021:
import { NavigationRoute, registerRoute } from 'workbox-routing';

const navigationRoute = new NavigationRoute(handler, {
  allowlist: [
    new RegExp('/blog/'),
  ],
  denylist: [
    new RegExp('/blog/restricted/'),
  ],
});

registerRoute(navigationRoute);
See the Workbox docs: How to Register a Navigation Route.
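For completeness, the handler in that snippet is typically the precache handler for the app shell; a sketch assuming Workbox v5+ with precaching set up:
import { createHandlerBoundToURL } from 'workbox-precaching';

// Serve the precached app shell for allowed navigations.
const handler = createHandlerBoundToURL('/index.html');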
If you are using or willing to use customize-cra, the solution is quite straightforward.
Put this in your config-overrides.js:
const { adjustWorkbox, override } = require("customize-cra");

module.exports = override(
  adjustWorkbox(wb =>
    Object.assign(wb, {
      navigateFallbackWhitelist: [
        ...(wb.navigateFallbackWhitelist || []),
        /^\/blog(\/.*)?/,
      ],
    })
  )
);
Note that in the newest workbox documentation, the option is called navigateFallbackAllowlist instead of navigateFallbackWhitelist. So, depending on the version of CRA/workbox you use, you might need to change the option name.
The regexp /^\/blog(\/.*)?/ matches /blog, /blog/, /blog/abc123 etc.
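A quick sanity check of that pattern, if you want to verify it in a Node REPL:
// Plain JS, just to confirm what the whitelist regexp matches.
const blogPattern = /^\/blog(\/.*)?/;
console.log(blogPattern.test('/blog'));        // true
console.log(blogPattern.test('/blog/abc123')); // true
console.log(blogPattern.test('/api/posts'));   // false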
Try using the sw-precache library to overwrite the current service-worker.js file that is running the cache strategy. The most important part is setting up the config file (I will paste the one I used with create-react-app below).
Install sw-precache (e.g. yarn add sw-precache)
Create and specify the config file which indicates which URLs not to cache
Modify the build script command to make sure sw-precache runs and overwrites the default service-worker.js file in the build output directory
I named my config file sw-precache-config.js and specified it in the build script command in package.json. The contents of the file are below. The part to pay particular attention to is the runtimeCaching key/option.
"build": "NODE_ENV=development react-scripts build && sw-precache --config=sw-precache-config.js"
CONFIG FILE: sw-precache-config.js
module.exports = {
  staticFileGlobs: [
    'build/*.html',
    'build/manifest.json',
    'build/static/**/!(*map*)',
  ],
  staticFileGlobsIgnorePatterns: [/\.map$/, /asset-manifest\.json$/],
  swFilePath: './build/service-worker.js',
  stripPrefix: 'build/',
  runtimeCaching: [
    {
      urlPattern: /dont_cache_me1/,
      handler: 'networkOnly'
    },
    {
      urlPattern: /dont_cache_me2/,
      handler: 'networkOnly'
    }
  ]
}
Update (new working solution)
In the last major release of Create React App (version 4.x.x), you can easily implement your custom service-worker.js without ejecting.
Starting with Create React App 4, you have full control over customizing the logic in this service worker, by creating your own src/service-worker.js file, or customizing the one added by the cra-template-pwa (or cra-template-pwa-typescript) template. You can use additional modules from the Workbox project, add in a push notification library, or remove some of the default caching logic.
You have to upgrade your react-scripts to version 4 if you are currently using an older version.
Working solution for CRA v4
Add the following code to service-worker.js, inside the callback function passed to registerRoute:
// If this is a backend URL, skip
if (url.pathname.startsWith("/backend")) {
  return false;
}
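For orientation, in the CRA 4 template service-worker.js this check sits inside the navigation-route callback, roughly like this (a sketch based on the default template; "/backend" is the example prefix from above, and registerRoute / createHandlerBoundToURL are already imported at the top of that file):
registerRoute(
  ({ request, url }) => {
    // Only handle navigation requests with the app-shell strategy.
    if (request.mode !== "navigate") {
      return false;
    }
    // If this is a backend URL, skip it so it goes straight to the network.
    if (url.pathname.startsWith("/backend")) {
      return false;
    }
    return true;
  },
  createHandlerBoundToURL(process.env.PUBLIC_URL + "/index.html")
);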
To simplify things, we can add an array of items to exclude and add a lookup in the fetch event listener.
The include and exclude lists are shown below for completeness.
var version = 'v1::'; // cache-name prefix (the original defines this elsewhere)

var offlineInclude = [
  '', // index.html
  'sitecss.css',
  'js/sitejs.js'
];

var offlineExclude = [
  '/networkimages/bigimg.png', // exclude a file
  '/networkimages/smallimg.png',
  '/admin/' // exclude a directory
];

self.addEventListener("install", function(event) {
  console.log('WORKER: install event in progress.');
  event.waitUntil(
    caches
      .open(version + 'fundamentals')
      .then(function(cache) {
        return cache.addAll(offlineInclude);
      })
      .then(function() {
        console.log('WORKER: install completed');
      })
  );
});

self.addEventListener("fetch", function(event) {
  console.log('WORKER: fetch event in progress.');
  if (event.request.method !== 'GET') {
    console.log('WORKER: fetch event ignored.', event.request.method, event.request.url);
    return;
  }
  for (let i = 0; i < offlineExclude.length; i++) {
    if (event.request.url.indexOf(offlineExclude[i]) !== -1) {
      console.log('WORKER: fetch event ignored. URL in exclude list.', event.request.url);
      return false;
    }
  }
  // ... respond from cache / network as usual for everything else ...
});

Connecting to github with Ember.js and Torii (oauth2)

I'm trying to use the github-oauth2 provider in Torii, but I'm stumped on how I'm supposed to set up some of the callbacks. I'll trace the code I'm using, as well as my understanding of it, and hopefully that can help pinpoint where I'm going wrong.
First, in my action, I'm calling torii's open method as it says to do in the docs:
this.get('torii').open('github-oauth2').then((data) => {
  this.transitionTo('dashboard')
})
And, of course, I have the following setup in my config/environment.js:
var ENV = {
  torii: {
    // a 'session' property will be injected on routes and controllers
    sessionServiceName: 'session',
    providers: {
      'github-oauth2': {
        apiKey: 'my key',
        redirectUri: 'http://127.0.0.1:3000/github_auth'
      }
    }
  },
}
The redirectUri is for my Rails server. I have the same redirectUri setup on my github app, so they match.
Here's what I have on my server. It's likely this is where the problem is. I'll get to the symptoms at the end.
def github
  client_id = 'my id'
  client_secret = 'my secret'
  code = params[:code]
  @result = HTTParty.post("https://github.com/login/oauth/access_token?client_id=#{client_id}&client_secret=#{client_secret}&code=#{code}")
  @access_token = @result.parsed_response.split('&')[0].split('=')[1]
  render json: { access_token: @access_token }
end
So I post to github's access_token endpoint, as I'm supposed to, and I get back a result with an access token. Then I package up that access token as json.
The result of this is that the Torii popup just goes to the Rails page.
Unfortunately, what I was hoping for was for the torii popup to disappear, give my app the access_token, and for the code to move on and execute the code in my then block.
Where am I going wrong?
Many thanks to Kevin Pfefferle, who helped me solve this and shared the code to his app (gitzoom) where he had implemented a solution.
So the first fix was to clear my redirectUri and set it on GitHub to localhost:4200, so that the redirect lands back in the Ember app.
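For illustration, the adjusted provider config looked roughly like this (values are placeholders):
// config/environment.js (sketch), values are placeholders
var ENV = {
  torii: {
    sessionServiceName: 'session',
    providers: {
      'github-oauth2': {
        apiKey: 'my key',
        // redirect back to the Ember dev server so Torii can pick the
        // authorization code up from the popup
        redirectUri: 'http://localhost:4200'
      }
    }
  },
};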
The second fix was to create a custom torii provider
//app/torii-providers/github.js
import Ember from 'ember';
import GitHubOauth2Provider from 'torii/providers/github-oauth2';

export default GitHubOauth2Provider.extend({
  ajax: Ember.inject.service(),

  fetch(data) {
    return data;
  },

  open() {
    return this._super().then((toriiData) => {
      const authCode = toriiData.authorizationCode;
      const serverUrl = `/github_auth?code=${authCode}`;
      return this.get('ajax').request(serverUrl)
        .then((data) => {
          toriiData.accessToken = data.token;
          return toriiData;
        });
    });
  }
});
I'm not sure why this then resolves when the one I was using before didn't. Anyway, it grabs the data and returns it, and then the promise I was using before gets the data correctly.
this.get('torii').open('github-oauth2').then((data) => {
  // do signon stuff with the data here
  this.transitionTo('dashboard')
})
So there we go! Hopefully this helps other folks who are stuck in the future.

Meteor Twitter Help (Meteor NOOB)

I just started learning MeteorJS and after completing the tutorial, I decided to play around with the Twitter API. Initially, I followed this tutorial
http://artsdigital.co/exploring-twitter-api-meteor-js/
Once completing that, what I wanted to do is scrape data from a tweet and display it on the client side.
(In the code below, 'N/A' stands in for my real authentication credentials.)
Here's the code I've written:
if (Meteor.isClient) {
  Session.setDefault('screen_name', 'John');

  Template.hello.helpers({
    screen_name: function () {
      return Session.get('screen_name');
    }
  });

  Template.hello.events({
    'click button': function () {
      T.get('search/tweets',
        {
          q: '#UCLA',
          count: 1
        },
        function (err, data, response) {
          var user_name = data.statuses[0].users.screen_name;
          Session.set('screen_name', user_name);
        }
      )
    }
  });
}

if (Meteor.isServer) {
  Meteor.startup(function () {
    // code to run on server at startup
    var Twit = Meteor.npmRequire('twit');
    var T = new Twit({
      consumer_key: 'N/A', // API key
      consumer_secret: 'N/A', // API secret
      access_token: 'N/A',
      access_token_secret: 'N/A'
    });
  });
}
What I believe the problem is: in the 'click button' function, 'T' is undefined, so the client doesn't know what it is or where it came from. That sparked the idea of moving what I had written inside
if (Meteor.isServer) to if (Meteor.isClient)
but to no avail; it didn't work. My reasoning is that once Meteor starts, the server starts, so if the server declares the variable T, shouldn't we be able to access it on the client side too?
I'm not sure if my approach is correct and I don't know the conventions of Meteor (Meteor NOOB), so if someone could please help me, that would be highly appreciated!
Thanks!
You put a "var" declaration in front of your "T" variable, which binds its scope to the server-side context of the app. I bet if you got rid of the var and made "T" global, then you would be able to access it from the client side as well.
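As a side note, a more common Meteor pattern is to keep T on the server and expose the lookup through a method that the client calls. A rough sketch, not the answer above's suggestion (the method name and field access are illustrative, and it assumes T is created at file scope on the server, e.g. in Meteor.startup without var):
if (Meteor.isServer) {
  Meteor.methods({
    // Runs on the server, where the Twit client T lives.
    searchTweets: function (query) {
      var get = Meteor.wrapAsync(T.get, T);
      var data = get('search/tweets', { q: query, count: 1 });
      return data.statuses[0].user.screen_name;
    }
  });
}

if (Meteor.isClient) {
  Template.hello.events({
    'click button': function () {
      Meteor.call('searchTweets', '#UCLA', function (err, screenName) {
        if (!err) Session.set('screen_name', screenName);
      });
    }
  });
}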
