I have built offline documentation with MkDocs and Workbox.
I run workbox generateSW on the files generated by MkDocs, which produces a service worker whose precache is set up with the precacheAndRoute function.
This works fine, but when I update the documentation and regenerate the HTML files and the service worker, the browser does not serve the new content until I close it completely. Refreshing or just closing the tab is not enough.
The worker updates the content in Cache Storage correctly, which I can see in Chrome DevTools (Application -> Cache Storage -> workbox-precache*), but no matter how many times I hit refresh, the browser won't display the new content.
I use this function to register the Service Worker
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
I wonder if I have to do something extra to make the content refresh properly?
My workbox-config.js is
module.exports = {
  globDirectory: ".doc_build",
  globPatterns: ["**/*"],
  swDest: ".doc_build/sw.js"
};
This happens on both Firefox and Chrome.
Thanks to Robert Rowntree's link in the question comments, I figured this out.
In my case the new content does get written to the cache, but the old version of the precache service worker keeps running, and it holds a list of entries like this:
{
  "url": "index.html",
  "revision": "e4919b0cd0e772b3beb2d1f3d09af437"
}
As you can see, the entry contains the checksum of the old version, and the worker keeps serving that version until the old service worker is deactivated and the new one activated.
You can observe this by checking registration.waiting while the old service worker is waiting to be deactivated and the new one to be installed. The browser does this "at some point"; in practice it seems to happen if I just keep the tabs closed long enough.
The solution to my question is to force the service worker to skip the waiting period. This can be done by sending a message to the waiting service worker from the update event:
async function register() {
  const registration = await navigator.serviceWorker.register(SW_URL);
  registration.onupdatefound = () => {
    const installingWorker = registration.installing;
    installingWorker.onstatechange = async () => {
      if (installingWorker.state === "installed") {
        if (navigator.serviceWorker.controller) {
          console.log("New content is available; please refresh.");
          // Tell the waiting service worker to stop waiting
          // for the browser to deactivate the old one
          registration.waiting.postMessage("skipWaiting");
        } else {
          console.log("Content is cached for offline use.");
        }
      }
    };
  };
}
Then in the service worker code I had to handle that message and call skipWaiting():
self.addEventListener("message", messageEvent => {
  if (messageEvent.data === "skipWaiting") {
    return skipWaiting();
  }
});
To do this I had to move from workbox generateSW to workbox injectManifest so that I could add the skipping code.
But there are caveats to this solution. Read Robert's link onwards from the sentence "The simplest and most dangerous approach is to just skip waiting during installation.":
https://redfin.engineering/how-to-fix-the-refresh-button-when-using-service-workers-a8e27af6df68
Fortunately this is good enough for my case.
I have been scratching my head over this for days:
Our app runs on Kubernetes behind an nginx reverse proxy, as usual with OpenID Connect. It is a Blazor Server app that logs users in and then requests access tokens to call other APIs such as MS Graph.
If I do not set context.ProtocolMessage.RedirectUri, the sign-in gets redirected to http://localhost/signin-oidc (the app runs in the container on port 80, which is mapped to port 8080, the port nginx forwards to). With the code below it works nicely: the user is authenticated with AAD and lands on the home page, as long as I do not request an access token later.
services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events = new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = (context) =>
        {
            if (context.Request.Headers.ContainsKey("X-Forwarded-Host"))
            {
                context.ProtocolMessage.RedirectUri = "https://"
                    + context.Request.Headers["X-Forwarded-Host"]
                    + this.Configuration.GetSection("AzureAd").GetValue<string>("CallbackPath");
            }
            return Task.FromResult(0);
        }
    };
});
But here is the problem: if I then try to get an access token (for any API, not just Graph), the app goes into an infinite loop.
This triggers the infinite loop:
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi("user.read")
    .AddInMemoryTokenCaches();
So does this:
@code {
    protected string? accessToken;

    protected override async Task OnInitializedAsync()
    {
        try
        {
            string[] scopes = new string[] { "user.read" };
            accessToken = await TokenAcquisition.GetAccessTokenForUserAsync(scopes);
        }
        catch (Exception ex)
        {
            string baseUri = ConsentHandler.BaseUri.ToString();
            ConsentHandler.HandleException(ex);
        }
    }
}
I can repro this on the dev box, even where forwarding is not needed. As soon as OnRedirectToIdentityProvider is set, none of the token acquisition functions works properly.
So to run on Kubernetes I cannot request access tokens, as long as I keep the OnRedirectToIdentityProvider handler that lands users on the right page.
On the dev box I need to disable OnRedirectToIdentityProvider for tokens to work.
So it is a mutually exclusive situation.
Is there a workaround for this?
Is there a way to force AAD to redirect to the actual user-facing endpoint without including the OnRedirectToIdentityProvider code?
Thanks
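One pattern that might be worth trying (this is my assumption, not verified against this setup): AddMicrosoftIdentityWebApp wires its own OpenIdConnectEvents handlers for token acquisition, and assigning a brand-new OpenIdConnectEvents instance in Configure<OpenIdConnectOptions> replaces that wiring, which could explain why token acquisition breaks as soon as the event is set. Chaining the already-registered delegate instead of replacing the whole Events object would preserve both behaviors. A sketch:

```csharp
services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    // Defensive: make sure an Events instance exists before touching it.
    options.Events ??= new OpenIdConnectEvents();

    // Keep whatever handler is already registered (e.g. by Microsoft.Identity.Web).
    var existingHandler = options.Events.OnRedirectToIdentityProvider;

    options.Events.OnRedirectToIdentityProvider = async context =>
    {
        // Run the previously registered handler first so any protocol
        // parameters it adds for token acquisition are not lost.
        if (existingHandler != null)
        {
            await existingHandler(context);
        }

        if (context.Request.Headers.ContainsKey("X-Forwarded-Host"))
        {
            context.ProtocolMessage.RedirectUri = "https://"
                + context.Request.Headers["X-Forwarded-Host"]
                + this.Configuration.GetSection("AzureAd").GetValue<string>("CallbackPath");
        }
    };
});
```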
I am working on a project where I am trying to log in one time and reuse that state for all my tests, instead of logging in for every test. I followed this document, and my global setup looks exactly like the one in the said document.
My test spec looks like this,
// sample.test.ts
test.describe("Login Tests", () => {
  test.use({ storageState: "state.json" });

  test(`TEST 1`, async ({ page }) => {
    // browser launches in authenticated state
  });

  test(`TEST 2`, async ({ page }) => {
    // browser launches in unauthenticated state
  });
});
// playwright.config.ts
globalSetup: require.resolve("./src/utility/globalSetup.ts"),
use: {
  storageState: "state.json",
},
// globalSetup.ts
const { storageState } = config.projects[0].use;
const browser = await chromium.launch();
const page = await browser.newPage();
// code for login goes here....
await page.context().storageState({ path: storageState as string });
await browser.close();
The issue is that my first test (TEST 1) works fine: the browser launches in an authenticated state. But my second test (TEST 2) does not launch in an authenticated state.
I tried running one test at a time; both TEST 1 and TEST 2 pass in isolated runs.
I also swapped the test order so TEST 2 ran first; in that case TEST 2 launched in an authenticated state and TEST 1 failed.
What could be the problem? Any help is highly appreciated.
I am using Azure AD with ASP.NET Core MVC. The following code is the same as in a default MVC project generated with Work or School Accounts authentication.
services.Configure<CookiePolicyOptions>(options =>
{
    options.CheckConsentNeeded = context => true;
    options.MinimumSameSitePolicy = SameSiteMode.None;
});

services.AddAuthentication(AzureADDefaults.AuthenticationScheme)
    .AddAzureAD(options => Configuration.Bind("AzureAd", options));

services.AddMvc(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
})
.SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
Everything works just fine most of the time. The app is basically a notepad: a user logs in and adds notes/tasks, and everything after logging in is done with AJAX requests. After some time the app stops working because authentication is needed again. Note that if I refresh the page, everything works again.
Am I doing this right? Am I missing something, or is this kind of use case just not supported?
Should I just create a javascript function that will auto refresh the page after some time?
Should I just create a javascript function that will auto refresh the page after some time?
You could create a hidden iframe in all the templates used by the web app to make automatic calls to an MVC controller method that forces a renewal of the authentication data on a regular basis.
This is achieved very easily with an automatic JavaScript process in the front end, executed in a loop every 45 minutes. The value could be made configurable or read from a configuration file; the only key condition is that it must be less than one hour.
Here is the simplified example code related to MVC Controller:
/* Action method, inside the "Account" controller class, to force
   renewal of the user authentication session */
public void ForceSignIn()
{
    HttpContext.GetOwinContext().Authentication.Challenge(
        new AuthenticationProperties { RedirectUri = "/" },
        OpenIdConnectAuthenticationDefaults.AuthenticationType);
}
And here is the simplified example HTML and JavaScript code used to call the MVC controller silently in a hidden iframe:
<iframe id="renewSession" hidden></iframe>
<script>
    setInterval(function () {
        @if (Request.IsAuthenticated)
        {
            <text>
            var renewUrl = "/Account/ForceSignIn";
            var element = document.getElementById("renewSession");
            element.src = renewUrl;
            </text>
        }
    },
    1000 * 60 * 45);
</script>
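The interval arithmetic in the snippet above is worth spelling out; a quick sketch confirms the 45-minute period and the answer's key condition that it stays under one hour:

```javascript
// Renewal interval used in the snippet above: 45 minutes in milliseconds.
const RENEW_INTERVAL_MS = 1000 * 60 * 45;

// Key condition from the answer: the renewal loop must run more often
// than the (roughly one hour) authentication session lifetime.
const SESSION_LIFETIME_MS = 1000 * 60 * 60;

console.log(RENEW_INTERVAL_MS);                       // 2700000
console.log(RENEW_INTERVAL_MS < SESSION_LIFETIME_MS); // true
```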
For more details, you could refer to this article, which covers a similar situation.
I found a simple solution by accident. My goal was to hit a check endpoint every minute, and if I got a 302 status code I would redirect the user to the authentication page.
public IActionResult Check()
{
    return Ok(new { });
}
I left the developer tools open and noticed that every 30 minutes I got a bigger response.
This actually refreshes the cookie, and as a result there is no need to redirect the user.
So, to sum up, someone needs to do this check every 40-50 minutes, because the expiration is set to ~1 hour by default.
I am working with a legacy Windows Service that reads messages from a private MSMQ queue, processes them (does some database work, sends some emails), and then waits for the next message (PeekCompleted).
The service is problematic: whenever Windows Update requires a server reboot (so, almost always), the service comes back up in a "Started" state but has to be restarted manually, or the messages just pile up in the queue.
My first inclination is to think that something in the OnStart handler isn't getting hit when the server comes back up. I am attempting to sort out the logs (another story), but Windows Services and threading are not my normal domain, so I am hoping someone can point me in the right direction.
Below are the OnStart handler and the message handling function, stripped of inconsequential stuff.
Question: in OnStart, the MessageReceived function is attached to the PeekCompleted event.
I assume OnStart fires when the server comes back up, so the handler must get attached, but I am not clear whether messages that were (a) already in the queue at reboot or (b) arrived during the reboot will actually trigger the event.
If they should, is there something else I should be looking for?
Any suggestions welcome!
protected override void OnStart(string[] args)
{
    try
    {
        _inProcess = false;
        _queueMessage = null;
        _stopping = false;
        _queue = ReadyQueue(_queueName);
        if (_queue == null)
        {
            throw new Exception(string.Format("'ReadyFormQueue({0})' returned null", _queueName));
        }
        _queue.PeekCompleted += new PeekCompletedEventHandler(MessageReceived);
        _queue.Formatter = new BinaryMessageFormatter();
        _queue.BeginPeek();
    }
    catch (Exception exception)
    {
        // do cleanup and other recovery stuff
    }
}
private void MessageReceived(object sender, PeekCompletedEventArgs e)
{
    _currentMessage = null;
    _inProcess = false;
    try
    {
        _queueMessage = _queue.EndPeek(e.AsyncResult);
        _queueMessage.Formatter = new BinaryMessageFormatter();
        _currentMessage = (MyMessageType)_queueMessage.Body;
        _queue.ReceiveById(_queueMessage.Id);
        _inProcess = true;
        _helper = new MessageHelper();
        _currentMessage = _helper.Process(_currentMessage); // sets global _inProcess flag
        if (_inProcess)
        {
            Thread.Sleep((int)(_retryWaitTime * 0x3e8));
            SendFormMessageToQueue(FailedQueueName, _currentMessage);
        }
        else
        {
            _queue.BeginPeek();
        }
    }
    catch (Exception exception)
    {
        _inProcess = false;
        // do other recovery stuff
        if (_currentMessage != null)
        {
            ReadyFormQueue(_poisonQueueName);
            SendFormMessageToQueue(_poisonQueueName, _currentMessage);
        }
    }
}
This legacy Windows service could be started before the queueing infrastructure is up and fully operational; it would fail on the initial connection and therefore never process messages.
The first thing that I would check (unless the service has proper logging) is whether a Windows service dependency is properly set up: you don't want your legacy service to fully start until the MSMQ service has itself completely started.
I don't think there is a problem in the legacy service per se, since once you restart it, it seems to work fine. I think you have a resource-availability race, where the consumer starts before the resource and wasn't completely designed to recover from that.
I would create a service dependency (this can be done in the Service Control Manager), then reboot the server and see whether any more MSMQ messages pile up; my guess is the answer will be no.
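Setting that dependency from an elevated command prompt might look like the sketch below (MyLegacyService is a placeholder for the actual service name; MSMQ is the service name of Message Queuing, and the space after depend= is required by sc.exe):

```shell
# Make the SCM wait for Message Queuing before starting the legacy service.
# "MyLegacyService" is a placeholder for the real service name.
sc config MyLegacyService depend= MSMQ
```

After a reboot, the Service Control Manager will then only start the legacy service once MSMQ is running.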
Hope this helps
Scenario: I have a Delphi IntraWeb application with some edit components and buttons on a screen. In TIWEdit.OnAsyncExit and TIWButton.OnClick a flag is set, and another thread in the application sets the Enabled properties of the buttons depending on the flags and some other application data.
By the time the TIWButton.Enabled properties are set, the request has already finished, and the next interaction is cancelled because IW finds out that the internal representation and the HTML form are out of sync. It resynchronizes, and you have to click again.
I would like to refresh the screen somehow on demand.
A timer that checks whether the two are synchronized and issues a refresh has drawbacks in traffic and timing (I can click a button before a timer runs).
A method that could push data would be great.
Maybe IW has a way to do a non-save sync without cancelling the action I just committed.
As my screens are built model-driven (I cannot predict which components will be on the screen or what the interdependencies between components are; that is in the business logic), I cannot add JavaScript to enable or disable a button depending on user actions.
I am not completely sure your question is the same as mine, yet I think there is a lot in common. See the demo project (v2) I posted in the IntraWeb forum.
Based on some comments from Jackson Gomes, I enable a TIWTimer before a long-running thread starts and disable it after the thread has ended. See: http://forums3.atozed.com/IntraWeb.aspx (atozedsoftware.intraweb.attachments), thread 'IWLabel update via Thread', Oct 15, 2009.
The OnAsync timer event fires every 500 ms and uses some bandwidth. That is acceptable in my situation (company intranet).
Gert
You could use the Interop Web Module from the IWElite component pack.
Essentially, you would write a bit of JavaScript using the XMLHttpRequest (XHR) object to call into your IW app's web module action, which returns when the processing is finished. If you need your IW app to continue to function as normal while the process is running, your JavaScript could open a progress window and make the XHR call from there.
IW Elite can be found here:
http://code.google.com/p/iwelite/
An XHR request would look something like this:
function NewXHR() {
    if (typeof XMLHttpRequest == "undefined") {
        try { return new ActiveXObject('Msxml2.XMLHTTP.6.0'); } catch (e) {}
        try { return new ActiveXObject('Msxml2.XMLHTTP.3.0'); } catch (e) {}
        try { return new ActiveXObject('Msxml2.XMLHTTP'); } catch (e) {}
        try { return new ActiveXObject('Microsoft.XMLHTTP'); } catch (e) {}
        throw new Error('AJAX not supported in this browser.');
    } else {
        return new XMLHttpRequest();
    }
}

var xhr = NewXHR();
xhr.open("get", '/mywebaction', false);
xhr.send(null);
window.alert(xhr.responseText);
The above code blocks and waits for the response. If you would rather have it act asynchronously, you could instead do the following:
var xhr = NewXHR();
xhr.open("get", '/mywebaction', true);
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
        if ((xhr.status == 200) || (xhr.status == 304) || (xhr.status === 0)) {
            window.alert('Success: ' + xhr.responseText);
        } else {
            window.alert('Error: (' + xhr.status + ') ' + xhr.statusText);
        }
    }
};
xhr.send(null);