Is it possible to let the user choose when to update a Service Worker?
Why? I want to add an economy mode that would let the user save a lot of bandwidth. This could be useful when the user's data limit is almost exhausted, or when they are using expensive internet abroad.
That's because when the Service Worker updates and there are new versions of the assets, it downloads all of them, which could add up to several MB. If you're 3 days and 50 MB away from a new month, every MB counts.
Let's say that I can retrieve the setting from localStorage:
// localStorage.getItem() returns a string, so quote the key and compare explicitly
const economy = localStorage.getItem('economy') === 'true'
How do I let the Service Worker know that it should only update itself if economy is true?
I realize that this could be a problem in the long run (outdated versions), but I'm planning to nag the user frequently if they choose not to update. I just want to give the user the option to choose.
If you're willing to handle updating/deleting (perhaps just a subset of) cache entries outside of the install and activate events, you have more flexibility as to when they should be triggered. You actually don't have to perform the updates in the service worker at all, if it ends up being easier for you not to. Individual webpages have access to the exact same Cache Storage API instances that the service worker for a given origin uses. You can modify the caches directly from the page in response to whatever action makes the most sense for you, e.g.:
// Ensure we have access to the Cache Storage API.
if ('caches' in window) {
  // Swap this out for whatever UI element will trigger the update.
  const el = document.querySelector('#update-caches-button');
  el.addEventListener('click', () => {
    window.caches.open('my-cache').then(cache => {
      // Add or delete entries from cache.
    });
  });
}
You could do something similar, but keep all the logic in the service worker, by using postMessage() from a client page to trigger a service worker's message event, and then update the caches in the message event handler.
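As a rough sketch (the message shape and cache name here are illustrative, not part of any API):

// In the page: ask the service worker to refresh its caches,
// e.g. only when economy mode is off.
if (navigator.serviceWorker && navigator.serviceWorker.controller) {
  navigator.serviceWorker.controller.postMessage({type: 'update-caches'});
}

// In the service worker: perform the update when asked.
self.addEventListener('message', event => {
  if (event.data && event.data.type === 'update-caches') {
    event.waitUntil(
      caches.open('my-cache').then(cache => {
        // Add or delete entries here.
      })
    );
  }
});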
There are some advantages to relying on the install/activate events for performing cache management. In particular, the service worker stays in a "waiting" state while there are other active clients that rely on the previous cache state, so you don't have to worry about throwing away entries that those clients still need, or swapping out the expected version of a resource for a new one while it's still being used. But as a developer, you're ultimately responsible for understanding how your cached resources are used. If the assets you're managing in these caches aren't likely to cause problems when there's a version mismatch, or when something deleted by one tab needs to be retrieved from the network later by another tab, then you're free to implement that sort of approach.
For what it's worth, we've thought about similar issues when implementing precaching/updates in sw-precache. There's some background at https://github.com/GoogleChrome/sw-precache/issues/145 about trying to make use of standard signals exposed by browsers indicating that the user prefers to conserve data, rather than everyone coming up with their own heuristics.
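Some browsers already expose such a signal today via the NetworkInformation API's saveData flag (support varies, so treat it as a hint; the fallback to the question's economy key is just a sketch):

// Prefer the browser's own data-saver signal when it's available,
// falling back to the app's stored economy setting.
const conserveData =
  (navigator.connection && navigator.connection.saveData) ||
  localStorage.getItem('economy') === 'true';

if (!conserveData) {
  // Safe to trigger the cache update.
}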
I know about the main way Redis handles keys when the memory limit is reached. However...
What if I want Redis to "lock down" itself by becoming read-only until some keys receive a delete signal? The only reason is that all the data in our Redis Cluster is quite important, so we want it available at all times. Of course, if the memory limit is reached we need to free some space, but which data gets dropped should be decided by the user, not by Redis.
Example:
A user watches their statistics window in our application. Behind that, we store all the data in Redis and display it to them. When the user closes the web app, I currently free up all the keys related to their session. I want these keys to be the ones dropped when a memory limit is reached, so that no one gets "random" keys deleted, nor the least frequently used ones.
Is this possible, or am I just dreaming?
maxmemory-policy determines how your cache behaves when it reaches maxmemory. The default option is noeviction, which means Redis won't try to automatically evict any items based on TTL, LRU, etc. This essentially makes the cache read-only: you can't add new items, but you can still read existing ones.
Obviously, some external process or user will then need to delete some of the items before you can add new data.
https://raw.githubusercontent.com/antirez/redis/5.0/redis.conf (search for MEMORY MANAGEMENT in that config file)
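As a minimal sketch, the relevant directives in that config file look like this (the 2gb cap is just an illustrative value):

# Cap memory usage and refuse writes instead of evicting anything.
maxmemory 2gb
maxmemory-policy noeviction
# The same policy can be applied at runtime: CONFIG SET maxmemory-policy noeviction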
I implemented real-time sync following Realm's tasks demo app.
There, a dummy container object is used to hold a List of the models.
The demo app doesn't seem to support offline usage.
I wondered what happens if, given this setup, I start the app on an online device as well as an offline device, and then bring the offline device online.
My initial expectation was that I'd end up with 2 containers (which would be an invalid state), but when I tested it, surprisingly there was only 1 container at the end.
But sometimes I do get 2 containers, and I haven't been able to identify what causes this.
The question, then, is how exactly does this work? I assume the container is normally not duplicated when I sync the offline device for the first time because it's handled as the same object, maybe because it doesn't have a primary key or something? But then why is it sometimes duplicated? And what would be the best practice here: do I have to use a primary key, or should I check for duplicates after connecting and, if there are any, merge the containers manually?
At the moment, Realm Tasks merely checks if the default Realm is empty before it tries to add a new base list container object. If the synchronization process hasn't completed by the time this check occurs, it's reasonable that a second container would be created. When testing the app on a local network, this usually isn't a problem since the download speeds are so fast, but we definitely should test this a bit more thoroughly.
Adding a primary key will definitely help since it means that if a second list is created locally, it will get merged with the version that comes down from the server.
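For illustration, here's what declaring a primary key on the container might look like in Realm's JavaScript SDK (the demo itself is Swift, and the type names here are just placeholders):

// Container objects that share a primary key are merged by sync
// rather than duplicated when an offline device comes online.
const TaskListListSchema = {
  name: 'TaskListList',
  primaryKey: 'id',
  properties: {
    id: 'string',
    items: {type: 'list', objectType: 'TaskList'},
  },
};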
We've recently been focusing on the 'on-boarding' process that occurs when a second device connects to a user's Realm Mobile Platform account, using the new progress notification system. A more logical approach would be to wait for synchronization to complete the initial download after logging in, and only then check for the presence of the objects. Once the documentation is complete, we'll most likely revamp how Realm Tasks handles this.
The demo app (as well as the Realm Mobile Platform) does support offline use, but only after the user has logged in for the first time (which is when these container objects are initially generated). After that, the app can be used offline, and any changes made in the interim are synchronized the next time it comes online.
We're planning on building an 'anonymous user' feature, where a user can start using the app straight away (even offline) and any changes they make before logging in are then transferred to their account once they do so.
I have an old SL4/ria app, which I am looking to replace with breeze. I have a question about memory use and caching. My app loads lists of Jobs (a typical user would have access to about 1,000 of these jobs). Additionally, there are quite a few lookup entity types. I want to make sure these are cached well client-side, but updated per session. When a user opens a job, it loads many more related entities (anywhere from 200 - 800 additional entities) which compose multiple matrix-style views for the jobs. A user can view the list of jobs, or navigate to view 1 job at a time.
I feel that I should be concerned with memory management, especially not knowing how browsers might deal with this. Originally I felt this should all be one EntityManager, and I would detachEntities when the user navigates away from a job, but I'm now thinking this might benefit from multiple managers split by intended lifetime. Or perhaps I should create a new dataservice & EntityManager each time the user navigates to a new hash '/#/' area, since the comments on clear() seem to indicate that this would be faster? If I did that, I suppose I would use pub/sub to notify other viewmodels of changes to entities? This seems complex, and it defeats some of the benefits of Breeze as the context.
Any tips or thoughts about this would be greatly appreciated.
I think I understand the question. I would use a multi-manager approach:
Lookups Manager - holds once-per session reference (lookup) entities
JobsView Manager - "readonly" list of Jobs in support of the JobsView
JobEditor Manager - One per edit session.
The Lookups Manager maintains the canonical copy of reference entities. You can fill it once with a single call to server (see docs for how). This Lookups Manager will Breeze-export these reference entities to other managers which Breeze-import them as they are created. I am assuming that, while numerous and diverse, the total memory footprint of reference entities is pretty low ... low enough that you can afford to have more than one copy in multiple managers. There are more complicated solutions if that is NOT so. But let that be for now.
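In code, the handoff is just a couple of calls; this is a sketch, and the service name is hypothetical:

// Fill the Lookups Manager once per session, then clone its cache
// into each new manager via export/import.
var serviceName = 'api/MyService'; // hypothetical endpoint
var lookupsManager = new breeze.EntityManager(serviceName);
// ... query the reference entities into lookupsManager here ...

var jobEditorManager = new breeze.EntityManager(serviceName);
var exported = lookupsManager.exportEntities(); // serialize the cached entities
jobEditorManager.importEntities(exported);      // copy them into the sandbox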
The JobsView Manager has the necessary reference entities for its display. If you only displayed a projection of the Jobs, it would not have Jobs in cache. You might have an array and key map instead. Let's keep it simple and assume that it has all the Jobs but not their related entities.
You never save changes with this manager! When editing or creating a Job, your app always fires up a "Job Editor" view with its own VM and JobEditor Manager. Again, you import the reference entities you need and, when editing an existing Job, you import the Job too.
I would take this approach anyway ... not just because of memory concerns. I like isolating my edit sessions into sandboxes. Eases cancellation. Gives me a clean way to store pending changes in browser storage so that the user won't lose his/her work if the app/browser goes down. Opens the door to editing several Jobs at the same time ... without worrying about mutually dependent entities-with-changes. It's a proven pattern that we've used forever in SL apps and should apply as well in JS apps.
When a Job edit succeeds, you have to tell the local client world about it. There are lots of ways to do that. If the ONLY place that needs to know is the JobsView, you can hardcode a backchannel into the app. If you want to be more clever, you can have a central singleton service that raises events specifically about Job saving; the JobsView and each new JobEditor communicate with this service. And if you want to be hip, you use an in-process "Event Aggregator" (your pub/sub) for this purpose. I'd probably be using Durandal for this app anyway, and it has an event aggregator in the box.
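If you roll your own instead of using Durandal's, the aggregator can be tiny; here's a minimal sketch (the channel and event names are arbitrary):

// Minimal pub/sub: each JobEditor publishes saves, the JobsView subscribes.
var jobSavedChannel = {
  subscribers: [],
  subscribe: function (fn) { this.subscribers.push(fn); },
  publish: function (job) {
    this.subscribers.forEach(function (fn) { fn(job); });
  }
};

// In the JobsView viewmodel:
jobSavedChannel.subscribe(function (savedJob) {
  // merge or refresh the saved Job in the list
});

// In a JobEditor, after a successful saveChanges():
var savedJob = {id: 42, title: 'Example'}; // stand-in for the saved entity
jobSavedChannel.publish(savedJob);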
Honestly, it's not that complicated to use and importing/exporting entities among managers is a ... ahem ... breeze. Well worth it compared to refreshing the Jobs List every time you return to it (although you'll want a "refresh button" too because OTHER users could be adding/changing those Jobs). You retain plenty of Breeze benefits: querying, validation, change-tracking, batch saves, entity navigation (those reference lists work "for free" in Breeze).
As a refinement, I don't know that I would automatically destroy the JobEditor view/viewmodel/manager when I returned to the JobsView. In my experience, people often return to the same Job that they just left. I might hold on to a view so you could go back and forth quickly. But now I'm getting tricky.
I am creating a desktop application in which I want to track user activity on the system, e.g. that the user opened Microsoft Excel, with the file name and how much time was spent working in it.
I want to create an XML file to maintain that log.
Please provide me some help with that.
This feels like one of those questions where you have to figure out what is meant by the question itself. Taken at face value, it sounds like you want to monitor how long a user spends in any process running in their session; however, it may be that you only really want to know if, and for how long, a user spends time in a specific subset of all running processes.
Since I'm not sure which of these is the correct assumption to make, I will address both as best I can.
Regardless of whether you are monitoring one or all processes, you need to know what processes are running when you start up, and you need to be notified when a new process is created. The first of these requirements can be met using the GetProcesses() method of the System.Diagnostics.Process class, the second is a tad more tricky.
One option for checking whether new processes exist is to call GetProcesses after a specified interval (polling) and determine whether the list of processes has changed. While you can do this, it may be very expensive in terms of system resources, especially if done too frequently.
Another option is to look for some mechanism that allows you to register to be notified of the creation of a new process asynchronously. I don't believe such a thing exists within the .NET Framework 2.0, but it is likely to exist as part of the Win32 API; unfortunately, I can't give you a specific function name because I don't know what it is.
Finally, however you do it, I recommend being as specific as you can about the notifications you choose to subscribe to: the fewer of them there are, the fewer resources are used generating and processing them.
Once you know what processes are running and which ones you are interested in, you will need to determine when focus changes to a new process of interest so that you can time how long the user actually spends using the application. For this you can use the GetForegroundWindow function to get the window handle of the currently focused window.
As far as logging to an XML file goes, you can either use an external library such as log4net, as suggested by pranay's answer, or you can build the log file using the XmlTextWriter or XmlDocument classes in the System.Xml namespace.
A bit of backstory: I am working on a web application that requires quite a bit of time to prep/crunch data before giving it to the user to edit/manipulate. The data request takes roughly 15-20 seconds to complete and a couple of seconds to process. Once there, the user can manipulate values on the fly. Any manipulation of values requires the data to be reprocessed completely.
Update: To avoid confusion, I am only making the data call once (the 15-second hit) and then keeping the results in memory so that I will not have to call it again until the user is 100% done working with it. So the first pull will take a while, but, using Ajax, I am going to hit the in-memory data for the constant updates and keep the response time to around 2 seconds or so (I hope).
In order to make this efficient, I am moving the initial data into memory and using Ajax calls back to the server so that I can reduce the processing time needed to handle the recalculation that occurs with this user's updates.
Here is my question: with performance in mind, what would be the best way to store this data, assuming that only one user will be working with it at any given moment?
Also, the user could potentially be working in this process for a few hours. When the user is working w/ the data, I will need some kind of failsafe to save the user's current data (either in a db or in a serialized binary file) should their session be interrupted in some way. In other words, I will need a solution that has an appropriate hook to allow me to dump out the memory object's data in the case that the user gets disconnected / distracted for too long.
So far, here are my musings:
Session State - Pros: Locked to one user. Has the Session End event, which will meet my failsafe requirements. Cons: Slowest perf of my current options. The Session End event is sometimes tricky to ensure it fires properly.
Caching - Pros: Good Perf. Has access to dependencies which could be a bonus later down the line but not really useful in current scope. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide w/ each other's work.
Static - Pros: Best perf. Easiest to maintain, as I can directly leverage my current class structures. Cons: No easy failsafe step other than a write based on time intervals. Global in scope - will have to ensure that users do not collide w/ each other's work.
Does anyone have any suggestions / comments on which option I should choose?
Thanks!
Update: Forgot to mention, I am using VB.Net, Asp.Net, and Sql Server 2005 to perform this task.
I'll vote for secret option #4: use the database for this. If you're talking about a 20+ second turnaround time on the data, you are not going to gain anything by trying to do this in-memory, given the limitations of the options you presented. You might as well set this up in the database (give it a table of its own, or even a separate database if the requirements are that large).
I'd go with the caching method for storing the data across any page loads. You can name the cache you want to store the data in to avoid conflicts.
For tracking user-made changes, I'd go with a more old-school approach: append to a text file each time the user makes a change and then sweep that file at intervals to save changes back to DB. If you name the files based on the user/account or some other session-unique indicator then there's no issue with conflict and the app (or some other support app, which might be a better idea in general) can sweep through all such files and update the DB even if the session is over.
The first part of this can be adjusted to stagger the writes even more: save changes to Session, then write those to file at intervals, then sweep the file at larger intervals. You can tune it for performance and choose what level of potential user-change loss is acceptable.
Use the Session, but don't rely on it.
Simply, let the user "name" the dataset, and make a point of actively persisting it for the user, either automatically, or through something as simple as a "save" button.
You cannot rely on the session simply because it is (typically) tied to the user's browser instance. If they accidentally close the browser (click the X button, their PC crashes, etc.), then they lose all of their work. Which would be nasty.
Once the user has that kind of control over the "persistent" state of the data, you can rely on the Session to keep it in memory and leverage that as a cache.
I think you've pretty much just answered your question with the pros/cons. But if you are looking for some peer validation, my vote is for the Session. Although the performance is slower (do you know by how much slower?), your processing is going to take a long time regardless. Do you think the user will know the difference between 15 seconds and 17 seconds? Both are "forever" in web terms, so go with the one that seems easiest to implement.
Perhaps a bit off topic: I'd recommend putting those long processing calls in asynchronous (not to be confused with AJAX's asynchronous) pages.
Take a look at this article and ping me back if it doesn't make sense.
http://msdn.microsoft.com/en-us/magazine/cc163725.aspx
I suggest to create a copy of the data in a new database table (let's call it EDIT) as you send the initial results to the user. If performance is an issue, do this in a background thread.
As the user edits the data, update the table (also in a background thread if performance becomes an issue). If you have to use threads, you must make sure that the first thread is finished before you start updating the rows.
This allows a user to walk away, come back, even restart the browser and commit whenever she feels satisfied with the result.
One possible alternative to what the others mentioned is to store the data on the client.
Assuming the data set is not too large and the code that manipulates it can run client side, you could store the data as an XML data island or a JSON object. That data could then be manipulated/processed entirely on the client with no round trips to the server. If you need to persist it back to the server, the end result could be posted via AJAX or a standard postback.
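A rough sketch of that approach (the '/save-results' endpoint is hypothetical; wire it to your own handler):

// Keep the working set on the client; recalculate locally, persist when done.
var workingSet = {jobId: 42, rows: []}; // stand-in for the crunched data

function recalculate() {
  // Manipulate workingSet entirely in the browser: no server round trip.
}

function persist() {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/save-results', true);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify(workingSet));
}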
If this does not work with your requirements I'd go with just storing it on the SQL server as the other comment suggested.