How do I clear the isolated storage of another application?

Application1 uses Microsoft Enterprise Library Caching Application Block with IsolatedStorageBackingStore to cache data from a database accessed by Internet. The task is to create another Application2 (helper application like "Adobe Flash Player uninstaller") which will be able to clear the cache of Application1 (this will be one of many features of Application2).
The question is - how can Application2 get the caching folder path of Application1 (in order to clear it)?
The path will be something like "<System Drive>\Documents and Settings\<User>\Local Settings\Application Data\IsolatedStorage\tqli5mdv.xa5\htccao3l.ksb\StrongName.1r3fiexsbrusebdd0maaohl2i5cz4lhq\StrongName.mwjgo5cc1qomwe5tbepbfnyucq0obm3f\Files\<BackingStore PartitionName>".
It's a pity, but the user has several applications with the same <BackingStore PartitionName> and this can't be changed, so Application2 can't find this path by the <BackingStore PartitionName> string (of course there will be several folders - one for each application - and I don't know how Application2 could choose which folder belongs to Application1). I have Googled a lot but with no result. Please help.

I'm not sure about the most recent versions of the Caching block, but previous versions wouldn't allow you to do this. They partition the store by the application's assembly name (amongst other things), so two applications can't see each other's data.


Too Many Files Open while monitoring file changes

I am developing a document-browser-based app for the iPad. I have been using SKQueue to monitor changes to the files in order to make sure their metadata remains current when the user performs actions in the document browser. The code for initiating monitoring:
// Set up the queue (SKQueue's initializer is failable)
queue = SKQueue(delegate: self)

// Remove all existing paths
queue?.removeAllPaths()

// Get the list of PDF URLs using a function that enumerates a folder's contents
let pdfFiles = getFolderContents(rootFolder: myDocumentsFolder, extensionWanted: "pdf")

// Watch every file and every folder individually - this is what
// eventually exhausts the file descriptor limit
for pdfFilePath in pdfFiles.filePaths {
    queue?.addPath(pdfFilePath.path)
}
for pdfFolderPath in pdfFiles.folderPaths {
    queue?.addPath(pdfFolderPath.path)
}
I developed my own logic to respond to notifications from this queue, but I do not remove any items from the queue during the app's runtime.
The problem: once the number of watched items (files and folders) exceeds roughly 200, the system hits a wall and the console reports error 24: Too Many Files Open. After that, no file can be read or written at all.
From what I was able to gather from searching, it seems that iOS and iPadOS do not allow more than 256 file descriptors to be open at the same time, which would mean that the GCD approach to monitoring file changes suffers from the same limitation.
Is there any way to monitor file changes that is not subject to such a limitation? Any other suggestions?
After a lot of research and experimentation, I can finally verify that indeed, the default maximum number of open file descriptors is 256 - on macOS, iOS, and iPadOS. This can be changed easily on macOS - see article here. However, iOS and iPadOS being much more closed in nature, there is no reliable way of changing this limit on those platforms.
Good practices are therefore:
Try to avoid designing an app that requires so many open file descriptors in the first place.
Monitor directories, not individual files. With most tools available for monitoring the file system, you get notifications for any file change within a monitored directory. Just implement your logic from there, by enumerating the folder and comparing its new state with a saved state (a minimal sketch follows this list). You can find good code for that in the top two answers of this SO thread.
Note: I recommend enumeration rather than other methods of getting the file system state, because other methods tend to give inconsistent results between the simulator and actual devices (different treatment of symlink resolution).
Make sure your method of choice for monitoring the file system can be queried for the number of items being watched, and issue an alert if the user approaches a set limit. Remember that the 256-descriptor budget also covers every other file your app has open, including files in the app's bundle that are in active use - so take a generous safety margin.
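To make the enumerate-and-compare idea concrete, here is a minimal sketch; the snapshot format and function names are my own invention, not from any particular library:

import Foundation

// Hypothetical snapshot format: file path -> last modification date.
// Diffing two snapshots reveals added, deleted, and modified files.
typealias FolderState = [String: Date]

func snapshot(of folder: URL) -> FolderState {
    var state = FolderState()
    let keys: [URLResourceKey] = [.contentModificationDateKey]
    let enumerator = FileManager.default.enumerator(at: folder, includingPropertiesForKeys: keys)
    while let file = enumerator?.nextObject() as? URL {
        let values = try? file.resourceValues(forKeys: Set(keys))
        state[file.path] = values?.contentModificationDate
    }
    return state
}

// Persist a snapshot as JSON (e.g. in Application Support) and reload it
// on the next launch to diff against a fresh enumeration.
func save(_ state: FolderState, to url: URL) throws {
    try JSONEncoder().encode(state).write(to: url)
}

func load(from url: URL) -> FolderState? {
    guard let data = try? Data(contentsOf: url) else { return nil }
    return try? JSONDecoder().decode(FolderState.self, from: data)
}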
In my case, my app uses a UIDocumentBrowserViewController - in other words, Apple's own Files app - to let users manage their files. I have to keep my metadata up to date with the file system state, and I have no control over my users' file management habits. To complicate matters, the Files app itself can be used to modify the app's file system while my app is not active.
Therefore, I do two things:
I save a detailed state of the file system to a JSON file in Application Support from the App Delegate's applicationDidEnterBackground and applicationWillTerminate methods, and on the app's next launch I compare it to a fresh enumeration of the file system (as in the snapshot sketch above), alerting the user to any mismatch and suggesting they use the app's own file browser next time.
I created my own Swift package, named SFSMonitor, for monitoring the file system. It is based on the wonderfully convenient SKQueue (which I highly recommend), but instead of kevent it uses Dispatch Sources - a more modern approach, which Apple advocates. It is similar to Apple's own Directory Monitor (a reference to which you can find here), but instead of monitoring one directory, it allows you to create and manage a whole queue of them. This class allows you to set a maximum number of monitored file descriptors and be notified when that limit is reached.
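For anyone curious, the heart of a Dispatch Source based directory monitor is quite small. Here is a bare-bones sketch of the technique (illustrative only, not SFSMonitor's actual code):

import Foundation

// One file descriptor watches a whole directory, so the files inside it
// cost nothing extra against the 256-descriptor limit.
final class DirectoryMonitor {
    private let url: URL
    private var descriptor: CInt = -1
    private var source: DispatchSourceFileSystemObject?

    init(url: URL) { self.url = url }

    func start(onChange: @escaping () -> Void) {
        descriptor = open(url.path, O_EVTONLY)
        guard descriptor >= 0 else { return }
        let source = DispatchSource.makeFileSystemObjectSource(
            fileDescriptor: descriptor,
            eventMask: .write,          // fires whenever the directory's contents change
            queue: DispatchQueue.global())
        source.setEventHandler(handler: onChange)
        source.setCancelHandler { [weak self] in
            if let fd = self?.descriptor, fd >= 0 { close(fd) }
            self?.descriptor = -1
        }
        source.resume()
        self.source = source
    }

    func stop() {
        source?.cancel()
        source = nil
    }
}

With one descriptor per directory rather than per file, even a large document tree stays comfortably below the limit.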

What is the proper way for a program to open and write to a mapped drive without allowing the computer user to do so?

I am working with a program designed to record and display user-input data for tracking courses in a training process. One of the requirements was that we be able to keep a copy of each course's itinerary (in .pdf format) to display alongside the course. The program is written in Delphi 7 and is expected to run on Windows 7 machines.
I've managed to get a remote location set up on the customer's main database server (running CentOS 6) as a Samba share to store the files. However, I'm now running into a usability issue with the handling of those files.
The client doesn't want the process to go through a mapped drive; they've had problems in the past with individual users treating a mapped drive that another set of programs requires as personal drive space. Without one, however, the only method I could come up with for saving/reading back the .pdf files was a direct path to the share (that is, setting the program to copy to/read from \\server\share\ directly) - which is garnering complaints that it takes too long.
What is the proper way to handle this? I've had several thoughts on the issue, but I can't determine which path would be the best to follow:
I know I could map the drive at the beginning of the program's execution and unmap it at the end, but that leaves it available for the end user to save to while the program is running - or indefinitely, if the program were to crash.
The direct 'write-to-share' method, which bypasses the need for a mapped drive, is, as I've said, considered too slow (probably because it's consistently a bit sluggish to display the files).
I don't have the ability to set a group policy on these machines, so I can't hide a drive that way - and I really don't think it's wise for my program to attempt to change the registry on the user's machine, which rules that approach out as well.
I considered having the drive opened as a different user, but I'm not sure that helps - after looking into it, I suspect (perhaps inaccurately) that it wouldn't be any defense; the end user would still have access to the drive while it's open.
Given that these four options seem to be less than usable, what is the correct way to handle these requirements?
I don't think it will work with a Samba share.
However, you could think about using (secure) FTP or, if there is a database, just uploading the files as BLOBs.
That way you don't have to expose the share's credentials to the end user.

iOS - how to structure database to conform to iCloud backup rules

I've been having trouble getting an app submitted to the App Store. This is due to the fact that the database, which is updatable, is too large for the iCloud backup limitations. Most of the data in the db is static, but one table records the user's schedule for reviewing words (this is a vocabulary quiz).
As far as I can tell, I have two or three realistic options. The first is to put the whole database into the Library/Caches directory. This should be accepted, because that directory is not backed up to iCloud. However, there's no guarantee that it will be maintained during app updates, per this entry in "Make App Backups More Efficient" at this URL:
http://developer.apple.com/library/IOs/#documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/PerformanceTuning/PerformanceTuning.html
Files Saved During App Updates
When a user downloads an app update, iTunes installs the update in a new app directory. It then moves the user’s data files from the old installation over to the new app directory before deleting the old installation. Files in the following directories are guaranteed to be preserved during the update process:
<Application_Home>/Documents
<Application_Home>/Library
Although files in other user directories may also be moved over, you should not rely on them being present after an update.
The second option is to put the data into the Documents or Library directory and mark it with the skipBackupFlag. However, one problem is that this flag doesn't work on iOS 5.0 and earlier, per this entry in "How do I prevent files from being backed up to iCloud and iTunes?" at
https://developer.apple.com/library/ios/#qa/qa1719/_index.html
Important The new "do not back up" attribute will only be used by iOS 5.0.1 or later. On iOS 5.0 and earlier, applications will need to store their data in <Application_Home>/Library/Caches to avoid having it backed up. Since this attribute is ignored on older systems, you will need to insure your app complies with the iOS Data Storage Guidelines on all versions of iOS that your application supports
This means that even if I use the skipBackupFlag, I'll still have the problem that the database gets backed up to the cloud on iOS 5.0 and earlier, I think.
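For reference, on iOS 5.0.1 and later the attribute itself is easy to apply; in current Swift terms it looks roughly like this (the helper name is mine):

import Foundation

// Minimal sketch of applying the "do not back up" attribute to a file.
// On iOS 5.0 and earlier the attribute is ignored - which is exactly the problem.
func excludeFromBackup(_ fileURL: URL) throws {
    var url = fileURL                      // setResourceValues(_:) is mutating
    var values = URLResourceValues()
    values.isExcludedFromBackup = true
    try url.setResourceValues(values)
}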
So the third option, which is pretty much an ugly hack, is to split the database in two: put the updatable part into the Library or Documents directory, and leave the rest in the application resources. That way only the small, updatable part would be stored in the cloud. The problem is that this splits the db for no good reason and introduces possible performance issues from having two databases open at once.
So, my question is, is my interpretation of the rules correct? Am I going to have to go with option 3?
P.S. I noticed that in my last post, cited URLs were edited into links without the URL showing. How do I do that?
Have you considered using external file references, as described in https://developer.apple.com/library/IOS/#releasenotes/DataManagement/RN-CoreData/_index.html ? Specifically, refer to setAllowsExternalBinaryDataStorage: (https://developer.apple.com/library/IOS/documentation/Cocoa/Reference/CoreDataFramework/Classes/NSAttributeDescription_Class/reference.html#//apple_ref/occ/instm/NSAttributeDescription/setAllowsExternalBinaryDataStorage:). Pushing large data out into separate files can help reduce database size.
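As a quick illustration of that option (the attribute name here is made up; the same box can also be ticked per attribute in the Xcode model editor):

import CoreData

// Sketch: a binary attribute whose large values Core Data may store as
// separate external files rather than inside the SQLite database itself.
let pdfAttribute = NSAttributeDescription()
pdfAttribute.name = "pdfData"
pdfAttribute.attributeType = .binaryDataAttributeType
pdfAttribute.allowsExternalBinaryDataStorage = true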

ASP.NET MVC 3 in-memory data store

I have a project which provides users with a list of current tasks that need to be completed. Any user can complete any task, so to ensure that only one user is working on a task at a time, I need to be able to 'lock' it. I'm using SignalR for this: a user requests a lock on a task, and if they are successful (i.e. if no one else has locked it), they will be able to access the further information that they need.
My problem is how to store the list of locked tasks. The original plan was simply to add an additional bit field, 'IsLocked', to the Task table and update it when a user requested a lock and when the task was unlocked. We have about 300 concurrent users, however, and a task takes only about 3-4 minutes, meaning a huge number of additional - and tiny - queries against the database. We were therefore wondering about in-memory storage: simply keeping a list of task ids in a 'lockedTasks' list.
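To make the in-memory idea concrete: the lock registry boils down to a small thread-safe map from task id to the user holding the lock. Sketched here in Swift to match the other code on this page; in ASP.NET the same role would be played by an equivalent concurrent dictionary:

import Foundation

// Hypothetical in-memory lock registry. A serial queue makes
// try-lock and unlock atomic across concurrent requests.
final class TaskLockRegistry {
    private var locks = [Int: String]()    // taskId -> userId
    private let queue = DispatchQueue(label: "task-locks")

    // Returns true if the caller acquired the lock, false if it is taken.
    func tryLock(taskId: Int, userId: String) -> Bool {
        queue.sync { () -> Bool in
            if locks[taskId] != nil { return false }
            locks[taskId] = userId
            return true
        }
    }

    func unlock(taskId: Int, userId: String) {
        queue.sync {
            if locks[taskId] == userId { locks[taskId] = nil }
        }
    }
}

Note that this shares the weakness the first answer below points out: if the process is recycled, the locks are gone.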
I had considered using caching, but I'm unsure of the best way to do this, or even whether better alternatives exist. If anyone has experience with this, some advice would be great, thanks.
I would avoid keeping this purely in memory, as IIS is not that great with it: if IIS needs to recycle the Application Pool for some reason, your list is simply gone!
Maybe a Memcached system? That does not lose things in the above way, but...
I would advise something in the middle. File I/O is faster than requesting data from a database, especially if the database is not on the same machine (which, for security reasons, it never should be). So why not use one of the currently famous NoSQL databases just to hold your list?
MongoDB is a document database with a .NET library, and it's easy to use; it is not as fast as memory, but it is much quicker than a traditional database for what you want.
Normally the NoSQL database will be hosted in the App_Data folder, so it will be extremely fast to access, and you can just hold the task_id and user_id of all locked tasks there.
Have you considered stateful filters?
Check out these links for more info:
ASP.NET MVC Filters and Statefulness
Brad Wilson: Advanced MVC 3 - (Video)
Brad Wilson: Advanced MVC 3 - (PDF)
I'm sorry, but if your app can't handle a single query every 3-4 minutes x 300 users, then you're doing something very wrong. Just browsing a site typically generates orders of magnitude more queries than that.

How to convert a registered version of an application back to a trial version when it is copied to another computer?

I would like to include some type of copy protection scheme in my applications that would make a retail registered version of my software revert back to a trial version if/when it gets installed on another computer.
In the old days, I would simply store all the user information in a record that I tacked onto the end of the exe file. During the registration process, I would poke those values into the data record at the end of the exe file. This worked great until good ol' Norton started flagging my product as a virus because the exe file changed.
I stopped doing that a long time ago. I'm getting ready to create an updated version of my software, and I'd like to know how you have accomplished this.
The information that makes it a retail version should be stored on the target computer, not with the original program. That way, when they try to move the program, it reverts to the trial version because the retail information is missing on the new computer.
The retail information is added via a registration process, using a unique key. There are a number of ways to make this key work only once. One way is to transmit it directly to the program over the internet, where the user never sees it, so they can't manually transfer it to the new computer.
You should save the information in multiple locations to minimize the chance a savvy user can find it (using e.g. Process Monitor). I would suggest
a registry key in HKCU and
a hidden file in the local application directory.
Also save some information that is bound to the local computer, so that even if the average user finds your file and registry entry, copying won't succeed, because they don't know how to obtain the updated data on a new PC. This information can also be a key generated by you, based on a hardware ID the user has to send you.
Regarding the key generation algorithm: if the protection is "against" the average user, just make something up - it depends a little on your target group. A simple scheme like ROT47 might be enough (see the sketch below).
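For illustration, ROT47 is only a few lines in any language; a Swift sketch:

import Foundation

// ROT47: rotate every printable ASCII character ('!'...'~', codes 33...126)
// by 47 positions within that 94-character range. Applying it twice
// restores the original, so the same function encodes and decodes.
func rot47(_ input: String) -> String {
    String(input.unicodeScalars.map { scalar -> Character in
        let v = scalar.value
        guard (33...126).contains(v) else { return Character(scalar) }
        return Character(UnicodeScalar(33 + (v - 33 + 47) % 94)!)
    })
}

// rot47(rot47(key)) == key for any key, so a stored ROT47'd value is
// trivially reversible - fine against casual users, useless against anyone determined.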
Perhaps you can use the same approach as before - except that instead of saving the data in the exe (triggering a false positive from the AV), you hash the data and save it in a separate file.
