iOS file renaming thread safety

Say I have some data in my iOS app that I want to write to a file. I use the writeToFile:atomically: method on NSData, which writes the data to a temp file and then renames the temp file to the location I specified.
Is this operation thread safe? If I do this writing from a background thread and then happen to ask at a very unfortunate moment from another thread if that file exists (or just grab the contents of that file), is it possible to get an invalid result?

By definition, an atomic operation is all-or-nothing: if you look before the atomic rename has finished, the file at the destination will not exist (or will still hold its old contents); if you access it after the operation is finished, you see the complete new contents.
It is similar to the atomic attribute on Objective-C properties (which we usually set to nonatomic): it makes setters and getters atomic, which just means that each happens in "one instant" and there is no in-between state.

I think that the worst that can happen is the file won't be found at the given path. Your app needs to handle this situation correctly.
The Apple documentation for [NSFileManager fileExistsAtPath:] has this useful advice (emphasis mine):
Note: Attempting to predicate behavior based on the current state of the file system or a particular file on the file system is not recommended. Doing so can cause odd behavior or race conditions. It's far better to attempt an operation (such as loading a file or creating a directory), check for errors, and handle those errors gracefully than it is to try to figure out ahead of time whether the operation will succeed.
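A minimal Swift sketch of that advice (the loadData and save helper names are my own, not Apple API): attempt the read and treat failure as "no data yet", instead of probing with fileExistsAtPath: first.

```swift
import Foundation

// Attempt the read and handle failure, rather than checking
// fileExistsAtPath: first (which would be a race condition).
func loadData(at url: URL) -> Data? {
    do {
        return try Data(contentsOf: url)
    } catch {
        // The file may not exist yet, or the writer may not have
        // finished the atomic rename. Treat it as "no data yet".
        return nil
    }
}

// Writer side: .atomic writes to a temp file, then renames it
// into place, so a reader never observes a half-written file.
func save(_ data: Data, to url: URL) throws {
    try data.write(to: url, options: .atomic)
}
```

A reader on another thread either sees the complete previous contents, the complete new contents, or no file at all; it never sees a partial write.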

Related

How to access iCloud Documents files in an asynchronous manner

I am trying to add support for iCloud Documents in my existing app, but I am struggling badly with how to do that.
Apple seems to prefer that you use the UIDocument class for that. But UIDocument does not give direct access to the file in the file system; it expects to maintain a copy of the file's contents in an NSData object instead. That is simply not workable in my case. All my current code, and half of the third-party libraries I use, work directly with the file on the file system, not with NSData. Rewriting all that code is not feasible.
When not using the UIDocument class, Apple expects you to use NSFileCoordinator to coordinate access to the file's contents. I am trying to do that with my code, but the methods on NSFileCoordinator seem to expect that all reading and writing will be done in one synchronous sequence. All the methods of NSFileCoordinator take a block as an argument and expect all the reading/writing to be performed inside that block. When the block returns, you are no longer allowed to access the file, as far as I understand.
That is not workable in my case either. Some of my code, and the third-party libraries, do the reading/writing asynchronously on background threads. I can identify the start and end of the period during which the code needs access to the file contents, so if NSFileCoordinator had a separate requireAccess method and a separate relinquishAccess method, that would let me achieve the goal. But that does not seem to be the case.
It is unclear to me what role NSFilePresenter plays in this. Some of the documentation, especially that of relinquishPresentedItemToReader() in NSFilePresenter, seems to indicate that you can actually acquire/relinquish access separately:
If you want to be notified when the reader has completed its task,
pass your own block to the reader and use that block to reacquire the
file or URL for your own uses.
But it does not explain anywhere how to "reacquire" the file.
So the concrete question: I need to do the following steps in an asynchronous manner:
1. Acquire access to the file, and obtain a file: URL for the file on the local file system.
2. Do multiple, asynchronous read/write operations on the file with normal file-system operations on that file: URL.
3. Relinquish access to the file.
Does anybody know whether this is possible, and how to do steps 1 and 3?
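For reference, the block-scoped coordination the question describes looks roughly like this in Swift (coordinatedRead is a hypothetical helper name of my own; this is a sketch of the synchronous API, not a solution to the asynchronous problem):

```swift
import Foundation

// NSFileCoordinator's API is block-scoped: you receive a URL that
// is only guaranteed valid inside the accessor block, which is
// exactly the constraint the question runs into with async code.
func coordinatedRead(of url: URL) -> Data? {
    let coordinator = NSFileCoordinator(filePresenter: nil)
    var coordinatorError: NSError?
    var result: Data?
    coordinator.coordinate(readingItemAt: url, options: [],
                           error: &coordinatorError) { safeURL in
        // All file access must happen inside this block;
        // safeURL must not be used after the block returns.
        result = try? Data(contentsOf: safeURL)
    }
    return coordinatorError == nil ? result : nil
}
```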

Most reliable way to determine if URL is a directory?

This post gives a good answer, but the question did not specify which is the 'most reliable' way. For instance, fileExistsAtPath has the following documentation.
Attempting to predicate behavior based on the current state of the file system or a particular file on the file system is not recommended. Doing so can cause odd behavior or race conditions. It's far better to attempt an operation (such as loading a file or creating a directory), check for errors, and handle those errors gracefully than it is to try to figure out ahead of time whether the operation will succeed. For more information on file-system race conditions, see this.
The property hasDirectoryPath does not have any such warning in its documentation. In fact, the documentation for it is pretty sparse.
So I'm now confused, because surely the above warning should say "use hasDirectoryPath instead of this".
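One plausible resolution: hasDirectoryPath never touches the file system at all. It only reports how the URL itself was formed (a trailing slash, or the isDirectory flag it was constructed with), so there is no race condition to warn about, but also no statement about what is actually on disk. A Swift sketch (isDirectoryOnDisk is my own helper name):

```swift
import Foundation

// hasDirectoryPath only inspects the URL value itself; it never
// queries the file system, so it cannot race with it.
let url = URL(fileURLWithPath: "/tmp/somewhere", isDirectory: true)
let looksLikeDirectory = url.hasDirectoryPath  // from the URL alone

// To ask the file system itself, fetch the resource value and
// handle errors, in line with the "attempt the operation" advice.
func isDirectoryOnDisk(_ url: URL) -> Bool {
    (try? url.resourceValues(forKeys: [.isDirectoryKey]))?.isDirectory ?? false
}
```

The two can disagree: a URL built with isDirectory: true still answers true from hasDirectoryPath even if nothing exists at that path.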

When and why should you use NSUserDefaults's synchronize() method?

So I've had a look at the Apple documentation on NSUserDefaults's synchronize() method. See below for reference:
https://developer.apple.com/reference/foundation/userdefaults/1414005-synchronize
The page currently reads:
Because this method is automatically invoked at periodic intervals, use this method only if you cannot wait for the automatic synchronization (for example, if your application is about to exit) or if you want to update the user defaults to what is on disk even though you have not made any changes.
However, what I still don't understand is when this method should be called. For example, should it be called every time the user changes the app's settings? Or should I just trust that the background API is going to handle that? And does leaving the view immediately after a settings change (which so far exists only in memory) result in that change being lost?
Also, when might a failure to call synchronize() result in user settings not getting changed correctly?
Furthermore, what is the cost (performance, memory or otherwise) of calling this method? I know it involves reading and writing from/to the disk but does that really take that much effort on phones?
There seems to be so much confusion about user defaults. Think of it this way: it's essentially the same as having a global dictionary available throughout your app. If you add/edit/remove a key/value pair in the global dictionary, that change is immediately visible anywhere in your code. Since this dictionary is in memory, all of it would be lost when your app terminates if it weren't persisted to a file. NSUserDefaults automatically persists the dictionary to a file every once in a while.
The only reason there is a synchronize method is so your app can tell NSUserDefaults to persist the dictionary "now" instead of waiting for the automatic saving that will eventually happen.
And the only reason you ever need to do that is because your app might be terminated (or crash) before the next automatic save.
In my own apps, the only place I call synchronize is in the applicationDidEnterBackground delegate method. This is to ensure the latest unsaved changes are persisted in case the app is terminated while in the background.
I think much of the confusion comes from debugging an app during development. It's not uncommon during development that you kill the app with the "stop" button in the debugger. And many times this happens before the most recent NSUserDefaults changes have been persisted. So I've developed the habit of putting my app in the background by pressing the Home button before killing the app in the debugger whenever I want to make sure the latest updates are persisted.
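The global-dictionary analogy above can be sketched as follows (the key name is arbitrary):

```swift
import Foundation

let defaults = UserDefaults.standard

// The change is visible everywhere in the app immediately,
// exactly like mutating an in-memory global dictionary.
defaults.set(true, forKey: "soundEnabled")
let enabled = defaults.bool(forKey: "soundEnabled")  // true right away

// synchronize() only forces the periodic save-to-disk to happen now
// (e.g. from applicationDidEnterBackground); it has no effect on
// when the value becomes visible to your own code.
defaults.synchronize()
```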
Given the above summary, let's review your questions:
should it be called every time the user changes the app's settings?
No. As described above, any change is automatically available immediately.
Or should I just trust that the background api is going to handle that?
Yes, trust the automatic persistence with the exception of calling synchronize when your app enters the background.
And does leaving the view immediately after a settings change in memory result in that change being lost?
This has no effect. Once you add/edit/delete a key/value in NSUserDefaults, the change is made.
Also, when might a failure to call synchronize() result in user settings not getting changed correctly?
The only time a change can be lost is if your app is terminated before the latest changes have been persisted. Calling synchronize when your app enters the background solves most of these issues. The only remaining possible problem is if your app crashes. Any unsaved changes that have not yet been persisted will be lost. Fix your app so it doesn't crash.
Furthermore, what is the cost (performance, memory or otherwise) of calling this method? I know it involves reading and writing from/to the disk but does that really take that much effort on phones?
The automatic persistence is done in the background, and it simply writes a dictionary to a plist file. It's very fast unless you are misusing NSUserDefaults to store large amounts of data.
Apple's documentation for synchronize() has been updated and now reads:
Waits for any pending asynchronous updates to the defaults database and returns; this method is unnecessary and shouldn't be used.
UPDATE
As anticipated, it has been deprecated, as noted in the Apple documentation:
synchronize()
Waits for any pending asynchronous updates to the defaults database and returns; this method is unnecessary and shouldn't be used.
Original Answer
-synchronize is deprecated and will be marked with the NS_DEPRECATED macro in a future release.
-synchronize blocks the calling thread until all in-progress set operations have completed. This is no longer necessary. Replacements for previous uses of -synchronize depend on what the intent of calling synchronize was. If you synchronized…
— …before reading in order to fetch updated values: remove the synchronize call
— …after writing in order to notify another program to read: the other program can use KVO to observe the default without needing to notify
— …before exiting in a non-app (command line tool, agent, or daemon) process: call CFPreferencesAppSynchronize(kCFPreferencesCurrentApplication)
— …for any other reason: remove the synchronize call
As far as I know, synchronize is used to sync the data immediately, but iOS handles that in a smart way, so you don't need to call it every time. If you call it every time, it can turn into a performance issue.

nonatomic append failure outcomes

I've got a file that I want to append to. It's pretty important to me that this is both fast (I'm calling it at 50 Hz on an iPhone 4) and safe.
I've looked at atomic appending. It seems to me like I would have to copy the whole file, append to it, and then use the NSFileManager's replaceItemAtURL to move them over, which sounds rather slow.
On the other hand, I could simply suck up a non-atomic append, assuming that the failure conditions are strictly that some subset of bytes at the end of the data I'm trying to write are not written. My file format writes out the length of each chunk first, so if there's not enough space for the length data or the length data is bigger than the available bytes, I can detect a partial write and discard.
The question is, how feasible would it be to use an atomic append to rapidly atomically append small amounts of data (half a kilobyte or so at a time), and what exactly are the failure outcomes of a non-atomic append?
Edit: I am the only one appending to this file. I am concerned only with external failure conditions, e.g. process termination, device running out of power, disk full, etc. I am currently using a synchronous append.
POSIX gives no guarantees about atomicity of write(2) when writing to a file.
If the platform does not provide any other means of writing that grants additional characteristics (and I'm not aware of any such API in iOS) you basically have to live with the possibility that the write could be partial.
The workaround used by many Cocoa APIs (like -[NSData writeToFile:atomically:]) is the mechanism you mentioned: perform the work on a temporary file and then atomically rename(2) the new file over the old one. This strategy does not apply well to your use case, as it requires a copy of the old contents.
I would suggest the non-atomic approach you already considered. Actually I once used a very similar mechanism in an iOS app where I had to write a transcript of user actions for crash recovery. The recovery code thoroughly tested the transcript for integrity and would bail out on unexpected errors. Yet, I never received a single report of a corrupt file.
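The length-prefixed scheme described in the question can be sketched as follows (appendRecord and readRecords are illustrative names; the point is that a torn write can only damage the trailing record, which the reader detects and discards):

```swift
import Foundation

// Append one record as a 4-byte little-endian length prefix
// followed by the payload bytes.
func appendRecord(_ payload: Data, to handle: FileHandle) {
    let n = UInt32(payload.count)
    let prefix = Data([UInt8(truncatingIfNeeded: n),
                       UInt8(truncatingIfNeeded: n >> 8),
                       UInt8(truncatingIfNeeded: n >> 16),
                       UInt8(truncatingIfNeeded: n >> 24)])
    handle.seekToEndOfFile()
    handle.write(prefix + payload)
}

// Read records back; a truncated prefix or payload at the end of
// the file (a partial write) is detected and discarded.
func readRecords(from data: Data) -> [Data] {
    var records: [Data] = []
    var offset = 0
    while offset + 4 <= data.count {
        let length = Int(data[offset])
            | Int(data[offset + 1]) << 8
            | Int(data[offset + 2]) << 16
            | Int(data[offset + 3]) << 24
        let start = offset + 4
        guard start + length <= data.count else { break }  // torn write
        records.append(data.subdata(in: start..<start + length))
        offset = start + length
    }
    return records
}
```

Note this only protects against truncation at the tail; it does not detect corruption within an already-written record, for which a per-record checksum could be added.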

Overlapped serial port and Blue Screen of Death

I created a class that handles serial port asynchronously. I use it to communicate with a modem. I have no idea why, but sometimes, when I close my application, I get the Blue Screen and my computer restarts. I logged my code step by step, but when the BSOD appeared, and my computer restarted, the file into which I was logging data contained only white spaces. Therefore I have no idea, what the reason of the BSOD could be.
I looked through my code carefully and found several possible causes of the problem (I was looking for anything that could lead to accessing unallocated memory and cause access-violation exceptions).
When I rethought the idea of asynchronous operations, a few things came to my mind. Please verify whether these are right:
1) WaitCommEvent() takes a pointer to the overlapped structure. Therefore, if I call WaitCommEvent() inside a function and then leave the function, the overlapped structure cannot be a local variable, right? The event mask variable and event handle too, right?
2) ReadFile() and WriteFile() also take references or pointers to variables. Therefore all these variables have to be accessible until the overlapped read or write operations finish, right?
3) I call WaitCommEvent() only once and check for its result in a loop, doing other things in the meantime. Because I have no idea how to terminate asynchronous operations (is it possible?), when I destroy the class that keeps a handle to the serial port, I first close the handle and then wait for the event in the overlapped structure that was used when calling WaitCommEvent(). I do this to be sure that the thread that waits asynchronously for a comm event does not access any fields of my destroyed class. Is it a good idea or is it stupid?
try
  CloseHandle(FSerialPortHandle);
  if Assigned(FWaitCommEvent) then
    FWaitCommEvent.WaitFor(INFINITE);
finally
  FSerialPortHandle := INVALID_HANDLE_VALUE;
  FreeAndNil(FWaitCommEvent);
end;
Before I noticed all this, most of the variables mentioned in points one and two were local variables of the functions that called the three methods above. Could that have been the cause of the BSOD, or should I look for other mistakes in my code?
When I corrected the code, the BSOD stopped occurring, but it might be a coincidence. What do you think?
Any ideas will be appreciated. Thanks in advance.
I read the CancelIo() function documentation and it states that this method cancels all I/O operations issued by the calling thread. Is it OK to wait for FWaitCommEvent after calling CancelIo() if I know that WaitCommEvent() was issued by a different thread than the one that calls CancelIo()?
if Assigned(FWaitCommEvent) and CancelIo(FSerialPortHandle) then
begin
  FWaitCommEvent.WaitFor(INFINITE);
  FreeAndNil(FWaitCommEvent);
end;
I checked what happens in such a case, and the thread calling this piece of code did not get deadlocked even though it did not issue WaitCommEvent(). I tested it on Windows 7 (if that matters). May I leave the code as is, or is it dangerous? Maybe I misunderstood the documentation and that is the reason for my question. I apologize for asking so many questions, but I really need to be sure about this.
Thanks.
An application running as a standard user should never be able to cause a bug check (a.k.a. BSOD). (And an application running as an Administrator should have to go well out of its way to do so.) Either you ran into a driver bug or you have bad hardware.
By default, Windows is configured to save a minidump in %SystemRoot%\minidump whenever a bug check occurs. You may be able to determine more information about the crash by loading the minidump file in WinDbg, configuring WinDbg to use the Microsoft public symbol store, and running the !analyze -v command in WinDbg. At the very least, this should identify what driver is probably at fault (though I would guess it's your modem driver).
Yes, you do need to keep the TOverlapped structure available for the duration of the overlapped operation. You're going to call GetOverlappedResult at some point, and GetOverlappedResult says it should receive a pointer to a structure that was used when starting the overlapped operation. The event mask and handle can be stored in local variables if you want; you're going to have a copy of them in the TOverlapped structure anyway.
Yes, the buffers that ReadFile and WriteFile use must remain valid. They do not make their own local copies to use internally. The documentation for ReadFile even says so:
This buffer must remain valid for the duration of the read operation. The caller must not use this buffer until the read operation is completed.
If you weren't obeying that rule, then you were likely reading into unreserved stack space, which could easily cause all sorts of unexpected behavior.
To cancel an overlapped I/O operation, use CancelIo. It's essential that you not free the memory of your TOverlapped record until you're sure the associated operation has terminated. Likewise for the buffer you're reading or writing. CancelIo does not cancel the operation immediately, so your buffers might still be in use even after you call it.
