Safely write to files with conflicting names in an NSOperationQueue - ios

This is probably a pretty basic NSOperationQueue question, but maybe it will help some other people out who are just learning this as well.
I'm trying to copy multiple .plist files from the ~/Documents directory to the ~/Library directory in an iOS application. I want to use NSOperations to copy each file to speed this up and take the import process off the main thread.
In my implementation, it's possible that two files with the same name could be copied into the same place. What I'd like to do is make sure that one of the operations changes its filename to one that doesn't already exist before it writes the file. What would be the most straightforward way to go about this?
Thanks,
-c
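A minimal sketch of one approach, assuming NSFileManager and a simple numeric-suffix scheme (the helper name is hypothetical):

// Hypothetical helper: returns a destination path that doesn't collide
// with an existing file, by appending " 2", " 3", ... to the base name.
NSString *uniqueDestinationPath(NSString *path)
{
    // One NSFileManager instance per operation; the shared instance
    // is not guaranteed to be thread-safe.
    NSFileManager *fm = [[[NSFileManager alloc] init] autorelease];
    if (![fm fileExistsAtPath:path]) {
        return path;
    }
    NSString *base = [path stringByDeletingPathExtension];
    NSString *ext = [path pathExtension];
    NSUInteger n = 2;
    NSString *candidate;
    do {
        candidate = [[NSString stringWithFormat:@"%@ %lu", base, (unsigned long)n++]
                     stringByAppendingPathExtension:ext];
    } while ([fm fileExistsAtPath:candidate]);
    return candidate;
}

Note there is still a check-then-write race between concurrent operations; since -copyItemAtPath:toPath:error: fails when the destination already exists, retrying with the next name on that error closes the gap.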

Related

What is the most reliable way to move or copy large files (> 100 MB) on iOS?

Right now I'm moving very large files in iOS with this method:
[fileManager moveItemAtURL:srcURL toURL:toURL error:&error];
This is a method from NSFileManager.
Because the files are so large I try to move them instead of copying and then deleting the source file.
Is there a safer way to do this?
A file move is an extremely lightweight operation; it doesn't involve copying anything, as it simply moves a directory entry from one point in the filesystem to another.
It should be quite safe.
If you really really want to be paranoid, then:
copy all bytes from A to B
verify B is coherent
delete A
Which is what the "atomically" variants of the write/copy APIs do under the covers, save for the verification step, because the filesystem itself should do that.
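A hedged sketch of that sequence with NSFileManager; the "verify" step here is only a byte-count comparison (my assumption, not real coherence checking):

NSFileManager *fm = [[[NSFileManager alloc] init] autorelease];
NSError *error = nil;

// 1. Copy all bytes from A to B.
if (![fm copyItemAtURL:srcURL toURL:dstURL error:&error]) {
    return; // copy failed; do NOT touch the source
}

// 2. "Verify" B -- here just by comparing sizes (an assumption; the
//    filesystem itself is responsible for real coherence).
unsigned long long srcSize = [[fm attributesOfItemAtPath:[srcURL path] error:&error] fileSize];
unsigned long long dstSize = [[fm attributesOfItemAtPath:[dstURL path] error:&error] fileSize];
if (srcSize != dstSize) {
    return; // bail out, leaving the original in place
}

// 3. Delete A only once the copy checked out.
[fm removeItemAtURL:srcURL error:&error];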
What you are doing is correct and efficient. Moving a file within the same file system is essentially instant, whereas a copy-and-delete is very slow. Note that moving a file to a different file system is actually done with a copy and delete.

Using a .pch file to include application files

I just came to know that if we include anything in the .pch file, we don't have to include it later in other files. Now I am thinking of adding all of my files to it so that I don't have to include them in other files and create a mess at the start of each file. But is this a good practice? If not, why not? And if so, why?
"Hi Alex .. I'll take bad ideas for $1000" :D
Aside from Oscar's answer, you will also subvert the build process. Since every file knows about, and hence depends on, every other file, changing a single file means that every compilation unit in the project is rebuilt on every recompilation, rather than just what has changed.
It is definitely not a good practice. You should, however, include the files that will be needed in all or most of your other files; singletons and data models would be good candidates, for example.
The reason this is a bad idea is circular references: if you were to include all your files in that .pch file, you would soon get errors about circular references.
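For contrast, a sketch of a restrained Prefix.pch that only pulls in genuinely universal headers (the app-specific names are hypothetical):

// Prefix.pch -- keep this to headers needed by (nearly) every file.
#ifdef __OBJC__
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>

    // Hypothetical app-wide headers: a data model and a singleton
    // that genuinely appear in most compilation units.
    #import "MYDataModel.h"
    #import "MYSettingsManager.h"
#endif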

Archive format suggestions for exporting iPad app data? Tarball?

I have a nascent iPad application, which stores "documents" internally on the device in the file system as a series of distinct files in a folder.
I'd like to try incorporating an import/export function through iTunes, using the new OS 3.2 features for this. I want to put all the document pieces that I keep internally into one container file for export.
So, smart folks of Stack Overflow: what's the simplest solution that will put a file hierarchy (or, in a pinch, a flat list) into one file? In theory, the "archive"/container won't need to be manipulated outside the app, so random access isn't super important here, although it would be a bonus, of course.
A tar file type thing springs to mind immediately. Roll my own? Any other thoughts or gotchas? (And if anyone can point me to code that reads/writes from a tar file, I'm all ears.)
Thanks!
Update: Made community wiki, since there's no single right answer here.
Try libarchive, a BSD-derived library with a friendly license (easier for iPhone OS) for handling archive files.
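A sketch of writing one file into a tar with libarchive's archive_write API (libarchive 3 naming; the paths and helper are illustrative):

#include <archive.h>
#include <archive_entry.h>
#include <fcntl.h>
#include <unistd.h>

// Append a single on-disk file to an open tar archive;
// call once per document piece.
static void append_file(struct archive *a, const char *path,
                        const char *name_in_tar, int64_t size)
{
    struct archive_entry *entry = archive_entry_new();
    archive_entry_set_pathname(entry, name_in_tar);
    archive_entry_set_size(entry, size);
    archive_entry_set_filetype(entry, AE_IFREG);
    archive_entry_set_perm(entry, 0644);
    archive_write_header(a, entry);

    char buf[8192];
    int fd = open(path, O_RDONLY);
    ssize_t len;
    while ((len = read(fd, buf, sizeof(buf))) > 0) {
        archive_write_data(a, buf, (size_t)len);
    }
    close(fd);
    archive_entry_free(entry);
}

// Usage sketch:
//   struct archive *a = archive_write_new();
//   archive_write_set_format_pax_restricted(a); // widely readable tar variant
//   archive_write_open_filename(a, "export.tar");
//   append_file(a, "/path/doc/page1.plist", "doc/page1.plist", page1_size);
//   archive_write_close(a);
//   archive_write_free(a);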

J2ME Properties

J2ME lacks the java.util.Properties class. Although it is possible to put application settings in the JAD file, this is not recommended for many properties (since some platforms limit the size of the JAD file). I want to put a configuration file inside my jar file and parse it, and I do not want to use XML because it would be overkill for my case.
The question is: is there an existing library for J2ME that can parse properties files or something similar, such as INI files? Or would you recommend another method to solve the initial problem?
The best solution probably depends on what is going to be generating the properties files.
If you've got other non-JavaME projects using the same properties files, then stick with them, and write or find a parser. (There is a simple one from GoBible available on Google Code)
However you might find it just as easy to keep your configuration as static final String myproperty="myvalue"; in a Configuration.java file which you compile, and include in the jar instead, since you then do not need any special code to locate, open, read, and parse them.
You do then pick up a limitation on what you call them, though, since you can no longer use the common dot-separated namespacing idiom.
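If you do go the parser route instead, here is a minimal sketch in CLDC-compatible Java, assuming a /config.properties resource in the jar with key=value lines and # comments:

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.Hashtable;

// Minimal key=value parser for a properties-style resource in the jar.
// CLDC has no java.util.Properties, so this fills a Hashtable by hand.
public class SimpleProperties {

    public static Hashtable load(String resourceName) throws Exception {
        Hashtable props = new Hashtable();
        InputStream in = SimpleProperties.class.getResourceAsStream(resourceName);
        Reader reader = new InputStreamReader(in);
        StringBuffer line = new StringBuffer();
        int c;
        while ((c = reader.read()) != -1) {
            if (c == '\n' || c == '\r') {
                addLine(line.toString().trim(), props);
                line.setLength(0);
            } else {
                line.append((char) c);
            }
        }
        addLine(line.toString().trim(), props); // final line without a newline
        in.close();
        return props;
    }

    private static void addLine(String line, Hashtable props) {
        if (line.length() == 0 || line.charAt(0) == '#') return; // skip blanks/comments
        int eq = line.indexOf('=');
        if (eq > 0) {
            props.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
        }
    }
}

Called as, for example: Hashtable cfg = SimpleProperties.load("/config.properties");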

keep rsync from removing unfinished source files

I have two machines, speed and mass. speed has a fast Internet connection and is running a crawler which downloads a lot of files to disk. mass has a lot of disk space. I want to move the files from speed to mass after they're done downloading. Ideally, I'd just run:
$ rsync --remove-source-files speed:/var/crawldir .
but I worry that rsync will unlink a source file that hasn't finished downloading yet. (I looked at the source code and I didn't see anything protecting against this.) Any suggestions?
It seems to me the problem is transferring a file before it's complete, not that you're deleting it.
If this is Linux, it's possible for a file to be open by process A while process B unlinks it. There's no error, but of course A is wasting its time. So the fact that rsync deletes the source file is not in itself a problem: the deletion happens only after the copy.
The problem is that the copy can happen while the file is still being written to disk, leaving you with a partial file at the destination.
How about this: mount mass as a remote file system (NFS would work) on speed, then just have the crawler write the files there directly.
How much control do you have over the download process? If you roll your own, you can have the file being downloaded go to a temp directory or have a temporary name until it's finished downloading, and then mv it to the correct name when it's done. If you're using third party software, then you don't have as much control, but you still might be able to do the temp directory thing.
Rsync can exclude files matching certain patterns. Even if you can't modify the crawler to make it download files to a temporary directory, it may have a convention of naming files differently during download (for example, foo.downloading for a file named foo), and you can use this to exclude files that are still being downloaded from the copy.
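For example, assuming the crawler uses a .downloading suffix for in-progress files (an assumption about the crawler, not something rsync knows):
$ rsync --remove-source-files --exclude='*.downloading' speed:/var/crawldir .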
If you have control over the crawling process, or it has predictable output, the above solutions (storing in a temp file until finished and then mv'ing it to the completed-downloads place, or ignoring files with a '.downloading' kind of name) might work. If all of that is beyond your control, you can make sure the file is not open in any process by running 'lsof $filename' and checking whether there's any output. Clearly, if no one has the file open, it's safe to move it over.
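A sketch of that check in the same inline style, with an illustrative staging path (lsof exits non-zero when no process has the file open):
$ lsof "$filename" > /dev/null || mv "$filename" /path/to/outgoing/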
