I add some metadata to the IR file in an EP_EarlyAsPossible pass. However, I find that some of the metadata is discarded. I guess it is due to optimization. Is there any way to stop the metadata from being discarded?
We recently made a change in our app to how resources are added to Photos. Until now we called addResourceWithType:fileURL:options: with PHAssetResourceCreationOptions.shouldMoveFile set to YES, and when we changed it to NO (the default value) we observed many more asset creation failures. Specifically, we now see a new error code in the procedure: PHPhotosErrorNotEnoughSpace. One can see an obvious connection between using more storage on the file system (copying rather than moving) and a storage-related asset creation failure, but we are struggling to understand a few things (a simplified version of our creation call is shown after the list):
The free storage on the device is always much larger than the video size, usually by a huge margin: we observed failures on devices with 120GB of free storage while the video was only 200MB.
We generally save quite a lot of resources to the file system ourselves, so it is surprising to see what appear to be storage issues from such a relatively small amount of extra usage.
Asset creation is part of a larger procedure that encodes a video to the file system and then moves/copies it into Photos. Does copying a 100MB-200MB video instead of moving it really make such a big difference that the overall failure rate of the procedure increases drastically?
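For reference, the creation call is essentially the Swift equivalent of the following (simplified sketch; the function name and surrounding code are illustrative, the options flag is the only part we changed):

import Photos

// Simplified illustration of the change: shouldMoveFile used to be set to true,
// and is now left at its default of false, i.e. the file is copied into Photos.
func addVideoToPhotos(at fileURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false // previously true
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .video, fileURL: fileURL, options: options)
    }, completionHandler: completion)
}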
Appreciate any help.
A piece of software I'm working on outputs quite a lot of files, which are then stored on a server. During its runtime I've had one file become corrupted. These files are critical to the operation, so this cannot happen. I'm therefore trying to come up with a way of adding error correction to the files to prevent this from ever happening again.
I've read up on Reed-Solomon, which encodes k blocks of data plus m blocks of parity, and can then reconstruct up to m missing blocks. So what I'm thinking is to take the data stream, split it into these blocks, and store them in sequence on disk: first the data blocks, then the parity blocks. Repeat until the entire file is stored. k, m, and the block size are of course variables I'll have to investigate and play with.
However, it's my understanding that Reed-Solomon requires you to know which blocks are corrupt. How could I possibly know that? My thinking is that I'd have to add some extra, simpler error-detection code (CRC32 or something like it) to each block as I write it, since otherwise I can't tell whether a block is corrupted.
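To make the layout concrete, here is a rough sketch of what I have in mind (Swift purely for illustration; k, m, the block size and the Reed-Solomon encoder are placeholders, and the actual parity encoding would come from a library):

import Foundation

// Bitwise CRC-32 (IEEE polynomial), used only to detect which blocks are corrupt.
func crc32(_ block: [UInt8]) -> UInt32 {
    var crc: UInt32 = 0xFFFFFFFF
    for byte in block {
        crc ^= UInt32(byte)
        for _ in 0..<8 {
            crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320 : crc >> 1
        }
    }
    return ~crc
}

// Frame one stripe: k data blocks followed by m parity blocks, each block
// prefixed with its CRC-32 so corrupt blocks can be located when reading.
// `encodeParity` stands in for a real Reed-Solomon encoder from a library.
func frameStripe(dataBlocks: [[UInt8]],
                 encodeParity: ([[UInt8]]) -> [[UInt8]]) -> [UInt8] {
    var out: [UInt8] = []
    for block in dataBlocks + encodeParity(dataBlocks) {
        let checksum = crc32(block).littleEndian
        withUnsafeBytes(of: checksum) { out.append(contentsOf: $0) }
        out.append(contentsOf: block)
    }
    return out
}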
Have I understood this correctly, or is there a better way to accomplish this?
This is a bit of an older question, but (in my mind) this is something that is always useful and in some cases necessary. Bit rot will never be completely cured (hush, ZFS community; ZFS only has control over what's on its filesystem while it's there), so we always have to come up with proactive prevention and recovery plans.
While the format was designed to facilitate piracy (specifically, storing and extracting multi-GB files in chunks on newsgroups, where any chunk could go missing or be corrupted), "Parchives" are exactly what you're looking for (see the white paper, though don't implement that scheme directly, as it has a bug and newer schemes are available). They work in practice as follows:
The complete file is input into the encoder
Blocks are processed and Reed-Solomon blocks are generated
.par files containing those blocks are output alongside the original file
When integrity is checked (typically on the other end of a file transfer), the blocks are rechecked and any blocks that need to be used to reconstruct missing data are pulled from the .par files.
Things eventually settled into "PAR2" (essentially a rewrite with additional features) with the following scheme:
The large file is compressed with RAR and split into chunks (typically around 100MB each, as that was a "usually safe" maximum on Usenet)
An "index" file is placed alongside the file (for example bigfile.PAR2). This has no recovery chunks.
A series of PAR files totaling 10% of the original data size sits alongside it, in increasingly larger file sizes (bigfile.vol029+25.PAR2, bigfile.vol104+88.PAR2, etc.)
The person on the other end then gets all the .rar files
An integrity check is run, and reports how many MB of data need recovery
.PAR2 files are downloaded in an amount equal to or greater than what is needed
Recovery is done and integrity verified
RAR is extracted, and the original file is successfully transferred
Now without a filesystem layer this system is still fairly trivial to implement using the Parchive tools, but it has two requirements:
That the files do not change (any change to the file on disk invalidates the parity data; you could of course allow changes, at the cost of added complexity, with a copy-on-change writing scheme)
That you run both the parity generation and the integrity check/recovery at appropriate times.
Since all the math and methods are both known and battle-tested, you can also roll your own to meet whatever needs you have (as a hook into file read/write, spanning arbitrary path depths, storing recovery data on a separate drive, etc.). For initial tips, refer to the pros: https://www.backblaze.com/blog/reed-solomon/
Edit: The same research that led me to this question led me to a whole subset of already-done work that I was previously unaware of:
https://crates.io/crates/solana-reed-solomon-erasure (as well as a bunch of other implementations in the Rust crate registry)
https://github.com/klauspost/reedsolomon (based on the BackBlaze code, and processes 1Gbps per core)
Etc. Search for "Reed-Solomon file recovery".
I used log4j (v. 1) in the past and was glad to know that a major refactoring was done to the project, resulting in log4j 2, which solves the issues that plagued version 1.
I was wondering if I could use log4j 2 to write to data files, not only log files.
The application I will soon be developing will need to be able to receive many events from different sources and write them very fast either to a data file or to a database (I haven't decided which yet).
The thread that receives the events must not be blocked by I/O while attempting to write events, so log4j2's Asynchronous Loggers, based on the LMAX Disruptor library, will definitely fit this scenario.
Moreover, my application must be able to recover from either a 'not enough space on disk' or an 'unable to reach database' condition, when writing to a data file or to a database table, respectively. In other words, when the application runs out of disk space or the database is temporarily unavailable, my application needs to store events in memory, wait for storage to become available, and then write all waiting events to disk or to the database.
Do you think I can do this with log4j?
Many thanks for your help.
Regards,
Nuno Guerreiro
Yes.
I'm aware of at least one production implementation in a similar scenario, wherein gathered events are written to disk at high throughput.
Write to a volume other than your system volume to minimize the chances of system crashes due to disk space overrun.
Upfront capacity planning can help ensure a hardware configuration with adequate resources to handle the projected average load and bursts for a reasonable period of time.
Do not let the system run out of disk space :). Keep track of disk usage, and proactively drop older data in extreme circumstances.
I'm struggling with memory management in iOS while downloading relatively large files from the web (such as 350MB videos).
The goal here is to download these kinds of files and store them in Core Data in a Binary Data field.
At the moment I'm using the NSURLSession.dataTaskWithURL and NSURLSession.dataTaskWithRequest methods to retrieve these files, but it looks like these methods don't manage memory usage at all; they just keep filling memory until it reaches its maximum, leaving me with a memory warning when I reach about 380MB.
(Screenshots: initial memory usage, and the memory warning.)
What's the best strategy to perform this kind of large data retrieval from the web without triggering a memory warning? Can Alamofire or other libraries deal with this problem?
It is better to use a download task, save the video as a file to the Documents or Library directory, and then save the relative path in Core Data.
If you use a download task:
You can resume if the last download failed.
It needs much less memory, because the response is streamed to a temporary file instead of being accumulated in RAM.
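A minimal sketch of that approach, with names of my own choosing (URLSession is the Swift face of NSURLSession):

import Foundation

// Minimal sketch: the download is streamed to a temporary file on disk,
// which we then move into Documents. Only the file name (relative path)
// goes into Core Data, since the sandbox's absolute path can change.
func downloadVideo(from url: URL, completion: @escaping (String?, Error?) -> Void) {
    let task = URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL else {
            completion(nil, error)
            return
        }
        do {
            let documents = try FileManager.default.url(for: .documentDirectory,
                                                        in: .userDomainMask,
                                                        appropriateFor: nil,
                                                        create: true)
            let destination = documents.appendingPathComponent(url.lastPathComponent)
            try? FileManager.default.removeItem(at: destination) // replace any old copy
            try FileManager.default.moveItem(at: tempURL, to: destination)
            completion(url.lastPathComponent, nil) // store this relative path in Core Data
        } catch {
            completion(nil, error)
        }
    }
    task.resume()
}

For the resume case, URLSession also offers downloadTask(withResumeData:), which you can feed the resume data produced when a previous download fails or is cancelled.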
You can try AFNetworking to download large files.
I am writing a series of tests for a GPU's DRAM (global) memory, specifically targeting the AMD GCN architecture of the Tahiti and Hawaii model lines. These architectures have a write-back L2 cache.
What I want is to ensure that stores to global memory are indeed written out to global memory before another thread does a read.
The barrier and mem_fence documentation in the spec states:
CLK_GLOBAL_MEM_FENCE - The barrier function will queue a memory fence to ensure correct ordering of memory operations to global memory. This can be useful when work-items, for example, write to buffer or image objects and then want to read the updated data.
However, this only enforces correct ordering. My question is: does this actually trigger a write-back of the L2 cache data to global memory?
OpenCL 1.2 gives you next to no control over this. The fences are very poorly defined and, technically, if you read carefully, only affect the work-group. So most likely nothing will force the cache to flush until the kernel completes.
OpenCL 2.0 gives you full ordering control. Ordering is all you get, not explicit cache operations.
If you do a release write at all_svm_devices scope, then by the time a work-item on a different device can see that write, you know that every write before it must be visible too. This may mean the cache has been flushed, if the cache was not using a standard ownership-based coherence protocol.
If you release at device scope only, and the L2 is shared across the whole device, there would be no need to flush it to guarantee that ordering.
The memory model is defined entirely in terms of ordering, not in terms of caches, but with scopes it is intended to allow efficient implementation on very relaxed cache hierarchies.