How to Read MultiFrame DICOM Image - c#-2.0

I am able to read the first frame but how will I read the other frames? There are 60 frames in a file.

For uncompressed images, the frames are stored in one continuous blob. For compressed images, the first data item (FFFE,E000) is an offset table: a list of 4-byte offsets to the start of each frame.
For example, a 200x200x16-bit uncompressed frame takes 80,000 bytes. If your file has 50 frames, it will have 80K x 50 = 4 MB of image data. The frames are stacked together in order, so frame N is at offset N x 80K bytes.
For compressed frames, the offset table at the start of the pixel data holds one 4-byte integer per frame, giving that frame's byte offset (measured from the first data item following the table). Each frame's compressed data length is taken from the data item the offset points to.
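To make the arithmetic concrete, here is a minimal sketch of extracting frame N in the uncompressed case (in Swift rather than the question's C#; the function name and the assumption that the pixel-data blob has already been pulled out of the file are mine):
import Foundation

// A sketch of locating frame N inside the uncompressed pixel-data blob
// of a multi-frame DICOM file. `pixelData` is assumed to be the raw
// value of the Pixel Data element, already extracted by your parser.
func frame(at index: Int, in pixelData: Data,
           width: Int, height: Int, bytesPerPixel: Int) -> Data? {
    let frameSize = width * height * bytesPerPixel  // e.g. 200 * 200 * 2 = 80,000
    let start = index * frameSize                   // frames are stacked in order
    guard start + frameSize <= pixelData.count else { return nil } // past last frame
    return pixelData.subdata(in: start ..< start + frameSize)
}

// Example: frame 42 of a 200x200, 16-bit (2 bytes/pixel) series.
// let frame42 = frame(at: 42, in: pixelData, width: 200, height: 200, bytesPerPixel: 2)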

Related

MPSImageIntegral returns all zeroes when images are smaller

I have a Metal shader that processes an iPad Pro video frame to generate a (non-displayed) RGBA32Float image in a color attachment. That texture is then put through an MPSImageIntegral filter, encoded into the same command buffer as the shader, which results in an output image of the same size and format. In the command buffer’s completion handler, I read out the last pixel in the filtered image (containing the sum of all pixels in the input image) using this code:
let src = malloc(16) // 4 Floats per pixel * 4 bytes/Float
let region = MTLRegionMake2D(imageWidth - 1, imageHeight - 1, 1, 1) // last pixel in image
outputImage!.getBytes(src!, bytesPerRow: imageWidth * 16, from: region, mipmapLevel: 0)
let sum = src!.bindMemory(to: Float.self, capacity: 4)
NSLog("sum = \(sum[0]), \(sum[1]), \(sum[2]), \(sum[3])")
That works correctly as long as the textures holding the input and filtered images are both the same size as the iPad's display, 2048 x 2732, though it's slow with such large images.
To speed it up, I had the shader generate just a ¼-size (512 x 683) RGBA32Float image instead, and used that same size and format for the filter's output. But in that case, the sum that I read out is always just zeroes.
By capturing GPU frames in the debugger, I can see that the dependency graphs look the same in both cases (apart from the reduced texture sizes), and that the shader and filter work as expected, based on the appearance of the input and filtered textures in the debugger. So why is it that I can no longer successfully read out the filtered data, when the only change was to reduce the size of the filter's input and output images?
Some things I’ve already tried, to no avail:
Using 512 x 512 images (and other sizes), to avoid possible padding artifacts in the 512 x 683 images.
Looking at other pixels, near the middle of the output image, which also contain non-zero data according to the GPU snapshots, but which read as 0 when using the smaller images.
Using an MTLBlitCommandEncoder in the same command buffer to copy the output pixel to an MTLBuffer, instead of, or in addition to, using getBytes. (That was suggested by the answer to this macOS question, which is not directly applicable to iOS.)
I've found that if I change the render pass descriptor's storeAction for the shader's color attachment that receives the initial RGBA32Float input image from .dontCare to .store, then the code works for 512 x 683 images as well as 2048 x 2732 ones.
Why it worked without that for the larger images, I still don't know.
I also don't know why this store action matters, since the filtered output image was already being generated successfully even when its input was not stored.
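For reference, the fix described above amounts to one line on the render pass descriptor. A sketch, where `renderPassDescriptor` stands for the descriptor already configured for the shader pass:
import Metal

// Sketch: persist the color attachment after the render pass so the
// MPSImageIntegral filter (and the later getBytes call) reads valid data.
// `renderPassDescriptor` is assumed to be the descriptor configured for
// the shader that renders the RGBA32Float input image.
func keepColorAttachment(_ renderPassDescriptor: MTLRenderPassDescriptor) {
    // Was .dontCare, which permits the GPU to discard the attachment's
    // contents when the pass ends; .store writes them back to the texture.
    renderPassDescriptor.colorAttachments[0].storeAction = .store
}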

Why can't I get a manually modified MPEG-4 extended box (chunk) size to work?

Overview
As part of a project to write an MPEG-4 (MP4) file parser, I need to understand how an extended box (or chunk) size is processed within an MP4 file. When I manually modified an MP4 file to use an extended box size, media players reported that the file was invalid.
Technical Information
Paraphrasing the MPEG-4 specification:
An MP4 file is formed as a series of objects called 'boxes'. All data is contained in boxes; there is no other data within the file.
Here is a screen capture of Section 4.2: Object Structure, which describes the box header and its size and type fields:
Most MP4 box headers contain two fields: a 32-bit compact box size and a 32-bit box type. The compact box size supports a box's data up to 4 GB. Occasionally an MP4 box may hold more data than that (e.g., a large video file). In this case, the compact box size is set to 1, and eight (8) octets are added immediately following the box type. This 64-bit number is known as the 'extended box size' and supports box sizes up to 2^64 bytes.
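To make that concrete, here is a minimal sketch of reading one box header, handling both the compact and the extended size (the struct and function names are mine, not from a real MP4 library, and `data` is assumed to begin at the box):
import Foundation

// Sketch of parsing an MP4 box header. All integers are big-endian,
// as the specification requires.
struct BoxHeader {
    let type: String
    let size: UInt64       // total box size, including the header
    let headerLength: Int  // 8 normally, 16 when the extended size is used
}

func readBoxHeader(_ data: Data) -> BoxHeader? {
    guard data.count >= 8 else { return nil }
    let start = data.startIndex
    let compact = data[start ..< start + 4].reduce(UInt32(0)) { $0 << 8 | UInt32($1) }
    let type = String(bytes: data[start + 4 ..< start + 8], encoding: .ascii) ?? "????"
    if compact == 1 {  // a 64-bit extended size follows the box type
        guard data.count >= 16 else { return nil }
        let extended = data[start + 8 ..< start + 16].reduce(UInt64(0)) { $0 << 8 | UInt64($1) }
        return BoxHeader(type: type, size: extended, headerLength: 16)
    }
    // (A compact size of 0 means the box runs to the end of the file; not handled here.)
    return BoxHeader(type: type, size: UInt64(compact), headerLength: 8)
}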
To understand the extended box size better, I took a simple MP4 file and wanted to modify the moov/trak/mdia box to use the extended box size, rather than the compact size.
Here is what the MP4 file looks like before modifying it. The three box headers are highlighted in RED:
My plan was as follows (a code sketch follows the list):
Modify the moov/trak/mdia box
In the moov/trak/mdia, insert eight (8) octets immediately following the box type ('mdia'). This will eventually be our extended box size.
Copy the compact box size to the newly-inserted extended box size, adding 8 to the size to compensate for the newly inserted octets. The size is inserted in big-endian order.
Set the compact size to 1.
Modify the moov/trak box
Add 8 to the existing compact box size (to compensate for the eight octets added to mdia).
Modify the moov box
Add 8 to the existing compact box size (again, to compensate for the eight octets in mdia).
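Here is that plan sketched as byte surgery on the whole file in memory. The offsets are hypothetical; they would come from walking the box tree, and each points at the first byte of that box's compact-size field:
import Foundation

// Sketch of the modification plan. `file` is the whole MP4 read into memory.
func widenMdiaBox(in file: inout Data,
                  moovOffset: Int, trakOffset: Int, mdiaOffset: Int) {
    // Read mdia's current compact size (big-endian).
    let oldSize = file[mdiaOffset ..< mdiaOffset + 4]
        .reduce(UInt32(0)) { $0 << 8 | UInt32($1) }

    // Insert eight octets right after the 'mdia' type, holding
    // oldSize + 8 in big-endian order (the extended box size).
    let extended = withUnsafeBytes(of: (UInt64(oldSize) + 8).bigEndian) { Data($0) }
    file.insert(contentsOf: extended, at: mdiaOffset + 8)

    // Set mdia's compact size to 1, flagging the extended size.
    file.replaceSubrange(mdiaOffset ..< mdiaOffset + 4,
                         with: [0x00, 0x00, 0x00, 0x01] as [UInt8])

    // Grow the trak and moov compact sizes by 8 to absorb the inserted
    // octets. Both headers precede mdia in the file, so their offsets
    // are unchanged by the insertion above.
    for parent in [trakOffset, moovOffset] {
        let size = file[parent ..< parent + 4]
            .reduce(UInt32(0)) { $0 << 8 | UInt32($1) }
        let grown = withUnsafeBytes(of: (size + 8).bigEndian) { Data($0) }
        file.replaceSubrange(parent ..< parent + 4, with: grown)
    }
}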
Here's what the MP4 file looks like now, with the modified octets in RED:
What have we done?
We have told the MP4 parser/player to take the moov/trak/mdia box size from the extended field rather than the compact size field, and have increased all parent boxes by eight (8) to compensate for the newly-inserted extended box size in the mdia box.
What's the problem?
When I attempt to play the modified MP4 file I receive error messages from different media players:
Why do the media players see the modified file as invalid MP4?
Did I need to alter any other fields?
Does the extended box size have to be greater than 2^32?
Can it be that only specific box types support extended box size (e.g., Media Data)?
A tip of the hat to @Alan Birtles for pointing out that the chunk offsets would also need to be modified. Indeed, the stco (sample table chunk offset) box contains absolute file offsets to the data chunks in the mdat box (rather than offsets relative to a box). This can be seen in the specification document:
The chunk offsets need to be increased by the number of octets we added to the file before the mdat box. In our case, this is the eight (8) octet extended box size inserted in the mdia box.
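For illustration, a sketch of that adjustment, again assuming the whole file is in memory; `stcoOffset` is hypothetical and points at the first byte of an stco box header:
import Foundation

// Sketch of patching one stco box in place. After the 8-byte box header,
// an stco box holds a 1-byte version, 3 bytes of flags, a 4-byte
// big-endian entry count, then one 4-byte big-endian absolute file
// offset per chunk.
func shiftChunkOffsets(in file: inout Data, stcoOffset: Int, by delta: UInt32) {
    let countPos = stcoOffset + 12  // skip box header (8) + version/flags (4)
    let entryCount = file[countPos ..< countPos + 4]
        .reduce(UInt32(0)) { $0 << 8 | UInt32($1) }
    var pos = countPos + 4
    for _ in 0 ..< entryCount {
        let offset = file[pos ..< pos + 4]
            .reduce(UInt32(0)) { $0 << 8 | UInt32($1) }
        let patched = withUnsafeBytes(of: (offset + delta).bigEndian) { Data($0) }
        file.replaceSubrange(pos ..< pos + 4, with: patched)
        pos += 4
    }
}

// In this case: shiftChunkOffsets(in: &file, stcoOffset: ..., by: 8)
// for each of the two stco boxes.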
All that remained was to manually change the chunk offsets found in the two stco boxes (both video and audio tracks), adding eight (8) to each chunk offset. Here are the stco boxes before adding 8 to their chunk offsets:
Now the file passes validity tests of both the ffmpeg and ffprobe tools. Interestingly, although VLC succeeds in playing the modified file, other media players (e.g., Windows Media Player, MS Photos, MS Movies & TV, MS MovieMaker) report the file as corrupted. It is not clear why they fail to play the file. Unverified possibilities include:
Not supporting the extended box size for any box other than mdat
Balking if the extended box size is less than 2^32
In summary, if any fields are added to boxes (e.g., extended box size), the stco chunk offsets need to be incremented by the number of octets inserted in the MP4 file preceding each stco box.

Best way to animate sequence in iOS

I have a set of ~400 PNG images that I am currently using UIImageView to animate. This is in Objective-C for iOS 7.
I was wondering if there is a more efficient way to display this animation?
For information, the animation will be on one part of the screen in the background, while other actions are taking place.
400 images is a lot of images. If these images are large, you are going to eat up a lot of memory displaying them, and may crash. (Images in memory take 3 bytes per pixel, or 4 bytes if the image has an alpha channel, as most PNG images do.) For a 400 x 400 point retina image (800 x 800 pixels), that's 800 x 800 x 4, or 2,560,000 bytes per image. With 400 images, that's 1,024,000,000 bytes, or about 976 MB. Way, way more than you should take for a single animation in your app.
You might want to convert the image sequence into a video and display the video in your app. Video playback uses hardware-accelerated streaming that loads only a frame at a time into memory.
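For example, once the sequence has been converted to a video (with AVAssetWriter, ffmpeg, or similar), it can be played as a seamlessly looping layer. A sketch in modern Swift, although the question targets Objective-C / iOS 7; "animation.mp4" is a placeholder name:
import AVFoundation
import UIKit

// Sketch: play the pre-rendered animation as a looping video layer
// instead of a UIImageView frame sequence, so only the frames currently
// being decoded are held in memory. "animation" is a placeholder for
// the converted image sequence bundled with the app.
final class AnimationView: UIView {
    private var looper: AVPlayerLooper?  // retains the loop (iOS 10+)

    override class var layerClass: AnyClass { AVPlayerLayer.self }

    func startLoopingAnimation() {
        guard let url = Bundle.main.url(forResource: "animation",
                                        withExtension: "mp4") else { return }
        let player = AVQueuePlayer()
        looper = AVPlayerLooper(player: player,
                                templateItem: AVPlayerItem(url: url))
        (layer as! AVPlayerLayer).player = player
        player.play()
    }
}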

png snapshot of a specific swf frame

I have two SWFs. The first is my AS3 application. The second contains 50-odd frames. The first SWF has only two elements: a text box where the user types an integer (the frame number of the second SWF, actually) and a 100 x 300 border container. When the user keys an integer into the first SWF's text box, I need to access the second SWF from the first one, take a bitmap snapshot of the frame specified by the user, convert it to PNG (minimum resolution 300 dpi), and display it inside the first SWF's border container.
Can anyone guide me on how to take a bitmap snapshot of (or convert) a specific frame of an external SWF and pass it back to the master (controlling) SWF?
Thanks

Maximum image dimensions in a browser/CSS spec?

I want to display a page containing about 6000 tiny image thumbnails (40x40 each). To avoid having to make 6000 HTTP requests, I am exploring CSS sprites, i.e. concatenating all these thumbnails into one long strip and using CSS to crop the required images out. Unfortunately, I have discovered that JPEG files cannot be larger than 65500 pixels in any one dimension. Wary of further limits in the web stack, I am wondering: are any of the following unable to cope with an image with dimensions of 40x240000?
Internet Explorer
Opera
WebKit
Any CSS spec
Any HTML spec
The PNG spec
Edit: the purpose of this is simply to display an entire image collection at once, requiring that the user at most has to scroll. I want the "micro-thumbnails" to flow into an existing CSS layout, so I can't just use a big rectangular image. I don't want the user to have to click through multiple pages to see everything. The total number of pixels is not that great - only twice what would fit on a 2560x1600 display. The total file size of all the micro-thumbnails is only a couple of megabytes. Assuming every image is manipulated uncompressed in the browser's memory, taking 8 bytes of storage per pixel (RGBA plus 100% overhead fudge factor), we are talking RAM usage in the low hundreds of megabytes; not unreasonable for a specialized application in the year 2010. The only unreasonable thing is the volume of HTTP requests that would be generated if all micro-thumbnails were sent individually.
Well, Safari/iOS lists these limits:
The maximum size for decoded GIF, PNG, and TIFF images is 3 megapixels.
That is, ensure that width * height ≤ 3 * 1024 * 1024. Note that the decoded size is far larger than the encoded size of an image.
The maximum decoded image size for JPEG is 32 megapixels using subsampling.
JPEG images can be up to 32 megapixels due to subsampling, which allows JPEG images to decode to a size that has one sixteenth the number of pixels. JPEG images larger than 2 megapixels are subsampled—that is, decoded to a reduced size. JPEG subsampling allows the user to view images from the latest digital cameras.
Individual resource files must be less than 10 MB.
This limit applies to HTML, CSS, JavaScript, or nonstreamed media.
http://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/CreatingContentforSafarioniPhone/CreatingContentforSafarioniPhone.html
Based on your update, I'd still really recommend not using this approach. Don't you think there's a reason that Google's image search doesn't work like this?
As such, I'd recommend simply loading images as required via Ajax (i.e., when the user scrolls below the currently visible set of images). Whilst this will use more connections, it'll mean that you can have sensibly sized thumbnails, and as a general approach it's much more manageable than having to regenerate pre-generated thumbnail image "sheets" on the back-end when a new image is added, etc.