Is it possible to process (filter) HDR images through Core Image? I couldn't find much documentation on this, so I was wondering if someone might have an answer. I do know that it is possible to do the working-space computations in RGBAh when you initialize a CIContext, so I figured that if we can do computations with floating-point image formats, it should be possible.
If it is not possible, what are the alternatives for producing HDR effects on iOS?
EDIT: I thought I'd try to be a bit more concise. To my understanding, HDR images can be saved as .jpg, .png, and other image formats by clamping the pixel values. However, I'm more interested in doing tone mapping through Core Image on an HDR image that has not been converted yet. The issue is creating a CIImage from an HDR image, presumably one with the .hdr extension.
EDIT 2: Maybe it would be useful to use CGImageCreate along with CGDataProviderCreateWithFilename?
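For illustration, here is roughly what I had in mind; a minimal sketch only, assuming the .hdr decoding happens elsewhere, with floatPixels, width, and height as hypothetical placeholders:

    import CoreImage

    // A CIContext whose working format is RGBAh (16-bit half-float per
    // channel), so intermediate results are not clamped to 8 bits:
    let context = CIContext(options: [
        .workingFormat: CIFormat.RGBAh,
        .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!
    ])

    // Wrapping already-decoded half-float pixel data in a CIImage
    // (`floatPixels: Data`, `width`, `height` come from your own decoder):
    let bytesPerRow = width * 4 * 2   // RGBA, 2 bytes per half-float channel
    let hdrImage = CIImage(bitmapData: floatPixels,
                           bytesPerRow: bytesPerRow,
                           size: CGSize(width: width, height: height),
                           format: .RGBAh,
                           colorSpace: CGColorSpace(name: CGColorSpace.extendedLinearSRGB))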
I hope you have a basic understanding of how HDR works. An HDR file is generated by capturing two or more images at different exposures and combining them. So even if there were something like an .hdr file, it would be a container format with more than one image in it. Technically, you cannot give two image files at once as the input to a generic CIFilter.
And on iOS, as I remember, it's not possible to access the original set of photos behind an HDR capture, only the processed final output. Even if you could, you'd have to do the HDR merge manually and generate a single HDR png/jpg anyway before feeding it to a CIFilter.
Since there are people asking for a CI HDR algorithm, I decided to share my code on GitHub. See:
https://github.com/schulz0r/CoreImage-HDR
It implements the Robertson HDR algorithm, so you cannot use RAW images. Please see the unit tests if you want to know how to recover the camera response and obtain the HDR image. Core Image saturates pixel values outside [0.0 ... 1.0], so the HDR result is scaled into that interval.
Coding in Metal always leads to messy code for me, so I decided to use MetalKitPlus, which you have to include in your project. You can find it here:
https://github.com/LRH539/MetalKitPlus
I think you have to check out the dev/v2.0.0 branch; I will merge it into master in the future.
Edit: just clone the master branch of MetalKitPlus. I also added a more detailed description to my CI-HDR project.
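For context, here is a rough CPU-side sketch of the Robertson-style estimate the project computes on the GPU. Names here are illustrative only; the weighting function and inverse camera response are assumed to have been recovered beforehand, as in the unit tests:

    // Per pixel, given values y_j from N exposures with times t_j:
    //   E = Σ_j w(y_j) · t_j · I⁻¹(y_j)  /  Σ_j w(y_j) · t_j²
    func radianceEstimate(samples: [Float],            // y_j, one per exposure
                          exposureTimes: [Float],      // t_j in seconds
                          inverseResponse: (Float) -> Float,
                          weight: (Float) -> Float) -> Float {
        var numerator: Float = 0
        var denominator: Float = 0
        for (y, t) in zip(samples, exposureTimes) {
            let w = weight(y)
            numerator += w * t * inverseResponse(y)
            denominator += w * t * t
        }
        return denominator > 0 ? numerator / denominator : 0
    }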
You can now (iOS 10+) capture RAW images (coded on 12 bits) and then filter them the way you like using CIFilter. You might not get a dynamic range as wide as the one you get from bracketed captures; nevertheless, it is still wider than capturing 8-bit images.
Check Apple's documentation on capturing and processing RAW images.
I also recommend watching Apple's WWDC 2016 video (skip to the RAW processing part).
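A minimal sketch of that pipeline, assuming the pre-iOS-15 CIFilter RAW API; the file path and exposure value are placeholders:

    import CoreImage

    // Open a captured DNG through Core Image's RAW support (iOS 10+):
    let rawURL = URL(fileURLWithPath: "/path/to/capture.dng")
    let rawFilter = CIFilter(imageURL: rawURL, options: nil)

    // Adjust RAW-domain parameters before demosaicing, e.g. exposure bias:
    rawFilter.setValue(0.5, forKey: kCIInputEVKey)

    // The developed image can then be chained into any further CIFilters:
    let developed = rawFilter.outputImage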
Related
Are there any image formats for the web with full HDR image support? 10/12-bit channels, DCI-P3/Rec.2020 colour space, etc.
It seems like none of the conventional formats support it, and no one is talking about it, even though YouTube accepts HDR uploads and HDR monitor adoption is increasing.
I am by no means an expert on this topic, but I found this question while working on a 2021/22 solution to the problem and I'd like to share my thoughts and progress. Maybe somebody gets use out of it.
Trigger HDR mode in the browser
It seems it's possible to trick browsers on Apple platforms into switching to HDR mode, as documented at kidi.ng/wanna-see-a-whiter-white
There, they use a combination of a tiny HDR video and the CSS properties filter/backdrop-filter with brightness(10) to push HTML elements and their colors into HDR space. It works, and it is a cool trick, if a bit gimmicky.
AVIF HDR support with PQ
As mentioned by Валерий Заподовников, the AVIF file format seems to support HDR in a sense, when the image is tagged as PQ (perceptual quantizer).
I found files provided by Netflix (example) demonstrating this on the AVIF codec GitHub. They do seem to display brighter than regular CSS content with background-color: white; in Chrome (see image), but I have not been able to create images like these myself. Also, the PNG images didn't yield the same result for me.
Platform limitations
The experiments did not produce any usable results for me, mainly because I have few HDR-capable displays to test on, and because Safari does not support AVIF images yet. It seems it could be a while before it does, but I'll get back to testing then.
My other hope was that the HDR-capable format Apple does use, HEIF/HEIC, would display in Safari so I could work with that, but it doesn't. And it does not look like it will, since it's not a format engineered for web use.
Y. Mano and colleagues at Netflix investigated exactly this question. They concluded that several commonly supported image formats (notably JPEG 2000 and 16-bit PNG) can already support HDR images, as long as a color profile is embedded in them. The article I linked to is also a good introduction to HDR and wide-color-gamut images in general.
Having two or more images that partially overlap, like in this screenshot, I want to combine/merge them into one:
The coloured squares would be the source images, in lossless format, and no rotation is required.
The result I want is like using the "Auto-Blend Layers" command from Adobe Photoshop, so auto-align and auto-blend is performed automatically:
https://helpx.adobe.com/photoshop/using/combine-images-auto-blend-layers.html
Thank you all for the comments. The software that suits this case best is OpenCV with the cv::Stitcher API, as @aergustal pointed out. It works extremely well provided that the pictures have decent overlap; otherwise the following error will be displayed:
Can't stitch images, error code = 1
Note that to be able to use the ./cpp-example-stitching command, you have to compile it from source; even on Windows it doesn't come precompiled, at least not in the version I downloaded. More information:
High level stitching API (Stitcher class)
I'm looking for a way to inspect the ICC color profile data provided by CGColorSpace's copyICCData() method.
Specifically, I'm loading PNG images into UIImages on iOS, and trying to find a way to use let iccData: CFData? = aUIImage.cgImage!.colorSpace!.copyICCData() to determine the gamma of the image file. This is for a game that uses 3D rendering: if the source image has a standard 2.2 gamma, I'll load the image data into a texture as sRGB (e.g. MTLPixelFormatRGBA8Unorm_sRGB), and if it has a gamma of 1.0 I'll instead load it as a linear texture (e.g. MTLPixelFormatRGBA8Unorm).
Note: The solution of just passing a UIImage/CGImage to the rendering system (SceneKit/Metal) and letting it sort things out won't work here, because: 1. some of the rendering I'm doing assembles 2D images into a 3D texture, which is something I need to do with raw data, not something I can just read from a standard image file format; and 2. I'm specifically trying to pass gamma-1.0 images into the rendering system to avoid the overhead of sRGB→linear conversion (rendering is done in linear space).
Also: manual ICC-parsing solutions, Apple-API-based solutions, and open-source library suggestions are all acceptable answers. This is not specifically a query for a tool recommendation (any solution that works is a good one), but from my research, manual ICC parsing would be unwieldy, and Apple's APIs don't seem to expose any ICC properties. So I believe the most likely answer is a pointer to some library out there that I haven't been able to find via Google, GitHub, CocoaPods, or Stack Overflow, and it will be gladly accepted.
Your best bet is to use SampleICC (https://sourceforge.net/projects/sampleicc/). Just get the profile data as you described, then use OpenIccProfile to load it up. From there, get a reference to the header structure (.m_Header) and pull the info you need. I'd also recommend you take a look at RefIccMAX (https://github.com/InternationalColorConsortium/RefIccMAX), which is a newer version of the same library, but not ready for prime time.
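If pulling in a C++ library feels heavy, the profile header and tag table are simple enough to walk by hand. A minimal, not-production-ready sketch, assuming the red TRC is stored as a plain gamma 'curv' tag (real profiles may instead use sampled curves or parametric 'para' tags, which this ignores):

    import Foundation

    // Offsets follow the ICC spec: a 128-byte header, a 4-byte tag count,
    // then 12-byte tag entries (signature, offset, size).
    func iccGamma(from iccData: Data) -> Double? {
        let icc = [UInt8](iccData)
        func be32(_ o: Int) -> UInt32 {
            (UInt32(icc[o]) << 24) | (UInt32(icc[o + 1]) << 16) |
            (UInt32(icc[o + 2]) << 8) | UInt32(icc[o + 3])
        }
        guard icc.count > 132 else { return nil }
        let tagCount = Int(be32(128))
        for i in 0..<tagCount {
            let entry = 132 + i * 12
            guard entry + 12 <= icc.count else { return nil }
            guard be32(entry) == 0x7254_5243 else { continue }  // 'rTRC'
            let tag = Int(be32(entry + 4))                      // tag data offset
            guard tag + 12 <= icc.count,
                  be32(tag) == 0x6375_7276 else { return nil }  // 'curv' type
            let pointCount = be32(tag + 8)
            if pointCount == 0 { return 1.0 }                   // identity = linear
            if pointCount == 1, tag + 14 <= icc.count {         // u8Fixed8 gamma
                let raw = (UInt32(icc[tag + 12]) << 8) | UInt32(icc[tag + 13])
                return Double(raw) / 256.0
            }
            return nil  // sampled curve; no single gamma value
        }
        return nil
    }

Usage would be something like iccGamma(from: aUIImage.cgImage!.colorSpace!.copyICCData()! as Data), then comparing the result against 2.2 and 1.0 to pick MTLPixelFormatRGBA8Unorm_sRGB or MTLPixelFormatRGBA8Unorm.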
I have a JPEG image stored in memory as a blob and am looking to apply some basic transformations to it (e.g. resize, convert to greyscale, rotate, etc.).
I am currently using Google Apps Script, which doesn't have a native image library as far as I can tell.
Are there standard algorithms or similar that would allow me to work with the raw binary array, knowing it represents a JPEG image, to achieve such transformations?
Not the answer you are looking for I guess, but...
To be able to do image processing on JPEG files as input, you need to decode the images. Well, actually, 90/180/270-degree rotation, flipping, and cropping are possible as lossless operations, and thus without decoding the image data. But for anything more advanced, like resizing, you need to work with a decoded image.
Both the file structure (JIF/JFIF) and the algorithms used to compress the image data in standard JPEG format are well defined and properly documented. But at the same time, the specification is quite complex. It's certainly doable if you have the time and know what you are doing, and if you are lucky and your JPEG blobs are all written the same way, you might get away with implementing only part of the spec. But even then, you will need to (re-)implement large parts of it, and it might just not be worth the effort.
Using a third-party service to convert it for you, or creating your own converter using a known library like libjpeg or Java's ImageIO, might be your best bet if you need a quick solution and don't have overly strict performance requirements.
There are no straightforward image processing capabilities available in Apps Script. You'll have to either expose your Python code as a web service and call it from Apps Script, use the Drive REST API to access the files from your Python app, or use some other web service API.
GAE Python has image processing capabilities; check the URL below:
https://developers.google.com/appengine/docs/python/images/
Available image transformations
The Images service can resize, rotate, flip, and crop images, and enhance photographs. It can also composite multiple images into a single image.
I am doing image processing on the iPhone, but I need to get the raw bytes out of Cocoa so I can optimise the algorithm on a more image-friendly platform like Matlab. I am using Brad Larson's excellent GPUImage and can get the raw bytes no problem, but when I use NSData's writeToFile: method, the file obtained (which I get by downloading the app data from the Xcode Organizer, which gives me access to the file in the documents folder) is in a strange format. I am a Matlab programmer, so I'm relatively new to Cocoa, and I reckon I must be missing something basic, as I imagine I may be able to just use C functions to print to a file.
Any discussions I have found only involve reading and writing within Cocoa and the app sandbox. I could probably also use GPUImageMovieWriter, but I imagine AVAssetWriter compresses the image data, and I need the uncompressed raw bytes. It doesn't matter how the bytes are organised, as I can parse them easily in Matlab.
So basically, what is the easiest way to get, say, an output .txt or .csv file that looks like 124,255,0,166,255, etc. (i.e. image ints) out of the Cocoa environment? I had a similar problem before when designing an accelerometer algorithm, where in the end I just printed the raw data to the console, copied it to a text file, and parsed it in Matlab. However, given that I'm dealing with images now, this is not practical.
Any help in this matter, pointers to relevant text etc, would be greatly appreciated.
For one thing, that's not a text file you're writing out. It's the binary representation of the raw bytes of the image. You can't treat it as a text file.
The structure of the image data is simple: each pixel is represented as one set of four bytes, and each byte is a 0-255 color component value, in the order blue, green, red, alpha. The image is scanned from left to right, top to bottom, so you just need to know the width of the image in pixels to figure out where each row wraps. From that, it should be easy to import into whatever application you want.
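To get that into something Matlab-friendly, you can walk the buffer yourself and write plain text instead of raw binary. A rough sketch, assuming the BGRA layout above and that bytes/width/height come from wherever you grabbed the raw output:

    import Foundation

    // Dump a BGRA byte buffer as comma-separated text that Matlab can
    // read with csvread/readmatrix; one image row per line.
    func writeCSV(bytes: [UInt8], width: Int, height: Int, to url: URL) throws {
        var lines = [String]()
        for row in 0..<height {
            let start = row * width * 4          // 4 bytes per pixel (B,G,R,A)
            let rowText = bytes[start..<start + width * 4]
                .map { String($0) }
                .joined(separator: ",")
            lines.append(rowText)
        }
        try lines.joined(separator: "\n")
            .write(to: url, atomically: true, encoding: .utf8)
    }

Text output is much larger and slower to write than the raw binary dump, but it saves you from having to fread and reshape binary data on the Matlab side.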