Combine two or more images that partially overlap - opencv

I have two or more images that partially overlap, like in this screenshot, and I want to combine/merge them into one:
The coloured squares would be the source images, in lossless format, and no rotation is required.
The result I want is like using the "Auto-Blend Layers" command from Adobe Photoshop, so auto-align and auto-blend are performed automatically:
https://helpx.adobe.com/photoshop/using/combine-images-auto-blend-layers.html

Thank you all for the comments. The software that suits best in this case is OpenCV with the cv::Stitcher API, as @aergustal pointed out. It works extremely well provided the pictures have decent overlap; otherwise the following error will be displayed:
Can't stitch images, error code = 1
Note that to be able to use the ./cpp-example-stitching command, you have to compile it from source code; even the Windows distribution doesn't ship it precompiled, at least not the version I downloaded. More information:
High level stitching API (Stitcher class)
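For reference, here is a minimal sketch of driving the API directly rather than through the example binary. This is my own illustration, not the example's source: the SCANS mode choice and file handling are assumptions, with SCANS suiting flat, non-rotated sources like these screenshots (the default PANORAMA mode models a rotating camera instead):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    // Load every image passed on the command line.
    std::vector<cv::Mat> images;
    for (int i = 1; i < argc; ++i) {
        cv::Mat img = cv::imread(argv[i]);
        if (img.empty()) {
            std::cerr << "Could not read " << argv[i] << "\n";
            return 1;
        }
        images.push_back(img);
    }

    // SCANS mode assumes flat, translated sources (like screenshots).
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(images, pano);
    if (status != cv::Stitcher::OK) {
        // Status 1 (ERR_NEED_MORE_IMGS) is the "Can't stitch images" case:
        // the overlap was too small for feature matching to succeed.
        std::cerr << "Can't stitch images, error code = " << int(status) << "\n";
        return 1;
    }
    cv::imwrite("result.png", pano);  // PNG keeps the output lossless
    return 0;
}
```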

Related

How to remove spot color(s) from an image

Is there a command line tool to remove all spot color channels from a vector input image (type can be AI, EPS) and keep only the CMYK or RGB color channels?
What I've been able to come up with so far is using Ghostscript's tiffsep device and then recombining the color channel images into one image using ImageMagick's -combine option. The drawback of this method is that it is quite complicated, and I end up with a TIFF image instead of the original (vector) format.
'Image' has a defined meaning in PostScript: it means a bitmap, a raster. I think, from the context, that you mean something more general.
The simple answer is no, in general you can't do this, and I don't know of any tool which will.
The reason is that to do so would lose information; the marks defined in Separation or DeviceN space would be lost entirely, and it's generally regarded as a Bad Idea to discard random parts of the document.
Perhaps you could explain what you are trying to achieve with this (ie why are you doing this), and it might be possible to suggest an alternative method.
If you are a competent C programmer you could produce a Ghostscript subclass device using the existing FILTER device (in gdevflt.c) as a template. That device looks at the type of operation, and either passes it on to the output device or throws it away. It would be reasonably simple to look at the current colour space and discard Separation or DeviceN content. If you then use the pdfwrite/ps2write/eps2write output device, you'd get an EPS, PostScript program or PDF file as the output.
Whether you go down this route, continue with what you have, or find an alternative approach, there are a couple of things you need to think about: how do you plan to tackle Separation inks with process colour names? E.g. /Separation /Black. What about DeviceN spaces where some of the inks are process colours? E.g. a duotone of Black and a Pantone ink. Should these be preserved or discarded?
Your current approach will use the parts of the object which mark process plates, but not those which mark spot colours, which could give some very peculiar results.
[EDIT]
PDF, PostScript and EPS don't have 'layers' (PDF has a feature, Optional Content, which uses the term 'layers' as a description in the specification but that's all).
Applications such as Photoshop and Illustrator can have layers, but in general whatever they export to has to have those 'layers' converted into something else. That 'something else' depends on what you are saving it as.
Part of the problem is that you are apparently trying to deal with 3 different kinds of input: you say Illustrator (PDF, more or less), Photoshop (raster image) and EPS (PostScript). There is little common ground between the 3; is there a reason to support all of them?
If you are content to stick with just Illustrator you might be able to do something with Optional Content. I'm not terribly familiar with modern versions of Illustrator, but wouldn't it be simpler to save two versions of the file, one with the answer layer and one without?
Anyway, Ghostscript can honour Optional Content, so if you can save a PDF file (not PostScript or EPS) from Illustrator, it may be that the layers will persist into the PDF as Optional Content. I suspect they will, going by a quick Google. In that case you might be able to run the file through Ghostscript, telling it not to honour the Optional Content portion, and get a PDF file without it present.
Another solution (again limited to PDF) would be to open the PDF file with an editing application such as Acrobat Pro, and simply delete the bits you don't want. Deletion of that kind is relatively reliable.
It still feels like rather a long-winded way to get a PDF file with some of the content removed though. I can't help feeling that just saving two versions from the creating application would be easier.

Inspect CGColorSpace ICC color profile data on iOS

I'm looking for a way to inspect the ICC color profile data provided by CGColorSpace's copyICCData() method.
Specifically, I'm loading PNG images into UIImages on iOS, and trying to use let iccData: CFData? = aUIImage.cgImage!.colorSpace!.copyICCData() to determine the gamma of the image file. This is for a game that uses 3D rendering: if the source image has a standard 2.2 gamma, I'll load the image data into a texture as sRGB (e.g. MTLPixelFormatRGBA8Unorm_sRGB), and if it has a gamma of 1.0 I'll instead load it as a linear texture (e.g. MTLPixelFormatRGBA8Unorm).
Note: The solution of just passing a UIImage/CGImage to the rendering system (SceneKit/Metal) and letting it sort it out won't work here because: 1. Some of the rendering I'm doing is assembling 2D images into a 3D texture, so that's something I need to do with raw data, not something I can just read from a standard image file format; 2. I'm specifically trying to pass gamma-1.0 images into the rendering system to avoid the overhead of sRGB→linear conversion (rendering is in linear space).
Also: manual ICC-parsing solutions, Apple-API-using solutions, and open-source library suggestions are all acceptable answers. This is not specifically a query for a tool recommendation (any solution that'll work is a good one), but in my research, manual ICC parsing looked unwieldy and Apple's APIs don't seem to expose any ICC properties. So I believe the most likely answer is a pointer to some library out there that I haven't been able to find via Google or GitHub or CocoaPods or Stack Overflow, and it will be gladly accepted.
Your best bet is to use SampleICC (https://sourceforge.net/projects/sampleicc/). Just get the profile data as you described, then use OpenIccProfile to load it up. From there, get a ref to the header structure (.m_Header) and pull the info you need. I'd also recommend you take a look at RefIccMAX (https://github.com/InternationalColorConsortium/RefIccMAX), which is a newer version of the same lib, but not ready for primetime.
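If you'd rather avoid a dependency, the byte layout is fixed by the ICC specification: a 128-byte header, a 4-byte big-endian tag count, then 12-byte tag-table entries. A hedged sketch that pulls a plain power-law gamma out of the rTRC tag — hand-rolled from the spec, not any library's API; the bytes would come from copyICCData():

```cpp
#include <cstdint>
#include <vector>

// Big-endian 32-bit read at byte offset 'off'.
static uint32_t be32(const std::vector<uint8_t>& d, size_t off) {
    return (uint32_t(d[off]) << 24) | (uint32_t(d[off + 1]) << 16) |
           (uint32_t(d[off + 2]) << 8) | uint32_t(d[off + 3]);
}

// Returns the gamma encoded in the profile's 'rTRC' (red tone reproduction
// curve) tag, or -1.0 if the tag is missing or isn't a simple power curve.
double iccGamma(const std::vector<uint8_t>& icc) {
    if (icc.size() < 132) return -1.0;
    uint32_t tagCount = be32(icc, 128);        // tag table follows the 128-byte header
    for (uint32_t i = 0; i < tagCount; ++i) {
        size_t entry = 132 + 12 * size_t(i);   // 12 bytes: signature, offset, size
        if (entry + 12 > icc.size()) break;
        if (be32(icc, entry) != 0x72545243) continue;  // 'rTRC'
        uint32_t off = be32(icc, entry + 4);
        if (off + 14 > icc.size()) return -1.0;
        if (be32(icc, off) != 0x63757276) return -1.0; // expect a 'curv' type
        uint32_t n = be32(icc, off + 8);       // number of curve entries
        if (n == 0) return 1.0;                // empty curve means identity (linear)
        if (n == 1) {                          // single entry: a u8Fixed8 gamma
            uint16_t g = (uint16_t(icc[off + 12]) << 8) | icc[off + 13];
            return g / 256.0;
        }
        return -1.0;                           // sampled LUT, not a plain exponent
    }
    return -1.0;
}
```

Note the caveat in the code: a profile can also store the curve as a 'para' (parametricCurveType) tag or as a sampled LUT (sRGB itself does), so -1.0 means "inspect further", not "no gamma".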

HDR images through Core Image?

Is it possible to process (filter) HDR images through Core Image? I couldn't find much documentation on this, so I was wondering if someone possibly had an answer. I do know that it is possible to do the working-space computations with RGBAh when you initialize a CIContext, so I figured that if we can do computations with floating-point image formats, it should be possible.
What, if it is not possible, are alternatives if you want to produce HDR effects on iOS?
EDIT: I thought I'd try to be a bit more concise. My understanding is that HDR images can be saved as .jpg, .png, and other image formats by clamping the pixel values. However, I'm more interested in doing tone mapping through Core Image on an HDR image that has not been converted yet. The issue is encoding a CIImage from an HDR image, supposedly with the .hdr extension.
EDIT2: Maybe it would be useful to use CGImageCreate, along with CGDataProviderCreateWithFilename?
I hope you have a basic understanding of how HDR works. An HDR file is generated by capturing two or more images at different exposures and combining them. So even if there were something like an .hdr file, it would be a container format with more than one JPEG in it. Technically you cannot give two image files at once as an input to a generic CIFilter.
And in iOS, as I remember, it's not possible to access the original set of photos behind an HDR capture, only the processed final output. Even if you could, you'd have to do the HDR processing manually and generate a single HDR png/jpg anyway before feeding it to a CIFilter.
Since there are people who ask for a CI HDR Algorithm, I decided to share my code on github. See:
https://github.com/schulz0r/CoreImage-HDR
It is the Robertson HDR algorithm, so you cannot use RAW images. Please see the unit tests if you want to know how to get the camera response and obtain the HDR image. Core Image saturates pixel values outside [0.0 ... 1.0], so the HDR is scaled into said interval.
Coding with Metal always causes messy code for me, so I decided to use MetalKitPlus, which you have to include in your project. You can find it here:
https://github.com/LRH539/MetalKitPlus
I think you have to check out the dev/v2.0.0 branch. I will merge this into master in the future.
edit: Just clone the master branch of MetalKitPlus. Also, I added a more detailed description to my CI-HDR project.
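For anyone who just wants to see the shape of a Robertson-style pipeline, OpenCV's photo module implements the same algorithm outside Core Image. This is not the CoreImage-HDR project's code, only a rough sketch with made-up file names and exposure times:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>
#include <vector>

int main() {
    // Bracketed captures of the same scene at different exposures.
    std::vector<cv::Mat> images = {
        cv::imread("under.jpg"), cv::imread("mid.jpg"), cv::imread("over.jpg")
    };
    std::vector<float> times = {1 / 200.0f, 1 / 50.0f, 1 / 12.5f};  // seconds

    // Recover the camera response curve, then merge into a float radiance map.
    cv::Mat response, hdr;
    cv::createCalibrateRobertson()->process(images, response, times);
    cv::createMergeRobertson()->process(images, hdr, times, response);

    // Tone-map the unbounded radiance back into [0, 1] for display/saving.
    cv::Mat ldr;
    cv::createTonemap(2.2f)->process(hdr, ldr);
    ldr.convertTo(ldr, CV_8UC3, 255);
    cv::imwrite("tonemapped.png", ldr);
    return 0;
}
```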
You can now (iOS 10+) capture RAW images (coded on 12 bits) and then filter them the way you like using CIFilter. You might not get a dynamic range as wide as the one you get by using bracketed captures; nevertheless, it is still wider than capturing 8-bit images.
Check Apple's documentation for capturing and processing RAW images.
I also recommend you watch Apple's WWDC 2016 video (skip to the raw processing part).

simple image recognition with any api/library

Is it possible to do very basic image recognition to compare an image against a database of images (resource-folder images, or images from any web server we have) and determine which image in the database is the best match? I don't need to do any processing of any of the images, but simply differentiate between a finite list of images.
Is there any open source code available?
I would recommend using OpenCV if you simply want to compare images (i.e. decide if two images are the same).
Here is a similar question on SO:
iOS image comparison
I would also read a little about what Core Image (the iOS image library) has to offer before going for OpenCV or another third-party library.
I hope this helps.
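To make the OpenCV suggestion concrete, here is a small sketch of one common approach: compare normalized hue/saturation histograms and pick the best-correlated database entry. The file names are placeholders; for a strict "same pixels" test, a pixel-wise cv::norm on equally sized images is even simpler:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

// Build a normalized 2D hue/saturation histogram for a BGR image.
static cv::Mat hsHistogram(const cv::Mat& bgr) {
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    int histSize[] = {50, 60};                       // hue bins, saturation bins
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};
    int channels[] = {0, 1};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 0, 1, cv::NORM_MINMAX);
    return hist;
}

int main() {
    cv::Mat query = cv::imread("query.png");
    std::vector<std::string> database = {"a.png", "b.png", "c.png"};

    cv::Mat queryHist = hsHistogram(query);
    int best = -1;
    double bestScore = -1.0;
    for (size_t i = 0; i < database.size(); ++i) {
        cv::Mat hist = hsHistogram(cv::imread(database[i]));
        // HISTCMP_CORREL: 1.0 means identical distributions.
        double score = cv::compareHist(queryHist, hist, cv::HISTCMP_CORREL);
        if (score > bestScore) { bestScore = score; best = int(i); }
    }
    std::cout << "Best match: " << database[best]
              << " (score " << bestScore << ")\n";
    return 0;
}
```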

How can I process a -dynamic- videostream and find the (relative) location of a "match" in that videostream?

As the question states: how is it possible to process some dynamic video stream? By saying dynamic, I actually mean I would like to just process stuff on my screen. So the image array should be some sort of "continuous screenshot".
I'd like to process the video / images based on certain patterns. How would I go about this?
It would be perfect if there already were (and there probably are) existing components. I need to be able to use the location of the matches (or partial matches). A .NET component for the different requirements could also be useful, I guess...
You will probably need to read up on Computer Vision before you attempt this. There is nothing really special about video that separates it from still images. The process you might want to look at is:
Acquire the data
Split the data into individual frames
Remove noise (Use a Gaussian filter)
Segment the image into the sections you want
Extract the connected components of the image
Find a way to quantize the image for comparison
Store/match the components to a database of previously found components
With this database/datastore you'll have previously found components to match new ones against later. Do what you like with it.
As far as software goes:
Most of these algorithms are not too difficult. You can write them yourself. They do take a bit of work though.
OpenCV does a lot of the basic stuff, but it won't do everything for you
Java: JAI, JHLabs [for filters], Various other 3rd party libraries
C#: AForge.net
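For the "location of a match" requirement specifically, OpenCV's template matching is a common starting point. A minimal sketch: the frame here is a saved screenshot standing in for one frame of the captured stream, and the 0.9 threshold is an arbitrary choice:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat frame = cv::imread("screenshot.png");   // one captured frame
    cv::Mat pattern = cv::imread("pattern.png");    // the thing to find

    // Slide the pattern over the frame; scores holds a similarity value
    // for every possible top-left position of the pattern.
    cv::Mat scores;
    cv::matchTemplate(frame, pattern, scores, cv::TM_CCOEFF_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(scores, nullptr, &maxVal, nullptr, &maxLoc);

    if (maxVal > 0.9)  // empirical threshold for a confident match
        std::cout << "Match at (" << maxLoc.x << ", " << maxLoc.y << ")\n";
    else
        std::cout << "No confident match (best score " << maxVal << ")\n";
    return 0;
}
```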
