Is it possible for an iOS app to take an image and then analyze the colors present in said image?

For example after taking the image, the app would tell you the relative amount of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific, I know, but I would really like to know if it's possible and if anyone has any idea how to go about it.
Thanks!

Sure, it's possible. You'd have to load the image into a UIImage, get the underlying CGImage, and then get a pointer to the pixel data. If you average the RGB values of all the pixels, though, you're likely to get a pretty muddy result unless you're sampling an image with large areas of strong primary colors.
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there is a "core" and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.
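If you'd rather not dig through the book, here is a minimal Swift sketch of the approach described above: render the image's CGImage into an RGBA bitmap with a known layout, then average the red, green, and blue channels. The function name averageColor is just for illustration.

```swift
import UIKit

// A sketch: draw the UIImage's CGImage into an RGBA8 bitmap we control,
// then average the red, green and blue channels across all pixels.
func averageColor(of image: UIImage) -> UIColor? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    // Drawing into our own context guarantees a known pixel layout (RGBA, 8 bits each).
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return nil }
    let pixels = data.assumingMemoryBound(to: UInt8.self)

    var r = 0.0, g = 0.0, b = 0.0
    for i in stride(from: 0, to: width * height * 4, by: 4) {
        r += Double(pixels[i])
        g += Double(pixels[i + 1])
        b += Double(pixels[i + 2])
    }

    let count = Double(width * height) * 255.0
    return UIColor(red: CGFloat(r / count),
                   green: CGFloat(g / count),
                   blue: CGFloat(b / count),
                   alpha: 1)
}
```

To report "how much red vs. blue" rather than one muddy average, you could bucket each pixel by its dominant channel inside the same loop instead of summing them all together.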

Related

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all drawing app example codes I've come across seem to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate say, an oil brush, or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush tip PNG and the canvas (or the image I'm painting onto) into an array of integers. Then I simply copied the values from the brush tip to the canvas texture at the appropriate indices, and at the end of the cycle displayed the image for that time step. Repeat this dozens of times between each pair of points returned by the mouse.
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (on a higher level than Metal, though it can integrate with Metal). Unfortunately, the documentation and sample code are a bit outdated, but I'm sure you can translate them into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
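Before committing to Metal, it may also be worth measuring a plain Core Graphics version of the stamping loop you described: the slowness in the linked article likely comes from per-pixel Swift loops, whereas stamping a small tip image is one draw call per step. A rough sketch, where brushTip, spacing, and CanvasView are illustrative names, not framework API:

```swift
import UIKit

// A rough sketch of the stamping approach from the question: keep an offscreen
// canvas image, and on each touch move, stamp the brush-tip image at small,
// even intervals along the segment since the last touch point.
final class CanvasView: UIView {
    var brushTip = UIImage(named: "brushTip")!  // your textured tip PNG
    var spacing: CGFloat = 2                    // distance between stamps, in points

    private var canvas: UIImage?
    private var lastPoint: CGPoint?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastPoint = touches.first?.location(in: self)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self),
              let start = lastPoint else { return }

        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        canvas = renderer.image { _ in
            canvas?.draw(in: bounds)
            // Walk the segment in `spacing`-sized steps, like the Processing version.
            let dx = point.x - start.x, dy = point.y - start.y
            let distance = max(hypot(dx, dy), 0.001)
            var t: CGFloat = 0
            while t <= distance {
                let p = CGPoint(x: start.x + dx * t / distance,
                                y: start.y + dy * t / distance)
                brushTip.draw(at: CGPoint(x: p.x - brushTip.size.width / 2,
                                          y: p.y - brushTip.size.height / 2))
                t += spacing
            }
        }
        layer.contents = canvas?.cgImage
        lastPoint = point
    }
}
```

Re-rendering the whole canvas each move is the naive version; keeping a persistent CGContext you draw into avoids that, and Metal is only needed if this still can't keep up with large brushes at 60 fps.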

iOS: Is there a way to alter the color of every pixel on the screen?

How does Apple alter the color of every single pixel on the screen (i.e. grayscale / inversion of colors), regardless of what object the color belongs to? It obviously isn't reading background color properties, since it affects images as well.
How would one approach this?
To clarify my question, how can I change the intensity / hue of every pixel on the screen, similar to how f.lux does it?
How does Apple alter the color of every single pixel on the screen?
Apple probably uses an API called CGSetDisplayTransferByTable which is not publicly available on iOS.
The display transfer table controls how each possible value in each of the three RGB channels is displayed on screen, and can map it to a different value. It works similarly to Photoshop's "Curves" tool. With the right transfer table it's possible to invert the screen, adjust the hue, or enhance contrast.
Since the transfer table is part of the graphics hardware and is always active, there's zero performance overhead involved. On macOS there are actually two transfer tables: one for the application and one for the OS.
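For the curious, here is roughly what that API looks like on macOS, where it is public; a minimal sketch that inverts the main display for a few seconds (this will not compile for iOS):

```swift
import Foundation
import CoreGraphics

// macOS only (there is no public iOS equivalent): build a 256-entry table that
// inverts each channel and hand it to the display. CGGammaValue is a Float in 0...1.
let tableSize = 256
var inverted = (0..<tableSize).map { CGGammaValue(1 - Float($0) / Float(tableSize - 1)) }

// Passing the same table for red, green and blue inverts the whole screen.
CGSetDisplayTransferByTable(CGMainDisplayID(), UInt32(tableSize),
                            &inverted, &inverted, &inverted)

sleep(3)  // look at the inverted screen for a moment

// Restore the user's ColorSync-calibrated tables.
CGDisplayRestoreColorSyncSettings()
```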
how can I change the intensity / hue of every pixel on the screen
Without jailbreak, you can't.

Get Pixel color of Camera in Xcode iOS

I'm trying to develop an app to help people with vision diseases, and one of its functions is to recognize the color of the object the user is pointing at.
I've read this article about how to get the pixel color from a UIImage:
Get Pixel color of UIImage
Does anyone have an idea how I can get the pixel color from the camera?
Marcelo, there is quite a bit of setup needed to get the pixel color from the camera. You will need to get up to speed with AVFoundation at:
http://developer.apple.com/library/mac/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html
After you have done that, you should pull the sample code Rosy Writer:
http://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
This code is meant to take a live camera image and change one of the pixel colors.
I recommend you spend time on those two and then come back with more specific questions as you start building the app.
Hope this helps.
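The links above have moved around over the years, but the approach still holds: capture BGRA frames with AVCaptureVideoDataOutput and read bytes straight out of the pixel buffer. A condensed sketch reading the center pixel of each frame (session configuration checks, camera permission handling, and the NSCameraUsageDescription Info.plist entry are omitted):

```swift
import AVFoundation

// A sketch: capture BGRA frames from the camera and read the center pixel.
final class CenterPixelSampler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        // Ask for BGRA so each pixel is 4 bytes: blue, green, red, alpha.
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

        let width = CVPixelBufferGetWidth(buffer)
        let height = CVPixelBufferGetHeight(buffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let base = CVPixelBufferGetBaseAddress(buffer)!.assumingMemoryBound(to: UInt8.self)

        // Center pixel, BGRA byte order.
        let p = base + (height / 2) * bytesPerRow + (width / 2) * 4
        let (b, g, r) = (p[0], p[1], p[2])
        print("center pixel r:\(r) g:\(g) b:\(b)")
    }
}
```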

Replace particular color of image in iOS

I want to replace a particular color in an image with another, user-selected color. While replacing the color, I want to maintain the gradient effect of the original color. For example, see the attached images.
I have tried to do this with Core Graphics and succeeded in replacing the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is Core Graphics the right way to do it?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in NSImage that uses the CIColorCube CIFilter:
https://github.com/braginets/NSImage-replace-color
It was inspired by this code for UIImage (which also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
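For reference, the heart of both projects is CIColorCube: a 3D lookup table that maps every input color to an output color. A condensed Swift sketch of building such a cube that swaps one hue range for a target hue while keeping saturation and brightness, which is what preserves the gradient (hueReplacementCube is an illustrative name):

```swift
import UIKit
import CoreImage

// A sketch: build a CIColorCube lookup table that replaces hues inside
// `hueRange` with `targetHue`, keeping saturation and brightness untouched
// so the original shading/gradient survives. Hues are in 0...1, like UIColor's.
func hueReplacementCube(dimension: Int = 32,
                        hueRange: ClosedRange<CGFloat>,
                        targetHue: CGFloat) -> CIFilter? {
    var cube = [Float]()
    cube.reserveCapacity(dimension * dimension * dimension * 4)

    // CIColorCube expects RGBA floats with red varying fastest, then green, then blue.
    for b in 0..<dimension {
        for g in 0..<dimension {
            for r in 0..<dimension {
                let input = UIColor(red: CGFloat(r) / CGFloat(dimension - 1),
                                    green: CGFloat(g) / CGFloat(dimension - 1),
                                    blue: CGFloat(b) / CGFloat(dimension - 1),
                                    alpha: 1)
                var h: CGFloat = 0, s: CGFloat = 0, v: CGFloat = 0, a: CGFloat = 0
                input.getHue(&h, saturation: &s, brightness: &v, alpha: &a)

                // Only the hue changes; saturation and brightness carry the gradient.
                let output = UIColor(hue: hueRange.contains(h) ? targetHue : h,
                                     saturation: s, brightness: v, alpha: 1)
                var rr: CGFloat = 0, gg: CGFloat = 0, bb: CGFloat = 0
                output.getRed(&rr, green: &gg, blue: &bb, alpha: &a)
                cube.append(contentsOf: [Float(rr), Float(gg), Float(bb), 1])
            }
        }
    }

    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(dimension, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}
```

Set kCIInputImageKey to a CIImage of your photo and render the filter's outputImage through a CIContext to get the recolored result.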
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool, which obviously requires a lot of processing power and complex algorithms (and is still not perfect).
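Since the source itself couldn't be posted, here is a hedged sketch of the statistical core of the first Reinhard paper: shift and scale each channel so its mean and standard deviation match the target's. The paper does this in the decorrelated lαβ color space rather than raw RGB, which is omitted here for brevity; matchStatistics is an illustrative name.

```swift
// The statistical core of "Color Transfer Between Images": shift and scale a
// channel so its mean and standard deviation match the target channel's.
func matchStatistics(source: [Float], target: [Float]) -> [Float] {
    func stats(_ xs: [Float]) -> (mean: Float, std: Float) {
        let mean = xs.reduce(0, +) / Float(xs.count)
        let variance = xs.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Float(xs.count)
        return (mean, variance.squareRoot())
    }
    let s = stats(source), t = stats(target)
    let scale = s.std > 0 ? t.std / s.std : 1
    return source.map { ($0 - s.mean) * scale + t.mean }
}
```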

Best way to get photoshop to optimise 35 related pictures for fast transmission

I have 35 pictures taken from a stationary camera aimed at a lightbox in which an object is placed, rotated by 10 degrees between pictures. If I cycle through the pictures quickly, the object looks like it is rotating.
If I wished to 'rotate' the object in a browser but wanted to transmit as little data as possible, I thought it might be a good idea to split the picture into 36 pictures: one containing whatever background the images have in common, and 35 containing only the things that have changed.
Do you think this approach will work? Is there a better route? How would I achieve this in photoshop?
Hmm, you'd probably have to take a separate picture of just the background, then in the remaining pictures use Photoshop to remove the background and keep only the object. I guess this could work if the remaining pictures have transparency where the background was.
How are you planning to "rotate" this? Flash? JavaScript? CSS+HTML? Is this supposed to be interactive or just a repeating movie? Do you have a sample of how this has already been done? Sounds kinda cool.
If you create a multiple-frame animated GIF in Photoshop, you can control the quality of the final output, including optimization that automatically converts the whole sequence to indexed color. The result is that your backgrounds, though varied, will share most of the same color space and should be optimized such that it won't matter if they differ slightly in each frame. (Unless your backgrounds are highly varied between photos, which, given your use of a lightbox, they shouldn't be.) Photoshop lets you control the overall output resolution and color remapping, both of which affect the final size.
Update: Adobe discontinued ImageReady in Photoshop CS3+. I am still using CS2, so I wasn't aware of this until someone pointed it out.
Unless the background is much bigger than the GIF in the foreground, I doubt you would benefit greatly from using separate transparent images. Even if they are smaller in size, would the difference be large enough to improve loading speed, considering the average speed at which pages load?
