Compare drawing with original - iOS

For the last week I've been attempting to write some code that lets me trace an image and compare the tracing with the original by pixel color count. So far the only two things definitely working are the drawing and the pixel count (which obviously returns 0, since the rest isn't working).
To turn the UIImageView of the drawing into a PNG for comparison I take a screenshot. However, I'm trying to load that saved drawing (the screenshot) immediately after saving it so I can run the pixel color count and comparison, and this whole process is not working at all.
The pixel count code expects a PNG at the moment, so I'm trying to work around that.
Is there maybe an easier way to do this whole process, instead of first taking a screenshot, then loading it, and then getting the pixel color count? I'm trying to keep the code as simple as possible by comparing only two colors.
Any help would be appreciated. I would post code, but I'd rather get a fresh take on this since I've tried many different ways already. Thanks in advance.
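
One way to avoid the screenshot-and-reload round trip is to render the drawing view straight into a UIImage in memory and count pixels there, so no PNG ever has to be saved or loaded. Below is a rough Swift sketch of that idea, assuming RGBA8 bitmaps and only two colors; the view and image names (drawingView, originalImage) are placeholders, not code from the question.

import UIKit

// Render any view (e.g. the UIImageView holding the traced drawing) into a UIImage,
// without saving a screenshot to disk first.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
}

// Count pixels whose RGB bytes are within `tolerance` of a target color.
// The image is redrawn into a known RGBA8 layout so the byte order is predictable.
func pixelCount(in image: UIImage,
                matching target: (r: UInt8, g: UInt8, b: UInt8),
                tolerance: Int = 8) -> Int {
    guard let cgImage = image.cgImage else { return 0 }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return 0 }
    var count = 0
    for i in stride(from: 0, to: pixels.count, by: 4) {
        if abs(Int(pixels[i]) - Int(target.r)) <= tolerance,
           abs(Int(pixels[i + 1]) - Int(target.g)) <= tolerance,
           abs(Int(pixels[i + 2]) - Int(target.b)) <= tolerance {
            count += 1
        }
    }
    return count
}

// Usage sketch: compare the black-pixel counts of the tracing and the original.
// let drawnCount    = pixelCount(in: snapshot(of: drawingView), matching: (0, 0, 0))
// let originalCount = pixelCount(in: originalImage, matching: (0, 0, 0))
// let score = Double(min(drawnCount, originalCount)) / Double(max(originalCount, 1))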

Related

Looking to extract a random color value from an image. Code type doesn't matter

So, I need a tool that will pick a random color from an image and give me the hex code for it. Specifically, from an image of a 4-point gradient. I'm hoping to make it so that I can load any image (by pasting in a link) and use that image, but if I have to hard-code a specific image and edit the code each time I need a different one, that's okay too.
My thought was to take the image, pick a random x and y within its width and height in pixels, select that pixel, and output its hex code. However, being fairly new to coding, I haven't found anything from searching online that would let me do something like this.
I have played around in JSFiddle for a few hours, and I can get it to load an image from the web, but the actual selection of a pixel isn't something I can figure out, although I assume it's possible given how many JavaScript color pickers are out there.
If there's an easier way to do this with a different type of code, I'm completely open to it, I just need to be pointed in the right direction. Thanks everyone!
Coming back to say I figured this out. What I had been looking for (as far as JavaScript goes, anyway) was converting my image to base64 and then using that to read the pixel data. Then it was just a matter of picking a random value within the image height and image width, and selecting the corresponding pixel.
// random 0-based pixel coordinates in the ranges 0..width-1 and 0..height-1
x = Math.floor(Math.random() * img.width);
y = Math.floor(Math.random() * img.height);
I'm sure this code isn't amazing, as I relied heavily on other people's code to figure out what I was doing, but in case it helps anyone this is what the end result looks like:
http://jsfiddle.net/AmoraChinchilla/qxh6mr9u/40/

Is it possible for an iOS app to take an image and then analyze the colors present in said image?

For example, after taking the image, the app would tell you the relative amounts of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific I know, but I would really like to know if it's possible and if anyone has any idea how to go about that.
Thanks!
Sure, it's possible. You'd have to load the image into a UIImage, then get the underlying CGImage, and get a pointer to the pixel data. If you average the RGB values of all the pixels, you're likely to get a pretty muddy result, though, unless you're sampling an image with large areas of strong primary colors.
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there is a "core" and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.
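
As a rough illustration of the averaging approach described above (this is a sketch in Swift, not Erica's Objective-C recipe; it assumes the image can be redrawn into an RGBA8 bitmap):

import UIKit

// Average red/green/blue of an image, as values in 0...1.
// Redraw into a known RGBA8 bitmap, then walk the raw bytes.
func averageColor(of image: UIImage) -> (red: CGFloat, green: CGFloat, blue: CGFloat)? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: height * bytesPerRow)
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }
    var r = 0, g = 0, b = 0
    for i in stride(from: 0, to: pixels.count, by: 4) {
        r += Int(pixels[i]); g += Int(pixels[i + 1]); b += Int(pixels[i + 2])
    }
    let pixelCount = CGFloat(width * height)
    return (CGFloat(r) / pixelCount / 255, CGFloat(g) / pixelCount / 255, CGFloat(b) / pixelCount / 255)
}

From the averages you could report relative amounts of red, green and blue; a per-hue breakdown (e.g. how much "yellow") would mean converting each pixel to HSB and bucketing by hue instead of averaging.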

GPUImageView flashes between filtered and unfiltered view when adding a GPUImageFilter

I'm trying to display a GPUImageView with a live camera feed (using GPUImageStillCamera or GPUImageVideoCamera) and display a series of filters below it. When a filter is tapped, I want it to apply that filter to the live feed so that the GPUImageView shows a live, filtered feed of the camera input. I have all of it set up, but for some reason when I tap on pretty much any included GPUImageOutput filter (Vignette, Smooth Toon, Emboss, etc.), the video feed flashes like crazy. It seems like it's alternating between the filtered view and the unfiltered view. When I switch to a different filter, I can tell that the new filter is working properly for a tiny fraction of a second before the flashing starts again.
The grayscale and sepia filters don't flash, but instead only show at half strength. I've tried setting the intensity to 1.0 (and a bunch of other values) for the sepia filter, but the grayscale one doesn't have any settings to change, and it seems like some things are gray while there's still color. I tried to take a screenshot of the grayscale view, but when I look at the screenshots, the image is either properly grayscaled or not grayscaled at all, even though that's not what I see on my actual device. My theory is that it's switching between the filtered view and the non-filtered view really fast, creating the illusion of a grayscale filter at 50% strength.
I have no idea why this would be happening, because the standard GPUImage example projects work just fine, and I'm not doing much differently in my project.
If anyone could help me out or at least point me in the right direction, it would be very much appreciated. I have been trying to debug this issue for 3 days straight and I simply cannot figure it out.
EDIT: when I call capturePhotoAsImageProcessedUpToFilter on my GPUImageStillCamera, it returns nil for both the UIImage and the NSError in the completion block (even though the GPUImageStillCamera is not nil). Not sure if this is related, but I figured it was worth mentioning.
EDIT 2: I just realized it was returning a nil image because no filters were set. But if that's the case, how do you take a photo without having any filters active? And does that possibly have anything to do with my original issue? I set a grayscale filter (and I'm still seeing the half-strength version of it), and the image returned in the completion block is the actual proper grayscale image, despite the fact that the live feed looks different.
You probably have two inputs targeting your view.
Can't tell without seeing your code, but I found that drawing a graph of all my inputs and outputs really helped when debugging filters.
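
For illustration, a minimal sketch of keeping exactly one chain from camera to filter to view. This assumes the Objective-C GPUImage framework bridged into Swift; the property names (camera, filteredView) and the chosen filter are placeholders, so adapt it to your own setup.

import AVFoundation
import GPUImage
import UIKit

final class FilteredCameraController {
    // Placeholders: wire these up to your own camera and on-screen GPUImageView.
    let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.high.rawValue,
                                     cameraPosition: .back)
    let filteredView = GPUImageView()

    func start() {
        camera?.outputImageOrientation = .portrait
        apply(GPUImageGrayscaleFilter())      // or whichever filter was tapped
        camera?.startCameraCapture()
    }

    func apply(_ filter: GPUImageFilter) {
        // Tear down the previous chain first. If the camera still targets an old
        // filter (or the view directly) while a new filter also feeds the view,
        // the view ends up with two inputs and alternates between them.
        camera?.removeAllTargets()
        camera?.addTarget(filter)
        filter.addTarget(filteredView)
    }
}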

OpenCV issue with image subtraction?

I am trying to subtract 2 images using the function cvAbsDiff(img1, img2, dest);
It works, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture... the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help me.
There's nothing wrong with your code. Background subtraction is not a preferred way of doing motion detection or silhouette detection because it's not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground into the background. You might try:
- using optical flow for motion detection
- training a HOG classifier, if your task is just detecting a silhouette or a hand
In case you do not want to try a new approach, you may try playing with the threshold value (in your case 30). When you subtract regions of similar colour, the difference is less than 30, and thresholding at 30 then just blacks them out. You may also try HSV or some other colour space.
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've done subtraction of consecutive images (that is, images taken a fraction of a second apart), and that generally results in the edges of moving objects, for example the edges of a hand, not the entire silhouette of the hand. I'm guessing you're taking the difference of the current frame and a static startup frame. It's possible that parts aren't different enough (skin on skin).
I've got some computer problems tonight; I'll test it out tomorrow (please put up at least the steps you actually carry out, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that comes from the subjects having similar color.
I reproduced your image by having my face in the static comparison image and then moving it. If I started with only background, it was already much more robust and in any case didn't display any "overlaying".
The quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges, though, so if you want the entire silhouette you can either:
1) Use a different segmentation algorithm (this is what I recommend: background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation).
2) Try a compromise between the static and dynamic comparison image, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement and worth a try :) (a rough sketch follows at the end of this answer).
Also, try CV_THRESH_OTSU instead of 30 for your threshold value and see if you like that better.
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because of the webcam autofocusing/adjusting white balance etc. If you're getting that too, turning off the autofocus should help (which, by the way, isn't done through OpenCV but depends on the camera; possibly check this: How to programatically disable the auto-focus of a webcam?)
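
As a rough illustration of point 2) above, here is a sketch in plain Swift over raw grayscale byte buffers (no OpenCV; buffer names and sizes are placeholders) of keeping a running-average background and differencing the current frame against it:

// Exponential running average of the background:
// background = background + alpha * (frame - background).
// A small alpha behaves roughly like averaging the last ~1/alpha frames.
func updateBackground(_ background: inout [Float], with frame: [UInt8], alpha: Float = 0.1) {
    precondition(background.count == frame.count)
    for i in 0..<background.count {
        background[i] += alpha * (Float(frame[i]) - background[i])
    }
}

// Absolute difference against the averaged background, then a binary threshold,
// i.e. the same cvAbsDiff + threshold step, just against a dynamic background.
func foregroundMask(frame: [UInt8], background: [Float], threshold: Float = 30) -> [UInt8] {
    precondition(frame.count == background.count)
    var mask = [UInt8](repeating: 0, count: frame.count)
    for i in 0..<frame.count {
        mask[i] = abs(Float(frame[i]) - background[i]) > threshold ? 255 : 0
    }
    return mask
}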

Recognize moving objects and differentiate them from the background?

I am working on a project where I take a video with a camera and convert this video to frames (this part of the project is done).
What I am facing now is how to detect the moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; I'd recommend a smoothing kernel, though) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be pretty static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll have a grayscale image containing the object that moved. The object has to be different from the background color or it will disappear...
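
As a concrete (if simplified) illustration of that frame-differencing idea, here is a sketch in plain Swift over raw grayscale byte buffers; the threshold of 30 is an arbitrary example value, and both frames are assumed to be the same size:

// Absolute difference of two consecutive grayscale frames, followed by a binary
// threshold: pixels that changed by more than `threshold` are marked 255 (moving).
func motionMask(previous: [UInt8], current: [UInt8], threshold: UInt8 = 30) -> [UInt8] {
    precondition(previous.count == current.count)
    var mask = [UInt8](repeating: 0, count: current.count)
    for i in 0..<current.count {
        // |current - previous| computed branch-wise to avoid UInt8 overflow
        let diff = current[i] > previous[i] ? current[i] - previous[i] : previous[i] - current[i]
        mask[i] = diff > threshold ? 255 : 0
    }
    return mask
}

// Usage: feed it the grayscale bytes of frame n-1 and frame n; white pixels in the
// returned mask mark where something moved. Smoothing the frames (e.g. a small box
// blur) before differencing helps suppress sensor noise.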

Resources