I am using OpenCV 3.4 with an RTSP camera feed. I want to add a condition to my code so that if the camera is covered with anything, an alert is sent to the user. Simply checking whether the frame is black doesn't work, because when the lens is covered with a white cloth, the frame will be white. Can anyone suggest some logic for this? How can it be accomplished with OpenCV?
You can check whether the camera is in focus or not. For example, here's a blurry photo of my palm and of my window:
Here's the function that calculates a sharpness "score" of each image:
import cv2 as cv

def sharpness(img):
    # Convert to grayscale, then let the Laplacian pick out edges;
    # the standard deviation of its response is the sharpness score
    img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    lap = cv.Laplacian(img, cv.CV_16S)
    mean, stddev = cv.meanStdDev(lap)
    return stddev[0, 0]
Testing:
The blurry picture has a much lower score. You can set a threshold, e.g. 20: anything below it is considered blurry, which means the camera is covered or something else is wrong with it.
I am new to Swift and Xcode. I currently have an image picker working in my app, and I am trying to detect multiple objects in an image uploaded from the photo library, but I cannot find any tutorial that demonstrates still-image object detection with bounding boxes on an image from an image picker.
I know I need to start with VNImageRequestHandler in SwiftUI, but from there, how do you feed in your machine learning model and create the bounding boxes? I have already tried piecing together code from the still-images tutorial on the Apple Developer site and the Vision object-detection pages in the Apple documentation.
Any tips or information would be extremely helpful!
I am not able to detect a surface using ARKit. I have also attached an image of the surface.
How can I accurately detect a surface with little texture?
To successfully detect a horizontal or vertical plane using ARKit or RealityKit, you need to:
track surfaces with distinctive textures;
work in a well-lit environment;
physically move around the room.
Avoid trying to track:
surfaces with poor or repetitive textures;
glossy, reflective, or refractive surfaces;
transparent objects (such as glass tables).
However, to overcome most of these limitations, you can use an iPad Pro with a LiDAR scanner. The 2020 iPad Pro detects surfaces at unprecedented speed, even in a poorly lit room.
I'm trying to implement a grayscale filter and a sepia filter on an image loaded into a UIImageView from the photo library, following some tutorials I found (I'm new to Core Image). I then want to put the sepia image back into the same UIImageView, but whenever I try to assign the new image to the view, it just disappears. I have checked whether the image view contains an image, and it does, but it is not visible. Any suggestions on what to do?
You can check these links; hopefully one of them has the solution:
https://github.com/BradLarson/GPUImage
http://www.willpowell.co.uk/blog/2014/09/14/15-image-filtering-processing-ocr-utilities-helper-libraries-frameworks-ios-iphone-ipad-development/
Please check the link below; hopefully it has the proper solution:
https://github.com/esilverberg/ios-image-filters
In the image above (top image), suppose the black boundary is the phone.
What I am trying to achieve is to randomly generate the red path from the top of the screen while the red line (path) moves downwards.
Notice how the red path is random and does not have a uniform shape.
My question is: how do I achieve this?
I know this has something to do with the random function, but generating the random path has been my main obstacle for the last 8 hours.
I can generate a shape at each timer interval at a specific x- and y-coordinate, but then, as you can see in the next image, how would I generate the line at an angle (rotated)?
I have searched everywhere on the internet without success; I always keep Stack Overflow as my last resort after failing to get something working despite numerous hours.
I would really appreciate it if anyone could help me out with this.
It looks like you could achieve the effect you want by starting at the top center and repeatedly choosing two random numbers: how far down to go, and how far horizontally to go (positive or negative), until you reach the bottom of the screen. You'd have to be careful not to go off either edge; alternatively, you could choose a random x-coordinate at each step.
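The algorithm above can be sketched as follows (a language-agnostic illustration in Python; the screen size and step ranges are arbitrary assumptions you would tune for your app):

```python
import random

def random_path(width=320, height=480, seed=None):
    # Build a jagged path from the top center down to the bottom edge
    rng = random.Random(seed)
    x, y = width / 2, 0.0
    points = [(x, y)]
    while y < height:
        y += rng.uniform(20, 60)            # how far down to go
        x += rng.uniform(-80, 80)           # how far horizontally (either sign)
        x = max(0.0, min(float(width), x))  # clamp so we stay on screen
        points.append((x, min(y, float(height))))
    return points
```

Drawing a line segment between each pair of consecutive points produces the random, non-uniform path; the rotated segments fall out automatically from the horizontal offsets, with no explicit angle calculation needed.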
I am making an application in which you can apply different effects to photos using the GPUImage framework by Brad Larson. I want to add an X-ray effect filter to my GPUImage app. Any pointers would be appreciated.
You want something like the GPUImageColorInvertFilter:
If that doesn't produce the exact effect you want, you could create a custom filter based on it and have your fragment shader first convert to luminance, and then apply a greenish tint based on the inverse of the pixel's luminance. That would provide the exact effect you show above.
I'll leave the coding of such a shader as an exercise for the reader.
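The per-pixel math such a shader would perform is simple enough to sketch. Here it is in Python/NumPy rather than GLSL, purely as an illustration; the luminance weights and the green tint color are assumed values, not GPUImage's exact constants:

```python
import numpy as np

def xray_effect(rgb):
    # rgb: H x W x 3 float array with values in [0, 1]
    # Rec. 601 luminance weights (an illustrative choice)
    luminance = rgb @ np.array([0.299, 0.587, 0.114])
    inverted = 1.0 - luminance               # invert: dark areas glow
    tint = np.array([0.4, 1.0, 0.6])         # greenish "X-ray" tint (assumed)
    return inverted[..., np.newaxis] * tint  # inverse luminance drives the tint
```

A GPUImage fragment shader would do the same arithmetic per pixel: dot the sampled color with the luminance vector, subtract the result from 1.0, and multiply by the tint constant.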