Is there a way to detect real darkness with iOS camera devices? - ios

I am developing an app for blind users and I am blind too. Working with photos, images, cameras, and so on is not rewarding work. I am encountering this brain teaser:
I implemented an auto torch mode algorithm. The torch switches on and off almost appropriately based on the EXIF brightness values. The problem is that in a dark scene the torch switches on and off repeatedly, as it keeps meeting the trigger condition. What I cannot determine is which darkness is real and which darkness is artificial. For example, if I put my hand behind the camera the darkness becomes artificial; if I put my phone on the table, the darkness is not real. What kind of data and values could I use to mitigate or eliminate this issue?
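A minimal sketch of the kind of auto-torch loop described above, with hysteresis added so that the torch brightening the scene does not immediately switch it back off again. The class name and the two threshold values are illustrative assumptions, not code or numbers from the question:

import AVFoundation
import ImageIO

final class AutoTorchController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private let device: AVCaptureDevice
    private let darkThreshold = -4.0    // assumed: switch the torch on below this BrightnessValue
    private let brightThreshold = 0.0   // assumed: switch it off only above this one

    init(device: AVCaptureDevice) {
        self.device = device
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The capture pipeline attaches EXIF metadata (including BrightnessValue) to each frame.
        guard
            let attachments = CMCopyDictionaryOfAttachments(allocator: nil,
                                                            target: sampleBuffer,
                                                            attachmentMode: kCMAttachmentMode_ShouldPropagate) as? [String: Any],
            let exif = attachments[kCGImagePropertyExifDictionary as String] as? [String: Any],
            let brightness = (exif[kCGImagePropertyExifBrightnessValue as String] as? NSNumber)?.doubleValue
        else { return }

        setTorch(on: shouldTorchBeOn(brightness: brightness))
    }

    private func shouldTorchBeOn(brightness: Double) -> Bool {
        // Hysteresis: two separate thresholds, so the light added by the torch
        // itself cannot immediately satisfy the "bright enough, switch off" condition.
        if device.torchMode == .off {
            return brightness < darkThreshold
        } else {
            return !(brightness > brightThreshold)
        }
    }

    private func setTorch(on: Bool) {
        let target: AVCaptureDevice.TorchMode = on ? .on : .off
        guard device.hasTorch, device.torchMode != target else { return }
        do {
            try device.lockForConfiguration()
            device.torchMode = target
            device.unlockForConfiguration()
        } catch {
            // Torch configuration can fail while the camera is busy; ignore and retry on the next frame.
        }
    }
}

The gap between the two thresholds is what stops the flicker; it does not by itself distinguish "hand over the lens" from a genuinely dark room, which is the open part of the question.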

Related

iOS ARKit 3 with iPad Pro 2020: how to use front camera data with back camera tracking?

The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR sensor takes a lot more power). This is a bit of a step back.
Here is an updated reference in the example project:
guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("This sample code requires iOS 13 / iPad OS 13, and an iOS device with a front TrueDepth camera. Note: 2020 iPads do not support user face-tracking while world tracking.")
}
There is also a forum conversation proving that this is an unintentional hardware flaw.
It looks like the mobile technology is not "there yet" for both. However, for my use case I just wanted to be able to switch between front and back tracking modes seamlessly, without needing to reconfigure the tracking space. For example, I would like a button to toggle between "now you track and see my face" mode and "world tracking" mode.
There are two cases: either it's possible or it's impossible, and the alternative approaches depend on which one it is.
Is it possible, or would switching AR tracking modes require setting up the tracking space again? If so, how would it be achieved?
If it's impossible:
Even if I don't get face-tracking during world-tracking, is there a way to get a front-facing camera feed that I can use with the Vision framework, for example?
Specifically: how do I enable back-facing tracking and get front and back facing camera feeds simultaneously, and disable one or the other selectively? If it's possible even without front-facing tracking and only the basic feed, this will work.
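For reference, a minimal sketch (assuming iOS 13+) of how the combined mode is enabled on hardware that does support it; the device check is the same guard shown above, and face data then arrives as ARFaceAnchor updates inside the one world-tracking session, so no re-setup of the tracking space is involved. On the 2020 iPad Pro the capability check simply fails:

import ARKit

func makeWorldConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        // Back camera drives world tracking; the front TrueDepth camera
        // simultaneously delivers face anchors to the session delegate.
        configuration.userFaceTrackingEnabled = true
    }
    return configuration
}

// In the ARSessionDelegate, the face data shows up as ARFaceAnchor instances:
// func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
//     let faces = anchors.compactMap { $0 as? ARFaceAnchor }
// }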

OpenCV Colour Detection Error

I am writing a script on the Raspberry Pi to detect the majority colour featured in a frame from a webcam, and I seem to be having an issue. The following image is me holding up my phone with a blank red image on it. I seem to be getting an orange colour instead.
Now, when I angle the phone, I do in fact get the expected red colour.
I am not sure why this is the case.
I am using a Logitech C920 webcam that emits a blue light when activated, and I also have the monitor going. I am wondering whether the light from these two is causing this issue, and whether, when I angle the phone, these lights are not hitting it front-on and thus not distorting the image.
I am still not heavily experienced in this area, so I would enjoy hearing explanations and possible workarounds for my problem.
Thanks
There are a few things that can mess this up:
As you already mention, the light from the monitor and the camera.
The iPhone screen is a display, so flicker and sync might also come into play.
Reflection from the iPhone screen.
If your camera has automatic control for exposure and color balance etc., the picture quality can change as you move around.
I suggest using a colored piece of non-glossy paper so that you can remove the iPhone display's effects.

How to track an opened hand in any environment with RGB camera?

I want to make a movable camera that tracks an open hand (facing toward the floor). It just needs to track the open hand, but it also has to know the rotation (2D rotation).
This is what I have looked into so far:
Contour: as the camera is movable, the background is unknown and even the lighting is not fixed, so it's hard for me to get a clean hand segment in real time.
Haar: it seems this just returns a rect and can't deal with rotation.
Feature detection: a hand doesn't have enough detail for this.
I am using the OpenCV Unity plugin to do this.
EDIT
https://www.codeproject.com/Articles/826377/Rapid-Object-Detection-in-Csharp
I see that another library can do something like this. Can OpenCV also do this?
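Not an OpenCV answer, but as an illustration of the general idea on iOS: Apple's Vision framework (iOS 14+) detects hand joints directly, and the wrist-to-middle-finger direction gives a 2D rotation angle with no background segmentation. This is a hedged sketch only; the question targets the OpenCV Unity plugin, and the confidence threshold below is an arbitrary assumption:

import Vision
import CoreGraphics

func handRotation(in pixelBuffer: CVPixelBuffer) throws -> Double? {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])

    guard let hand = request.results?.first as? VNHumanHandPoseObservation,
          let wrist = try? hand.recognizedPoint(.wrist),
          let middleMCP = try? hand.recognizedPoint(.middleMCP),
          wrist.confidence > 0.3, middleMCP.confidence > 0.3   // assumed threshold
    else { return nil }

    // Angle of the wrist -> middle-finger vector in normalized image coordinates.
    let dy = Double(middleMCP.location.y - wrist.location.y)
    let dx = Double(middleMCP.location.x - wrist.location.x)
    return atan2(dy, dx)
}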

DeepEnd (Cydia tweak)-like 3D background for iOS

I am trying to emulate a 3D background in one of the applications we are developing.
Check this video for what I am trying to do: http://www.youtube.com/watch?v=429kM-yXGz8
Here is what I am trying to do to emulate this 3D illusion in our application for iPad.
I have a RootView with 3 rounded buttons centered on the screen which animate in a circular motion.
At the bottom of the screen I have some banners of size 600x200 which keep rotating with a flip animation.
I also have some graphical text that is part of the background, which contains the "Welcome message".
All elements are individual graphics, so when the user moves the iPad we only move the background, based on the position of the iPad using the x, y, z values of the accelerometer.
The background moves accordingly; however, this is not enough for a 3D illusion, so we decided to add shadows to the graphical elements (buttons, banners, text) and move the shadows according to the iPad's position.
However, the result is not convincing, and the accelerometer does not update its values when the user, standing up and holding the iPad straight in front of their face, moves it left and right.
I was wondering if anyone has tried to achieve something similar with success, or if there is any resource on how to achieve this. I am just not sure whether using only the accelerometer will work or whether I should go with the gyroscope.
Using face detection to simulate a 3D effect has already been done (by me). A complete sample can be downloaded from http://evict.nl/code/face-tracking and the video on that page gives a quick demo.
You should definitely use both: the accelerometer (movement) and the gyroscope (device angle). But for a true 3D effect you probably need to use the camera plus face detection.
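A minimal sketch of the motion side of this (the face-detection part is in the sample linked above): CMDeviceMotion fuses the accelerometer and gyroscope, so the attitude keeps updating when the device is merely tilted left and right, which raw accelerometer values alone miss. The class name and offset scale are illustrative assumptions:

import CoreMotion
import UIKit

final class ParallaxController {
    private let motionManager = CMMotionManager()
    private let background: UIView
    private let maxOffset: CGFloat = 40   // assumed: points of parallax travel

    init(background: UIView) {
        self.background = background
    }

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let attitude = motion?.attitude else { return }
            // Map roll/pitch (radians) to a small translation of the background
            // so it appears to sit behind the foreground elements.
            let dx = CGFloat(attitude.roll) * self.maxOffset
            let dy = CGFloat(attitude.pitch) * self.maxOffset
            self.background.transform = CGAffineTransform(translationX: dx, y: dy)
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }
}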

iOS face tracking while detection with CIDetector

Here is the actual problem: during face detection in streamed video, I need to track which faces were detected in previous iterations and which ones are new. This could possibly be done with the CIFaceFeature trackingID property, but here comes the hard part.
First of all, CIDetector returns an array of CIFaceFeatureInternal objects instead of CIFaceFeature. They are almost like CIFaceFeature, but they don't contain any tracking ID or eye data.
Currently I am trying this on iOS 5, and since the CIDetectorTracking option for CIDetector is only available from iOS 6, maybe that is to be expected. Anyway, I need to target iOS 5 in my application. I could possibly try to determine whether a face is still present on screen by comparing the detected face rectangles, but without additional information such as eye and mouth positions that will be very uncertain.
So here comes the question:
How can I detect faces from video output on iOS 5 and also get some tracking ID for the found faces?
Even a pointer in the right direction, such as a third-party library like OpenCV or some explanation, would be very helpful.
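For the rectangle-matching idea mentioned above, a minimal sketch (written in Swift for brevity; the question predates it, but the logic translates directly to Objective-C): assign your own tracking IDs by pairing each face rect from the current frame with the most-overlapping rect from the previous frame. The overlap threshold is an assumption:

import CoreGraphics

struct TrackedFace {
    let id: Int
    var rect: CGRect
}

final class FaceRectTracker {
    private var tracked: [TrackedFace] = []
    private var nextID = 0
    private let minOverlap: CGFloat = 0.3   // assumed intersection-over-union threshold

    func update(with rects: [CGRect]) -> [TrackedFace] {
        var remaining = tracked
        var result: [TrackedFace] = []

        for rect in rects {
            // Best match = previously tracked face whose rect overlaps this one the most.
            if let index = remaining.indices.max(by: { overlap(remaining[$0].rect, rect) < overlap(remaining[$1].rect, rect) }),
               overlap(remaining[index].rect, rect) > minOverlap {
                var face = remaining.remove(at: index)
                face.rect = rect
                result.append(face)
            } else {
                // No plausible match: treat it as a new face.
                result.append(TrackedFace(id: nextID, rect: rect))
                nextID += 1
            }
        }
        tracked = result
        return result
    }

    private func overlap(_ a: CGRect, _ b: CGRect) -> CGFloat {
        let intersection = a.intersection(b)
        guard !intersection.isNull else { return 0 }
        let intersectionArea = intersection.width * intersection.height
        let unionArea = a.width * a.height + b.width * b.height - intersectionArea
        return unionArea > 0 ? intersectionArea / unionArea : 0
    }
}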
