Detect UV light with iOS Camera

Is there a way to detect UV rays with the iPhone camera, without any additional hardware? I don't need to find the exact UV index of the light.
I tried taking a photo of a UV light source, but when I analyse that photo I can only get the RGB pixels. How can I tell that the photo was taken under UV light? How do photos taken under UV light differ from photos taken in normal light?
I found an app called GammaPix in the App Store which says it can detect radioactivity in an area using only the camera. I want something like that to detect UV rays.

Fundamentally, most CCD digital sensors can see UV light, but they contain filters to block light outside of the visible range from reaching the detector. There is a good discussion of this here: https://photo.stackexchange.com/questions/2262/are-digital-sensors-sensitive-to-uv
The spectral response of the iPhone drops to essentially zero for wavelengths shorter than 400 nm (edge of the visible) due to these filters. [See page 35 of http://epublications.uef.fi/pub/urn_nbn_fi_uef-20141156/urn_nbn_fi_uef-20141156.pdf]
Really high-energy photons, like gamma rays, can pass through most materials and reach the sensor even when visible light is completely blocked from the camera. This is likely how the GammaPix app works. It doesn't work for UV photons, since they are blocked by the filters on the sensor.
So, unless you can somehow crack open the iPhone and remove the filters from the sensor (probably not possible without destroying the sensor), it will not be able to detect UV photons.
To detect UV, a "windowless CCD" (or similar) is required: http://www.mightexsystems.com/index.php?cPath=1_88
Another option might be to use a material that can absorb UV photons and emit visible photons. For example, this card will glow when illuminated with UV light.

It is unlikely you can detect UV light via a standard smartphone camera, at least not without a lot of work.
The GammaPix app does not detect gamma rays directly, but instead "looks for a particular 'signature' left behind by gamma rays" on the camera. The technology behind this signature detection is patented, and was developed as part of a $679,000 DARPA grant.
Relevant patents: US7391028, US7737410

I can't say for sure, but I do not believe the iPhone camera can "see" UV light; most digital cameras can't. Note that white luminescence is not UV light, it is merely caused by it.
Slightly off-topic, but infrared is another story: some of the older iPhone cameras could see infrared (before Apple added the IR filter). The iPhone 6 front camera can still see IR light, as it doesn't have an IR filter.
Don't believe me? Point your TV's IR remote at your front camera and you will see the LED light up.

A standard smartphone camera cannot detect UV or IR, as only the visible part of the spectrum is allowed through to the sensor. You can check out this device from Nurugo, but it is not compatible with the iPhone yet.

You could use a weather API to get the UV index online, for example OpenUV - Global Real-time UV Index API.
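A quick sketch of such a lookup; the endpoint, the x-access-token header and the response shape are assumptions based on the OpenUV documentation, so verify them and substitute your own API key:

```python
import requests

API_KEY = "YOUR_OPENUV_API_KEY"  # placeholder, obtain one from openuv.io

def fetch_uv_index(lat: float, lng: float) -> float:
    """Return the current UV index for a location via the OpenUV web API."""
    resp = requests.get(
        "https://api.openuv.io/api/v1/uv",        # assumed endpoint
        params={"lat": lat, "lng": lng},
        headers={"x-access-token": API_KEY},      # assumed auth header
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["uv"]            # assumed response shape

if __name__ == "__main__":
    print(fetch_uv_index(52.52, 13.40))           # e.g. Berlin
```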

Related

Screen detection using opencv / Emgucv

I'm working on computer screen detection using Emgu CV (a C# OpenCV wrapper).
I want to detect my computer screen and draw a rectangle on it.
To help with this, I placed 3 infrared LEDs on the computer screen, which I detect first; once they are detected, I should be able to find the screen area below those 3 LEDs.
Here are the results after detecting the 3 LEDs.
The 3 red boxes are the detected LEDs.
In general I have something like this.
Does anyone have an idea how I can proceed to detect the whole screen area?
This is just a suggestion, but if you know for a fact that your computer screen is below your LEDs, you could try OpenCV's GrabCut algorithm: draw a rectangle below the LEDs, large enough to contain the screen (maybe you can guess the size from the spacing between the LEDs), and use it to initialize GrabCut.
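A minimal sketch of that idea in Python/OpenCV (the equivalent calls exist in Emgu CV); the file name and rectangle coordinates are placeholders you would derive from your LED detections:

```python
import cv2
import numpy as np

img = cv2.imread("frame_with_leds.png")        # frame containing the 3 LEDs

# Rectangle below the LEDs assumed to contain the whole screen
# (x, y, width, height are placeholders; estimate them from the LED spacing).
rect = (100, 150, 400, 300)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Initialise GrabCut with the rectangle and let it refine the segmentation.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as (probable) foreground form the screen candidate region.
screen_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
).astype(np.uint8)
cv2.imwrite("screen_mask.png", screen_mask)
```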
Let me know what kind of results you get.
You can try using a camera with no IR filter (which is most night-vision cameras), so that the LEDs appear much brighter than anything on your display; then it is simple blob detection to get their positions.
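A rough blob-detection sketch in Python/OpenCV; the threshold and area values are placeholders to tune for your camera:

```python
import cv2

gray = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

# Without an IR-cut filter the LEDs saturate the sensor, so look for small,
# very bright blobs.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255        # bright blobs
params.minThreshold = 200
params.maxThreshold = 255
params.filterByArea = True
params.minArea = 5
params.maxArea = 500

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)
led_positions = [kp.pt for kp in keypoints]   # (x, y) of each detected LED
print(led_positions)
```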
Another solution would be to use ArUco markers on the display. If the viewing angles you intend to use are not very large, it should be a compelling option, and the relative position of the camera with respect to the display can also be estimated, if that is what you want. From the ArUco detection you can get the angles at which the plane of the display sits, which lets you estimate the display area.
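A small Python sketch of the detection step with OpenCV's ArUco module (it lives in opencv-contrib-python; the API shown is the 4.7+ style, older versions expose cv2.aruco.detectMarkers() instead):

```python
import cv2

gray = cv2.imread("display_frame.png", cv2.IMREAD_GRAYSCALE)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    # With one marker near each corner of the display, the outer marker
    # corners outline the screen area directly; for the viewing angle,
    # feed the corners and the markers' known physical positions into
    # cv2.solvePnP together with the camera intrinsics.
    for marker_id, quad in zip(ids.flatten(), corners):
        print(marker_id, quad.reshape(4, 2))
```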

Can the color of an image be improved upon?

I am not sure if this is the right place to post this type of question.
I purchased a USB digital camera, specifically for outdoor usage (an eBay item from China).
The quality of some of the colors is poor: notably, green comes out purplish!
Assuming there is nothing I can do with the camera settings, is it possible to enhance the colors to what they really should look like?
This is the image taken with that USB camera:
This is the image taken with my analogue camera:
It seems like your camera does not have an infrared cut filter. This allows all 3 color channels to get extra exposure from the near-IR range of the spectrum.
Some outdoor cameras have a night mode which slides the IR cut filter out of the way to allow higher sensitivity in the dark. In daylight, the filter is moved back so that only visible light reaches the sensor.
See if your camera has such a switch.

Is it possible to extract the foreground in a video where the biggest part of the background is a huge screen (playing a video)?

I am working on a multi-view telepresence project using an array of kinect cameras.
To improve the visual quality we want to extract the foreground, e.g. the person standing in the middle, using the color image rather than the depth image, because we want to use the more reliable color image to repair some artefacts in the depth image.
The problem is that the foreground objects (usually 1-2 persons) are standing in front of a huge screen showing the other party of the telepresence system, which is also moving all the time, and this screen is visible to some of the Kinects. Is it still possible to extract the foreground for these Kinects, and if so, could you point me in the right direction?
Some more information regarding the existing system:
We already have a system running that merges the depth maps of all the Kinects, but that only gets us so far. There are a lot of issues with the Kinect depth sensor, e.g. interference and distance to the sensor.
Also, the color and depth sensors are slightly offset, so when you map the color (like a texture) onto a mesh reconstructed from the depth data, you sometimes map the floor onto the person.
All these issues decrease the overall quality of the depth data, but not the color data, so one could view the color-image silhouette as the "real" one and the depth one as a "broken" one. Nevertheless, the mesh is constructed from the depth data, so improving the depth data improves the quality of the whole system.
Once you have the silhouette, you could try to remove or correct wrong depth values outside of it and/or fill in missing depth values inside it, roughly as in the sketch below.
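Here is a minimal Python sketch of what I mean; it assumes the depth map is already registered to the color image, the file names are placeholders, and the hole filling is deliberately naive:

```python
import cv2
import numpy as np

# depth: depth map already registered to the color image
# silhouette: binary mask from color-based segmentation (255 = person)
depth = cv2.imread("depth_registered.png", cv2.IMREAD_UNCHANGED).astype(np.float32)
silhouette = cv2.imread("color_silhouette.png", cv2.IMREAD_GRAYSCALE)

# 1. Remove depth samples that fall outside the color silhouette.
depth[silhouette == 0] = 0

# 2. Fill missing depth inside the silhouette from nearby valid samples
#    (simple morphological closing here; inpainting or a joint bilateral
#    filter would do a better job).
holes = (silhouette > 0) & (depth == 0)
closed = cv2.morphologyEx(depth, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
depth[holes] = closed[holes]
```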
Thanks for every hint you can provide.
In my experience with this kind of problem, the strategy you propose is not the best way to go.
As you have a non-constant background, the problem you want to solve is actually 2D segmentation. This is a hard problem, and people typically use depth to make segmentation easier, not the other way round. I would try to combine/merge the multiple depth maps from your Kinects in order to improve your depth images, maybe in a KinectFusion kind of way, or using classic sensor-fusion techniques.
If you are absolutely determined to follow your strategy, you could try to use your imperfect depth maps to combine the RGB camera images of the Kinects in order to reconstruct a complete view of the background (without occlusion by the people in front of it). However, due to the changing background image on the screen, this would require your Kinects' RGB cameras to be synchronized, which I think is not possible.
Edit in the light of comments / updates
I think exploiting your knowledge of the image on the screen is your only chance of doing background subtraction for silhouette enhancement. I see that this is a tough problem, as the screen is a stereoscopic display, if I understand you correctly.
You could try to compute a model that describes what your Kinect RGB cameras see (given the stereoscopic display, their placement, the type of sensor, etc.) when you display a certain image on your screen: essentially a function telling you "Kinect K sees (r, g, b) at pixel (x, y) when I show (r', g', b') at pixel (x', y') on my display". To do this you will have to create a sequence of calibration images which you show on the display, without a person standing in front of it, and film with the Kinect. This would allow you to predict the appearance of your screen in the Kinect cameras, and thus compute background subtraction. It is a pretty challenging task (but it would make a good research paper if it works); a heavily simplified sketch follows.
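A heavily simplified Python sketch of the calibration idea, under strong assumptions: the geometric display-to-camera mapping is ignored and only a per-camera-pixel affine response to full-screen solid colours is fitted; the file names and array shapes are placeholders:

```python
import numpy as np

# Calibration data (assumed layout):
#   shown:    (N, 3)        N solid colours displayed full-screen
#   observed: (N, H, W, 3)  what the Kinect RGB camera recorded for each one
shown = np.load("shown_colours.npy").astype(np.float64)
observed = np.load("observed_frames.npy").astype(np.float64)

N, H, W, _ = observed.shape
A = np.hstack([shown, np.ones((N, 1))])            # (N, 4) affine design matrix

# Fit, for every camera pixel, an affine map  observed ~= A @ M.
obs_flat = observed.reshape(N, H * W * 3)
M, *_ = np.linalg.lstsq(A, obs_flat, rcond=None)   # (4, H*W*3)
M = M.reshape(4, H, W, 3)

def predict_background(display_colour):
    """Predict the camera view of the screen for a given (mean) display colour."""
    x = np.append(display_colour, 1.0)              # (4,)
    return np.einsum("c,chwk->hwk", x, M)           # (H, W, 3)

# Background subtraction: pixels far from the prediction belong to the person.
# diff = np.linalg.norm(frame - predict_background(current_display_colour), axis=-1)
# silhouette = diff > threshold
```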
A side note: you can easily compute the geometric relation of a Kinect's depth camera to its color camera, in order to avoid mapping the floor onto the person. Some Kinect APIs allow you to retrieve the raw image of the depth (IR) camera. If you cover the IR projector, you can film a calibration pattern with both the depth and the RGB camera and compute an extrinsic calibration, along the lines of the sketch below.
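A sketch of that calibration with OpenCV's standard checkerboard workflow; the pattern size, square size and file-name patterns are assumptions:

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners of the checkerboard (placeholder)
SQUARE = 0.025        # square size in metres (placeholder)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, ir_pts, rgb_pts = [], [], []
for rgb_file, ir_file in zip(sorted(glob.glob("rgb_*.png")),
                             sorted(glob.glob("ir_*.png"))):
    rgb = cv2.imread(rgb_file, cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread(ir_file, cv2.IMREAD_GRAYSCALE)   # IR projector covered
    ok_rgb, c_rgb = cv2.findChessboardCorners(rgb, PATTERN)
    ok_ir, c_ir = cv2.findChessboardCorners(ir, PATTERN)
    if ok_rgb and ok_ir:
        obj_pts.append(objp)
        rgb_pts.append(c_rgb)
        ir_pts.append(c_ir)

# Intrinsics of each camera first.
_, K_ir, d_ir, _, _ = cv2.calibrateCamera(obj_pts, ir_pts, ir.shape[::-1], None, None)
_, K_rgb, d_rgb, _, _ = cv2.calibrateCamera(obj_pts, rgb_pts, rgb.shape[::-1], None, None)

# Extrinsics between them: R, T give the pose of the RGB camera relative to the
# depth/IR camera, which lets you reproject depth samples into the colour image.
_, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, ir_pts, rgb_pts, K_ir, d_ir, K_rgb, d_rgb, rgb.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```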

How to detect infrared light with OpenCV

I'm trying to use OpenCV to detect an IR point using the built-in camera. My camera can see infrared light. However, I don't have a clue how to distinguish between visible light and IR light.
After the conversion to RGB we can't distinguish them, but maybe OpenCV has some methods to do it.
Does anybody know about such OpenCV functions? Or how to do it in other way?
--edit
Is it possible to recognise, for example, the wavelength of light using a laptop's built-in camera? Or is it simply impossible to distinguish between visible and infrared light without using a special camera?
You wouldn't be able to do anything in OpenCV, because by the time OpenCV gets to work on the image, the IR is just more RGB, like the visible light (you sort of mention this).
You say your camera can see infrared. Does this mean it has a filter which separates IR light from visible light? In that case, by the time you have your image inside OpenCV you would only be looking at IR; then look at intensities etc., roughly as in the sketch below.
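For example, a rough Python/OpenCV sketch that looks for small, near-saturated, low-saturation spots (an IR point bleeds into all three channels, so it usually shows up as a bright white blob); the thresholds are placeholders to tune:

```python
import cv2

frame = cv2.imread("frame.png")            # BGR frame from the camera

# Bright and desaturated pixels are candidate IR spots.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
bright = cv2.inRange(hsv, (0, 0, 240), (180, 60, 255))

contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
spots = [cv2.minEnclosingCircle(c) for c in contours if cv2.contourArea(c) < 200]
print(spots)                               # ((x, y), radius) of each candidate
```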
In your setting, assuming you have an RGB+IR camera, your camera will probably deliver these three channels:
R + IR
G + IR
B + IR
So it is difficult to identify IR pixels directly from the image, but nothing is impossible: R, G, B and IR are broad bands, so information from all wavelengths ends up in the channels.
One thing you can do is train a classification model to separate IR and non-IR pixels, using lots of image data with pre-determined classes. With that model trained, you could identify the IR pixels of a new image, along the lines of the sketch below.
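A toy sketch of that idea, assuming you already have hand-labelled pixel samples saved as NumPy arrays; whether plain RGB values carry enough signal to separate IR from visible light is exactly the open question the other answers raise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-labelled training data (assumed files): RGB pixel values sampled from
# regions known to be lit by the IR source vs. ordinary scene pixels.
ir_pixels = np.load("ir_pixels.npy")        # (N1, 3)
other_pixels = np.load("other_pixels.npy")  # (N2, 3)

X = np.vstack([ir_pixels, other_pixels]).astype(np.float32) / 255.0
y = np.concatenate([np.ones(len(ir_pixels)), np.zeros(len(other_pixels))])

clf = LogisticRegression(max_iter=1000).fit(X, y)

def ir_mask(image_rgb):
    """Per-pixel IR / non-IR prediction for an (H, W, 3) uint8 image."""
    h, w, _ = image_rgb.shape
    probs = clf.predict_proba(image_rgb.reshape(-1, 3) / 255.0)[:, 1]
    return probs.reshape(h, w) > 0.5
```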
There is no way to separate IR from visible light in software, because your camera effectively "transforms" IR light into light that is visible to your eyes.
I assume the only way to solve this would be to use two cameras: one IR camera with an IR-transmitting filter and one normal camera with an IR-blocking filter. Then you can match the images and pull out the information you need.

Fiducial marker detection in the presence of camera shake

I'm trying to make my OpenCV-based fiducial marker detection more robust when the user moves the camera (phone) violently. Markers are ArTag-style with a Hamming code embedded within a black border. Borders are detected by thresholding the image, then looking for quads based on the found contours, then checking the internals of the quads.
In general, decoding of the marker is fairly robust if the black border is recognized. I've tried the most obvious thing, which is downsampling the image twice and performing quad detection at those levels as well. This helps with camera defocus on extremely close markers, and also with very small amounts of image blur, but doesn't help much with the general case of camera motion blur.
Is there available research on ways to make detection more robust? Ideas I'm wondering about include:
Can you do some sort of optical-flow tracking to "guess" the position of the marker in the next frame, then some sort of corner detection in the region of that guess, rather than treating the rectangle search as a full-frame thresholding?
On PCs, is it possible to derive blur coefficients (perhaps by registration with recent video frames where the marker was detected) and deblur the image prior to processing?
On smartphones, is it possible to use the gyroscope and/or accelerometers to get deblurring coefficients and pre-process the image? (I'm assuming not, simply because if it were, the market would be flooded with shake-correcting camera apps.)
Links to failed ideas would also be appreciated if it saves me trying them.
Yes, you can use optical flow to estimate where the marker might be and localise your search, but it is only relocalisation; your tracking will still have broken for the blurred frames. A rough sketch of the idea follows.
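Something along these lines in Python/OpenCV; the padding value is a placeholder, and note that this only narrows the search window, it does not decode the blurred marker:

```python
import cv2
import numpy as np

def predict_marker_region(prev_gray, gray, prev_corners, pad=40):
    """Track the previous frame's marker corners into the current frame and
    return a padded bounding box in which to run the existing quad search,
    instead of thresholding the full frame. Returns None if tracking failed."""
    pts = prev_corners.astype(np.float32).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = next_pts[status.flatten() == 1].reshape(-1, 2)
    if len(good) < 3:
        return None                      # tracking broke; fall back to full frame
    x, y, w, h = cv2.boundingRect(good.astype(np.float32))
    x0, y0 = max(0, x - pad), max(0, y - pad)
    x1 = min(gray.shape[1], x + w + pad)
    y1 = min(gray.shape[0], y + h + pad)
    return (x0, y0, x1, y1)
```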
I don't know enough about deblurring, except to say that it is very computationally intensive, so real time might be difficult.
You can use the sensors to guess the kind of blur you're faced with, but I would guess deblurring is too computationally expensive for mobile devices in real time.
Then some other approaches:
There is some really smart stuff in here: http://www.robots.ox.ac.uk/~gk/publications/KleinDrummond2004IVC.pdf. They do edge detection (which could be used to find your marker borders, even though you're looking for quads right now), model the camera movement from the sensors, use those values to estimate how an edge should appear in the direction of blur given the frame rate, and search for that. Very elegant.
Similarly, here http://www.eecis.udel.edu/~jye/lab_research/11/BLUT_iccv_11.pdf they simply pre-blur the tracking targets and try to match whichever blurred target is appropriate given the direction of blur. They use Gaussian filters to model the blur, which are symmetric, so you need half as many pre-blurred targets as you might initially expect; a rough sketch of this follows.
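A minimal sketch of the pre-blurred matching idea, using plain normalised cross-correlation rather than the matching scheme from the paper; the sigma values are placeholders:

```python
import cv2

def best_blurred_match(frame_gray, template_gray, sigmas=(0, 2, 4, 8)):
    """Match a set of pre-blurred versions of the marker template against the
    frame and return (score, location, sigma) of the best match. Gaussian blur
    is symmetric, so one template per sigma is enough."""
    best = (-1.0, None, None)
    for sigma in sigmas:
        t = template_gray if sigma == 0 else cv2.GaussianBlur(template_gray, (0, 0), sigma)
        res = cv2.matchTemplate(frame_gray, t, cv2.TM_CCOEFF_NORMED)
        _min_val, max_val, _min_loc, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc, sigma)
    return best
```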
If you do try implementing any of these, I'd be really interested to hear how you get on!
From some related work (attempting to use the sensors/gyroscope to predict the likely location of features from one frame to the next in video), I'd say that option 3 is likely to be difficult if not impossible. At best you could get an indication of the approximate direction and angle of motion, which may help you model the blur using the approaches referenced by dabhaid, but I think it unlikely you'd get sufficient precision to be much more help.
