I'm not sure if this is the right place to post this type of question.
I purchased a USB digital camera, specifically for outdoor use (an eBay item from China).
Some of the colors come out wrong; in particular, green comes out purplish!
Assuming there is nothing I can do with the camera settings, is it possible to correct the colors to what they should really look like?
This is the image taken with the USB camera:
This is the image taken with my analogue camera:
It seems like your camera does not have an infrared cut filter. Without it, all three color channels pick up extra exposure from the near-IR range of the spectrum.
Some outdoor cameras have a night mode that slides the IR cut filter out of the way to allow higher sensitivity in the dark. In daylight, the filter is moved back so that only visible light reaches the sensor.
See if your camera has such a switch.
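If there is no such switch, you may be able to partially compensate in software by remixing the color channels. Below is a minimal sketch using Core Image; the matrix values are purely illustrative guesses and would need to be calibrated against a color chart shot with your camera, and no channel mix can fully undo IR contamination.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Rough post-processing sketch: remix the color channels to pull the
// IR-contaminated foliage back towards green. The matrix values below are
// illustrative guesses only; calibrate them against a color chart shot
// with the actual camera.
func correctIRContamination(_ input: CIImage) -> CIImage {
    let matrix = CIFilter.colorMatrix()
    matrix.inputImage = input
    // Each vector gives the weights of (R, G, B, A) contributing to that output channel.
    matrix.rVector = CIVector(x: 0.6, y: 0.2, z: 0.2, w: 0)   // tame the red channel
    matrix.gVector = CIVector(x: -0.1, y: 1.1, z: 0.0, w: 0)  // boost green slightly
    matrix.bVector = CIVector(x: 0.0, y: -0.1, z: 1.1, w: 0)
    matrix.aVector = CIVector(x: 0, y: 0, z: 0, w: 1)
    return matrix.outputImage ?? input
}
```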
When capturing a photo using AVFoundation classes, stains appear in certain areas of the image.
Happens on iOS 14.4, iPhone 12 Pro.
I managed to reproduce it using different custom ISO and exposure time settings and using the default auto setting.
Both for single photo and bracket captures.
Both with maxPhotoQualityPrioritization set to quality and to balanced.
Both with ultra wide angle and wide angle cameras.
It's not deterministic. It seems most prominent in images with high light contrast and mixed light sources (where natural light mixes with artificial light and some areas are more brightly lit than others). It is also more prominent when capturing multiple images using bracket settings with both negative and positive exposure biases. example image
Does anybody know any fix or a workaround for this?
What you describe as "stains" look like areas that are "blown out": areas where one or more color channels is at its maximum and all detail is lost (known as "blown highlights" in photography). The result is blobs of solid color with no detail.
In your case, it looks like the "stains" are completely blown in all 3 color channels.
If you use Photoshop, you can display a histogram of the image, as well as a mode in which oversaturated areas are shown in red.
See this link, for example, for a description of how to do that.
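If you'd rather flag the problem in code than in Photoshop, one rough approach (just a sketch; the 250 threshold is an assumed noise margin, not anything from AVFoundation) is to count the pixels where all three channels are clipped:

```swift
import UIKit

// Sketch: estimate the fraction of pixels that are clipped in all three
// channels, i.e. fully blown highlights. The 250 threshold is an arbitrary
// noise margin.
func blownHighlightFraction(in image: UIImage, threshold: UInt8 = 250) -> Double {
    guard let cgImage = image.cgImage else { return 0 }
    let width = cgImage.width, height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)

    let blownCount = pixels.withUnsafeMutableBytes { (buffer) -> Int in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return 0 }
        // Redraw the photo into an RGBA8 bitmap we can inspect directly.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        var count = 0
        for i in stride(from: 0, to: buffer.count, by: 4) {
            // RGBA layout: only count a pixel when R, G and B are all clipped.
            if buffer[i] >= threshold, buffer[i + 1] >= threshold, buffer[i + 2] >= threshold {
                count += 1
            }
        }
        return count
    }
    return Double(blownCount) / Double(width * height)
}
```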
I would like to build a very simple AR app that is able to detect a white sheet of A4 paper in its surroundings. I thought it would be enough to use Apple's image recognition sample project with a white sample image in the ratio of an A4 sheet, but the ARSession fails:
```
One or more reference images have insufficient texture: white_a4,
NSLocalizedRecoverySuggestion=One or more images lack sufficient
texture and contrast for accurate detection. Image detection works
best when an image contains multiple high-contrast regions distributed
across its extent.
```
Is there a simple way, to detect sheets of paper using ARKit? Thanks!
I think even ARKit 3.0 isn't ready to detect a plain white sheet at the moment.
If you have a white sheet with some markers at its corners, or some text on it, or even a white sheet placed in a well-defined environment (that's a kind of detection based on the surroundings, not on the sheet itself), then it makes some sense.
But a plain white sheet has no distinct marks on it, so ARKit has no understanding of what it is, what its color is (outside a room it has a cold tint, for instance, but inside a room it has a warm tint), what its contrast is (contrast is an important property in image detection), or how it's oriented (this mainly depends on your point of view).
The whole point of image detection is that ARKit detects an image, not its absence.
So, for successful detection, you'll need to give ARKit not only the sheet but its surroundings as well.
Also, you can look at Apple's recommendations for working with the image detection technique:
Enter the physical size of the image in Xcode as accurately as possible. ARKit relies on this information to determine the distance of the image from the camera. Entering an incorrect physical size will result in an ARImageAnchor that’s the wrong distance from the camera.
When you add reference images to your asset catalog in Xcode, pay attention to the quality estimation warnings Xcode provides. Images with high contrast work best for image detection.
Use only images on flat surfaces for detection. If an image to be detected is on a nonplanar surface, like a label on a wine bottle, ARKit might not detect it at all, or might create an image anchor at the wrong location.
Consider how your image appears under different lighting conditions. If an image is printed on glossy paper or displayed on a device screen, reflections on those surfaces can interfere with detection.
I must add that you need a unique texture pattern, not a repetitive one.
What you could do is run a simple ARWorldTrackingConfiguration where you periodically analyze the camera image for rectangles using the Vision framework.
This post (https://medium.com/s23nyc-tech/using-machine-learning-and-coreml-to-control-arkit-24241c894e3b) describes how to use ARKit in combination with Core ML.
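A minimal sketch of the ARWorldTrackingConfiguration + Vision approach might look like the following. The class name, aspect-ratio bounds and throttling scheme are my own assumptions, and hit-testing the detected corners back into the AR scene is left out.

```swift
import ARKit
import Vision

// Sketch: run plain world tracking and periodically look for rectangles
// (e.g. a sheet of paper) in the camera feed using Vision.
final class PaperDetector: NSObject, ARSessionDelegate {
    let session = ARSession()
    private var isSearching = false

    func start() {
        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard !isSearching else { return }   // throttle: one Vision request at a time
        isSearching = true

        let request = VNDetectRectanglesRequest { [weak self] request, _ in
            defer { self?.isSearching = false }
            guard let rect = (request.results as? [VNRectangleObservation])?.first else { return }
            // Normalized corners of the candidate sheet; raycast these back
            // into the AR scene to place an anchor where the paper lies.
            print("Found rectangle with corners:",
                  rect.topLeft, rect.topRight, rect.bottomLeft, rect.bottomRight)
        }
        // A4 is roughly 0.71 (short side over long side); allow some perspective distortion.
        request.minimumAspectRatio = 0.6
        request.maximumAspectRatio = 0.8
        request.minimumConfidence = 0.8

        // .right maps the sensor orientation for a portrait UI; adjust as needed.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        DispatchQueue.global(qos: .userInitiated).async {
            try? handler.perform([request])
        }
    }
}
```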
I'm working on computer screen detection using Emgu CV (a C# OpenCV wrapper).
I want to detect my computer screen and draw a rectangle on it.
To help with this, I placed 3 infrared LEDs on the computer screen, which I detect first; after that detection, I can find the screen area below those 3 LEDs.
Here is the result after detecting the 3 LEDs.
The 3 red boxes are the detected LEDs.
And in general I have something like this:
Does anyone have an idea of how I can proceed to detect the whole screen area?
This is just a suggestion, but if you know for a fact that your computer screen is below your LEDs, you could try OpenCV's GrabCut algorithm. Draw a rectangle below the LEDs, large enough to contain the screen (you could perhaps estimate its size from the spacing between the LEDs), and use it to initialize GrabCut.
Let me know what kind of results you get.
You can try using a camera with no IR filter (most night-vision cameras qualify) so that you get a more intense response from the LEDs, making them stand out from the display; then it's a simple blob detection to get their positions.
Another solution would be to use ArUco markers on the display. If the viewing angles you intend to use are not very large, this should be a compelling option, and the relative position of the camera with respect to the display can also be estimated, if that is what you want. ArUco detection gives you the angles of the display plane, which makes it possible to estimate the display area from them.
Is there a way to detect UV rays with the iPhone camera without any extra hardware? I don't need to find the exact UV index of the light.
I tried taking a photo of a UV light, but when I analyse that photo I can only get the RGB pixels. How can I tell that the photo was taken under UV light? How do photos taken under UV light differ from photos taken in normal light?
I found an app called GammaPix in the App Store which claims to detect radioactivity in an area using only the camera. I want something like that for detecting UV rays.
Fundamentally, most CCD digital sensors can see UV light, but they contain filters to block light outside of the visible range from reaching the detector. There is a good discussion of this here: https://photo.stackexchange.com/questions/2262/are-digital-sensors-sensitive-to-uv
The spectral response of the iPhone drops to essentially zero for wavelengths shorter than 400 nm (edge of the visible) due to these filters. [See page 35 of http://epublications.uef.fi/pub/urn_nbn_fi_uef-20141156/urn_nbn_fi_uef-20141156.pdf]
Really high energy photons, like gamma rays, can pass through most materials and reach the sensor, even when visible light to the camera is completely blocked. This is how the GammaPix app likely works. This doesn't work for UV photons, since they will be blocked by the filters on the sensor.
So, unless you can somehow crack open the iPhone and remove the filters from the sensor (probably not possible without destroying the sensor), then it will not be able to detect UV photons.
To detect UV, a "windowless CCD" (or similar) is required: http://www.mightexsystems.com/index.php?cPath=1_88
Another option might be to use a material that can absorb UV photons and emit visible photons. For example, this card will glow when illuminated with UV light.
It is unlikely you can detect UV light via a standard smartphone camera, at least not without a lot of work.
The GammaPix app does not detect gamma rays directly, but instead "looks for a particular 'signature' left behind by gamma rays" on the camera. The technology behind this signature detection is patented, and was developed as part of a $679,000 DARPA grant.
Relevant patents: US7391028, US7737410
I can't say for sure, but I do not believe the iPhone camera can "see" UV light; most digital cameras can't. Note that white luminescence is not UV light, it is just caused by it.
Slightly off-topic, but infrared is another story: some of the older iPhone cameras could see infrared (before Apple added the IR filter). The iPhone 6 front camera can still see IR light, as it doesn't have an IR filter.
Don't believe me? Point your TV's IR remote at your front camera and you will see the LED light up.
A standard smartphone camera cannot detect UV or IR, as only the visible light spectrum is allowed through. You can check out this device from Nurugo, but it is not compatible with the iPhone yet.
You could use weather APIs to get the UV index online, for example OpenUV - Global Real-time UV Index API.
The iPhone and iPad have various AVCaptureDevice methods to lock the white balance and exposure settings or set them to auto.
I am trying to understand the mechanism of the "backside illumination sensor" on the back camera (and now the front camera): whether it adjusts the white balance and exposure settings, and, if so, whether locking the WB and exposure modes would interfere with the "backside illumination sensor" doing its job.
Or is the "backside illumination sensor" simply boosting the RGB values of the pixels in low light?
Thanks
I think you're a little confused about what the term "backside-illuminated sensor" means here. It is just a type of CMOS sensor used in the newer iPhones (and other mobile phones). It claims to have better low-light performance than older CMOS imagers, but it is simply what captures the photos and videos, not a separate sensor for detecting light levels. There is a light sensor on the front face of the device, but that's just for adjusting the brightness of the screen in response to lighting conditions.
In my experience, all automatic exposure and gain correction done by the iPhone is based on the average luminance of the scene captured by the camera. When I've done whole-image luminance averaging, I've found that the iPhone camera almost always maintains an average luminance of around 50%. This seems to indicate that it uses the image captured by the sensor to determine exposure and gain settings for the camera (and probably white balance leveling, too).
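For reference, that kind of whole-image luminance averaging can be sketched with Core Image's CIAreaAverage filter; the Rec. 709 weights used to convert the averaged RGB to luminance are my own choice, not anything the camera pipeline exposes.

```swift
import CoreImage
import CoreVideo

// Sketch: whole-image average luminance of a camera frame, the same kind of
// average the auto-exposure logic appears to hold around 50%.
let ciContext = CIContext()

func averageLuminance(of pixelBuffer: CVPixelBuffer) -> Double {
    let image = CIImage(cvPixelBuffer: pixelBuffer)
    guard let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: image,
                                             kCIInputExtentKey: CIVector(cgRect: image.extent)]),
          let output = filter.outputImage else { return 0 }

    // CIAreaAverage reduces the whole frame to a single averaged pixel; read it back.
    var pixel = [UInt8](repeating: 0, count: 4)
    ciContext.render(output,
                     toBitmap: &pixel,
                     rowBytes: 4,
                     bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                     format: .RGBA8,
                     colorSpace: CGColorSpaceCreateDeviceRGB())

    // Convert the averaged RGB to luminance with Rec. 709 weights.
    let r = Double(pixel[0]) / 255, g = Double(pixel[1]) / 255, b = Double(pixel[2]) / 255
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
}
```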