Fake colors through camera - iOS

In an iOS application I need to recognize colors through the camera, but while analyzing the problem I noticed that different kinds of light make the colors observed in the captured picture slightly different from the real ones. For example, under strong neon light a light blue appears gray.
What is the cause, and what kind of approach could I follow to solve this problem of "fake colors"?

The colors are not fake; they are just different from what you expect them to be. As @Piglet said, this has a lot to do with the physics of light, and white balance may help.
If you want to read more about it, look at:
Color Rendering Index
Color Metamerism
Sensitivity Metamerism Index
Color Constancy
These all cover the physics behind why different illuminations create different colors. The camera's color pipeline also contributes its share, so you can read about white balance and tone mapping as well...
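
If the practical goal is consistent color recognition on iOS, one mitigation is to stop auto white balance from drifting with the illumination by locking the capture device to a fixed color temperature. A minimal sketch, assuming an AVFoundation capture pipeline (the helper name and the 5600 K default are illustrative choices, not from the question):

    import AVFoundation

    /// Locks white balance to a fixed color temperature so that color readings
    /// stay comparable between frames even when the lighting changes.
    func lockWhiteBalance(on device: AVCaptureDevice,
                          temperature: Float = 5600,   // roughly daylight, in kelvin
                          tint: Float = 0) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        guard device.isWhiteBalanceModeSupported(.locked) else { return }

        let temperatureAndTint = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(
            temperature: temperature, tint: tint)
        var gains = device.deviceWhiteBalanceGains(for: temperatureAndTint)

        // Gains outside [1.0, maxWhiteBalanceGain] make the lock call raise an exception.
        gains.redGain   = min(max(gains.redGain,   1.0), device.maxWhiteBalanceGain)
        gains.greenGain = min(max(gains.greenGain, 1.0), device.maxWhiteBalanceGain)
        gains.blueGain  = min(max(gains.blueGain,  1.0), device.maxWhiteBalanceGain)

        device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
    }

Locking white balance only makes readings repeatable; if the colors also need to be comparable to a reference, the usual trick is to include a known patch (e.g. a gray card) in the frame and normalize against it.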

Related

Ambient light not realistic

Do you have any ideas on how to make my light sources more realistic and ambient? The light from the windows is just a material with color and luminance, but the lamps (highlighted) are made with a basic Light. As you can see, it doesn't light the area around it; it only creates a "light ball" that also passes through objects (the bridge, for example), and enabling shadows didn't fix the problem. I also turned off Global Illumination because it just slowed down my render, while the quality was EXACTLY the same as without GI. Any suggestions would help. (By the way, I added Fog to my Physical Sky to create a moody atmosphere.)

Device brightness with video on screen affecting white color

Video demonstration (please watch, 18s): https://youtu.be/4HtkadKEnWM
When I reduce the brightness of the device, iOS seems to be changing white to off-white. For my use case, it is important that the view background is actually white so the video appears to seamlessly blend into the background. Interestingly, when I take a screenshot or screen recording this effect does not appear.
I am on light mode and don't have any custom color space settings.
Does anyone have any ideas on how to fix this? Thank you!
You have a misunderstanding of colours.
White doesn't really exist in an absolute sense; it is defined as the (non-specular) reflection of a standard illuminant (the ISO standard for paper specifies roughly 90% reflectance).
So white depends on brightness: under the same light you can perceive black, grey, white, or "super-white". Our eyes simply adapt quickly to a reference white.
For emissive devices this is trickier, because emission doesn't depend on ambient light. That is why devices have had some form of ambient light detection for years, so that software can tell the screen to reduce brightness (and/or to change the white balance; in this case "white balance" is about colours, not brightness).
If a person dims the screen, it is probably because there is less ambient light, and that person is not seeing white "correctly". In any case, the eyes (and brain) will adapt quickly.
In short: white is a uniform colour at maximum brightness. If you dim the screen, it will appear grey for a short time, because we had a different idea of what maximum brightness was, but we adapt.
Note: we have two kinds of adaptation: brightness and colour. Good filmmakers can use this to their advantage, but colourists must also handle it (on long grey scenes, so we don't adapt and start seeing them as white).
Note: hardware may have two kinds of brightness: the classic one encoded in the RGB values, and a global brightness (the backlight, even if modern panels no longer have a literal backlight). A screenshot captures only the RGB values, not the corrected output. And if you take a photo, every camera meters the brightness and exposes automatically (the average brightness of an image is rendered as mid-grey), so you see no difference there either. (If you really want to see it, set your camera to manual exposure mode and take the two photos with the same settings.)
After communicating with Apple support, we discovered that the video_full_range_flag was not being set by ffmpeg. Rewriting the videos with the following ffmpeg command fixed it (it copies the streams and only sets the flag via a bitstream filter):

    ffmpeg -i input.mp4 -map 0 -c copy -bsf:v h264_metadata=video_full_range_flag=1 output.mp4
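
If you want to verify from the app side whether a file already carries the flag, one option is to inspect the video track's format description. A small sketch, assuming the flag surfaces through the kCMFormatDescriptionExtension_FullRangeVideo extension (worth double-checking for your particular files):

    import AVFoundation
    import CoreMedia

    // Prints whether the first video track of the asset is marked as full range.
    func printVideoRangeFlag(of url: URL) {
        let asset = AVURLAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first,
              let anyDescription = track.formatDescriptions.first else {
            print("No video track found")
            return
        }
        let description = anyDescription as! CMFormatDescription
        let fullRange = CMFormatDescriptionGetExtension(
            description,
            extensionKey: kCMFormatDescriptionExtension_FullRangeVideo)
        print("Full-range flag:", fullRange ?? "not set")
    }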

SKEmitterNode how to maintain same effect with blend mode "add" across different backgrounds

I have a really cool effect that I like, which I made using sks files in Xcode and the blend mode 'add'. I didn't realize it at the time, but after looking at the Apple docs I saw that the effect is actually based on the background color, specifically:
Adds the pixel values of the particle and underlying images. Creates a white pixel if this value is greater than 1
Now, I want to have the same effect across every background color, but as far as I know the only way to do that is to use the "Alpha" blend mode. But that only gives me the option of solid colors. This is the graphic that I want to apply across all the different background colors:
How can I go about having this effect across all different background colors? I'm using the default spark particle file.
UPDATE:
I'm leaving this question unanswered until either Apple comes up with a way to do what I want or someone else finds a way to do it.
Due to the unique nature of particle systems AND the very limited masking facilities of SpriteKit, I don't think this can be done.
Availability of inversion masking, in an unnested way that's not the clusterfuck of masking in SpriteKit as we currently know it, would instantly solve this problem.
The way to do this, ordinarily, without inversion masking, would be to have two instances of the exact same particle system: one acting as a mask to cut out the excess black, and one providing the visual elements you see over the black, with the whole thing then composited over your background.
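A rough sketch of that two-copy idea, assuming the stock "Spark.sks" particle file and SKCropNode; because the two emitters emit independently, their particles won't stay in sync, which is exactly the limitation discussed above:

    import SpriteKit

    // Two copies of the same emitter: one acts as the crop mask (its particles'
    // alpha decides what survives), the other is drawn normally over the background.
    func makeMaskedSpark(at position: CGPoint) -> SKNode? {
        guard let maskEmitter = SKEmitterNode(fileNamed: "Spark.sks"),
              let visibleEmitter = SKEmitterNode(fileNamed: "Spark.sks") else {
            return nil
        }
        visibleEmitter.particleBlendMode = .alpha   // avoid the background-dependent .add

        let crop = SKCropNode()
        crop.maskNode = maskEmitter   // cuts away everything outside the mask particles
        crop.addChild(visibleEmitter)
        crop.position = position
        return crop
    }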
Here's KnightOfDragon suffering with the individuality of particle systems for another use case: Duplicating a particle emitter effect in Sprite Kit

I have hand-drawn some work on grid paper and scanned it; how can I use Photoshop to remove the gridlines?

The grid is blue/green. The work is in black ink and has a fair bit of variation in pressure, which I want to retain.
Here's a link to a small selection.
I have Photoshop v3
My attempts have involved using Select > Color Range, sampling some of the grid, and then inverting.
Is there a better way?
I also have some experience with Python and PIL, if that's a useful alternative.
This is a Photoshop answer, rather than a programming answer, but that seems to match your question's needs.
I applied a Black and White filter, and enabled a Blue filter, then set the Blue channel sensitivity to 300%, like this in Photoshop CC.
and got pretty good results like this:
In an older version of Photoshop, you may need to go to Image->Mode->Lab Color, then go into the Channels palette and deselect Lab, leaving just the a and b channels selected, then use Select->Color Range to get the blues (or maybe the blacks!) before going back to RGB mode.

Replace particular color of image in iOS

I want to replace a particular color in an image with another, user-selected color. While replacing the color, I want to maintain the gradient effect of the original color; for example, see the attached images.
I have tried to do this with Core Graphics and succeeded in replacing the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is Core Graphics the right way to do this?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in NSImage that uses the CIColorCube CIFilter.
https://github.com/braginets/NSImage-replace-color
It was inspired by this code for UIImage (which also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
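For reference, the CIColorCube approach boils down to building a lookup cube that moves colors whose hue falls in a source range to a target hue while keeping their saturation and brightness, which is what preserves the gradient. A minimal Swift sketch (the cube size, hue tolerance, and function name are arbitrary choices, not taken from the linked projects):

    import UIKit
    import CoreImage

    // Builds a CIColorCube filter that shifts every color whose hue lies within
    // sourceHue ± tolerance (hue in 0...1) to targetHue, keeping saturation and
    // brightness so the original shading/gradient survives the replacement.
    func makeHueReplacementFilter(sourceHue: CGFloat, tolerance: CGFloat,
                                  targetHue: CGFloat, cubeDimension: Int = 32) -> CIFilter? {
        var cubeData = [Float]()
        cubeData.reserveCapacity(cubeDimension * cubeDimension * cubeDimension * 4)

        for b in 0..<cubeDimension {
            for g in 0..<cubeDimension {
                for r in 0..<cubeDimension {
                    let red   = CGFloat(r) / CGFloat(cubeDimension - 1)
                    let green = CGFloat(g) / CGFloat(cubeDimension - 1)
                    let blue  = CGFloat(b) / CGFloat(cubeDimension - 1)

                    var hue: CGFloat = 0, sat: CGFloat = 0, bri: CGFloat = 0, a: CGFloat = 0
                    UIColor(red: red, green: green, blue: blue, alpha: 1)
                        .getHue(&hue, saturation: &sat, brightness: &bri, alpha: &a)

                    var out = UIColor(red: red, green: green, blue: blue, alpha: 1)
                    let hueDistance = abs(hue - sourceHue)
                    if hueDistance < tolerance || hueDistance > 1 - tolerance {
                        // Replace only the hue; saturation and brightness carry the gradient.
                        out = UIColor(hue: targetHue, saturation: sat, brightness: bri, alpha: 1)
                    }

                    var oR: CGFloat = 0, oG: CGFloat = 0, oB: CGFloat = 0, oA: CGFloat = 0
                    out.getRed(&oR, green: &oG, blue: &oB, alpha: &oA)
                    cubeData.append(contentsOf: [Float(oR), Float(oG), Float(oB), 1])
                }
            }
        }

        let data = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
        let filter = CIFilter(name: "CIColorCube")
        filter?.setValue(cubeDimension, forKey: "inputCubeDimension")
        filter?.setValue(data, forKey: "inputCubeData")
        return filter
    }

Feed it the source picture with filter?.setValue(ciImage, forKey: kCIInputImageKey) and read outputImage; the projects linked above use CIColorCube in a similar way.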
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool which obviously requires a lot of processing power and complex algorithms (and is still not perfect).
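To give a feel for what that separation involves, a magic-wand-style selection is essentially a flood fill with a color tolerance. A toy sketch over a raw RGBA buffer (plain Euclidean distance in RGB, 4-connectivity, seed assumed to be inside the image; everything here is a simplifying assumption):

    // Toy magic-wand selection: starting from a seed pixel, collect all
    // 4-connected pixels whose color is within `tolerance` of the seed color.
    // `pixels` is tightly packed RGBA, one byte per channel.
    func magicWandSelect(pixels: [UInt8], width: Int, height: Int,
                         seedX: Int, seedY: Int, tolerance: Double) -> [Bool] {
        func color(_ x: Int, _ y: Int) -> (Double, Double, Double) {
            let i = (y * width + x) * 4
            return (Double(pixels[i]), Double(pixels[i + 1]), Double(pixels[i + 2]))
        }
        let seed = color(seedX, seedY)
        var selected = [Bool](repeating: false, count: width * height)
        var stack = [(seedX, seedY)]

        while let (x, y) = stack.popLast() {
            guard x >= 0, x < width, y >= 0, y < height, !selected[y * width + x] else { continue }
            let c = color(x, y)
            let distance = ((c.0 - seed.0) * (c.0 - seed.0)
                          + (c.1 - seed.1) * (c.1 - seed.1)
                          + (c.2 - seed.2) * (c.2 - seed.2)).squareRoot()
            guard distance <= tolerance else { continue }
            selected[y * width + x] = true
            stack.append(contentsOf: [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        }
        return selected
    }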
