Rendering white in USDZ files - iOS

Problem: USDZ files render at roughly 80% white (light grey) even when the texture files are set to pure white.
Troubleshooting: we tested with and without AO maps, tried our own USDZs, and also built a simple project in Reality Converter with a primitive object given a full-white glossy paint material.
Q: Is it possible to render bright whites in AR with ARKit Quick Look?

Quick Look in ARKit uses PBR (physically based rendering) to render your models. In real life, white objects never look fully white, because they only reflect the light they receive, so I'm not surprised you can't get a fully white object.
There is an exception to this, both in PBR and real life, and that is light sources.
PBR shaders have an "emissive" channel. Try setting that channel to white and it'll probably look a lot whiter.
Doing a bit of googling, I found this Sketchfab resource that suggests exactly this:
Quick Look does not have a Shadeless mode, so 3D scans and other models set to Shadeless may look darker than expected. A workaround could be to duplicate the base color texture in the emission channel.
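For example, if you were building the asset with SceneKit, a minimal sketch of the emissive trick might look like this (the sphere, radius, and file name are just placeholders, and the .usdz export path assumes a recent OS version):

import SceneKit
import UIKit

// Sketch: a white sphere whose emission channel is also white, exported as
// USDZ for AR Quick Look. White in the emission channel keeps it bright
// even though PBR dims purely reflective surfaces.
let scene = SCNScene()

let material = SCNMaterial()
material.lightingModel = .physicallyBased
material.diffuse.contents = UIColor.white    // base color
material.emission.contents = UIColor.white   // the "whiter" trick from above

let sphere = SCNSphere(radius: 0.1)
sphere.materials = [material]
scene.rootNode.addChildNode(SCNNode(geometry: sphere))

// Exports USDZ when the URL has a .usdz extension.
let url = FileManager.default.temporaryDirectory.appendingPathComponent("whiteSphere.usdz")
let exported = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
print("USDZ export succeeded:", exported)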

Related

Device brightness with video on screen affecting white color

Video demonstration (please watch, 18s): https://youtu.be/4HtkadKEnWM
When I reduce the brightness of the device, iOS seems to be changing white to off-white. For my use case, it is important that the view background is actually white so the video appears to seamlessly blend into the background. Interestingly, when I take a screenshot or screen recording this effect does not appear.
I am on light mode and don't have any custom color space settings.
Does anyone have any ideas on how to fix this? Thank you!
You have a misunderstanding of colours.
White doesn't really exist as an absolute; it is defined as the (non-specular) reflection of a standard illuminant (the ISO standard for paper specifies roughly 90% reflectance).
So white depends on brightness: under the same light you can perceive black, grey, white, or super-white. Our eyes just adapt quickly to a reference white.
For emissive devices this is trickier, because emission doesn't depend on ambient light. That's why devices have had some form of ambient-light detection for years, so software can tell the screen to reduce brightness and/or change white balance (here "white balance" is about colour, not brightness).
If someone dims the screen, it is probably because there is less ambient light, and at that moment they are not seeing white "correctly". In any case, the eyes (and brain) adapt quickly.
In short: white is a uniform colour at maximum brightness. If you dim the screen, it will appear grey for a short time, because we had a different idea of maximum brightness, but we adapt.
Note: we have two kinds of adaptation, brightness and colour. Good filmmakers can use this deliberately, and colourists must handle it (e.g. on long grey scenes, to keep us from adapting and reading the grey as white).
Note: hardware may have two kinds of brightness: the classic one encoded in RGB values, and a global one (the backlight, even if modern panels no longer have a literal backlight). A screenshot captures only the RGB values, not the corrected output. And if you take a photo, every camera meters the scene and exposes automatically (the average brightness of an image is rendered as mid-grey), so you see no difference there either. If you really want to see it, set your camera to manual exposure and take the two photos with identical settings.
After communicating with Apple support, we discovered that the video_full_range_flag was not being set by ffmpeg. Rewriting the videos with the following ffmpeg command (it copies the streams and sets the flag via a bitstream filter, so nothing is re-encoded) fixed it: ffmpeg -i input.mp4 -map 0 -c copy -bsf:v h264_metadata=video_full_range_flag=1 output.mp4.

Best approach for coding a painting app on iOS / iPad

I’m trying to build a drawing/painting app for the iPad, with textured brush tips and paper.
So far, all the drawing-app sample code I've come across seems to work by stroking a path. However, I'd like to actually apply a texture all along the path, to simulate, say, an oil brush or charcoal.
Here is an example of a brush tip texture: Brush tip
The result when painting with the same brush tip: Result
In the results, the top output is what it looks like when the "brush tip" texture is applied far apart along the path.
The bottom result is the texture applied with very small steps along the path. Those who've worked in Photoshop with custom brushes will find this familiar.
I had once prototyped this in Processing years ago (I've since lost the source code), and got it to work in real-time.
In Processing, I converted both the brush-tip PNG and the canvas (the image I'm painting onto) into arrays of integers. Then I simply copied the values from the brush tip into the canvas texture at the appropriate indices, and at the end of the cycle displayed the image for that time step. Repeat this dozens of times between each point returned by the mouse.
How would I approach this in iOS, and in real-time? I tried this (https://blog.avenuecode.com/how-to-use-uikit-for-low-level-image-processing-in-swift) but it's way too slow.
This makes me believe Metal might be the only way forward. Is that true, or am I complicating this unnecessarily?
Thank you for any guidance!
PS. I'm coding in Swift 5, targeting iOS 13, in Xcode 11.5.
Welcome!
I recommend you check out Core Image. It's Apple's framework for image processing (higher level than Metal, though it can integrate with Metal). Unfortunately the documentation is a bit outdated, but I'm sure you can translate it into Swift.
Here Apple describes how you would realize a painting app with Core Image and here you can download the corresponding sample project.
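In case it helps, here is a rough Core Image sketch of the stamping idea; the names (brushTip, canvas) are placeholders rather than anything from Apple's sample. The approach is to composite the brush-tip image over the canvas at small steps along the stroke, then render once per frame.

import CoreImage
import UIKit

// Sketch: stamp a brush-tip image along a stroke by compositing it over the
// canvas at interpolated points. `brushTip` and `canvas` are placeholder
// images, not part of any Apple sample.
func stamp(brushTip: CIImage, along stroke: [CGPoint], over canvas: CIImage, spacing: CGFloat = 2) -> CIImage {
    var result = canvas
    guard stroke.count > 1 else { return result }

    for (a, b) in zip(stroke, stroke.dropFirst()) {
        let distance = hypot(b.x - a.x, b.y - a.y)
        let steps = max(Int(distance / spacing), 1)
        for i in 0...steps {
            let t = CGFloat(i) / CGFloat(steps)
            let x = a.x + (b.x - a.x) * t
            let y = a.y + (b.y - a.y) * t
            // Centre the tip on the interpolated point, then composite it.
            let placed = brushTip.transformed(by: CGAffineTransform(
                translationX: x - brushTip.extent.midX,
                y: y - brushTip.extent.midY))
            result = placed.composited(over: result)
        }
    }
    return result
}

// Per touch event (create the CIContext once and reuse it):
// let output = stamp(brushTip: tip, along: points, over: canvasImage)
// let rendered = ciContext.createCGImage(output, from: canvasImage.extent)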

Fake colors through camera

In an iOS application I need to recognize colors through the camera, but while analyzing the problem I noticed that different kinds of light make the colors in the captured picture look a little different from the real ones. For example, under a strong neon light, a light blue can look grey.
What is the cause, and what kind of approach could I follow to solve this problem of "fake colors"?
The colors are not fake; they are just different from what you expect them to be. As @Piglet said, this has a lot to do with the physics of light, and white balance may help.
If you want to read more about it look at:
Color Rendering Index
Color Metamerism
Sensitivity Metamerism Index
Color Constancy
These all refer to the physics behind why different illuminations create different colors. There is also the camera color pipeline that contributes its share, so you can also read about white balance and tone mapping...
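If locking the camera's white balance to a known illuminant is acceptable for your use case, AVCaptureDevice exposes that directly. A minimal sketch (the 4500 K value is just an example, not a recommendation):

import AVFoundation

// Sketch: lock the camera's white balance to a fixed color temperature so the
// same physical color yields (roughly) the same RGB values under different lights.
func lockWhiteBalance(of device: AVCaptureDevice, temperature: Float = 4500) {
    guard device.isWhiteBalanceModeSupported(.locked) else { return }
    do {
        try device.lockForConfiguration()
        let tempAndTint = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(
            temperature: temperature, tint: 0)
        var gains = device.deviceWhiteBalanceGains(for: tempAndTint)
        // Clamp each gain to the valid range [1, maxWhiteBalanceGain].
        let maxGain = device.maxWhiteBalanceGain
        gains.redGain   = min(max(gains.redGain, 1), maxGain)
        gains.greenGain = min(max(gains.greenGain, 1), maxGain)
        gains.blueGain  = min(max(gains.blueGain, 1), maxGain)
        device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock white balance: \(error)")
    }
}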

Strange rendering behavior with transparent texture in WebGL

I've been writing a little planet generator using Haxe + Away3D, deploying to HTML5/WebGL, but I'm having a strange issue when rendering my clouds. I have the planet mesh, and then the cloud mesh, slightly bigger, in the same position.
I'm using a perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Now, strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright, wispy and translucent, but the inside is opaque and seemingly rendered differently (each pixel is the same color; only the alpha channel changes).
I realize this is likely something to do with how the code is ultimately compiled/generated by Haxe, but I'm hoping it's something simple like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, I wouldn't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on HTML5.
You should implement it yourself, or use a pre-rendered image. The OpenFL implementation is just a stub:
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array<Point> = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well then, did you upload that image from a ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though that shouldn't work on JS. This has been fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access a byte array.
Away3D itself might also be the cause of this issue, because the backend code is generated from different source files depending on the target you use.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on HTML5, but not on native.
If Away3D is the cause, I'm not sure for now which part of the code is responsible.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend (I think legacy OpenFL doesn't have this problem). OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge! This happened because my code and OpenFL's sprite renderer were actually using the same OpenGL context. I got rid of the problem by manually disabling OpenFL's sprite renderer.

Replace particular color of image in iOS

I want to replace a particular color in an image with another, user-selected color. While replacing the color, I want to maintain the gradient (shading) of the original color; for example, see the attached images.
I have tried to do this with Core Graphics and managed to replace the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is Core Graphics the right way to do it?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in an NSImage that uses the CIColorCube filter:
https://github.com/braginets/NSImage-replace-color
It was inspired by this code for UIImage (which also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
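For context, the color-cube approach works by building a lookup table that remaps colors in one hue range to a target hue while keeping their saturation and brightness, which is what preserves the gradient. A rough sketch (the hue ranges and cube size are arbitrary examples):

import CoreImage
import UIKit

// Sketch: build a CIColorCube lookup table that shifts every color whose hue
// falls inside `sourceHueRange` to `targetHue`, keeping saturation and
// brightness so shading survives. Hues are in 0...1 (UIColor's convention).
func hueReplacementFilter(sourceHueRange: ClosedRange<CGFloat>, targetHue: CGFloat) -> CIFilter? {
    let size = 32
    var cube = [Float]()
    cube.reserveCapacity(size * size * size * 4)

    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                let color = UIColor(red: CGFloat(r) / CGFloat(size - 1),
                                    green: CGFloat(g) / CGFloat(size - 1),
                                    blue: CGFloat(b) / CGFloat(size - 1),
                                    alpha: 1)
                var h: CGFloat = 0, s: CGFloat = 0, v: CGFloat = 0, a: CGFloat = 0
                _ = color.getHue(&h, saturation: &s, brightness: &v, alpha: &a)

                let out = sourceHueRange.contains(h)
                    ? UIColor(hue: targetHue, saturation: s, brightness: v, alpha: 1)
                    : color

                var rOut: CGFloat = 0, gOut: CGFloat = 0, bOut: CGFloat = 0, aOut: CGFloat = 0
                _ = out.getRed(&rOut, green: &gOut, blue: &bOut, alpha: &aOut)
                cube.append(contentsOf: [Float(rOut), Float(gOut), Float(bOut), 1])
            }
        }
    }

    let data = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(size, forKey: "inputCubeDimension")
    filter?.setValue(data, forKey: "inputCubeData")
    return filter
}

// Usage: set the source image via kCIInputImageKey and read outputImage.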
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool, which obviously requires a lot of processing power and complex algorithms (and is still not perfect).
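For what it's worth, the core of Reinhard's "Color Transfer Between Images" is just per-channel statistics matching. A bare-bones sketch, assuming the channels have already been converted into the decorrelated lαβ space described in the paper:

import Foundation

// Sketch of Reinhard-style statistics matching for one channel: shift and
// scale the source so its mean and standard deviation match the target's.
// In the paper this is applied per channel in lαβ space.
func matchStatistics(source: [Float], toTarget target: [Float]) -> [Float] {
    guard !source.isEmpty, !target.isEmpty else { return source }

    func meanAndStdDev(_ values: [Float]) -> (mean: Float, std: Float) {
        let mean = values.reduce(0, +) / Float(values.count)
        let variance = values.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Float(values.count)
        return (mean, sqrt(variance))
    }

    let (sourceMean, sourceStd) = meanAndStdDev(source)
    let (targetMean, targetStd) = meanAndStdDev(target)
    let scale = sourceStd > 0 ? targetStd / sourceStd : 1

    return source.map { ($0 - sourceMean) * scale + targetMean }
}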
