The task is to show a glasses shadow on the user's face. Right now there is no shadow under the glasses. AnchorEntity(.face) is being used as the main anchor for the glasses.
How it works now vs. how it should work: (screenshots omitted)
Limited raytracing options for glass
In RealityKit 2.0 there are very limited raytracing options for transparent and semi-transparent objects (like glasses, vases, or windows). There are no properties that control how raytracing should work. Remember, RealityKit's renderer isn't the same as Arnold in Autodesk Maya, for example, so there are no robust semi-transparent shadows behind glasses in RealityKit. Only the frames cast opaque shadows, and even those are insignificant, barely noticeable.
Solution I
Here is the first solution for this situation: use baked (fake) shadows in the texture of the canonical face mesh. Of course, with this approach you can't "cast" shadows onto the user's real eyes to get a robust shading experience.
Solution II
To shade the user's real eyes and the areas around them in an AR app, you need to create two alpha-channel masks that apply a lower intensity to the eyes and the areas around them. To change the intensity of certain areas of the background video, use the compositing methods (CIFilters) available in the Core Image framework.
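The answer means Core Image on iOS, but the underlying compositing math is simple. Purely as an illustration of the masking idea, here is a sketch in Python/NumPy; the function name, mask names, and strength values are all hypothetical:

    import numpy as np

    def darken_masked_regions(frame, masks, strengths):
        """Lower the intensity of a video frame wherever a mask is set.

        frame:     HxWx3 uint8 background-video frame
        masks:     list of HxW float arrays in [0, 1] (the alpha-channel masks)
        strengths: how much to darken each masked area, e.g. 0.4 = 40% darker
        """
        out = frame.astype(np.float32)
        for mask, strength in zip(masks, strengths):
            # per-pixel multiplier: 1.0 outside the mask, (1 - strength) inside
            out *= (1.0 - strength * mask)[..., None]
        return np.clip(out, 0, 255).astype(np.uint8)

    # hypothetical usage with the two masks the answer describes:
    # shaded = darken_masked_regions(frame, [eyes_mask, around_eyes_mask], [0.5, 0.25])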
I would like to draw an interface with a knob, similar to the "overdrive" (green) one in this photo:
In iOS, what should I use for vector graphics like this? Quartz, OpenGL ES, or something else?
I'm sure it can be done with OpenGL, but I think it's very complicated, so if possible I would rather avoid it and use something more "simple."
It depends to a large extent on how the rest of your GUI is rendered. However, unless you're already using OpenGL, Quartz or Core Animation are probably your best bet.
Looking at the screenshot, it seems you could probably achieve the effect with two image layers, a background (static) and a foreground (rotating).
The background image could have the scale (painted on the pedal) and shadow, then the black knob border and shiny metal middle. Then you can just draw the black tick mark indicator at the appropriate angle, either using Quartz or using a CGLayer and rotating it (especially if you wanted to have part of the button texture rotating with it).
It looks like the knob is smooth, so you don't need to worry about rotating the edges. And assuming the light source is fixed, the highlight on the top-left edge and the shine on the metal middle can be static too.
However, if you wanted to be more realistic, you could try having a third layer with just the shiny middle, and rotating this back and forth slightly to animate the knob middle as the pointer rotates. It doesn't need to go around all the way; maybe 10 degrees or so of variation should help sell the effect.
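Purely as a sketch of that two-layer idea (the question is about iOS, where the layers would be Quartz drawings or Core Animation layers, but the compositing logic is the same everywhere), here is how it could look in Python with Pillow, using made-up asset names:

    from PIL import Image

    # hypothetical assets: background.png holds the pedal face, painted scale,
    # shadow, knob rim and shiny middle; indicator.png is only the black tick
    # mark on a transparent layer, the same size as the background, with the
    # knob centre at the image centre
    background = Image.open("background.png").convert("RGBA")
    indicator = Image.open("indicator.png").convert("RGBA")

    def render_knob(angle_deg: float) -> Image.Image:
        """Composite the static background with the tick rotated in place."""
        # negative angle because Pillow rotates counter-clockwise and knob
        # scales usually read clockwise; pass center=(x, y) to rotate() if
        # the knob is off-centre
        tick = indicator.rotate(-angle_deg, resample=Image.Resampling.BICUBIC)
        frame = background.copy()
        frame.alpha_composite(tick)
        return frame

    render_knob(135).save("knob_135.png")

The optional third layer (the shiny middle, wobbled by a few degrees) would just be one more rotate-and-composite step between the background and the tick.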
I am using OpenCV to process some videos where a user is placing their hands on different parts of a wall. I've selected some regions of interest and I'm currently just using cv2.absdiff on the original image of the wall with no user and the current frame to detect whether the user has their hand in a region of interest by looking at the average pixel difference. If it's above some threshold, I consider that region "activated".
The problem I'm having is that some of the video clips contain lighting and positions that result in the user casting a shadow over certain ROIs, such that they are above the threshold. Is there a good way to filter out shadows when diffing images?
OpenCV has a Mixture-of-Gaussians-based background subtractor which also has an option to account for shadows. You can use this instead of absdiff. MOG can be a bit slow compared to absdiff, though.
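A minimal sketch of that approach, assuming the MOG2 variant and a made-up file name; with detectShadows=True the subtractor marks shadow pixels with the value 127 in the mask it returns, so they are easy to drop:

    import cv2

    cap = cv2.VideoCapture("wall.mp4")  # hypothetical input clip
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # mask values: 0 = background, 127 = shadow, 255 = foreground, so
        # thresholding above 127 keeps real foreground and drops shadows
        _, foreground = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        # ...then test each ROI against `foreground` instead of absdiff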
Alternatively, you can convert to HSV and check that the hue doesn't change.
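A sketch of that idea: a shadow mainly lowers brightness and leaves hue roughly unchanged, so diffing only the hue channel (with its circular wrap-around; OpenCV hue runs 0-179) ignores most shadows. The tolerance value here is an arbitrary assumption:

    import cv2
    import numpy as np

    def hue_change_mask(reference_bgr, frame_bgr, hue_tol=10):
        """Mask of pixels whose hue changed; shadows mostly leave hue alone."""
        ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV)
        cur = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        dh = cv2.absdiff(ref[:, :, 0], cur[:, :, 0]).astype(np.int16)
        dh = np.minimum(dh, 180 - dh)  # hue is circular: 179 and 0 are neighbours
        return (dh > hue_tol).astype(np.uint8) * 255

One caveat: hue is noisy where saturation is low (e.g. a grey wall), so in practice you may want to combine this with a saturation or value check.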
You could first detect shadow regions in the original images, and exclude them from the difference imaging part. This paper provides a simple but effective method to detect shadows in images. They explore a colour space that is invariant to shadows.
I want to crop a rectangular eye region from one face and paste it onto another face, so that in the resulting image the skin color around the eye blends nicely with the face color of the person the eyes are pasted onto. I am able to crop and paste, but I'm having a problem with blending. Currently, the boundaries of the pasted rectangular eye crop are very visible. I want to reduce this effect so that the eyes blend nicely with the face and the resulting image won't look fake.
My suggestion is to do the blending in code. First, you need to create two bitmap contexts so you have the bits of your face and the bits of your new eye.
In the overlap area only, determine the outermost "skin" area by evaluating the colors of the two images, and create a mapping of the areas in both that are "skin". Work from the outermost areas toward the center.
For color evaluation, convert the colors to HSV (or HCL) and look at hue and saturation.
You will need to settle on some criteria for determining what is skin and what is eye.
Once you have defined the outer area (the one that is NOT eye, but skin), you blend. The blend uses more of the original based on its distance from the center of the eye (or its distance to the ellipse defining the eye). Thus, initially, the outer color will be, say, 5% new and 95% original.
As you get closer to the eye, use more of the eye overlay's skin color.
This should produce a really nice image. The biggest problem of course will be getting a good algorithm for separating eye from skin.
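A minimal NumPy sketch of just the distance-weighted blend from the last two steps; it skips the skin-vs-eye classification entirely, the 5%/95% endpoints come from the answer, and all names are hypothetical:

    import numpy as np

    def blend_eye_patch(face, eye_patch, top_left):
        """Paste a rectangular eye patch with a radial alpha fall-off:
        ~95% patch at the centre of the rectangle, ~5% patch at its border."""
        h, w = eye_patch.shape[:2]
        y0, x0 = top_left
        # normalised elliptical distance from the patch centre: 0 at the
        # centre, about 1 at the rectangle edges
        ys, xs = np.mgrid[0:h, 0:w]
        d = np.sqrt(((xs - w / 2) / (w / 2)) ** 2 + ((ys - h / 2) / (h / 2)) ** 2)
        alpha = np.clip(1.0 - d, 0.05, 0.95)[..., None]  # weight of the eye patch
        region = face[y0:y0 + h, x0:x0 + w].astype(np.float64)
        blended = alpha * eye_patch + (1.0 - alpha) * region
        face[y0:y0 + h, x0:x0 + w] = blended.astype(face.dtype)
        return face

    # hypothetical usage, with HxWx3 uint8 images:
    # result = blend_eye_patch(face_img.copy(), eye_img, top_left=(120, 200))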
Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. At the moment I'm using a TMesh, which works well enough to display the particles fast. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates, I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity, and with my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) by TMesh. It also doesn't seem easy to change this behavior simply by inheriting from TMesh; it seems one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would even be possible to control the transparency of a triangle that way. E.g., can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but you shouldn't be able to change transparency via the vertex points in a 3D model; the points are usually just x, y, z coordinates. The vertex data would have an effect on how the sprites are lit if you are using a lighting system, and you can also use the vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with different degrees of alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the nuclear option is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud shading, it gets easier, since Gouraud uses code to fill in texture information.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could do this.
As a learning project I'm attempting to re-create the procedurally generated hills from Tiny Wings using the HTML5 canvas. My goal is to generate textures like the hill in this picture:
Thus far, I have a seamless repeating texture that I've generated. It looks a little like this:
As you can see, this is part way there; however, in Tiny Wings the sinusoid patterns are often rotated at an angle. My question is this: Is it possible to take a seamlessly repeating pattern, rotate it, then clip it to a rectangle and still have a seamlessly repeating pattern?
I originally thought this was trivial: that any rotated repeating pattern clipped to its original dimensions would still repeat. However, my investigations led me to believe this is not the case.
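One way to see why: a pattern with period T on a square grid is, after rotation by θ, periodic only along the rotated grid directions. The original T × T clip rectangle stays seamless only if (T, 0) is still a period, i.e. R_θ(a, b) = (1, 0) for some integers a and b; since rotation preserves length, that forces a² + b² = 1, so θ must be a multiple of 90°. Rotated patterns do still repeat on larger rectangles when tan θ is rational: tan θ = p/q gives a repeat distance of T·√(p² + q²), which is a whole multiple of T only for Pythagorean ratios (e.g. tan θ = 3/4 repeats every 5T). A generic angle never yields a seamless rectangular tile.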
If what I'm describing isn't possible, how would I use a rotated version of the image I have generated as the pattern / fill for a shape? So far the only solution I can think of is to use a canvas clip region. Are there any other ways to accomplish this?
Related Questions:
html5 canvas shapes fill
HTML5 Canvas - Fill circle with image
To achieve what is in the image from Tiny Wings using the shape (texture) you supplied:
Draw your texture shape vertically to the screen (it looks like it has been skewed, not rotated).
Apply a few semi-transparent hill-shaped lines with a wide stroke width to create the Phong-shading effect.
Clip the texture shape with the shape of the hill.
Apply a semi-transparent grunge texture to the whole canvas.
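The question targets the HTML5 canvas (where the skew would be a ctx.transform and the clipping a ctx.clip path); purely to illustrate the tile-skew-clip part of these steps, here is a sketch in Python with Pillow, with all sizes, angles, and file names made up:

    import math
    from PIL import Image, ImageDraw

    W, H = 800, 300
    tile = Image.open("stripes.png").convert("RGB")  # the seamless texture tile

    # tile the texture over the whole area, then skew it (x' = x + k*y),
    # matching the "skewed, not rotated" observation above
    sheet = Image.new("RGB", (W, H))
    for x in range(0, W, tile.width):
        for y in range(0, H, tile.height):
            sheet.paste(tile, (x, y))
    k = math.tan(math.radians(20))  # arbitrary skew angle
    skewed = sheet.transform((W, H), Image.Transform.AFFINE, (1, k, 0, 0, 1, 0))

    # clip the skewed texture to a hill silhouette via an alpha mask
    mask = Image.new("L", (W, H), 0)
    draw = ImageDraw.Draw(mask)
    ridge = [(x, int(H * 0.6 + 60 * math.sin(x / 90))) for x in range(0, W + 1, 8)]
    draw.polygon(ridge + [(W, H), (0, H)], fill=255)

    hill = Image.new("RGBA", (W, H), (0, 0, 0, 0))
    hill.paste(skewed, (0, 0), mask)
    hill.save("hill.png")

The highlight strokes and grunge overlay from the remaining steps would just be further semi-transparent layers composited on top.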