Using an Image As an Opacity Mask - image-processing

The following link shows how to use an image as an opacity mask:
https://wpf.2000things.com/2012/05/14/557-using-an-image-as-an-opacity-mask/
But I am not able to get the intended result. Are you able to?
I think one of the images has to be transparent. If so, please tell me how I can do that.
If you are able to get the intended result, please put your images on OneDrive so that I can download them.
By the way, it looks like an easy job, but it is not.
I have used Paint.NET for designing the transparent images.

Anything behind a transparent region of the opacity mask will not be shown.
Here's a complete example:
Code:
<Window x:Class="WpfApp1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        mc:Ignorable="d" SizeToContent="WidthAndHeight">
    <StackPanel Orientation="Horizontal">
        <Image Source="rocks.png" Margin="10" />
        <Image Source="rocks.png" Margin="10">
            <Image.OpacityMask>
                <ImageBrush ImageSource="mask1.png" />
            </Image.OpacityMask>
        </Image>
    </StackPanel>
</Window>
https://learn.microsoft.com/en-us/dotnet/framework/wpf/graphics-multimedia/opacity-masks-overview#using-an-image-as-an-opacity-mask
I have put in everything you need to understand how it works; open the opacity mask in something like Paint.NET and you should see how to replicate it.

Related

AR.js is difficult for vertically placed image tracking - does AR even make sense?

We have a big mural on a big wall. The requirement is that, when viewing this mural through a handheld device such as a smartphone's camera, image overlays are placed at specific positions within the mural (the mural has parts left out, and the respective cutouts should be displayed on top of it).
Now, I followed the AR.js tutorial on image tracking and it kind of works, but I have the feeling that it is designed almost solely for small, horizontal placements, like putting a car on your desk. The objects I managed to place on top of the mural are impossible to position, even when you add an orientation changer or rotate the objects.
This is what I have so far, tested with different sizes, rotations, and positions:
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
    <title></title>
  </head>
  <body style="margin: 0px; overflow: hidden;">
    <a-scene
      vr-mode-ui="enabled: false;"
      renderer="logarithmicDepthBuffer: true;"
      embedded
      arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;"
    >
      <a-nft
        type="nft"
        url="url"
        smooth="true"
        smoothCount="10"
        smoothTolerance=".01"
        smoothThreshold="5"
        size="1,2"
      >
        <a-plane color="#FFF" height="10" width="10" rotation="45 45 45" position="0 0 0"></a-plane>
      </a-nft>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
It would be interesting to know how the sizing, widths, and heights really function altogether (for instance, the documentation says size is the NFT size in meters, but is that really important? What about the children then?)
So I wondered: do I even need AR? Actually, it would be enough to detect image A in the mural (i.e., in the camera stream) and place another image B on top of it (or replace it), respecting the perspective.
The following is based on my experience.
The idea of creating the AR environment is to mimic the real-world surroundings as well as you can. It is never perfect because of the approximations, but there are ways to help the algorithms. One of them is the size of the marker. When using something like a camera that captures 2D images of the real world, extracting the X and Y coordinates is "simple", but the depth must be deduced from the camera movement and the relative change of the object's position in the 2D image. The marker size is a hint of how far away that particular object should be, so I would say that the size of the marker is indeed important - if you decide to specify it.
Imagine, as a great simplification, two potential candidates for the marker position: one closer to the camera and one farther away, both of which would produce the same 2D image. With a specified size - let's say you set it smaller than the real object - the tracker would settle on the closer one.
Solution?
As far as I know, you don't need to specify the size of the marker - that way everything is left for the AR app to calculate.
But you can also take measurements and enter the correct size for better tracking.
Also, just a side note - please correct me if I'm wrong. Usually, in A-Frame, attribute values are separated with whitespace, not commas. That would mean the size should be size="1 2" and not size="1,2". But don't take my word for it; this would need to be verified.
What about the children?
The a-nft entity is placed where the marker was detected. It behaves like every other element, so its children inherit its placement as their local space. That means every transformation done in the local space is applied on top of the parent's transformation. For example, in A-Frame, position="X Y Z" is expressed in local space.
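For example (a minimal sketch; the marker url and the offsets are made up), a second plane placed 2 units along the marker's local X axis will move together with the marker:
<a-nft type="nft" url="url">
  <!-- at the detected marker's origin -->
  <a-plane color="#FFF" width="1" height="1" position="0 0 0"></a-plane>
  <!-- 2 units along the marker's local X axis; it follows the marker -->
  <a-plane color="#F00" width="1" height="1" position="2 0 0"></a-plane>
</a-nft>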
Regarding overlapping the images
If you are working with a rectangular image that you want to project onto a rectangular wall, then I would say your idea is good enough. I think the most straightforward way would be to detect the 4 corners of the wall and warp the image so the corners fit (a four-corner image warp). That would cover the perspective transformation if you just use rectangular elements. But you still have to somehow detect the mural; see the sketch below.
But you may also want to think ahead: if one day you would like to enhance the experience and add some depth or 3D to the scene, you would need AR after all.
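If you decide to skip AR, here is a minimal sketch of the four-corner warp using OpenCV.js (the corner coordinates, element IDs, and overlay image are assumptions; detecting the mural's corners in the camera frame is left out):
<script src="https://docs.opencv.org/4.x/opencv.js"></script>
<img id="overlay" src="overlay.png" style="display: none">
<canvas id="out" width="640" height="480"></canvas>
<script>
  // Call this once OpenCV.js has finished loading and the four corners of
  // the mural region have been detected in the camera frame, ordered
  // top-left, top-right, bottom-right, bottom-left.
  function warpOverlay(corners) {
    var src = cv.imread(document.getElementById('overlay'));
    // map the flat overlay's corners onto the detected corners
    var srcPts = cv.matFromArray(4, 1, cv.CV_32FC2,
      [0, 0, src.cols, 0, src.cols, src.rows, 0, src.rows]);
    var dstPts = cv.matFromArray(4, 1, cv.CV_32FC2,
      [corners[0].x, corners[0].y, corners[1].x, corners[1].y,
       corners[2].x, corners[2].y, corners[3].x, corners[3].y]);
    // homography that maps the flat overlay into the distorted region
    var M = cv.getPerspectiveTransform(srcPts, dstPts);
    var dst = new cv.Mat();
    cv.warpPerspective(src, dst, M, new cv.Size(640, 480));
    cv.imshow('out', dst);
    src.delete(); dst.delete(); M.delete(); srcPts.delete(); dstPts.delete();
  }
</script>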

Using X3Dom MovieTexture on a sphere doesn't show whole movie

I'm trying to use a movie as a texture on a sphere using X3Dom's MovieTexture. The movie is in equirectangular projection, which would allow the user to look around (similar to Google Street View).
The movie is mp4 or ogv and plays fine on, e.g., a box shape using the example code from the x3dom docs.
However, on the sphere only about 20 percent of the surface is covered by the movie, while the rest of the texture is stretched over the remaining surface.
The relevant code looks like this:
<x3d width='500px' height='400px'>
  <scene>
    <shape>
      <appearance>
        <MovieTexture repeatS="false" repeatT="false" loop='true' url='bigBuckBunny.ogv'></MovieTexture>
      </appearance>
      <sphere></sphere>
    </shape>
  </scene>
</x3d>
It looks like it is supposed to work, but there is currently a bug in x3dom when repeatS="false" is set on the texture.
The problem also occurs with a generic <texture> element that includes a <canvas> or <video> element.
The workaround that worked for me is to use a <canvas> with a power-of-two size, which avoids having to set repeatS="false".
An alternative would be to scale the original video.
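Here is a minimal sketch of the canvas workaround (the element IDs and the x3dom script URL are assumptions): each video frame is stretched onto a 1024x512 canvas, so the texture already has a power-of-two size.
<script src="https://www.x3dom.org/download/x3dom.js"></script>
<video id="vid" src="bigBuckBunny.ogv" autoplay loop muted style="display: none"></video>
<x3d width='500px' height='400px'>
  <scene>
    <shape>
      <appearance>
        <texture>
          <!-- power-of-two size: no repeatS="false" needed -->
          <canvas id="tex" width="1024" height="512"></canvas>
        </texture>
      </appearance>
      <sphere></sphere>
    </shape>
  </scene>
</x3d>
<script>
  var video = document.getElementById('vid');
  var ctx = document.getElementById('tex').getContext('2d');
  (function copyFrame() {
    if (video.readyState >= 2) {
      // stretch the current frame over the whole power-of-two canvas
      ctx.drawImage(video, 0, 0, 1024, 512);
    }
    requestAnimationFrame(copyFrame);
  })();
</script>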

Filling an image with color

I have an image and would like to fill it with color. The image is a flat bubble like the iOS 7 message app bubble. Basically, I want to use it as a shape. How can I do this?
I don't get your question, to be honest :)
If you want a picture filled with color, you can fill the picture in Photoshop and then use it in your code afterwards.
Otherwise, make a circle with a div :) and just use background-color: black; or whatever color you want.
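If you want to keep the bubble image itself as the shape, a sketch using CSS masking might also help (assuming a bubble.png with a transparent background; browser support for mask-image varies):
<div class="bubble"></div>
<style>
  .bubble {
    width: 120px;
    height: 60px;
    background-color: black; /* the fill color */
    /* the opaque pixels of the image define the visible shape */
    -webkit-mask-image: url(bubble.png);
    mask-image: url(bubble.png);
    -webkit-mask-size: 100% 100%;
    mask-size: 100% 100%;
  }
</style>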

What is inside this jQuery UI?

This is a demo slideshow: http://www.pixedelic.com/plugins/camera/. In the transition between the last two images, it splits the picture into a grid and animates the pieces one by one. How can this be achieved?
My thought is that when the transition starts, it creates many div elements; every div uses the same background image with a different background-position, so each div presents a different area of that image. That way it looks like the image was taken apart into a grid. Then .animate() in jQuery, or some CSS3 effect like rotate or scale, generates the slide effect.
Is what I am thinking correct? Does anyone know the mechanism behind that effect?
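For what it's worth, here is a minimal sketch of that mechanism (the image URL, grid size, and timings are made up): every tile shares the same background image, offset via background-position, and the tiles are faded in one after another with .animate():
<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<div id="stage" style="position: relative; width: 600px; height: 400px;"></div>
<script>
  var img = 'photo.jpg', cols = 6, rows = 4, w = 600, h = 400;
  var tw = w / cols, th = h / rows;
  for (var r = 0; r < rows; r++) {
    for (var c = 0; c < cols; c++) {
      $('<div>').css({
        position: 'absolute',
        left: c * tw, top: r * th, width: tw, height: th,
        backgroundImage: 'url(' + img + ')',
        // shift the shared image so each tile shows only its own slice
        backgroundPosition: (-c * tw) + 'px ' + (-r * th) + 'px',
        opacity: 0
      }).appendTo('#stage')
        // stagger the tiles so they appear one by one
        .delay((r * cols + c) * 50).animate({ opacity: 1 }, 300);
    }
  }
</script>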

Google-charts: Transparency not working?

I'm trying to make a Google chart with a transparent background, but it does not seem to work. It just draws a solid white background.
Has anybody succeeded with transparency? Am I doing something wrong?
Thanks in advance!
Info about Google Charts: solid fill
Test URL:
Google-charts example
For a transparent background, use
chf=bg,s,FFFFFF00
Example:
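(A hypothetical illustration - the chart type and data are placeholders; the last two hex digits of the chf color are the alpha channel, and 00 means fully transparent.)
https://chart.googleapis.com/chart?cht=lc&chs=250x100&chd=t:40,60,50,75,45&chf=bg,s,FFFFFF00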
