Remember state of a pixel in HLSL / DirectX 10

I have a little problem and I want to know whether this is a good way to solve it.
My application (a cellular automaton) changes many pixel colors on the GPU.
I swap render targets to get the current back buffer and then feed it to my pixel shader; in the next frame the operation repeats.
My problem is that I want to know whether a pixel changed in the last frame.
I know I can solve it by using one more render target (3 RTs in total) and storing my specific per-pixel data there, but I think that could cause a performance issue. Maybe there is some other way to do it? I am using DirectX 10.
Thanks a lot for any help.

One simple, common way (I'm not sure whether it applies in your case): if you only use three channels for color, you can store this information in the alpha channel.
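To make the layout concrete, here is a minimal CPU-side sketch of the same trick (in the pixel shader you would write the flag into the output's .a and sample it from the previous frame's render target); all names here are illustrative, not from the question:

```cpp
#include <cstdint>
#include <vector>

struct Rgba8 { std::uint8_t r, g, b, a; };

// Write a cell's new color and record in alpha whether it differs
// from the previous value.
void writeCell(std::vector<Rgba8>& buffer, std::size_t index, Rgba8 newColor)
{
    Rgba8& px = buffer[index];
    bool changed = (px.r != newColor.r || px.g != newColor.g || px.b != newColor.b);
    px.r = newColor.r;
    px.g = newColor.g;
    px.b = newColor.b;
    px.a = changed ? 255 : 0;   // alpha doubles as the per-pixel "changed" flag
}

// Next frame, the stored alpha tells you whether the cell changed.
bool changedLastFrame(const std::vector<Rgba8>& buffer, std::size_t index)
{
    return buffer[index].a != 0;
}
```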

Creating text / number objects in ARKit

I want to create some objects (boxes, cylinders, pyramids, it doesn't really matter) which display text / a number on one side / on all of their sides. Short of making individual materials with the numbers drawn on them by hand, is there a simple way to achieve this?
I am using Swift 4 in Xcode.
First of all, please try not to be discouraged. Thank you for reaching out to the ARKit community on Stack :-)
We are here to help each other.
(I do feel your pain… which is why I am trying to help.)
Here is an interesting Stack Overflow page that has helped me with placing items on the sides of objects (like boxes, cylinders, and pyramids). I hope it can help you or others:
SCNBox different colour or texture on each face
Rickster pointed out some other possibilities.
We all learn by sharing what we know.
Smartdog
Depends on what you mean by "by hand". If you want the text displayed on the surface of the geometry, like a texture map, then texture-mapping it is the way to go. If you draw your text into a UIImage, you can set that as the material contents, which is a bit more dynamic than, say, creating a bunch of PNGs that each have a different number on them. Just make sure to choose an image size/resolution that looks good at the size your objects are displayed at.
For anyone lost on the internet trying to find an answer to this: it's stupidly simple. Use SCNText and set it as a node. I just wasted 7 hours of my life trying to make number .dae models position themselves next to each other, because there is no mention of this feature anywhere.
I hope I saved you as much pain as I just endured discovering this.

Non-destructive filter in OpenGL

I am making a photography app for the iPhone and will be using OpenGL to apply effects to the images. I'm a bit of an OpenGL noob and was wondering: is there a way to build a filter (saturation & blur) that can be easily reversed?
To explain: the user takes a picture and applies a blur of 5 and a saturation of 3 (arbitrary values), but then comes back and turns it down to a blur of 3 and a saturation of 2. Would the result be the same as if he had applied a blur of 3 and a saturation of 2 to the original image?
Save the original image and store the filter changes as an array of instructions that you can replay at a later date. This will also give you selective undo ability.
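As a rough sketch of that instruction-replay idea (FilterOp, applyBlur and applySaturation are hypothetical stand-ins for whatever your real filter pipeline provides):

```cpp
#include <vector>

struct Image { /* pixel data lives here */ };

enum class FilterKind { Blur, Saturation };

struct FilterOp {
    FilterKind kind;
    float amount;
};

// Hypothetical filter primitives; plug in your OpenGL passes here.
Image applyBlur(const Image& src, float amount);
Image applySaturation(const Image& src, float amount);

// Re-render from the untouched original every time the settings change,
// so editing stays non-destructive; undo is just removing ops from the list.
Image render(const Image& original, const std::vector<FilterOp>& ops)
{
    Image result = original;
    for (const FilterOp& op : ops) {
        switch (op.kind) {
        case FilterKind::Blur:       result = applyBlur(result, op.amount); break;
        case FilterKind::Saturation: result = applySaturation(result, op.amount); break;
        }
    }
    return result;
}
```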
You cannot reverse filters like blur. Such filters lose some of the information in the image, so it is hard to get it back. See the discussion here.
Using OpenGL (or any other API) you can easily apply filters as "postprocessing" effects. Just render a quad with your texture to some render target, and you will have the transformed image as output.
Here is a link to oZone3D on how to do that.
You can save the created output (but under a different filename!).
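For a rough idea of what such a pass looks like, here is a sketch using desktop OpenGL compatibility-profile calls via GLEW (OpenGL ES needs vertex arrays instead of glBegin/glEnd, but the structure is the same); the function and parameter names are mine:

```cpp
#include <GL/glew.h>

// One post-processing pass: bind the source image as a texture, draw a
// screen-sized quad into a framebuffer object, and the texture attached
// to that FBO then holds the filtered result.
void postProcessPass(GLuint fbo, GLuint sourceTexture, GLuint filterProgram)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // render into the target texture
    glUseProgram(filterProgram);              // fragment shader does the filtering
    glBindTexture(GL_TEXTURE_2D, sourceTexture);

    // Full-screen quad in normalized device coordinates.
    glBegin(GL_QUADS);
    glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
    glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
    glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
    glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
    glEnd();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the default framebuffer
}
```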
Non-destructive editing is API-agnostic; you can implement it with OpenGL, in software, or anything else. All you really need to do is keep the source data aside instead of overwriting it. You can even push the "history" back to disk to avoid bloating RAM and GPU memory.
From the context of your question I assume you are using some of the out-of-the-box, ready-to-use functions Apple provides in their API. In that case you rely on a stock implementation, so you are stuck with its destructive behavior until you come up with something better yourself.

Is it correct that CoreGraphics doesn't handle 24-bit, but only 32-bit images?

I'm rendering an image to my context and then messing with its pixels.
For my purposes I don't need an alpha channel.
The Supported Pixel Formats table tells me that I still have to use a fourth channel, and I get exceptions if I try otherwise.
Does this mean I have to waste a quarter of the memory?
Think of it not as wasting 1/4 of your memory but as gaining valuable hardware acceleration.
There is little point in complaining about the spec; in any case, you are not wasting anything. The image itself can be stored without alpha, while the video hardware and the underlying libraries need the alpha anyway.
Or do you want iOS to remove the alpha? ;-)

Clear single viewport in DirectX 10

I am preparing to start on a C++ DirectX 10 application that will consist of multiple "panels" to display different types of information. I have had some success experimenting with multiple viewports on one RenderTargetView. However, I cannot find a definitive answer regarding how to clear a single viewport at a time. These panels (viewports) in my application will overlap in some areas, so I would like to be able to draw them from "bottom to top", clearing each viewport as I go so the drawing from lower panels doesn't show through on the higher ones. In DirectX 9, it seems that there was a Clear() method of the device object that would clear only the currently set viewport. DirectX 10 uses ClearRenderTargetView(), which clears the entire drawing area, and I cannot find any other option that is equivalent to the way DirectX 9 did it.
Is there a way in DirectX 10 to clear only a viewport/rectangle within the drawing area? One person speculated that the only way may be to draw a quad in that space. It seems that another possibility would be to have a separate RenderTargetView for each panel, but I would like to avoid that as it requires other redundant resources, such as separate depth/stencil buffers (unless that is a misunderstanding on my part).
Any help will be greatly appreciated! Thanks!
I would recommend using one render target per "viewport", and compositing them together using quads for the final view. I know of no way to scissor a clear in DX 10.
Also, according to the article here, "An array of render-target views may be passed into ID3D10Device::OMSetRenderTargets, however all of those render-target views will correspond to a single depth stencil view."
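A rough sketch of that per-panel setup (resource creation is omitted, and DrawTexturedQuad is a hypothetical helper, not a D3D10 API call):

```cpp
#include <d3d10.h>

struct Panel {
    ID3D10RenderTargetView*   rtv;        // panel's own render target
    ID3D10ShaderResourceView* srv;        // same texture, bound for sampling
    D3D10_VIEWPORT            screenRect; // where it lands on the back buffer
};

// Hypothetical helper: binds the SRV and draws a textured quad.
void DrawTexturedQuad(ID3D10Device* device, ID3D10ShaderResourceView* texture);

void RenderFrame(ID3D10Device* device,
                 ID3D10RenderTargetView* backBufferRTV,
                 Panel* panels, size_t panelCount)
{
    const float clearColor[4] = { 0.f, 0.f, 0.f, 0.f };

    // 1. Render each panel into its own target. Clearing here only touches
    //    that panel's texture, which is exactly what ClearRenderTargetView
    //    can't do for a sub-rectangle of a shared target.
    for (size_t i = 0; i < panelCount; ++i) {
        device->OMSetRenderTargets(1, &panels[i].rtv, nullptr);
        device->ClearRenderTargetView(panels[i].rtv, clearColor);
        // ... draw this panel's content ...
    }

    // 2. Composite bottom-to-top onto the back buffer with textured quads.
    device->OMSetRenderTargets(1, &backBufferRTV, nullptr);
    for (size_t i = 0; i < panelCount; ++i) {
        device->RSSetViewports(1, &panels[i].screenRect);
        DrawTexturedQuad(device, panels[i].srv);
    }
}
```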
Hope this helps.
Could you not just create a shader together with the appropriate blend-state settings and a square mesh (or other mesh shape) and use it to clear the area you want? I haven't tried this, but I think it can be done.
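If you try that route, the blend state is the easy part; a sketch (untested, in the spirit of this answer) of an overwrite blend state in D3D10:

```cpp
#include <d3d10.h>

// With blending disabled, drawing a solid-colored quad over the panel's
// rectangle simply overwrites whatever was there, which is effectively
// a per-rectangle clear.
ID3D10BlendState* CreateOverwriteBlendState(ID3D10Device* device)
{
    D3D10_BLEND_DESC desc = {};
    desc.BlendEnable[0]           = FALSE;               // plain overwrite
    desc.SrcBlend                 = D3D10_BLEND_ONE;     // ignored while disabled,
    desc.DestBlend                = D3D10_BLEND_ZERO;    // but kept valid
    desc.BlendOp                  = D3D10_BLEND_OP_ADD;
    desc.SrcBlendAlpha            = D3D10_BLEND_ONE;
    desc.DestBlendAlpha           = D3D10_BLEND_ZERO;
    desc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;
    desc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}
// Bind it with OMSetBlendState(state, nullptr, 0xffffffff), set a viewport
// covering the panel, and draw a full-viewport quad in the clear color.
```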

Recognize moving objects and differentiate them from the background?

I am working on a project where I capture a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; I recommend a smoothing kernel, though) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be fairly static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll have a grayscale image containing the object that moved. The object has to differ from the background color, or it will disappear...
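A minimal sketch of that frame-differencing step on 8-bit grayscale frames (a real pipeline would usually smooth the frames first and threshold the result, as suggested above):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Absolute per-pixel difference between two same-sized grayscale frames.
std::vector<std::uint8_t> absDiff(const std::vector<std::uint8_t>& prev,
                                  const std::vector<std::uint8_t>& curr)
{
    std::vector<std::uint8_t> diff(curr.size());
    for (std::size_t i = 0; i < curr.size(); ++i)
        diff[i] = static_cast<std::uint8_t>(std::abs(int(curr[i]) - int(prev[i])));
    return diff;
}

// Threshold the difference image: anything above `thresh` counts as "moving".
std::vector<std::uint8_t> motionMask(const std::vector<std::uint8_t>& diff,
                                     std::uint8_t thresh)
{
    std::vector<std::uint8_t> mask(diff.size());
    for (std::size_t i = 0; i < diff.size(); ++i)
        mask[i] = (diff[i] > thresh) ? 255 : 0;
    return mask;
}
```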
