Vulkan framebuffer conflict on attachment image usage

When I try to create a framebuffer via vkCreateFramebuffer, I get an error in my debug report callback about a conflict in the VkFramebufferCreateInfo attachments. It says that my image views have a conflict in their image usages, which I don't expect, because one of them is supposed to be a color attachment and the other a depth-stencil attachment.
The exact error message is:
Framebuffer Attachment(0) conflicts with image's IMAGE_USAGE flags (VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT).
I have also looked at other examples, and they do exactly the same thing.
My source code (Rust):
https://github.com/Hossein-Noroozpour/vulkust/blob/master/src/vulkan/swapchain.rs#L218

The usage of the images in the framebuffer is defined by the render pass, which means that if attachment 0 is used as a depth/stencil attachment in the render pass, then the image needs to have been created with VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT.
That means you need to double-check the subpass descriptions you pass to render pass creation and make sure you haven't accidentally used attachment 0 as the depth attachment.
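For reference, here is a minimal sketch (plain C++ against the Vulkan C API, not your Rust code) of a subpass description in which attachment 0 is the color image and attachment 1 is the depth image; the attachment indices are the important part:

VkAttachmentReference color_reference = {};
color_reference.attachment = 0;   // attachment 0 = swapchain color image
color_reference.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

VkAttachmentReference depth_reference = {};
depth_reference.attachment = 1;   // attachment 1 = depth image, not 0
depth_reference.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;

VkSubpassDescription subpass = {};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments = &color_reference;
subpass.pDepthStencilAttachment = &depth_reference;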

Well, I can explain how the error works. If in doubt, it is useful to dig into the validation layers' source code:
https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/tree/master/layers
The error is issued in vkCreateFramebuffer().
It checks the provided render pass and its subpasses against the image views.
If a VkImageView is used at least once as an Input Attachment then it expects the VkImage of the VkImageView to have been created with VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT.
Similarly for Color Attachment with VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT, and DS Attachment with VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT.
Check that you meet these requirements.
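For example, here is a minimal sketch (C++; the format and extent are placeholders, and 'device' is assumed to be your VkDevice) of creating a depth image whose usage flag matches its role in the render pass:

VkImageCreateInfo depth_info = {};
depth_info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
depth_info.imageType = VK_IMAGE_TYPE_2D;
depth_info.format = VK_FORMAT_D24_UNORM_S8_UINT;                  // placeholder format
depth_info.extent.width = 1280;                                   // placeholder extent
depth_info.extent.height = 720;
depth_info.extent.depth = 1;
depth_info.mipLevels = 1;
depth_info.arrayLayers = 1;
depth_info.samples = VK_SAMPLE_COUNT_1_BIT;
depth_info.tiling = VK_IMAGE_TILING_OPTIMAL;
depth_info.usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;   // must match how the render pass uses the attachment
depth_info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
depth_info.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

VkImage depth_image = VK_NULL_HANDLE;
vkCreateImage(device, &depth_info, NULL, &depth_image);           // 'device' is your VkDevice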
Layer bugs are a thing too. If you are running the latest ones and confirm a bug, then reports belong here:
https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/issues
UPDATE (after seeing your source code):
I fail to see where you set depth_reference.attachment. You preinitialize it to 0, which could mean you are assigning the color attachment (index 0) as the depth attachment of the subpass.

Related

Edit Photos via Photoshop on a server

I want to create a web app where a user enters certain data via a form and then receives a custom rendered image. The image comes from a smart object in a PSD. It's kind of like a mock-up, which definitely requires some Photoshop filters to be rendered properly.
This should all happen in real time, and from my understanding it should be doable, since rendering a single image doesn't need much computing power.
I've done some research and haven't really found a solution that matches my problem. Is it necessary to run Photoshop on a server, remotely run a Photoshop script, and then upload the generated image somewhere else?
I've used the After Effects plugin Templater by Dataclay in the past, which offers similar functionality but for video.
Looking forward to hearing your ideas.
Thanks
You can use the Dataclay plugin to handle still image exports out of After Effects. Make a single-frame duration composition in After Effects and rig the layers with the Templater plugin. Then use the PNG Sequence output module to render out a single frame.
From Dataclay's forums:
Exporting
A few extra steps are required to correctly render a project file as a PNG sequence using Templater. By default, a file rendered as a PNG sequence will have the frame number appended to the end of the file name, i.e.:
filename.png00000, filename.png00001, filename.png00002, etc.
In order to designate where in the filename the frame number should be added, we’ll need to use the output column. First, add a column named output to your data source. Next, add a filename with a set of brackets with five # signs to designate where the frame numbering should be added. For example:
filename[#####] would result in filename00001.png
or
[#####]filename would result in 00001filename.png

How to include a photo in Moderncv Casual

Well, to start with, I don't know much about LaTeX. I am failing to include a picture in the document using "moderncv casual". A lot of the CV and cover letter templates use:
\photo[64pt][0.4pt]{filename}
What's the deal with this? Isn't it just a matter of typing the picture's filename, compiling, and having the picture added to the document?
That's exactly it. The \photo macro is set up in such a way that it stores your input and makes it part of the CV title (set with \makecvtitle).
The reasoning behind this is to provide the end user with a generic command that captures a picture. However, depending on the template used, this picture may appear on the left/right/middle (or wherever). The generic input abstracts this placement from the rest of the code.
As for the command \photo specifically: it is defined inside the class file moderncv.cls as:
\NewDocumentCommand{\photo}{O{64pt}O{0.4pt}m}
{\def\@photowidth{#1}\def\@photoframewidth{#2}\def\@photo{#3}}
An input like
\photo[64pt][0.4pt]{filename}
defines the photo to be kept in \@photo (it references the image file filename, with an image extension), to have a width of 64pt (stored in \@photowidth) and a frame width of 0.4pt (stored in \@photoframewidth).
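For a complete minimal example, here is a sketch of how \photo is typically used with moderncv (the class options, file name and sizes are just illustrative):

\documentclass[11pt,a4paper,sans]{moderncv}
\moderncvstyle{casual}
\moderncvcolor{blue}
\usepackage[scale=0.75]{geometry}

\name{Jane}{Doe}
\photo[64pt][0.4pt]{picture}   % picture.jpg or picture.png next to the .tex file

\begin{document}
\makecvtitle   % the stored photo is placed here, as part of the CV title
\end{document}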

What is the result of sample with an uninitialized texture in DirectX?

OpenGL has a description of this behaviour, but what about DirectX?
My guess is that the sample result is float4(0, 0, 0, 0), or that the driver crashes. Either way, it is just a guess, or at best a partial answer if I only test it on my own hardware. I want to make it clear.
By an uninitialized texture I mean one created without passing any initial data to ID3D11Device::CreateTexture2D(), and never mapped or updated afterwards.
I would like a description for the DirectX 11 version if possible.
From the CreateTexture2D documentation (that you linked):
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
Undefined can mean anything; the driver determines the exact behavior. It's unlikely that it will crash, but as with any undefined behavior, anything is possible. A sample from this texture is certainly not guaranteed to be float4(0,0,0,0), as it is with an unbound texture.
It's analogous to accessing uninitialized system memory: the contents might be leftover data written by previous operations that had used the same memory (depending on the allocator's behavior). If you want consistent behavior, I would suggest either using an unbound texture instead, or initializing the contents.
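To make that concrete, here is a minimal sketch (C++, Direct3D 11; 'device', 'context' and 'pixelData' are assumed to exist from earlier setup) of a texture created without initial data and then given defined contents before it is sampled:

D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 256;
desc.Height = 256;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
device->CreateTexture2D(&desc, nullptr, &tex);   // pInitialData == nullptr -> contents are undefined

// Write defined contents some other way before any shader reads the texture, e.g.:
context->UpdateSubresource(tex, 0, nullptr, pixelData, 256 * 4, 0);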
Yes, you get 0 for all values; it's the same as if you sample from a null texture.
I can't remember where I read it, but I know it's in the MSDN docs. I also happen to do this from time to time, because I allocate most of the textures/buffers/views at the start and fill them in as I go, and if I sample from a null or uninitialized resource it just reads 0.

Multisampling in SharpDX

So I have recently started using SharpDX and have stumbled into a problem: I have no idea how to get SharpDX to multisample. I have found two related things. You can specify a SampleDescription when creating the SwapChainDescription, but any input other than (1, 0) throws a Wrong Parameter exception.
The other thing I found was SamplerState, which I put on my pixel shader, but it didn't do anything. I played around a lot with the parameters, but there was no visible change whatsoever.
I am sure I am missing something, but without any previous DirectX knowledge I don't really know what to look for.
This will come in handy in your case:
int maxsamples = Device.MultisampleCountMaximum;
int res = device.CheckMultisampleQualityLevels(SharpDX.DXGI.Format.R8G8B8A8_UNorm, samplecount);  // samplecount = the sample count you want to use
If res is 0, then this sample count is not supported.
Also please note that some options are not compatible, so if you create your SwapChain with:
sd.Usage = (other usages) | Usage.UnorderedAccess;
You are not allowed to use multisampling.
Another very useful technique for spotting the cause of these errors:
Create your device with DeviceCreationFlags.Debug
In your startup project properties (debug section), tick "Enable native code debugging".
Any API call that fails will give you an error description in the debug output window.
I had the same problem and could not get multisampling to work until I enabled the debugging and got a good hint (I really wish I had done this hours earlier and saved a whole lot of testing!).
Initially I read somewhere that the DepthStencilBuffer had to have the same SampleDescription as the render texture, but I'm not so sure, as a quick test just showed that it appears to work without this.
The thing for me was to create the DepthStencilView with a DepthStencilViewDescription that has "Dimension = DepthStencilViewDimension.Texture2DMultisampled".
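For reference, here is a sketch of the underlying Direct3D 11 structures (SharpDX mirrors them almost one-to-one; the format and sample count are placeholders, and 'device', 'backBufferWidth' and 'backBufferHeight' are assumed to exist):

D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width = backBufferWidth;                 // must match the render target
depthDesc.Height = backBufferHeight;
depthDesc.MipLevels = 1;
depthDesc.ArraySize = 1;
depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = 4;                    // same count/quality as the swap chain
depthDesc.SampleDesc.Quality = 0;
depthDesc.Usage = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

ID3D11Texture2D* depthTex = nullptr;
device->CreateTexture2D(&depthDesc, nullptr, &depthTex);

D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = depthDesc.Format;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DMS;   // multisampled view dimension
ID3D11DepthStencilView* dsv = nullptr;
device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);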
Just a heads-up for when you are doing multisampling.
When you set your render target, if you pass both a render target and a depth stencil, you need to ensure they both have the same multisampling level.
So, for rendering to the backbuffer you have defined with MSAA, you will need to create a depth buffer with the same MSAA level.
BUT, if you have a render target that will be a texture fed back into the pipeline, you can define a non-MSAA texture and a non-MSAA depth buffer, which is handy, as you can use a sampler on the texture (you can't use a normal sampler on an MSAA resource texture).
Most of this info may not be new to you.

What is CvBlobTrackerAuto class in OpenCV?

I am trying to understand the blobtrack.cpp code provided as a sample with OpenCV. In this code, a class named CvBlobTrackerAuto is used. I tried to find some documentation about this class, but what I found does not provide a detailed explanation.
I am particularly interested in the
CvBlobTrackerAuto::Process(IplImage *pImg, IplImage *pMask = NULL) function. What does it do, and what is the purpose of the mask used here?
Thank you in advance
I've been working with CvBlobTrackerAuto in the last few weeks. Here are some of things I have figured out.
CvBlobTrackerAuto::Process is used to process the last captured image in order to update the tracking information (blob ids and positions). Actually, CvBlobTrackerAuto is an abstract class since it doesn't provide an implementation for CvBlobTrackerAuto::Process. The only concrete implementation there is (as far as I can tell) is CvBlobTrackerAuto1, which can be found in blobtrackingauto.cpp.
What CvBlobTrackerAuto1::Process does is to implement the following pipeline:
Foreground detection: this produces a binary mask corresponding to the foreground.
Blob tracking: updates the position of blobs. It may use mean shift, particle filters or a combination of these.
Post processing: (I'm not sure of what this section does).
Blob deletion: it is "experimental and simple" according to a comment in there. It deletes blobs which have been too small or near the image borders in the last frames.
Blob detection: detects new blobs. See enteringblobdetection.cpp.
Trajectory generation: (not sure of what it does).
Track analysis: (not sure of what it does. But I do remember having read the code and deciding that it had no influence on blob tracking, so I disabled it.)
In this particular implementation of CvBlobTrackerAuto::Process, the pMask parameter is used for nothing at all. It has a default value of NULL and it is assigned to a variable once, only to be overwritten some lines later.
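For completeness, here is a rough usage sketch (C++; the header name and the fields of the parameter struct vary between OpenCV versions, so treat the details as illustrative rather than exact):

#include <opencv2/legacy/blobtrack.hpp>     // older releases ship the same declarations in cvaux.h

CvBlobTrackerAutoParam1 param = {};          // modules (pFG, pBD, pBT, ...) set up as in the sample
CvBlobTrackerAuto* tracker = cvCreateBlobTrackerAuto1(&param);

// For every captured frame:
IplImage* frame = cvQueryFrame(capture);     // 'capture' is a CvCapture* opened earlier
tracker->Process(frame, NULL);               // NULL mask: the tracker runs its own foreground detection
for (int i = 0; i < tracker->GetBlobNum(); ++i)
{
    CvBlob* blob = tracker->GetBlob(i);      // blob->x, blob->y, blob->w, blob->h, blob->ID
    // ... draw or log the blob here ...
}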
The OpenCV sample, found in samples/c/blobtrack_sample.cpp, is built around this CvBlobTrackerAuto1 class and provides different options for each module in the pipeline.
I hope it helps.
I was directed to this link when I posted the same question in the OpenCV mailing group. The doc explains the OpenCV blob tracker and its modules.
Hope this helps anyone interested.
