Managed Direct3D: Lock entire Vertex Buffer

I have a Mesh object returned from Mesh::TextFromFont and I am trying to set the color of each vertex. I am calling the vertex buffer's Lock function like this:
mesh->VertexBuffer->Lock(0, LockFlags::None);
However, this call throws an exception. Another overload of Lock seems to work fine; however, it requires me to pass the rank of the returned vertex array. What is the solution here? How do I lock the vertex buffer of a mesh returned from TextFromFont?

The answer probably lies here:
"When using this method to retrieve an array from a resource that was not created with a type, always use the overload that accepts a type."
In true MSDN fashion, there is no further explanation, but the implication seems to be that because the mesh returned by TextFromFont was not created with a managed vertex type, you need to use the Lock overload that takes a Type (and the ranks of the returned array) so the runtime knows what kind of array to hand back.


Copy framebuffer data from one WebGLRenderingContext to another?

Please refer to the background section below if the following does not make much sense; I omitted most of the context to keep the problem as clear as possible.
I have two WebGLRenderingContexts with the following traits:
WebGLRenderingContext: InputGL (Allows read and write operations on its framebuffers.)
WebGLRenderingContext: OutputGL (Allows only write operations on its framebuffers.)
GOAL: Superimpose InputGL's renders onto OutputGL's renders periodically within 33ms (30fps) on mobile.
Both the InputGL's and OutputGL's framebuffers get drawn to from separate processes. Both are available (and with complete framebuffers) within one single window.requestAnimationFrame callback. As InputGL requires read operations, and OutputGL only supports write operations, InputGL and OutputGL cannot be merged into one WebGLRenderingContext.
Therefore, I would like to copy the framebuffer content from InputGL to OutputGL in every window.requestAnimationFrame callback. This allows me to keep read/write supported on InputGL and only use write on OutputGL. Neither of them has a (regular) canvas attached, so a canvas overlay is out of the question. I have the following code:
// customOutputGLFramebuffer is the WebXR API's extended framebuffer which does not allow read operations
let fbo = InputGL.createFramebuffer();
InputGL.bindFramebuffer(InputGL.FRAMEBUFFER, fbo);
// TODO: Somehow get fbo data into OutputGL (I guess?)
OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
// Drawing to OutputGL here works, and it gets drawn on top of the customOutputGLFramebuffer
I am not sure if this requires binding in some particular order, or some kind of texture manipulation; any help with this would be greatly appreciated.
Background: I am experimenting with Unity WebGL in combination with the unreleased WebXR API. WebXR uses its own, modified WebGLRenderingContext which disallows reading from its buffers (due to privacy concerns). However, Unity WebGL requires reading from its buffers. Having both operate on the same WebGLRenderingContext gives errors on Unity's read operations, which means they need to be kept separate. The idea is to periodically superimpose Unity's framebuffer data onto WebXR's framebuffers.
WebGL2 is also supported in case this is required.
You cannot share resources across contexts, period.
The best you can do is use one context as a source for the other via texImage2D.
For example, if the source context is using a canvas, draw the framebuffer to the canvas and then:
destContext.texImage2D(......., srcContext.canvas);
If the source is a context on an OffscreenCanvas, use transferToImageBitmap and then pass the resulting ImageBitmap to texImage2D.
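For illustration, here is a minimal sketch of that canvas-as-source approach, assuming both contexts actually have canvases (not a given in the question) and that InputGL's drawing buffer still holds the current frame (same requestAnimationFrame callback, or preserveDrawingBuffer: true). The texture setup is plain WebGL; the full-screen quad drawing on OutputGL is only hinted at:
const tex = OutputGL.createTexture();
OutputGL.bindTexture(OutputGL.TEXTURE_2D, tex);
// No mipmaps, so use a non-mipmapped min filter and clamp the edges.
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_MIN_FILTER, OutputGL.LINEAR);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_S, OutputGL.CLAMP_TO_EDGE);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_T, OutputGL.CLAMP_TO_EDGE);

function copyFrame() {
  // Re-upload the source canvas into OutputGL every frame. If InputGL sits on an
  // OffscreenCanvas, InputGL.canvas.transferToImageBitmap() would be the source here instead.
  OutputGL.bindTexture(OutputGL.TEXTURE_2D, tex);
  OutputGL.texImage2D(OutputGL.TEXTURE_2D, 0, OutputGL.RGBA,
                      OutputGL.RGBA, OutputGL.UNSIGNED_BYTE, InputGL.canvas);

  // ...then bind customOutputGLFramebuffer and draw a full-screen textured quad
  // with OutputGL's own shader program (omitted here).
  requestAnimationFrame(copyFrame);
}
requestAnimationFrame(copyFrame);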

How to do glDiscardFramebufferEXT in Metal

I need to port the glDiscardFramebufferEXT() OpenGL method to Metal and I haven't found anything useful on the internet yet. How can I do that?
Its functionality is in MTLRenderPassDescriptor:
A MTLRenderPassDescriptor object contains a collection of attachments that are the rendering destination for pixels generated by a rendering pass. The MTLRenderPassDescriptor class is also used to set the destination buffer for visibility information generated by a rendering pass.
See especially the {color/depth}Attachments' storeAction and loadAction members.
MTLStoreActionDontCare tells Metal it may throw away the attachment's contents when the pass ends, and MTLLoadActionDontCare tells it the existing contents don't need to be preserved or initialized when the pass begins; together they cover what glDiscardFramebufferEXT did.

Where is the swapBuffer OpenGL call in WebGL

I noticed that SwapBuffers functionality is not there in WebGL. If that is the case, how do we change state across draw calls and draw multiple objects in WebGL, and at what point is swapBuffers called internally by WebGL?
First off, there is no SwapBuffers in OpenGL. SwapBuffers is a platform-specific thing that is not part of OpenGL.
In any case though, the equivalent of SwapBuffers is implicit in WebGL. If you call any WebGL functions that affect the drawing buffer (e.g. drawArrays, drawElements, clear, ...), then the next time the browser composites the page it will effectively "swapbuffers".
Note that whether it actually "swaps" or "copies" is up to the browser. For example, if antialiasing is enabled (the default), then internally the browser will effectively do a "copy", or rather a "blit", that converts the internal multisample buffer into something that can actually be displayed.
Also note that because the swap is implicit, WebGL will clear the drawing buffer before the next render command. This is to make the behavior consistent regardless of whether the browser decides to swap or copy internally.
You can force a copy instead of a swap (and avoid the clearing) by passing {preserveDrawingBuffer: true} to getContext as the 2nd parameter, but of course at the expense of disallowing a swap.
Also, it's important to be aware that the swap itself, and when it happens, is semi-undefined. In other words, calling gl.drawXXX or gl.clear will tell the browser to swap/copy at the next composite, but between that time and the time the browser actually composites, other events could get processed. The swap won't happen until your current event exits, for example a requestAnimationFrame event, but between the time your event exits and the time the browser composites, more events could happen (like, say, mousemove).
The point of all that is that if you don't use {preserveDrawingBuffer: true}, you should always do all of your drawing during one event, usually requestAnimationFrame, otherwise you might get inconsistent results.
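A small sketch of the two options described above (the canvas lookup is just for illustration):
const canvas = document.querySelector('canvas');

// Default behaviour: the drawing buffer may be cleared once the browser has composited,
// so do all of a frame's drawing inside a single event/callback.
const gl = canvas.getContext('webgl');
function render() {
  gl.clearColor(0, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  // ...all draw calls for this frame go here...
  requestAnimationFrame(render);  // the implicit "swap" happens after this callback returns
}
requestAnimationFrame(render);

// Alternative: keep the buffer contents between composites (forces a copy rather
// than a swap, and skips the implicit clear), at some performance cost:
// const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });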
AFAIK, the swap buffers call usually doesn't change any visible GL state. There are plenty of GL calls to change that state between draw calls, though. As for buffer swapping, the browser does that for you sometime after a callback with rendering code returns (and yes, there's no direct control over when this will actually happen).

OpenLayers 3 - check if a feature is within bounds of extent

I have a list of features and a vector layer and I need to know, whether each feature is within the bounds of the view of the map or not.
I'm using OpenLayers v3.9.0, and in the corresponding documentation there is a function containsExtent() (link) which takes an extent and returns a boolean. It seems to be exactly the function I'm looking for. But an error is thrown saying that containsExtent is not a function.
Uncaught TypeError: extent.containsExtent is not a function
code snippet:
// someVectorSource is of type ol.source.Vector
// allMyFeatures is an array of features of type ol.Feature
var extent = someVectorSource.getExtent();
_.each(allMyFeatures, function(feature) {
  if (extent.containsExtent(feature.getGeometry().getExtent())) {
    // do something
  }
});
What is the problem here?
If there is a better way to get only those features which are within the extent, in a single call without iterating through the list, that would be even better.
The correct syntax is:
ol.extent.containsExtent(extent, feature.getGeometry().getExtent())
If you look closer at the doc page, you'll see that the method is a static one, not part of an ol.Extent object. FYI, there's no actual ol.Extent object in ol3; it's just an array of 4 numbers. I think ol.Extent is just a reference for the compiler.
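For illustration, the loop from the question with the static helper applied (same variables as above):
var extent = someVectorSource.getExtent();
_.each(allMyFeatures, function(feature) {
  if (ol.extent.containsExtent(extent, feature.getGeometry().getExtent())) {
    // do something
  }
});
If intersection rather than strict containment is enough, someVectorSource.getFeaturesInExtent(extent) should also give you the candidate features in a single call, though it works on the features' extents rather than their exact geometries.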
HTH

Parsing variable length descriptors from a byte stream and acting on their type

I'm reading from a byte stream that contains a series of variable-length descriptors, which I'm representing as various structs/classes in my code. Each descriptor has a fixed-length header in common with all the other descriptors, which is used to identify its type.
Is there an appropriate model or pattern I can use to best parse and represent each descriptor, and then perform an appropriate action depending on its type?
I've written lots of these types of parser.
I recommend that you read the fixed-length header, and then dispatch to the correct constructor for your structures using a simple switch-case, passing the fixed header and the stream to that constructor so that it can consume the variable part of the stream.
This is a common problem in file parsing. Commonly, you read the known part of the descriptor (which luckily is fixed-length in this case, but isn't always), and branch there. I generally use a strategy pattern here, since I expect the system to be broadly flexible - but a straight switch or factory may work as well.
The other question is: do you control and trust the downstream code? Meaning: the factory / strategy implementation? If you do, then you can just give them the stream and the number of bytes you expect them to consume (perhaps putting some debug assertions in place, to verify that they do read exactly the right amount).
If you can't trust the factory/strategy implementation (perhaps you allow the user-code to use custom deserializers), then I would construct a wrapper on top of the stream (example: SubStream from protobuf-net), that only allows the expected number of bytes to be consumed (reporting EOF afterwards), and doesn't allow seek/etc operations outside of this block. I would also have runtime checks (even in release builds) that enough data has been consumed - but in this case I would probably just read past any unread data - i.e. if we expected the downstream code to consume 20 bytes, but it only read 12, then skip the next 8 and read our next descriptor.
To expand on that; one strategy design here might have something like:
interface ISerializer {
    object Deserialize(Stream source, int bytes);
    void Serialize(Stream destination, object value);
}
You might build a dictionary (or just a list if the number is small) of such serializers per expected marker, resolve your serializer, then invoke the Deserialize method. If you don't recognise the marker, then (one of):
skip the given number of bytes
throw an error
store the extra bytes in a buffer somewhere (allowing for round-trip of unexpected data)
As a side-note to the above - this approach (strategy) is useful if the system is determined at runtime, either via reflection or via a runtime DSL (etc). If the system is entirely predictable at compile-time (because it doesn't change, or because you are using code-generation), then a straight switch approach may be more appropriate - and you probably don't need any extra interfaces, since you can inject the appropriate code directly.
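To make the dictionary-dispatch idea concrete, here is a small sketch in JavaScript (purely illustrative: the two-byte type/length header is made up, and parseFoo/parseBar stand in for your per-descriptor constructors):
const parsers = new Map([
  [0x01, parseFoo],
  [0x02, parseBar],
]);

function readDescriptors(bytes) {  // bytes: a Uint8Array holding whole descriptors
  const out = [];
  let offset = 0;
  while (offset + 2 <= bytes.length) {
    const type = bytes[offset];        // fixed header: type...
    const length = bytes[offset + 1];  // ...and declared payload length
    const payload = bytes.subarray(offset + 2, offset + 2 + length);
    const parse = parsers.get(type);
    if (parse) {
      out.push(parse(payload));
    }
    // Unknown marker: fall through and skip the payload (option 1 above).
    // Always advance by the declared length so the stream stays in sync.
    offset += 2 + length;
  }
  return out;
}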
One key thing to remember: if you're reading from the stream and do not detect a valid header/message, throw away only the first byte before trying again. Many times I've seen a whole packet or message get thrown away instead, which can result in valid data being lost.
This sounds like it might be a job for the Factory Method or perhaps Abstract Factory. Based on the header you choose which factory method to call, and that returns an object of the relevant type.
Whether this is better than simply adding constructors to a switch statement depends on the complexity and the uniformity of the objects you're creating.
I would suggest:
fifo = Fifo.new
while (fd is readable) {
    read everything off the fd and stick it into fifo
    if (the front of the fifo has a valid header and
        the fifo is big enough for the payload) {
        dispatch constructor, remove bytes from fifo
    }
}
With this method:
you can do some error checking for bad payloads, and potentially throw bad data away
data is not waiting on the fd's read buffer (can be an issue for large payloads)
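A rough JavaScript rendering of that loop, assuming a Node.js socket (or any stream emitting 'data' events) plays the role of the fd, the same made-up two-byte header as in the earlier sketch, and a hypothetical makeDescriptor doing the constructor dispatch:
const descriptors = [];
let fifo = Buffer.alloc(0);

socket.on('data', (chunk) => {
  fifo = Buffer.concat([fifo, chunk]);        // read everything off the fd into the fifo
  while (fifo.length >= 2) {                  // the front of the fifo holds a full header
    const type = fifo[0];
    const length = fifo[1];
    if (fifo.length < 2 + length) break;      // fifo not big enough for the payload yet
    const payload = fifo.subarray(2, 2 + length);
    descriptors.push(makeDescriptor(type, payload));  // dispatch constructor
    fifo = fifo.subarray(2 + length);         // remove the consumed bytes from the fifo
  }
});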
If you'd like it to be nice OO, you can use the visitor pattern in an object hierarchy. The way I've done it was like this (for identifying packets captured off the network, pretty much the same thing you might need):
huge object hierarchy, with one parent class
each class has a static constructor that registers with its parent, so the parent knows about its direct children (this was C++; I think this step is not needed in languages with good reflection support)
each class had a static constructor method that got the remaining part of the byte stream and, based on that, decided whether it was its responsibility to handle that data or not
When a packet came in, I simply passed it to the static constructor method of the main parent class (called Packet), which in turn checked whether any of its children were responsible for handling that packet, and this went on recursively until one class at the bottom of the hierarchy returned the instantiated object.
Each of the static "constructor" methods cut its own header off the byte stream and passed only the payload down to its children.
The upside of this approach is that you can add new types anywhere in the object hierarchy WITHOUT needing to see/change ANY other class. It worked remarkably well for packets; it went like this:
Packet
  EthernetPacket
    IPPacket
      UDPPacket, TCPPacket, ICMPPacket
      ...
I hope you can see the idea.
