TTexture can have the style TTextureStyle.RenderTarget.
The Delphi documentation says:
Specifies the rendering target for this canvas object. Use the value
of RenderTarget to access the specific properties of the target on
which the canvas is drawn. RenderTarget returns a Direct2D interface
that can be used directly.
This is a little obscure. Can someone explain exactly what TTextureStyle.RenderTarget is for, and under which conditions we need to use it?
To make the question useful for everyone, it would also help if someone could explain the other values (and their purpose) that TTextureStyle can take:
TTextureStyle = (MipMaps, Dynamic, RenderTarget, Volatile);
I've been trying to modify the StrokeStyle for the TDirect2DCanvas.Pen in C++Builder.
The documentation says this about the property:
Determines the stroke style in which the pen draws lines.
Use StrokeStyle to specify a more complex style in which the lines are drawn. StrokeStyle accepts an interface that provides a set of methods, each returning a certain drawing option.
The documentation gives no examples. When I try to set this property to anything, I get a compile error saying "cannot write a property that has no write specifiers" (it looks like this property is only set up to read the StrokeStyle, even though the documentation seems to indicate otherwise).
My desire here is to get lines to be rendered with rounded ends rather than the flat ends that it seems to default to when using TDirect2DCanvas. Does anybody know how to accomplish this?
I'm using C++Builder 10.2 and the clang compiler. I'm trying to use TDirect2DCanvas rather than the regular TCanvas because it can draw anti-aliased lines.
The documentation is misleading. The TDirect2DPen::StrokeStyle property is indeed read-only, as it represents the current Direct2D ID2D1StrokeStyle object, as created internally by TDirect2DPen. TDirect2DPen does not provide any way to customize any of the stroke settings other than its dashStyle.
The only way to affect the TDirect2DPen::StrokeStyle is to set the TDirect2DPen::Style property. Setting the Style releases the current ID2D1StrokeStyle; then, if the Style is set to a value other than psSolid, psClear, or psInsideFrame, TDirect2DPen calls ID2D1Factory::CreateStrokeStyle() to create a new ID2D1StrokeStyle with the following properties:
startCap = D2D1_CAP_STYLE_FLAT
endCap = D2D1_CAP_STYLE_FLAT
dashCap = D2D1_CAP_STYLE_ROUND
lineJoin = D2D1_LINE_JOIN_ROUND
miterLimit = 10
dashStyle = one of the following, depending on the TDirect2DPen.Style:
D2D1_DASH_STYLE_DASH
D2D1_DASH_STYLE_DOT
D2D1_DASH_STYLE_DASH_DOT
D2D1_DASH_STYLE_DASH_DOT_DOT
dashOffset = 0
dashes = nil
dashesCount = 0
This behavior is hard-coded and cannot be changed.
So, if you want more control over the StrokeStyle, you cannot use TDirect2DCanvas at all. You will have to use the Direct2D API directly instead.
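For reference, here is a rough sketch of doing it with the raw Direct2D API. It assumes you already have an ID2D1Factory* (pFactory) and an ID2D1RenderTarget* (pRT) for your window, created for example with D2D1CreateFactory() and CreateHwndRenderTarget(); the variable names are placeholders and the standard d2d1.h headers are assumed:

// Stroke style with rounded line ends (the part TDirect2DPen does not expose)
D2D1_STROKE_STYLE_PROPERTIES props = D2D1::StrokeStyleProperties(
    D2D1_CAP_STYLE_ROUND,   // startCap - rounded line ends
    D2D1_CAP_STYLE_ROUND,   // endCap   - rounded line ends
    D2D1_CAP_STYLE_ROUND,   // dashCap
    D2D1_LINE_JOIN_ROUND,   // lineJoin
    10.0f,                  // miterLimit
    D2D1_DASH_STYLE_SOLID,  // dashStyle
    0.0f);                  // dashOffset

ID2D1StrokeStyle *pStrokeStyle = NULL;
ID2D1SolidColorBrush *pBrush = NULL;
HRESULT hr = pFactory->CreateStrokeStyle(&props, NULL, 0, &pStrokeStyle);
if (SUCCEEDED(hr))
    hr = pRT->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Black), &pBrush);
if (SUCCEEDED(hr))
{
    pRT->BeginDraw();
    // 5-pixel-wide line drawn with the rounded stroke style
    pRT->DrawLine(D2D1::Point2F(10.0f, 10.0f), D2D1::Point2F(200.0f, 120.0f),
                  pBrush, 5.0f, pStrokeStyle);
    pRT->EndDraw();
}
if (pBrush) pBrush->Release();
if (pStrokeStyle) pStrokeStyle->Release();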
Please refer to the background section below if the following does not make much sense; I omitted most of the context to keep the problem as clear as possible.
I have two WebGLRenderingContexts with the following traits:
WebGLRenderingContext: InputGL (Allows read and write operations on its framebuffers.)
WebGLRenderingContext: OutputGL (Allows only write operations on its framebuffers.)
GOAL: Superimpose InputGL's renders onto OutputGL's renders periodically within 33ms (30fps) on mobile.
Both the InputGL's and OutputGL's framebuffers get drawn to from separate processes. Both are available (with complete framebuffers) within a single window.requestAnimationFrame callback. As InputGL requires read operations, and OutputGL only supports write operations, InputGL and OutputGL cannot be merged into one WebGLRenderingContext.
Therefore, I would like to copy the framebuffer content from InputGL to OutputGL in every window.requestAnimationFrame callback. This allows me to keep read/write support on InputGL and only use write on OutputGL. Neither of them has a (regular) canvas attached, so a canvas overlay is out of the question. I have the following code:
// customOutputGLFramebuffer is the WebXR API's extended framebuffer which does not allow read operations
let fbo = InputGL.createFramebuffer();
InputGL.bindFramebuffer(InputGL.FRAMEBUFFER, fbo);
// TODO: Somehow get fbo data into OutputGL (I guess?)
OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
// Drawing to OutputGL here works, and it gets drawn on top of the customOutputGLFramebuffer
I am not sure if this requires binding in some particular order, or some kind of texture manipulation; any help with this would be greatly appreciated.
Background: I am experimenting with Unity WebGL in combination with the unreleased WebXR API. WebXR uses its own, modified WebGLRenderingContext which disallows reading from its buffers (as a privacy concern). However, Unity WebGL requires reading from its buffers. Having both operate on the same WebGLRenderingContext gives errors on Unity's read operations, which means they need to be kept separate. The idea is to periodically superimpose Unity's framebuffer data onto WebXR's framebuffers.
WebGL2 is also supported in case this is required.
You cannot share resources across contexts, period.
The best you can do is use one context as a source for the other via texImage2D.
For example, if the source context is using a canvas, draw the framebuffer to the canvas and then:
destContext.texImage2D(......., srcContext.canvas);
If the source context was created from an OffscreenCanvas, use transferToImageBitmap and then pass the resulting ImageBitmap to texImage2D.
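A rough sketch of the canvas route, run once per requestAnimationFrame callback. It assumes InputGL actually has a canvas to read from and that OutputGL already has a textured-quad shader program set up; copyTex and drawTexturedQuad are placeholder names, and customOutputGLFramebuffer is the WebXR framebuffer from the question:

// One-time setup in the destination context (OutputGL)
const copyTex = OutputGL.createTexture();
OutputGL.bindTexture(OutputGL.TEXTURE_2D, copyTex);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_MIN_FILTER, OutputGL.LINEAR);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_S, OutputGL.CLAMP_TO_EDGE);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_T, OutputGL.CLAMP_TO_EDGE);

function copyFrame() {
    // InputGL must have finished rendering to its canvas earlier in the same callback.
    OutputGL.bindTexture(OutputGL.TEXTURE_2D, copyTex);
    // Upload the source context's canvas as the texture contents.
    OutputGL.texImage2D(OutputGL.TEXTURE_2D, 0, OutputGL.RGBA, OutputGL.RGBA,
                        OutputGL.UNSIGNED_BYTE, InputGL.canvas);

    OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
    // Draw a textured quad with your own shader program; enable blending if
    // the overlay needs alpha.
    drawTexturedQuad(OutputGL, copyTex);
}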
I need to port the glDiscardFramebufferEXT() OpenGL call to Metal, and I haven't found anything useful on the internet yet. How can I do that?
Its functionality is in MTLRenderPassDescriptor:
A MTLRenderPassDescriptor object contains a collection of attachments that are the rendering destination for pixels generated by a rendering pass. The MTLRenderPassDescriptor class is also used to set the destination buffer for visibility information generated by a rendering pass.
See especially the loadAction and storeAction members of the attachments (colorAttachments[i], depthAttachment).
MTLLoadActionDontCare means the previous contents of the attachment are ignored, and MTLStoreActionDontCare means the results need not be kept after the pass; together they are the Metal equivalent of discarding a framebuffer attachment.
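A minimal Objective-C sketch, assuming currentDrawable comes from your CAMetalLayer/MTKView and depthTexture is your own depth buffer (both names are placeholders):

MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];

pass.colorAttachments[0].texture     = currentDrawable.texture;
// "Discard" on entry: previous contents of the attachment are undefined.
pass.colorAttachments[0].loadAction  = MTLLoadActionDontCare;
// Keep the color result, since it will be presented.
pass.colorAttachments[0].storeAction = MTLStoreActionStore;

pass.depthAttachment.texture     = depthTexture;
pass.depthAttachment.loadAction  = MTLLoadActionClear;
pass.depthAttachment.clearDepth  = 1.0;
// "Discard" on exit: the depth buffer is not needed after the pass,
// the equivalent of glDiscardFramebufferEXT(..., GL_DEPTH_ATTACHMENT).
pass.depthAttachment.storeAction = MTLStoreActionDontCare;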
I'm not sure if this is the right place to ask this, since it's not really a technical question but more a question of style and coding practices...
I've always been a fan of using "const" to define variables that will not be changing throughout their lifetime, especially when they are parameters to functions/methods. This probably stems from my history with C++, where objects could be passed by reference rather than by pointer, but you wanted to ensure that the original value wasn't accidentally altered, either by you or by someone else on your team working on the same code.
When looking through the headers, both for Objective-C in general and Cocos2d specifically, I've noticed a distinct lack of const. Now, I'm not against developing code as quickly as possible, and leaving off constraints like these leaves the developer free to modify values as their code develops and evolves, but there are some instances where I believe this laxity does not belong.
For example, in Cocos2D/UIKit, the "UIFont fontWithName" method takes "(NSString *)" as the parameter for the font name: does this method really need to reserve the right to alter the original string that was passed in? I personally like to define constant strings as "const" items, and I don't like the necessity of casting these as non-"const" when calling these methods.
Enough proselytizing. My question: is the direction now moving toward less well-defined interfaces and more toward "lazy references" (which I do not consider a derogatory term)?
Thanks in advance for any feedback....
const wouldn't mean much for Objective-C object pointers, because it would have to be overloaded in a very confusing way for Objective-C types: there's no way to mark a method as const, as there is in C++, so the compiler could never enforce it.
That said, at my old company, we did declare global string constants using something like:
NSString* const kMyCoolString = @"Hello, world!";
The point being that it at least couldn't be reassigned to something else.
The closest analog in Objective-C/Cocoa/Foundation is the mutable/immutable split in the string and collection classes (NSString vs. NSMutableString, NSArray vs. NSMutableArray), which doesn't really help your case.
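For what it's worth, the usual pattern for such constants (shown here purely as an illustration) is to declare them extern in a header and define them once in an implementation file:

// Constants.h
extern NSString* const kMyCoolString;

// Constants.m
NSString* const kMyCoolString = @"Hello, world!";

// Elsewhere:
// kMyCoolString = @"Other";                           // compile error: read-only variable
NSString *shout = [kMyCoolString uppercaseString];     // reading is fine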
So, my idea is to do something like this (the code is simplified, of course):
var gl;

function Renderer(canvas) {
    gl = this.gl = canvas.getContext('experimental-webgl');
}

function Object() {
}

Object.prototype.render = function() {
    ...
    gl.drawElements(...);
}
The gl variable itself can be placed into a namespace for better consistency; it can also be encapsulated by wrapping all the code into an anonymous function to make sure it won't clash with anything.
I can see one obvious tradeoff here: problems with running multiple WebGL canvases on the same page. But I'm totally fine with it.
Why do that? Because otherwise it's more painful to call any WebGL functions; you have to pass your renderer around as a parameter. That's actually the thing I don't like about Three.js: all the graphics stuff is handled inside a Renderer object, which makes the whole Renderer object huge and complicated.
With a globally visible context, you don't have to worry about reaching the WebGL constants, about your renderer object's visibility, and so on.
So, my question is: should I expect any traps with this approach? Aside from the gl variable potentially being uninitialized, of course.
Define bad
Lots of WebGL programs do this. OpenGL does this by default, since its functions are global in scope. In normal OpenGL you have to call eglMakeCurrent (or an equivalent) to switch contexts, which is effectively just doing a hidden gl = contextToMakeCurrent under the hood.
So, basically it's up to you. If you think you'll someday need multiple WebGL contexts, then it might be wise not to keep your context in a global variable. But you can always fall back to the eglMakeCurrent style of coding. Both have their pluses and minuses.
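If you want the convenience of a global gl but still want to leave the door open to multiple contexts, a tiny makeCurrent-style helper is enough; a sketch, not any particular library's API:

var gl = null;  // the "current" context, used by all rendering code

// Analogue of eglMakeCurrent: switch which context the global refers to.
function makeCurrent(context) {
    gl = context;
}

function Renderer(canvas) {
    this.gl = canvas.getContext('webgl');
    makeCurrent(this.gl);
}

// To drive a second canvas later, create another Renderer and call
// makeCurrent(otherRenderer.gl) before issuing its draw calls.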