I'm new to StageXL and Dart.
I'm reading the getting-started guide on stagexl.org (http://www.stagexl.org/docs/getting-started.html) and trying the sample code that draws a red circle, but it doesn't work (the canvas stays empty).
None of the samples I found on various pages worked either.
For example, in this one with the four squares, http://www.stagexl.org/docs/wiki-articles.html?article=introduction, the canvas is always empty.
I need some help.
If you know a good tutorial for learning StageXL, please share it with me!
Thanks.
The current version of StageXL (version 0.12) does not support vector graphics when the WebGL renderer is used. Please opt out of the WebGL renderer and use the Canvas2D renderer like this:
// do this before the Stage constructor is used
StageXL.stageOptions.renderEngine = RenderEngine.Canvas2D;
The next versions of StageXL (starting with 0.13) will support more vector graphics commands even when the WebGL renderer is used. Please be aware that WebGL is optimized for rendering textures (Bitmap/BitmapDatas) and drawing vector graphics may be a costly operation depending on the use case.
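For reference, a minimal sketch combining the opt-out with the red-circle sample from the getting-started guide (assuming the 0.12-era API; the `#stage` canvas id is whatever your HTML page uses):

```dart
import 'dart:html';
import 'package:stagexl/stagexl.dart';

void main() {
  // Opt out of the WebGL renderer before the Stage constructor is used.
  StageXL.stageOptions.renderEngine = RenderEngine.Canvas2D;

  var stage = new Stage(querySelector('#stage'));
  new RenderLoop().addStage(stage);

  // The vector graphics circle now renders with the Canvas2D renderer.
  var shape = new Shape();
  shape.graphics.circle(100, 100, 60);
  shape.graphics.fillColor(Color.Red);
  stage.addChild(shape);
}
```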
Related
I'm trying to render to a texture for the reflection and refraction textures of a water shader, but glClipPlane is not defined in Kivy's OpenGL bindings. Here are some screenshots:
Test with PyOpenGL
Test with Kivy OpenGL
From Clipping-planes in OpenGL ES 2.0, it looks like glClipPlane wasn't part of OpenGL ES 2.0, which Kivy nominally targets. If you do want to use it, you probably can, but it isn't part of Kivy's exposed API (these low-level OpenGL calls are usually considered internal to Kivy).
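Since fixed-function clip planes don't exist in ES 2.0, the usual workaround is to do the clipping yourself in the fragment shader by discarding fragments on one side of the plane. A minimal sketch (the `u_clipPlane` uniform and `v_worldPos` varying are illustrative names, not Kivy API):

```glsl
// Fragment shader sketch: emulate glClipPlane with discard.
precision mediump float;

uniform vec4 u_clipPlane;  // plane (a, b, c, d): a*x + b*y + c*z + d = 0
varying vec3 v_worldPos;   // world-space position passed from the vertex shader

void main() {
    // Drop fragments on the negative side of the plane.
    if (dot(u_clipPlane.xyz, v_worldPos) + u_clipPlane.w < 0.0) {
        discard;
    }
    gl_FragColor = vec4(0.2, 0.4, 0.8, 1.0);  // placeholder water color
}
```

Render the reflection pass with the plane set to the water surface, then sample the resulting texture from the water shader.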
I'm struggling surprisingly hard to display text with OpenGL ES 2.0. There are a ton of posts on Stack Overflow debating the subject, showing a few lines of code, or linking to articles from 2010 that target OpenGL ES 1.x (not compatible).
But they are quite vague, and to my knowledge there is no complete code or convenient way to display text with version 2.
Do you know if there is a modern way to display text? Like just adding a pod and writing something like this?
[font drawText:@"This is a text" size:12];
Thanks a lot in advance; any help would be very much appreciated.
EDIT 1: I have to use OpenGL ES 2.0; I can't use anything else, for internal reasons.
EDIT 2: I've found two libraries that do it:
FTGLES: it crashes at runtime when I try to use it.
https://github.com/chandl34/public/tree/master/personal/c%2B%2B/Font
This one is simple, but written for ES1, so I'd need to port the code to ES2.
EDIT 3: ES1 and ES2 are quite different: ES2 works with shaders.
Since you are targeting ES2, where shaders are required, it might be hard to find a tool that does this for you. And if you do find such a tool, it may conflict with your pipeline in many ways, such as binding its own shaders and buffers or changing blending and depth-buffer settings. Even then, you may not have good control over how and where the text is drawn (on a rotating 3D box, for instance).
But from your question (its snippet) it seems more like all you want is a 2D overlay with text, which would look exactly like a UILabel. If this is the case, then I suggest you actually use UILabel to draw these texts. You can easily add all of these views on top of the view that shows the OpenGL content.
In the other case, where you still need to draw text on a 3D object and want to do it easily, I suggest you still use UILabel, but take a screenshot of it and push that into a new (or atlas) texture. Then you can draw it like any other object. UILabel will handle all the fonts, alignment, colors, multiline text wrapping, font-size adjustments... So if you already have a system for drawing a texture in the scene, you are not far from a tool that draws text on the screen, since a texture carries the data.
Nothing has changed since OpenGL ES1. Text is usually displayed in a planar projection by creating quads textured from a font texture. There are many tutorials on this topic.
One example of how to do it.
However, this is quite a lot of work when you start from scratch.
There might be a better way, but depending on what you plan to do it might not be suitable: you can mix UIKit with the OpenGL view, drawing text with UIKit as an overlay (UILabel, UIButton, etc.).
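The quad approach can be sketched independently of any GL binding: for each character, compute quad corner positions and the UV rectangle into a font texture. Below is an illustrative Python sketch assuming a monospaced 16x16 ASCII atlas (the function name and layout are made up for illustration); the remaining work is uploading these arrays to a buffer and drawing them with a textured shader:

```python
def text_quads(text, x, y, char_w, char_h, cols=16):
    """Build one textured quad per character of `text`.

    Returns a list of (positions, uvs) tuples: positions are the four quad
    corners in screen units, uvs index into a 16x16 ASCII glyph atlas.
    """
    quads = []
    for i, ch in enumerate(text):
        code = ord(ch)
        col, row = code % cols, code // cols
        # UV rectangle of this glyph inside the atlas (0..1 range).
        u0, v0 = col / cols, row / cols
        u1, v1 = (col + 1) / cols, (row + 1) / cols
        # Quad corners, advancing one fixed character width per glyph.
        x0 = x + i * char_w
        positions = [(x0, y), (x0 + char_w, y),
                     (x0 + char_w, y + char_h), (x0, y + char_h)]
        uvs = [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
        quads.append((positions, uvs))
    return quads
```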
I've been writing a little planet generator using Haxe + Away3D, and deploying to HTML5/WebGL. But I'm having a strange issue when rendering my clouds. I have the planet mesh, and then the clouds mesh slightly bigger in the same position.
I'm using a perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Now, strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright: wispy and translucent. But the inside is opaque and seemingly rendered differently (each pixel is the same color; only the alpha channel changes).
I realize this likely has something to do with how the code is ultimately compiled/generated by Haxe, but I'm hoping it's something simple, like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, I don't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on the html5 target.
You should implement it yourself, or use a pre-rendered image.
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well then, did you upload that image from a ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though it shouldn't on js. This was fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access the byte array.
Away3D itself might also be the cause of this issue, because its backend code is generated from different source files depending on the target you use.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on html5, but not on native.
If Away3D is the cause of the problem, I'm not sure for now which part of the code is responsible.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend. I think legacy OpenFL doesn't have this problem. OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge! This problem occurred because my code and OpenFL's sprite renderer were actually using the same OpenGL context. I got rid of the problem by manually disabling OpenFL's sprite renderer.
I have a huge application rendering a 3D scene in a C++ QGLWidget. Is it possible, using Qt5 and WebGL, to add a scripting layer to the application in order to interactively "paint" on the QGLWidget? How?
In a QGLWidget you would use the native OpenGL API to draw, and the input APIs to take input. In WebGL code you would use WebGL (along with canvas wrappers) to draw. What do you mean by an "interactive paint" operation here?
This is my first iPhone/iPad app :S
It is a drawing app, and I would like the user to be able to save their current work and afterwards continue drawing on that same image (updating it).
I already did something based on this and this. This uses Quartz.
So I would have to save it in some format that can be read back into memory and displayed on the screen for updating (the user draws another line or erases some before).
The images would be saved on a server, and I would like a format that Android devices can also read in the future (just read it, not update it).
Also, a lot of transformations are going to be applied to the images after they are finished (scale, projection...). I found that OpenGL ES is great for these transformations --> OpenGL ES
So the question is:
should I use Quartz for drawing, since it is simple, and then somehow convert the image to OpenGL, because OpenGL is good for transformations? And in which format should I save the drawing so that it can be used later for updating, and so that Android devices can also read it?
To port to Android later, Quartz won't help you; OpenGL is faster, more portable in this case, and great for transformations (and effects, even better with ES 2.0 and shaders).
However, if you haven't used OpenGL yet, it's quite a different journey from Quartz, so maybe read some tutorials on OpenGL programming first to get a feel for it.
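On the save-format question: one portable option is to store the strokes themselves (points, color, width) rather than only a flat bitmap, e.g. as JSON, which both iOS and Android can parse. A Python sketch of the idea; the schema below is an illustrative assumption, not an established format:

```python
import json

def save_drawing(strokes):
    """Serialize a list of strokes to a JSON string.

    Each stroke is assumed to be a dict like:
      {"color": "#FF0000", "width": 3, "points": [[x, y], ...]}
    """
    return json.dumps({"version": 1, "strokes": strokes})

def load_drawing(data):
    """Parse the JSON back into the stroke list for further editing."""
    doc = json.loads(data)
    return doc["strokes"]
```

Replaying the loaded strokes redraws the image exactly, so the user can keep editing; a rendered PNG can still be exported alongside for quick viewing.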