Add mouse events to WebGL objects - webgl

I'm using xtk to visualize medical data in a WebGL canvas. Currently I'm playing around with this lesson:
lesson 10
The library is pretty good but not very well documented. I want to get rid of that GUI and add some mouse events instead. If I load the mesh from the GUI, how can I add a mouse event to the mesh? I actually don't know where to start; it's a little bit confusing to get started with this library...
I tried
mesh.click(function() {
    alert("yes");
});
or
mesh.mousedown(function() {
    alert("yes");
});

Objects rendered in WebGL are not part of the DOM, and as such don't generate events like DOM elements do. This means that for events like these you have to implement the mouse interaction code yourself.
Traditionally in WebGL/OpenGL this process is known as "picking", and there are several decent resources for it online. (For example: http://webgldemos.thoughtsincomputation.com/engine_tests/picking) The core process is something like this, though:
For each pickable object in your scene, assign it a color. Put this in a lookup table somewhere.
Re-render the entire scene to a texture, rendering each pickable object with its assigned color.
Once the scene is rendered, determine your mouse coordinates and read back the color of the texture at that X/Y.
Fetch the object associated with that color from your lookup table. This is the object your mouse cursor is pointing at!
As you can see, while not conceptually difficult, this method involves several mid-level WebGL topics, such as rendering to a texture, and as such is not usually recommended for beginners. I'm not sure if there are any features in xtk to assist with this (honestly, I had never heard of the library before your post), but I would guess this is something you'll have to implement on your own.
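To make the readback step concrete, here is a minimal sketch (pickFramebuffer and colorToObject are illustrative names, not xtk API); it assumes the scene was already re-rendered into an offscreen framebuffer with each pickable object drawn in its unique flat color:
function pickObjectAt(gl, pickFramebuffer, colorToObject, mouseX, mouseY, canvasHeight) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, pickFramebuffer);
    // WebGL's y axis starts at the bottom of the canvas, mouse coordinates at the top
    var pixel = new Uint8Array(4);
    gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    // look the color up in the table built when colors were assigned
    var key = pixel[0] + ',' + pixel[1] + ',' + pixel[2];
    return colorToObject[key] || null; // null means background: no object hit
}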

DOM events are not supported but you can do it with xtk. Check out this JSFiddle
http://jsfiddle.net/haehn/r7Ugf/
// create and initialize a 3D renderer
var r = new X.renderer3D();
r.init();

// create a cube and a sphere
var cube = new X.cube();
var sphere = new X.sphere();
sphere.center = [-20, 0, 0];

r.interactor.onMouseMove = function() {
    // grab the current mouse position
    var _pos = r.interactor.mousePosition;
    // pick the current object
    var _id = r.pick(_pos[0], _pos[1]);
    if (_id != 0) {
        // grab the object and turn it red
        r.get(_id).color = [1, 0, 0];
    } else {
        // no object under the mouse
        cube.color = [1, 1, 1];
        sphere.color = [1, 1, 1];
    }
    r.render();
};

r.interactor.onMouseDown = function(left, middle, right) {
    // only observe right mouse clicks
    if (!right) return;
    // grab the current mouse position
    var _pos = r.interactor.mousePosition;
    // pick the current object
    var _id = r.pick(_pos[0], _pos[1]);
    if (_id == sphere.id) {
        // turn the sphere green
        sphere.color = [0, 1, 0];
        r.render();
    }
};

r.add(cube); // add the cube to the renderer
r.add(sphere); // and the sphere as well
r.render(); // ..and render it
Easy, no?

XTK implements picking the way Toji explained (i.e. with a framebuffer where every object is rendered in a different RGBA "color"). It will work as long as you have fewer than 255^4 objects, so almost always. There are other methods, like unprojecting, but I think they would take longer to implement.
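To illustrate the idea (this is the general technique, not necessarily xtk's exact internal encoding), an integer object id can be packed into the four 8-bit RGBA channels and unpacked again after the readback:
// pack an integer object id into normalized RGBA components
function idToColor(id) {
    return [((id)       & 0xff) / 255,
            ((id >> 8)  & 0xff) / 255,
            ((id >> 16) & 0xff) / 255,
            ((id >> 24) & 0xff) / 255];
}

// recover the id from the 8-bit channel values read back from the framebuffer
// (multiplication instead of shifts avoids 32-bit sign overflow in JavaScript)
function colorToId(r, g, b, a) {
    return r + g * 256 + b * 65536 + a * 16777216;
}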
So with X.renderer.pick and X.renderer.get you can find the object under the mouse and change its properties. For the moment, however, you can only change visualization properties (see the setGetter and setSetter in every class); you cannot move an X.object, since the X.object._transform attribute is private and there is no getter/setter for it yet.
That would be something interesting to deal with: adding a getter/setter pair for X.object's transform would allow, for example, a user to put medical equipment (modeled by a mesh or something else) into the scene and position it to measure distances, or to see whether it will fit for an operation. Wouldn't that be a good idea, Haehn? And it's a minor change to the framework.

Related

Use of 'drawPolygonGeometry()' on postCompose event with vectorContext

I'm trying to draw a circle around every kind of geometry (could be any ol.geom type: point, polygon, etc.) in an event called on 'postcompose'. The purpose of this is to create an animation when a certain feature is selected.
listenerKeys.push(map.on('postcompose',
    goog.bind(this.draw_, this, data)));

this.draw_ = function(data, postComposeRender) {
    var extent = feature.getGeometry().getExtent();
    var flashGeom = new ol.geom.Polygon.fromExtent(extent);
    var vectorContext = postComposeRender.vectorContext;
    ... // ANIMATION CODE TO GET THE RADIUS WITH THE ELAPSED TIME
    var imageStyle = this.getStyleSquare_(radius, opacity);
    vectorContext.setImageStyle(imageStyle);
    vectorContext.drawPolygonGeometry(flashGeom, null);
}
The method
drawPolygonGeometry({ol.geom.Polygon}, {ol.feature})
is not working. However, it works when I use the method
drawPointGeometry({ol.geom.Point}, {ol.feature})
even though the type of flashGeom is ol.geom.Polygon, which I just built from an extent. I don't want to use this method, because extents from polygons could be received, and it animates for every point of the polygon...
Finally, after analyzing how drawPolygonGeometry works in the OL3 source code, I realized that I need to apply the style with this method first:
vectorContext.setFillStrokeStyle(imageStyle.getFill(),
    imageStyle.getStroke());
drawPointGeometry and drawPolygonGeometry do not use the same style instance.
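Putting it together, the corrected handler looks roughly like this (getStyleSquare_ and the radius/opacity animation come from the question's own code, not from the OL3 API):
this.draw_ = function(data, postComposeRender) {
    var extent = feature.getGeometry().getExtent();
    var flashGeom = new ol.geom.Polygon.fromExtent(extent);
    var vectorContext = postComposeRender.vectorContext;
    var imageStyle = this.getStyleSquare_(radius, opacity);
    // polygons are drawn with fill/stroke styles, so set them explicitly;
    // setImageStyle alone only affects point geometries
    vectorContext.setFillStrokeStyle(imageStyle.getFill(), imageStyle.getStroke());
    vectorContext.setImageStyle(imageStyle);
    vectorContext.drawPolygonGeometry(flashGeom, null);
};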

SKEffectNode - CIFilter Blur Size Limit - Big Black Box

I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here.)
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter if it's enabling effects, blurring, etc. If the parent node is a plain SKNode, it's fine. That is, even if the parent blur node is created as below, you will get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]

/**
 * Remove outlying nodes from the scene and activate the SKEffectNode
 */
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in

        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)
                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width * sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height * sprite.anchorPoint.y
                let t = b + sprite.size.height

                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}

/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. Capturing a GPU trace (the camera icon in Xcode's debug bar) gives some insight into what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game but it requires the blurred area to be static without movement.
On iOS 10 using Swift 3 I used SKSpriteNode, SKView, SKEffectNode, and CIFilter. I created a sprite from a texture returned by the SKView method texture(from:), passing the current scene as the parameter, since it inherits from SKNode. So essentially I was taking a "screenshot" of the scene and creating a sprite from it. I then put it in an SKEffectNode with a blur filter (set shouldRasterize to true for better performance, as I only needed to blur once). Finally I added the new sprite to the scene. From there you could add sprites to the scene and place them above the new blurred node.
// blur filter with a fixed radius
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)

// effect node that will hold the "screenshot"; rasterize since we blur only once
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true

// take a snapshot of the whole scene and wrap it in a sprite
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter

gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera and zoom WAY out, so you can see most everything of your background, then take a screenshot-style rendering of this image. Crop it to your needs, then blur it and rasterize the result.
Then scale this image back up, slice it up if need be, and place accordingly.
SKEffectNode renders into a texture. On most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode tries to render content larger than that, it will just use a 2048x2048 texture, and anything outside of it simply won't appear in the texture. It won't give you any error or warning about this; it happens silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size and pan and clamp the content into it. It always uses a texture that covers all the child nodes, and if that texture would be too large, it silently falls back to the 2048x2048 texture.
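Since the failure is silent, one defensive option is to measure the effect node's content before enabling effects. A minimal sketch, assuming the 2048x2048 figure above (the real maximum varies by device):
let maxTextureSize: CGFloat = 2048
// total bounding box of the effect node's content
let contentFrame = blurNode.calculateAccumulatedFrame()
if contentFrame.width > maxTextureSize || contentFrame.height > maxTextureSize {
    // content is too large to render into the effect texture;
    // skip the effect instead of showing a black box
    blurNode.shouldEnableEffects = false
}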

How to create a skin for fl.controls.Slider with custom height in actionscript only?

I created a skin called CustomSliderTrack in the graphical editor of Adobe Flash CS5.5. This slider is now in the "library" of the FLA file.
I can apply this skin with the following code:
var cls:Class = getDefinitionByName("CustomSliderTrack") as Class;
var tmpTrack:Sprite = new cls();
slider.setStyle("sliderTrackSkin", tmpTrack);
However, due to the binary nature of the FLA file and the lack of compatibility between different versions of Adobe Flash, I need to implement it all in ActionScript.
I understand that cls is a MovieClip object, but I can't get the same results with new MovieClip(). I think this might be related to the dashed lines in the graphical editor (I modified the default SliderTrack_skin). I haven't found out yet what they mean and how to replace them with ActionScript code.
setStyle automatically sets track.height and track.width. In the case of track.height, the slider.height attribute does not seem to have any effect. To work around this problem, simply set track.height to the desired value afterwards.
To access the track, extend the Slider class and override the configUI function:
public class CustomSlider extends Slider
{
    override protected function configUI():void
    {
        // Call configUI of Slider
        super.configUI();
        // The sprite that will contain the track
        var t:Sprite = new Sprite();
        // Draw the content into the sprite
        t.graphics.beginFill(0x000000, 0.1);
        t.graphics.drawRect(0, -15, width, 30);
        t.graphics.endFill();
        // Set the Sprite to be the one used by the Slider
        this.setStyle("sliderTrackSkin", t);
        // Reset the height to the value that it should be
        track.height = 30;
    }
}
Depending on the complexity of your track asset, you could accomplish this with the drawing API: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/Graphics.html
A very simple example would be:
var track:Sprite = new Sprite();
track.graphics.lineStyle(2, 0xcccccc);
track.graphics.beginFill(0x000000, 1);
track.graphics.drawRect(0, 0, 400, 20);
track.graphics.endFill();
track.scale9Grid = new Rectangle(2, 2, 396, 16);
slider.setStyle("sliderTrackSkin",track);
This creates a track that is just a black rectangle, 400x20 pixels in size. You can set the scale9Grid in code to control how the skin scales. In the example above the rectangle's border won't scale, but the black rectangle inside will. Experimenting with the methods in the drawing API may be all you need.
If you need a more complex asset, I'd recommend loading an image and then passing that in to slider.setStyle.

How can I perform clipping on rotated rectangles?

So I have this Panel class. It's a little like a window where you can resize, close, add buttons, sliders, etc., much like the status screen in Morrowind if any of you remember it. The behavior I want is that when a sprite is outside the panel's bounds it doesn't get drawn, and if it's partially outside, only the part inside gets drawn.
What it does right now is: first get a rectangle that represents the bounds of the panel and a rectangle for the sprite, find the rectangle of intersection between the two, then translate that intersection into the local coordinates of the sprite rectangle and use that as the source rectangle. It works, and as clever as I feel the code is, I can't shake the feeling that there's a better way to do this. Also, with this setup I cannot use a global transformation matrix for my 2D camera; everything in the "world" must be passed a camera argument to draw. Anyway, here's the code I have:
for the Intersection:
public static Rectangle? Intersection(Rectangle rectangle1, Rectangle rectangle2)
{
    if (rectangle1.Intersects(rectangle2))
    {
        if (rectangle1.Contains(rectangle2))
        {
            return rectangle2;
        }
        else if (rectangle2.Contains(rectangle1))
        {
            return rectangle1;
        }
        else
        {
            int x = Math.Max(rectangle1.Left, rectangle2.Left);
            int y = Math.Max(rectangle1.Top, rectangle2.Top);
            int height = Math.Min(rectangle1.Bottom, rectangle2.Bottom) - Math.Max(rectangle1.Top, rectangle2.Top);
            int width = Math.Min(rectangle1.Right, rectangle2.Right) - Math.Max(rectangle1.Left, rectangle2.Left);
            return new Rectangle(x, y, width, height);
        }
    }
    else
    {
        return null;
    }
}
and for actually drawing on the panel:
public void DrawOnPanel(IDraw sprite, SpriteBatch spriteBatch)
{
    Rectangle panelRectangle = new Rectangle(
        (int)_position.X,
        (int)_position.Y,
        _width,
        _height);
    Rectangle drawRectangle = new Rectangle();
    drawRectangle.X = (int)sprite.Position.X;
    drawRectangle.Y = (int)sprite.Position.Y;
    drawRectangle.Width = sprite.Width;
    drawRectangle.Height = sprite.Height;
    if (panelRectangle.Contains(drawRectangle))
    {
        sprite.Draw(
            spriteBatch,
            drawRectangle,
            null);
    }
    else if (Intersection(panelRectangle, drawRectangle) == null)
    {
        return;
    }
    else if (Intersection(panelRectangle, drawRectangle).HasValue)
    {
        Rectangle intersection = Intersection(panelRectangle, drawRectangle).Value;
        if (Intersection(panelRectangle, drawRectangle) == drawRectangle)
        {
            sprite.Draw(spriteBatch, intersection, intersection);
        }
        else
        {
            sprite.Draw(
                spriteBatch,
                intersection,
                new Rectangle(
                    intersection.X - drawRectangle.X,
                    intersection.Y - drawRectangle.Y,
                    intersection.Width,
                    intersection.Height));
        }
    }
}
So I guess my question is: is there a better way to do this?
Update: I just found out about the ScissorRectangle property. This seems like a decent way to do it; it requires a RasterizerState object to be created and passed into the SpriteBatch.Begin overload that accepts it. Seems like this might be the best bet. There's also the Viewport, which I can apparently change around. Thoughts? :)
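For reference, a minimal sketch of the ScissorRectangle approach (assuming the XNA 4.0 SpriteBatch.Begin overload; panelRectangle is the panel's bounds from the code above):
RasterizerState scissorState = new RasterizerState { ScissorTestEnable = true };

// clip everything drawn in this batch to the panel's bounds
GraphicsDevice.ScissorRectangle = panelRectangle;
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, scissorState);
// ... draw the panel's sprites here; no manual rectangle math needed
spriteBatch.End();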
There are several ways to limit drawing to a portion of the screen. If the area is rectangular (which seems to be the case here), you could set the viewport (see GraphicsDevice) to the panel's surface.
For non-rectangular areas, you can use the stencil buffer or some tricks with the depth buffer: draw the shape of the surface into the stencil buffer or the depth buffer, set your render state to draw only pixels located within the shape you just rendered, and finally render your sprites.
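And a sketch of the viewport variant mentioned above (XNA 4.0 assumed): swap in a viewport matching the panel, draw, then restore it:
Viewport original = GraphicsDevice.Viewport;
// restrict rendering to the panel's rectangle
GraphicsDevice.Viewport = new Viewport(panelRectangle.X, panelRectangle.Y,
                                       panelRectangle.Width, panelRectangle.Height);
spriteBatch.Begin();
// ... draw the panel's contents; coordinates are now relative to the viewport
spriteBatch.End();
GraphicsDevice.Viewport = original;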
One way of doing this is simple per-pixel collision. Although this is a bad idea if the sprites are large or numerous, this can be a very easy and fast way to get the job done with small sprites. First, do a bounding circle or bounding square collision check against the panel to see if you even need to do per-pixel detection.
Then, create a contains method that checks if the position, scale, and rotation of the sprite put it so far inside the panel that it must be totally enclosed by the panel, so you don't need per-pixel collision in that case. This can be done pretty easily by just creating a bounding square that has the width and height of the length of the sprite's diagonal, and checking for collision with that.
Finally, if both of these checks fail, you must do per-pixel collision: go through every pixel in the sprite and check whether it is within the bounds of the panel. If it isn't, set the alpha value of that pixel to 0.
That's it.
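A rough sketch of that last masking step (texture, spritePosition and panelRectangle are stand-ins for your own fields; GetData/SetData on a texture every frame is slow, which is why this only suits small, static sprites):
Color[] pixels = new Color[texture.Width * texture.Height];
texture.GetData(pixels);
for (int y = 0; y < texture.Height; y++)
{
    for (int x = 0; x < texture.Width; x++)
    {
        // texel position in panel space (no rotation/scale applied in this sketch)
        Point world = new Point((int)spritePosition.X + x, (int)spritePosition.Y + y);
        if (!panelRectangle.Contains(world))
        {
            // hide texels that fall outside the panel
            pixels[y * texture.Width + x] = Color.Transparent;
        }
    }
}
texture.SetData(pixels);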

Textured Primitives in XNA with a first person camera

So I have an XNA application set up. The camera is in first-person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen with no problem. But whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera.
In Initialize(), I have:
quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);
In LoadContent(), I have:
quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
quadEffect = new BasicEffect(this.GraphicsDevice, null);
quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
quadEffect.LightingEnabled = true;
quadEffect.World = Matrix.Identity;
quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
quadEffect.Projection = this.Projection;
quadEffect.TextureEnabled = true;
quadEffect.Texture = quadTexture;
And in Draw() I have:
this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
quadEffect.Begin();
foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
        PrimitiveType.TriangleList,
        quad.Vertices, 0, 4,
        quad.Indexes, 0, 2);
    pass.End();
}
quadEffect.End();
I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.
I can't run this code on the computer here at work, as I don't have Game Studio installed. But for reference, check out the 3D audio sample on the Creators Club website. That project has a "QuadDrawer" which demonstrates how to draw a textured quad at any position in the world. It's a pretty nice solution for what it seems you want to do :-)
http://creators.xna.com/en-US/sample/3daudio
