JUNG: prevent user from moving a vertex outside the Visualization size?

When in picking mode, I want to limit the user from dragging a vertex outside of the defined layout bounds. I've set the ISOMLayout, VisualizationModel, and the VisualizationViewer to be the same size. But if I zoom out (I'm using a CrossoverScalingControl) I can drag vertices way outside the layout/vv's bounds. This results in the scrollbars of my GraphZoomScrollPane not working as expected: there can be vertices floating out there that you can't scroll to and you have to zoom out to see them.
Surely there's a way to lock the user into a certain boundary?
Dimension preferredDimension = new Dimension(1200, 800);
Layout<CNode,CEdge> layout = new ISOMLayout<>(graph);
layout.setSize(preferredDimension);
VisualizationModel<CNode, CEdge> visualizationModel = new DefaultVisualizationModel<>(layout, preferredDimension);
vv = new VisualizationViewer<>(visualizationModel, preferredDimension);

If you want to set a boundary outside which a vertex can't be manually moved, you can do that in your own code (specifically, in the part that responds to dragging a selected vertex; you can impose limits there on how far out a vertex can be dragged). It's not JUNG's responsibility to prevent you from setting a vertex location to something that the Layout wouldn't use; as far as JUNG is concerned, you can do that if you want to. :)
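For instance, here's a minimal sketch of that idea, assuming JUNG 2.x (where, as far as I recall, the picking plugin moves a dragged vertex through the Layout's setLocation): clamp the coordinates to the layout size before applying them. The anonymous subclass and the clamp-to-bounds policy are my own, not anything built into JUNG:
import java.awt.Dimension;
import java.awt.geom.Point2D;
import edu.uci.ics.jung.algorithms.layout.ISOMLayout;
import edu.uci.ics.jung.algorithms.layout.Layout;

// Vertex drags in picking mode go through Layout.setLocation, so clamping
// here keeps manually-dragged vertices inside the layout bounds.
Layout<CNode, CEdge> layout = new ISOMLayout<CNode, CEdge>(graph) {
    @Override
    public void setLocation(CNode v, Point2D location) {
        Dimension d = getSize();
        double x = Math.max(0, Math.min(d.getWidth(), location.getX()));
        double y = Math.max(0, Math.min(d.getHeight(), location.getY()));
        super.setLocation(v, new Point2D.Double(x, y));
    }
};
layout.setSize(preferredDimension);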

Related

Swift SceneKit - I am trying to figure out if a Node object goes off the screen

Using SceneKit
I want to make the gray transparent box disappear and show only the colored boxes when the user zooms in.
So I want to detect when that box's edges are starting to fall off the screen as I zoom, so I can hide the gray box accordingly.
First thoughts, but there may be better solutions:
You could project the node's position to screen coordinates (projectPoint goes world-to-screen; unprojectPoint is the reverse direction), check against the view bounds, do the +/- math on object size and skip Z. I "think" that would work - see the sketch after this list.
You could do some physics-based collision detection against invisible box or plane geometries that act as your screen edges. That has some complexity if your view is changing, but testing would be easy - just leave them visible until you get what you want, then isVisible=false.
isNode(_:insideFrustumOf:) returns a Boolean for whether the node "might" be visible. I'm assuming "might" means it could still be obscured by other geometry, which in your case shouldn't matter. (Edit: on second thought, that doesn't solve your problem, but I'll leave it here for reference.)
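To make the first idea concrete, here's a rough Swift sketch (scnView, grayBox and the corner-testing policy are my own assumptions, not anything your scene requires): project the corners of the node's bounding box to screen space and check them against the view bounds.
import SceneKit
import UIKit

// Returns true while every corner of the node's bounding box projects
// inside the view; hide the gray box when this becomes false.
func isFullyOnScreen(_ node: SCNNode, in scnView: SCNView) -> Bool {
    let (minB, maxB) = node.boundingBox
    let corners = [
        SCNVector3(minB.x, minB.y, minB.z), SCNVector3(maxB.x, minB.y, minB.z),
        SCNVector3(minB.x, maxB.y, minB.z), SCNVector3(maxB.x, maxB.y, minB.z),
        SCNVector3(minB.x, minB.y, maxB.z), SCNVector3(maxB.x, minB.y, maxB.z),
        SCNVector3(minB.x, maxB.y, maxB.z), SCNVector3(maxB.x, maxB.y, maxB.z),
    ]
    return corners.allSatisfy { corner in
        let world = node.convertPosition(corner, to: nil) // local -> world
        let screen = scnView.projectPoint(world)          // world -> screen
        return scnView.bounds.contains(CGPoint(x: CGFloat(screen.x),
                                               y: CGFloat(screen.y)))
    }
}
You could then call something like grayBox.isHidden = !isFullyOnScreen(grayBox, in: scnView) from your zoom/camera handler.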

How can I position an a-frame object to bottom left corner of the marker, and make its width equal to the marker's width?

I'm trying to create a basic scene in AR.js with NFT (so it's not just basic marker-based tracking; it tracks a custom image), using A-Frame to place and position my objects. I've noticed that if I place a 1*1*1 box in the scene, it appears at different places on different devices, and if I don't scale it up to something like 200, it appears as a very, very tiny box.
E.g.: If I try to view my scene on my phone, the object appears at the exact center of the marker, but if I check it on a different phone, it will appear almost completely outside the marker. Also, if I check it with a webcam, it will appear yet again in a different place, and even in a different size.
I wonder if there is any option to make the marker image's bottom-left (or any other) corner the 0 0 0 point, so I can position my objects more precisely, and also to set the object's width equal to the marker image's width, so I don't have to scale the object up like this.
At the moment there is no option to display a model at the center of the NFT marker. This is because AR.js depends on jsartoolkit5, which does not yet have this feature. But if you know the width, height and dpi, you can display the object at the center of the marker with this formula (pseudo code):
obj.position.y = (marker.height / marker.dpi * 2.54 * 10) / 2.0; // pixels -> inches -> cm -> mm, halved
obj.position.x = (marker.width  / marker.dpi * 2.54 * 10) / 2.0; // same conversion along x
You can obtain the width, height and dpi while creating your marker, or by using the dispFeatureSet display app distributed with the ARToolKit5 SDK; you can find binaries here https://github.com/artoolkitx/artoolkit5/releases/tag/5.4.0 or on the artoolkitX website https://www.artoolkitx.org/docs/downloads/

Get SCNMaterial's frame

I have an iPhone 3D model in my SceneKit application, and I get one of its materials, called iScreen (the image that "is on" the screen of the iPhone):
var iScreen: SCNMaterial!
iScreen = iphone.geometry?.materialWithName("Screen")!
I decided to somehow project a webView there instead of an image.
Therefore I need the frame / position / size of the screen where iScreen "draws" to set the UIWebView's frame. Is that possible?
Note
Of course I tried position, frame, size, etc., but none of those are available :/
You will first have to know which SCNGeometryElement the material applies to (an SCNGeometry is made of one or several SCNGeometryElements).
That geometry element is essentially a list of indices to retrieve vertex data contained in the geometry's SCNGeometrySources. Here you are interested in the position source (it gives you the 3D coordinates of the vertices).
By iterating over these positions you'll be able to find the element's width and height.
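A rough sketch of that iteration (assuming the usual float vertex format and Swift 5.7+ for loadUnaligned; restricting the loop to only the indices contained in the material's SCNGeometryElement is omitted for brevity):
import SceneKit

// Scans the geometry's vertex-position source and returns its x/y extent.
// To measure just the screen part, visit only the indices listed in the
// SCNGeometryElement that the "Screen" material applies to.
func vertexExtent(of geometry: SCNGeometry) -> (width: Float, height: Float)? {
    guard let source = geometry.sources(for: .vertex).first else { return nil }
    var minX = Float.greatestFiniteMagnitude, maxX = -Float.greatestFiniteMagnitude
    var minY = Float.greatestFiniteMagnitude, maxY = -Float.greatestFiniteMagnitude
    source.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        for i in 0..<source.vectorCount {
            let base = i * source.dataStride + source.dataOffset
            let x = raw.loadUnaligned(fromByteOffset: base, as: Float.self)
            let y = raw.loadUnaligned(fromByteOffset: base + source.bytesPerComponent,
                                      as: Float.self)
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    return (maxX - minX, maxY - minY)
}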

Getting the same coordinates from UIScrollView whether it's zoomed or not

Is it possible to get the same coordinates from a UIScrollView whether it's zoomed or not?
That is, for example, consider a plain screen of 320.0F by 480.0F.
Tap on a point; the view will give me something like (60.0F, 80.0F).
Zoom in or out on the view so that it has a bigger or smaller zoom scale, making sure the zoomed area still contains the point that was tapped, which was (60.0F, 80.0F) at the previous zoom scale.
Tap on the point again; the view will give me a different coordinate value.
The thing is, I want to get the same coordinate value whether the view is zoomed or not. The idea is simple: I want to show images zoomed and interactive without changing their coordinates. Considering a UIScrollView built around this idea with a height of 1.0 and a width of 0.66, I think there would be some pros to programming this way when making an interactive app without using OpenGL, cocos2d or other 3D engines.
Do you guys have any idea if it's supported or not? Either way, please don't hesitate to reply. Thanks
You can calculate it yourself using the content size as follows:
x = (original_width / width_after_zooming) * point.x
y = (original_height / height_after_zooming) * point.y
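If you're on UIKit, the same ratio is available directly as the scroll view's zoomScale (contentSize is the original size multiplied by zoomScale), so a small sketch like this should do it (the helper name is mine):
import UIKit

// Converts a point from the zoomed content's coordinates back to the
// unzoomed space: contentSize.width == original_width * zoomScale, so the
// ratio above reduces to 1 / zoomScale.
func unzoomedPoint(_ point: CGPoint, in scrollView: UIScrollView) -> CGPoint {
    return CGPoint(x: point.x / scrollView.zoomScale,
                   y: point.y / scrollView.zoomScale)
}

// e.g. in a tap handler:
// let stable = unzoomedPoint(tap.location(in: scrollView), in: scrollView)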

How to adjust GLCamera to show entire GLScene

I have a GLScene object of varying (but known) size. It is completely surrounded by a TGLDummyCube.
I want to position the GLCamera (with CameraStyle: glPerspective) so that the object is completely visible on screen. I basically got this running - the object is visible, but the distance is sometimes too far, or the object is larger than the screen and gets clipped.
How can I do that? I suppose that this can be done by a clever combination of camera distance and focal length, but I have not been successful so far.
This seems to be different in GLScene compared to OpenGL. I'm using GLScene and Delphi 2007.
Although varying the camera distance and focal length will change the object's visual size, it has the drawback of also changing the perspective, thus leading to a somewhat distorted view. I suggest using the camera's SceneScale property instead.
Alas, I have no exact steps for calculating the correct value. In my case I have to scale to a cube of varying size while the viewer's window size is constant. So I placed two DummyCubes at the position of the target cube, each sized to fit either the width or the height of the viewer, with appropriate values for SceneScale, camera distance and FocalLength. At runtime I calculate the new SceneScale from the ratio of the target cube's size to the DummyCube sizes. This works quite well in my case.
Edit: Here is some code I made for the calculations.
ZoomRefX and ZoomRefY are those DummyCubes
TargetDimX and TargetDimY give the size of the current object
DesignWidth and DesignHeight are the size of MyGLView at design time
DesignSceneScale is the camera's SceneScale at design time
The calculation code:
ScaleX := (ZoomRefX.CubeSize*MyGLView.Width)/(DesignWidth*TargetDimX);
ScaleY := (ZoomRefY.CubeSize*MyGLView.Height)/(DesignHeight*TargetDimY);
NewSceneScale := Min(ScaleX, ScaleY)*DesignSceneScale;
The DummyCubes ZoomRefX and ZoomRefY are sized so that they have a small margin to either the left-right or top-bottom edges of the viewing window. They are both positioned so that their front faces match, and the target object is positioned so that its front face matches those of the DummyCubes.
The formulas above allow the window size to be different from design time, but I actually didn't test this feature.
@Andreas, if you've been playing with SceneScale (as you mentioned in the comments), that means you are looking for a proper way to fit an object within the camera view by either changing the camera distance/focal length or by resizing the object. If so, the easiest way to resize a single object to fit the screen is to use its BoundingSphereRadius property like this:
ResizeMultiplier := 2; //play with it, it depends on your camera params
GLFreeForm1.Scale.Scale(ResizeMultiplier / GLFreeForm1.BoundingSphereRadius);
You can add a GLDummyCube as the root object for all other scene objects and then resize the GLDummyCube with the method mentioned above.