I need to get a polygon comment into a PDF and revise its shape. I can do that now by merging the PDF with a blank PDF containing just the polygon, after which I can update the vertices and the rect.
However, the polygon still shows the old shape when the new PDF is opened, although it refreshes after a few clicks on the shape. I need this fixed, and I found it is probably caused by the data stream in the annotation object, which seems to still contain the old polygon shape. But I cannot figure out how to overwrite that before saving the new PDF. I used code similar to the one below to update the vertices and rect, but cannot figure out how to update the data stream.
from PyPDF2.generic import NameObject, ArrayObject, FloatObject

# Recompute the bounding box from the new vertex coordinates
annot.getObject().update({
    NameObject('/Rect'): ArrayObject([
        FloatObject(min(xcoords)), FloatObject(min(ycoords)),
        FloatObject(max(xcoords)), FloatObject(max(ycoords)),
    ])
})
I would appreciate any information.
In case someone has a similar problem, just wanted to share my solution --
I didn't find a way to update the stream data; however, I was able to get rid of the "ghost" shape by removing that object from the annotation object entirely.
# Drop the stale appearance stream ('/AP') so viewers regenerate it
annot.getObject().pop('/AP')
Without that ghost shape, the annotation polygon displays properly! I'm not sure exactly what role the '/AP' (appearance) entry plays, but presumably the viewer regenerates the appearance from '/Vertices' when it is missing, and the result looks right.
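For anyone who wants the whole flow in one place, here is a minimal sketch with the old PyPDF2 API. The file names, the coordinate lists, and the single-page assumption are mine, not from the original code:

from PyPDF2 import PdfFileReader, PdfFileWriter
from PyPDF2.generic import NameObject, ArrayObject, FloatObject

reader = PdfFileReader('merged.pdf')   # PDF already merged with the polygon page
writer = PdfFileWriter()
page = reader.getPage(0)

xcoords = [100.0, 200.0, 150.0]        # new vertex coordinates (example values)
ycoords = [100.0, 100.0, 180.0]

if '/Annots' in page:
    for ref in page['/Annots']:
        obj = ref.getObject()
        if obj.get('/Subtype') == '/Polygon':
            # Flatten (x, y) pairs for '/Vertices' and recompute '/Rect'
            verts = [FloatObject(v) for xy in zip(xcoords, ycoords) for v in xy]
            obj.update({
                NameObject('/Vertices'): ArrayObject(verts),
                NameObject('/Rect'): ArrayObject([
                    FloatObject(min(xcoords)), FloatObject(min(ycoords)),
                    FloatObject(max(xcoords)), FloatObject(max(ycoords)),
                ]),
            })
            obj.pop('/AP', None)       # drop the stale appearance stream

writer.addPage(page)
with open('out.pdf', 'wb') as f:
    writer.write(f)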
Currently, RealityKit doesn't have any method that returns the currently visible entities. SceneKit does have a method for exactly that: nodesInsideFrustum(pointOfView).
Our internal solution is to create a big fake bounding box in front of the camera and then check for intersections between that "frustum" bounding box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate. I wonder if anyone has a better solution they are willing to share.
You could combine two ARView methods:
ARView.project(position) to get the 2D point in screen space
ARView.bounds.contains(point) to know if it's visible on screen
But that's not enough; you also have to check whether the object is behind you:
Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the entity's position in camera space
the sign of localPosition.z tells you whether it's in front of or behind the camera
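Combining those, a minimal sketch (arView is an assumed ARView, and isVisible is a hypothetical helper name):

import RealityKit
import UIKit

// Setup (once): let cameraAnchor = AnchorEntity(.camera)
//               arView.scene.addAnchor(cameraAnchor)

// Hypothetical helper: true when the entity is in front of the camera
// and projects inside the screen bounds.
func isVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // Camera space: RealityKit cameras look down -z, so "in front" means z < 0
    let localPosition = entity.position(relativeTo: cameraAnchor)
    guard localPosition.z < 0 else { return false }

    // Screen space: project the world position and test it against the view bounds
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false
    }
    return arView.bounds.contains(screenPoint)
}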
I'm trying to create polygons with an inner border in Konva.
I found this example of doing this with globalCompositeOperation which works well in Konva, as long as there is only one shape. As soon as I try to add a second shape, this obviously doesn't work anymore and the first shape disappears.
It would work if I were to use a different layer for every shape, but of course that's not a solution that scales well.
I tried using a temporary layer as is done in the example but couldn't get it to work.
So I found this example of using group.cache(), which works fine ... until I try to scale the stage, at which point I would have to refresh the cache; otherwise I only get the scaled-up cache, which looks bad.
This codesandbox illustrates the problem. (Please note that this uses simple triangles; in reality I work with arbitrary polygons.)
So is there a way to use cache with scaling? Or alternatively a better way to use globalCompositeOperation with multiple shapes in the same layer? Or some alternative solution?
I found a solution: calling group.cache({pixelRatio: scaleFactor}). I updated the sandbox.
No idea if this is the best solution, but it works.
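For anyone curious what that looks like, a rough sketch (stage, group, and the wheel-zoom wiring are assumptions, not the actual sandbox code):

// Re-cache the group at the current zoom so the cached bitmap stays sharp
function refreshCache() {
  var scaleFactor = stage.scaleX();         // assumes uniform scaling
  group.cache({ pixelRatio: scaleFactor }); // rasterize at the zoomed resolution
}

stage.on('wheel', function (e) {
  e.evt.preventDefault();
  var newScale = stage.scaleX() * (e.evt.deltaY > 0 ? 0.9 : 1.1);
  stage.scale({ x: newScale, y: newScale });
  refreshCache();                           // re-cache after every zoom change
  stage.batchDraw();
});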
I drew a lot of points in my program with WebGL. Now I want to pick any point and move it to a new position. The problem is that I don't know how to select a point. Am I supposed to add an action listener to each point?
WebGL is a rasterization library. It has no concept of movable, clickable position or points. It just draws pixels where you ask it to.
If you want to move things it's up to you to make your own data, use that data to decide if the mouse was clicked on something, update the data to reflect how the mouse changed it, and finally use WebGL to re-render something based on the data.
Notice none of those steps except the last one involve WebGL. WebGL has no concept of an actionlistener since WebGL has no actions you could listen to. It just draws pixels based on what you ask it to do. That's it. Everything else is up to you and outside the scope of WebGL.
Maybe you're using a library like three.js, X3D, or Unity3d, but in that case your question would be about that specific library, as all input/mouse/object-position issues would be specific to it (because, again, WebGL just draws pixels).
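For illustration, a sketch of that data-driven picking (the points array, the toClipSpace helper, and drawScene are all assumptions, not WebGL API):

const canvas = document.querySelector('canvas');

// Your own data: WebGL knows nothing about these points
const points = [{ x: 0.1, y: 0.2 }, { x: -0.4, y: 0.5 }];
let dragged = null;

// Convert a mouse event to the clip-space coordinates the points use
function toClipSpace(e) {
  const rect = canvas.getBoundingClientRect();
  return {
    x: ((e.clientX - rect.left) / rect.width) * 2 - 1,
    y: -(((e.clientY - rect.top) / rect.height) * 2 - 1),
  };
}

canvas.addEventListener('mousedown', function (e) {
  const m = toClipSpace(e);
  // Hit-test against your own data, not against WebGL
  dragged = points.find((p) => Math.hypot(p.x - m.x, p.y - m.y) < 0.05) || null;
});

canvas.addEventListener('mousemove', function (e) {
  if (!dragged) return;
  const m = toClipSpace(e);
  dragged.x = m.x;   // update the data...
  dragged.y = m.y;
  drawScene(points); // ...then have WebGL re-render from the data
});

canvas.addEventListener('mouseup', function () {
  dragged = null;
});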
I have been struggling with this problem for a while, and being unable to solve it led me here. I'm fairly new to ActionScript (2.0). I want to do something similar to:
http://gnarshmallow.com/
where I want something to be painted behind a moving object in real time.
I would like some advice on how to approach the problem.
You need to use line drawing for this. You need two points, and you draw a line from one to the next. I recommend running it on every movement call: draw the line between the racer's location in the previous frame and its location in the current frame. For further reference, check out this page.
http://www.actionscript.org/resources/articles/730/1/Drawing-lines-with-AS2/Page1.html
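A minimal AS2 sketch of that idea (racer is an assumed movie clip on the stage, and the depth value is a placeholder):

// Empty clip that holds the trail; use a depth below the racer's
// so the trail is painted behind it (depth 1 is a placeholder)
var trail:MovieClip = this.createEmptyMovieClip("trail", 1);
trail.lineStyle(2, 0x333333, 100);

var prevX:Number = racer._x;
var prevY:Number = racer._y;

this.onEnterFrame = function() {
    // Connect last frame's position to this frame's position
    trail.moveTo(prevX, prevY);
    trail.lineTo(racer._x, racer._y);
    prevX = racer._x;
    prevY = racer._y;
};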
I have a 2D numpy array that I need to plot as an image with a certain scale. Within that image I need to be able to select a ROI or at least be able to display the mouse coordinates (of a specific target contained in the image). I tried using pyqtgraph but I can't seem to plot an image as a data source rather than just an image (i.e. can't seem to set axes, etc)... what would be the best way to do this, then? The image browser is compiled as a widget with a slider that scrolls through frames of the file; this widget is then embedded in a main window with a few table widgets.
I think imshow in matplotlib might work for you. It is easy to zoom, pan, and scale, and works easily with numpy.
(If this answer doesn't work for you, could you please refine your question. I'm unsure whether you're looking for any tool that will do the job, or something that works within the context of a GUI that you've already implemented. If the latter, I think you'll probably need to do the ROI yourself, by, say, selecting areas of the numpy array to plot, e.g. a[xmin:xmax, ymin:ymax].)
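A minimal sketch of that approach (the array, the extent values, and the ROI handling are placeholders):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

data = np.random.rand(200, 300)                  # placeholder 2D array

fig, ax = plt.subplots()
# extent maps array indices onto a physical scale (placeholder values)
ax.imshow(data, extent=[0, 30.0, 0, 20.0], origin='lower', aspect='auto')

def on_move(event):
    if event.inaxes is ax:                       # mouse position in data units
        print('x=%.2f, y=%.2f' % (event.xdata, event.ydata))

def on_select(eclick, erelease):                 # ROI corners in data units
    print('ROI:', (eclick.xdata, eclick.ydata), (erelease.xdata, erelease.ydata))

selector = RectangleSelector(ax, on_select)      # keep a reference alive
fig.canvas.mpl_connect('motion_notify_event', on_move)
plt.show()

Since your browser is already a Qt widget, the same figure can also be embedded with matplotlib's Qt canvas (FigureCanvasQTAgg) instead of shown via plt.show().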