I am currently doing some research into the following:
I have a relational database with a table that contains circles. To keep it simple, let us assume that all circles have the same radius and that each circle has an x and y position. I have some ORM plus server-side technology that spews out SVG of all the circles.
Users should be able to drag and drop additional circles onto the SVG ‘canvas’ that depicts the latest state of the circles database table. I am not too sure about the drag-and-drop functionality. I would prefer the drag-and-drop events to update the database rather than the client-side SVG (i.e. drag and drop issues Ajax calls to the backend). AngularJS (or jQuery or whatever) would take care of syncing the backend and the frontend's SVG.
Any pointers regarding relevant front-end technology and/or examples would be very much appreciated. Thanks.
Raphael handles drag and drop better than any UI library out there. Why not stick to its handlers? As for syncing, take a look at http://meteor.com if you want something big, or http://sharejs.org if you want something small. Both should do the job pretty well.
You can invoke a sync function from those drag-and-drop callbacks. If you only want to track final positions, do the sync on drop. If you want to track the movement, sync on move as well, but make sure to debounce it (using Sugar.js or Underscore's built-in _.debounce).
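For illustration, a minimal sketch of that wiring, assuming jQuery's $.post, Underscore's _.debounce, and a hypothetical /api/circles/:id endpoint on your backend:

```javascript
var paper = Raphael("canvas", 600, 400);
var circle = paper.circle(100, 100, 20);
circle.data("id", 42); // hypothetical database id for this circle

// Debounced so syncing on every move event doesn't flood the server.
var syncPosition = _.debounce(function (el) {
  $.post("/api/circles/" + el.data("id"), {
    x: el.attr("cx"),
    y: el.attr("cy")
  });
}, 250);

circle.drag(
  function (dx, dy) {               // onmove: follow the pointer
    this.attr({ cx: this.ox + dx, cy: this.oy + dy });
    syncPosition(this);             // only needed if you track movement
  },
  function () {                     // onstart: remember where we started
    this.ox = this.attr("cx");
    this.oy = this.attr("cy");
  },
  function () {                     // onend: always sync the final position
    syncPosition(this);
  }
);
```

On reload (or via your AngularJS binding), the server's SVG output then reflects the stored positions, so the database remains the single source of truth.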
I am making an app with a 32x64 grid where clicking any square lights it up. I want the user to be able to drag their finger and fill all the squares the finger touches.
I am basically setting the state of my component to an array of all of the squares; when a user touches the screen, it switches the state to a new array of squares (with one more filled) and re-renders the view.
With so many squares (components) on screen, and with the re-rendering, the performance is really bad on my phone. It is decent in the phone simulator on my computer, but could be better. I have tried adding a key to all of the squares in the array, and I changed the squares from regular Components to PureComponents; although that did help performance, it still could be a lot better.
After researching for a while, I decided I needed to reach out for guidance. I am targeting the iPhone, so do you think I should do the whole thing in Swift for better performance, or are there other ways to optimize the performance of a lot of components in React Native?
Keys are required when returning multiple nodes.
PureComponent could help, but there are two more techniques you can use (both sketched below):
Use simple, stateless functional components.
Introduce a <Row /> component to divide the updating work.
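A rough sketch of both techniques in React Native; the grid state is assumed to live in the parent as an array of row-arrays, and Square, Row, and onCellPress are illustrative names:

```javascript
import React, { PureComponent } from 'react';
import { View } from 'react-native';

// 1. A simple stateless functional component per cell -- no lifecycle cost.
const Square = ({ filled, onPress }) => (
  <View
    onTouchStart={onPress}
    style={{ width: 10, height: 10, backgroundColor: filled ? 'yellow' : 'black' }}
  />
);

// 2. A PureComponent row: its shallow prop comparison means a touch that
//    replaces one row's array re-renders 1 row of 32 cells, not all 2048.
//    onCellPress must be a stable reference (bind it once in the parent).
class Row extends PureComponent {
  render() {
    const { cells, rowIndex, onCellPress } = this.props;
    return (
      <View style={{ flexDirection: 'row' }}>
        {cells.map((filled, col) => (
          <Square key={col} filled={filled} onPress={() => onCellPress(rowIndex, col)} />
        ))}
      </View>
    );
  }
}
```

The key point is that a touch should replace only the touched row's array in state, so every other Row bails out of rendering in its shallow comparison.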
I am trying to build a Gantt control with Konva (does it make sense to use Konva for this?). I have tried to sketch the control below:
I was thinking of breaking down the Konva stage as follows:
One stage with 4 layers: activity names, timeline, activity views, and scrollbar view.
The scrollbar layer would contain a "custom control" mimicking a standard scrollbar control.
At this stage I have a couple of questions:
What would be the best approach for synchronizing the different layers from an event-handling perspective? For example, if the user clicks on the scrollbar's down-arrow shape, I would need to "scroll" all layers one unit down.
How does the Konva coordinate system work? Is the drawing of shapes done relative to the containing layer?
What's the difference between a layer and a group? Does it make more sense to use a group instead of layers?
I realize my questions are very broad in nature, but at this point I need to get the design right.
I am responding here rather than as a comment because I have more to say than a comment allows.
I have made Gantt charts with plain HTML elements, with another canvas lib, and with Konva. I used divs with jQuery first, and it was viable, but I felt it got quite complicated, and it ran out of steam in the area of zooming the view. You can't hide from the complexity, of course. Switching to HTML5 canvas, I realised that a lib like Konva would accelerate production. And zooming in canvas is simple.
As per #lavrton's comment, text rendering on the HTML5 canvas is primitive compared to GDI or other, more mature tech. My answer for the labels on tasks was to draw the text off-screen and then convert it to an image (sketched below), which works very well. For popup editing, I revert to HTML divs etc. I did not use animations in the Gantt, but I have elsewhere, and canvas should be fine; there are plenty of bouncy-ball / particle tests around to confirm that.
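For illustration, a bare-bones version of that trick using a plain off-screen canvas (the font, sizes, and taskLayer are placeholders); Konva.Image accepts a canvas element as its image source:

```javascript
// Draw a label once on an off-screen canvas, then reuse it as an image.
function makeLabelImage(text) {
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.font = '12px sans-serif';
  canvas.width = Math.ceil(ctx.measureText(text).width) + 4;
  canvas.height = 16;
  ctx.font = '12px sans-serif';   // resizing the canvas resets context state
  ctx.textBaseline = 'top';
  ctx.fillText(text, 2, 2);
  return canvas;
}

taskLayer.add(new Konva.Image({
  image: makeLabelImage('Pour foundations'),
  x: 40,
  y: 10
}));
taskLayer.draw();
```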
As a coding design suggestion, the data model and functionality of the Gantt stay the same whatever tech you draw it with. I recommend a layered approach where your calls to drawing functions are wrapped as methods of a drawing class, so that you can switch out the drawing tech itself should you feel the need. You insulate yourself from the choice of tech and/or library that way.
Turning to aspects of your question:
Layers are a useful concept. Physically, each layer is an HTML5 canvas element, so multiple layers in one diagram are really multiple canvases over the same stage. The benefit is in redrawing specific layers instead of the entire canvas, which brings performance savings. But mostly you can ignore the physical side and just get on and use the concept, which works well.
Groups: a group is a collection of shapes on a layer. If you have to draw things made of many shapes, grouping them is very useful because you can move the group as a whole, hide it, delete it, etc. You might, for example, make each taskbar, composed of at least a rectangle and a text, a group. One consideration is that the location and size of a group are those of the bounding rectangle enclosing the shapes within it, which can cause some confusion until you work out an approach. You will find yourself using layers and groups, but mostly groups, for drawing controls.
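For example, a minimal taskbar group (activityLayer and all coordinates are placeholders); note that children are positioned relative to the group's origin, which also answers the coordinate-system question above:

```javascript
var task = new Konva.Group({ x: 120, y: 40, draggable: true });

// Children use coordinates relative to the group's origin.
task.add(new Konva.Rect({ x: 0, y: 0, width: 180, height: 22, fill: '#7fb3d5' }));
task.add(new Konva.Text({ x: 4, y: 5, text: 'Pour foundations', fontSize: 12 }));

activityLayer.add(task);   // drag, hide, or delete the bar as one unit
activityLayer.draw();
```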
Zooming / scaling: this is easy with a canvas. Less easy is the math for how to change the offset to keep the same view as you zoom, but again it is achievable (see the sketch below).
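A sketch of the usual wheel-zoom pattern, where scaleBy is an arbitrary zoom step:

```javascript
stage.on('wheel', function (e) {
  e.evt.preventDefault();
  var scaleBy = 1.1;
  var oldScale = stage.scaleX();
  var pointer = stage.getPointerPosition();

  // Point under the cursor in stage coordinates, before zooming.
  var mousePointTo = {
    x: (pointer.x - stage.x()) / oldScale,
    y: (pointer.y - stage.y()) / oldScale
  };

  var newScale = e.evt.deltaY > 0 ? oldScale / scaleBy : oldScale * scaleBy;
  stage.scale({ x: newScale, y: newScale });

  // Shift the stage so the same point stays under the cursor.
  stage.position({
    x: pointer.x - mousePointTo.x * newScale,
    y: pointer.y - mousePointTo.y * newScale
  });
  stage.batchDraw();
});
```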
Synchronised scrolling of the layers is not going to take any time to develop: just set the y-position of each layer.
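Something like this, where rowHeight and the layer names are placeholders:

```javascript
// Scroll all row-aligned layers together by the same number of rows.
function scrollRows(rowDelta) {
  [activityNameLayer, activityViewLayer].forEach(function (layer) {
    layer.y(layer.y() - rowDelta * rowHeight);
    layer.batchDraw();
  });
}
```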
Drawing the grid of rows for activities and columns for days/weeks/months/etc. should not be underestimated as a task, but as you develop it you will learn the fundamentals of working with Konva.
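A bare-bones starting point (all dimensions and counts are placeholders):

```javascript
// Horizontal lines per activity row, vertical lines per day column.
for (var r = 0; r <= rowCount; r++) {
  gridLayer.add(new Konva.Line({
    points: [0, r * rowHeight, chartWidth, r * rowHeight],
    stroke: '#ddd', strokeWidth: 1
  }));
}
for (var d = 0; d <= dayCount; d++) {
  gridLayer.add(new Konva.Line({
    points: [d * dayWidth, 0, d * dayWidth, rowCount * rowHeight],
    stroke: '#eee', strokeWidth: 1
  }));
}
gridLayer.draw();
```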
Final point - the docs and examples for Konva could be a bit better, but the community support here and at https://konvajs.github.io/docs/ is good, and the Konva source code is also at that site so you can delve right in to understand what is happening, though you do not need to do that at all if it is not your thing.
I'd like to use a more sophisticated graphics or GUI widget library in my Hammerspoon config file, in order to get user input and do more advanced drawing on the screen than Hammerspoon allows (as far as I can tell) by default. I'm new to Lua and Hammerspoon, and so far I've been unable to figure out how to get this working. (Simple drawing on-screen is not a problem, so examples of geometric shapes are not helpful. I can do that already with no difficulty.)
I initially thought one of the Lua libraries designed for building games would have more than I could possibly need, and looked into love2d, but it did not appear to be usable with Hammerspoon in any straightforward manner.
To give two concrete examples of things I'd like to do:
I'd like to display a dialog box in which the user can enter two values, to specify how many rows and how many columns they want in their screen grid. A native Cocoa dialog would be better, but something graphically drawn on screen with Lua would be fine, as long as the details of the image are abstracted away for me, and I can just define the text and fields and buttons in the dialog.
I'd like to draw a dotted-line rectangle with curved corners and a shadow around specified grid segments as a preview of where a window would be moved if the user completed a certain command.
There's a lot more, but anything that allows me to do those things should allow me to do anything else I want.
We don't yet have a good answer for generating dialog boxes, although it is possible to do it with AppleScript, which you can call from Hammerspoon with hs.osascript.
As for drawing things like dotted-line rectangles, we can't currently do that, but if you'd like to file an issue on our GitHub project, it's something we can look at for a future release :)
While modifying a feature by dragging (e.g. moving a point of a polygon), I would like to show the distances of the modified vertices. I intend to use an overlay with some div elements at the proper positions on the map (much like the measure example: http://openlayers.org/en/v3.12.1/examples/measure.html?q=measure).
What I think is needed for this is to attach to the proper events (modifystart, modifyend, change:geometry, ...) and to be able to determine which vertices are being modified. For each vertex/segment, I can then put such a label on the map.
What is the best method to achieve this (mainly, determining which vertices are being modified)? It seems difficult to achieve.
Some options I was investigating:
The easiest way would be to have access to the dragSegments_ member of the modify interaction. Sadly, this does not seem to be exposed by the API. It would have been nice, for example, if some modifydrag event existed that contained this dragSegments_ collection, or if it were already a member of the existing ModifyEvent (I am not sure whether the dragSegments_ collection is already filled in at the time the modifystart event is raised).
I know I can listen to the modifystart and modifyend events. These have a mapBrowserPointerEvent member, which I can use to get the coordinate of the mouse cursor. However, I would then have to write logic to find all the segments of the modified features that match that coordinate. In effect, I would be rewriting the same code the modify interaction uses to build the dragSegments_ collection, and then hoping that I end up with the same collection (and keeping this up to date with future OL3 code changes).
I can also listen to the change:geometry events of the features being modified. In that case I would have to scan the new geometry, compare each point with the original version (stored as a copy when modification started), and see which coordinates changed. I am not sure how good an approach this is (scanning through all the points and comparing them one by one looks ugly at first sight). A rough sketch of this option follows.
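For what it's worth, a sketch of that third option, assuming a single polygon is being modified and no vertices are inserted or removed mid-drag:

```javascript
var originalCoords;

modify.on('modifystart', function (evt) {
  var geom = evt.features.item(0).getGeometry();
  // Deep-copy the outer ring so later mutations don't alias it.
  originalCoords = geom.getCoordinates()[0].map(function (c) { return c.slice(); });
  geom.on('change', onGeometryChange);
});

modify.on('modifyend', function (evt) {
  evt.features.item(0).getGeometry().un('change', onGeometryChange);
});

function onGeometryChange(evt) {
  var coords = evt.target.getCoordinates()[0];
  coords.forEach(function (c, i) {
    var o = originalCoords[i];
    if (o && (c[0] !== o[0] || c[1] !== o[1])) {
      // Vertex i is being dragged; position a distance-label overlay at c.
    }
  });
}
```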
Is there an elegant solution for this?
I have a requirement in my product to create a widget for a workflow-style process.
As you can see in the attached image, I am looking for a similar kind of chart.
As we already have FusionCharts, is it possible to build something similar with it?
Since you already have FusionCharts, I would recommend using the Drag Node Chart. The chart is exactly the type you need, and the dragging functionality can easily be turned off!
http://www.fusioncharts.com/charts/drag-node-charts/
In fact, one of the examples on that page looks very close to the image in your question, and as far as I can tell, the chart can be customised to look almost the same.
The only downside I can think of is that you will have to calculate the positions of your workflow nodes manually (the connectors will connect automatically). However, the positions can easily be computed by a simple function run on the beforeDataUpdate event fired by the chart.
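A rough sketch of such a layout function; the per-node fields (label, x, y, shape, width, height) are assumed from the drag-node chart's dataset format, and the spacing numbers are arbitrary:

```javascript
// Spread the workflow nodes evenly along one horizontal lane,
// using x/y values on the chart's axes.
function layoutNodes(labels) {
  return labels.map(function (label, i) {
    return {
      label: label,
      x: 10 + i * (80 / Math.max(labels.length - 1, 1)),
      y: 50,
      shape: 'rectangle',
      width: 12,
      height: 8
    };
  });
}

var dataset = [{ data: layoutNodes(['Start', 'Review', 'Approve', 'Done']) }];
```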
PS: Using the annotations feature, you can dynamically position text labels, images, and other such items around your process diagram.