Ferrum mouse.up doesn't work for click & drag method - ruby-on-rails

I'm writing a test which runs over Cuprite::Ferrum, where I need to click an element, drag it, and drop it below another element on the page. This works fine when I do it manually, but when I try to put it in a test it almost works, but not quite.
I was unable to find a drag-and-drop API for Ferrum or Cuprite, so I created something like:
def click_and_drag(draggable, droppable, offset_x, offset_y)
  x1, y1 = draggable.native.node.find_position
  x2, y2 = droppable.native.node.find_position
  mouse = page.driver.browser.mouse
  mouse.move(x: x1, y: y1)
  mouse.down
  mouse.move(x: x2 + offset_x, y: y2 + offset_y)
  mouse.up
end
Pretty simple. The element that needs to be picked up is found with draggable = page.find(element), and it needs to be dropped below the droppable = page.find(element).
Everything goes well until the method gets to the mouse.up part, where the dragged element should be dropped and put in its new place, but instead it just snaps back to its starting position.
I am referring to the Ferrum docs at https://www.rubydoc.info/gems/ferrum/0.5/Ferrum/Mouse but can't seem to find the answer.

So, the method works and all is well. The problem was with offset_x and offset_y: if you somehow find this and want to use it, keep that in mind and keep them small!
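For reference, here is a minimal usage sketch of the helper above. The selectors are hypothetical placeholders for your own markup; the only real point is that the offsets stay small:
# Selectors below are made up; replace them with your own.
draggable = page.find("[data-test='item-to-move']")
droppable = page.find("[data-test='drop-target']")

# Small offsets (a few pixels) were the fix here: large offsets were what
# made the element snap back after mouse.up.
click_and_drag(draggable, droppable, 5, 5)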

Related

How to implement an "Excel like" "add cell" into a canvas

I'm sorry for the title, which maybe doesn't describe properly what I would like to achieve. I'm starting to develop new software which should present a "grid" to the user, who can manipulate it by adding "rows" or "columns" at any point of this "grid". The problem is that I'm not sure a real grid is the suitable solution, because there are some "graphical" requirements like changing individual cell sizes, nesting cells, zooming/stretching, etc. So I was starting to analyze a solution in WPF that uses DrawingVisual elements (for performance reasons).
I'm able to draw the "grid" in the desired way. I'm also able to add rows or columns at the edges of the drawing. But I can't figure out any solution to modify it in the "middle" (except redrawing the whole thing). I'll explain better with an image: on the left there's the "grid" after it has been drawn for the first time; on the right there's the new grid that should be drawn after the user performs an operation.
A more complex example is the following, where the "row" is added inside an existing cell, causing all the cells to "grow".
As I said, I know I could redraw the whole thing, but I'm concerned about performance. Keep in mind that in a real scenario there could be thousands of blocks and many nesting levels.
Any suggestion is appreciated. The use of WPF is not mandatory, but it will be a desktop app in .NET 5.0. The use of DrawingVisual isn't mandatory either; I can evaluate any solution. Thank you.
A simple technique is to keep the positions of the columns, relative to the left of the canvas, in a variable when you first draw the tables. When you want to add a new column, you can crop the image at that point and, in a larger canvas, copy the left and right pieces and only draw the new middle column from scratch.
Of course, the coordinates of each column could instead be calculated with image-processing techniques, but that reduces performance.
I wrote this code in Python, but I do not think it would be difficult to convert it to C#.
import cv2
import numpy as np

# copy one image over another at position (x, y)
def imdraw(im, over, x, y):
    y1, y2 = y, y + over.shape[0]
    x1, x2 = x, x + over.shape[1]
    for c in range(0, 3):
        im[y1:y2, x1:x2, c] = over[:, :, c]
    return im

pt = 220   # x position where the new column is inserted
col = 300  # width of the new column
off = 15   # inset for the rectangle drawn in the new column

im = cv2.imread("grid.png", 1)
h, w = im.shape[:2]
crop_left = im[0 : 0 + h, 0:pt]
crop_right = im[0 : 0 + h, pt:w]
cv2.imwrite("left.jpg", crop_left)
cv2.imwrite("right.jpg", crop_right)

# Create an empty image with a white background
out = 255 * np.ones(shape=[h, w + col, 3], dtype=np.uint8)
out = imdraw(out, crop_left, 0, 0)
out = imdraw(out, crop_right, pt + col, 0)
out = cv2.rectangle(
    out,
    pt1=(pt + off, off),
    pt2=(pt + col - off, h - off),
    color=(128, 0, 200),
    thickness=5,
    lineType=cv2.LINE_AA,
)
cv2.imwrite("out.jpg", out)
Output: (image not included)

How to get window decoration pixel size in Lua

I am using rdesktop with seamlessrdp. This way I can open Windows apps on my Linux machines. I also added devilspie2 to the mix so I could control the window decorations; devilspie2 uses Lua for its configuration. I made everything work. The only issue left is to move the opening (dialog) windows by a couple of pixels, because the VNC windows appear as if they had decorations (but without them). I got the code working by hard-coding the number of pixels to move by. The issue is that we have more than one distro here, and they have different pixel sizes for their window decorations.
What I want is to GET the decoration size in pixels instead of hard-coding it, so the script works for all my distros.
Here is the piece of code that does it at the moment:
if get_window_class() == "SeamlessRDP" then
    undecorate_window();
    --x-1 and y-28 works for one distro but for the other I need to use x-6 and y-27
    if get_window_type() == "WINDOW_TYPE_DIALOG" then
        x, y = xy();
        xy(x - 1, y - 28);
    end
end
As you can see from the script, it would be much better if I could somehow query the size of the window decorations and use that, rather than hard-coded pixel values.
EDIT (ANSWER):
Even though I found the answer before the following post, I wanted to accept it anyway because it did show the right path to follow. I am only commenting further here to show the full answer:
--get x and y's for decorated and non-decorated windows
x1, y1, width1, height1 = get_window_geometry();
x2, y2, width2, height2 = get_window_client_geometry();

--calculate pixels to slide window
xpixel = x2 - x1;
ypixel = y2 - y1;

--check if class is seamlessrdp
if get_window_class() == "SeamlessRDP" then
    undecorate_window();
    --if window is a dialog then move it
    if get_window_type() == "WINDOW_TYPE_DIALOG" then
        xy(x1 - xpixel, y1 - ypixel);
    end
end
devilspie2 provides only two ways to get the window size, get_window_geometry and get_window_client_geometry, where the latter excludes the window borders. If that does not work for you, you can create a file with a table of all the values to make them easily editable. You could also use the window class names as table keys, if possible, to make it easier to use.
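A rough sketch of that fallback idea, assuming the same devilspie2 functions used above (the second class name is made up; the 1/28 offsets are simply the values from the question):
--Hypothetical per-class decoration offsets, kept in one editable place
local decoration_offsets = {
    ["SeamlessRDP"] = { x = 1, y = 28 },
    --["SomeOtherClass"] = { x = 6, y = 27 },  -- example entry only
}

local class = get_window_class()
local off = decoration_offsets[class]
if class == "SeamlessRDP" and off ~= nil then
    undecorate_window();
    if get_window_type() == "WINDOW_TYPE_DIALOG" then
        local x, y = xy();
        xy(x - off.x, y - off.y);
    end
end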

trying to create a bounding box by click and drag in a web component

I am dealing with event handlers and getting the start and end position of the mouse, so that I can create a selection box.
On mousedown, I store the current position: new Point(event.target.clientLeft, event.target.clientTop); which seems to work for that location when I position the selection div accordingly.
The next step is where everything seems to go wrong. In the mousemove event, I am trying to get the coordinates of the mouse so I can use the difference to define the height and width of the bounding box. It seems that everything is off by the coordinates of the web component I had created.
How should I go about this?
I have been dabbling with the event more, trying to get the position of the mouse, but I think what happens is that my start point is relative to the web component I created, not an absolute position.
Has anyone else figured out how to do this correctly? I have been positioning things as absolute, but they are not rendering correctly.
As a side note, if I could subtract the absolute position of the web component this is in, I think it would render the height and width correctly.
You might have an easier time using global position information rather than target-relative position. You can get the global position point from a MouseEvent directly with event.screen.
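For instance, a minimal sketch in the same Dart style as the code below (the field and handler names are made up) that records the global start point on mousedown and derives the box size on mousemove:
import 'dart:html';

Point _start; // set on mousedown; hypothetical field name

void onSelectionStart(MouseEvent event) {
  // event.screen is in global screen coordinates, so it is not offset by
  // wherever the web component happens to sit on the page.
  _start = event.screen;
}

void onSelectionDrag(MouseEvent event) {
  if (_start == null) return;
  num width = (event.screen.x - _start.x).abs();
  num height = (event.screen.y - _start.y).abs();
  // use width and height to size the selection div here
}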
Since a PolymerElement uses HtmlElements, you have access to the full suite of built-in functions. For example:
var top = selectedHtml.getBoundingClientRect().top - getBoundingClientRect().top + selectedHtml.clientHeight;
var left = selectedHtml.getBoundingClientRect().left - getBoundingClientRect().left + selectedHtml.clientWidth;
Point coord = new Point(left,top);
then you can just determine the distance from that point to get your width/height:
var top = selectedHtml.getBoundingClientRect().top - getBoundingClientRect().top + selectedHtml.clientHeight;
var left = selectedHtml.getBoundingClientRect().left - getBoundingClientRect().left + selectedHtml.clientWidth;
var h = top - coord.y;
var w = left - coord.x;
and then apply it to your Div.
// style properties are strings, so append the unit
_item.style.top = "${top}px";
_item.style.left = "${left}px";
_item.style.width = "${w}px";
_item.style.height = "${h}px";

Changing a moving object's direction of travel in Corona

I'm new to Corona and looking for a little help manipulating moving objects.
Basically I want a setup where, when I click on a moving object, a dialog box pops up giving me the option to change the speed of the object and its vector of travel. I'm pretty sure I can figure out the event handling and the dialog, but I'm stuck on simply changing the direction of travel to the new vector.
In a simple example, I have a rect moving up the screen as follows:
obj1 = display.newRect(500, 800, 10, 40)
transition.to(obj1,{x=500, y = 100, time = 40000})
I know I can change the speed by adjusting the time, but if I use
obj1:rotate(30)
to turn the object 30 degrees, how do I make it travel in the new direction?
Should I be using physics (a linear impulse, for example) instead of transitions?
Apologies if this is a stupid question but I have searched without success for a solution.
This sounds like what you are trying to do. You would have to modify bits to fit your code, but this is a working example; if you copy it into a new main.lua file and run it, you can see how it works (tap to rotate the object).
local obj = display.newRect(50, 50, 10, 40)
local SPEED = 1

local function move(event)
    obj.x = obj.x + math.cos(math.rad(obj.rotation)) * SPEED
    obj.y = obj.y + math.sin(math.rad(obj.rotation)) * SPEED
end

local function rotate(event)
    obj.rotation = obj.rotation + 45
end

Runtime:addEventListener("enterFrame", move)
Runtime:addEventListener("tap", rotate)
Basically I used the "enterFrame" event to 'move' the rectangle, recalculating the x and y of the object's location every frame from its rotation (which is easy enough to modify).

How to make css3 scaled elements draggable

I've noticed that the CSS3 scale transform does really bad things to jQuery UI, specifically to sortable. The problem is that the mouse still needs to move as far as if the elements were not scaled. Check out this jsfiddle example.
Does anybody have thoughts on how to fix this? Is it possible to change the speed at which the mouse moves? I'm going to look into HTML5 native drag and drop next and try to write my own sortable function.
UPDATE:
jQuery UI draggable works OK with CSS3 scaled elements; here is a fiddle for it.
It turns out the real answer does not require writing special move functions. jQuery UI sortable can be used as long as the items being sorted are wrapped in a div of the appropriate size with overflow hidden. Check this jsfiddle for an example.
The problem was that I had forced the scaled divs to be close to one another using a negative margin, so when I started to drag an item it was taking up the wrong amount of space. With the scaled item wrapped in a non-scaled div, everything works as expected.
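A rough sketch of the wrapper idea (class names and sizes here are made up; the jsfiddle above is the real example): each sortable item is an unscaled, fixed-size wrapper with overflow hidden, and only its contents carry the scale transform.
/* Hypothetical sizes; the wrapper is what jQuery UI sortable measures. */
.item-wrapper {
  width: 120px;
  height: 80px;
  overflow: hidden;
}
.item-wrapper .scaled-content {
  transform: scale(0.5);
  transform-origin: top left;
}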
I don't have a solution for working with jQuery UI, but I do have a solution for working with Raphael and, by extension, other SVG objects.
First, using Chrome or Firefox, go drag the dots around in this jsfiddle. Be sure to drag both dots and to use the slider at the bottom to change the scale of the box. The slider has a default scale range of 0.4 to 1.2. In Chrome the slider is actually a slider, but in Firefox it shows up as a textbox; if you are using Firefox, enter values that are 100 times the scale, i.e. 70 => 0.7.
What you should have just experienced is that the black dot tracks with the mouse regardless of the scale and the red dot only tracks when the scale is 1.0.
The reason for this is the two dots are using different 'onMove' functions. The black dot moves according to 1/scale while the red dot moves normally.
var moveCorrected = function (dx, dy) {
    // move will be called with dx and dy
    this.attr({
        cx: this.ox + (1 / scale) * dx,
        cy: this.oy + (1 / scale) * dy
    });
};

var move = function (dx, dy) {
    // move will be called with dx and dy
    this.attr({
        cx: this.ox + dx,
        cy: this.oy + dy
    });
};
So, in answer to my original question: you can't (and shouldn't be able to) change how the mouse moves, since that is clearly user-controlled behavior, but you can change the move function of the object being dragged so that it tracks with the mouse.
