react-konva: How to set a Shape's width and height to fill its Group container? - konvajs

I want a Text and a Rect to share the same animated movement, so I put them in a Group and apply a tween to the Group. However, while the x and y props are effectively inherited by the children, width and height are not, which means the Rect's width and height don't tween smoothly. Is there any way to keep the Rect's width and height equal to 100% of its Group container's width and height at all times, even while they are changing?
const { scaleFactor, DynamicData, isPause } = props;
const groupRef = useRef<Konva.Group>(null);
const start = DynamicData.startData;
const end = DynamicData.endData;
useEffect(() => {
  if (start && end && groupRef.current) {
    const animation = new Tween({
      node: groupRef.current,
      duration: DynamicData.endTime - DynamicData.startTime,
      x: scaleFactor * end.x,
      y: scaleFactor * end.y,
      width: scaleFactor * end.width,
      height: scaleFactor * end.height,
      easing: Easings.Linear,
    });
    if (isPause) {
      animation.pause();
    } else {
      animation.play();
    }
  }
});
return (
  <>
    {!_.isUndefined(end) && (
      <Group
        ref={groupRef}
        x={scaleFactor * start.x}
        y={scaleFactor * start.y}
        width={scaleFactor * start.width}
        height={scaleFactor * start.height}
      >
        <Text
          text={start.concept}
        />
        <Rect
          stroke={start.stroke}
        />
      </Group>
    )}
  </>
);
};
Update: for now I add a second Tween on the Rect to track its width and height changes:
const sizeAnimation = new Tween({
  node: rectRef.current,
  duration: DynamicData.endTime - DynamicData.startTime,
  width: scaleFactor * end!.width,
  height: scaleFactor * end!.height,
  easing: Easings.Linear,
});
if (isPause) {
  sizeAnimation.pause();
} else {
  sizeAnimation.play();
}

The width and height properties of Konva.Group do nothing; they don't affect rendering.
You have to add another tween for the rectangle, as in the sketch below.
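A minimal sketch of that approach, assuming a rectRef (e.g. useRef<Konva.Rect>(null)) is attached to the <Rect> and Tween/Easings are the same imports used in the question. Both tweens share the same duration and easing, so the group's position and the rectangle's size animate in lockstep:
useEffect(() => {
  if (!start || !end || !groupRef.current || !rectRef.current) return;

  const duration = DynamicData.endTime - DynamicData.startTime;

  // Tween the group's position; the children move with the group.
  const moveTween = new Tween({
    node: groupRef.current,
    duration,
    x: scaleFactor * end.x,
    y: scaleFactor * end.y,
    easing: Easings.Linear,
  });

  // Tween the Rect's size in parallel, since Group width/height are not rendered.
  const sizeTween = new Tween({
    node: rectRef.current,
    duration,
    width: scaleFactor * end.width,
    height: scaleFactor * end.height,
    easing: Easings.Linear,
  });

  if (isPause) {
    moveTween.pause();
    sizeTween.pause();
  } else {
    moveTween.play();
    sizeTween.play();
  }

  // Dispose of the tweens so they don't accumulate across re-renders.
  return () => {
    moveTween.destroy();
    sizeTween.destroy();
  };
}, [start, end, isPause, scaleFactor, DynamicData]);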

Related

Distorting images using FabricJS filters and custom controls, by dragging the corner control points image resizes from center

I have created a subclass in Fabric.js 4.3.0 extending fabric.Image; this lets me change the render function so that the image always fits in the bounding box.
I have also created a custom filter for Fabric with which, given 4 corner coordinates, I can distort the image, similar to Photoshop's free transform -> distort tool.
While my code works, the issue is that when I drag the corner controls, the image always resizes from its center, moving the other control points as well.
I am trying to follow the instructions on how to resize objects in Fabric using custom control points; those instructions work on polygons and other shapes, but they do not yield the required result with images.
The result I want to achieve is that when dragging one of the green control points, the image distorts, but the image and the other control points stay in their positions without moving, similar to what you see here: https://youtu.be/Pn-9qFNM6Zg?t=274
Here is a JSFIDDLE for the demo: https://jsfiddle.net/human_a/p6d71skm/
fabric.textureSize = 4096;
// Set default filter backend
fabric.filterBackend = new fabric.WebglFilterBackend();
fabric.isWebglSupported(fabric.textureSize);
fabric.Image.filters.Perspective = class extends fabric.Image.filters.BaseFilter {
/**
* Constructor
* @param {Object} [options] Options object
*/
constructor(options) {
super();
if (options) this.setOptions(options);
this.applyPixelRatio();
}
type = 'Perspective';
pixelRatio = fabric.devicePixelRatio;
bounds = {width: 0, height: 0, minX: 0, maxX: 0, minY: 0, maxY: 0};
hasRelativeCoordinates = true;
/**
* Array of attributes to send with buffers. do not modify
* @private
*//** @ts-ignore */
vertexSource = `
precision mediump float;
attribute vec2 aPosition;
attribute vec2 aUvs;
uniform float uStepW;
uniform float uStepH;
varying vec2 vUvs;
vec2 uResolution;
void main() {
vUvs = aUvs;
uResolution = vec2(uStepW, uStepH);
gl_Position = vec4(uResolution * aPosition * 2.0 - 1.0, 0.0, 1.0);
}
`;
fragmentSource = `
precision mediump float;
varying vec2 vUvs;
uniform sampler2D uSampler;
void main() {
gl_FragColor = texture2D(uSampler, vUvs);
}
`;
/**
* Return a map of attribute names to WebGLAttributeLocation objects.
*
* @param {WebGLRenderingContext} gl The canvas context used to compile the shader program.
* @param {WebGLShaderProgram} program The shader program from which to take attribute locations.
* @returns {Object} A map of attribute names to attribute locations.
*/
getAttributeLocations(gl, program) {
return {
aPosition: gl.getAttribLocation(program, 'aPosition'),
aUvs: gl.getAttribLocation(program, 'aUvs'),
};
}
/**
* Send attribute data from this filter to its shader program on the GPU.
*
* @param {WebGLRenderingContext} gl The canvas context used to compile the shader program.
* @param {Object} attributeLocations A map of shader attribute names to their locations.
*/
sendAttributeData(gl, attributeLocations, data, type = 'aPosition') {
const attributeLocation = attributeLocations[type];
if (gl[type + 'vertexBuffer'] == null) {
gl[type + 'vertexBuffer'] = gl.createBuffer();
}
gl.bindBuffer(gl.ARRAY_BUFFER, gl[type+'vertexBuffer']);
gl.enableVertexAttribArray(attributeLocation);
gl.vertexAttribPointer(attributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
}
generateSurface() {
const corners = this.perspectiveCoords;
const surface = verb.geom.NurbsSurface.byCorners(...corners);
const tess = surface.tessellate();
return tess;
}
/**
* Apply the resize filter to the image
* Determines whether to use WebGL or Canvas2D based on the options.webgl flag.
*
* @param {Object} options
* @param {Number} options.passes The number of filters remaining to be executed
* @param {Boolean} options.webgl Whether to use webgl to render the filter.
* @param {WebGLTexture} options.sourceTexture The texture setup as the source to be filtered.
* @param {WebGLTexture} options.targetTexture The texture where filtered output should be drawn.
* @param {WebGLRenderingContext} options.context The GL context used for rendering.
* @param {Object} options.programCache A map of compiled shader programs, keyed by filter type.
*/
applyTo(options) {
if (options.webgl) {
const { width, height } = this.getPerspectiveBounds();
options.context.canvas.width = width;
options.context.canvas.height = height;
options.destinationWidth = width;
options.destinationHeight = height;
this.hasRelativeCoordinates && this.calculateCoordsByCorners();
this._setupFrameBuffer(options);
this.applyToWebGL(options);
this._swapTextures(options);
}
}
applyPixelRatio(coords = this.perspectiveCoords) {
for(let i = 0; i < coords.length; i++) {
coords[i][0] *= this.pixelRatio;
coords[i][1] *= this.pixelRatio;
}
return coords;
}
getPerspectiveBounds(coords = this.perspectiveCoords) {
coords = this.perspectiveCoords.slice().map(c => (
{
x: c[0],
y: c[1],
}
));
this.bounds.minX = fabric.util.array.min(coords, 'x') || 0;
this.bounds.minY = fabric.util.array.min(coords, 'y') || 0;
this.bounds.maxX = fabric.util.array.max(coords, 'x') || 0;
this.bounds.maxY = fabric.util.array.max(coords, 'y') || 0;
this.bounds.width = Math.abs(this.bounds.maxX - this.bounds.minX);
this.bounds.height = Math.abs(this.bounds.maxY - this.bounds.minY);
return {
width: this.bounds.width,
height: this.bounds.height,
minX: this.bounds.minX,
maxX: this.bounds.maxX,
minY: this.bounds.minY,
maxY: this.bounds.maxY,
};
}
/**
* @description coordinates are coming in relative to mockup item sections
* the following function normalizes the coords based on canvas corners
*
* @param {number[]} coords
*/
calculateCoordsByCorners(coords = this.perspectiveCoords) {
for(let i = 0; i < coords.length; i++) {
coords[i][0] -= this.bounds.minX;
coords[i][1] -= this.bounds.minY;
}
}
/**
* Apply this filter using webgl.
*
* @param {Object} options
* @param {Number} options.passes The number of filters remaining to be executed
* @param {Boolean} options.webgl Whether to use webgl to render the filter.
* @param {WebGLTexture} options.originalTexture The texture of the original input image.
* @param {WebGLTexture} options.sourceTexture The texture setup as the source to be filtered.
* @param {WebGLTexture} options.targetTexture The texture where filtered output should be drawn.
* @param {WebGLRenderingContext} options.context The GL context used for rendering.
* @param {Object} options.programCache A map of compiled shader programs, keyed by filter type.
*/
applyToWebGL(options) {
const gl = options.context;
const shader = this.retrieveShader(options);
const tess = this.generateSurface(options.sourceWidth, options.sourceHeight);
const indices = new Uint16Array(_.flatten(tess.faces));
// Clear the canvas first
this.clear(gl); // !important
// bind texture buffer
this.bindTexture(gl, options);
gl.useProgram(shader.program);
// create the buffer
this.indexBuffer(gl, indices);
this.sendAttributeData(gl, shader.attributeLocations, new Float32Array(_.flatten(tess.points)), 'aPosition');
this.sendAttributeData(gl, shader.attributeLocations, new Float32Array(_.flatten(tess.uvs)), 'aUvs');
gl.uniform1f(shader.uniformLocations.uStepW, 1 / gl.canvas.width);
gl.uniform1f(shader.uniformLocations.uStepH, 1 / gl.canvas.height);
this.sendUniformData(gl, shader.uniformLocations);
gl.viewport(0, 0, options.destinationWidth, options.destinationHeight);
// enable indices up to 4294967296 for webGL 1.0
gl.getExtension('OES_element_index_uint');
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
}
clear(gl) {
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
}
bindTexture(gl, options) {
if (options.pass === 0 && options.originalTexture) {
gl.bindTexture(gl.TEXTURE_2D, options.originalTexture);
} else {
gl.bindTexture(gl.TEXTURE_2D, options.sourceTexture);
}
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
}
indexBuffer(gl, data) {
const indexBuffer = gl.createBuffer();
// make this buffer the current 'ELEMENT_ARRAY_BUFFER'
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
// Fill the current element array buffer with data
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, data, gl.STATIC_DRAW);
}
};
/**
* Returns filter instance from an object representation
* @static
* @param {Object} object Object to create an instance from
* @param {function} [callback] to be invoked after filter creation
* @return {fabric.Image.filters.Perspective} Instance of fabric.Image.filters.Perspective
*/
fabric.Image.filters.Perspective.fromObject = fabric.Image.filters.BaseFilter.fromObject;
/**
* Photo subclass
* @class fabric.Photo
* @extends fabric.Image
* @return {fabric.Photo} thisArg
*
*/
fabric.Photo = class extends fabric.Image {
type = 'photo';
repeat = 'no-repeat';
fill = 'transparent';
initPerspective = true;
cacheProperties = fabric.Image.prototype.cacheProperties.concat('perspectiveCoords');
constructor(src, options) {
super(options);
if (options) this.setOptions(options);
this.on('added', () => {
const image = new Image();
image.setAttribute('crossorigin', 'anonymous');
image.onload = () => {
this._initElement(image, options);
this.width = image.width / 2;
this.height = image.height / 2;
this.loaded = true;
this.setCoords();
this.fire('image:loaded');
};
image.src = src;
this.on('image:loaded', () => {
!this.perspectiveCoords && this.getInitialPerspective();
this.togglePerspective();
this.canvas.requestRenderAll();
});
});
}
cacheProperties = fabric.Image.prototype.cacheProperties.concat('perspectiveCoords');
/**
* @private
* @param {CanvasRenderingContext2D} ctx Context to render on
*//** @ts-ignore */
_render(ctx) {
fabric.util.setImageSmoothing(ctx, this.imageSmoothing);
if (this.isMoving !== true && this.resizeFilter && this._needsResize()) {
this.applyResizeFilters();
}
this._stroke(ctx);
this._renderPaintInOrder(ctx);
}
/**
* @private
* @param {CanvasRenderingContext2D} ctx Context to render on
*//** @ts-ignore */
_renderFill(ctx) {
var elementToDraw = this._element;
if (!elementToDraw) return;
ctx.save();
const elWidth = elementToDraw.naturalWidth || elementToDraw.width;
const elHeight = elementToDraw.naturalHeight || elementToDraw.height;
const width = this.width;
const height = this.height;
ctx.translate(-width / 2, -height / 2);
// get the scale
const scale = Math.min(width / elWidth, height / elHeight);
// get the top left position of the image
const x = (width / 2) - (elWidth / 2) * scale;
const y = (height / 2) - (elHeight / 2) * scale;
ctx.drawImage(elementToDraw, x, y, elWidth * scale, elHeight * scale);
ctx.restore();
}
togglePerspective(mode = true) {
this.set('perspectiveMode', mode);
// this.set('hasBorders', !mode);
if (mode === true) {
this.set('layout', 'fit');
var lastControl = this.perspectiveCoords.length - 1;
this.controls = this.perspectiveCoords.reduce((acc, coord, index) => {
const anchorIndex = index > 0 ? index - 1 : lastControl;
let name = `prs${index + 1}`;
acc[name] = new fabric.Control({
name,
x: -0.5,
y: -0.5,
actionHandler: this._actionWrapper(anchorIndex, (_, transform, x, y) => {
const target = transform.target;
const localPoint = target.toLocalPoint(new fabric.Point(x, y), 'left', 'top');
coord[0] = localPoint.x / target.scaleX * fabric.devicePixelRatio;
coord[1] = localPoint.y / target.scaleY * fabric.devicePixelRatio;
target.setCoords();
target.applyFilters();
return true;
}),
positionHandler: function (dim, finalMatrix, fabricObject) {
const zoom = fabricObject.canvas.getZoom();
const scalarX = fabricObject.scaleX * zoom / fabric.devicePixelRatio;
const scalarY = fabricObject.scaleY * zoom / fabric.devicePixelRatio;
var point = fabric.util.transformPoint({
x: this.x * dim.x + this.offsetX + coord[0] * scalarX,
y: this.y * dim.y + this.offsetY + coord[1] * scalarY,
}, finalMatrix
);
return point;
},
cursorStyleHandler: () => 'cell',
render: function(ctx, left, top, _, fabricObject) {
const zoom = fabricObject.canvas.getZoom();
const scalarX = fabricObject.scaleX * zoom / fabric.devicePixelRatio;
const scalarY = fabricObject.scaleY * zoom / fabric.devicePixelRatio;
ctx.save();
ctx.translate(left, top);
ctx.rotate(fabric.util.degreesToRadians(fabricObject.angle));
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.strokeStyle = 'green';
if (fabricObject.perspectiveCoords[index + 1]) {
ctx.strokeStyle = 'green';
ctx.lineTo(
(fabricObject.perspectiveCoords[index + 1][0] - coord[0]) * scalarX,
(fabricObject.perspectiveCoords[index + 1][1] - coord[1]) * scalarY,
);
} else {
ctx.lineTo(
(fabricObject.perspectiveCoords[0][0] - coord[0]) * scalarX,
(fabricObject.perspectiveCoords[0][1] - coord[1]) * scalarY,
);
}
ctx.stroke();
ctx.beginPath();
ctx.arc(0, 0, 4, 0, Math.PI * 2);
ctx.closePath();
ctx.fillStyle = 'green';
ctx.fill();
ctx.stroke();
ctx.restore();
},
offsetX: 0,
offsetY: 0,
actionName: 'perspective-coords',
});
return acc;
}, {});
} else {
this.controls = fabric.Photo.prototype.controls;
}
this.canvas.requestRenderAll();
}
_actionWrapper(anchorIndex, fn) {
return function(eventData, transform, x, y) {
if (!transform || !eventData) return;
const { target } = transform;
target._resetSizeAndPosition(anchorIndex);
const actionPerformed = fn(eventData, transform, x, y);
return actionPerformed;
};
}
/**
* @description manually reset the bounding box after points update
*
* @see http://fabricjs.com/custom-controls-polygon
* @param {number} index
*/
_resetSizeAndPosition = (index, apply = true) => {
const absolutePoint = fabric.util.transformPoint({
x: this.perspectiveCoords[index][0],
y: this.perspectiveCoords[index][1],
}, this.calcTransformMatrix());
this._setPositionDimensions({});
const penBaseSize = this._getNonTransformedDimensions();
const newX = (this.perspectiveCoords[index][0]) / penBaseSize.x;
const newY = (this.perspectiveCoords[index][1]) / penBaseSize.y;
this.setPositionByOrigin(absolutePoint, newX + 0.5, newY + 0.5);
apply && this._applyPointsOffset();
}
/**
* This is modified version of the internal fabric function
* this helps determine the size and the location of the path
*
* @param {object} options
*/
_setPositionDimensions(options) {
const { left, top, width, height } = this._calcDimensions(options);
this.width = width;
this.height = height;
var correctLeftTop = this.translateToGivenOrigin(
{
x: left,
y: top,
},
'left',
'top',
this.originX,
this.originY
);
if (typeof options.left === 'undefined') {
this.left = correctLeftTop.x;
}
if (typeof options.top === 'undefined') {
this.top = correctLeftTop.y;
}
this.pathOffset = {
x: left,
y: top,
};
return { left, top, width, height };
}
/**
* @description this is based on fabric.Path._calcDimensions
*
* @private
*/
_calcDimensions() {
const coords = this.perspectiveCoords.slice().map(c => (
{
x: c[0] / fabric.devicePixelRatio,
y: c[1] / fabric.devicePixelRatio,
}
));
const minX = fabric.util.array.min(coords, 'x') || 0;
const minY = fabric.util.array.min(coords, 'y') || 0;
const maxX = fabric.util.array.max(coords, 'x') || 0;
const maxY = fabric.util.array.max(coords, 'y') || 0;
const width = Math.abs(maxX - minX);
const height = Math.abs(maxY - minY);
return {
left: minX,
top: minY,
width: width,
height: height,
};
}
/**
* @description This is modified version of the internal fabric function
* this subtracts the path offset from each path points
*/
_applyPointsOffset() {
for (let i = 0; i < this.perspectiveCoords.length; i++) {
const coord = this.perspectiveCoords[i];
coord[0] -= this.pathOffset.x;
coord[1] -= this.pathOffset.y;
}
}
/**
* @description generate the initial coordinates for warping, based on image dimensions
*
*/
getInitialPerspective() {
let w = this.getScaledWidth();
let h = this.getScaledHeight();
const perspectiveCoords = [
[0, 0], // top left
[w, 0], // top right
[w, h], // bottom right
[0, h], // bottom left
];
this.perspectiveCoords = perspectiveCoords;
const perspectiveFilter = new fabric.Image.filters.Perspective({
hasRelativeCoordinates: false,
pixelRatio: fabric.devicePixelRatio, // the Photo is already retina ready
perspectiveCoords
});
this.filters.push(perspectiveFilter);
this.applyFilters();
return perspectiveCoords;
}
};
/**
* Creates an instance of fabric.Photo from its object representation
* @static
* @param {Object} object Object to create an instance from
* @param {Function} callback Callback to invoke when an image instance is created
*/
fabric.Photo.fromObject = function(_object, callback) {
const object = fabric.util.object.clone(_object);
object.layout = _object.layout;
fabric.util.loadImage(object.src, function(img, isError) {
if (isError) {
callback && callback(null, true);
return;
}
fabric.Photo.prototype._initFilters.call(object, object.filters, function(filters) {
object.filters = filters || [];
fabric.Photo.prototype._initFilters.call(object, [object.resizeFilter], function(resizeFilters) {
object.resizeFilter = resizeFilters[0];
fabric.util.enlivenObjects([object.clipPath], function(enlivedProps) {
object.clipPath = enlivedProps[0];
var image = new fabric.Photo(img, object);
callback(image, false);
});
});
});
}, null, object.crossOrigin || 'anonymous');
};
const canvas = new fabric.Canvas(document.getElementById('canvas'), {
backgroundColor: 'white',
enableRetinaScaling: true,
});
function resizeCanvas() {
canvas.setWidth(window.innerWidth);
canvas.setHeight(window.innerHeight);
}
resizeCanvas();
window.addEventListener('resize', () => resizeCanvas(), false);
const photo = new fabric.Photo('https://cdn.artboard.studio/private/5cb9c751-5f17-4062-adb7-6ec2c137a65d/user_uploads/5bafe170-1580-4d6b-a3be-f5cdce22d17d-asdasdasd.jpg', {
left: canvas.getWidth() / 2,
top: canvas.getHeight() / 2,
originX: 'center',
originY: 'center',
});
canvas.add(photo);
canvas.setActiveObject(photo);
body {
margin: 0;
}
<script src="https://cdn.jsdelivr.net/npm/lodash#4.17.20/lodash.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/verb-nurbs-web#2.1.3/build/js/verb.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/fabric#4.3.0/dist/fabric.min.js"></script>
<canvas id="canvas"></canvas>
I suspect that the reference to absolutePoint in _resetSizeAndPosition needs to take into account the origin for the image and that there is a simple fix to this issue. However, I didn't find a good way to do this and resorted to manually "correcting" this issue in _resetSizeAndPosition.
The modified version of _resetSizeAndPosition looks like so:
_resetSizeAndPosition = (index, apply = true) => {
const absolutePoint = fabric.util.transformPoint({
x: this.perspectiveCoords[index][0],
y: this.perspectiveCoords[index][1],
}, this.calcTransformMatrix());
let { height, width, left, top } = this._calcDimensions({});
const widthDiff = (width - this.width) / 2;
if ((left < 0 && widthDiff > 0) || (left > 0 && widthDiff < 0)) {
absolutePoint.x -= widthDiff;
} else {
absolutePoint.x += widthDiff;
}
const heightDiff = (height - this.height) / 2;
if ((top < 0 && heightDiff > 0) || (top > 0 && heightDiff < 0)) {
absolutePoint.y -= heightDiff;
} else {
absolutePoint.y += heightDiff;
}
this._setPositionDimensions({});
const penBaseSize = this._getNonTransformedDimensions();
const newX = (this.perspectiveCoords[index][0]) / penBaseSize.x;
const newY = (this.perspectiveCoords[index][1]) / penBaseSize.y;
this.setPositionByOrigin(absolutePoint, newX + 0.5, newY + 0.5);
apply && this._applyPointsOffset();
}
The basic principle for this approach is that the left and top properties of the object are never being updated. This can be seen in your example through the console by modifying the image and checking the properties on the image. Therefore, we need to apply a correction to the position properties based on the changing width and height. This ensures that other points stay fixed in place, since we compensate for the changing height and width of the image in its position.
By comparing the values of width and this.width it's possible to determine whether the image is increasing or decreasing in size. The value of left indicates whether the stretch is occurring on the left or right side of the image. If the user is stretching the image to the left or shrinking it from the right, the horizontal correction has to be applied in the opposite direction. By combining these conditions, we can tell how the position of the image needs to be modified to compensate. The same approach used for the horizontal values is also applied to the vertical values.
JSFiddle: https://jsfiddle.net/0x8caow6/

How do I layout elements in a circle without rotating the element?

Currently, I'm using offset and rotation to position elements in KonvaJS in a circle. Is there another method that would still lay out the elements in a circle without rotating the text (e.g. like a clock)?
Output looks like this:
Code looks like this:
function drawNumber(radius, number, step) {
  var segmentDegree = 360 / 16;
  var rotation = -90 + step * segmentDegree;
  var label = new Konva.Text({
    x: patternOriginX,
    y: patternOriginY,
    text: number.toString(),
    fontSize: 12,
    fill: '#636363',
    rotation: rotation
  });
  label.offsetX(-radius);
  return label;
}
You can use trigonometry to find the position of each text from its angle:
var centerX = stage.width() / 2;
var centerY = stage.height() / 2;
var QUANTITY = 10;
var RADIUS = 50;
var dAlpha = Math.PI * 2 / QUANTITY;
for (var i = 0; i < QUANTITY; i++) {
  var alpha = dAlpha * i;
  var dx = Math.cos(alpha) * RADIUS;
  var dy = Math.sin(alpha) * RADIUS;
  layer.add(new Konva.Text({
    x: centerX + dx,
    y: centerY + dy,
    text: i.toString()
  }));
}
Demo: https://jsbin.com/fizucotaxe/1/edit?html,js,output
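If you also want each number centered on its point (so the labels sit like the digits on a clock face), a small extension of the snippet above is to offset each text by half of its measured size. This is only a sketch, reusing centerX, centerY, QUANTITY, RADIUS, and dAlpha from the code above and assuming the same stage/layer setup as the demo:
for (var i = 0; i < QUANTITY; i++) {
  var alpha = dAlpha * i;
  var label = new Konva.Text({
    x: centerX + Math.cos(alpha) * RADIUS,
    y: centerY + Math.sin(alpha) * RADIUS,
    text: i.toString()
  });
  // Shift the origin to the middle of the text so the label is centered on its point;
  // the text itself is never rotated, so it stays upright like a clock digit.
  label.offsetX(label.width() / 2);
  label.offsetY(label.height() / 2);
  layer.add(label);
}
layer.draw();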

How to swipe Left to Right in Appium?

Since swipe() is deprecated, I am unable to swipe the screen from left to right. My app has 4 banners in it and I want to swipe to view all of them.
This applies in all directions:
enum:
public enum DIRECTION {
DOWN, UP, LEFT, RIGHT;
}
actual code:
public static void swipe(MobileDriver driver, DIRECTION direction, long duration) {
Dimension size = driver.manage().window().getSize();
int startX = 0;
int endX = 0;
int startY = 0;
int endY = 0;
switch (direction) {
case RIGHT:
startY = (int) (size.height / 2);
startX = (int) (size.width * 0.90);
endX = (int) (size.width * 0.05);
new TouchAction(driver)
.press(startX, startY)
.waitAction(Duration.ofMillis(duration))
.moveTo(endX, startY)
.release()
.perform();
break;
case LEFT:
startY = (int) (size.height / 2);
startX = (int) (size.width * 0.05);
endX = (int) (size.width * 0.90);
new TouchAction(driver)
.press(startX, startY)
.waitAction(Duration.ofMillis(duration))
.moveTo(endX, startY)
.release()
.perform();
break;
case UP:
endY = (int) (size.height * 0.70);
startY = (int) (size.height * 0.30);
startX = (size.width / 2);
new TouchAction(driver)
.press(startX, startY)
.waitAction(Duration.ofMillis(duration))
.moveTo(startX, endY)
.release()
.perform();
break;
case DOWN:
startY = (int) (size.height * 0.70);
endY = (int) (size.height * 0.30);
startX = (size.width / 2);
new TouchAction(driver)
.press(startX, startY)
.waitAction(Duration.ofMillis(duration))
.moveTo(startX, endY)
.release()
.perform();
break;
}
}
usage:
swipe(driver, DIRECTION.RIGHT, 1000); // duration in ms, e.g. 1000
Hope this helps,
Try the method below. It works with Appium version 1.16.0.
I created this method to swipe left or right based on a particular element's location on the screen. It takes 3 parameters:
Element X: It is the X coordinate of the element on which swipe touch action needs to be performed.
Element Y: It is the Y coordinate of the element.
Direction: Left/Right
//method to left and right swipe on the screen based on coordinates
public void swipeAction(int Xcoordinate, int Ycoordinate, String direction) {
//get device width and height
Dimension dimension = driver.manage().window().getSize();
int deviceHeight = dimension.getHeight();
int deviceWidth = dimension.getWidth();
System.out.println("Height x Width of device is: " + deviceHeight + " x " + deviceWidth);
switch (direction) {
case "Left":
System.out.println("Swipe Right to Left");
//define starting and ending X and Y coordinates
int startX=deviceWidth - Xcoordinate;
int startY=Ycoordinate; // (int) (height * 0.2);
int endX=Xcoordinate;
int endY=Ycoordinate;
//perform swipe from right to left
new TouchAction((AppiumDriver) driver).longPress(PointOption.point(startX, startY)).moveTo(PointOption.point(endX, endY)).release().perform();
break;
case "Right":
System.out.println("Swipe Left to Right");
//define starting X and Y coordinates
startX=Xcoordinate;
startY=Ycoordinate;
endX=deviceWidth - Xcoordinate;
endY=Ycoordinate;
//perform swipe from left to right
new TouchAction((AppiumDriver) driver).longPress(PointOption.point(startX, startY)).moveTo(PointOption.point(endX, endY)).release().perform();
break;
}
}
To fetch the element's X and Y coordinates, try the methods below:
int elementX= driver.findElement(elementLocator).getLocation().getX();
int elementY= driver.findElement(elementLocator).getLocation().getY();
Assuming you created a driver instance of AndroidDriver, you can swipe left:
// Get location of element you want to swipe
WebElement banner = driver.findElement(<your_locator>);
Point bannerPoint = banner.getLocation();
// Get size of device screen
Dimension screenSize = driver.manage().window().getSize();
// Get start and end coordinates for horizontal swipe
int startX = Math.toIntExact(Math.round(screenSize.getWidth() * 0.8));
int endX = 0;
TouchAction action = new TouchAction(driver);
action
.press(PointOption.point(startX, bannerPoint.getY()))
.waitAction(WaitOptions.waitOptions(Duration.ofMillis(500)))
.moveTo(PointOption.point(endX, bannerPoint.getY()))
.release();
driver.performTouchAction(action);
Use the latest appium-java-client 6.1.0 and Appium 1.8.x server.
This should work:
Dimension size = driver.manage().window().getSize();
System.out.println(size.height+"height");
System.out.println(size.width+"width");
System.out.println(size);
int startPoint = (int) (size.width * 0.99);
int endPoint = (int) (size.width * 0.15);
int ScreenPlace =(int) (size.height*0.40);
int y=(int)size.height*20;
TouchAction ts = new TouchAction(driver);
//for(int i=0;i<=3;i++) {
ts.press(PointOption.point(startPoint,ScreenPlace ))
.waitAction(WaitOptions.waitOptions(Duration.ofMillis(1000)))
.moveTo(PointOption.point(endPoint,ScreenPlace )).release().perform();
This is for iOS mobile:
// Here I am trying to swipe a list of images from right to left.
// First I get the parent element (table/cell) id.
// Then, using a predicate string, I check whether the element is present and click it.
List<MobileElement> ele = getMobileElement(listBtnQuickLink).findElements(By.xpath(".//XCUIElementTypeButton"));
for(int i=1 ;i<=20;i++) {
MobileElement ele1 = ele.get(i);
String parentID = getMobileElement(listBtnQuickLink).getId();
HashMap<String, String> scrollObject = new HashMap<String, String>();
scrollObject.put("element", parentID); //This is parent element id (not same element)
scrollObject.put("predicateString", "label == '"+ele1.getText()+"'");
scrollObject.put("direction", "left");
driver.executeScript("mobile:swipe", scrollObject); // scroll to the target element
System.out.println("Element is visible : "+ele1.isDisplayed());
}
Unfortunately, I noticed that TouchAction doesn't work on Android 11 with Selenium 4. So if you use Selenide and Appium, you can try this:
public class SwipeToLeft implements Command<SelenideElement> {
@Nullable
@Override
public SelenideElement execute(SelenideElement proxy, WebElementSource locator, @Nullable Object[] args) throws IOException {
Selenide.sleep(2000);
var driver = WebDriverRunner.getWebDriver();
var element = proxy.getWrappedElement();
((JavascriptExecutor) driver).executeScript("mobile: swipeGesture", ImmutableMap.of(
"elementId", ((RemoteWebElement) element).getId(),
"direction", "left",
"percent", 0.75
));
return proxy;
}
}
And then you can use:
$('your seleniumLocator').shouldBe(visible).execute(new SwipeToLeft());

Adjust igMap Marker Size

I am attempting to map out certain data points using Ignite UI's igMap control. What I want is to make each map marker larger or smaller based on its realized rate per hour. The documentation from Infragistics doesn't seem to go into this very much, so if anyone has any input, I'd appreciate it.
@model IEnumerable<OpsOverallGeoMapViewModel>
<style>
#tooltipTable {
font-family: Verdana, Arial, Helvetica, sans-serif;
width: 100%;
border-collapse: collapse;
}
#tooltipTable td, #tooltipTable th {
font-size: 9px;
border: 1px solid #28b51c;
padding: 3px 7px 2px 7px;
}
#tooltipTable th {
font-weight: bold;
font-size: 11px;
text-align: left;
padding-top: 5px;
padding-bottom: 4px;
background-color: #28b51c;
color: #ffffff;
}
</style>
<script id="tooltipTemplate" type="text/x-jquery-tmpl">
<table id="tooltipTable">
<tr><th class="tooltipHeading" colspan="2">${item.Country}</th></tr>
<tr>
<td>Total Hours:</td>
<td>${item.Hours}</td>
</tr>
<tr>
<td>Total Billing:</td>
<td>${item.Billing}</td>
</tr>
<tr>
<td>Realized Rate Per Hour:</td>
<td>${item.RealizedRatePerHour}</td>
</tr>
</table>
</script>
<div id="map"></div>
<script>
$(function () {
var model = @Html.Raw(Json.Encode(Model));
$("#map").igMap({
width: "700px",
height: "500px",
windowRect: { left: 0.225, top: 0.1, height: 0.6, width: 0.6 },
series: [{
type: "geographicSymbol",
name: "worldCities",
dataSource: model, //JSON Array defined above
latitudeMemberPath: "Latitude",
longitudeMemberPath: "Longitude",
markerType: "automatic",
markerOutline: "#28b51c",
markerBrush: "#28b51c",
showTooltip: true,
tooltipTemplate: "tooltipTemplate"
}],
});
});
</script>
<div id="map"></div>
I figured it out by running through the example for marker templates on the Infragistics website. By changing the circle radius of the marker, this becomes a sort of heat map, which is what I was looking for:
$(function () {
var model = @Html.Raw(Json.Encode(Model.OrderBy(x => x.Billing)));
$("#map").igMap({
width: "700px",
height: "500px",
windowRect: { left: 0.1, top: 0.1, height: 0.7, width: 0.7 },
// specifies imagery tiles from BingMaps
backgroundContent: {
type: "bing",
key: "Masked Purposely",
imagerySet: "Road", // alternative: "Road" | "Aerial"
},
series: [{
type: "geographicSymbol",
name: "ratesGraph",
dataSource: model, //JSON Array defined above
latitudeMemberPath: "Latitude",
longitudeMemberPath: "Longitude",
markerType: "automatic",
markerCollisionAvoidance: "fade",
markerOutline: "#1142a6",
markerBrush: "#7197e5",
showTooltip: true,
tooltipTemplate: "customTooltip",
// Defines marker template rendering function
markerTemplate: {
measure: function (measureInfo) {
measureInfo.width = 10;
measureInfo.height = 10;
},
render: function (renderInfo) {
createMarker(renderInfo);
}
}
}]
});
});
function createMarker(renderInfo) {
var ctx = renderInfo.context;
var x = renderInfo.xPosition;
var y = renderInfo.yPosition;
var size = 10;
var heightHalf = size / 2.0;
var widthHalf = size / 2.0;
if (renderInfo.isHitTestRender) {
// This is called for tooltip hit test only
// Rough marker rectangle size calculation
ctx.fillStyle = renderInfo.data.actualItemBrush().fill();
ctx.fillRect(x - widthHalf, y - heightHalf, size, size);
} else {
var data = renderInfo.data;
var name = data.item()["CountryName"];
var type = data.item()["Country"];
var billing = data.item()["Billing"];
// Draw text
ctx.textBaseline = "top";
ctx.font = '8pt Verdana';
ctx.fillStyle = "black";
ctx.textBaseline = "middle";
wrapText(ctx, name, x + 3, y + 6, 80, 12);
// Draw marker
ctx.beginPath();
//SET THE CIRCLE RADIUS HERE*******
var circleRadius = 3;
var radiusFactor = billing / 100000;
if (radiusFactor > 4)
circleRadius = radiusFactor;
if (circleRadius > 10)
circleRadius = 10;
ctx.arc(x, y,circleRadius, 0, 2 * Math.PI, false);
ctx.fillStyle = "#36a815";
ctx.fill();
ctx.lineWidth = 1;
ctx.strokeStyle = "black";
ctx.stroke();
}
}
// Plots a rectangle with rounded corners with a semi-transparent frame
function plotTextBackground(context, left, top, width, height) {
var cornerRadius = 3;
context.beginPath();
// Upper side and upper right corner
context.moveTo(left + cornerRadius, top);
context.lineTo(left + width - cornerRadius, top);
context.arcTo(left + width, top, left + width, top + cornerRadius, cornerRadius);
// Right side and lower right corner
context.lineTo(left + width, top + height - cornerRadius);
context.arcTo(left + width, top + height, left + width - cornerRadius, top + height, cornerRadius);
// Lower side and lower left corner
context.lineTo(left + cornerRadius, top + height);
context.arcTo(left, top + height, left, top + height - cornerRadius, cornerRadius);
// Left side and upper left corner
context.lineTo(left, top + cornerRadius);
context.arcTo(left, top, left + cornerRadius, top, cornerRadius);
// Fill with white
context.globalAlpha = 1;
context.fillStyle = "white";
context.fill();
context.globalAlpha = 1;
// Plot grey frame
context.lineWidth = 1;
context.strokeStyle = "grey";
context.stroke();
}
// Outputs text in a word wrapped fashion in a transparent frame
function wrapText(context, text, x, y, maxWidth, lineHeight) {
var words = text.split(" ");
var line = "";
var yCurrent = y;
var lines = [], currentLine = 0;
// Find the longest word in the text and update the max width if the longest word cannot fit
for (var i = 0; i < words.length; i++) {
  var wordWidth = context.measureText(words[i]).width;
  if (wordWidth > maxWidth)
    maxWidth = wordWidth;
}
// Arrange all words into lines
for (var n = 0; n < words.length; n++) {
var testLine = line + words[n];
var testWidth = context.measureText(testLine).width;
if (testWidth > maxWidth) {
lines[currentLine] = line;
currentLine++;
line = words[n] + " ";
}
else {
line = testLine + " ";
}
}
lines[currentLine] = line;
// Plot frame and background
if (lines.length > 1) {
// Multiline text
plotTextBackground(context, x - 2, y - lineHeight / 2 - 2, maxWidth + 3, lines.length * lineHeight + 3);
}
else {
// Single line text
var textWidth = context.measureText(lines[0]).width; // Limit frame width to the actual line width
plotTextBackground(context, x - 2, y - lineHeight / 2 - 2, textWidth + 3, lines.length * lineHeight + 3);
}
// Output lines of text
context.fillStyle = "black";
for (var n = 0; n < lines.length; n++) {
context.fillText(" " + lines[n], x, yCurrent);
yCurrent += lineHeight;
}
}

Android - button disappears when resized

I am trying to have a button which scales dynamically. At run time, I want its width and height to be 70% of the current size. However, the button is disappearing. Here is my code:
Button btn = (Button) v.findViewById(R.id.button_delete_transaction);
btn.setMinWidth(0);
btn.setMinHeight(0);
btn.measure(MeasureSpec.UNSPECIFIED, MeasureSpec.UNSPECIFIED);
int width = btn.getMeasuredWidth();
int height = btn.getMeasuredHeight();
ViewGroup.LayoutParams params = btn.getLayoutParams();
params.width = (int) .7 * width;
params.height = (int) .7 * height;
btn.setLayoutParams(params);
And the xml:
<Button
    android:id="@+id/button_delete_transaction"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignParentRight="true"
    android:background="@drawable/add_img"
    android:focusable="false"
    />
EDIT:
Ahh... it is because you are not casting the right thing to an int. You are casting 0.7 to an int (which becomes zero) and then multiplying, instead of multiplying first and then casting. You can use (int) (.7 * width) instead of (int) .7 * width.
See my example: http://ideone.com/NSGwGF
Anyway, my advice below still stands.
Why not use:
btn.setWidth((int) Math.round(.7 * width));
btn.setHeight((int) Math.round(.7 * height));
instead of:
ViewGroup.LayoutParams params = btn.getLayoutParams();
params.width = (int) .7 * width;
params.height = (int) .7 * height;
btn.setLayoutParams(params);
