JetPack Compose accompanist Pager lerp not found - android-jetpack-compose

I'm learning the Accompanist Pager and I want to add an effect to the pager.
I want to use the lerp method as shown in the documentation:
Modifier.graphicsLayer {
    // Calculate the absolute offset for the current page from the
    // scroll position. We use the absolute value, which allows us
    // to mirror any effects for both directions
    val pageOffset = calculateCurrentOffsetForPage(page).absoluteValue

    // We animate scaleX + scaleY between 85% and 100%
    lerp(
        start = 0.85f,
        stop = 1f,
        fraction = 1f - pageOffset.coerceIn(0f, 1f)
    ).also { scale ->
        scaleX = scale
        scaleY = scale
    }

    // We animate the alpha between 50% and 100%
    alpha = lerp(
        start = 0.5f,
        stop = 1f,
        fraction = 1f - pageOffset.coerceIn(0f, 1f)
    )
}
but I can't find the right import for the lerp method used to animate the scale as in the documentation.
Any suggestions are welcome.

I faced this issue. It was solved after adding this dependency:
implementation "androidx.compose.ui:ui-util:$compose_version"
and then importing the function with import androidx.compose.ui.util.lerp.
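For reference, lerp is plain linear interpolation (start + (stop - start) * fraction), so the pager effect is easy to reason about. Here is a minimal sketch of the same scale math, in JavaScript for illustration (the helper names are mine, not part of Compose):

```javascript
// lerp: linear interpolation between start and stop
function lerp(start, stop, fraction) {
  return start + (stop - start) * fraction;
}

// Clamp, mirroring Kotlin's coerceIn(0f, 1f)
function coerceIn(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

// Scale for a page at a given offset from the center:
// a fully visible page (offset 0) gets scale 1.0,
// a neighbouring page (offset 1) gets scale 0.85.
function scaleForOffset(pageOffset) {
  return lerp(0.85, 1.0, 1 - coerceIn(Math.abs(pageOffset), 0, 1));
}
```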

Related

AudioKit: How to make Knob from AnalogSynthX-Example exponential instead of linear?

For volume controls it would, in most cases, be better if knob values changed exponentially or logarithmically instead of linearly.
Where would be the best place within Knob.swift of the AudioKit AnalogSynthX example class to scale the value to any kind of curve?
I'm thinking of:
func setPercentagesWithTouchPoint(_ touchPoint: CGPoint) {
    // Knobs assume up or right is increasing, and down or left is decreasing
    let horizontalChange = Double(touchPoint.x - lastX) * knobSensitivity
    value += horizontalChange * (maximum - minimum)

    let verticalChange = Double(touchPoint.y - lastY) * knobSensitivity
    value -= verticalChange * (maximum - minimum)

    lastX = touchPoint.x
    lastY = touchPoint.y

    // TODO: map to exponential/log/any curve if -> knobType is .exp
    // ...
    delegate?.updateKnobValue(value, tag: self.tag)
}
but maybe someone has already invented this wheel? Thnx!
Thanks for asking. The cutoff knob in the Analog Synth X repo scales logarithmically. You can look at that for a simple example.
Plus, there are new knobs in the AudioKit ROM Player repo. These improved knob controls have adjustable taper curve scaling and range settings:
https://github.com/AudioKit/ROMPlayer
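The general idea behind an adjustable taper can be sketched in a few lines (JavaScript here for brevity; the power-curve formula is a common approach, not taken verbatim from the AudioKit sources): raise the normalized 0...1 knob position to an exponent, then rescale into the min...max range.

```javascript
// Map a normalized knob position (0..1) to a value in [min, max]
// using a power-curve taper. taper = 1 is linear; taper > 1 gives
// finer resolution near the low end, which suits volume and cutoff.
function knobValue(position, min, max, taper) {
  const p = Math.min(Math.max(position, 0), 1); // clamp to 0..1
  return min + (max - min) * Math.pow(p, taper);
}
```

With taper = 3, the first half of the knob's travel covers only 12.5% of the range, leaving most of the sweep for fine low-end control.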

How to change SHAPE of UIImage in iOS

Hey guys, how can I change the shape of a UIImage like this?
This is the original one:
And this is the way I want it to be:
If you have any ideas, please leave a message. Thx :)
You can achieve this with a layer transform:
self.imgView.layer.transform = CATransform3DScale(self.imgView.layer.transform, 1, 1, 1.5)
Here you need to play with the z scale of the layer transform.
Please refer to CATransform3D for more detail.
You can achieve this with a transform, but not with a scale: you need to set up perspective and rotate the view a bit around the x axis:
// Start from the identity matrix
var perspectiveMatrix = CATransform3DIdentity
// Add perspective
perspectiveMatrix.m34 = -1.0 / 1000.0
// Rotate by 30 degrees around the x axis
let rotationMatrix = CATransform3DRotate(perspectiveMatrix, CGFloat.pi / 6.0, 1, 0, 0)
// Apply the transform
view.layer.transform = rotationMatrix
You can apply the transform in a UIView animation block to make the animation smooth.

Canvas zoomIn/zoomOut: how to avoid loss of image quality?

I have this code that I use for scaling images. To zoom in and out I call scalePicture(1.10, drawingContext) and scalePicture(0.90, drawingContext). I perform the operations on an off-screen canvas and then copy the image back to the on-screen canvas.
I use off-screen processing since the browser optimizes the image operations with double buffering. I am still having the issue that when I zoom in by around 400% and then zoom out back to the original size, there is a significant loss of image quality.
I cannot rely on the original image alone, because the user can perform many operations such as clip, crop, rotate, and annotate, and I need to stack all the operations on the original image.
Can anyone offer advice or suggestions on how to preserve image quality without sacrificing performance?
scalePicture : function(scalePercent, operatingCanvasContext) {
    var w = operatingCanvasContext.canvas.width,
        h = operatingCanvasContext.canvas.height,
        sw = w * scalePercent,
        sh = h * scalePercent,
        operatingCanvas = operatingCanvasContext.canvas;
    var canvasPic = new Image();
    operatingCanvasContext.save();
    canvasPic.src = operatingCanvas.toDataURL();
    operatingCanvasContext.clearRect(0, 0, operatingCanvas.width, operatingCanvas.height);
    operatingCanvasContext.translate(operatingCanvas.width / 2, operatingCanvas.height / 2);
    canvasPic.onload = function () {
        operatingCanvasContext.drawImage(canvasPic, -sw / 2, -sh / 2, sw, sh);
        operatingCanvasContext.translate(-operatingCanvas.width / 2, -operatingCanvas.height / 2);
        operatingCanvasContext.restore();
    };
}
Canvas is draw-and-forget; there is no way to preserve the original quality without referencing the original source.
I would suggest reconstructing the recorded operation stack, but using a transformation matrix for the changes in scale, rotation, etc., and then applying the accumulated matrix to the original image. This preserves optimal quality and also gives some performance gain, as you only draw the last and current state.
Handle clipping similarly: calculate and merge the clipping regions using the same matrix, and apply the clip before drawing the original image in the final step. The same goes for text, etc.
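To make the accumulated-matrix idea concrete, here is a small sketch (the shape of the operation stack is an assumption about how you might record the user's edits). Each recorded operation becomes an affine matrix in the same [a, b, c, d, e, f] layout that ctx.setTransform() takes; multiplying them folds the whole history into one matrix that can be applied once to the original image:

```javascript
// Multiply two 2D affine matrices in ctx.setTransform() order:
// [a, b, c, d, e, f] maps (x, y) -> (a*x + c*y + e, b*x + d*y + f).
function mul(m, n) {
  return [
    m[0] * n[0] + m[2] * n[1],
    m[1] * n[0] + m[3] * n[1],
    m[0] * n[2] + m[2] * n[3],
    m[1] * n[2] + m[3] * n[3],
    m[0] * n[4] + m[2] * n[5] + m[4],
    m[1] * n[4] + m[3] * n[5] + m[5]
  ];
}

const identity = [1, 0, 0, 1, 0, 0];
const scale = s => [s, 0, 0, s, 0, 0];
const rotate = r => [Math.cos(r), Math.sin(r), -Math.sin(r), Math.cos(r), 0, 0];
const translate = (x, y) => [1, 0, 0, 1, x, y];

// Fold the recorded operations into a single matrix. To redraw:
//   ctx.setTransform(...accumulate(ops));
//   ctx.drawImage(originalImg, 0, 0);
function accumulate(ops) {
  return ops.reduce(mul, identity);
}
```

Because the original image is redrawn through one matrix, a 400% zoom followed by a zoom back out yields the identity matrix and the image renders pixel-perfect again.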
It's a bit too broad to show an example that performs all of these steps, but here is one showing how to use accumulated matrix transforms on the original image while preserving optimal quality. You can zoom in and out and rotate, and the image renders at optimal quality in every case.
Example of Concept
var ctx = c.getContext("2d"), img = new Image; // these lines just for demo init
img.onload = demo;
ctx.fillText("Loading image...", 20, 20);
ctx.globalCompositeOperation = "copy";
img.src = "http://i.imgur.com/sPrSId0.jpg";

function demo() {
    render();
    zin.onclick = zoomIn;   // each click accumulates a transform,
    zout.onclick = zoomOut; // but render() always draws the original
    zrot.onclick = rotate;  // image through the current matrix
}

function render() { ctx.drawImage(img, 0, 0) } // render original image

function zoomIn() {
    ctx.translate(c.width * 0.5, c.height * 0.5); // pivot = center
    ctx.scale(1.05, 1.05);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}

function zoomOut() {
    ctx.translate(c.width * 0.5, c.height * 0.5);
    ctx.scale(1 / 1.05, 1 / 1.05);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}

function rotate() {
    ctx.translate(c.width * 0.5, c.height * 0.5);
    ctx.rotate(0.3);
    ctx.translate(-c.width * 0.5, -c.height * 0.5);
    render();
}
<button id=zin>Zoom in</button>
<button id=zout>Zoom out</button>
<button id=zrot>Rotate</button><br>
<canvas id=c width=640 height=378></canvas>

How to "center" SKTexture in SKSpriteNode

I'm trying to make a jigsaw-puzzle game in SpriteKit. To make things easier I'm using a 9x9 board of square tiles. Each tile has one child node holding the piece of the image from its area.
But here my problem starts. A jigsaw piece isn't a perfect square, and when I apply the SKTexture to the node it is simply placed from anchorPoint = {0, 0}. The result isn't pretty; actually, it's terrible.
https://www.dropbox.com/s/2di30hk5evdd5fr/IMG_0086.jpg?dl=0
I managed to fix the tiles with "hooks" on the right and top, but the left and bottom sides don't line up at all.
var sprite = SKSpriteNode()
let originSize = frame.size
let textureSize = texture.size()

sprite.size = originSize
sprite.texture = texture
sprite.size = texture.size()

let x = (textureSize.width - originSize.width)
let widthRate = x / textureSize.width
let y = (textureSize.height - originSize.height)
let heightRate = y / textureSize.height

sprite.anchorPoint = CGPoint(x: 0.5 - (widthRate * 0.5), y: 0.5 - (heightRate * 0.5))
sprite.position = CGPoint(x: frame.width * 0.5, y: frame.height * 0.5)
addChild(sprite)
Can you give me some advice?
I don't see a way to get the placement right without knowing more about the piece textures you are using, because they will all be different: for example, whether a piece has a nob on any of its sides, and how much width/height the nob adds to the texture. It's hard to tell from the picture, but even a piece with an inset instead of a nob might add varying sizes.
Without knowing how the texture is created I can't help with that part, but I do believe that is where the issue starts. If it were me, I would create a square texture with additional transparent padding to center the piece, so that the center of the texture is always placed at the center of a square on the grid.
That said, adding the texture to a node and then adding that node to a plain parent node will make placement go more smoothly than your current approach. The trick is then only to place the textured piece correctly within the empty parent.
For example...
let piece = SKSpriteNode()
let texturedPiece = SKSpriteNode(texture: texture)

// Positioning:
// the x offset needs to be calculated with additional info about the
// texture; for example, here it has just a nob on the right
let offsetX: CGFloat = -nobWidth / 2
// the y offset needs to be calculated with additional info about the
// texture; for example, here it has a nob on both the top and bottom
let offsetY: CGFloat = 0.0
texturedPiece.position = CGPoint(x: offsetX, y: offsetY)
piece.addChild(texturedPiece)

let squareWidth = size.width / 2
// Now that the textured piece is placed correctly within a parent,
// placing the parent is super easy and consistent without messing
// with anchor points. This will also make rotations nice.
piece.position = CGPoint(x: squareWidth / 2, y: squareWidth / 2)
addChild(piece)
Hopefully that makes sense and didn't confuse things further.

UIImage transform/scaling issues

Finally, I have a reason to ask something, instead of scouring endless hours of the joys of Stack Overflow.
Here's my situation: I have a UIImageView with one UIImage inside it. I'm transforming the entire UIImageView via CGAffineTransforms to scale it height-wise and keep it at a specific angle.
I feed it this transform data through two CGPoints, so it essentially just calculates the angle and scale between these two points and transforms.
The transforming works like a charm, but I recently came across the UIImage method resizableImageWithCapInsets:, which works just fine if you set the frame of the image manually (i.e. scale using a frame). Using transforms, however, seems to override this, which I guess is to be expected since it's Core Graphics doing its thing.
My question is: how would I go about either a) adding cap insets after transforming the image, or b) doing the angle and scaling via a frame?
Note that the two points providing the data are touch points, so they can differ very much, which is why creating a scaled rectangle at a specific angle is tricky at best.
To keep you code-hungry geniuses happy, here's a snippet of the current way I'm handling scaling (cap insets are only applied when creating the UIImage):
float xDiff = (PointB.x - PointA.x) / 2;
float yDiff = (PointB.y - PointA.y) / 2;
float angle = [self getRotatingAngle:PointA secondPoint:PointB];
CGPoint pDiff = CGPointMake(PointA.x + xDiff, PointA.y + yDiff);
self.center = pDiff;

// Set up a new transform with a scale and an angle
double distance = sqrt(pow((PointB.x - PointA.x), 2.0) + pow((PointB.y - PointA.y), 2.0));
float scale = 1.0 * (distance / self.image.size.height);
CGAffineTransform transformer = CGAffineTransformConcat(CGAffineTransformMakeScale(1.0, scale), CGAffineTransformMakeRotation(angle));

// Apply the transform
self.transform = transformer;
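For what it's worth, the angle-and-scale math in the snippet reduces to simple vector geometry. A JavaScript sketch of the same computation (getRotatingAngle: isn't shown in the question, so the atan2-based angle here is an assumption):

```javascript
// Given two touch points and the image's base height, compute the
// midpoint (used as the view center), the rotation angle, and the
// scale factor, mirroring the Objective-C snippet above.
function transformBetween(a, b, baseHeight) {
  const dx = b.x - a.x, dy = b.y - a.y;
  return {
    center: { x: a.x + dx / 2, y: a.y + dy / 2 }, // midpoint of the two touches
    angle: Math.atan2(dy, dx),                    // radians
    scale: Math.hypot(dx, dy) / baseHeight        // distance / base height
  };
}
```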
Adding a proper answer to this. The answer to the problem can be found here.
