In this multi-touch gesture demo, two fingers should zoom and rotate, but in testing I found that a single finger can also drag.
I want dragging to require two fingers; how can I modify this?
@Composable
fun TransformableSample() {
    // set up all transformation states
    var scale by remember { mutableStateOf(1f) }
    var rotation by remember { mutableStateOf(0f) }
    var offset by remember { mutableStateOf(Offset.Zero) }
    val state = rememberTransformableState { zoomChange, offsetChange, rotationChange ->
        scale *= zoomChange
        rotation += rotationChange
        offset += offsetChange
    }
    Box(
        Modifier
            // apply other transformations like rotation and zoom
            // on the pizza slice emoji
            .graphicsLayer(
                scaleX = scale,
                scaleY = scale,
                rotationZ = rotation,
                translationX = offset.x,
                translationY = offset.y
            )
            // add transformable to listen to multitouch transformation events
            // after offset
            .transformable(state = state)
            .background(Color.Blue)
            .fillMaxSize()
    )
}
Modifier.transformable doesn't have many customization options. You can copy its source file into your project and replace
val panChange = event.calculatePan()
with
val panChange = if (event.changes.count() > 1) event.calculatePan() else Offset.Zero
Alternatively, you can create your own Modifier:
Modifier
    .pointerInput(Unit) {
        awaitEachGesture {
            // Wait for at least one pointer to press down
            awaitFirstDown()
            do {
                val event = awaitPointerEvent()
                // You can set this as required
                if (event.changes.size == 2) {
                    offset = event.calculatePan()
                    // zoom
                    event.calculateZoom()
                    // rotation
                    event.calculateRotation()
                }
                // This is for preventing other gestures consuming events,
                // e.g. scrolling or other continuous gestures
                event.changes.forEach { pointerInputChange: PointerInputChange ->
                    pointerInputChange.consume()
                }
            } while (event.changes.any { it.pressed })
        }
    }
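For completeness, here is a minimal end-to-end sketch of that approach (my own wiring, not from the original answer; it plugs the two-finger-only loop into the transformation state from the question):

import androidx.compose.foundation.background
import androidx.compose.foundation.gestures.awaitEachGesture
import androidx.compose.foundation.gestures.awaitFirstDown
import androidx.compose.foundation.gestures.calculatePan
import androidx.compose.foundation.gestures.calculateRotation
import androidx.compose.foundation.gestures.calculateZoom
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.input.pointer.pointerInput

@Composable
fun TwoFingerTransformableSample() {
    var scale by remember { mutableStateOf(1f) }
    var rotation by remember { mutableStateOf(0f) }
    var offset by remember { mutableStateOf(Offset.Zero) }
    Box(
        Modifier
            .graphicsLayer(
                scaleX = scale,
                scaleY = scale,
                rotationZ = rotation,
                translationX = offset.x,
                translationY = offset.y
            )
            .pointerInput(Unit) {
                awaitEachGesture {
                    awaitFirstDown()
                    do {
                        val event = awaitPointerEvent()
                        // Ignore single-finger input: only transform with exactly two pointers
                        if (event.changes.size == 2) {
                            offset += event.calculatePan()
                            scale *= event.calculateZoom()
                            rotation += event.calculateRotation()
                        }
                        // Consume changes so parent scrollables don't steal the gesture
                        event.changes.forEach { it.consume() }
                    } while (event.changes.any { it.pressed })
                }
            }
            .background(Color.Blue)
            .fillMaxSize()
    )
}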
I also wrote a library that returns transform values via start, move, and end callbacks, along with the number of pointers down, which you can also use:
https://github.com/SmartToolFactory/Compose-Extended-Gestures
Modifier.pointerInput(Unit) {
    detectTransformGestures(
        onGestureStart = {
            transformDetailText = "GESTURE START"
        },
        onGesture = { gestureCentroid: Offset,
                      gesturePan: Offset,
                      gestureZoom: Float,
                      gestureRotate: Float,
                      mainPointerInputChange: PointerInputChange,
                      pointerList: List<PointerInputChange> ->
        },
        onGestureEnd = {
            borderColor = Color.LightGray
            transformDetailText = "GESTURE END"
        }
    )
}
Or this one, which checks the number of pointers and a requisite condition before returning zoom, rotate, drag, and so on:
Modifier
    .pointerInput(Unit) {
        detectPointerTransformGestures(
            numberOfPointers = 1,
            requisite = PointerRequisite.GreaterThan,
            onGestureStart = {
                transformDetailText = "GESTURE START"
            },
            onGesture = { gestureCentroid: Offset,
                          gesturePan: Offset,
                          gestureZoom: Float,
                          gestureRotate: Float,
                          numberOfPointers: Int ->
            },
            onGestureEnd = {
                transformDetailText = "GESTURE END"
            }
        )
    }
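As a usage sketch, the empty onGesture lambda above could feed the library's values back into the same scale/rotation/offset state as the first sample (my own wiring; the callback shape is exactly the one shown above, everything else is assumed):

onGesture = { gestureCentroid: Offset,
              gesturePan: Offset,
              gestureZoom: Float,
              gestureRotate: Float,
              numberOfPointers: Int ->
    // scale, rotation, and offset are the states from TransformableSample above
    scale *= gestureZoom
    rotation += gestureRotate
    offset += gesturePan
}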
I have an isometric grid with images and need to draw an overlay for each of them. I tried Modifier.zIndex, as recommended in this answer, but didn't succeed. So I decided to pass over the images a second time after my first layer is completed, adding an extra loop. Can this be simplified with zIndex or some other method?
Box {
    repeat(2) {
        for (y in 0 until gridHeight) {
            for (x in 0 until gridWidth) {
                Box(
                    modifier = Modifier.padding(start = start.dp, top = top.dp)
                ) {
                    if (it == 0)
                        itemContent(index, data[index]) // The basic image
                    else
                        Box(
                            modifier = Modifier.size(Dimens.imageWidth.dp),
                            contentAlignment = BiasAlignment(0f, 0.5f)
                        ) {
                            Icon( // The overlay
                                imageVector = Icons.Filled.Image,
                                contentDescription = null,
                                modifier = Modifier.size(32.dp)
                            )
                        }
                }
            }
        }
    }
}
I'm trying to implement rotary input scrolling (currently on a Galaxy Watch 4, where the bezel controls rotary input) on a HorizontalPager. I can get it to move forward, but not backwards, no matter which direction I turn the bezel. How do I detect counterclockwise rotary input to make the pager go back instead of forward?
Note:
pagerState.scrollBy(it.horizontalScrollPixels) does work forwards and backwards, but it doesn't snap to the next page; it only scrolls partway. This could be a solution too (barring the janky animation; animateScrollToPage flows better, but has the same lack of backwards scrolling). I will also accept an answer that finds a value that makes scrollBy snap to the next page, centered, for all screen sizes. I'm thinking it's it.horizontalScrollPixels times something ("it" is a RotaryScrollEvent object).
This code moves the pager forward:
val focusRequester = remember { FocusRequester() }
LaunchedEffect(Unit) {
    focusRequester.requestFocus()
}
HorizontalPager(
    count = 4,
    state = pagerState,
    // Add 32.dp horizontal padding to 'center' the pages
    modifier = Modifier
        .fillMaxSize()
        .onRotaryScrollEvent {
            coroutineScope.launch {
                pagerState.animateScrollToPage(pagerState.targetPage, 1f)
            }
            true
        }
        .focusRequester(focusRequester)
        .focusable()
) { page ->
    // page content
}
You should check the direction and size of the scroll event.
Also, scroll by changing the target page, not the offset within that page. Your code happens to work because scrolling to offset 1.0 within the page moves to the next page.
/**
 * ScrollableState integration for Horizontal Pager.
 */
public class PagerScrollHandler(
    private val pagerState: PagerState,
    private val coroutineScope: CoroutineScope
) : ScrollableState {
    override val isScrollInProgress: Boolean
        get() = totalDelta != 0f

    override fun dispatchRawDelta(delta: Float): Float = scrollableState.dispatchRawDelta(delta)

    private var totalDelta = 0f

    private val scrollableState = ScrollableState { delta ->
        totalDelta += delta
        val offset = when {
            // tune to match device
            totalDelta > 40f -> 1
            totalDelta < -40f -> -1
            else -> null
        }
        if (offset != null) {
            totalDelta = 0f
            val newTargetPage = pagerState.targetPage + offset
            if (newTargetPage in (0 until pagerState.pageCount)) {
                coroutineScope.launch {
                    pagerState.animateScrollToPage(newTargetPage, 0f)
                }
            }
        }
        delta
    }

    override suspend fun scroll(
        scrollPriority: MutatePriority,
        block: suspend ScrollScope.() -> Unit
    ) {
        scrollableState.scroll(block = block)
    }
}
val state = rememberPagerState()
val pagerScrollHandler = remember { PagerScrollHandler(state, coroutineScope) }

modifier = Modifier
    .fillMaxSize()
    .onRotaryScrollEvent {
        coroutineScope.launch {
            pagerScrollHandler.scrollBy(it.verticalScrollPixels)
        }
        true
    }
    .focusRequester(viewModel.focusRequester)
    .focusable()
Also you should check that targetPage + offset is a valid page.
I tested this on a Galaxy Watch 4. Using this guide from the official documentation (https://developer.android.com/training/wearables/user-input/rotary-input#kotlin), I printed the delta values when I scrolled using the bezel of the watch. For each scroll I made in the clockwise direction I got a delta of 128, and -128 for each counterclockwise scroll.
Using simple if/else blocks I was able to distinguish which way I had scrolled.
override fun onGenericMotionEvent(event: MotionEvent?): Boolean {
    if (event?.action == MotionEvent.ACTION_SCROLL && event.isFromSource(InputDeviceCompat.SOURCE_ROTARY_ENCODER)) {
        runBlocking {
            val delta = -event.getAxisValue(MotionEventCompat.AXIS_SCROLL) *
                ViewConfigurationCompat.getScaledVerticalScrollFactor(
                    ViewConfiguration.get(baseContext), baseContext
                )
            if (delta < 127) {
                scalingLazyListState.scrollBy(-100f)
            } else {
                scalingLazyListState.scrollBy(100f)
            }
        }
    }
    return super.onGenericMotionEvent(event)
}
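In pure Compose, the same direction check can be done from the sign of the rotary delta. Here is a minimal sketch (my own, not from the answers above; it assumes the pagerState, coroutineScope, and focusRequester already set up in the question) that snaps one page per bezel step in either direction:

modifier = Modifier
    .fillMaxSize()
    .onRotaryScrollEvent { event ->
        // Positive pixels should be clockwise, negative counterclockwise; verify on device
        val direction = if (event.verticalScrollPixels > 0) 1 else -1
        val next = pagerState.targetPage + direction
        // Only animate to pages that actually exist
        if (next in 0 until pagerState.pageCount) {
            coroutineScope.launch {
                pagerState.animateScrollToPage(next)
            }
        }
        true
    }
    .focusRequester(focusRequester)
    .focusable()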
When I use AnimatedVisibility around Canvas, it doesn't work.
AnimatedVisibility(
    visible = firstShowVisible,
    modifier = Modifier.align(Alignment.Center),
    enter = fadeIn(0f, tween(300, 3100, LinearEasing))
) {
    Canvas(modifier = Modifier) {
        drawCircleBackground(color, radius, strokeWidth)
        drawCircleProgress(color, progress, radius, strokeWidth)
    }
}
The Canvas item shows immediately rather than fading in slowly.
And firstShowVisible is changed by:
var firstShowVisible by remember { mutableStateOf(false) }
LaunchedEffect(true) {
    firstShowVisible = true
}
It works for other items; only Canvas fails for me.
It's solved.
The arc I draw was not inside the bounds of the outer Box.
When I make the outer Box fillMaxSize, it works.
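To illustrate the fix, here is a minimal sketch (sizes and the drawArc body are assumed, not from the original post): the outer Box must be large enough to contain what the Canvas draws, otherwise the drawing sits outside the area that AnimatedVisibility animates.

Box(modifier = Modifier.fillMaxSize()) { // previously too small to contain the arc
    AnimatedVisibility(
        visible = firstShowVisible,
        modifier = Modifier.align(Alignment.Center),
        enter = fadeIn(animationSpec = tween(300, 3100, LinearEasing))
    ) {
        // Give the Canvas an explicit size so the arc stays inside the layout bounds
        Canvas(modifier = Modifier.size(200.dp)) {
            drawArc(
                color = Color.Blue,
                startAngle = 0f,
                sweepAngle = 270f,
                useCenter = false,
                style = Stroke(width = 8.dp.toPx())
            )
        }
    }
}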
I am trying to implement a graph-drawing view on OS X with the Cocoa and Quartz frameworks, using NSBezierPath and adding/deleting data points as I go.
Doing this in drawRect worked fine while the graph updated frequently, but I ran into performance problems when I needed to increase the total number of data points / the sampling rate.
I decided to move to drawLayer:inContext:, but although the function is called at 60fps, the view only updates the graph at about 1fps.
What am I doing wrong here?
class CustomDrawLayer: CALayer {
    convenience init(view: NSView, drawsAsynchronously: Bool = false) {
        self.init()
        self.bounds = view.bounds
        self.anchorPoint = CGPointZero
        self.opaque = false
        self.frame = view.frame
        self.drawsAsynchronously = drawsAsynchronously
        // for multiple draws in hosting view
        // self.delegate = self
    }

    override func actionForLayer(layer: CALayer, forKey event: String) -> CAAction? {
        return nil
    }
}
override func drawLayer(layer: CALayer, inContext ctx: CGContext) {
    if layer == self.layer {
        Swift.print("axes drawing")
        graphBounds.origin = self.frame.origin
        graphAxes.drawAxesInRect(graphBounds, axeOrigin: plotOrigin, xPointsToShow: CGFloat(totalSecondsToDisplay), yPointsToShow: CGFloat(totalChannelsToDisplay))
    }
    if layer == self.board {
        Swift.print(1/NSDate().timeIntervalSinceDate(fpsTimer))
        fpsTimer = NSDate()
        drawPointsInGraph(graphAxes, context: ctx)
    }
}

func drawPointsInGraph(axes: AxesDrawer, context: CGContext) {
    color.set()
    var x: CGFloat = 0
    var y: CGFloat = 0
    for var channel = 0; channel < Int(totalChannelsToDisplay); channel++ {
        path.removeAllPoints()
        var visibleIndex = (dirtyRect.origin.x - axes.position.x) / (axes.pointsPerUnit.x / samplingRate)
        if visibleIndex < 2 {
            visibleIndex = 2
        }
        for var counter = Int(visibleIndex); counter < dataStream![channel].count; counter++ {
            if dataStream![channel][counter] == 0 {
                if path.elementCount > 0 {
                    path.stroke()
                }
                break
            }
            let position = axes.position
            let ppY = axes.pointsPerUnit.y
            let ppX = axes.pointsPerUnit.x
            let channelYLocation = CGFloat(channel)
            x = position.x + CGFloat(counter-1) * (ppX / samplingRate)
            y = ((channelYLocation * ppY) + position.y) + (dataStream![channel][counter-1] * (ppY))
            path.moveToPoint(CGPoint(x: align(x), y: align(y)))
            x = position.x + CGFloat(counter) * (ppX / samplingRate)
            y = ((channelYLocation * ppY) + position.y) + (dataStream![channel][counter] * (ppY))
            path.lineToPoint(CGPoint(x: align(x), y: align(y)))
            if x > (axes.position.x + axes.bounds.width) * 0.9 {
                graphAxes.forwardStep = 5
                dirtyRect = graphBounds
                for var c = 0; c < Int(totalChannelsToDisplay); c++ {
                    for var i = 0; i < Int(samplingRate) * graphAxes.forwardStep; i++ {
                        dataStream![c][i] = 0
                    }
                }
                return
            }
        }
        path.stroke()
    }
    if inLiveResize {
        dirtyRect = graphBounds
    } else {
        dirtyRect.origin.x = x
        dirtyRect.origin.y = bounds.minY
        dirtyRect.size.width = 10
        dirtyRect.size.height = bounds.height
    }
}
It is incredibly rare that you should ever call a function at 60 Hz, and in no case should you try to call a drawing function at 60 Hz; that never makes sense in Cocoa. If you really mean "at the screen refresh interval," see CADisplayLink, which is specifically built to let you draw at the screen refresh interval. This may be slower than 60 Hz; if you try to draw at exactly 60 Hz, you can get out of sync and cause beats in your animation. But this is really only intended for things like real-time video. If that's what you have, then this is the tool, but it doesn't really sound like it.
It's a bit difficult to understand your code, and it's not clear where your 60fps comes in, but I'm assuming what you're trying to do is animate drawing the graph. If so, as Mark F notes, see CAShapeLayer. It has automatic path animations built in and is definitely what you want. It automatically handles timing, syncing with the screen refresh, GPU optimizations, and lots of other things that you shouldn't try to work around.
Even if CAShapeLayer isn't what you want, you should be looking at Core Animation, which is designed to work with you to animate values and redraw as necessary. It will automatically handle rendering your layer on multiple cores, for instance, which dramatically improves performance. For more on that, see Animating Custom Layer Properties.
If your path needs to be drawn that frequently, check out CAShapeLayer, where you can just change the path property. That will be hardware accelerated and much faster than drawRect or drawLayer.
I'm trying to pick an object (loaded from JSON) with a mouse click event. I use a ray to identify the object, but for some reason the objects are not always identified. I suspect it is related to the fact that I move the camera, because when I click near the object, it is identified.
Can you help me figure out how to set up the ray correctly, taking the camera movement into account?
Here is the code. This is the part of the mouse-down event:
document.addEventListener("mousemove", onDocumentMouseMove, false);
document.addEventListener("mouseup", onDocumentMouseUp, false);
document.addEventListener("mouseout", onDocumentMouseOut, false);
mouseXOnMouseDown = event.clientX - windowHalfX;
targetRotationOnMouseDown = targetRotation;
var ray, intersections;
_vector.set((event.clientX / window.innerWidth) * 2 - 1, -(event.clientY / window.innerHeight) * 2 + 1, 0);
projector.unprojectVector(_vector, camera);
ray = new THREE.Ray(camera.position, _vector.subSelf(camera.position).normalize());
intersections = ray.intersectObjects(furniture);
if (intersections.length > 0) {
selected_block = intersections[0].object;
_vector.set(0, 0, 0);
selected_block.setAngularFactor(_vector);
selected_block.setAngularVelocity(_vector);
selected_block.setLinearFactor(_vector);
selected_block.setLinearVelocity(_vector);
mouse_position.copy(intersections[0].point);
block_offset.sub(selected_block.position, mouse_position);
intersect_plane.position.y = mouse_position.y;
}
}
This is the part that moves the camera:
camera.position.x = (Math.cos(timer) * 10);
camera.position.z = (Math.sin(timer) * 10);
camera.lookAt(scene.position);
Hmmm, it is hard to say what your problem might be without seeing some kind of demonstration of how your program is actually behaving. I would suggest looking at the demo I have been working on today; I handle the camera, controls, and rays there, and I am using a JSON model as well.
First, you can view my demo here to get an idea of what it is doing; what you're describing sounds similar. You should be able to adapt my code if you can understand it.
--If you would like a direct link to the source code: main.js
I also have another demo you might find useful, where I use rays and mouse collisions to spin a cube. --Source code: main.js
Finally, I'll post the guts of my mouse events and how I handle them with the trackball camera in the first demo; hopefully some of this will lead you to a solution:
/** Event fired when the mouse button is pressed down */
function onDocumentMouseDown(event) {
    event.preventDefault();
    /** Calculate mouse position and project vector through camera and mouse3D */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    var intersects = ray.intersectObject(maskMesh);
    if (intersects.length > 0) {
        SELECTED = intersects[0].object;
        var intersects = ray.intersectObject(plane);
        offset.copy(intersects[0].point).subSelf(plane.position);
        killControls = true;
    }
    else if (controls.enabled == false)
        controls.enabled = true;
}

/** This event handler is only fired after the mouse down event and
    before the mouse up event, and only when the mouse moves */
function onDocumentMouseMove(event) {
    event.preventDefault();
    /** Calculate mouse position and project through camera and mouse3D */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);
    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    if (SELECTED) {
        var intersects = ray.intersectObject(plane);
        SELECTED.position.copy(intersects[0].point.subSelf(offset));
        killControls = true;
        return;
    }
    var intersects = ray.intersectObject(maskMesh);
    if (intersects.length > 0) {
        if (INTERSECTED != intersects[0].object) {
            INTERSECTED = intersects[0].object;
            INTERSECTED.currentHex = INTERSECTED.material.color.getHex();
            plane.position.copy(INTERSECTED.position);
        }
    }
    else {
        INTERSECTED = null;
    }
}

/** Resets the selection state when the mouse button is let go */
function onDocumentMouseUp(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
        killControls = false;
    }
}

/** Resets the selection state if the mouse runs off the renderer */
function onDocumentMouseOut(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
    }
}
And in order to get the desired effect shown in my first demo, I had to add this to my animation loop, using the killControls flag to selectively turn the trackball camera controls on and off based on the mouse collisions:
if (!killControls) controls.update(delta);
else controls.enabled = false;