In my web application I use a 3D scatter chart rendered with Highcharts.
It rotates nicely with the mouse by clicking on it and dragging around.
However, it does not rotate at all on either my phone or my tablet.
The code for enabling rotation looks as follows (adapted from the Highcharts sample page):
// Add mouse events for rotation
$(chart.container).on('mousedown.hc touchstart.hc', function (eStart) {
    eStart = chart.pointer.normalize(eStart);

    var posX = eStart.pageX,
        posY = eStart.pageY,
        alpha = chart.options.chart.options3d.alpha,
        beta = chart.options.chart.options3d.beta,
        newAlpha,
        newBeta,
        sensitivity = 5; // lower is more sensitive

    $(document).on({
        'mousemove.hc touchdrag.hc': function (e) {
            // Run beta
            newBeta = beta + (posX - e.pageX) / sensitivity;
            chart.options.chart.options3d.beta = newBeta;

            // Run alpha
            newAlpha = alpha + (e.pageY - posY) / sensitivity;
            chart.options.chart.options3d.alpha = newAlpha;

            chart.redraw(false);
        },
        'mouseup touchend': function () {
            $(document).off('.hc');
        }
    });
});
I assume that the events this logic is registered on are not available on my devices (a recent Samsung Galaxy Tab and a Samsung Galaxy S5).
Any ideas?
The problem is that a touch event doesn't have pageX or pageY. Instead of the page position, you should use the chart position (event.chartX and event.chartY), which is only available after the event has been normalized by chart.pointer.normalize.
Also, there is no touchdrag event; it should be touchmove.
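For reference, here is a sketch of the handler with those two changes applied; it reuses the chart and jQuery setup from the question's snippet and is untested:

// Rotation handler using normalized chart coordinates and touchmove
$(chart.container).on('mousedown.hc touchstart.hc', function (eStart) {
    eStart = chart.pointer.normalize(eStart);

    var posX = eStart.chartX,
        posY = eStart.chartY,
        alpha = chart.options.chart.options3d.alpha,
        beta = chart.options.chart.options3d.beta,
        sensitivity = 5; // lower is more sensitive

    $(document).on({
        'mousemove.hc touchmove.hc': function (e) {
            // normalize so chartX/chartY are available for touch events as well
            e = chart.pointer.normalize(e);
            chart.options.chart.options3d.beta = beta + (posX - e.chartX) / sensitivity;
            chart.options.chart.options3d.alpha = alpha + (e.chartY - posY) / sensitivity;
            chart.redraw(false);
        },
        'mouseup touchend': function () {
            $(document).off('.hc');
        }
    });
});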
This question is purely about Flutter's GestureDetector.
For example:
In my application a GestureDetector is implemented, and multitouch is supported by default. I now need to disable multitouch; what would be the best way to do that? Otherwise, using GestureDetector in a Flutter drawing app causes multi-touch issues.
So how do I disable multitouch in GestureDetector?
I faced the same problem, but I solved it by measuring the distance between two points.
The formula for the distance between two points, converted to code:
// The distance formula converted to code (sqrt comes from dart:math)
double distanceBetweenTwoPoints(double x1, double y1, double x2, double y2) {
  double x = x1 - x2;
  x = x * x;
  double y = y1 - y2;
  y = y * y;
  double result = x + y;
  return sqrt(result);
}
First of all, declare two variables with their initial values:

// These two variables save the previous point
var fingerPostionY = 0.0, fingerPostionX = 0.0;
Then, inside the onPanUpdate callback, I take the current point and calculate its distance from the previous one. If the distance is large (e.g. more than 50), several fingers are probably on the screen, so I ignore the update; otherwise I treat it as a single finger on the screen.
onPanUpdate: (details) {
  if (fingerPostionY < 1.0) {
    // assign for the first time, to compare against later
    fingerPostionY = details.globalPosition.dy;
    fingerPostionX = details.globalPosition.dx;
  } else {
    // possibly more than one finger is on the screen
    double distance = distanceBetweenTwoPoints(
        details.globalPosition.dx, details.globalPosition.dy,
        fingerPostionX, fingerPostionY);
    // if the distance between the two points is above 50,
    // treat it as multi-touch and ignore the update
    if (distance > 50)
      return;
    // update to use in the next comparison
    fingerPostionY = details.globalPosition.dy;
    fingerPostionX = details.globalPosition.dx;
  }

  // the drawing code
  setState(() {
    RenderBox renderBox = context.findRenderObject();
    points.add(TouchPoints(
        points: renderBox.globalToLocal(details.globalPosition),
        paint: Paint()
          ..strokeCap = strokeType
          ..isAntiAlias = true
          ..color = activeColor.withOpacity(opacity)
          ..strokeWidth = strokeWidth));
  });
},
IMPORTANT NOTES:
Inside the onPanEnd callback you must add this line, because it means the finger has been lifted:
fingerPostionY = 0.0;
There is still a performance issue in the drawing code that has not been solved yet.
EDIT:
I improved the performance by using Path.
You can see my code on GitHub:
free painting on flutter
Problem:
Zooming in on an image by scaling and moving it using a matrix causes the app to run out of memory and crash.
Additional Libraries used:
Gestouch - https://github.com/fljot/Gestouch
Description:
In my Flex Mobile app I have an Image inside a Group with pan/zoom enabled using the Gestouch library. The zoom works to an extent but causes the app to die (not freeze, just exit) with no error message after a certain zoom level.
This would be manageable except I can’t figure out how to implement a threshold to stop the zoom at, as it crashes at a different zoom level almost every time. I also use dynamic images so the source of the image could be any size or resolution.
They are usually JPEGs ranging from about 800x600 to 9000x6000 and are downloaded from a server, so they cannot be packaged with the app.
According to the AS3 docs, there is no longer a limit on the size of a BitmapData object, so that shouldn't be the issue:
“Starting with AIR 3 and Flash player 11, the size limits for a BitmapData object have been removed. The maximum size of a bitmap is now dependent on the operating system.”
The group is used as a marker layer for overlaying pins on.
The crash mainly happens on iPad Mini and older Android devices.
Things I have already tried:
1. Using Adobe Scout to pinpoint when the memory leak occurs.
2. Debugging to find the exact height and width of the marker layer and image at the time of the crash.
3. Setting a max zoom variable based on the size of the image.
4. Cropping the image on zoom to show only the visible area (crashes on the copyPixels and BitmapData.draw() functions).
5. Using ImageMagick to make lower-quality images (small images still crash).
6. Using ImageMagick to make a very low-res image plus a grid of smaller images, displayed in the mobile app using a List with a Tile layout.
7. Using weak references when adding event listeners.
Any suggestions would be appreciated.
Thanks
private function layoutImageResized(e:Event):void
{
    markerLayer.scaleX = markerLayer.scaleY = 1;
    markerLayer.x = markerLayer.y = 0;

    var scale:Number = Math.min(width / image.sourceWidth, height / image.sourceHeight);
    image.scaleX = image.scaleY = scale;

    _imageIsWide = (image.sourceWidth / image.sourceHeight) > (width / height);

    // centre image
    if (_imageIsWide)
    {
        markerLayer.y = (height - image.sourceHeight * image.scaleY) / 2;
    }
    else
    {
        markerLayer.x = (width - image.sourceWidth * image.scaleX) / 2;
    }

    // set max scale
    _maxScale = scale * _maxZoom;
}
private function onGesture(event:org.gestouch.events.GestureEvent):void
{
    trace("Gesture start");

    // if the user starts moving around while the add Pin option is up,
    // the state will be changed and the menu will disappear
    if (currentState == "addPin")
    {
        return;
    }

    const gesture:TransformGesture = event.target as TransformGesture;
    ////trace("gesture state is ", gesture.state);

    if (gesture.state == GestureState.BEGAN)
    {
        currentState = "zooming";
        imgOldX = image.x;
        imgOldY = image.y;
        oldImgWidth = markerLayer.width;
        oldImgHeight = markerLayer.height;

        if (!_hidePins)
        {
            showHidePins(false);
        }
    }

    var matrix:Matrix = markerLayer.transform.matrix;

    // Pan
    matrix.translate(gesture.offsetX, gesture.offsetY);
    markerLayer.transform.matrix = matrix;

    if ((gesture.scale != 1 || gesture.rotation != 0)
        && ((markerLayer.scaleX < _maxScale && markerLayer.scaleY < _maxScale) || gesture.scale < 1)
        && gesture.scale < 1.4)
    {
        storedScale = gesture.scale;

        // Zoom
        var transformPoint:Point = matrix.transformPoint(markerLayer.globalToLocal(gesture.location));
        matrix.translate(-transformPoint.x, -transformPoint.y);
        matrix.scale(gesture.scale, gesture.scale);
        /** THIS IS WHERE THE CRASH HAPPENS **/
        matrix.translate(transformPoint.x, transformPoint.y);
        markerLayer.transform.matrix = matrix;
    }
}
I would say it's not a good idea to work with an image as large as 9000x6000 on mobile devices.
I suppose you are trying to implement some sort of map navigation, so you need to zoom far into certain areas.
My solution would be to split that 9000x6000 image into 2048x2048 pieces, then compress them using the png2atf utility with mipmaps enabled.
Then you can use Starling to load these ATF images, add them to Stage3D, and manage them easily.
With a 9000x6000 image you'll end up with about 15 2048x2048 pieces. Having them all on the stage at once might sound heavy, but thanks to mipmaps only tiny thumbnails of the image are kept in memory until they are zoomed, so you'll never run out of memory, provided you remove invisible pieces from the stage from time to time while zooming in and put them back when zooming out.
I recently updated my Cordova mobile mapping app from OL3 V3.1.1 to V3.7.0 to V3.8.2.
I am using PouchDB to store offline tiles, and with V3.1.1 the tiles were visible.
Here is the code snippet:
OSM_bc_offline_pouchdb = new ol.layer.Tile({
    //maxResolution: 5000,
    //extent: BC,
    //projection: spherical_mercator,
    //crossOrigin: 'anonymous',
    source: new ol.source.XYZ({
        //adapted from: http://jsfiddle.net/gussy/LCNWC/
        tileLoadFunction: function (imageTile, src) {
            pouchTilesDB_osm_bc_baselayer.getAttachment(src, 'tile', function (err, res) {
                if (err && err.error == 'not_found')
                    return;
                //if (!res) return; // ?issue -> causes map refresh on movement to stop
                imageTile.getImage().src = window.URL.createObjectURL(res);
            });
        },
        tileUrlFunction: function (coordinate, projection) {
            if (coordinate == null)
                return undefined;

            // OSM NW origin style URL
            var z = coordinate[0];
            var x = coordinate[1];
            var y = coordinate[2];
            var imgURL = ["tile", z, x, y].join('_');
            return imgURL;
        }
    })
});

trails_mobileMap.addLayer(OSM_bc_offline_pouchdb);
OSM_bc_offline_pouchdb.setVisible(true);
Moving to either V3.7.0 or V3.8.2 causes the tiles not to display. I have read the API docs and can't see why this would happen.
What in my code needs updating to work with OL-V3.8.2?
Thanks,
Peter
Your issue might be related to the changes to ol.TileCoord in OpenLayers 3.7.0. From the release notes:
Until now, the API exposed two different types of ol.TileCoord tile coordinates: internal ones that increase left to right and upward, and transformed ones that may increase downward, as defined by a transform function on the tile grid. With this change, the API now only exposes tile coordinates that increase left to right and upward.
Previously, tile grids created by OpenLayers either had their origin at the top-left or at the bottom-left corner of the extent. To make it easier for application developers to transform tile coordinates to the common XYZ tiling scheme, all tile grids that OpenLayers creates internally have their origin now at the top-left corner of the extent.
This change affects applications that configure a custom tileUrlFunction for an ol.source.Tile. Previously, the tileUrlFunction was called with rather unpredictable tile coordinates, depending on whether a tile coordinate transform took place before calling the tileUrlFunction. Now it is always called with OpenLayers tile coordinates. To transform these into the common XYZ tiling scheme, a custom tileUrlFunction has to change the y value (tile row) of the ol.TileCoord:
function tileUrlFunction(tileCoord, pixelRatio, projection) {
    var urlTemplate = '{z}/{x}/{y}';
    return urlTemplate
        .replace('{z}', tileCoord[0].toString())
        .replace('{x}', tileCoord[1].toString())
        .replace('{y}', (-tileCoord[2] - 1).toString());
}
If this is your issue, try changing your tileUrlFunction to
function (coordinate, projection) {
    if (coordinate == null)
        return undefined;

    // OSM NW origin style URL
    var z = coordinate[0];
    var x = coordinate[1];
    var y = (-coordinate[2] - 1);
    var imgURL = ["tile", z, x, y].join('_');
    return imgURL;
}
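For reference, here is a sketch of how that updated function could sit in the layer definition from the question, with the tileLoadFunction left unchanged:

OSM_bc_offline_pouchdb = new ol.layer.Tile({
    source: new ol.source.XYZ({
        // unchanged: look the generated key up in PouchDB and use the stored blob as the tile image
        tileLoadFunction: function (imageTile, src) {
            pouchTilesDB_osm_bc_baselayer.getAttachment(src, 'tile', function (err, res) {
                if (err && err.error == 'not_found')
                    return;
                imageTile.getImage().src = window.URL.createObjectURL(res);
            });
        },
        // flip the tile row for the top-left origin used since OL 3.7.0
        tileUrlFunction: function (coordinate, projection) {
            if (coordinate == null)
                return undefined;
            return ["tile", coordinate[0], coordinate[1], -coordinate[2] - 1].join('_');
        }
    })
});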
I've tried various web searches, but I can't seem to find anything that relates to my problem.
To quickly delineate the problem:
- HTML5 Cordova iOS app (7.1 -> 8.1)
- uses draggable elements
- I only have issues on the iPad, not on the iPhone
- the HTML5 app itself works flawlessly in a web browser
The app itself is a biology app that teaches translation - decoding RNA into an amino acid sequence, i.e. a protein.
For this, the user sees the sequence and drags the correct amino acid onto it. The amino acid is a draggable element and the target div is a droppable. A chain is built one amino acid at a time. Please refer to the screenshot to get an idea (I can't embed it yet).
http://i.stack.imgur.com/S4UpF.png
In order to fit all screens, I "transform: scale" the app accordingly (its fixed size is ~850x550). To get rid of the associated jQuery UI draggable bug (object movement would otherwise also change with the scaling factor), I've followed the instructions at http://gungfoo.wordpress.com/2013/02/15/jquery-ui-resizabledraggable-with-transform-scale-set/
// scaling to fit viewport

// sizing the page
var myPage = $('.page');
var pageWidth = myPage.width();
var pageHeight = myPage.height();

// sizing the iFrame
var myFrame = $('.container');
var frameWidth = myFrame.width();
var frameHeight = myFrame.height();

// scaleFactor horizontal
var horizontalScale = pageWidth / frameWidth;
// scaleFactor vertical
var verticalScale = pageHeight / frameHeight;

// global zoomScale variable
var zoomScale = 1; // default, required for draggable debug

// if page fits vertically - scale horizontally
if ((frameHeight * horizontalScale) <= pageHeight) {
    myFrame.css({
        'transform': 'scale(' + horizontalScale + ')',
        'transform-origin': 'top',
    });

    // adding vertical margin, if possible
    if (pageHeight > frameHeight * horizontalScale) {
        var heightDifference = pageHeight - frameHeight * horizontalScale;
        myPage.css({
            'margin-top': heightDifference / 2,
            'height': pageHeight - heightDifference / 2,
        });
    }
    zoomScale = horizontalScale;

// else scale vertically
} else {
    myFrame.css({
        'transform': 'scale(' + verticalScale + ')',
        'transform-origin': 'top',
    });
    zoomScale = verticalScale;
}

// draggable + scale transform fixes (http://gungfoo.wordpress.com/2013/02/15/jquery-ui-resizabledraggable-with-transform-scale-set/)
function startFix(event, ui) {
    ui.position.left = 0;
    ui.position.top = 0;
}

function dragFix(event, ui) {
    var changeLeft = ui.position.left - ui.originalPosition.left; // find change in left
    var newLeft = ui.originalPosition.left + changeLeft / zoomScale; // adjust new left by our zoomScale

    var changeTop = ui.position.top - ui.originalPosition.top; // find change in top
    var newTop = ui.originalPosition.top + changeTop / zoomScale; // adjust new top by our zoomScale

    ui.position.left = newLeft;
    ui.position.top = newTop;
}
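For context, the two fix functions would then be wired into the draggable setup roughly like this; '.amino-acid' and '.codon-target' are hypothetical selectors standing in for the app's actual elements:

// Hook the scale-transform fixes into jQuery UI (selectors are placeholders)
$('.amino-acid').draggable({
    start: startFix, // reset jQuery UI's internal position bookkeeping
    drag: dragFix,   // divide movement deltas by zoomScale so the element follows the pointer
    revert: 'invalid'
});

$('.codon-target').droppable({
    drop: function (event, ui) {
        // handle a correctly placed amino acid here
    }
});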
I've already got a beta version on iTunes Connect and it works great on an iPhone. On iPads, however, the droppable area is oddly small and shifted, even though the div itself seems to be rendered properly - it is the box with the dashed border.
Has anyone else encountered a similar bug? I really have no idea how to fix it.
I have managed to solve the problem.
The issue was probably based on the viewport meta tag (0.5 scaling) interacting badly with the transform:scale resizing.
Simply removing all viewport meta arguments solved the problem.
I'm trying to catch a JSON object with a mouse click event. I use a ray to identify the object, but for some reason the objects are not always identified. I suspect it is related to the fact that I move the camera, because when I click near the object, it is identified.
Can you help me figure out how to set the ray correctly, in accordance with the camera move?
Here is the code.
This is the part of the mouse-down event handler:
document.addEventListener("mousemove", onDocumentMouseMove, false);
document.addEventListener("mouseup", onDocumentMouseUp, false);
document.addEventListener("mouseout", onDocumentMouseOut, false);
mouseXOnMouseDown = event.clientX - windowHalfX;
targetRotationOnMouseDown = targetRotation;
var ray, intersections;
_vector.set((event.clientX / window.innerWidth) * 2 - 1, -(event.clientY / window.innerHeight) * 2 + 1, 0);
projector.unprojectVector(_vector, camera);
ray = new THREE.Ray(camera.position, _vector.subSelf(camera.position).normalize());
intersections = ray.intersectObjects(furniture);
if (intersections.length > 0) {
selected_block = intersections[0].object;
_vector.set(0, 0, 0);
selected_block.setAngularFactor(_vector);
selected_block.setAngularVelocity(_vector);
selected_block.setLinearFactor(_vector);
selected_block.setLinearVelocity(_vector);
mouse_position.copy(intersections[0].point);
block_offset.sub(selected_block.position, mouse_position);
intersect_plane.position.y = mouse_position.y;
}
}
This is the part that moves the camera:
camera.position.x = (Math.cos(timer) * 10);
camera.position.z = (Math.sin(timer) * 10);
camera.lookAt(scene.position);
Hmmm, it is hard to say what your problem might be without seeing a demonstration of how your program is actually behaving. I would suggest looking at the demo I have been working on today; I handle my camera, controls, and rays there, and I am using a JSON model as well.
First, you can view my demo here to get an idea of what it is doing; what you're describing sounds similar. You should be able to adapt my code if you can understand it.
--If you would like a direct link to the source code: main.js
I also have another demo you might find useful, where I use rays and mouse collisions to spin a cube. --Source code: main.js
Finally, I'll post the guts of my mouse events and how I handle them with the trackball camera in the first demo; hopefully some of this will lead you to a solution:
/** Event fired when the mouse button is pressed down */
function onDocumentMouseDown(event) {
    event.preventDefault();

    /** Calculate mouse position and project vector through camera and mouse3D */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);

    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());
    var intersects = ray.intersectObject(maskMesh);

    if (intersects.length > 0) {
        SELECTED = intersects[0].object;

        var intersects = ray.intersectObject(plane);
        offset.copy(intersects[0].point).subSelf(plane.position);
        killControls = true;
    }
    else if (controls.enabled == false)
        controls.enabled = true;
}
/** This event handler is only fired after the mouse down event and
    before the mouse up event and only when the mouse moves */
function onDocumentMouseMove(event) {
    event.preventDefault();

    /** Calculate mouse position and project through camera and mouse3D */
    mouse3D.x = mouse2D.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse3D.y = mouse2D.y = -(event.clientY / window.innerHeight) * 2 + 1;
    mouse3D.z = 0.5;
    projector.unprojectVector(mouse3D, camera);

    var ray = new THREE.Ray(camera.position, mouse3D.subSelf(camera.position).normalize());

    if (SELECTED) {
        var intersects = ray.intersectObject(plane);
        SELECTED.position.copy(intersects[0].point.subSelf(offset));
        killControls = true;
        return;
    }

    var intersects = ray.intersectObject(maskMesh);
    if (intersects.length > 0) {
        if (INTERSECTED != intersects[0].object) {
            INTERSECTED = intersects[0].object;
            INTERSECTED.currentHex = INTERSECTED.material.color.getHex();
            plane.position.copy(INTERSECTED.position);
        }
    }
    else {
        INTERSECTED = null;
    }
}
/** Clears the selection and re-enables the controls when the mouse button is let go */
function onDocumentMouseUp(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
        killControls = false;
    }
}

/** Clears the selection if the mouse runs off the renderer */
function onDocumentMouseOut(event) {
    event.preventDefault();
    if (INTERSECTED) {
        plane.position.copy(INTERSECTED.position);
        SELECTED = null;
    }
}
And in order to get the desired effect shown in my first demo, I had to add this to my animation loop; it uses the killControls flag to selectively turn the trackball camera controls on and off based on the mouse collisions:
if (!killControls) controls.update(delta);
else controls.enabled = false;
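In context, that check sits inside the animation loop, roughly like this (a sketch; clock, a THREE.Clock, and render() are assumed to exist in the demo):

// Sketch of the animation loop; `clock` (a THREE.Clock) and `render()` are assumed to exist
function animate() {
    requestAnimationFrame(animate);

    var delta = clock.getDelta();

    // only let the trackball controls consume mouse input while nothing is being dragged
    if (!killControls) controls.update(delta);
    else controls.enabled = false;

    render();
}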