Unity - Why do iPhone 6 and iPhone 7 behave differently to touch input? - ios

I have the code below in Update() to drag the camera and also to detect clicks on objects. When we try it on an iPhone 6 or an iPhone X it all works well, but when we try it on an iPhone 7 the screen drag is very unresponsive and clicking objects only works when you touch the screen very, very lightly. Does anybody have an idea what is going on?
if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began) {
    fingerMoved = false;
    if (_eventSystem.IsPointerOverGameObject(Input.GetTouch(0).fingerId)) {
        fingerMoved = true;
    }
    hit_position = Input.GetTouch(0).position;
    camera_position = cam.position;
} else if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Moved) {
    current_position = Input.GetTouch(0).position;
    LeftMouseDrag();
    if (Vector2.Distance(hit_position, current_position) > 7f) {
        fingerMoved = true;
    }
    cam.DOMoveY(target_position.y, 0.75f);
} else if (!fingerMoved && Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Ended) {
    foreach (var item in storageList) {
        if (Vector2.Distance(item.transform.position, Camera.main.ScreenToWorldPoint(hit_position)) < 0.5f) {
            sideMenu.Open(item.myNo);
        }
    }
}

void LeftMouseDrag() {
    Vector3 direction = Camera.main.ScreenToWorldPoint(current_position) - Camera.main.ScreenToWorldPoint(hit_position);
    direction.x = 0f;
    direction = direction * -1;
    target_position = camera_position + direction;
    if (target_position.y > camMaxY) {
        target_position.y = camMaxY;
    }
    if (target_position.y < camMinY) {
        target_position.y = camMinY;
    }
}

I am unsure if it makes a difference, but for this kind of thing it is easier and more reliable to use the EventSystem with the OnPointerClick / OnPointerDrag handlers. This way, at least in theory, any sensitivity differences could be leveled out by Unity itself. (I am not aware whether it actually does; it is just a guess.) A rough sketch of that approach is below.
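The sketch is an assumption-laden illustration, not the original code: it presumes an EventSystem in the scene and a PhysicsRaycaster (or Physics2DRaycaster) on the camera so that world objects with colliders receive pointer events, and the class name and the 0.01f pixels-to-world factor are made up for illustration.

using UnityEngine;
using UnityEngine.EventSystems;

public class PointerDragExample : MonoBehaviour, IPointerClickHandler, IDragHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        // The EventSystem only raises a click if the pointer stayed within
        // EventSystem.current.pixelDragThreshold, so tiny jitter is ignored.
        Debug.Log("Clicked " + name);
    }

    public void OnDrag(PointerEventData eventData)
    {
        // eventData.delta is the pointer movement in screen pixels since the
        // last drag event; scale it (illustrative factor) to move the camera.
        Camera.main.transform.Translate(0f, -eventData.delta.y * 0.01f, 0f);
    }
}

With this approach the EventSystem's own drag threshold decides what counts as a click versus a drag, instead of the hand-rolled distance check in the question.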

The problem went away by itself; I have no idea what went wrong in the first place.

I was having the same problem and fixed it by adding the second line of the code displayed below.
else if (Input.GetTouch(0).phase == TouchPhase.Moved)
    if (master.calcDelta(Input.GetTouch(0).position) > 0f)
calcDelta, in turn, calculates the magnitude of the displacement from the previous touch position to the current one:
calculatedDelta = Mathf.Abs(_touchPos.magnitude - pastPos.magnitude);
What it does is prevent touch jitter from falsely triggering Unity's built-in TouchPhase.Moved handling. A rough sketch of the same idea follows.
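This sketch is not the answerer's exact code; the class name, field names, and threshold value are assumptions, chosen only to show the idea of filtering sub-pixel jitter before treating a TouchPhase.Moved event as a real drag.

using UnityEngine;

public class JitterFilteredDrag : MonoBehaviour
{
    private Vector2 pastPos;                  // touch position from the last accepted event
    private const float JitterThreshold = 2f; // pixels; illustrative value, tune per device

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            pastPos = touch.position;
        }
        else if (touch.phase == TouchPhase.Moved)
        {
            // Length of the displacement since the last accepted position.
            float delta = (touch.position - pastPos).magnitude;
            if (delta > JitterThreshold)
            {
                // Treat this as a real drag and advance the reference position.
                pastPos = touch.position;
            }
            // Otherwise ignore the event as sensor jitter.
        }
    }
}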

Related

Unity - Disable AR HitTest after initial placement

I am using the ARKit plugin for Unity, leveraging UnityARHitTestExample.cs.
After I place my object into the world scene, I want to stop ARKit from trying to place the object again every time I touch the screen. Can someone please help?
There are a number of ways you can achieve this, although perhaps the simplest is creating a boolean to determine whether or not your model has been placed.
First of all, you would create the boolean noted above, e.g.:
private bool modelPlaced = false;
Then you would set this to true within the HitTestWithResultType function once your model has been placed:
bool HitTestWithResultType (ARPoint point, ARHitTestResultType resultTypes)
{
    List<ARHitTestResult> hitResults = UnityARSessionNativeInterface.GetARSessionNativeInterface ().HitTest (point, resultTypes);
    if (hitResults.Count > 0) {
        foreach (var hitResult in hitResults) {
            //1. If Our Model Hasn't Been Placed, Set Its Transform From The HitTest WorldTransform
            if (!modelPlaced) {
                m_HitTransform.position = UnityARMatrixOps.GetPosition (hitResult.worldTransform);
                m_HitTransform.rotation = UnityARMatrixOps.GetRotation (hitResult.worldTransform);
                Debug.Log (string.Format ("x:{0:0.######} y:{1:0.######} z:{2:0.######}", m_HitTransform.position.x, m_HitTransform.position.y, m_HitTransform.position.z));
                //2. Prevent Our Model From Being Positioned Again
                modelPlaced = true;
            }
            return true;
        }
    }
    return false;
}
And then in the Update() function:
void Update () {
    //Only Run The HitTest If We Haven't Placed Our Model
    if (!modelPlaced) {
        if (Input.touchCount > 0 && m_HitTransform != null)
        {
            var touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Moved)
            {
                var screenPosition = Camera.main.ScreenToViewportPoint(touch.position);
                ARPoint point = new ARPoint {
                    x = screenPosition.x,
                    y = screenPosition.y
                };
                ARHitTestResultType[] resultTypes = {
                    ARHitTestResultType.ARHitTestResultTypeExistingPlaneUsingExtent,
                };
                foreach (ARHitTestResultType resultType in resultTypes)
                {
                    if (HitTestWithResultType (point, resultType))
                    {
                        return;
                    }
                }
            }
        }
    }
}
Hope it helps...

CGEventCreateMouseEvent Compile Error in Xcode 7

Even though CGEventCreateMouseEvent is not deprecated in Xcode 7, I am getting the error:
No matching function for call to 'CGEventCreateMouseEvent'
Please find below the code that worked in Xcode 6 (OS X 10.10 SDK):
-(void)sendMouseClick:(CGMouseButton)mouseBtn value:(uint32_t)value
{
    int mouseEvent = 0;
    if (mouseBtn == kCGMouseButtonLeft && value == 1) // left btn down
    {
        mouseEvent = kCGEventLeftMouseDown;
    }
    else if (mouseBtn == kCGMouseButtonLeft && value == 0) // left btn up
    {
        mouseEvent = kCGEventLeftMouseUp;
    }
    else if (mouseBtn == kCGMouseButtonRight && value == 1) // right btn down
    {
        mouseEvent = kCGEventRightMouseDown;
    }
    else if (mouseBtn == kCGMouseButtonRight && value == 0) // right btn up
    {
        mouseEvent = kCGEventRightMouseUp;
    }
    if (mouseEvent != 0) // a valid mouse event
    {
        CGEventRef ourEvent = CGEventCreate(NULL);
        NSPoint mouseLoc = CGEventGetLocation(ourEvent); // get current mouse position
        CGEventRef mouseClick = CGEventCreateMouseEvent(
            NULL,
            mouseEvent,
            mouseLoc,
            mouseBtn
        );
        CGEventPost(kCGHIDEventTap, mouseClick);
    }
}
I have tried importing <CoreGraphics/CGEvent.h> but it made no difference. Any idea what is happening?
I found out that there is a type mismatch as of the 10.11 SDK. Changing
int mouseEvent = 0;
to
CGEventType mouseEvent = kCGEventNull;
fixes the issue.

Java compiling wrong old file

I have a Java app that runs an infinite while loop. When I click Run in Eclipse it seems to revert to old code that I have since changed. The thing is, when I build, it updates at seemingly random times. Most recently I added System.exit(); I then changed the code, and it still exits. I have also tried this program in C#. I feel that I am somehow confusing the language runtime with the infinite while loop. The program works on a series of changing boolean values. The main place I am seeing the erratic behavior (this is what was happening before I added System.exit()) is a method that iterates over the pixels of a BufferedImage. I am running Ubuntu 14.10. I have tried making a new project and pasting in the same code (could it be invisible characters somehow?). I am very confused and would be happy if someone could help.
while (true) {
    if (bool1 && !exe.isSeparate(image))
    {
        // change boolean values
        // did run System.exit(0)
    }
    if (bool2 && !exe.isSeparate(image))
    {
        // change boolean values
        // did run System.exit(0)
    }
}

boolean isSeparate(BufferedImage image)
{
    int x = touchingX;
    boolean first = false, second = false, third = false;
    int startAt = this.getYStart(image);
    for (int y = startAt; y < startAt + 150; y++)
    {
        Color pixel = new Color(image.getRGB(x, y));
        if (!(pixel.getRed() == 255 && pixel.getGreen() == 255 && pixel.getBlue() == 255)
                && !(pixel.getRed() == 0 && pixel.getGreen() == 68 && pixel.getBlue() == 125))
        {
            if (!first)
            {
                first = true;
            }
            if (first && second && !third)
            {
                third = true;
            }
        }
        else
        {
            if (first && !second)
            {
                second = true;
            }
        }
    }
    if (first && second && third)
    {
        return true;
    }
    return false;
}
I have answered my question. Ironically, it was a logic error.

iOS UIAutomation UIAElement.isVisible() throwing stale response?

I'm trying to use isVisible() within a loop to create a waitForElement type of function for my iOS UIAutomation. When I try to use the following code, it fails while waiting for an element when a new screen pops up. The element is clearly there, because if I do a delay(2) before tapping the element it works perfectly fine. How is everyone else accomplishing this? I am at a loss...
Here's the waitForElement code that I am using:
function waitForElement(element, timeout, step) {
    if (step == null) {
        step = 0.5;
    }
    if (timeout == null) {
        timeout = 10;
    }
    var stop = timeout / step;
    for (var i = 0; i < stop; i++) {
        if (element.isVisible()) {
            return;
        }
        target.delay(step);
    }
    element.logElement();
    throw("Not visible");
}
Here is a simple wait_for_element method that could be used:
this.wait_for_element = function(element, preDelay) {
    if (!preDelay) {
        target.delay(0);
    }
    else {
        target.delay(preDelay);
    }
    var found = false;
    var counter = 0;
    while ((!found) && (counter < 60)) {
        if (!element.isValid()) {
            target.delay(0.5);
            counter++;
        }
        else {
            found = true;
            target.delay(1);
        }
    }
}
I tend to stay away from my wait_for_element and instead look for any activityIndicator objects on screen. I use this method to actually wait for the page to load.
this.wait_for_page_load = function(preDelay) {
    if (!preDelay) {
        target.delay(0);
    }
    else {
        target.delay(preDelay);
    }
    var done = false;
    var counter = 0;
    while ((!done) && (counter < 60)) {
        var progressIndicator = UIATarget.localTarget().frontMostApp().windows()[0].activityIndicators()[0];
        if (progressIndicator != "[object UIAElementNil]") {
            target.delay(0.25);
            counter++;
        }
        else {
            done = true;
        }
    }
    target.delay(0.25);
}
Here is a simpler and better one using recursion. The "return true" is not needed, but it is there in case you want it.
waitForElementToDismiss: function(elementToWait, waitTime) { // Using recursion to wait for an element. Pass in 0 for waitTime.
    if (elementToWait && elementToWait.isValid() && elementToWait.isVisible() && (waitTime < 30)) {
        this.log("Waiting for element to become invisible");
        target.delay(1);
        // Pass waitTime + 1 so the counter actually advances; waitTime++ would
        // pass the old value into the recursive call and never count up.
        this.waitForElementToDismiss(elementToWait, waitTime + 1);
    }
    if (waitTime >= 30) {
        fail("Possible login failure, or login took too long. Took more than " + waitTime + " seconds");
    }
    return true;
}
Solution
I know this is an old question, but here is my solution for a situation where I have to perform a repetitive task against a variably timed event. Since UIAutomation runs on JavaScript, I use a recursive function with an empty while loop that checks the critical control state required before proceeding to the next screen. This way one never has to hard-code a delay.
// Local target is the running simulator
var target = UIATarget.localTarget();
// Get the frontmost app running in the target
var app = target.frontMostApp();
// Grab the main window of the application
var window = app.mainWindow();
// Get the array of images on the screen
var allImages = window.images();
var helpButton = window.buttons()[0];
var nextButton = window.buttons()[2];

doSomething();

function doSomething ()
{
    // Only need to tap the button for half the items in the array
    for (var i = 0; i < (allImages.length / 2); i++) {
        helpButton.tap();
    }
    // Loop while my control is NOT enabled
    while (!nextButton.isEnabled())
    {
        // wait
    }
    // Proceed to the next screen
    nextButton.tap();
    // Go again
    doSomething();
}

Cannot capture TouchEvent.UP in Blackberry

I am working on a scrollable image field. I am handling TouchEvent.DOWN, TouchEvent.MOVE, and TouchEvent.UP.
Somehow control never reaches the TouchEvent.UP section. How do I capture the UP event?
I have to find out the start and end points of the drag.
My code looks like this:
if (event == TouchEvent.DOWN && touchEvent.isValid())
{
    _xTouch = touchEvent.getX(1);
    _yTouch = touchEvent.getY(1);
}
else if (event == TouchEvent.UP && touchEvent.isValid())
{
    int x = touchEvent.getX(1);
    int y = touchEvent.getY(1);
}
else if (event == TouchEvent.MOVE && touchEvent.isValid())
{
    boolean result = scrollImage((touchEvent.getX(1) - _xTouch), (touchEvent.getY(1) - _yTouch));
    _xTouch = touchEvent.getX(1);
    _yTouch = touchEvent.getY(1);
    // If scrolling occurred, consume the touch event.
    if (result)
    {
        return true;
    }
    else
    {
        return false;
    }
}
Thanks in advance.
:)
It was a misunderstanding. I was handling the touch event in multiple layers: at the field level, the layout manager level, and the screen level.
So in this particular case the event was being consumed by the manager, and I needed it to be consumed by the field.
It was a mistake in the return value.
