I am not able to detect a surface using ARKit. I have also attached an image of the surface.
How can I accurately detect a surface that has little texture?
To successfully detect a horizontal or vertical plane using ARKit or RealityKit, you need:
surfaces with distinct textures;
a well-lit environment;
to physically move around the room.
Plane detection struggles with:
surfaces with "poor" or repetitive textures;
glare-prone, reflective, or refractive surfaces;
transparent objects (like glass tables).
However, to overcome most of these limitations, you can use an iPad Pro with a LiDAR scanner. The 2020 iPad Pro detects surfaces at unprecedented speed (its scanner measures light's time of flight on the scale of nanoseconds) and works even in a poorly lit room.
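As a starting point, here is a minimal Swift sketch of a session configuration with plane detection enabled (and, on LiDAR devices, mesh reconstruction); sceneView is assumed to be your ARSCNView:

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]

// On LiDAR devices (e.g. iPad Pro 2020), scene reconstruction copes much better
// with low-texture surfaces and poor lighting.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

sceneView.session.run(configuration)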
I want to build a simple iOS application with iBeacons.
I have four iBeacons, and my goal is to calculate my position in a room using trilateration. I want to show the room on the display, set the fixed positions of the iBeacons, and then calculate my position and show it on the display.
My problem is that I don't know where to start.
Even though iBeacons are relatively simple to use, trilateration with them is far from simple. The standard is meant for determining your current location zone relative to the nearest beacon. The zones are: immediate (0-0.5 m), near (0.5-2 m), far (2-20 m). Due to the instability of the signal, it is difficult to obtain more precise location data.
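For reference, here is a minimal Swift sketch of beacon ranging with Core Location; the proximity UUID below is a placeholder, so use your own beacons' UUID. The proximity and accuracy values it prints are exactly the coarse, jittery data described above:

import CoreLocation

final class BeaconRanger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID; replace with your beacons' proximity UUID.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "myBeacons")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // accuracy is a rough distance estimate in meters; it fluctuates heavily,
            // which is why precise trilateration from four beacons is so hard.
            print(beacon.major, beacon.minor, beacon.proximity.rawValue, beacon.accuracy)
        }
    }
}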
That being said, there are a couple of companies I know of that have worked on this problem: Estimote (Estimote Indoor SDK) and Steerpath. Looking at those two solutions could help you get started with your project.
I am currently looking for guides/samples on how to implement OS X Yosemite's Markup-like features, i.e., automatically detecting whether a user's freehand drawing is intended to be a circle, square, or triangle.
Please refer to the image below: the left side represents the user's freehand drawing, and the right side shows the auto-detected shapes substituted by OS X Markup.
Recognizing shapes from gestures is a subject of ongoing research. There is a class of algorithms called "$-family recognizers" that you might want to look at. The original algorithm is the "$1 Recognizer", which is worth a read.
It is not that difficult to implement such a recognizer, as long as you limit yourself to a specific class of shapes. The $1 recognizer (if I recall correctly) only works on a continuous path (so an "X" would not work, because it requires two strokes). However, later work has extended the $1 recognizer to non-continuous cases.
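To make the idea concrete, here is a rough Swift sketch of the $1 core (resample, normalize, compare). The rotation-alignment step from the paper is omitted for brevity, so this simplified variant is rotation-sensitive, and it assumes a stroke of nonzero length:

import CoreGraphics

func dist(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

// Resample a stroke into n evenly spaced points along its path.
func resample(_ stroke: [CGPoint], n: Int) -> [CGPoint] {
    var points = stroke
    let interval = zip(points, points.dropFirst()).reduce(0) { $0 + dist($1.0, $1.1) } / CGFloat(n - 1)
    var result = [points[0]]
    var accumulated: CGFloat = 0
    var i = 1
    while i < points.count {
        let d = dist(points[i - 1], points[i])
        if accumulated + d >= interval {
            let t = (interval - accumulated) / d
            let q = CGPoint(x: points[i - 1].x + t * (points[i].x - points[i - 1].x),
                            y: points[i - 1].y + t * (points[i].y - points[i - 1].y))
            result.append(q)
            points.insert(q, at: i) // q becomes the start of the next segment
            accumulated = 0
        } else {
            accumulated += d
        }
        i += 1
    }
    while result.count < n { result.append(points[points.count - 1]) } // float rounding
    return result
}

// Scale to a unit box, then translate the centroid to the origin.
func normalize(_ points: [CGPoint]) -> [CGPoint] {
    let xs = points.map { $0.x }, ys = points.map { $0.y }
    let w = max(xs.max()! - xs.min()!, 0.0001), h = max(ys.max()! - ys.min()!, 0.0001)
    let scaled = points.map { CGPoint(x: $0.x / w, y: $0.y / h) }
    let cx = scaled.reduce(0) { $0 + $1.x } / CGFloat(scaled.count)
    let cy = scaled.reduce(0) { $0 + $1.y } / CGFloat(scaled.count)
    return scaled.map { CGPoint(x: $0.x - cx, y: $0.y - cy) }
}

// Average point-to-point distance; the template with the smallest score wins.
func score(_ candidate: [CGPoint], _ template: [CGPoint]) -> CGFloat {
    return zip(candidate, template).reduce(0) { $0 + dist($1.0, $1.1) } / CGFloat(candidate.count)
}

Resample both the candidate stroke and each shape template (circle, square, triangle) to the same point count (the paper uses 64), normalize both, and pick the template with the lowest score.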
I want to create a kite-flying game. Suppose we have two buttons, Up and Down. When we press the Up button, the kite flies away from the screen, and it should realistically appear smaller as it keeps going away on a continuous press. If we press Down, it should come back the same way and appear bigger.
How can I show the kite's depth on screen with SpriteKit or any other framework?
Thanks,
Rajender
You could try adjusting the scale of the SKNode when the user presses the up button. You'll have to detect the specific touch or gesture that you're looking for, but you could do something like:
- (void)handlePressDown:(UIGestureRecognizer *)recognizer
{
    // ... detect the specific touch/gesture here
    _kiteScaleAmount++;            // assumed CGFloat ivar tracking the current scale
    [kite setScale:_kiteScaleAmount];
}
More info is available in Apple's SKNode documentation.
Since you included the [scenekit] tag in your question, perhaps you're also looking for 3D solutions?
You can use SK3DNode to add 3D content—an embedded SceneKit scene—to your otherwise 2D SpriteKit game. So, if you have a 3D model of a kite, you can build a SceneKit scene around that, then attach it to an SK3DNode in your game. Then, to make the kite get farther and nearer to the viewer, adjust the camera in the SceneKit scene.
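A rough sketch of that setup, where "kite.scn" is a hypothetical asset name and the viewport size is arbitrary:

import SpriteKit
import SceneKit

// Build a SceneKit scene around the 3D kite model.
let kiteScene = SCNScene(named: "kite.scn")!

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 10)
kiteScene.rootNode.addChildNode(cameraNode)

// Embed the SceneKit scene in the otherwise 2D SpriteKit game.
let kiteNode3D = SK3DNode(viewportSize: CGSize(width: 300, height: 300))
kiteNode3D.scnScene = kiteScene
kiteNode3D.pointOfView = cameraNode
// addChild(kiteNode3D) from within your SKScene

// Pressing Up: pull the camera back so the kite appears farther away.
cameraNode.position.z += 2
// Pressing Down: move it closer so the kite appears bigger.
// cameraNode.position.z -= 2

Because the kite is genuinely rendered in 3D, perspective handles the shrinking and growing for you instead of a manual scale animation.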
I'm new here. I'm learning Xcode and Swift by myself, and things are going well.
I wanted to ask: what would be the best (and most accurate) way to measure short distances, say up to 10 meters, inside a building? I can't use GPS there.
I want to get results in millimeters or centimeters.
Thank you for your time, guys.
Calculating distances from device movement using the gyroscope, accelerometer, and other internal sensors is practically impossible: double-integrating noisy accelerometer data makes the position estimate drift within seconds.
There are a few reasons why; see this link for an explanation:
https://www.youtube.com/watch?v=_q_8d0E3tDk&list=UUj_UmpoD8Ph_EcyN_xEXrUQ&spfreload=10
Use iBeacon and the Core Location framework. Here is a video from the last WWDC that touches on this subject:
Taking Core Location Indoors
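If you just need to know when the device enters a beacon's range (as opposed to ranging, shown in the trilateration answer above), a minimal region-monitoring sketch looks like this; the UUID is a placeholder. Note that even beacon-based estimates jitter on the order of tens of centimeters to meters, so millimeter or centimeter precision is not realistic with this hardware:

import CoreLocation

final class RoomMonitor: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization() // monitoring requires "Always" authorization
        // Placeholder UUID; use your own beacons' proximity UUID.
        let uuid = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!
        manager.startMonitoring(for: CLBeaconRegion(proximityUUID: uuid, identifier: "room"))
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("Entered beacon region:", region.identifier)
    }
}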
I am making an application that applies different effects to photos using the GPUImage framework by Brad Larson. I want to add an X-ray effect filter to the GPUImage app. Any pointers will be appreciated.
You want something like the GPUImageColorInvertFilter.
If that doesn't produce the exact effect you want, you could create a custom filter based on it: have your fragment shader first convert to luminance, then apply a greenish tint scaled by the inverse of each pixel's luminance. That would produce the effect you show above.
I'll leave the coding of such a shader as an exercise for the reader.
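For those who want a starting point anyway, here is one possible sketch. The shader below is an assumption of one way to get the look (the 0.6 green weighting is a taste value, not a canonical X-ray formula); the initializer is initWithFragmentShaderFromString: in Objective-C:

import GPUImage

let xrayShaderString = """
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
    // Convert to luminance, then invert it.
    highp float luminance = dot(color.rgb, vec3(0.2125, 0.7154, 0.0721));
    highp float inverted = 1.0 - luminance;
    // Greenish tint scaled by the inverted luminance.
    gl_FragColor = vec4(0.6 * inverted, inverted, 0.6 * inverted, color.a);
}
"""

let xrayFilter = GPUImageFilter(fragmentShaderFrom: xrayShaderString)
// Chain it like any other GPUImage filter, e.g. picture -> xrayFilter -> view.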