Using gyroscope in an iPhone to find distance [closed] - ios

Is the gyroscope in an iPhone accurate enough to find distance in the same way that a measuring tape could? I am not looking at using GPS, as I plan on this only working for short distances. If so, how would I calculate this?

The short answer is, "no".
A more complete answer is: the gyroscope measures rotation, not distance. It is possible to use the gyroscope to help measure distance, but it would have to be coupled with some other sensor that can actually measure distance.
I will go further and mention that the accelerometer packaged with the gyroscope can't, by itself, measure distance either. Even though the accelerometer measures acceleration, which is more closely related to distance than rotation is, it needs some kind of absolute reference to give you a useful distance.
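To make that concrete, here is a minimal sketch (not something from the answer itself) of what happens if you naively double-integrate accelerometer samples to get distance; the sample rate, noise level, and uncalibrated bias are assumed purely for illustration:

    # Naive double integration of accelerometer samples while the phone sits still.
    # Noise (0.05 m/s^2) and a small uncalibrated bias (0.02 m/s^2) are assumed values.
    import random

    dt = 0.01            # 100 Hz sample rate (assumed)
    bias = 0.02          # tiny sensor bias, typical of consumer IMUs
    velocity = 0.0
    position = 0.0

    for _ in range(6000):                                  # 60 seconds of samples
        measured = bias + random.gauss(0.0, 0.05)          # true acceleration is 0
        velocity += measured * dt                          # accel -> velocity
        position += velocity * dt                          # velocity -> distance

    print("Estimated displacement after 60 s at rest: %.1f m" % position)
    # Prints tens of metres even though the phone never moved -- the error grows
    # roughly quadratically with time, which is why an absolute reference is needed.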
For an iPhone, your best bet is to use the camera, and computer vision techniques, combined with the gyroscope and accelerometer, to do integrated tracking of your stationary environment. The tricky bit will be reliably ignoring non-stationary things in your environment...

Related

How does robot do pose estimation in SLAM? [closed]

I know that in the particle filter algorithm a robot can pick the best pose given the map. But how can a robot predict its pose in SLAM, where the map is not given? Do we get data from an IMU?
SLAM is a very broad topic and there are numerous methods for performing it.
The basic idea is to estimate a map of an environment and the path a robot takes through that environment at the same time. The map is generally sensed by a sensor that measures the positions of prominent landmarks in the environment. The movements of the robot are often integrated from IMU, odometry, or GPS measurements. The setups and algorithms used to perform SLAM may vary drastically, though.
It is not even necessary to get movement data from the robot. You can think of a Kalman filter that tracks the landmark positions and the robot location in its state vector and assumes a constant-position state-transition model for the robot with no control input. Even though the transition model assumes a constant position, the measurement updates from the landmark observations are theoretically enough to give an estimate of the robot's updated position.
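As a rough illustration of that idea (a deliberately simplified 1-D sketch, not a full SLAM system; the noise values and the simulated motion below are assumptions), a linear Kalman filter whose state holds the robot position and two landmark positions, with a constant-position transition model and no control input, might look like this:

    # 1-D SLAM-flavoured Kalman filter: state = [robot, landmark_1, landmark_2].
    # The robot truly moves, but the filter's transition model is constant position;
    # relative landmark observations alone pull the robot estimate along.
    import numpy as np

    x = np.array([0.0, 0.0, 0.0])        # state estimate
    P = np.diag([1.0, 100.0, 100.0])     # landmarks start out very uncertain
    F = np.eye(3)                        # constant-position transition model
    Q = np.diag([0.5, 0.0, 0.0])         # robot may move; landmarks are static
    H = np.array([[-1.0, 1.0, 0.0],      # measurements: landmark minus robot
                  [-1.0, 0.0, 1.0]])
    R = np.diag([0.1, 0.1])              # measurement noise (assumed)

    true_robot, true_lm = 0.0, np.array([5.0, 12.0])
    rng = np.random.default_rng(0)

    for _ in range(50):
        true_robot += 0.3                             # robot actually drifts forward
        P = F @ P @ F.T + Q                           # predict: only uncertainty grows
        z = (true_lm - true_robot) + rng.normal(0.0, 0.3, size=2)
        y = z - H @ x                                 # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ y                                 # update robot and landmarks together
        P = (np.eye(3) - K @ H) @ P

    print("estimated robot:", round(x[0], 2), "true:", round(true_robot, 2))
    print("estimated landmarks:", x[1:].round(2), "true:", true_lm)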
This is taken a step further if you consider the Structure from Motion approach where a single camera that is moved through an environment is used to estimate a map of the environment from image features and at the same time estimate the path the camera takes through that map.
So as long as you don't have a specific sensor setup and algorithm in mind, the question of "how a robot does pose estimation in SLAM" is not really productive. If you have a specific question, I can maybe point you towards specific literature.
Probabilistic Robotics by Sebastian Thrun has a good introduction to probabilistic approaches to SLAM.
In cases where the map is not provided, the robot senses its surroundings with the sensors mounted on it and builds the map by marking the points it detects in three-dimensional space. For example, a vision-based robot tries to find the points of interest (features) detected in the current frame again in the next frame, and estimates its own displacement from how those features move across the image. With this concept, known as visual odometry, the robot estimates how much it has moved through the environment. Odometry information is usually obtained by fusing several different sensors rather than relying on a single one.
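A rough sketch of that feature-based step between two consecutive frames, using OpenCV (the frame file names and camera intrinsics below are placeholders, and a single camera can only recover the translation direction, not its scale):

    # Estimate relative camera motion between two frames from matched ORB features.
    import cv2
    import numpy as np

    K = np.array([[700.0,   0.0, 320.0],      # assumed camera intrinsics
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative rotation and translation direction between the two camera poses
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    print("rotation between frames:\n", R)
    print("translation direction (unit length):\n", t.ravel())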

Pupil Center is Jumping a Lot in Real Time Eye Tracking [closed]

In my eye-tracking project, the pupil center jumps around a lot and I don't see it as a fixed point.
What should I do?
My idea was to compare the pupil center between two frames with a threshold, but it doesn't fix the problem. Another issue is camera noise.
What should I do to reduce the noise?
I used the Starburst algorithm:
Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches.
Eye trackers come with two types of noise/error: variable and systematic. Variable noise is basically the dispersion around the gazed target, while a constant drift or deviation from the gazed target is the systematic error. For reference, see the following paper:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0196348
In your case, it's the variable error. Variable error arises from fatigue, involuntary eyeball vibrations, lighting, and so on. You can reduce it by filtering the gaze data. However, be careful not to smooth it too much, as that can remove the natural fluctuations inherent in eye movement.
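For instance, a short median filter over the per-frame pupil-center estimates is one simple way to do that filtering (a sketch only; the window size is an assumption you would tune, and too large a window starts to smooth away real eye movement):

    # Median-filter the pupil-center signal to suppress single-frame jumps.
    import numpy as np
    from scipy.signal import medfilt

    def smooth_pupil_centers(centers, window=5):
        """centers: N x 2 array of (x, y) pupil-center estimates, one per frame."""
        centers = np.asarray(centers, dtype=float)
        return np.column_stack([medfilt(centers[:, 0], window),
                                medfilt(centers[:, 1], window)])

    # The outlier frame (140, 120) gets suppressed; the slow drift is preserved.
    raw = [(100, 80), (101, 81), (140, 120), (102, 82), (103, 81), (104, 83), (103, 82)]
    print(smooth_pupil_centers(raw, window=5))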

Which classifier should I use for game automation? [closed]

I suppose I have to pick one from this list:
http://scikit-learn.org/stable/modules/scaling_strategies.html
As I need incremental learning.
I'm trying to get a machine-learning model to learn how to play a simple NES game. I'm going to feed the machine some basic data from the game, such as player x & y, enemy x & y, points, etc.
Based on the data mentioned above, the machine should predict which button to press.
So what classifier do you recommend for such project?
Here, let me do a browser search for you:
machine learning train computer to play video game
first hit
To summarize: this is not a problem you will solve well by choosing a classifier off a menu. The article above describes learning at its most extreme: the model trains from the screen image alone (an array of pixels). If you extract game abstractions (identify the objects on the screen), you will have a shorter training period. However, the fact remains that to play a visual game well, you will likely need the learning strategy outlined in the papers from that research: time-based input with delayed reward recognition.
This means that your learner gets its feedback from points, lives, or playing time awarded some time after a particularly good action. For instance, in Pong, you might make a two-shot combination: one to pull your opponent's paddle out of position, the second to slap the ball past him into the opposite corner. Only after the opponent fails the second defence do you get the point.
This is not a trivial problem to do well.
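As a toy illustration of learning from a delayed reward (not a solution to the NES problem; the environment and all hyperparameters below are made up), tabular Q-learning on a tiny "chain" game where the point only arrives several steps after the decisive actions looks like this:

    # Q-learning on a 6-state chain: reward of 1 only when the final state is reached,
    # so earlier actions are credited through the discounted value estimates.
    import random

    N_STATES, ACTIONS = 6, [0, 1]            # action 0 = step left, 1 = step right
    alpha, gamma, eps = 0.1, 0.9, 0.1
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def greedy(s):
        best = max(Q[s])
        return random.choice([a for a in ACTIONS if Q[s][a] == best])  # random tie-break

    for _ in range(2000):                     # episodes
        s = 0
        for _ in range(100):                  # step cap per episode
            a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
            s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == N_STATES - 1 else 0.0     # delayed reward at the end
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
            if s == N_STATES - 1:
                break

    print("preference for 'right' in each state:",
          [round(Q[s][1] - Q[s][0], 2) for s in range(N_STATES)])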

Vehicle Detection and Tracking using Lucas-Kanade [closed]

I have an image-processing project; the title clearly reveals what it is about.
Assume I have a camera on top of one of the traffic lights beside a four-way intersection in a heavily crowded city. The project should take the recorded video from that camera,
identify the cars in the scene, and track their movements.
For the tracking part I believe Lucas-Kanade with pyramids, or even Kanade-Lucas-Tomasi, would be sufficient.
But before tracking I have to identify the cars coming into the scene, and I wonder how I can do that. I mean, how can I distinguish between people/trees/buildings/... and cars?
What should I do for the identification step?
Please be kind enough to share your ideas.
Thanks.
I detected contours and filtered them by size. That worked for me on the same video available at the link posted by GiLevi (http://www.behance.net/gallery/Vehicle-Detection-Tracking-and-Counting/4057777). You could also perform background subtraction and detect blobs on the foreground mask, again filtering by size, so as to differentiate between cars, people, etc.
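A sketch of that background-subtraction-plus-size-filter idea with OpenCV (the video path and the minimum blob area are assumptions you would tune for your own camera; OpenCV 4.x is assumed for the findContours return signature):

    # Detect car-sized moving blobs by background subtraction and contour filtering.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("traffic.mp4")                 # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
    MIN_CAR_AREA = 1500                                   # pixels; tune per scene
    kernel = np.ones((3, 3), np.uint8)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > MIN_CAR_AREA:         # size filter: keep big blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(30) & 0xFF == 27:                  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

The boxes found this way could then be handed to the pyramidal Lucas-Kanade tracker (cv2.calcOpticalFlowPyrLK) that the question already proposes for the tracking step.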

How to position an object in 3D space using cameras [closed]

Is it possible to use a couple of webcams (or any cameras, for that matter) to get the x, y and z co-ordinates of an object and then track it, perhaps using OpenCV, as it moves around a room?
I'm thinking of it in relation to localising and then controlling an RC helicopter.
Yes. You need to detect points in both images simultaneously and then match the pairs that correspond to the same point in the scene. This way you will have the same scene point represented in two different coordinate spaces (camera 1 and camera 2), from which its 3D position can be triangulated.
You can start here.
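As a small sketch of the final triangulation step (assuming the two webcams have already been calibrated; the projection matrices and image coordinates below are made-up placeholders):

    # Recover the 3-D position of one matched point pair from two calibrated cameras.
    import cv2
    import numpy as np

    # 3x4 projection matrices from stereo calibration; here camera 1 sits at the
    # origin and camera 2 is offset by a 10 cm baseline, in normalized coordinates.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # The same scene point as seen by each camera (2x1 arrays, one column per point)
    pt_cam1 = np.array([[0.12], [0.05]])
    pt_cam2 = np.array([[0.02], [0.05]])

    point_4d = cv2.triangulatePoints(P1, P2, pt_cam1, pt_cam2)
    point_3d = (point_4d[:3] / point_4d[3]).ravel()       # homogeneous -> Euclidean
    print("object position (x, y, z):", point_3d)         # about (0.12, 0.05, 1.0)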
If using a depth sensor is acceptable, you can take a look at how ReconstructMe does it. Otherwise, take a look at this Google search.
