I'm trying to build a lightweight antenna tracker with two servos. For mechanical reasons, I'm first mounting servo1 on a base so that it tilts forward/backward, then mounting servo2 on it, rotated 90° so it can tilt left/right.
I can basically use the first servo to select one of the great circles that pass through az=0°/alt=0° and az=180°/alt=0°, and use the second servo to move along the chosen great circle. This way, I should be able to point at the entire upper hemisphere, even though I might need to reposition the antenna when crossing the midline (the servos only have 180 degrees of movement).
I'm trying to find the function that maps az/alt to the tilt/tilt servo angles. I suspect it should be similar to how equatorial telescope mounts work, but I couldn't find a good reference on how to do it, nor do I trust my own math.
I found these astronomy lecture notes vaguely helpful: http://ircamera.as.arizona.edu/Astr_518/ametry.pdf (especially pages 22/23 on the ecliptic coordinate system), but I think the problem solved there is slightly different.
This seems like a standard kinematics problem; it bothers me that I can't figure it out or even find online resources. I'd be super thankful for any pointers. Happy to give more details on the servo setup.
I think I figured this out on math.stackexchange.com:
https://math.stackexchange.com/questions/3799191/direct-conversion-from-az-el-to-ecliptic-coordinates
Short answer (in Python notation; mind the parameter order of atan2 in your language):
$\epsilon = \operatorname{atan2}(\sin\delta,\ \cos\delta\cdot\sin\alpha)$
$\lambda = \arccos(\cos\alpha\cdot\cos\delta)$
where $\alpha$ is azimuth, $\delta$ is elevation (altitude), $\epsilon$ is the angle for the first servo and $\lambda$ the angle for the second. This seems to work for all values of $\alpha$ and for values of $\delta$ in $[0,\pi/2]$.
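In Python that drops straight into code; a minimal sketch (function and variable names are mine):

    import math

    def az_alt_to_servo(az, alt):
        """Map azimuth/altitude (radians) to the two servo angles (radians).

        Servo 1 (epsilon) picks the great circle through az=0/alt=0 and
        az=180/alt=0; servo 2 (lambda) moves along that circle.
        """
        epsilon = math.atan2(math.sin(alt), math.cos(alt) * math.sin(az))
        lam = math.acos(math.cos(az) * math.cos(alt))
        return epsilon, lam

    # Sanity check: pointing straight up should put both servos at 90 degrees.
    print([round(math.degrees(a), 1) for a in az_alt_to_servo(0.0, math.pi / 2)])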
OK, I've done some reading around the subject and have an idea of how I'd tackle my problem, but I want to find out if this is the most efficient way, or if I'm missing something simple.
I have a line diagram of a section of railway that I'd like to plot the user's location onto (the user being someone on a train moving up/down the railway).
Now, I initially went down the route of geo-referencing, but quickly realised this probably wasn't the way to go, as my image is not a true reflection of the area, and I want the line diagram to be what the user sees.
OK, my thought process of how I'll tackle it:
I know the physical area, so I could extract the coordinates along the railway every x meters (my line diagram has a resolution of around 5 m) and stick them into an array. Can anyone suggest a tool to do this?!
Allocate my line diagram a start and end, then match the image coordinates with the physical coordinates for the entire line.
Read in the user's position and update where to draw it based on the closest match in the array (sketched below)?
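Something like this is what I have in mind for the lookup (a rough Python sketch; all names and the data layout are hypothetical):

    import math

    # physical_coords[i] is the (lat, lon) sampled every ~5 m along the line;
    # image_coords[i] is the matching (x, y) pixel on the line diagram.
    def closest_index(user_lat, user_lon, physical_coords):
        """Index of the sampled point nearest the user (equirectangular
        approximation, which is fine at railway scales)."""
        best_i, best_d2 = 0, float("inf")
        for i, (lat, lon) in enumerate(physical_coords):
            dx = (lon - user_lon) * math.cos(math.radians(user_lat))
            dy = lat - user_lat
            d2 = dx * dx + dy * dy
            if d2 < best_d2:
                best_i, best_d2 = i, d2
        return best_i

    def position_on_diagram(user_lat, user_lon, physical_coords, image_coords):
        return image_coords[closest_index(user_lat, user_lon, physical_coords)]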
Does this sound doable, and would it give me decent results?
If you have more sophisticated answers, please do share.
It sounds reasonable in general. As the user is supposed to be on a train, a simpler option may work: just keep track of the physical distance moved and use that as a percentage of the distance along the line. This is a lot simpler to manage and could be backed up with some coordinate checkpoints to make sure you don't accumulate a drifting error. I'd aim for the simpler implementation if you can.
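For instance, a minimal Python sketch of the percentage idea (names are hypothetical; line_px is the diagram's polyline in pixels):

    import math

    def point_at_fraction(line_px, fraction):
        """Pixel (x, y) at the given fraction (0..1) of the polyline's
        drawn length; line_px is a list of (x, y) vertices."""
        seg_lens = [math.dist(a, b) for a, b in zip(line_px, line_px[1:])]
        target = max(0.0, min(1.0, fraction)) * sum(seg_lens)
        for (x0, y0), (x1, y1), length in zip(line_px, line_px[1:], seg_lens):
            if target <= length and length > 0:
                t = target / length
                return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            target -= length
        return line_px[-1]

    # distance_moved_m would come from the train's odometry or GPS track:
    # pixel = point_at_fraction(diagram_polyline, distance_moved_m / total_length_m)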
I'm trying to add altitude to the pARk sample code, so the label appears near the top of the place of interest.
The first thing I did was add an altitude property to PlaceOfInterest.h and fill it in the ViewController, with the altitude for each POI given in meters.
Then, in ARView, I made the following changes:
- Added the altitude to the LLA-to-ECEF conversion of the device's location: latLonToEcef(location.coordinate.latitude, location.coordinate.longitude, location.altitude, &myX, &myY, &myZ);
- Did the same for the POI's location: latLonToEcef(poi.location.coordinate.latitude, poi.location.coordinate.longitude, poi.location.altitude, &poiX, &poiY, &poiZ);
- Added the up (U) component to the placesOfInterestCoordinates 4D vector: placesOfInterestCoordinates[i][2] = (float)u;
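For reference, the standard WGS84 LLA-to-ECEF conversion that latLonToEcef presumably implements looks like this (my own Python sketch, not the sample's C code):

    import math

    A = 6378137.0           # WGS84 semi-major axis, meters
    E2 = 6.69437999014e-3   # WGS84 first eccentricity squared

    def lla_to_ecef(lat_deg, lon_deg, alt_m):
        """Geodetic latitude/longitude (degrees) and altitude (meters)
        to ECEF x, y, z in meters."""
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
        return x, y, z

If the altitude flows through both conversions, the ENU up component of a nearby POI should change by roughly the altitude difference in meters, so a label for a tall landmark should visibly move.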
I thought that was it, pretty easy. I ran the project and... the labels lay dead on the floor. Comparing a POI's label location in the app with and without my changes above, they are in pretty much the same places, even though adding the altitude should change the values from the ECEF and ENU conversions a bit.
Before looking at the code, I had little knowledge about perspective projection, all the matrices used, ECEF, ENU, etc. I've been reading about all these concepts on Wikipedia to be able to understand the code, plus reading other questions related to this sample code here on SO, like
Use of maths in the Apple pARk sample code
and also similar questions for Android... but I still can't figure out why the altitude doesn't show on screen.
I tried multiplying the altitude by 5, latLonToEcef(location.coordinate.latitude, location.coordinate.longitude, 5*location.altitude, &myX, &myY, &myZ);, to see if there was a noticeable change in the y position of the POI. And, while there was some, it was very small, the label still being far from the real altitude.
So if somebody could give me some hints about what I'm missing and why the altitude isn't showing on screen, I'd appreciate it very much.
I uploaded my attempt to a repository here, https://github.com/Tovkal/pARk-with-altitude, in case you want to see the code with the changes. If you run it, you should change the places of interest in the ViewController to some places near you.
Thank you!
I am currently working with the Xtion Pro Live using the OpenNI library.
The problem is that the Xtion must be placed vertically (along a wall), and in this position the user calibration always fails, so it is impossible to get the skeleton info.
So, I would like to know how to fix this issue. I suppose there is something I didn't understand about GetSkeletonCap().RequestCalibration() or the SampleConfig.xml file, but after a lot of research I am still stuck.
Try moving the user, followed by the camera, in a 360-degree circle around the subject, keeping the vertical positioning of the camera the same all the way through. It may detect the optimal angle on the depth sensor. We did this twice with the Kinect and it worked.
Also make sure the room is well lit.
I have a requirement, described below:
- Already have a floor plan map image
- First, detect the current location on the floor
- Then select the destination location using the floor plan map image
- The application should then provide direction & distance for that source-to-destination path
This is like how Google directions work, but it is required for an in-house map.
For example,
- Current position of the user: at his desk
- Destination: Meeting Room #11
- The application should provide direction and distance updates on the map/floor plan image.
Any kind of suggestion or help would be great.
Thanks in advance
A couple of points...
You could create various audio files and play them as waypoints based on routing. Same principle as 'turn right at the next light'.
You definitely want to set your accuracy to kCLLocationAccuracyBest, but this will still probably only get you accuracy of around +/- 10 meters at best.
Do a floor plan overlay using MKOverlayView.
If you are indoors, the iPhone uses cell towers or Wi-Fi for a location fix. This might be a problem for you: if you are looking to map multiple floors, only GPS can give you altitude readings (ground floor, second floor, etc.).
I don't want to pour cold water on your idea, but I have not heard of anyone successfully doing an indoor navigation app on an iPhone using standard stuff. If you really want to move forward on this project, your best accuracy might come from using indoor Bluetooth transmitters as navigational beacons...?
What you want is path planning on the map, is that it? If so, there are lots of algorithms you can use. You can choose a block size based on your map and resolution needs, divide the map into blocks, and mark each block as navigable or not. Then, starting from the first block and heading in the direction of the destination block, check whether the neighbouring block is blocked, and keep going until you reach the destination block (or determine it is not reachable).
That's a pseudo-implementation; you have several options for doing it, if I understand your needs correctly.
(I don't know your hardware. As said by others, with plain GPS and indoor navigation, assuming a 15 m resolution is a good balance between an optimistic and a pessimistic signal estimate. If it's for robot navigation, GPS is not a good approach, but the algorithm still is.)
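As a concrete example, here is a minimal Python sketch of the block idea; I am using breadth-first search instead of the purely greedy walk described above, since BFS also gives you a shortest path in grid steps (all names are hypothetical):

    from collections import deque

    def bfs_path(grid, start, goal):
        """Shortest path on a grid of blocks; grid[r][c] is True when the
        block is navigable. Returns a list of (row, col) cells, or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:   # walk the chain back to the start
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] and nxt not in prev):
                    prev[nxt] = cell
                    queue.append(nxt)
        return None  # destination block is not reachable

The path length multiplied by the block size gives the distance readout, and consecutive cells give the turn-by-turn directions.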
I'm trying to develop a mini "Around Me"-like app using the camera, compass and location. I would like to display images of places on my screen.
For the moment I have my location and my orientation from the compass. I would like to know how I can determine the position of the place I want to display.
Thanks for your help ;)
Once you have the relative distance and bearing, which you can determine from two points in the same coordinate space using the algorithms found on this page, figuring out where a known coordinate is with respect to a known viewpoint is basically a perspective projection; the math is outlined in this Wikipedia article. The rotation of the camera is given by the compass, the tilt by the accelerometer (and the position, of course, by GPS).
I'm trying to find a better document; there are a couple of extra things to consider, like the camera parameters, etc., but this is a good starting point.
If it's too involved (like if you're not comfortable with rotation matrices), we can break it right down to simple trig.
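For the distance-and-bearing step, a minimal Python sketch using the standard haversine and initial-bearing formulas:

    import math

    R_EARTH_M = 6371000.0  # mean Earth radius

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (meters) and initial bearing (degrees
        clockwise from north) from point 1 to point 2; inputs in degrees."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        # Haversine distance
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        dist = 2 * R_EARTH_M * math.asin(math.sqrt(a))
        # Initial bearing
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        return dist, bearing

Subtracting the compass heading from that bearing tells you how far off-axis the place is, which is exactly what the perspective projection step consumes.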
The code in the iPhone ARKit project does this, and quite a bit more. While you may not be able to use their complete library, it is a great reference on the subject of augmented reality.
Check out 3DAR, it lets you add an AR view to a MKMapView app very easily. There's a video tutorial on this process, as well as some sample code, on the 3DAR site, www.3dar.us
You can create a location-based AR app in Junaio. It's an AR browser, free to use and deploy in (as long as it's not a custom app and it stays within Junaio).