Does anyone know why almost all GIS geospatial shapefile formats store their coordinates as x=longitude, y=latitude?
Just about everywhere else, geographic coordinates are referenced latitude first, longitude second (i.e. x=latitude, y=longitude). This is generally thought to be for historical reasons, to do with which of the two measurements could be made with the higher accuracy.
Maybe it's related to the right-hand rule: http://en.m.wikipedia.org/wiki/Right-hand_rule.
I have a PDB structure that I'm analyzing which has a putative binding pocket in it for some endogenous ligand.
I'd like to determine, for the relevant amino acids within, say, 3 Å of the pocket, where the optimal hydrogen bond donor/acceptor pair for the ligand would be within the pocket. I have already done this for finding locations of optimal pi-pi stacking (e.g. find the aromatic residues, determine the plane of the ring face, go out N angstroms orthogonal to that face), but I'm struggling to work out the equivalent for hydrogen bonds.
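For reference, the ring-plane step above boils down to something like this (a minimal numpy sketch; point_above_ring is just an illustrative helper of mine, not from any particular library):

import numpy as np

def point_above_ring(ring_coords, n_angstroms=3.0):
    # ring_coords: (N, 3) array of the aromatic ring atom positions
    centroid = ring_coords.mean(axis=0)
    # the right singular vector with the smallest singular value is the best-fit plane normal
    _, _, vt = np.linalg.svd(ring_coords - centroid)
    normal = vt[-1]
    return centroid + n_angstroms * normal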
Can this be done?
Well, I'll try to write out how I would do it.
First of all, it's not clear to me whether your pocket is described by a grid that represents just the pocket surface, or by a grid that represents the whole pocket volume (let's call it the pocket cloud).
With Biopython, assuming you have a cloud described by your grid (cloud_grid_points and donor_acceptor_atoms below are placeholders for your own grid points and H-bond-capable atoms):
import numpy as np

results = []
for point in cloud_grid_points:                    # every point of the pocket cloud grid
    for atom, role in donor_acceptor_atoms:        # PDB atoms that are H-bond donors, acceptors or both
        # distance between the grid point and the candidate partner atom
        dist = np.linalg.norm(atom.coord - point)  # Biopython atoms expose .coord as a numpy array
        if 2.7 <= dist <= 3.3:                     # desired target range for an optimal donor/acceptor pair
            results.append((point, atom, role))    # atom.get_parent() gives the corresponding residue (AA)
For Biopython and distances, see here: Biopython PDB: calculate distance between an atom and a point
H bonds are generally 2.7 to 3.3 Å
I am not sure my logic is correct; the idea is to end up with a subset of your grid points, say red grid points where you could place a donor and blue ones where you could place an acceptor.
We are talking only about distances here; if you introduce the geometric factors of the bond (e.g. the donor-H-acceptor angle), I think you would also need a ligand with its own geometry.
Of course, with this approach you would waste a lot of time on unproductive computation. If you find a way to select only the surface grid points, you could first select the subset of PDB atoms that are close to the surface (within 3 Å) and then use the same approach as above; a possible way to do that pre-selection is sketched below.
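A hedged sketch of that pre-selection with Biopython's NeighborSearch (surface_points is a placeholder for however you obtain your surface grid, and protein.pdb for your structure file):

import numpy as np
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("prot", "protein.pdb")
ns = NeighborSearch(list(structure.get_atoms()))

atoms_near_surface = set()
for point in surface_points:  # placeholder: your pocket-surface grid points
    # all atoms within 3 A of this surface point (level="A" returns Atom objects)
    atoms_near_surface.update(ns.search(np.asarray(point, dtype="f"), 3.0, level="A"))

The atoms collected this way could then replace the full donor/acceptor list in the loop above.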
I'm currently working with a Livox Avia LiDAR and I'm utilising ROS to access its data stream, which is in the PointCloud2 format.
A LiDAR is mainly used to measure the distance from an object in its boresight to the LiDAR.
Does anyone know how to read off a distance from PointCloud2?
Thanks in advance
You need to be a little more specific about what you mean by "read off a distance", since there are multiple distances that could be of use when talking about PC2 data. I'm assuming you mean the distance from the sensor to a single measured point. If that's the case, you can pull more usable information out of point cloud data with the generator function read_points(): get (x, y, z) for each point and then calculate the distance directly. You can grab all points in (x, y, z) format like this:
import math
from sensor_msgs import point_cloud2

def pc2_callback(msg):
    for x, y, z in point_cloud2.read_points(msg, skip_nans=True, field_names=("x", "y", "z")):
        distance = math.sqrt(x**2 + y**2 + z**2)  # Euclidean range from the sensor origin to this point
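To feed messages into that callback, subscribe to the driver's PointCloud2 topic. A minimal sketch follows; the topic name /livox/lidar is an assumption, so check rostopic list for what your driver actually publishes:

import rospy
from sensor_msgs.msg import PointCloud2

rospy.init_node("livox_distance_reader")
rospy.Subscriber("/livox/lidar", PointCloud2, pc2_callback)  # topic name is an assumption
rospy.spin()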
I have a multi-polygon defined in geojson. I am trying to calculate its area.
When I go down what seems to be the right code path, I get very confusing results. I suspect this is because I am using some element incorrectly.
boundaries = {...valid geojson...}
field_feature_collection = RGeo::GeoJSON.decode(boundaries, geo_factory: RGeo::Geographic.simple_mercator_factory)
first_feature = field_feature_collection[0]
first_feature.geometry.area
# => 1034773.6727743163
I know the area of my feature is ~602982 square meters. I cannot figure out how to reconcile that with the result above.
I suspect I am missing some obvious projection.
Anyone have an idea on what's wrong?
This is most likely due to using the simple_mercator_factory as your geo_factory in the GeoJSON decoder. The simple_mercator_factory parses geometries from lon/lat coordinate systems but performs all of the calculations (length, area, etc.) using the Mercator projection (EPSG:3857) of the geometry.
The Mercator projection distorts areas more and more the further you get from the equator (the area scale factor is roughly 1/cos²(latitude)), which is why the area you're getting is about 1.7 times what you expect it to be.
Depending on the scope of your project, it may be more appropriate to use a different projection, especially if area is a key metric for you. If you choose a different projection, the rgeo-proj4 gem has just been updated to work with newer versions of PROJ, so you can create the appropriate transformations in your app.
Another option is to use RGeo::Geographic.spherical_factory, which uses a spherical approximation of the earth and will give you more realistic areas than the Mercator projection, but it is slower than the GEOS-backed Cartesian factories (which is what the simple_mercator_factory uses for its computations).
I am using OpenCV's solvePnPRansac function to estimate the pose of my camera given a pointcloud made from tracked features. My pipeline consists of multiple cameras where I form the point cloud from matched features between two cameras, and use that as a reference to estimate the pose of one of the cameras as it starts moving. I have tested this in multiple settings and it works as long as there are enough features to track while the camera is in motion.
Strangely, during a test I did today, I encountered a failure case where solvePnP would just return junk values all the time. What's confusing here is that in this data set my point cloud is much denser and reconstructed quite accurately from the two views, and the number of tracked points (currently visible features vs. features in the point cloud) at any given time was much higher than what I usually have, so theoretically it should have been a breeze for solvePnP, yet it fails terribly.
I tried CV_ITERATIVE, CV_EPNP and even the non-RANSAC version of solvePnP. I was just wondering if I am missing something basic here? The scene I am looking at can be seen in these images (image 1 is the scene and the feature matches between the two perspectives, image 2 is the point cloud for reference).
The part of the code doing PnP is pretty simple. If P3D is the array of tracked 3D points and P2D is the corresponding set of image points:
solvePnPRansac(P3D, P2D, K, d, R, T, false, 500, 2.0, 100, noArray(), CV_ITERATIVE);
EDIT: I should also mention that my reference point cloud was obtained with a baseline of 8 feet between the cameras, whereas the building I am looking at was probably around 100 feet away. Could the resulting lack of disparity cause issues as well?
Hi (sorry for my English). I'm working on a university project in which I need to use the MBA (Multilevel B-Spline Approximation) algorithm to get some points (control points) of an image to use in other operations.
I've been reading a lot of papers about this algorithm, and I think I understand it, but I can't manage to write the code.
The idea is: read an image, process it (OpenCV), then get the control points of the image and use those points.
So the problem here is:
The algorithm takes a set of points {(x, y, z)}; this set of points is approximated by a surface generated from the control points obtained by MBA. The set of points {(x, y, z)} represents the data we need to approximate (the image).
So, the image is in cv::Mat format; how can I transform it into an ordinary array so that I can simply access and manipulate the data?
Here are a couple of papers with an explanation of the method, plus a MATLAB implementation:
(Paper) REGULARIZED MULTILEVEL B-SPLINE REGISTRATION
(Paper)Scattered Data Interpolation with Multilevel B-splines
(Matlab)MBA
If someone can help, a guideline, an idea, or anything would be appreciated.
Thanks in advance.
EDIT: I finally wrote the algorithm in C++ using Armadillo and OpenCV.
I'm using Armadillo, a C++ linear algebra library, to work with the matrices for the algorithm.