I have a multi-polygon defined in geojson. I am trying to calculate its area.
When I step through what seems to be the right code path, I get very confusing results. I suspect this is because I am using some element incorrectly.
boundaries = {...valid geojson...}
field_feature_collection = RGeo::GeoJSON.decode(boundaries, geo_factory: RGeo::Geographic.simple_mercator_factory)
first_feature = field_feature_collection[0]
first_feature.geometry.area
# => 1034773.6727743163
I know the area of my feature is ~602982 square meters. I cannot figure out how to reconcile that with the result above.
I suspect I am missing some obvious projection.
Anyone have an idea on what's wrong?
This is most likely due to using the simple_mercator_factory as your geo_factory in the GeoJSON decoder. The simple_mercator_factory parses geometries from lon/lat coordinate systems but performs all of the calculations (length, area, etc.) using the Mercator projection (EPSG:3857) of the geometry.
The Mercator projection gets more distorted the further you get from the equator, which is why the area you're getting comes back inflated (here by a factor of roughly 1.7 relative to the area you expect).
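As a quick back-of-the-envelope check (plain Python, nothing RGeo-specific): Web Mercator scales areas by roughly 1/cos(lat)^2, so you can see what latitude would explain the ratio between the reported area and the known one.

import math

# Figures taken from the question above.
projected_area = 1034773.6727743163  # area reported via simple_mercator_factory
true_area = 602982.0                 # known area in square metres

# Web Mercator (EPSG:3857) inflates areas by about sec(lat)^2.
ratio = projected_area / true_area
implied_lat = math.degrees(math.acos(1.0 / math.sqrt(ratio)))

print(f"inflation factor: {ratio:.2f}")        # ~1.72
print(f"implied latitude: {implied_lat:.1f}")  # ~40 degrees

If your field really does sit at a mid-latitude around 40 degrees, that is consistent with the Mercator distortion accounting for the whole discrepancy.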
Depending on the scope of your project, it may be more appropriate to use a different projection, especially if area is a key metric for you. If you choose a different projection, the rgeo-proj4 gem has just been updated to work with newer versions of PROJ, so you can create the appropriate transformations in your app.
Another option is to use RGeo::Geographic.spherical_factory, which uses a spherical approximation of the earth and will give you more realistic areas than the Mercator projection, but it is slower than the GEOS-backed Cartesian factories (which is what simple_mercator_factory uses).
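If you want an independent sanity check on the true area outside of Ruby, an ellipsoidal (geodesic) computation, which is roughly what spherical_factory approximates, can be done with pyproj's Geod. A minimal sketch with made-up coordinates (substitute the outer ring of your first feature):

from pyproj import Geod

# Placeholder lon/lat ring purely for illustration.
lons = [-93.010, -92.998, -92.998, -93.010, -93.010]
lats = [41.000, 41.000, 41.006, 41.006, 41.000]

geod = Geod(ellps="WGS84")
area_m2, perimeter_m = geod.polygon_area_perimeter(lons, lats)

# The area is signed by ring orientation, so take the absolute value.
print(abs(area_m2), "square metres")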
I have a PDB structure that I'm analyzing which has a putative binding pocket in it for some endogenous ligand.
I'd like to determine, for the relevant amino acids within, say, 3 Å of the pocket, where the optimal hydrogen bond donor/acceptor pair for the ligand would be within the pocket. I have already done this for determining locations of optimal pi-pi stacking (e.g. find aromatic residues, determine the plane of the ring face, go out N angstroms orthogonal to the face), but I'm struggling to work this out for hydrogen bonds.
Can this be done?
Well, I'll try to write out how I would approach it.
First of all, it's not clear to me whether your pocket is described by a grid that represents the pocket surface, or by a grid that represents the entire pocket space (let's call it the pocket cloud).
With Biopython, assuming you have a cloud described by your grid:
for every point in the cloud-grid points:
    for every PDB atom that is an H-bond donor or acceptor:
        if the distance is in the desired target range (~3 A, the distance
           for an optimal donor or acceptor pair):
            select the corresponding AA / atom / point
            add the point to your result list as donor, acceptor, or both,
            together with the selected atom / AA
        else:
            pass
For computing distances with Biopython, see here: Biopython PDB: calculate distance between an atom and a point
H bonds are generally 2.7 to 3.3 Å
I am not sure my logic is correct; the idea is to end up with a subset of your grid points, where red grid points mark where you could place a donor and blue ones mark where you could place an acceptor.
We are talking only about distances here; if you introduce the geometric factors of the bond, I think you would also need a ligand with its own geometry.
Of course, with this approach you would waste a lot of time on unproductive computation. If you find a way to select only the grid surface points, you could select the subset of PDB atoms that are close to the surface (within 3 Å) and then use the same approach as above. A minimal runnable sketch of the distance-only loop is below.
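Here is a rough Biopython version of that loop (a sketch, not a polished implementation: the PDB filename and grid_points are placeholders, and treating every polar N/O atom as a donor/acceptor is a crude simplification):

import numpy as np
from Bio.PDB import PDBParser, NeighborSearch

# "pocket.pdb" is a placeholder; grid_points is assumed to be an (N, 3) array
# of xyz coordinates describing your pocket cloud.
structure = PDBParser(QUIET=True).get_structure("prot", "pocket.pdb")
atoms = list(structure.get_atoms())

# Crude donor/acceptor proxy: polar N and O atoms (real chemistry needs more care).
polar = [a for a in atoms if a.element in ("N", "O")]
ns = NeighborSearch(polar)

hits = []
for point in grid_points:
    # Atoms within the upper end of the typical H-bond range (~3.3 A) of this point.
    for atom in ns.search(np.array(point), 3.3):
        if 2.7 <= np.linalg.norm(atom.coord - point) <= 3.3:
            hits.append((tuple(point), atom.get_parent().get_resname(), atom.get_name()))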
A humanoid robot can be described by a kinematic tree in a URDF file. I found that the order of the elements in the generalized coordinate vectors of the rigid body tree and the multi-body plant are different. There is a code snippet online showing the way to achieve the mapping.
# q_rbt = X*q_mbp
# rigid_body_tree is an instance of RigidBodyTree()
# multi_body_plant is an instance of MultibodyPlant()
B_rbt = rigid_body_tree.B
B_mbp = multi_body_plant.MakeActuationMatrix()
X = np.dot(B_rbt[6:,:],B_mbp[6:,:].T)
However, RigidBodyTree has been deprecated in the new Drake. Hence, how can I achieve the mapping now? In addition, I am curious about why Drake does not use the same order for the generalized coordinate vectors.
You might like the workflow I used in this littledog example. I have a PR in flight to enable that workflow directly in Drake; you can track the progress with this issue.
In general the recommendation, and I would recommend this even if we hadn't implemented it a particular way in Drake, is that if you have something as complicated as a humanoid, don't try to work with the vectors based on indices. Find a way to access the elements via their names, or in Drake via their joint accessors. Working with the raw vector is very error prone.
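As a concrete illustration of the access-by-name advice, here is a hedged pydrake sketch (the URDF path and the joint name are placeholders, not from the original post):

from pydrake.multibody.plant import MultibodyPlant
from pydrake.multibody.parsing import Parser
from pydrake.multibody.tree import JointIndex

# "humanoid.urdf" and "left_knee" are placeholder names for illustration.
plant = MultibodyPlant(time_step=0.0)
Parser(plant).AddModels("humanoid.urdf")  # AddModels is the newer Parser API
plant.Finalize()

# Map each joint name to its slice of the generalized position vector q,
# so nothing downstream depends on how MultibodyPlant orders q internally.
q_slice = {}
for i in range(plant.num_joints()):
    joint = plant.get_joint(JointIndex(i))
    start = joint.position_start()
    q_slice[joint.name()] = slice(start, start + joint.num_positions())

context = plant.CreateDefaultContext()
q = plant.GetPositions(context)
print(q[q_slice["left_knee"]])  # access by name instead of by raw index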
I am using OpenCV's solvePnPRansac function to estimate the pose of my camera given a pointcloud made from tracked features. My pipeline consists of multiple cameras where I form the point cloud from matched features between two cameras, and use that as a reference to estimate the pose of one of the cameras as it starts moving. I have tested this in multiple settings and it works as long as there are enough features to track while the camera is in motion.
Strangely, during a test I did today, I encountered a failure case where solvePnP would just return junk values all the time. What's confusing here is that in this data set my point cloud is much denser and reconstructed pretty accurately from the two views, and the number of tracked points (currently visible features vs. features in the point cloud) at any given time was much higher than what I usually have, so in theory it should have been a breeze for solvePnP, yet it fails terribly.
I tried with CV_ITERATIVE, CV_EPNP and even the non-RANSAC version of solvePnP. I was just wondering if I am missing something basic here. The scene I am looking at can be seen in these images (image 1 is the scene and the feature matches between the two perspectives, image 2 is the point cloud for reference).
The part of the code doing PnP is pretty simple. If P3D is the array of tracked 3D points and P2D is the corresponding set of image points:
solvePnPRansac(P3D, P2D, K, d, R, T, false, 500, 2.0, 100, noArray(), CV_ITERATIVE);
EDIT: I should also mention that my reference point cloud was obtained with a baseline of 8 feet between the cameras, whereas the building I am looking at was probably about 100 feet away. Could the resulting lack of disparity be causing issues as well?
I have two plots in MATLAB in which I have plotted x and y coordinates. Given these two plots, is it possible to compare whether they match? Can I obtain numbers that tell how well they match?
Note that the graphs could possibly be shifted right/left/up/down in the plot (turning the axis off is not a problem), scaled, or rotated (I would also like to know if one is skewed, but for now that is not a must).
It will not need to test color elements, color inversion and any other complicated graphic properties than basic ones mentioned above.
If matlab is not enough, I would welcome other tools.
Note that I cannot simply take the absolute difference of the x- and y-values. I could average the absolute x-differences and the absolute y-differences separately and then combine them, but I need a single combined error. I need to compare the graphs as a whole.
Graphs to be compared.
EDIT
Direct correlation does not work for me.
For a different set of data I got a 0.94 correlation. This is very high for the given data, considering that one data set fluctuates less, and faster, than the other.
You can access the plotted data with this code
x = 10:100;
y = log10(x);
plot(x,y);
h = gcf;
axesObjs = get(h, 'Children'); %axes handles
dataObjs = get(axesObjs, 'Children'); %handles to low-level graphics objects in axes
objTypes = get(dataObjs, 'Type'); %type of low-level graphics object
xdata = get(dataObjs, 'XData'); %data from low-level graphics objects
ydata = get(dataObjs, 'YData');
Then you can do a correlation between xdata and ydata, for example, or any other kind of comparison. The coefficient R indicates how strongly the two match (1 being a perfect linear match).
[R,P] = corrcoef(xdata, ydata);
You might also be interested in comparing the axis limits of the current axes. For example
R = ( diff(get(h_ax1,'XLim')) / diff(get(h_ax2,'XLim')) ) + ...
( diff(get(h_ax1,'YLim')) / diff(get(h_ax2,'YLim')) )
where h_ax1 is the handle of the first axes and h_ax2 that of the second. Here you get a comparison between the XLim and YLim values. The possible comparisons across different gca properties are really vast, though.
EDIT
To compare two sets of points, you may use metrics other than an analytical relationship. I am thinking of distances or divergences such as the Hausdorff distance. A script is available here on MATLAB Central. I have used such a distance to compare letter shapes. On the Wikipedia page, the 'Applications' section is of interest (edge detection for thick shapes, though it may not be pertinent to your particular problem).
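Since you mentioned that tools other than MATLAB are welcome, the same metric is also available in SciPy. A minimal sketch with synthetic curves (replace them with your own N-by-2 arrays of plotted x/y points):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Synthetic (x, y) curves purely for illustration.
x = np.linspace(0, 1, 200)
curve_a = np.column_stack((x, np.sin(6 * x)))
curve_b = np.column_stack((x, np.sin(6 * x) + 0.05))

# Symmetric Hausdorff distance: the larger of the two directed distances.
d_ab = directed_hausdorff(curve_a, curve_b)[0]
d_ba = directed_hausdorff(curve_b, curve_a)[0]
print(max(d_ab, d_ba))  # 0 means identical point sets; larger means a worse match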
I have 2 images sourceImg, refImg.
I've extracted the features like so:
cv::GoodFeaturesToTrackDetector detector;
std::vector<cv::KeyPoint> sourceKeyPoints, refKeyPoints;
detector.detect(sourceImg, sourceKeyPoints);
detector.detect(refImg, refKeyPoints);
I want to find the translation of an object from refImg to sourceImg. There is no rotation or perspective change, only simple 2d translation. There may be some noise.
findHomography() works fine when both sets have the same number of features extracted, even handling noise quite well.
My question is, what do I do when the number of features differs?
Can someone point me in the right direction regarding DescriptorExtractor and Matching?
Note: I can't use SURF/SIFT for patent reasons.
You could try the FlannBasedMatcher class from OpenCV. Use it to match descriptors (of any type) and then use the best matches to find your homography.
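A rough sketch of that pipeline, in Python for brevity (ORB stands in for SIFT/SURF because of the patent constraint; sourceImg and refImg are assumed to be grayscale images you have already loaded, and the FLANN parameters are typical LSH settings, not tuned values):

import cv2
import numpy as np

orb = cv2.ORB_create()
ref_kp, ref_desc = orb.detectAndCompute(refImg, None)
src_kp, src_desc = orb.detectAndCompute(sourceImg, None)

# FLANN needs an LSH index (algorithm=6) for binary descriptors such as ORB's.
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1), {})
matches = flann.knnMatch(ref_desc, src_desc, k=2)

# Ratio test to keep only distinctive matches (FLANN/LSH can return < 2 neighbours).
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]

ref_pts = np.float32([ref_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
src_pts = np.float32([src_kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(ref_pts, src_pts, cv2.RANSAC, 3.0)

# Since you expect a pure 2-D translation, you could instead estimate just that,
# e.g. with cv2.estimateAffinePartial2D(ref_pts, src_pts), and read off the shift.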