I am trying to represent a few geometric objects (polygons, lines, points, etc.) in an ontology and to calculate their spatial/topological relations using the relevant GeoSPARQL functions (sfTouches, sfEquals, sfContains, etc.). I am using GraphDB, with the GeoSPARQL plugin enabled.
I have seen that in the WKT representation of a geometric object, GeoSPARQL uses the concept of a default spatial reference system (i.e. the <http://www.opengis.net/def/crs/OGC/1.3/CRS84> URI, which corresponds to the WGS84 coordinate reference system (CRS)). However, in my use case the coordinates of the geometric objects actually correspond to values in a 2D Cartesian coordinate system.
I found in the EPSG Geodetic Parameter Registry what seems to be the proper CRS for representing Cartesian coordinates and attached its URI to the WKT representation, but the GeoSPARQL functions return neither results nor an error.
My question is the following: "Do the GeoSPARQL functions operate properly when representing spatial objects in any other type of CRS, apart from the default one?".
Thank you in advance.
Currently GDB does not support alternative CRS in WKT literals but supports them in GML literals (issue GDB-3142). GML literals are slightly more complex but still easy enough to generate, let us know if you need help with that.
However, I question your assertion that you have Cartesian coordinates. On one hand, any pair (lat,long) or (northing,easting) is a Cartesian coordinate. On the other hand, since the Earth is not flat, any CRS or projection method is only an approximation, and many of them are tuned for specific localities.
So please tell us which EPSG CRS you picked, and a bit about the locality of your data.
Your example, slightly reformatted and using normal Turtle shorthand:
ex:polygon_ABCD rdf:type ex:ExampleEntity ;
    geo:hasGeometry ex:geometry_polygon_ABCD .
ex:geometry_polygon_ABCD a geo:Geometry, sf:Polygon ;
    geo:asWKT "<opengis.net/def/cs/EPSG/0/4499> Polygon((389.0 1052.0, 563.0 1052.0, 563.0 1280.0, 389.0 1280.0, 389.0 1052.0))"^^geo:wktLiteral .
ex:point_E rdf:type ex:ExampleEntity ;
    geo:hasGeometry ex:geometry_point_E .
ex:geometry_point_E a geo:Geometry, sf:Point ;
    geo:asWKT "<opengis.net/def/cs/EPSG/0/4499> Point(400.0 1100.0)"^^geo:wktLiteral .
You must use the full URL for the CRS (the path is /def/crs/, not /def/cs/, and you cannot omit http:), so the correct URL is http://www.opengis.net/def/crs/EPSG/0/4499.
But you can see from the returned description that this CRS is applicable to "China - onshore and offshore between 120°E and 126°E". I'm not an expert in geo projections, so I can't guarantee whether this CRS will satisfy your need to "leave my coordinates alone, they are just meters". I'd look for a UK (Ordnance Survey) CRS with easting and northing coordinates.
To learn how to format GML:
see the GeoSPARQL spec (OGC 11-052r4) p. 18, which gives an example about gml:Point;
then google for gml:Polygon. There are many links, but one that gives examples is http://www.georss.org/gml.html
Armed with this knowledge, we can reformat your example to GML:
ex:polygon_ABCD rdf:type ex:ExampleEntity ;
    geo:hasGeometry ex:geometry_polygon_ABCD .
ex:geometry_polygon_ABCD a geo:Geometry, sf:Polygon ;
    geo:asGML """
        <gml:Polygon xmlns:gml="http://www.opengis.net/gml" srsName="http://www.opengis.net/def/crs/EPSG/0/TODO">
          <gml:exterior>
            <gml:LinearRing>
              <gml:posList>
                389.0 1052.0 563.0 1052.0 563.0 1280.0 389.0 1280.0 389.0 1052.0
              </gml:posList>
            </gml:LinearRing>
          </gml:exterior>
        </gml:Polygon>
    """^^geo:gmlLiteral .
ex:point_E rdf:type ex:ExampleEntity ;
    geo:hasGeometry ex:geometry_point_E .
ex:geometry_point_E a geo:Geometry, sf:Point ;
    geo:asGML """
        <gml:Point xmlns:gml="http://www.opengis.net/gml" srsName="http://www.opengis.net/def/crs/EPSG/0/TODO">
          <gml:pos>
            400.0 1100.0
          </gml:pos>
        </gml:Point>
    """^^geo:gmlLiteral .
The """ (long quote) allows us to use " inside the literal without quoting
replace TODO with the better CRS you picked
the documentation http://graphdb.ontotext.com/documentation/master/enterprise/geosparql-support.html#geosparql-examples gives an example similar to yours but it cheats a bit because all coordinates are in the range (-90,+90) so it can just use WGS.
after you debug using geof: topology functions, turn on indexing and switch to geo: predicates, because the functions are slow (they check every geometry against every other) while the predicates use the special geo index
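If it helps, below is a small Python sketch of how such a GML literal could be generated programmatically; the polygon_gml helper is purely illustrative (it is not part of GraphDB or any GeoSPARQL library):

# Hypothetical helper: builds the gml:Polygon serialization shown above from a
# list of (x, y) coordinates and a CRS URI; gml:posList is whitespace-separated.
def polygon_gml(coords, srs_uri):
    pos_list = " ".join(f"{x} {y}" for x, y in coords)
    return (
        f'<gml:Polygon xmlns:gml="http://www.opengis.net/gml" srsName="{srs_uri}">'
        f"<gml:exterior><gml:LinearRing>"
        f"<gml:posList>{pos_list}</gml:posList>"
        f"</gml:LinearRing></gml:exterior></gml:Polygon>"
    )

print(polygon_gml(
    [(389.0, 1052.0), (563.0, 1052.0), (563.0, 1280.0),
     (389.0, 1280.0), (389.0, 1052.0)],
    "http://www.opengis.net/def/crs/EPSG/0/TODO",  # replace TODO as noted above
))

The printed string can be embedded in the geo:asGML triple exactly like the long-quoted literal above.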
Let me know how it goes!
Related
I have a shapefile (.shp) with data in EPSG:2180.
I have extracted positions like this one:
303553.0249270061 580466.2644065879
I see the same values in the equivalent .gml file, so I am sure the values are OK.
But how can I convert them to latitude and longitude?
(From the comments: the target is WGS 84, a.k.a. EPSG:4326.)
These are the parameters I see for EPSG:2180:
latitude_of_origin=0
central_meridian=19
scale_factor=0.9993
false_easting=500000
false_northing=-5300000
SPHEROID = 6378137,298.257222101
degree=0.0174532925199433
I have tried a simple calculation like
one_degree = 111196.672; //meters
X:= 303553.0249270061;
Y:= 580466.2644065879;
X:= X - false_northing;
Y:= Y - false_easting;
X:= latitude_of_origin + X/one_degree;
Y:= central_meridian + Y/one_degree;
but this shows me:
50.3931720629823
19.7236391427847
which is not correct. It is close, but the longitude should be >20.
What should this calculation look like?
I need it in a Delphi application.
I don't know a solution for you in Delphi, but I found one in JS and PHP.
I found this topic while I was searching for "how to transform location units from one system to another" and googling "EPSG:2180 convert".
The name of this problem is:
Transform a point coordinate from one map projection to another
So the solution is to apply a mathematical transformation (and possibly other calculations). You can check:
Proj4 Library
with API interfaces for C, C++, Python, Java, Ruby.
My solution was to use the Proj4js library.
Maybe try opening the sources of this library on GitHub and analysing the transform function; with luck you will find the answer. ;)
I used the PHP version of this package => Proj4jsphp.
My problem was how to transform coordinates from EPSG:2180 to WGS 84 (a.k.a. EPSG:4326).
Later I found out there are different names for these systems:
EPSG:2180 corresponds to Poland CS92;
EPSG:4326 corresponds to WGS84, which is the most popular system (lat, lon).
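For completeness, here is a minimal Python sketch of the same transformation with pyproj (a Python binding of PROJ, the successor of Proj4); it treats the first value from the question as the northing and the second as the easting, following the calculation in the question:

from pyproj import Transformer

# EPSG:2180 (Poland CS92) -> EPSG:4326 (WGS84); always_xy=True makes the transformer
# take and return coordinates in (easting/longitude, northing/latitude) order
transformer = Transformer.from_crs("EPSG:2180", "EPSG:4326", always_xy=True)

northing = 303553.0249270061  # first value from the question, assumed to be the northing
easting = 580466.2644065879   # second value, assumed to be the easting

lon, lat = transformer.transform(easting, northing)
print(lat, lon)  # the longitude should come out a bit above 20, as expected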
I am surprised to find that it is somewhat difficult to obtain a correct fit of an interaction function from gam().
To be more specific, I want to estimate an additive function:
y=m_1(x)+m_2(z)+m_{12}(x,z)+u,
where m_1(x)=x^2, m_2(z)=z^2, m_{12}(x,z)=xz. The following code generates this model:
test1 <- function(x, z, sx = 1, sz = 1) {
  #--m1(x) function
  m.x <- x^2
  m.x <- m.x - mean(m.x)
  #--m2(z) function
  m.z <- z^2
  m.z <- m.z - mean(m.z)
  #--m12(x,z) function
  m.xz <- x * z
  m.xz <- m.xz - mean(m.xz)
  m <- m.x + m.z + m.xz
  return(list(m = m, m.x = m.x, m.z = m.z, m.xz = m.xz))
}
n <- 1000
a=0
b=2
x <- runif(n,a,b)/20
z <- runif(n,a,b)
u <- rnorm(n,0,0.5)
model<-test1(x,z)
y <- model$m + u
So I fit the model with gam() as
library(mgcv)
b3 <- gam(y ~ ti(x) + ti(z) + ti(x,z))
vis.gam(b3);title("tensor anova")
#---extracting basis matrix
B.f3<-model.matrix.gam(b3)
#---extracting series estimator
b3.hat<-b3$coefficients
Question: when I plot the estimated functions from gam() above against the true functions, I end up with the following.
par(mfrow=c(1,3))
#---m1(x)
B.x<-B.f3[,c(2:5)]
b.x.hat<-b3.hat[c(2:5)]
plot(x,B.x%*%b.x.hat)
points(x,model$m.x,col='red')
legend('topleft',c('Estimate','True'),lty=c(1,1),col=c('black','red'))
#---m2(z)
B.z<-B.f3[,c(6:9)]
b.z.hat<-b3.hat[c(6:9)]
plot(z,B.z%*%b.z.hat)
points(z,model$m.z,col='red')
legend('topleft',c('Estimate','True'),lty=c(1,1),col=c('black','red'))
#---m12(x,z)
B.xz<-B.f3[,-c(1:9)]
b.xz.hat<-b3.hat[-c(1:9)]
plot(x,B.xz%*%b.xz.hat)
points(x,model$m.xz,col='red')
legend('topleft',c('Estimate','True'),lty=c(1,1),col=c('black','red'))
However, the estimate of m_1(x) is very different from x^2, and the estimate of the interaction function m_{12}(x,z) is also very different from the xz defined in test1 above. The results are the same if I use predict(b3).
I really can't figure it out. Can anybody explain why the results end up like this? Greatly appreciated!
First, the problem above is of course not due to the package. It is closely related to the identification conditions of the smooth functions. One common practice is to impose the assumptions that E(m_j(.))=0 for each individual function j=1,...,d, and E(m_ij(x_i,x_j)|x_i)=E(m_ij(x_i,x_j)|x_j)=0 for i not equal to j. Those conditions require one to employ centred basis functions in the series estimator, which is already done in the gam implementation. However, in my case above, the function m(x,z)=x*z defined in test1 does not satisfy these identification assumptions, since the integral of x*z with respect to either x or z is not zero when x and z range over positive values (here (0, 0.1) and (0, 2)).
Furthermore, the series estimator allows the individual and interaction functions to be identified if one imposes m_j(0)=0 or m_ij(0,x_j)=m_ij(x_i,0)=0. This can be readily achieved by centring the basis functions around zero. I have tried both cases, and they work well whenever the DGP satisfies the identification conditions.
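To see concretely why the pieces get reallocated in the example above, here is a small worked decomposition (my own illustration, assuming x and z are independent; from the simulation, x is uniform on (0, 0.1) with E(x)=0.05 and z is uniform on (0, 2) with E(z)=1):
x*z = E(x)E(z) + E(z)(x - E(x)) + E(x)(z - E(z)) + (x - E(x))(z - E(z))
    = 0.05 + (x - 0.05) + 0.05(z - 1) + (x - 0.05)(z - 1).
Only the last term satisfies E(m_12(x,z)|x) = E(m_12(x,z)|z) = 0, so it is the only part a constrained fit can assign to the interaction; the constant and the two linear pieces are absorbed into the intercept, m_1(x) and m_2(z). This is why the fitted m_1(x) does not look like x^2 and the fitted m_12(x,z) does not look like x*z.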
I'm currently working on some small examples with Apache Jena. What I want to show is universal quantification.
Let's say I have balls, each of which has a color, and these balls are stored in boxes. I now want to determine whether a box contains only balls of the same color or whether the colors are mixed.
So basically something along these lines:
SAME_COLOR = ∃x∀y:{y in Box a → color of y = x}
I know that this is probably not possible in Jena directly, but it can be converted to the following:
SAME_COLOR = ∃x¬∃y:{y in Box a ∧ color of y != x}
For "not exists", Jena's noValue built-in can be used; however, this does not work (at least for me), and I don't know how to translate the above logical representations into Jena. Any thoughts on this?
See the code below, which is the only way I could think of:
(?box, ex:isA, ex:Box)
(?ball, ex:isIn, ?box)
(?ball, ex:hasColor, ?color)
(?ball2, ex:isIn, ?box)
(?ball2, ex:hasColor, ?color2)
notEqual(?color, ?color2)
->
(?box, ex:hasSomeColors, "No").

(?box, ex:isA, ex:Box)
noValue(?box, ex:hasSomeColors)
->
(?box, ex:hasSomeColors, "Yes").
A box with mixed content now has both values "Yes" and "No".
I've run into the same sort of problem, in a more simplified form.
The question is how to get a collection of objects, or count the number of objects, in the rule engine.
Given that res:subj ont:has res:obj_xxx (several objects), how do I get these values in the rule engine?
But I just found a primitive called remove(), which may inspire me a bit.
cv::recoverPose has a "triangulatedPoints" parameter, as seen in the documentation, though the math behind it is not documented, even in the sources (relevant commit on GitHub).
When I use it, I get this matrix in the following form:
[0.06596200907402348, 0.1074107606919504, 0.08120752154556411,
0.07162400555712592, 0.1112415181779849, 0.06479560707001968,
0.06812069103377787, 0.07274771866295617, 0.1036230973846902,
0.07643884790206311, 0.09753859499789987, 0.1050111597547035,
0.08431322508162108, 0.08653721971228882, 0.06607013741719928,
0.1088621999959361, 0.1079215237863785, 0.07874160849424018,
0.07888037486261903, 0.07311940086190356;
-0.3474319603010109, -0.3492386196164926, -0.3592673043398864,
-0.3301695131649525, -0.3398606744869519, -0.3240186574427479,
-0.3302508442361889, -0.3534091474425142, -0.3134288005980755,
-0.3456284001726975, -0.3372514921152191, -0.3229005408417835,
-0.3156005118578394, -0.3545418178651592, -0.3427899760859008,
-0.3552801904337188, -0.3368860879000375, -0.3268499974874541,
-0.3221050630233929, -0.3395139819250934;
-0.9334091581425227, -0.9288726274060354, -0.9277125424980246,
-0.9392374374147775, -0.9318967835907961, -0.941870018271934,
-0.9394698966781299, -0.9306592884695234, -0.9419749503870455,
-0.9332801148509925, -0.9343740431697417, -0.9386198310107222,
-0.9431781968459053, -0.9290466865633286, -0.9351167772249444,
-0.9264105322194914, -0.933362882155191, -0.9398254944757025,
-0.9414486961893244, -0.935785675955617;
-0.0607238817598344, -0.0607532477465341, -0.06067768097603395,
-0.06075467523485482, -0.06073245675798231, -0.06078081616640227,
-0.06074754785132623, -0.0606879948481664, -0.06089198212719162,
-0.06071522666667255, -0.06076842109618678, -0.06083346023742937,
-0.06084805655000008, -0.0606931888685702, -0.06071558440082779,
-0.06073329803512636, -0.06078189449161094, -0.06080195858434526,
-0.06083228813425822, -0.06073695721101467]
i.e. a 4x20 matrix (in this case there were 20 points). I want to convert this data to a std::vector in order to use it in solvePnP. How do I do that, and what is the math here? Thanks!
OpenCV offers a triangulatePoints function, which has the same output:
points4D 4xN array of reconstructed points in homogeneous coordinates.
This indicates that each column is a 3D point in a homogeneous coordinate system. However, your points do not look quite as I would expect. For instance, your first point is:
[0.06596200907402348, -0.3474319603010109, -0.9334091581425227, -0.0607238817598344]
But I would expect the last component to already be 1.0. You should double-check whether something is wrong here. You can always remove the "scaling" of the point by dividing each dimension by the last component:
[x, y, z, w] = w [x/w, y/w, z/w, 1]
And then use the first three parts for your PnP solution.
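As a sketch of that de-homogenisation step (shown in Python/NumPy for brevity; the same element-wise division applies to the C++ Mat before copying into a std::vector<cv::Point3d>):

import numpy as np
import cv2

# "triangulated" stands in for the 4xN triangulatedPoints matrix from recoverPose;
# it is filled with dummy values here only so the snippet runs on its own
triangulated = np.random.rand(4, 20)

# each column is a homogeneous point [x, y, z, w]: divide by w to de-homogenise
pts3d = (triangulated[:3] / triangulated[3]).T  # shape (N, 3)

# OpenCV's helper performs the same division (expects an N x 4 input)
pts3d_cv = cv2.convertPointsFromHomogeneous(triangulated.T.astype(np.float32)).reshape(-1, 3)

# pts3d can then serve as the objectPoints argument of solvePnP, together with
# the matching N x 2 image points and the camera intrinsics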
I hope this helps
I have an HDF5 file with global coverage of temperature. The file was converted from NetCDF. The conversion process set longitude from 0 to 360 and additionally flipped the map upside down, so north is now south. I have used HDFView and I can display the file, but there is no way to interact with the map to locate a specific lat/long combination. The file doesn't display properly in ArcMap even after setting the correct projection.
Is there any way I can display the data, click on a location and extract the lat/long, or draw a point at a specific lat/long?
Short answer: No, that's not possible.
Long answer: Unlike NetCDF, HDF5 is a general purpose file format. It allows you to store n-dimensional numerical arrays (called datasets), grouped into folders (hence the name "hierarchical"). Nothing more. There is no semantics. To HDF5, your data is not a "map", it's just an array. Therefore, HDFView does not "know" about latitudes and longitudes. That information was lost in the NetCDF => HDF5 conversion process. Actually, the lat/lon arrays are probably still in the file but they no longer have any inherent meaning. NetCDF, on the other hand, imposes a common data model including coordinate systems. That's why the various visualization tools let you interact with your data in a more sophisticated way.
What tool did you use to convert your NetCDF-file to HDF5?
You can use HDF5 to store meteorological data (I do that, it works well). But then you have to write your own tools for georeferencing and visualization. Check out the h5py project if you're into Python.
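For instance, here is a minimal h5py sketch for picking the grid cell nearest to a given location, assuming the lat/lon arrays survived the conversion; the file and dataset names are hypothetical, so check the real names with HDFView or h5dump first:

import h5py
import numpy as np

with h5py.File("temperature.h5", "r") as f:  # hypothetical file name
    lat = f["lat"][:]                        # hypothetical dataset names
    lon = f["lon"][:]                        # 0..360 in your case
    temp = f["temperature"][:]               # e.g. shape (time, lat, lon)

# nearest grid cell to a target location; a -180..180 longitude is wrapped to 0..360,
# and searching the lat array by value also copes with a flipped (south-up) axis
target_lat, target_lon = 40.0, -75.0
i = np.abs(lat - target_lat).argmin()
j = np.abs(lon - (target_lon % 360)).argmin()
print(temp[0, i, j])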
As #heron13 has said, HDF5 is a file format.
What version of NetCDF was your file? Version 4 uses an enhanced version of HDF5 as the storage layer.
Does your NetCDF file follow (have) the CF conventions or COARDS conventions? If so, I would look at the program you used to convert it to HDF5, as HDF5 can carry the same conventions; see the example below.
Once you confirm that the conventions are present in the HDF5 file, ArcMap is meant to support them too (sorry, I do not have access to ArcMap to confirm).
Here's a look at a NetCDF file with the CF conventions:
$ ncdump tos_O1_2001-2002.nc | less
netcdf tos_O1_2001-2002 {
dimensions:
        lon = 180 ;
        lat = 170 ;
        time = UNLIMITED ; // (24 currently)
        bnds = 2 ;
variables:
        double lon(lon) ;
                lon:standard_name = "longitude" ;
                lon:long_name = "longitude" ;
                lon:units = "degrees_east" ;
                lon:axis = "X" ;
                lon:bounds = "lon_bnds" ;
                lon:original_units = "degrees_east" ;
...
While here is a view of the same file only using h5dump:
$ h5dump tos_O1_2001-2002.nc | less
HDF5 "tos_O1_2001-2002.nc" {
GROUP "/" {
ATTRIBUTE "Conventions" {
DATATYPE H5T_STRING {
STRSIZE 6;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "CF-1.0"
}
}
...
One other question: is there any reason why you are not using the NetCDF file in ArcMap?