I have a shapefile (.shp) with data in EPSG:2180.
I have extracted positions like this one:
303553.0249270061 580466.2644065879
I see the same values in the equivalent .gml file,
so I am sure the values are correct.
But how can I convert them to latitude and longitude?
(From the comments: the GPS coordinates are WGS 84, a.k.a. EPSG:4326.)
I see these parameters for EPSG:2180:
latitude_of_origin=0
central_meridian=19
scale_factor=0.9993
false_easting=500000
false_northing=-5300000
SPHEROID = 6378137,298.257222101
degree=0.0174532925199433
I have tried a simple calculation like this:
one_degree = 111196.672; //meters
X:= 303553.0249270061;
Y:= 580466.2644065879;
X:= X - false_northing;
Y:= Y - false_easting;
X:= latitude_of_origin + X/one_degree;
Y:= central_meridian + Y/one_degree;
but this gives me:
50.3931720629823
19.7236391427847
which is not correct. It is close, but the longitude should be greater than 20.
What should this calculation look like?
I need this in a Delphi application.
I don't know a solution for you in Delphi, but I found one in JS and PHP.
I found this topic while searching for "how to transform location units from one system to another" and googling "EPSG:2180 convert".
The name of this problem is:
Transform a point coordinate from one map projection to another
So the solution is to perform a mathematical transformation (and possibly other calculations). You can check the
Proj4 library
with API interfaces for C, C++, Python, Java, Ruby.
My solution was to use the Proj4js library.
Maybe try opening this library's sources on GitHub and analyzing the transform function; with luck you will find the answer. ;)
I used the PHP version of this package, Proj4jsphp.
My problem was how to transform coordinates from EPSG:2180 to GPS coordinates in WGS 84 (a.k.a. EPSG:4326).
Later I found out there are different names for these systems:
EPSG:2180 corresponds to Poland CS92
EPSG:4326 corresponds to WGS84, the most popular system (lat, lon).
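For completeness, here is a minimal sketch of that transformation using pyproj, the Python bindings for the PROJ/Proj4 library mentioned above. The variable names follow the question's labelling (X as northing, Y as easting), which you should verify against your own data:

# Minimal sketch using pyproj (Python bindings for PROJ/Proj4).
# Assumption: X from the question is the northing, Y is the easting.
from pyproj import Transformer

northing = 303553.0249270061  # "X" in the question
easting = 580466.2644065879   # "Y" in the question

# always_xy=True makes the transformer take (easting, northing) and
# return (longitude, latitude), regardless of the official axis order.
transformer = Transformer.from_crs("EPSG:2180", "EPSG:4326", always_xy=True)
lon, lat = transformer.transform(easting, northing)
print(lat, lon)  # expect roughly lat 50.4, lon just above 20 for this point

Incidentally, the naive calculation in the question underestimates the longitude offset because one degree of longitude spans only about 111.32 * cos(latitude) km, roughly 71 km at 50°N rather than 111 km; a proper inverse projection, which is what Proj4 performs, also accounts for the scale factor and other distortions.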
I am trying to represent in an ontology a few geometric objects (polygon, lines, points, etc.) and calculate their spatial/topological relations, through the adoption of GeoSPARQL relevant functions (sfTouches, sfEquals, sfContains, etc.). I am using GraphDB, with the GeoSPARQL plugin enabled.
I have seen that in the WKT representation of the geometric object, GeoSPARQL uses the concept of a default spatial reference system (i.e. the <http://www.opengis.net/def/crs/OGC/1.3/CRS84> URI which corresponds to the WGS84 coordinate reference system (CRS)). However, in my use case, the coordinates of the geometrical objects actually correspond to values in a 2D Cartesian coordinate system.
I found in the EPSG Geodetic Parameter Registry the proper CRS for representing Cartesian coordinates and I attached the proper URI in the WKT representation, but the GeoSPARQL functions do not return any result or error.
My question is the following: "Do the GeoSPARQL functions operate properly when representing spatial objects in any other type of CRS, apart from the default one?".
Thank you in advance.
Currently GraphDB does not support alternative CRSs in WKT literals, but it does support them in GML literals (issue GDB-3142). GML literals are slightly more complex but still easy enough to generate; let us know if you need help with that.
However, I question your assertion that you have Cartesian coordinates. On one hand, any pair (lat, long) or (northing, easting) is a Cartesian coordinate. On the other hand, since the Earth is not flat, any CRS or projection method is only an approximation, and many of them are tuned for specific localities.
So please tell us which EPSG CRS you picked, and a bit about the locality of your data.
Your example, slightly reformatted and using normal Turtle shortenings:
ex:polygon_ABCD rdf:type ex:ExampleEntity ;
geo:hasGeometry ex:geometry_polygon_ABCD .
ex:geometry_polygon_ABCD a geo:Geometry, sf:Polygon ;
geo:asWKT "<opengis.net/def/cs/EPSG/0/4499> Polygon((389.0 1052.0, 563.0 1052.0, 563.0 1280.0, 389.0 1280.0, 389.0 1052.0))"^^geo:wktLiteral .
ex:point_E rdf:type ex:ExampleEntity ;
geo:hasGeometry ex:geometry_point_E .
ex:geometry_point_E a geo:Geometry, sf:Point ;
geo:asWKT "<opengis.net/def/cs/EPSG/0/4499> Point(400.0 1100.0)"^^geo:wktLiteral ; .
You must use a specific URL for the CRS and cannot omit http:, so the correct URL is http://www.opengis.net/def/crs/EPSG/0/4499.
But you can see from the returned description that this CRS is applicable to "China - onshore and offshore between 120°E and 126°E". I'm not an expert in geo projections, so I can't guarantee whether this CRS will satisfy your need to "leave my coordinates alone, they are just meters". I'd look for a UK (Ordnance Survey) CRS with easting and northing coordinates.
To learn how to format GML:
see the GeoSPARQL spec (OGC 11-052r4) p. 18, which gives an example of gml:Point.
then google for gml:Polygon. There are many links but one that gives examples is http://www.georss.org/gml.html
Armed with this knowledge, we can reformat your example to GML:
ex:polygon_ABCD rdf:type ex:ExampleEntity ;
geo:hasGeometry ex:geometry_polygon_ABCD .
ex:geometry_polygon_ABCD a geo:Geometry, sf:Polygon ;
geo:asGML """
<gml:Polygon xmlns:gml="http://www.opengis.net/gml" srsName="http://www.opengis.net/def/crs/EPSG/0/TODO">
<gml:exterior>
<gml:LinearRing>
<gml:posList>
389.0 1052.0 563.0 1052.0 563.0 1280.0 389.0 1280.0 389.0 1052.0
</gml:posList>
</gml:LinearRing>
</gml:exterior>
</gml:Polygon>
"""^^geo:gmlLiteral.
ex:point_E rdf:type ex:ExampleEntity ;
geo:hasGeometry ex:geometry_point_E .
ex:geometry_point_E a geo:Geometry, sf:Point ;
geo:asGML """
<gml:Point xmlns:gml="http://www.opengis.net/gml" srsName="http://www.opengis.net/def/crs/EPSG/0/TODO">
<gml:pos>
400.0 1100.0
</gml:pos>
</gml:Point>
"""^^geo:gmlLiteral.
The """ (long quote) allows us to use " inside the literal without quoting
replace TODO with the better CRS you picked
the documentation http://graphdb.ontotext.com/documentation/master/enterprise/geosparql-support.html#geosparql-examples gives an example similar to yours but it cheats a bit because all coordinates are in the range (-90,+90) so it can just use WGS.
after you finish debugging with the geof: topology functions, turn on indexing and switch to the geo: predicates, because the functions are slow (they check every geometry against every other) while the predicates use the special geo index (see the sketch just below)
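For illustration, here is a rough Python sketch (not taken from GraphDB's documentation) of the two query styles, using the SPARQLWrapper package; the repository URL and the data it queries are placeholders for your own GraphDB endpoint:

# Rough sketch; endpoint URL is a placeholder for your GraphDB repository.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:7200/repositories/myrepo")
endpoint.setReturnFormat(JSON)

# Function style: geof:sfContains is evaluated for every pair of geometries.
endpoint.setQuery("""
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
SELECT ?a ?b WHERE {
  ?a geo:hasGeometry/geo:asGML ?ga .
  ?b geo:hasGeometry/geo:asGML ?gb .
  FILTER(geof:sfContains(?ga, ?gb))
}""")
print(endpoint.query().convert())

# Predicate style: geo:sfContains is answered from the plugin's geo index
# (enable indexing first, as described in the GraphDB documentation).
endpoint.setQuery("""
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
SELECT ?a ?b WHERE { ?a geo:sfContains ?b }""")
print(endpoint.query().convert())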
Let me know how it goes!
cv::recoverPose has a parameter "triangulatedPoints", as seen in the documentation, though the math behind it is not documented, even in the sources (relevant commit on GitHub).
When I use it, I get a matrix in the following form:
[0.06596200907402348, 0.1074107606919504, 0.08120752154556411,
0.07162400555712592, 0.1112415181779849, 0.06479560707001968,
0.06812069103377787, 0.07274771866295617, 0.1036230973846902,
0.07643884790206311, 0.09753859499789987, 0.1050111597547035,
0.08431322508162108, 0.08653721971228882, 0.06607013741719928,
0.1088621999959361, 0.1079215237863785, 0.07874160849424018,
0.07888037486261903, 0.07311940086190356;
-0.3474319603010109, -0.3492386196164926, -0.3592673043398864,
-0.3301695131649525, -0.3398606744869519, -0.3240186574427479,
-0.3302508442361889, -0.3534091474425142, -0.3134288005980755,
-0.3456284001726975, -0.3372514921152191, -0.3229005408417835,
-0.3156005118578394, -0.3545418178651592, -0.3427899760859008,
-0.3552801904337188, -0.3368860879000375, -0.3268499974874541,
-0.3221050630233929, -0.3395139819250934;
-0.9334091581425227, -0.9288726274060354, -0.9277125424980246,
-0.9392374374147775, -0.9318967835907961, -0.941870018271934,
-0.9394698966781299, -0.9306592884695234, -0.9419749503870455,
-0.9332801148509925, -0.9343740431697417, -0.9386198310107222,
-0.9431781968459053, -0.9290466865633286, -0.9351167772249444,
-0.9264105322194914, -0.933362882155191, -0.9398254944757025,
-0.9414486961893244, -0.935785675955617;
-0.0607238817598344, -0.0607532477465341, -0.06067768097603395,
-0.06075467523485482, -0.06073245675798231, -0.06078081616640227,
-0.06074754785132623, -0.0606879948481664, -0.06089198212719162,
-0.06071522666667255, -0.06076842109618678, -0.06083346023742937,
-0.06084805655000008, -0.0606931888685702, -0.06071558440082779,
-0.06073329803512636, -0.06078189449161094, -0.06080195858434526,
-0.06083228813425822, -0.06073695721101467]
i.e. a 4x20 matrix (in this case there were 20 points). I want to convert this data to a std::vector in order to use it in solvePnP. How do I do that, and what is the math here? Thanks!
OpenCV offers a triangulatePoints function, which has the same output:
points4D 4xN array of reconstructed points in homogeneous coordinates.
This indicates that each column is a 3D point in homogeneous coordinates. However, your points do not look quite as I would expect. For instance, your first point is:
[0.06596200907402348, -0.3474319603010109, -0.9334091581425227, -0.0607238817598344]
But I would expect the last component to already be 1.0. You should double-check that nothing is wrong here. You can always remove the "scaling" of a point by dividing each component by the last one:
[x, y, z, w] = w * [x/w, y/w, z/w, 1]
And then use the first three components for your PnP solution.
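If it helps, here is a small Python/NumPy sketch of that division, using the first column of your output; the same arithmetic applies when filling a std::vector of cv::Point3f in C++:

import numpy as np

# 4xN homogeneous points, e.g. recoverPose's triangulatedPoints output.
# Here only the first point from the question, as a 4x1 column.
pts4d = np.array([[ 0.06596200907402348],
                  [-0.3474319603010109],
                  [-0.9334091581425227],
                  [-0.0607238817598344]])

# Divide x, y, z by w, then transpose to an Nx3 array, which is the
# objectPoints layout solvePnP expects (use float32 for OpenCV).
pts3d = (pts4d[:3] / pts4d[3]).T.astype(np.float32)
print(pts3d)  # one 3D point per row

OpenCV also has a convertPointsFromHomogeneous function that performs the same division (it expects one point per row, so you would pass the transpose of the 4xN matrix).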
I hope this helps
I have an HDF5 file with global coverage of temperature. The file was converted from NetCDF. The conversion process set longitude from 0 to 360 and additionally flipped the map upside down, so north is now south. I have used HDFView and I can display the file, but there is no way to interact with the map to locate a specific lat/long combination. The file doesn't display properly in ArcMap even after setting the correct projection.
Is there any way I can display the data and click on a location to extract its lat/long, or draw a point at a specific lat/long?
Short answer: No, that's not possible.
Long answer: Unlike NetCDF, HDF5 is a general purpose file format. It allows you to store n-dimensional numerical arrays (called datasets), grouped into folders (hence the name "hierarchical"). Nothing more. There is no semantics. To HDF5, your data is not a "map", it's just an array. Therefore, HDFView does not "know" about latitudes and longitudes. That information was lost in the NetCDF => HDF5 conversion process. Actually, the lat/lon arrays are probably still in the file but they no longer have any inherent meaning. NetCDF, on the other hand, imposes a common data model including coordinate systems. That's why the various visualization tools let you interact with your data in a more sophisticated way.
What tool did you use to convert your NetCDF-file to HDF5?
You can use HDF5 to store meteorological data (I do that, it works well). But then you have to write your own tools for georeferencing and visualization. Check out the h5py project if you're into Python.
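To sketch what that could look like, here is a rough h5py example; the dataset names ("lat", "lon", "temperature") and the file name are assumptions, since they depend on whatever tool did the conversion, so inspect the file first (e.g. with list(f.keys()) or h5dump):

import numpy as np
import h5py

with h5py.File("temperature.h5", "r") as f:      # file name is a placeholder
    lats = f["lat"][:]           # may run north-to-south if the map was flipped
    lons = f["lon"][:]           # may run 0..360 instead of -180..180
    temp = f["temperature"][:]   # assumed to be a 2-D grid indexed [lat, lon]

# Convert a query point to the file's longitude convention.
query_lat, query_lon = 52.2, 21.0
if lons.max() > 180.0:           # file uses 0..360 longitudes
    query_lon = query_lon % 360.0

# Nearest-neighbour lookup; searching the actual coordinate arrays makes
# the flipped latitude axis irrelevant.
i = int(np.abs(lats - query_lat).argmin())
j = int(np.abs(lons - query_lon).argmin())
print("temperature at (%.2f, %.2f):" % (lats[i], lons[j]), temp[i, j])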
As @heron13 has said, HDF5 is a file format.
What version of NetCDF is your file? Version 4 uses an enhanced version of HDF5 as the storage layer.
Does your NetCDF file follow (have) the CF conventions or COARDS conventions? If so, I would look at the program you used to convert it to HDF5, as HDF5 can carry the same conventions.
Once you confirm that the conventions are in the HDF5 file, ArcMap is meant to support them too (sorry, I do not have access to ArcMap to confirm).
Here's a look at a NetCDF file with the CF conventions:
$ ncdump tos_O1_2001-2002.nc | less
netcdf tos_O1_2001-2002 {
dimensions:
lon = 180 ;
lat = 170 ;
time = UNLIMITED ; // (24 currently)
bnds = 2 ;
variables:
double lon(lon) ;
lon:standard_name = "longitude" ;
lon:long_name = "longitude" ;
lon:units = "degrees_east" ;
lon:axis = "X" ;
lon:bounds = "lon_bnds" ;
lon:original_units = "degrees_east" ;
...
While here is a view of the same file only using h5dump:
$ h5dump tos_O1_2001-2002.nc | less
HDF5 "tos_O1_2001-2002.nc" {
GROUP "/" {
ATTRIBUTE "Conventions" {
DATATYPE H5T_STRING {
STRSIZE 6;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
DATA {
(0): "CF-1.0"
}
}
...
One other question: is there any reason why you are not using the NetCDF file in ArcMap?
I am working on a project that uses ElGamal elliptic curve encryption.
I know that ElGamal EC encryption works by the following steps:
Represent the message m as a point M in E(Fp).
Select k ∈R [1,n−1].
Compute C1 = kP.
Compute C2 = M +kQ.
Return(C1,C2).
where Q is the intended recipient's public key and P is the base point.
My question is about step 1: how do I represent m as a point? Does a point represent one character or a group of characters?
There's no obvious way to map m to points in E(Fp). However, you can use a variant of ElGamal such as the Menezes-Vanstone elliptic curve cryptosystem, which sidesteps the need to embed the message in a point; a good reference is here (p. 31).
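To make the idea concrete, here is a toy Python sketch of Menezes-Vanstone on the small textbook curve y^2 = x^3 + x + 1 over F_23; the curve, keys and nonce are made-up illustration values and are in no way secure:

# Toy Menezes-Vanstone sketch (Python 3.8+ for pow(x, -1, p)).
# All parameters are tiny illustration values, not a real cryptosystem.
p, a, b = 23, 1, 1            # curve y^2 = x^3 + x + 1 over F_23
P = (3, 10)                   # base point (order 28 on this curve)
d = 7                         # recipient's private key

def ec_add(A, B):
    # Point addition on the curve; None represents the point at infinity.
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if A == B:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

Q = ec_mul(d, P)              # recipient's public key

# Encryption: the message is two residues (m1, m2); it is never mapped
# to a curve point. The coordinates of kQ act as the mask.
m1, m2 = 5, 9
k = 11                        # would be chosen at random in [1, n-1]
C1 = ec_mul(k, P)
x1, y1 = ec_mul(k, Q)
c1, c2 = m1 * x1 % p, m2 * y1 % p

# Decryption: d*C1 equals k*Q, so the masks can be removed.
xd, yd = ec_mul(d, C1)
print(c1 * pow(xd, -1, p) % p, c2 * pow(yd, -1, p) % p)   # prints: 5 9

The point to take away is that the plaintext is only multiplied by the coordinates of kQ, so step 1 of the scheme you quoted (representing m as a point M) is not needed at all.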
As for Java code, I suggest you do some work yourself and post another question on SO when you encounter a problem you really can't solve.
DISCLAIMER: This question is only for those who have access to the econometrics toolbox in Matlab.
The Situation: I would like to use Matlab to simulate N observations from an ARIMA(p, d, q) model using the econometrics toolbox. What's the difficulty? I would like the innovations to be simulated with deterministic, time-varying variance.
Question 1) Can I do this using the in-built matlab simulate function without altering it myself? As near as I can tell, this is not possible. From my reading of the docs, the innovations can either be specified to have a constant variance (ie same variance for each innovation), or be specified to be stochastically time-varying (eg a GARCH model), but they cannot be deterministically time-varying, where I, the user, choose their values (except in the trivial constant case).
Question 2) If the answer to question 1 is "No", then does anyone see any reason why I can't edit the simulate function from the econometrics toolbox as follows:
a) Alter the preamble such that the function won't throw an error if the Variance field in the input model is set to a numeric vector instead of a numeric scalar.
b) Alter line 310 of simulate from:
E(:,(maxPQ + 1:end)) = Z * sqrt(variance);
to
E(:,(maxPQ + 1:end)) = (ones(NumPath, 1) * sqrt(variance)) .* Z;
where NumPath is the number of paths to be simulated, and it can be assumed that I've included an error trap to ensure that the (input) deterministic variance path stored in variance is of the right length (ie equal to the number of observations to be simulated per path).
Any help would be most appreciated. Apologies if the question seems basic, I just haven't ever edited one of Mathwork's own functions before and didn't want to do something foolish.
UPDATE (2012-10-18): I'm confident that the code edit I've suggested above is valid, and I'm mostly confident that it won't break anything else. However it turns out that implementing the solution is not trivial due to file permissions. I'm currently talking with Mathworks about the best way to achieve my goal. I'll post the results here once I have them.
It's been a week and a half with no answer, so I think I'm probably okay to post my own answer at this point.
In response to my question 1): no, I have not found any way to do this with the built-in Matlab functions.
In response to my question 2): yes, what I have posted will work. However, it was a little more involved than I imagined due to Matlab file permissions. Here is a step-by-step guide:
i) Somewhere in your Matlab path, create the directory @arima_Custom.
ii) In the command window, type edit arima. Copy the text of this file into a new m-file and save it in the directory @arima_Custom with the filename arima_Custom.m.
iii) Locate the econometrics toolbox on your machine. Once found, look for the directory @arima in the toolbox. This directory will probably be located (on a Linux machine) at something like $MATLAB_ROOT/toolbox/econ/econ/@arima (on my machine, $MATLAB_ROOT is at /usr/local/Matlab/R2012b). Copy the contents of @arima to @arima_Custom, except do NOT copy the file arima.m.
iv) Open arima_Custom for editing, ie edit arima_Custom. In this file change line 1 from:
classdef (Sealed) arima < internal.econ.LagIndexableTimeSeries
to
classdef (Sealed) arima_Custom < internal.econ.LagIndexableTimeSeries
Next, change line 406 from:
function OBJ = arima(varargin)
to
function OBJ = arima_Custom(varargin)
Now, change line 993 from:
if isa(OBJ.Variance, 'double') && (OBJ.Variance <= 0)
to
if isa(OBJ.Variance, 'double') && (sum(OBJ.Variance <= 0) > 0)
v) Open the simulate.m located in @arima_Custom for editing (we copied it there in step iii). It is probably best to open this file by navigating to it manually in the Current Folder window, to ensure the correct simulate.m is opened. In this file, alter line 310 from:
E(:,(maxPQ + 1:end)) = Z * sqrt(variance);
to
%Check that the input variance is of the right length (if it isn't scalar)
if isscalar(variance) == 0
if size(variance, 2) ~= 1
error('Deterministic variance must be a column vector');
end
if size(variance, 1) ~= numObs
error('Deterministic variance vector is incorrect length relative to number of observations');
end
else
variance = variance(ones(numObs, 1));
end
%Scale innovations using deterministic variance
E(:,(maxPQ + 1:end)) = sqrt(ones(numPaths, 1) * variance') .* Z;
And we're done!
You should now be able to simulate with deterministically time-varying variance using the arima_Custom class, for example (for an ARIMA(0,1,0)):
ARIMAModel = arima_Custom('D', 1, 'Variance', ScalarVariance, 'Constant', 0);
ARIMAModel.Variance = TimeVaryingVarianceVector;
[X, e, VarianceVector] = simulate(ARIMAModel, NumObs, 'numPaths', NumPaths);
Further, you should also still be able to use matlab's original arima class, since we didn't alter it.