UPDATE: The basic question is whether the GeoJSON delivered by my REST interface (the JSON data is shown at the end of the question) is valid GeoJSON for the vector layer, because as soon as I add it as the source of the vector layer, the layer is broken.
Currently there is no REST interface to upload shapes, so I just took some valid coordinates from existing shapes and created a static JSON on the server side.
I am trying to make it possible for users to draw shapes (polygons, lines) on a vector layer; I then store the geometric data on a server, and the next time the map is loaded it should be displayed again. But it seems that as soon as I define my REST interface as the source of the vector layer, something goes wrong: painting and adding objects to the vector layer no longer works.
Here is the code I put together from the OpenLayers examples. First, an image of how it should look:
I extracted the coordinates on the map with the drawend event and built a REST interface from which I can load the geometric data. This is the response I get for the vector layer source:
{
  "type": "FeatureCollection",
  "crs": {
    "type": "name",
    "properties": {
      "name": "EPSG:2000"
    }
  },
  "features": [
    {
      "type": "Feature",
      "id": "1",
      "properties": {
        "name": "TEST1"
      },
      "geometry": {
        "type": "LineString",
        "coordinates": [
          [
            -5920506.46285661,
            1533632.5355137766
          ],
          [
            -1882185.384494179,
            234814.55089206155
          ]
        ]
      }
    }
  ]
}
But when I load this, nothing is displayed and it is no longer possible to draw on the layer (if I remove the "source" attribute from the vector layer, it works again).
Here is the complete code on pastebin: Example Code
I fixed the problem in the meantime: I was trying to load GeoJSON into my vector layer, but the vector layer was always broken afterwards. As I was pretty new to OpenLayers, I didn't notice my mistake, but I had syntax errors in my GeoJSON, and the coordinates I supplied were also wrong.
After correcting the syntax and the coordinates, everything works as intended, and as mentioned above, EPSG:3857 was also the right projection to use. Sorry for my first messy experiences with OpenLayers, and thanks for the friendly help ;)
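For anyone hitting the same issue, here is a minimal sketch of how the corrected setup could look. The endpoint URL and the layer wiring are assumptions for illustration, not my original code; the key points are serving valid GeoJSON and making sure the coordinates match the view projection (EPSG:3857 in my case):

var vectorSource = new ol.source.Vector({
    url: '/rest/shapes',             // hypothetical REST endpoint returning the FeatureCollection
    format: new ol.format.GeoJSON()  // coordinates served in EPSG:3857, same as the map view
});
var vectorLayer = new ol.layer.Vector({
    source: vectorSource
});
map.addLayer(vectorLayer);

If the server delivered lon/lat coordinates instead, the features would need to be reprojected when read, e.g. format.readFeatures(data, { dataProjection: 'EPSG:4326', featureProjection: 'EPSG:3857' }).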
I'm working on an older project that uses D3D9 for rendering 3D environments.
I have a texture file loaded into memory, which I'm applying onto a simple 3D model for rendering. I'm loading this file using the D3DXCreateTextureFromFileInMemory function (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemory), and everything works okay.
However, instead of simply reading and loading the entire texture file, I want to read and load only a square portion of it (a sub-texture of sorts). I have a pair of UV coordinates for that square portion (one UV coordinate for the top-left corner of the square, one for the bottom-right), relative to the main texture file, but I can't find a D3D9 function that does such a thing (I believe the correct term for this would be a "texture atlas", but I've only heard it a couple of times and I'm not sure).
Here is an example diagram, to make sure my question is clear:
Looking over the MS Docs for the D3D9 texture functions, there is also D3DXCreateTextureFromFileInMemoryEx (MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromfileinmemoryex), which is a supposed upgrade of the previous D3DXCreateTextureFromFileInMemory function; however, it only accepts "Height" and "Width" parameters, not any sort of positional parameter pair. There are also alternative functions that use "resources" instead of files in memory, but they do not appear to accept any positional parameters either (such as D3DXCreateTextureFromResourceEx, MS Docs link: https://learn.microsoft.com/en-us/windows/win32/direct3d9/d3dxcreatetexturefromresourceex).
There are also several functions for a "UV Atlas" present in the MS Docs archives (https://learn.microsoft.com/en-us/windows/win32/direct3d9/dx9-graphics-reference-d3dx-functions-uvatlas); however, I do not think those would be helpful to me.
Is what I'm trying to achieve here even possible using D3D9? Are there any functions that I may be missing that could help me achieve this goal?
I'm using the following Python code to detect license plates with the Cloud Vision API.
# Assumes the google-cloud-vision client library is installed, credentials are
# configured, and `uri` points to the image to analyze.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

response = client.annotate_image({
    'image': {'source': {'image_uri': uri}},
    'features': [
        {'max_results': 1000, 'type_': vision.Feature.Type.TEXT_DETECTION},
        {'max_results': 1000, 'type_': vision.Feature.Type.OBJECT_LOCALIZATION},
    ],
})

lo_annotations = response.localized_object_annotations
for obj in lo_annotations:
    print('\n{} (confidence: {})'.format(obj.name, obj.score))
    print('Normalized bounding polygon vertices: ')
    for vertex in obj.bounding_poly.normalized_vertices:
        print(' - ({}, {})'.format(vertex.x, vertex.y))
If I use an image showing several cars, buildings, persons, etc., I get about 4-7 recognized objects. The recognized objects are the bigger ones in the scene, like "Car", "Car", "Building", "Building", "Person".
If I crop out just one car from this image and run Object Localization on the new image, I get objects like "Car", "Tire", "Tire", "License plate", which is perfect, because the plate gets recognized and listed.
So it seems the Object Localization algorithm picks out some prominent objects from the image and ignores smaller or less prominent ones.
But in my case I need to localize all license plates in the image. Is there a way to get the model to list all license plates in the image, or at least more objects than just the most prominent ones?
Otherwise, what would be the right approach to get all plates out of an image? Do I have to train a custom model?
Vision API is a pre-trained image detection service provided by Google that performs basic detection tasks such as detecting text, objects, etc., hence the behavior you have observed, where the API usually detects only the prominent objects in the image.
What I could suggest is: if the objects in your images usually appear in a specific area of the image (e.g. objects appear in the lower half), you can pre-process the image by cropping it with Python libraries like PIL or OpenCV before using Vision API to detect license plates. Alternatively, detect the objects first, get the coordinates per object, use those coordinates to crop the individual objects, and then run Vision API on each crop to detect the license plates (see the sketch below).
Also, as you have mentioned, you can always create a custom model to detect license plates as an alternative if you are not satisfied with the results of Vision API. With a custom model, you have more freedom to tweak it and increase its accuracy on license plates.
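As a rough illustration of that two-pass idea, here is a sketch (untested, and in JavaScript rather than Python) that assumes the Node.js @google-cloud/vision client and the sharp image library; the file names are made up:

const vision = require('@google-cloud/vision');
const sharp = require('sharp');

const client = new vision.ImageAnnotatorClient();

async function findPlates(imagePath) {
    // Pass 1: localize the prominent objects (the cars) in the full image.
    const [result] = await client.objectLocalization(imagePath);
    const cars = result.localizedObjectAnnotations.filter(o => o.name === 'Car');

    const { width, height } = await sharp(imagePath).metadata();
    for (const [i, car] of cars.entries()) {
        // Convert the normalized bounding polygon to pixel coordinates.
        const xs = car.boundingPoly.normalizedVertices.map(v => v.x * width);
        const ys = car.boundingPoly.normalizedVertices.map(v => v.y * height);
        const left = Math.round(Math.min(...xs));
        const top = Math.round(Math.min(...ys));
        const cropPath = 'car_' + i + '.png';  // hypothetical temporary file
        await sharp(imagePath)
            .extract({
                left, top,
                width: Math.round(Math.max(...xs)) - left,
                height: Math.round(Math.max(...ys)) - top
            })
            .toFile(cropPath);

        // Pass 2: run object localization again on the crop; the plate is
        // now prominent enough to be detected.
        const [cropResult] = await client.objectLocalization(cropPath);
        const plates = cropResult.localizedObjectAnnotations
            .filter(o => o.name === 'License plate');
        console.log('car ' + i + ':', plates.map(p => p.score));
    }
}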
I am displaying a point and a polygon in Cesium.js and using turf.js to check whether the point is inside the polygon.
When displayed in Cesium (or geojson.io), the point is clearly outside the polygon, as can be seen here:
http://geojson.io/#id=gist:kujosHeist/1030e392bd751daf5d9af57aa412a49c&map=3/46.80/-22.76
However, when I asked about this on the turf.js issues page:
https://github.com/Turfjs/turf-inside/issues/15
I was told it was because geojson.io (and therefore Cesium) is "misrepresenting the point since it uses World Web Mercator projection (EPSG:3857)" and that "viewing the same point/polygon using WGS84 with QGIS" shows the point is inside the polygon.
So my question is: how can I change the map projection used in Cesium (and also in geojson.io, if possible), so that the point is correctly displayed inside the polygon?
I am not sure how well geojson.io or Cesium will handle different coordinate systems, but you can specify a CRS element in your GeoJSON that indicates the coordinate system used by points of your features. This is added as a member under your feature collection. For example:
{
  "type": "FeatureCollection",
  "crs": {
    "type": "name",
    "properties": {
      "name": "urn:ogc:def:crs:OGC:1.3:CRS84"
    }
  },
  ...the rest of your structure...
}
But like I said, it's up to the map display software to pay attention to your specified CRS and use it to project coords. If it doesn't support the coordinate system you have, then you'll need to pick some other map display software or convert the coords to a supported coordinate system.
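If you do end up converting the coordinates yourself, a small library such as proj4js can do the reprojection. A minimal sketch (the coordinate values here are placeholders):

const proj4 = require('proj4');

// Reproject a Web Mercator (EPSG:3857) coordinate to lon/lat (WGS84/CRS84),
// which is what GeoJSON consumers like geojson.io and Cesium expect.
const mercator = [-2537234.5, 5961225.8];           // placeholder x/y in meters
const lonLat = proj4('EPSG:3857', 'EPSG:4326', mercator);
console.log(lonLat);                                // [longitude, latitude] in degrees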
I'm brand new to OpenLayers; I've just been reading the docs today since I need to do some testing with it. I'd like to know whether there's a way in OpenLayers to restrict the maximum geographic extent of a WMS tile request.
I've read that WMS bounding boxes are generated automagically, which is neat, but I have some issues with WMS requests on very large datasets, which tend to make the underlying WMS server struggle. That's not really changeable, so we need to work around it, and one strategy is to request only small subsets at a time (~5 or 10 degree squares at most).
So: is there a way in OpenLayers to say "get me at most 5 degrees by 5 degrees in a single WMS request, and build my map layer from those"?
Thanks in advance
Yes, you can set an extent for your layer like this:
new ol.layer.Tile({
    extent: ol.proj.transformExtent([30, 30, 50, 50], "EPSG:4326", "EPSG:3857"),
    // your WMS source and other code
});
Please make sure to use the right projection; the example above assumes your map is in Web Mercator.
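For completeness, here is a fuller sketch of how that could look with a tiled WMS source (the URL and layer name are placeholders); the extent keeps requests inside the given box, and tiling already splits the layer into many small WMS requests:

var wmsLayer = new ol.layer.Tile({
    // Only request tiles inside this extent (given in lon/lat and
    // transformed to the view projection).
    extent: ol.proj.transformExtent([30, 30, 50, 50], "EPSG:4326", "EPSG:3857"),
    source: new ol.source.TileWMS({
        url: 'https://example.com/wms',     // placeholder WMS endpoint
        params: { 'LAYERS': 'my_layer' }    // placeholder layer name
    })
});
map.addLayer(wmsLayer);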
I've recently taken the plunge into DirectX and have been messing around a little with Anim8or, and I have discovered several text-based file types that models can be exported to. I've particularly taken to VTX files. I've learned how to parse some basics out of them, but I'm obviously missing a few things.
It starts with a .Faceset, which is immediately followed (on the same line) by the number of meshes in the file.
For each mesh, there is one .Vertex section and one .Index section, in that order; the first pair of .Vertex/.Index sections is the first mesh, the second pair is the second mesh, and so on, as you'd expect.
In a .Vertex section of the file, there are 8 numbers per line and an undefined number of lines (unless you want to trust the comments Anim8or puts just before the section, but those don't seem to be part of the file's spec, just Anim8or being kind). The first 3 numbers correspond to the X, Y, and Z coordinates of a point that will later be used as a vertex; the other 5 I have no idea about. A majority of the time the last 2 numbers are both 0, but I've noticed that's not ALWAYS true, just usually.
Next comes the matching .Index section. This section has 4 numbers per line. The first 3 are references to the vertices previously listed, and the 3 points mark a triangle in the model: 0 means the first mentioned vertex, 1 the next one, and so on, like a zero-based array. The 4th number appears to always be -1; I can't figure out what importance it has, and I can't promise it's ALWAYS -1. In case you can't tell, I'm not too certain about anything in this file type.
There's also other information in the file that I'm choosing to ignore right now because I'm new and don't want to overcomplicate things too much. For example, after every .Index section comes:
.Brdf
// Ambient color
0.431 0.431 0.431
// Diffuse color
0.431 0.431 0.431
// Specular color and exponent
1 1 1 2
// Kspecular = 0.5
// end of .Brdf
It appears to me this describes the surface of the mesh just defined, but it's not needed for placing the meshes, so I've skipped past it for now.
Moving on to the real problem... I can load a VTX file when there's only one mesh in it (meaning the .FaceSet is 1). I can almost successfully load a VTX file that has multiple meshes: each mesh is correctly structured, but not properly placed in relation to the other meshes. I downloaded an AT-AT model from an Anim8or forum thread; it's made up of 344 meshes. When I load the file using just the specs I've mentioned so far, it looks like the AT-AT has been exploded outwards, as if it were a diagram of how to assemble it (when loaded in Anim8or, all pieces are close together and resemble a fully assembled AT-AT). All the pieces are oriented correctly and have the same up direction, but there's plenty of extra space between them.
Does somebody know how to properly read a VTX file? Or know of a website that'll explain what those other numbers mean?
Edit:
The file extension .VTX is used for a lot of different things and has a lot of different structures depending on the expected use. Valve, Visio, Anim8or, and several others use VTX; I'm only interested in the VTX file that Anim8or exports and the structure it uses.
I have been working on a 3D modeling program myself and wanted a simple format for bringing objects into the editor, so I could test the speed of my drawing routines with large sets of vertices and faces. I was looking for an easy format where I could get models quickly and found the .vtx format. I googled it and found your question. When I was unable to find the format documented on the internet, I played around and compared .OBJ exports with .vtx ones. (Maybe it was created just for Anim8or?) Here is what I found:
1) Yes, the vertices have eight numbers on each line. The first three are, as you guessed, the x, y, and z coordinates. The next three are the vertex normals, nx, ny, and nz. You may notice that each vertex appears multiple times with different normals for each face that contains it. The last two numbers are texture coordinates.
2) As for the faces, I reached the same conclusions as you did. The first three numbers are indices into the vertex list above. The last number does appear to always be -1. I am going to assume that it has something to do with the facing of the face. (e.g. facing in or out.) Since most models are created with the faces all facing appropriately, it stands to reason that this would be the same number for all of them.
3) One additional note: When comparing the .obj with the .vtx, I did notice that the positions of the vertices changed. This was also true when comparing with the .an8 file. This should not be a "HUGE" problem as long as they are all offset by the same amount in each vertex and every file. At least then it could be compensated for.
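Putting observations 1) and 2) together, a minimal parser sketch for the two section types could look like this (untested, and it assumes only the informal layout described above; JavaScript purely for illustration):

// Parse the .Vertex and .Index lines of one mesh in an Anim8or .vtx export.
// Assumed layout: 8 numbers per vertex line (x, y, z, nx, ny, nz, u, v)
// and 4 numbers per index line (i0, i1, i2, -1).
function parseVtxMesh(vertexLines, indexLines) {
    const vertices = vertexLines.map(line => {
        const [x, y, z, nx, ny, nz, u, v] = line.trim().split(/\s+/).map(Number);
        return { position: [x, y, z], normal: [nx, ny, nz], uv: [u, v] };
    });
    const triangles = indexLines.map(line => {
        const nums = line.trim().split(/\s+/).map(Number);
        return nums.slice(0, 3);  // drop the trailing -1, whose purpose is unknown
    });
    return { vertices, triangles };
}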
Have you considered using the .obj file format? It is text-based and is not extremely difficult to parse or understand. There is quite a bit of information about it online.
I am going to add that, after a few hours of inspection, the .vtx export in Anim8or seems to be broken. I experienced the same problem you did: the pieces were not located properly. My assumption would be that Anim8or exports these objects using the local coordinates of each mesh, without accounting for the transformations that have been applied. I also note that it will not IMPORT the .vtx file...
Based on some googling, it seems you're at the wrong end of the pipeline. As I understand it, a VTX file is a Valve proprietary file format that is the result of a series of steps:
The final output of Studiomdl for each Half-Life model is a group of files in the gamedirectory/models folder, ready to be used by the Game Engine:
an .MDL file which defines the structure of the model along with animation, bounding box, hit box, material, mesh and LOD information,
a .VVD file which stores position-independent flat data for the bone weights, normals, vertices, tangents and texture coordinates used by the MDL,
currently three separate types of VTX file: .sw.vtx (Software), .dx80.vtx (DirectX 8.0) and .dx90.vtx (DirectX 9.0), which store hardware-optimized material, skinning and triangle strip/fan information for each LOD of each mesh in the MDL,
often a .PHY file containing a rigid or jointed (ragdoll) collision model, and
sometimes a .ANI file (to do: something to do with model animations).
-- Valve
Now, the Valve Source SDK may have some utilities in it to read VTX files (it seems to have the ability to make them, anyway). Some people may have made third-party tools or have code to read them, but such tools are likely not to work on all files, just because it's a proprietary format. I also found this post, which might help if you haven't seen it before.