Track the created-on and updated-on dates of features in GeoServer layers - openlayers-3

In ArcGIS there is a capability for tracking the created and updated dates of features in layers.
Is it possible to handle the same thing for GeoServer layers through any settings and/or plugins?

Related

Difference between Cloud Data Fusion and Dataflow on GCP

What is the difference between the GCP pipeline services Cloud Dataflow and Cloud Data Fusion, and which should you use when?
I did a high-level pricing comparison, taking 10 instances with the Basic edition in Data Fusion
and a 10-instance cluster (n1-standard-8) in Dataflow.
The pricing is more than double for Data Fusion.
What are the pros and cons of each over the other?
Cloud Dataflow is purpose-built for highly parallelized graph processing, and can be used for both batch and stream processing. It is also built to be fully managed, removing the need to manage and understand underlying resource-scaling concepts, e.g. how to optimize shuffle performance or deal with key-imbalance issues. The user/developer is responsible for building the graph via code: creating N transforms and/or operations to achieve the desired goal. For example: read files from storage, process each line in a file, extract data from the line, cast the data to numeric, sum the data in groups of X, write the output to a data lake.
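For illustration, here is a minimal sketch of such a graph as an Apache Beam pipeline in Python (the bucket paths, delimiter, and column positions are assumptions, not from the answer above):

    # Minimal Apache Beam sketch of the example graph described above.
    # Bucket paths, delimiter, and column positions are illustrative assumptions.
    import apache_beam as beam

    def parse_line(line):
        # Extract a group key and a numeric value from a CSV-style line.
        fields = line.split(",")
        return fields[0], float(fields[1])

    with beam.Pipeline() as pipeline:
        (pipeline
         | "ReadFiles" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
         | "ParseLines" >> beam.Map(parse_line)
         | "SumPerGroup" >> beam.CombinePerKey(sum)
         | "FormatOutput" >> beam.Map(lambda kv: "{},{}".format(kv[0], kv[1]))
         | "WriteToLake" >> beam.io.WriteToText("gs://my-bucket/output/sums"))

The same pipeline runs locally with the DirectRunner or on Dataflow by passing DataflowRunner in the pipeline options.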
Cloud Data Fusion is focused on enabling data-integration scenarios: reading from sources (via an extensible set of connectors) and writing to targets, e.g. BigQuery, storage, etc. It does have parallelization concepts, but they are not fully managed the way they are in Cloud Dataflow. CDF rides on top of Cloud Dataproc, which is a managed service for Hadoop-based processing. Its sweet spot is visual graph development leveraging an extensible set of connectors and operators.
Your question is based on "cost" concepts. My advice is to take a step back and define what your processing/graph goals look like, then look at each product's value. If you want full control over processing semantics, with a greater focus on analytics, and need to run in batch and/or must have streaming, focus on Dataflow. If you want point-and-click data movement, with less need for data analytics, and do not need streaming, then look at CDF.

Data processing while using TensorFlow Serving (Docker/Kubernetes)

I am looking to host 5 deep learning models where data preprocessing/postprocessing is required.
It seems straightforward to host each model using TF Serving (and Kubernetes to manage the containers), but if that is the case, where should the data pre- and post-processing take place?
I'm not sure there's a single definitive answer to this question, but I've had good luck deploying models at scale by bundling the data pre- and post-processing code into fairly vanilla Go or Python (e.g., Flask) applications that are connected to my persistent storage for other operations.
For instance, to take the movie recommendation example, on the predict route it's pretty performant to pull the 100 films a user has watched from the database, dump them into a NumPy array of the appropriate size and encoding, dispatch to the TensorFlow serving container, and then do the minimal post-processing (like pulling the movie name, description, cast from a different part of the persistent storage layer) before returning.
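As a rough sketch of that pattern (the model name, host, and storage helpers below are assumptions, not part of the original setup):

    # Flask route that does pre/post-processing around a TensorFlow Serving
    # REST call. Model name, host, and the storage helpers are assumptions.
    import numpy as np
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)
    TF_SERVING_URL = "http://tf-serving:8501/v1/models/recommender:predict"

    def load_watched_movies(user_id):
        # Placeholder: would query persistent storage for the user's history.
        return [1, 2, 3]

    def lookup_movie_metadata(movie_id):
        # Placeholder: would pull name/description/cast from storage.
        return {"movie_id": movie_id}

    @app.route("/predict/<int:user_id>")
    def predict(user_id):
        # Pre-processing: fetch the user's watched films and encode them.
        watched = load_watched_movies(user_id)
        features = np.asarray(watched, dtype=np.int64).reshape(1, -1)

        # Dispatch to the TensorFlow Serving container.
        resp = requests.post(TF_SERVING_URL, json={"instances": features.tolist()})
        scores = resp.json()["predictions"][0]

        # Post-processing: attach metadata from persistent storage before returning.
        best = int(np.argmax(scores))
        return jsonify(lookup_movie_metadata(best))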
In addition to josephkibe's answer, you can:
Implement the processing in the model itself (see signatures for Keras models and input receivers for Estimators in the SavedModel guide); a sketch of this follows after the list.
Install Seldon Core. It is a whole framework for serving that handles building images and networking. It builds the service as a graph of pods with different APIs, one of which is transformers that pre-/post-process data.
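A minimal sketch of the first option, baking the pre- and post-processing into the SavedModel's serving signature (the model architecture, input shape, and scaling constants are assumptions):

    # Embed pre-processing in the serving signature so TF Serving receives
    # raw inputs. Model architecture and scaling constants are assumptions.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def serve_raw(raw_features):
        # Pre-processing happens inside the graph, not in the client.
        scaled = (raw_features - 0.5) / 0.25
        probs = model(scaled)
        # Post-processing: return the arg-max class alongside probabilities.
        return {"probabilities": probs, "class_id": tf.argmax(probs, axis=1)}

    tf.saved_model.save(model, "/tmp/model/1",
                        signatures={"serving_default": serve_raw})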

Can Google Cloud Vision API be trained using your image data?

IBM Watson has a capability where you can train classifiers on Watson using your own images, but I am unable to find a similar capability in the Google Cloud Vision API. What I want is to upload 10-15 classes of images and, on the basis of the uploaded images, classify any images loaded after that. IBM Bluemix (Watson) has this capability, but their pricing is significantly higher than Google's. I am open to other services as well, if their prices are below Google's.
As far as I know, the Google Cloud Vision API cannot be trained with your custom data. However, there is a service called vize.ai, where you can define your custom classes and upload the images; the training is free and the prices for API usage are below Google's and IBM's.
Disclaimer: I'm vize.it co-founder
Edit: Link changed
You can train your own models using Cloud AutoML Vision. There are 2 different ways to do this:
Cloud-hosted models.
Edge exportable models.
With some work you can train a model for free using TensorFlow - see the model training section.
However, they have released an already trained model, so if you're lucky and what you want to classify already overlaps with their model, then no training is needed.
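If you do train your own, one possible sketch is transfer learning on a pretrained backbone with Keras (the directory layout, image size, and training settings here are assumptions):

    # Sketch: small custom image classifier via transfer learning.
    # Data directory, image size, and epoch count are assumptions.
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=(224, 224), batch_size=32)
    num_classes = len(train_ds.class_names)

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # reuse the pretrained features

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale to [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)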
Azure has started offering this now; search for "Azure Custom Vision". It is still a preview service, but the accuracy is good, at least for our workload, which is images of preschool children.

How can I show a satellite image as a layer? In which format?

How can I show a satellite image as a layer? In which format should this image be?
Hi there!
I have satellite images such as RapidEye.
I created a local OpenStreetMap layer like this example.
Now I want to add multiple layers such as RGB, NDVI, and NDWI.
Any tips would be appreciated.
Thank you
Use a WMS server to publish your image as a layer. This can be done using GeoServer or MapServer, for example.
The format of the image depends on the WMS server you are going to use. GeoServer accepts a large number of different formats; details may be found here.
Here is also a description of how to publish a georeferenced image on GeoServer.
I am not so familiar with other WMS servers, but the process should be more or less the same.
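As a rough illustration, publishing a GeoTIFF to GeoServer can also be scripted against its REST API (host, workspace, store name, and credentials below are assumptions):

    # Upload a GeoTIFF (e.g. a RapidEye scene) to GeoServer as a coverage
    # store via the REST API; GeoServer then serves it as a WMS layer.
    # Host, workspace, store name, and credentials are assumptions.
    import requests

    geoserver = "http://localhost:8080/geoserver"
    workspace, store = "satellite", "rapideye_rgb"

    with open("rapideye_rgb.tif", "rb") as f:
        resp = requests.put(
            "{}/rest/workspaces/{}/coveragestores/{}/file.geotiff".format(
                geoserver, workspace, store),
            data=f,
            headers={"Content-type": "image/tiff"},
            auth=("admin", "geoserver"),  # default credentials; change them
        )
    resp.raise_for_status()
    # The new layer is then available via WMS, e.g. "satellite:rapideye_rgb".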

Adding vector map data to an iOS GPS app. Real-time vector graphics rendering

We are working on a project to add vector map data from OSM and NAVTEQ to an iOS GPS app.
Currently, the app displays raster map images and provides moving-map navigation features. We now want to take it a step further by integrating vector maps, but don't know where to start.
Guidance from developers with experience in GPS navigation would be great.
Here is the brief on the requirements:
Target Devices:
iOS. C++ is preferred for the core for future compatibility with other platforms.
Data integration and packaging:
Map data source:
- NAVTEQ
- OpenStreetMap
File format:
- Ideal for mobile devices with considerations of device limitations.
- Either find an already established format, or create one in house.
Compiling:
- Determine a format for source data (Shp, MapInfo etc)
- Compile source format to required format.
Map rendering engine:
Display of maps:
- Vector map view will be separate from the current raster map view.
- Render data into lines, points, polygons etc in real time. Tiled or pre-rendered is not acceptable.
- 2D birdseye view. (3D is planned for future versions).
- Shade relief to illustrate elevation.
- Display user generated data such as routes, tracklogs, waypoints.
- A scale, e.g. 500 metres.
- Speedy performance is essential to provide better user experience.
- A good example would be the TomTom iOS app.
Map Interactions:
- Pan, Zoom, rotate.
- Make use of multitouch functionality.
Search
- Address, locations, POI (Geo Coding)
- Address from location (Reverse Geo Coding)
Style sheets
- Easily customise the look of the map being displayed.
- Every element can be customised.
We would like to find out where to start our research. What libraries and SDKs are out there that are worth spending the time investigating?
Some notes based on my experience:
Source data format: you'll probably want to be able to import data from ESRI shapefiles and OpenStreetMap (which comes as XML or a more compact but equivalent binary format). NAVTEQ data can be obtained as ESRI shapefiles. Shaded relief can be obtained by processing USGS height data (http://dds.cr.usgs.gov/srtm/).
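For example, a quick way to inspect ESRI shapefile data before converting it to your own format is to iterate over its features (a sketch using the Fiona library; the file name and attribute key are assumptions):

    # Read features from an ESRI shapefile (e.g. exported NAVTEQ data).
    # The file name and attribute key are assumptions.
    import fiona

    with fiona.open("roads.shp") as source:
        print(source.crs)  # coordinate reference system of the data
        for feature in source:
            geometry = feature["geometry"]        # e.g. LineString coordinates
            name = feature["properties"].get("NAME")
            # ...convert to your compiled map format here...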
2D versus 3D: the step from one to the other is a big one. 2D data is almost invariably provided as latitude and longitude and projected to a plane: Google Maps and OpenStreetMap use a very simple but much derided spherical Mercator projection. Moving to 3D requires a decision on the coordinate system - projected plane plus height versus true 3D based on the shape of the earth - and possibly problems involving level of detail. A good way to proceed might be to draw the shape of the earth (hills and valleys) as a triangle mesh, then drape the rest of the map on it as a texture. You might want to consider "two and a half D" - using a perspective transformation to display the map as if viewing it from a height.
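For reference, the spherical ("web") Mercator forward projection mentioned above fits in a couple of lines (a sketch; the radius is the usual EPSG:3857 value):

    # Spherical (web) Mercator projection used by Google Maps / OpenStreetMap:
    # longitude/latitude in degrees -> planar x/y in metres.
    import math

    EARTH_RADIUS = 6378137.0  # WGS84 semi-major axis, metres

    def to_web_mercator(lon_deg, lat_deg):
        x = EARTH_RADIUS * math.radians(lon_deg)
        y = EARTH_RADIUS * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
        return x, y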
Libraries: there's quite a big list of map rendering libraries here, both commercial and non-commercial (disclosure: mine is one of them). Many of these libraries have style sheet systems for customising the map look and feel.
A very good open-source rendering library (not mine) is Mapnik, but I am not sure whether that will port very easily on to iOS. However, it's a very good idea to read up on how Mapnik and other rendering libraries do their work, to get a feel for the problem. The OpenStreetMap wiki is a good portal for learning more about the field.
Text rendering on maps is nearly always done using FreeType, an open-source rasterizer library with an unrestrictive license.
Try out MapBox library: http://mapbox.com/
There is a list on the OSM Wiki but it is sadly not complete.
Two vector libraries that I know of are CartoType (which you can see in use in the newer Lonely Planet guides) and Skobbler - Skobbler don't have an off-the-shelf product, but I believe they will integrate their vector maps and routing for you.
There is also a related question on the OSM StackExchange
