Tools for mapping time series data

I'm looking for suggestions/examples of tools or APIs that enable the mapping of large amounts of time series data into an intensity map.
The data includes dimensions for country, series, and year. Here's an example http://spreadsheets.google.com/ccc?key=t9ZwziZAgy768ZTXDEg8Maw&authkey=CPn0pdoH&hl=en_GB&ui=1

UUorld is a good choice if you want to create videos that show data changing over time. They're heavy on the 3-D, but I found some examples of what appear to be 2-D intensity maps in the gallery. The trial version is free and does not expire.
For static images, I love indiemapper. It's very simple to use and has beautiful color palettes and typography options. It also has 16 different map projections, if you're into that. The free trial is 30 days.
The caveat with these (and other mapping software) is that you may have to convert your data into a certain format, depending on what it is now. For example, indiemapper takes shapefiles, KML, and GPX as input.

Try GeoCommons. You would need to reformat your spreadsheet a bit, but once you get it in there you can join it to country boundaries, create an interactive temporal map, and embed it wherever you want. Everything is web based, so there is no need to download anything.
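The reformatting step is small if you are comfortable with a script. Here is a minimal sketch with pandas, assuming the spreadsheet is exported to CSV with a country column, a series column, and one column per year; the file and column names are placeholders, not anything GeoCommons requires:

    import pandas as pd

    # Hypothetical export of the spreadsheet: one row per (country, series),
    # with one column per year, e.g. country, series, 1990, 1991, ...
    wide = pd.read_csv("indicators_wide.csv")

    # Unpivot the year columns into a long table: one row per country/series/year,
    # which is the shape most temporal mapping tools expect.
    long = wide.melt(id_vars=["country", "series"], var_name="year", value_name="value")
    long["year"] = long["year"].astype(int)

    long.to_csv("indicators_long.csv", index=False)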

Related

Does Highcharts (or Vaadin) have built-in abilities to use an algorithm like Ramer–Douglas–Peucker?

Using Vaadin's charts (which ultimately use Highcharts), I'm trying to plot a line graph with over 10,000 points. It actually works reasonably quickly (a couple of seconds to plot). However, I wonder if it can be much faster: I came across a similar problem when using JavaFX charts and discovered that people have implemented a solution using the Ramer–Douglas–Peucker algorithm to reduce the number of data points in such a way that the reduction is barely noticeable to the human eye when the data is graphed. (Here's the original answer from SO: Performance issue with JavaFX LineChart with 65000 data points.)
So, does Highcharts already have such functionality built in? If not, does Vaadin? Or do I need to recreate this logic in Vaadin, which means I ultimately need to find a Java library for the Ramer–Douglas–Peucker algorithm?
Unfortunately, Highcharts doesn't have the Ramer–Douglas–Peucker algorithm built in. However, it has a boost module which allows rendering thousands of points in milliseconds.
The Boost module allows certain series types to be rendered by WebGL instead of the default SVG. This allows hundreds of thousands of data points to be rendered in milliseconds. In addition to the WebGL rendering, it saves time by skipping the processing and inspection of the data wherever possible.
API reference:
https://api.highcharts.com/highcharts/boost
Demo:
https://www.highcharts.com/demo/line-boost
What's more, with Highstock you can use dataGrouping.
Data grouping is the concept of sampling the data values into larger blocks in order to ease readability and increase performance of the JavaScript charts.
API reference:
https://api.highcharts.com/highstock/series.line.dataGrouping
Demo:
https://www.highcharts.com/stock/demo/data-grouping
With most of the chart types in Vaadin Charts it actually makes more sense to apply an algorithm that reduces the number of data points in your server-side logic, e.g. when copying data from the original data source into the DataSeries you use in the chart. This not only reduces rendering time, but also the time needed to load the data into the browser.
I would also recommend calculating a linear regression as an additional two-point DataSeries on the server side, if such a thing is needed.
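The Ramer–Douglas–Peucker reduction itself is short enough to implement directly if no ready-made Java library turns up. A minimal sketch of the algorithm for the server-side downsampling described above (shown here in Python for brevity; the same recursion ports straightforwardly to Java):

    import math

    def perpendicular_distance(point, start, end):
        """Distance from `point` to the line through `start` and `end`."""
        (x, y), (x1, y1), (x2, y2) = point, start, end
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:          # degenerate segment
            return math.hypot(x - x1, y - y1)
        # Twice the triangle area divided by the base length gives the height
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

    def rdp(points, epsilon):
        """Keep only points that deviate from the simplified line by more than epsilon."""
        if len(points) < 3:
            return points
        # Find the point farthest from the segment joining the two endpoints
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = perpendicular_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax > epsilon:
            # Recurse on both halves and join them, dropping the duplicated split point
            left = rdp(points[:index + 1], epsilon)
            right = rdp(points[index:], epsilon)
            return left[:-1] + right
        # Every intermediate point is within epsilon: keep only the endpoints
        return [points[0], points[-1]]

The epsilon tolerance is a perpendicular distance in the same coordinate space as the points, so for time-series data it usually makes sense to normalize x and y before reducing.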

What is the way to parse a string of a well known format from an image on iOS (some library created specifically for this purpose)?

Local travel cards in Saint Petersburg, Russia have huge ID numbers that aren't easy to read and type into a web page when topping up the card online. So I want to build a small app that takes a photo of a travel card and parses the number out.
The task is a bit easier than free-form recognition:
the card is a well-known size
the ID numbers are a known size, are located in a well-known position on the card, and are digits only, no letters (okay, there are two variations I think, and maybe they will add one or two more in the future)
even the font is known in advance
even the first several digits are the same for most cards (so far only two prefixes are used)
How would you do it? Are there any libraries tuned not for general OCR, but for a "hinted" OCR like I need?
P.S. Actually, a free/cheap web service for this task would also be good enough.
Yes, Google maintains an OCR library called Tesseract, and there is an iOS SDK on GitHub you can import into your application. The SDK comes with documentation explaining how to set it up in your app, and it has methods that return a string with the text recognized from the card. But it will be ALL of the text on the card, so the best approach is to:
1 "clip" the original image to extract a sub image that displays only the portion of the card you wish to get the numbers from.
2 Process this sub image through Tesseract to retrieve the string you are looking for.
3 Then parse through the string and pick out the data that you need.
But just be warned, it can be a bit quirky. The SDK tends to recognize words best from images that are scanned rather than photographed; although it is an advanced piece of technology, it isn't perfect. So, to get it to work as well as possible, try to use scanned copies of the originals.
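The question is about iOS, but the crop, digit-restricted OCR, and regex steps above are the same on any platform. A minimal sketch using the Python Tesseract wrapper, purely to illustrate the pipeline; the crop box fractions and the ID pattern are hypothetical placeholders you would measure from a real card:

    import re
    from PIL import Image          # Pillow
    import pytesseract             # Python wrapper around the Tesseract engine

    def read_card_number(path):
        """Crop the known ID region of the card photo, OCR it, and pull out the digits."""
        card = Image.open(path)
        # Hypothetical crop box: the ID is printed at a fixed position on the card,
        # so these fractions would be measured once from a reference card.
        w, h = card.size
        id_region = card.crop((int(0.10 * w), int(0.75 * h), int(0.90 * w), int(0.90 * h)))
        # Restrict Tesseract to digits, since the ID contains no letters
        text = pytesseract.image_to_string(
            id_region, config="--psm 7 -c tessedit_char_whitelist=0123456789")
        # Assume the card number is a long run of digits, optionally grouped by spaces
        match = re.search(r"\d[\d ]{8,}\d", text.replace("\n", " "))
        return match.group(0).replace(" ", "") if match else None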
Best of luck.
The ideal solution for you would have three components:
1) Detection of the card. This is useful because, with detection in place, end users have a much easier time actually using the scanner: they can hold the phone above the card in an arbitrary orientation.
2) An accurate OCR component, ideally customizable for the exact font on the card and the exact position on the card.
3) A parsing mechanism. This would enable you to obtain the exact string written on the card without writing a huge amount of OCR post-processing code.
BlinkID SDK has all of this. It has a preset for detecting cards in the ID-1 format, it has an integrated OCR engine, and it provides a RegexParser with which you can define the exact format of the text you're trying to extract from the document.
BlinkID was initially built for scanning ID documents which have very similar properties as the problem you're trying to solve.
Note: I'm one of the developers working on BlinkID.

Bloomberg real-time data with lot sizes

I am trying to download real-time trading data from Bloomberg using the api.
So far I can get bid/ask/last prices successfully, but on some exchanges (like Canada) quote sizes are given in lots.
I can of course query the lot sizes with the reference data API and store them for every security in a database or something like that, but converting the size on every quote tick is a very "expensive" conversion, since ticks arrive every second or even more often.
So is there any other way to achieve this?
Why do you need to multiply each value by the lot size? As long as the lot size is constant, each quote is comparable and any computation can be implemented using the exchange values; any results can be scaled in a presentation layer if necessary.
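A minimal sketch of that idea: look each security's lot size up once (via a one-off reference data request), keep the tick stream in raw exchange units, and multiply only when a number is shown to a user. The security name, functions, and storage here are hypothetical placeholders, not Bloomberg API calls:

    # Lot sizes change rarely, so fetch them once per security with a reference
    # data request and cache them; the streaming handlers never convert.
    lot_sizes = {"XIU CN Equity": 100}   # hypothetical values filled from reference data

    ticks = []   # stand-in for whatever storage/analytics consumes the stream

    def on_quote(security, bid_size_lots, ask_size_lots):
        # Hot path: store/compute with the raw exchange values, no per-tick scaling.
        ticks.append((security, bid_size_lots, ask_size_lots))

    def format_quote(security, bid_size_lots, ask_size_lots):
        # Presentation layer: scale to shares only when a human looks at the number.
        lot = lot_sizes.get(security, 1)
        return f"{security}: bid size {bid_size_lots * lot}  ask size {ask_size_lots * lot}"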

How do I combine GPS track data with another time-coded dataset?

I have GPS track data from a logging device, in GPX format (or any other format, easily done with gpsbabel). I also have non-GPS data from another measurement device over the same time period. However, the measurement intervals of both devices are not synced.
Is there any software available that can combine the measurement data with the GPS data, so I can plot the measured values in a spatial context?
This would require matching of measurement times and interpolation of GPS trackpoints, combining the results in a new track file.
I could start scripting all of this, but if there are existing tools that can do this, I'd be happy to know about them. I was thinking that GPSBabel might be able to do this, but I haven't found how.
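For reference, the matching-and-interpolation step described above is only a few lines of scripting once both logs are exported to CSV (gpsbabel can do that for the GPX side). A minimal sketch, assuming hypothetical files track.csv (time, lat, lon) and measurements.csv (time, value):

    import pandas as pd
    import numpy as np

    # Assumed inputs: track.csv exported from the GPX file with gpsbabel,
    # and measurements.csv from the other device.
    track = pd.read_csv("track.csv", parse_dates=["time"]).sort_values("time")
    meas = pd.read_csv("measurements.csv", parse_dates=["time"]).sort_values("time")

    # Interpolate the track position at each measurement timestamp
    t_track = track["time"].astype("int64")   # epoch nanoseconds, monotonically increasing
    t_meas = meas["time"].astype("int64")
    meas["lat"] = np.interp(t_meas, t_track, track["lat"])
    meas["lon"] = np.interp(t_meas, t_track, track["lon"])

    # Each measurement now carries an interpolated position; write it out for GIS use
    meas.to_csv("measurements_geocoded.csv", index=False)

The result has one interpolated position per measurement, which any GIS package can then plot; np.interp assumes the track timestamps are sorted, hence the sort_values calls.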
A simple Excel macro would do the job.
In desktop GIS software you could import the two data types in their appropriate formats (which you haven't specified), whether they are shapefiles or simply tables. Then a table join can be undertaken based on attributes. If you select the measurement times as the join fields, a table will be created in which rows whose measurement-time values are shared by both datasets are appended to one another.

Good examples of MapServer / OpenLayers

I want to convince some clients to use MapServer and OpenLayers. Can anyone suggest attractive websites that show off the possibilities?
The clients will be impressed by:
A density map (otherwise known as a heat map, colour-shaded grid coverage, contour plot...).
The ability for the user to download the underlying data for the density map, restricted to the area being viewed, in some format such as netCDF.
Standard OpenLayers stuff. Zooming, panning, scale bar, overview map...
Different base layers. Could be WMS, Google, Bing...
Searching for a placename, map is panned to display the place.
Exposing the heatmap data for other people to use in mashups as WMS or WCS.
MapServer.org is back up but demo.mapserver.org seems to be down right now :( But from memory their examples didn't have the "wow" factor. The OpenLayers examples demonstrate only one or two features per example - I want something to wow the clients by showing all the capabilities in one example.
PS If you have good examples that use some other open source tools, post them by all means. But just JavaScript please: customer says no rich client.
EDIT: Come on, Stack Overflow, someone must have an example that uses a density map?? I'm even offering a bounty now...
Note: this answer is no longer relevant. The open source maps have since been replaced with a commercial alternative by a different company.
http://maps.seai.ie/wind/ - mapping onshore and offshore wind speeds and farms in Ireland
http://maps.seai.ie/geothermal/ - mapping geothermal temperatures in Ireland, and borehole data
uses WMS services (and TileCache) for all the layers, so it can be accessed by other GIS clients (well, once I've set up metadata etc.)
has a variety of different base maps to choose from
built using MapFish / ExtJS
has drop down gazetteers for County and Townland (an Irish administrative unit)
all the basic map navigation tools and a simple info tool
right click on a layer to set transparency
uses MapServer opensource back-end, plus SQL Server 2008
The systems (and a third more complex Bioenergy Intranet system) got a mention here: http://www.geoconnexion.com/uploads/renewableenergy_intv9i4.pdf
http://haiticrisismap.org/ - OpenLayers + GeoExt
Would it be possible to create a template map for the client with a bunch of data on it (census, socio-economic) and create some simple fake buffers?
Maybe have a look at the HeatMapAPI for Google Maps (not sure you'll wow the client with that though).
Another density map: http://maps.glassfish.org/server/ (showing the use of GlassFish around the world).
We're using the OpenLayers Heatmap layer, mostly because (for us) it handles large data volumes better than the Google Map version (your mileage may vary)
http://www.patrick-wied.at/static/heatmapjs/demo/maps_heatmap_layer/openlayers.php
By large data volumes, I mean location datasets with 100K+ rows
It also works nicely as an ASPX page with dynamic realtime data retrieval from an SQL Server database. I've used a stored procedure to pre-process the data into the array format, grouped by Latitude & Longitude.
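If the pre-aggregation is done outside the database, the grouping step is only a few lines of script. A rough sketch, assuming a hypothetical points.csv export with lat/lon columns; the rounding precision and the output field names are placeholders to be matched to whatever the heatmap layer actually expects:

    import json
    import pandas as pd

    # 100K+ raw points: aggregate them into weighted cells before sending to the browser.
    points = pd.read_csv("points.csv")                 # columns: lat, lon
    points["lat_bin"] = points["lat"].round(3)         # roughly 100 m cells; tune to taste
    points["lon_bin"] = points["lon"].round(3)

    cells = (points.groupby(["lat_bin", "lon_bin"]).size()
                   .reset_index(name="weight"))

    # Emit the kind of weighted-point array a heatmap layer consumes
    payload = {"max": int(cells["weight"].max()),
               "data": [{"lat": float(r.lat_bin), "lon": float(r.lon_bin), "count": int(r.weight)}
                        for r in cells.itertuples()]}
    with open("heat_points.json", "w") as f:
        json.dump(payload, f)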
For those that need a translation table to convert their UK Postcodes into Latitude & Longitude, here's a good source:
http://www.doogal.co.uk/UKPostcodes.php
The OneGeology Portal (http://portal.onegeology.org/OnegeologyGlobal/) has been online for about 10 years, currently running OpenLayers 2, with an OpenLayers 3 version in development.
The portal attempts to create a geological map of the world by pulling together disparate OGC services provided by data suppliers (mostly Geological Surveys) from across the globe. The portal provides access to data from WMS, WFS (simple and complex feature), and WCS. The portal uses CSW to help manage which functionality is available to a user, and provides the ability to style WMS layers through the application of custom SLD. Map contexts can be saved, shared and loaded using WMC.
There is a gazetteer to help you zoom to a location of choice, the ability to change projections and scales, and the ability to create a KML file so the service can be used in Google Earth. Transparency can be changed on all layers.
There are currently 353 layers.
When the OneGeology project started, all documentation was geared to the support of services provided by MapServer, and many of the services in the portal are MapServer services. However, because the portal utilises open standards, any software that can provide services to those standards can be included.
This is an example of a classified grid generated in MapServer and displayed by OpenLayers: https://maps.greenwoodmap.com/sublette/mapserver/map#zcr=1/2690000/1170000/0&lyrs=slopesZ,townlim,ownership,roads. The raw, unclassified slope data can also be queried by map click.
