Is there a way to reorder points in GeoJSON so that my line "sticks" to the road? Right now I sort based on longitude, but "S"-shaped curves put some points out of GPS sequence even though they are in sort order (hence the zig-zag).
How would I go about reordering my points correctly? Currently I'm using turf for other stuff, but another library would also be fine.
Where did these points come from? If they were ordered either chronologically or in reverse chronological order, then perhaps that order was fine to begin with. Perhaps there is additional metadata that can help order your points with ease.
If not, the only thing I can think of is to employ some sort of nearest neighbor sorting: https://en.wikipedia.org/wiki/Nearest-neighbor_chain_algorithm
This page: https://github.com/pastelsky/nnc seems to be the source of the animation seen on Wikipedia and relies on JavaScript code, so perhaps you can make use of the underlying library?
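For what it's worth, here is a minimal sketch of a greedy nearest-neighbour ordering using turf, which you already have. It is only a heuristic: it runs in O(n²) and can still misorder points where the road doubles back on itself, and the function name and choice of starting point are my own assumptions, not part of any library:

const turf = require('@turf/turf');

// Greedy nearest-neighbour ordering: repeatedly append the unused point
// closest to the end of the chain. `points` is an array of GeoJSON Point features.
function orderByNearestNeighbour(points) {
    const remaining = points.slice();
    const ordered = [remaining.shift()];    // starting from the first point is arbitrary

    while (remaining.length > 0) {
        const last = ordered[ordered.length - 1];
        let bestIndex = 0;
        let bestDistance = Infinity;

        remaining.forEach((candidate, i) => {
            const d = turf.distance(last, candidate);   // great-circle distance in km
            if (d < bestDistance) {
                bestDistance = d;
                bestIndex = i;
            }
        });

        ordered.push(remaining.splice(bestIndex, 1)[0]);
    }

    return ordered;
}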
I'm evaluating Highcharts. It is a brilliant charting solution but I've hit a problem I just cannot work out.
I have a dataset where each point has x, y, and additional data in an array, for example:
[[1432022880000,6,['192.168.100.144','36215','192.168.100.191','5432','tcp']],
 [1432022880002,4,['192.168.100.144','36216','192.168.100.191','5432','tcp']],
...
I use a custom tooltip formatter to show the conversation details, which relies on the metadata in the array at point.config[2].
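Stripped down, the formatter looks roughly like this (the labels are simplified; the important part is the lookup into config[2], whose layout matches the data above):

tooltip: {
    formatter: function () {
        // config[2] holds [srcIP, srcPort, dstIP, dstPort, protocol]
        var conv = this.point.config[2];
        return 'src: ' + conv[0] + ':' + conv[1] +
               '<br/>dst: ' + conv[2] + ':' + conv[3] +
               '<br/>proto: ' + conv[4] +
               '<br/>value: ' + this.y;
    }
},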
With a fairly modest dataset size of about 300 points the tooltips won't function and I get the following in the console:
TypeError: 'undefined' is not an object (evaluating 'this.point.config[2][0]')
However, it works fine with a subset of the exact same data. Unless I've missed something there's nothing wrong with the larger dataset, but I'm completely stuck on what is happening. In the app's code I use setData to update the larger series, and although no errors are thrown to the console, the point config objects lack the array at [2]; with the smaller dataset it works fine.
Here's a fiddle for the smaller (subset) dataset, where the tooltip works:
http://jsfiddle.net/stevehicks/m37sdef5/14/
...and here's one for the "full" dataset where the problem exists:
http://jsfiddle.net/stevehicks/vhx66vgb/11/
Any help would be greatly appreciated as I've burnt over a day on this :(
A similar issue has been reported on the Highcharts repository:
Bad Performance with large Series (tooltip specifically)
Follow the issue and track the workaround provided on that page.
Kacper from the Highcharts support team has pointed out that it's an effect of data grouping:
The problem is that the large data set is grouped to fit in the window, and grouped points do not have custom properties.
Grouping can be disabled:
plotOptions: {
    series: {
        turboThreshold: 0,
        dataGrouping: {
            enabled: false
        }
    }
},
Example with current version of Highstock : http://jsfiddle.net/vhx66vgb/12/
For older versions this fix doesn't apply; no plot is rendered: http://jsfiddle.net/vhx66vgb/13/
Many thanks to Kacper!
Currently I'm checking every 10 seconds whether my services are accessible, via a netcat command:
nc -zvvw 10 ip port
A success is registered as a 1, otherwise a 0. The format is something like this:
date;server1;server2;serverN+1
But it seems like Highcharts shows some decimal values that I don't know where they come from, and the data (the 1s and the 0s) doesn't overlap at all, or at least not the two series I'm aware of.
Here is an example of what I'm trying to explain, in case my English isn't good enough:
http://jsbin.com/overlaping/3/
Can anybody help me, please?
Edit 1:
The data comes from a JSON source; it looks like this:
var data = {"titulos":["fecha","server1","server2","server3","server4","server5"],"detalles":[[1389495600000,1,1,1,1,1],[1389495600000,1,1,1,1,1]]}
But for an entire day, every ten seconds.
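For reference, one way to turn that structure into Highstock series is sketched below; this is a simplified version rather than my exact code, assuming column 0 of each detalles row is the timestamp and the remaining columns follow titulos:

// One series per server; titulos[0] ("fecha") is the timestamp column.
var series = data.titulos.slice(1).map(function (name, idx) {
    return {
        name: name,
        data: data.detalles.map(function (row) {
            return [row[0], row[idx + 1]];   // [timestamp in ms, 0 or 1]
        })
    };
});

var chart = new Highcharts.StockChart({
    chart: { renderTo: 'container' },
    series: series
});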
Edit 2:
Wergeld noticed that I have the same timestamp (in ms) for more than one row of data. That was because I was parsing the date without seconds:
SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm");
I added the seconds and it now gives me a different timestamp for every row (as I should expect), but the problem persists.
The data now looks like this:
{"titulos":["fecha","yelcho","villarica","coya","cunco","culenar"],"detalles":[[1389495606000,1,1,1,1,1],[1389495616000,1,1,1,1,1],[1389495627000,1,1,1,1,1],[1389495637000,1,1,1,1,1],[1389495647000,1,1,1,1,1],[1389495657000,1,1,1,1,1]]}
Edit 3:
It seems to be some kind of behaviour when there is a large amount of data and the chart tries to display it all. If I zoom in enough, it displays correctly. Is there a way to display it properly even at full zoom out?
Edit 4:
I ended up using this code to solve my problem. Thanks to Pawel Fus for his help.
plotOptions: {
    series: {
        dataGrouping: {
            enabled: true,
            approximation: 'open'
        }
    }
},
This is caused by dataGrouping; you can disable it.
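For example, in the chart configuration it can be switched off like this (only the dataGrouping part changes; the rest of the options stay as they are):

plotOptions: {
    series: {
        dataGrouping: {
            enabled: false
        }
    }
},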
However, it may be very slow to display 10,000 points (with markers) on a chart 1,000 px wide (10 points per pixel).
Reference.
How can I do something like this (using Highcharts, of course)?
http://img59.imageshack.us/img59/3439/temp1a.png
I have 2 (maybe better 3) axes with data and some scale that expresses the efficiency of some parameters.
Your best bet is to play around with the gauge type.
You can probably get a good deal of info on how to proceed from the VU meter example:
http://highcharts.com/demo/gauge-vu-meter
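As a rough starting point, a stripped-down gauge might look like the sketch below. It needs the highcharts-more module, and the axis range, plot bands and value are placeholders for your own efficiency scale rather than anything taken from the linked demo:

new Highcharts.Chart({
    chart: {
        renderTo: 'container',
        type: 'gauge'               // gauge charts require the highcharts-more module
    },
    title: { text: 'Efficiency' },
    pane: {
        startAngle: -90,
        endAngle: 90,
        background: null
    },
    yAxis: {
        min: 0,
        max: 100,
        // colour bands act as the scale for the efficiency value
        plotBands: [
            { from: 0,  to: 60,  color: '#DF5353' },   // poor
            { from: 60, to: 85,  color: '#DDDF0D' },   // acceptable
            { from: 85, to: 100, color: '#55BF3B' }    // good
        ]
    },
    series: [{
        name: 'Efficiency',
        data: [72]
    }]
});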
I am currently bringing large (tens of GB) data files into MATLAB using memmapfile. The file I'm reading is structured with several fields describing the data that follows. Here's an example of how my format might look:
m.format = { 'uint8' [1 1024] 'metadata'; ...
             'uint8' [1 500000] 'mydata' };
m.repeat = 10000;
So, I end up with a structure m where one sample of the data is addressed like this:
single_element = m.data(745).mydata(26);
I want to think of this data as a matrix of, from the example, 10,000 x 500,000. Indexing individual items in this way is not difficult though somewhat cumbersome. My real problem arises when I want to access e.g. the 4th column of every row. MATLAB will not allow the following:
single_column = m.data(:).mydata(4);
I could write a loop to slowly piece this whole thing into an actual matrix (I don't care about the metadata by the way), but for data this large it's hard to overemphasize how prohibitively slow that will be... not to mention the fact that it will double the memory required. Any ideas?
Simply map it to a matrix:
m.format = { 'uint8' [1024 500000] 'x' };
m.Data(1).x will be your data matrix.