OpenLayers 3: how to get the max feature id in PostGIS through GeoServer - openlayers-3

I want to use OpenLayers 3 to get the maximum feature id stored in PostGIS through GeoServer. Does someone have ideas? I have tried with CQL in OpenLayers 3, but the syntax is not right, and I cannot find a good example on the internet showing how OpenLayers 3 uses CQL to query something from GeoServer.
Does someone have some examples?

You don't need OpenLayers or even CQL to do that.
Just execute a GET request to your GeoServer as follows:
http://yourhost:port/geoserver/wfs?request=GetFeature&typeName=namespace:featuretype&propertyName=ID&version=1.0.0&sortBy=ID+D&maxFeatures=1
So let's look at the parameters:
&typeName=namespace:featuretype --> this is your layer name
&propertyName=ID --> these are the attributes you get back in the response; add more attributes separated by commas
&sortBy=ID+D --> this means sort the results by the ID field, and the +D means in descending order
&maxFeatures=1 --> return only one feature.
So to summarize: Mr. GeoServer, give me back only one feature, from layer "namespace:featuretype", ordered by ID, in descending order. Since the results are sorted descending, that single feature carries the maximum ID.
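If you do want to fire that request from browser code alongside OpenLayers 3, a plain HTTP fetch is enough. A minimal sketch, assuming GeoServer's JSON output format is available and the attribute is literally named ID; the host, namespace and feature type are the placeholders from the URL above:
```
// Build the same GetFeature request as above, but ask for GeoJSON so the
// response is easy to parse in JavaScript/TypeScript.
const url =
  "http://yourhost:port/geoserver/wfs" +
  "?service=WFS&version=1.0.0&request=GetFeature" +
  "&typeName=namespace:featuretype" +
  "&propertyName=ID&sortBy=ID+D&maxFeatures=1" +
  "&outputFormat=application/json";

async function getMaxFeatureId(): Promise<number> {
  const res = await fetch(url);
  const json = await res.json();
  // maxFeatures=1 with a descending sort means the single returned
  // feature carries the maximum ID.
  return json.features[0].properties.ID;
}
```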

Related

Ordered replication in CouchDB

I am developing a notes app. The notes are downloaded from a remote CouchDB with bidirectional replication using the Couchbase Lite API and then shown in a ListView. Right now they arrive in indeterminate order, but I want them ordered by date; in other words, I want to get the newest notes first.
The question is: can replication be ordered by a date field, and how do I achieve that in Couchbase Lite?
If not, should I use an ordered PUT query instead?
Thanks for help!
As far as I know, a replication filter doesn't sort documents; you only get sorted results from views.
This page describes the rules CouchDB uses to sort the keys in an index: http://wiki.apache.org/couchdb/View_collation
One option is to write an external Node.js process that creates a temporary database and fills it with the result of a view that sorts everything by the date field. To limit the result, just add the limit=[number] parameter to the request URL. Then replicate that temporary database.
Alternatively, just replicate everything and then write a view as described above to sort the documents by date.
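If you go the view route, here is a minimal sketch against CouchDB's HTTP API; the database name (notes), the date field, and the host are all assumptions:
```
// Create a design document whose map function emits the date as the key.
// CouchDB collates keys, so the view index is automatically sorted by date.
const db = "http://localhost:5984/notes";

async function createByDateView() {
  await fetch(`${db}/_design/notes`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      views: {
        by_date: {
          map: "function (doc) { if (doc.date) emit(doc.date, null); }",
        },
      },
    }),
  });
}

// descending=true walks the index from newest to oldest, and limit caps
// the number of rows returned.
async function newestNotes(limit: number) {
  const res = await fetch(
    `${db}/_design/notes/_view/by_date?descending=true&limit=${limit}&include_docs=true`
  );
  return (await res.json()).rows.map((row: any) => row.doc);
}
```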

How do I find a string in an unknown Neo4j database using Cypher?

TL;DR: I need a query which gives me all nodes/relationships that contain a certain value (in my case a string, that much I know), without knowing which property (key) contains the string. I am using Neo4j (latest version), Meteor (latest version) and the Meteor Neo4j driver, which is recommended on the Neo4j website for Meteor.
Currently I am working (as part of my bachelor thesis) on a tool to visualize the output of any Cypher query on any database, regardless of the database contents.
So far I've managed to correctly display the nodes/relationships that come out. My problem now is visualizing (getting nodes and relationships to feed into my frontend) the results of textual queries like this one (taken from the Neo4j movie database, which I am using for development):
```
MATCH (tom:Person {name:"Tom Hanks"})-[:ACTED_IN]->(m)<-[:ACTED_IN]-(coActors)
RETURN coActors.name
```
This kind of query only returns an array of strings, not whole nodes or relationships. I now need some way (preferably a Cypher query) to get all nodes which contain, for example, the string "Audrey Tatou".
The problem I've run into is that I didn't find a way to write a query which doesn't need something like
```
MATCH n
WHERE Person.name = "some name"
```
Since I don't know anything about the contents of the database, I cannot use
WHERE propertyName = "propertyValue"
since I only know the value but not the name of the property.
The only solution here would be to get every node with your label and check the properties and values using reflection on the client side.
Using Cypher, the solution would be to get all properties and their values and check the values in a foreach-style loop over the keys. Maybe you can do this, but I'm really not sure; it's a recent feature, but you can still give it a try (a sketch follows below).
Here is what I found for the Cypher solution: How can I return all properties for a node using Cypher?
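For reference, here is roughly what that keys()-based scan could look like; a sketch assuming Neo4j 2.3+, where dynamic property access n[key] is available (the {needle} placeholder uses the old 2.x parameter syntax):
```
// Cypher that checks every property of every node against a search value.
// It scans the whole graph (O(nodes × properties)), so treat it as an
// exploration tool, not a production query; `needle` is a query parameter.
const findNodesByValue = `
  MATCH (n)
  WHERE any(key IN keys(n) WHERE n[key] = {needle})
  RETURN n
`;
// Pass { needle: "Audrey Tatou" } as the parameter map when executing it,
// e.g. through the transactional HTTP endpoint shown further below.
```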
So, you have a query that returns an array of strings.
In fact, you can receive almost anything as a result: Cypher is capable of returning bare strings that are not related to anything.
Long story short, you can't visualize this data, because of its nature. The best you can do is represent it as a table (or similar), like the Neo4j browser does.
But there is (probably) a solution for you. Neo4j has a feature called legacy indexing, and there you can find full-text indexes. Maybe this can help you.
You can just use a driver that returns nodes and rels, or, if you do the queries manually, add a resultDataContents entry to your payload:
{"statements":[{"statement":"MATCH ..","resultDataContents":["graph"]}]}
and you get nodes and relationships back.
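As a sketch of that payload in use, here is a hand-rolled call to the transactional HTTP endpoint (Neo4j 2.x-era API); the host is an assumption and authentication is omitted:
```
// POST a statement to Neo4j's transactional endpoint and ask for the
// "graph" result format, which returns whole nodes and relationships
// instead of bare column values.
async function runAsGraph(statement: string) {
  const res = await fetch("http://localhost:7474/db/data/transaction/commit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      statements: [{ statement, resultDataContents: ["graph"] }],
    }),
  });
  const data = await res.json();
  // Each entry now has .graph.nodes and .graph.relationships to feed a
  // visualization frontend.
  return data.results[0].data;
}
```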

Sizing nodes according to input weighting not connectivity

I am trying to use Gephi to help graph interview analysis results. The relationship map is only used to describe conventional connections and life cycles. What I would like to do is size the nodes based on the number of interview responses that talk about each node, not the number of connections it has or the weighting of those connections. Can Gephi do this, and if so, how do I do it, please?
I have loaded in node weightings and can see them as part of the node labels, but I haven't found a way to make them affect node size.
Many thanks
Data input field - change the input format to integer.
You can load the graph in GEXF format, adding a float attribute, and attach this attribute to ALL the nodes. It would look something like:
```
<attributes class="node">
  <attribute id="0" title="weight" type="float"/>
</attributes>
...
<node id="n1" label="example node">
  <attvalues>
    <attvalue for="0" value="12.0"/>
  </attvalues>
</node>
...
```
Once imported into Gephi, just go to the Appearance tab, and it will appear as one more attribute in the "Ranking" drop-down list.
If you have any problem with the GEXF format, let me know and I'll share a whole example (just trying to keep this short :-)
Regards

Order Solr results by degrees of friendship

I am currently using Solr 1.4 (soon to be upgraded to 3.3). The friendship table is pretty standard:
id | follower_id | user_id
I would like to perform a regular keyword Solr search and order the results by degrees of separation as well as by the standard score. From the result set, if the keyword matched any of my immediate friends, they would show up first; second would come friends of my friends, and third, friends at the 3rd degree of separation. All other results would come after.
I am pretty sure Solr doesn't offer any 'pre-baked' way of doing this, so I would likely have to do a join in MySQL to properly order the results. Curious if anyone has done this before and/or has some insights.
It's simply not possible in Solr. However, if you aren't too restricted and could use another platform for this, consider Neo4j.
These "connections" and degrees are exactly where Neo4j steps in.
http://neo4j.org/
One way might be to create fields like degree_1, degree_2, etc., and store the list of friends at degree x in the field degree_x. Then you could fire multiple queries: the first restricting the results to those who have you in degree_1, the second restricting the results to those who have you in degree_2, and so on (see the sketch below).
It is a bit complicated, but it is the only solution I could think of using Solr.
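A sketch of that multi-query idea; the field names (degree_1, degree_2, ...), the core URL and the user id are assumptions, not Solr built-ins: degree_x is imagined as an indexed multi-valued field holding the ids of users for whom this document's owner is an x-th-degree friend:
```
const solrBase = "http://localhost:8983/solr/select";

async function searchAtDegree(keyword: string, myId: string, degree: number) {
  const params = new URLSearchParams({
    q: keyword,
    fq: `degree_${degree}:${myId}`, // filter: only docs whose degree_x list contains me
    wt: "json",
  });
  const res = await fetch(`${solrBase}?${params.toString()}`);
  return (await res.json()).response.docs as Array<Record<string, unknown>>;
}

// Fire one query per degree and concatenate: 1st-degree hits first, then
// 2nd, then 3rd. Deduplication and the "all other results" tail are left
// out of the sketch.
async function friendOrderedSearch(keyword: string, myId: string) {
  const perDegree = await Promise.all(
    [1, 2, 3].map((d) => searchAtDegree(keyword, myId, d))
  );
  return perDegree.flat();
}
```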
I haven't represented a graph in Solr before, but I think, at a high level, this is what you could do. First, represent people as nodes and the social network as a graph in the database. Implement a transitive closure function in SQL to allow you to walk the graph. Then index the result into Solr with the social network info stored in payloads, for example; a sketch of the graph walk is below.
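The "walk the graph" step could also be done in application code rather than SQL. A hedged sketch of a breadth-first walk over the friendship table's rows, labelling each reachable user with a degree of separation (the types and the maxDegree cutoff are illustrative):
```
type UserId = number;

function degreesOfSeparation(
  rows: Array<[UserId, UserId]>, // [follower_id, user_id] rows from the table
  me: UserId,
  maxDegree = 3
): Map<UserId, number> {
  // Build an undirected adjacency list from the friendship rows.
  const adj = new Map<UserId, UserId[]>();
  const link = (a: UserId, b: UserId) => {
    const list = adj.get(a);
    if (list) list.push(b);
    else adj.set(a, [b]);
  };
  for (const [follower, user] of rows) {
    link(follower, user);
    link(user, follower);
  }

  // Standard BFS: the level at which a user is first reached is their
  // degree of separation.
  const degree = new Map<UserId, number>([[me, 0]]);
  let frontier: UserId[] = [me];
  for (let d = 1; d <= maxDegree && frontier.length > 0; d++) {
    const next: UserId[] = [];
    for (const u of frontier) {
      for (const v of adj.get(u) ?? []) {
        if (!degree.has(v)) {
          degree.set(v, d); // first time seen = shortest degree
          next.push(v);
        }
      }
    }
    frontier = next;
  }
  degree.delete(me);
  return degree; // e.g. index these degrees alongside each user document
}
```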
I was able to achieve this by performing multiple queries, using the "with" scope to restrict to the ids of colleagues and of 2nd- and 3rd-degree colleagues, getting those ids with a MySQL select:
```
# one Solr search per degree of separation:
# search_1 = perform_search(1, options)
# search_2 = perform_search(2, options)

# inside each Sunspot search block, scope the ids to the given degree:
if degree == 1
  with(:id).any_of(options[:colleague_ids])
elsif degree == 2
  with(:id).any_of(options[:second_degree_colleagues])
end
```
It's kind of a dirty solution, as I have to perform multiple Solr queries, but until I can use dynamic field sorting options (Solr 3.3, not currently supported by Sunspot) I really don't know any other way to achieve this.

Order Solr/sunspot search results by geo location

I'd like to be able to order my search results by score and location. Each user in the DB has a lat/lon pair, and I am currently indexing:
```
location :coordinates do
  Sunspot::Util::Coordinates.new(latlon[0], latlon[1])
end
```
The model I would be performing the search against is indexed in the same manner. Essentially, what I am trying to achieve is for the results to be ordered by score and then by location. So if I search for Walmart, I would like to see all Walmarts ordered by their geo proximity to my location.
I remember reading something about Solr's new geo-sort, but I'm not sure whether it is out of alpha and/or whether Sunspot has implemented a wrapper for it.
What would you recommend?
Because of the way that Sunspot handles location types, you'll need to do some extra legwork to have it sort by distance from your target. Sunspot creates a geo-hash for each point and then searches that geo-hash using regular fulltext search. The result is that you probably won't be able to determine whether a point 10km away is further than a point 5km away, but you will be able to tell that a point 50km away is further than a point 1-2km away. The exact distances are arbitrary; the upshot is that you won't get as fine-grained a result as you would like, and the search acts more as a filter for points within an acceptable proximity. After you have filtered your points using the built-in location search, there are three ways to accomplish what you want:
1. Upgrade to Solr 3.1 or later and upgrade your schema.xml to use the new spatial search columns. You'll then need to make custom modifications to Sunspot to create fields and orderings that work with these new data types. As far as I know these aren't available in Sunspot yet, so you'll have to make those connections on your own and dig around in Solr to do some manual configuration.
2. Leverage the Spatial Solr Plugin. You'll have to install a new JAR into your Solr directory and make some modifications to Sunspot, but they are relatively painless and the full instructions can be found here.
3. Leverage your DB. If your DB is also indexed on the location columns, you can use Sunspot's built-in location search to filter your results down to a reasonably sized set, then query the DB for those results and order them by proximity to your location using your own distance function (a sketch of such a function is below).
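For option 3, the "own distance function" can be a standard haversine great-circle distance used to re-order the rows fetched from the DB; a minimal sketch (the Point shape is just for illustration):
```
interface Point {
  lat: number; // degrees
  lon: number; // degrees
}

const EARTH_RADIUS_KM = 6371;

// Haversine formula: great-circle distance between two lat/lon points.
function haversineKm(a: Point, b: Point): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(h));
}

// Re-order the filtered results by proximity to the searcher, e.g.:
// results.sort((r1, r2) => haversineKm(me, r1) - haversineKm(me, r2));
```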
