Neo4j console returning only 1000 rows

I would like to compare MySQL and Neo4j, and for that I have loaded a lot of data into both Neo4j and MySQL.
The problem is that in Neo4j I cannot execute a query that returns more than 1000 rows, so I cannot see the execution time of that query.
In MySQL I can easily see the execution time in the console.
I would also like to see a complete graphical view of all my nodes in Neo4j. Is that possible?

The limitation to 1000 results is a safety net within Neo4j Browser; otherwise very large result sets might overwhelm your web browser in terms of memory and/or CPU for rendering.
To get the full plain results for comparison, send your query as a REST request using e.g. cURL. See http://docs.neo4j.org/chunked/stable/rest-api-transactional.html#rest-api-begin-and-commit-a-transaction-in-one-request for an example request, and make sure the Accept and Content-Type headers are set to application/json. Additionally, you can stream the result as documented at http://docs.neo4j.org/chunked/stable/rest-api-streaming.html.
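For illustration, here is a minimal sketch of such a request using Python's requests library, assuming a local instance with authentication disabled and a placeholder MATCH query:

import requests

# POST the statement to the transactional endpoint and commit in one request.
payload = {"statements": [{"statement": "MATCH (n) RETURN n"}]}
response = requests.post(
    "http://localhost:7474/db/data/transaction/commit",  # default local endpoint
    json=payload,
    headers={"Accept": "application/json", "Content-Type": "application/json"},
)
rows = response.json()["results"][0]["data"]
print(len(rows), "rows returned")

Timing this call client-side (e.g. wrapping it with time.time()) gives a number you can compare against what the MySQL console reports.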

There is a setting that limits the number of rows displayed; by default it is 1000. To change it, open Neo4j Browser, go to Settings, and in the Graph Visualization section you will find Max Rows, which you can change to fit your needs. In the same section there is another property, Initial Node Display, which limits the number of nodes displayed on the first load of the graph visualization.

Related

InfluxDB query group by value

I am new to InfluxDB and the TICK environment, so maybe this is a basic question, but I have not found how to do this. I have installed InfluxDB 1.7.2, with Telegraf listening to an MQTT server that receives JSON data generated by different devices. I have Chronograf to visualize the data that is being received.
The JSON data is a very simple message indicating the generating device as a string and some numeric values detected. I have created some graphs indicating the number of messages received in a 5-minute lapse by one of the devices.
SELECT count("devid") AS "Device" FROM "telegraf"."autogen"."mqtt_consumer" WHERE time > :dashboardTime: AND "devid"='D9BB' GROUP BY time(5m) FILL(null)
As you can see, in this query I am setting the device id by hand. I can use this query alone in a graph, or combine multiple similar queries for different devices, but I am limited to identifying the devices to be monitored in advance.
Is it possible to obtain the results grouped by the values contained in devid? In SQL this would mean including something like GROUP BY "devid", but I have not been able to make it work.
Any ideas?
You can use GROUP BY "devid" if devid is a tag in the measurement schema. If devid is the only tag, the number of unique values of the devid tag equals the number of time series in the "telegraf"."autogen"."mqtt_consumer" measurement. Typically it is not necessary to use the same value both as a tag and as a field. You can think of the set of tags in a measurement as a compound unique index (key) in a conventional SQL database.
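As a sketch, the adjusted query could be issued from Python with the influxdb client library; note that the field name "value" below is an assumption standing in for whichever numeric field your JSON payload actually produces:

from influxdb import InfluxDBClient

# Group by both the 5-minute time bucket and the devid tag.
# "value" is a placeholder field name; substitute one of your numeric fields.
client = InfluxDBClient(host="localhost", port=8086, database="telegraf")
rs = client.query(
    'SELECT count("value") FROM "autogen"."mqtt_consumer" '
    'WHERE time > now() - 1h GROUP BY time(5m), "devid" FILL(null)'
)
# The result contains one series per distinct devid tag value.
for (measurement, tags), points in rs.items():
    print(tags["devid"], list(points))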

Counting ALL rows in Dynamics CRM Online web api (ODATA)

Is it possible to count all rows in a given entity, bypassing the 5000 row limit and bypassing the pagesize limit?
I do not want to return more than 5000 rows in one request, but only want the count of all the rows in that given entity.
According to Microsoft, you cannot do it in the request URI:
The count value does not represent the total number of entities in the system.
It is limited by the maximum number of entities that can be returned.
I have tried this:
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
Any other way?
Use function RetrieveTotalRecordCount:
If you want to retrieve the total number of records for an entity beyond 5000, use the RetrieveTotalRecordCount Function.
Your query will look like this:
https://<your api url>/RetrieveTotalRecordCount(EntityNames=['accounts'])
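For illustration, a sketch of calling the function over HTTP from Python; the org URL and bearer token are placeholders, and the response handling assumes the documented RetrieveTotalRecordCountResponse shape with parallel Keys and Values lists:

import requests

# Placeholders: substitute your organization URL and a valid OAuth bearer token.
url = ("https://yourorg.api.crm.dynamics.com/api/data/v9.1/"
       "RetrieveTotalRecordCount(EntityNames=['account'])")  # entity logical name
resp = requests.get(url, headers={
    "Authorization": "Bearer <token>",
    "Accept": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
})
counts = resp.json()["EntityRecordCountCollection"]
# Keys and Values are parallel lists: entity logical names and their counts.
print(dict(zip(counts["Keys"], counts["Values"])))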
Update:
Latest release v9.1 has the direct function to achieve this - RetrieveTotalRecordCount
————————————————————————————
Unfortunately we have to pick one of the following routes to get the record count, depending on the expected result size and the platform limits.
1. If less than 5000, use this: (You already tried this)
GET [Organization URI]/api/data/v9.0/accounts/?$count=true
2. If less than 50,000, use this:
GET [Organization URI]/api/data/v8.2/accounts?fetchXml=[URI-encoded FetchXML query]
Exceeding the limit will produce the error: AggregateQueryRecordLimit exceeded. Cannot perform this operation.
Sample query:
<fetch version="1.0" mapping="logical" aggregate="true">
<entity name="account">
<attribute name="accountid" aggregate="count" alias="count" />
</entity>
</fetch>
Do a browser address-bar test with this URI:
[Organization URI]/api/data/v8.2/accounts?fetchXml=%3Cfetch%20version=%221.0%22%20mapping=%22logical%22%20aggregate=%22true%22%3E%3Centity%20name=%22account%22%3E%3Cattribute%20name=%22accountid%22%20aggregate=%22count%22%20alias=%22count%22%20/%3E%3C/entity%3E%3C/fetch%3E
The only way to get around this is to partition the dataset based on some property so that you get smaller subsets of records to aggregate individually.
3. The last resort is iterating through @odata.nextLink and counting the records on each page with a counter variable; a sketch of querying each next page follows below.
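A minimal sketch of that loop in Python, assuming a placeholder org URL and bearer token; the Prefer header asks for the maximum page size of 5000:

import requests

# Placeholders: substitute your organization URL and a valid OAuth bearer token.
headers = {
    "Authorization": "Bearer <token>",
    "Accept": "application/json",
    "Prefer": "odata.maxpagesize=5000",  # request the largest allowed page
}
url = "https://yourorg.api.crm.dynamics.com/api/data/v9.0/accounts?$select=accountid"
total = 0
while url:
    page = requests.get(url, headers=headers).json()
    total += len(page["value"])
    url = page.get("@odata.nextLink")  # absent on the last page
print(total)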
The XrmToolBox has a counting tool that can help with this.
Also, we here at MetaTools Inc. have just released an online tool called AggX that runs aggregates on any number of records in a Dynamics 365 Online org, and it's free during the beta release.
You may try OData's $inlinecount query option.
Adding only $inlinecount=allpages in the query string will return all records, so add $top=1 in the URI to fetch only one record along with the count of all records.
Your URL will look like /accounts/?$inlinecount=allpages&$top=1
For example, the response XML will then include the total count, e.g. <m:count>11</m:count>
Note: This query option is only supported in OData version 2.0 and above.
This works:
[Organization URI]/api/data/v8.2/accounts?$count

Microsoft Graph filter vs $filter

I am testing filtering using Microsoft Graph Explorer. I noticed odd behavior that I cannot figure out.
Using endpoint https://graph.microsoft.com/v1.0/me/events?filter=start/dateTime%20ge%20%272018-04-01%27 I get properly filtered data back.
However, using documented $ prefix, https://graph.microsoft.com/v1.0/me/events?$filter=start/dateTime%20ge%20%272018-04-01%27, I get nothing. There is no error, just no data coming back.
How do I query the data using the $filter?
You're not actually getting the results you think you are. When Microsoft Graph sees a query parameter it doesn't expect, it simply ignores it.
When you call /events?filter=start/dateTime ge '2018-04-01' it is simply ignoring the unknown filter parameter and returning you an unfiltered result.
When you call /events?$filter=start/dateTime ge '2018-04-01', it is filtering out anything prior to April 1, 2018. If there are no events with a start after this date, you will get an empty array as a result.
I assume you're using the default dataset included with Graph Explorer? The default Graph Explorer data set's most recent event is 2017-11-16T08:00:00.0000000.
The reason you see results from the /calendarView endpoint but not the /events endpoint is that /events only returns single-instance meetings and series masters, while /calendarView shows everything within a date range. In order to avoid having to maintain a dataset with updated events, the demo data relies on a handful of recurring event entries.
Since events does not return individual occurrences of a meeting, you don't see any results from your query.
If you try this query, you'll see actual results:
https://graph.microsoft.com/v1.0/me/events?$filter=start/dateTime ge '2017-04-01'
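For illustration, a sketch of issuing that query from Python, which also takes care of URL-encoding the $filter value; the access token is a placeholder:

import requests

# Placeholder token: acquire a real one via the Microsoft identity platform.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/events",
    headers={"Authorization": "Bearer <token>"},
    params={"$filter": "start/dateTime ge '2017-04-01'"},  # encoded automatically
)
for event in resp.json().get("value", []):
    print(event["subject"], event["start"]["dateTime"])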

How to show the most recent timestamp for an InfluxDB measurement in Grafana table (or singlestat)?

I'm using Telegraf/InfluxDB/Grafana to register and view metrics for my servers. Occasionally one of these components crashes and metrics stop flowing into InfluxDB.
To be able to notice when this happens (on top of using Monit to restart the service) I would like to create a Grafana dashboard where I have a singlestat panel for each host that shows the most recent timestamp (or better, how much time has passed) since the last metric was received. I'd also like to colorize the background of the singlestat depending on how long it's been. I would like to be able to do this for any InfluxDB metric, as different metrics can have different reasons for lagging behind.
Right now, I've tried something like this in InfluxQL, but I just get an error that at least one non-time field must be present in the query:
SELECT last(time) FROM "system" WHERE "load1" > -1 GROUP BY "host"
If I try to change it to this I get a "Multiple series error":
SELECT last(time), last("load1") FROM "system" GROUP BY "host"
Is what I'm trying to do not easily doable or am I missing something obvious?
...the query syntax was taken from a related GitHub issue.
I haven't yet looked at the singlestat panel, but I suggest creating a 'scripted dashboard'.
Start by manually creating a singlestat dash with a dummy number, then export it to see what the JSON looks like.
Then recreate that same JSON, with the live result from the above query, using a scripted dashboard.
Look in Grafana's /public/dashboards for 'scripted.js' or 'scripted_templated.js'. Make a copy of one of those in the same folder, then hack away to generate your JSON. There are some good examples around of the JavaScript needed to submit the query to InfluxDB and parse the result into the JSON for your singlestat panel. Good luck.

How do we set maximum_bad_records when loading a Bigquery table from dataflow?

Is there a way to set the maximum number of bad records when writing to BigqueryIO? It seems to keep the default at 0.
At this time, unfortunately, we don't provide a way to directly set the value of configuration.load.maxBadRecords in relation to BigQueryIO in Cloud Dataflow.
As a workaround, you should be able to apply a custom ParDo transform that filters "bad records" before they are passed to BigQueryIO.Write. As a result, BigQuery shouldn't get any "bad records". Hopefully, this helps.
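As an illustrative sketch of that workaround (using the Apache Beam Python SDK rather than the Java BigQueryIO.Write named above; the is_valid() check, source path, table, and schema are all hypothetical placeholders):

import json
import apache_beam as beam

def is_valid(record):
    # Hypothetical validity check: replace with whatever makes a record
    # "bad" for your schema (missing fields, wrong types, etc.).
    return "timestamp" in record and record.get("value") is not None

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://your-bucket/input.json")
     | "Parse" >> beam.Map(json.loads)
     | "DropBadRecords" >> beam.Filter(is_valid)  # filter before the sink
     | "Write" >> beam.io.WriteToBigQuery(
         "your-project:your_dataset.your_table",
         schema="timestamp:TIMESTAMP,value:FLOAT"))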
If the ability to control configuration.load.maxBadRecords is important to you, you are welcome to file a feature request in the issue tracker of our GitHub repository.
