When I use the Gerrit query command

ssh -p 29418 exueniu@selngerrit.mo.sw.xxx.se gerrit query --start 10 status:merged 'project:wmr/wmr_xxx'

it gives me at most 500 results:
type: stats
rowCount: 500
runTimeMilliseconds: 196
moreChanges: true
I tried the limit search operator, setting it to 700:

ssh -p 29418 exueniu@selngerrit.mo.sw.XXXX.se gerrit query --start 10 status:merged 'project:wmr/wmr_XXX' limit:700

but it doesn't work; I still get only 500 results. Does anyone know how to get more?
There's an internal 500 limit which is not explained very well in the Gerrit documentation:
If no limit is supplied an internal default limit is used to prevent
explosion of the result set
To bypass this limit you need to have the Query Limit capability granted.
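For example, granting it in All-Projects' project.config could look roughly like this (the group name and the range are assumptions; adjust them for your server):

[capability]
  queryLimit = +0..1000 group Administrators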
Another option is pagination. When a project has more than 500 changes, the n and limit parameters don't help; use the start parameter instead. The /changes REST endpoint is queried page by page:

/changes/?o=CURRENT_REVISION&o=CURRENT_COMMIT&q=status:merged+after:2021-03-01+before:2021-03-25&start=0
/changes/?o=CURRENT_REVISION&o=CURRENT_COMMIT&q=status:merged+after:2021-03-01+before:2021-03-25&start=500
/changes/?o=CURRENT_REVISION&o=CURRENT_COMMIT&q=status:merged+after:2021-03-01+before:2021-03-25&start=1000

If start is omitted, the result is limited to 500. In each response, the last result carries a _more_changes attribute: when _more_changes is true there is another page; otherwise you have reached the end.
(Screenshot of a paginated response: https://i.stack.imgur.com/z5xBE.png)
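Putting that together, here is a shell sketch that pages through /changes until _more_changes disappears (the host, the credentials, and the fixed 500 page size are assumptions; adjust them for your server):

#!/bin/sh
# Page through the Gerrit /changes REST endpoint 500 results at a time.
# gerrit.example.com and user:http-password are placeholders.
base='https://gerrit.example.com/a/changes/?o=CURRENT_REVISION&o=CURRENT_COMMIT&q=status:merged+after:2021-03-01+before:2021-03-25'
start=0
while :; do
  # tail -n +2 strips the ")]}'" prefix line Gerrit prepends to JSON responses
  page=$(curl -s -u user:http-password "${base}&start=${start}" | tail -n +2)
  printf '%s\n' "$page" >> changes.json
  # the last element of each page carries "_more_changes": true if another page exists
  printf '%s\n' "$page" | grep -q '"_more_changes": *true' || break
  start=$((start + 500))
done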
I am attempting to create a constraint like this:
CREATE CONSTRAINT ON (a:forecast) ASSERT a.match IS UNIQUE
but I get this error:
Unable to create Constraint( name='constraint_9615361a',
type='UNIQUENESS', schema=(:forecast {match}) ):
I have Neo4j Community Edition 4.2.3, and judging by the documentation I should be allowed to create this type of constraint. What gives?
You can check whether forecast.match is actually unique by running this:
MATCH (a:forecast)
RETURN a.match, count(*) LIMIT 5
If you see a value > 1 in the count column, then forecast.match is not unique.
Note: on my machine the counts happen to come back in descending order. If they don't on yours, run this instead:
MATCH (a:forecast)
WITH a.match as match, count(*) as cnt
RETURN match, cnt ORDER by cnt DESC LIMIT 5
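To list only the values that actually violate uniqueness, a small variant of the same query (same label and property as above):

MATCH (a:forecast)
WITH a.match AS match, count(*) AS cnt
WHERE cnt > 1
RETURN match, cnt
ORDER BY cnt DESC;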
Late reply, but still: I was hit by the same error just now (with 4.4.3), and the cause turned out to be a lack of free space on the drive. I wasted half an hour digging in entirely the wrong direction.
I had created the database with the --skip-duplicate-nodes=true flag, so of course there couldn't have been any duplicates.
I have an intermittent error that comes up after some deploys of my Rails app.
This code is running in Sidekiq (5 processes each with 10 threads), which is running in a Docker container.
I can have tens of thousands of these jobs queued up at any point.
path = Path.find(path_id)
nearby_nodes = Node.where("ST_DWITHIN(geog, ST_GeographyFromText(?), 25)", path.geog.to_s)
The error is:
ActiveRecord::StatementInvalid: PG::InternalError: ERROR: parse error - invalid geometry
HINT: "01" <-- parse error at position 2 within geometry
I can get these jobs to run successfully if I quiet all the Sidekiq processes, stop the workers, wait a moment, and then start the workers back up.
I added a number of delays to my deploy process (guessing that slowing things down might help, since restarting the workers fixes the problem), but that did not help.
I can usually get one successful deploy per day. After that first deploy, the app is more likely to fall into this failure state, and once it does, every subsequent deploy causes the same issue.
Path.first.geog returns:
#<RGeo::Geographic::SphericalPointImpl:0x3ffd8b2a6688 "POINT (-72.633932 42.206081)">
Path.first.geog.class returns:
RGeo::Geographic::SphericalPointImpl
I've tried a number of different formats of this query, which might shed some light on how/why this is failing (though I'm still stumped as to why it's only intermittent):
Node.where("ST_DWITHIN(geog, ST_GeographyFromText(?), 25)", path.geog) fails, generating this query:
Node Load (1.0ms) SELECT "nodes".* FROM "nodes" WHERE (ST_DWITHIN(geog, ST_GeographyFromText('0020000001000010e6c05228925785f8d340451a60dcb9a9da'), 25)) LIMIT $1 [["LIMIT", 11]]
and this error:
ActiveRecord::StatementInvalid (PG::InternalError: ERROR: parse error - invalid geometry)
HINT: "00" <-- parse error at position 2 within geometry
Node.where("ST_DWITHIN(geog, ST_GeographyFromText('#{path.geog}'), 25)") succeeds, generating this query:
Node Load (5.1ms) SELECT "nodes".* FROM "nodes" WHERE (ST_DWITHIN(geog, ST_GeographyFromText('POINT (-72.633932 42.206081)'), 25)) LIMIT $1 [["LIMIT", 11]]
Node.where("ST_DWITHIN(geog, ST_GeographyFromText(?), 25)", path.geog.to_s) also succeeds, generating the same query:
Node Load (2.3ms) SELECT "nodes".* FROM "nodes" WHERE (ST_DWITHIN(geog, ST_GeographyFromText('POINT (-72.633932 42.206081)'), 25)) LIMIT $1 [["LIMIT", 11]]
Doing the to_s conversion in a preceding line as some kind of superstitious test also works:
geog_string = path.geog.to_s
nearby_nodes = Node.where("ST_DWITHIN(geog, ST_GeographyFromText(?), 25)", geog_string)
Queries 2-4 generally work, but they behave like query 1 some of the time, and only after a deploy.
I could not make queries 2-4 behave like the first query in a Rails console.
The only time queries 2-4 behave like the first query is in a Sidekiq job after a deploy.
It's as if the string conversion sometimes isn't happening.
Here's a list of potentially relevant gems/versions:
activerecord-postgis-adapter (6.0.0)
pg (1.2.3)
rails (6.0.2.2)
rgeo (2.1.1)
rgeo-activerecord (6.2.1)
sidekiq (6.0.6)
Ruby 2.6.6
PostgreSQL 11.6
PostGIS 2.5.2
Docker 19.03.8, build afacb8b7f0
There is no need to convert the geography to a string and then read it back as a geography. You can pass it directly:
Node.where("ST_DWITHIN(geog, ?, 25)", path.geog)
That being said, you may indeed have some invalid geometries.
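To hunt for those, one option is PostGIS's ST_IsValid. A minimal sketch, assuming the nodes table and geog column from the question (ST_IsValid is defined for geometry, hence the cast):

# Flag rows whose geometry fails PostGIS validity checks.
invalid_nodes = Node.where("NOT ST_IsValid(geog::geometry)")
invalid_nodes.pluck(:id).each do |id|
  puts "invalid geometry on node #{id}"
end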
I want to run a query on a remote database with a timeout option. For example:

SELECT * FROM XYZ;

If this query does not return a result within 2 minutes, the query should be stopped automatically.
A dummy psql example of what I have in mind:

#timeout select * from XYZ

Is it possible to pass a timeout parameter at run time, without touching any config file?
Yes: in psql, set statement_timeout for the current session, with the value in milliseconds (SET statement_timeout = n;). Alternatively you can set statement_timeout in postgresql.conf, but that affects all sessions.
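A minimal sketch of such a session (timeout value and table name taken from the question):

-- applies only to the current session
SET statement_timeout = '2min';  -- equivalently: SET statement_timeout = 120000;
SELECT * FROM XYZ;
-- if the query runs longer than 2 minutes, it is cancelled with:
-- ERROR:  canceling statement due to statement timeout

You can also set it per connection from the shell, without typing any SQL, via libpq's PGOPTIONS environment variable, e.g. PGOPTIONS='-c statement_timeout=120000' psql -h remote-host mydb.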
I am using Neo4j to build a social network application. The data model has a FRIEND relationship between two USER nodes. I need to get all of a user's friends ordered by displayName (which is uniquely indexed).
I need pagination for this query: I will send the last name from the previous page of results, and I want to limit each page to 20 names.
MATCH (u:USER{displayName:{id}})-[:FRIEND]-(f:USER)
RETURN f
ORDER BY f.displayName
LIMIT 20;
What is the best way to do this? Will SKIP work here, sending SKIP 0, SKIP 1*20, SKIP 2*20, and so on?
You can use SKIP together with LIMIT, I think:

ORDER BY f.displayName SKIP START_POSITION LIMIT PAGE_SIZE;

For example:

ORDER BY f.displayName SKIP 0 LIMIT 20;
ORDER BY f.displayName SKIP 20 LIMIT 20;
Yes, you can use the SKIP clause to do what you want. In the following, I assume that you provide the page value (starting at 0) as a parameter.
MATCH (u:USER{displayName:{id}})-[:FRIEND]-(f:USER)
RETURN f
ORDER BY f.displayName
SKIP {page} * 20
LIMIT 20;
Note that this technique is not foolproof if the list of friends can change during paging.
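If your Neo4j version rejects the arithmetic expression in SKIP, a variant is to precompute the offset in the application and pass it as its own parameter (the offset parameter name is an assumption):

MATCH (u:USER{displayName:{id}})-[:FRIEND]-(f:USER)
RETURN f
ORDER BY f.displayName
SKIP {offset}
LIMIT 20;

where offset = page * 20 is computed client-side before the query is sent.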
I see some strange behavior in 0.9.6.1. When I query a field without a WHERE clause, it works, but when I add a WHERE clause on a tag key, I get empty results.
For example, this works:
select successful, merchant_id from session_metrics_new limit 5
name: session_metrics_new
time successful merchant_id
1453975732000000000 1 bms
1453975733000000000 1 snp
1453975735000000000 1 bms
1453975735000000000 1 snp
1453975739000000000 1 bms
but this does not work:
select successful, merchant_id from session_metrics_new where merchant_id =~ /bms/ limit 5
Here, successful is a field key while merchant_id is a tag key. I don't know whether this is a bug or a problem with how I have stored the data. Please help.
You're using the regex syntax.
I tried a query on my DB with the same syntax you used, and I got a result set without a problem. The only problem I can see is if successful is also a tag rather than a field, but in that case you should get the following error:
Server returned error: statement must have at least one field in select clause
Are you executing this query through the InfluxDB admin interface, or through a third-party library for, say, Java, C#, or Node.js?
If you will always know the full value of the merchant_id tag, try a simple WHERE clause instead. It's slightly different (it matches the whole value rather than doing pattern matching), it should work, and it should even be faster:
select successful, merchant_id from session_metrics_new where merchant_id = 'bms' limit 5
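For reference, the tag/field distinction is fixed at write time. In InfluxDB line protocol the tag set comes right after the measurement name and the field set after the first space, so a point from the question's data would have been written roughly like this (the exact write is an assumption reconstructed from the query results):

session_metrics_new,merchant_id=bms successful=1 1453975732000000000

Tags (merchant_id here) are always stored as strings and are indexed; fields (successful here) hold the actual values.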