How to Pass Neo4j 2.0 Server Plugin Parameters - neo4j

I made a really simple Neo4j 2.0 Server Plugin that works great without any parameters. However, I'm not sure how I'm supposed to pass a string parameter to the plugin. I have one optional parameter called "criteria". This should be very simple; I'm just not very familiar with curl, Java, or REST.
#Name( "getLabelsForSearch" )
#Description( "Get all labels that match the search criteria from the Neo4j graph database" )
#PluginTarget( GraphDatabaseService.class )
public Iterable<String> getLabelsForSearch( #Source GraphDatabaseService graphDb, #Description("The search criteria string") #Parameter (name = "criteria", optional = true) String criteria )
{
ArrayList<String> labels = new ArrayList<>();
labels.add(criteria);
try (Transaction tx = graphDb.beginTx())
{
for ( Label label : GlobalGraphOperations.at(graphDb).getAllLabels() )
{
labels.add(criteria);
//This is just for testing
labels.add(label.name());
}
tx.success();
}
return labels;
}
I tried a few different ways with curl:
curl -X POST http://icexad01:7474/db/data/ext/GetAll/graphdb/getLabelsForSearch?criteria=thisorthat
curl -X POST http://icexad01:7474/db/data/ext/GetAll/graphdb/getLabelsForSearch/criteria/thisorthat
curl -X POST http://icexad01:7474/db/data/ext/GetAll/graphdb/getLabelsForSearch -data { "criteria" : "thisorthat"}
I've been following this page and it has an example of passing a parameter. Maybe I'm just overlooking something?
http://docs.neo4j.org/chunked/snapshot/server-plugins.html
This is the JSON information I get back when I do a GET request on the URL:
http://icexad01:7474/db/data/ext/GetAll/graphdb/getLabelsForSearch/
{
  "extends" : "graphdb",
  "description" : "Get all labels that match the search criteria from the Neo4j graph database",
  "name" : "getLabelsForSearch",
  "parameters" : [ {
    "description" : "The search criteria string",
    "optional" : true,
    "name" : "criteria",
    "type" : "string"
  } ]
}

You need to pass the parameters as a JSON payload. It's therefore crucial to specify the content type and to put the payload in quotes, so try
curl -X POST -H "Content-Type: application/json" -d '{ "criteria" : "thisorthat" }' http://icexad01:7474/db/data/ext/GetAll/graphdb/getLabelsForSearch
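If that works, the Iterable<String> return value comes back as a plain JSON array. With the test code above (which adds the criteria once up front and once per label), the response should look something like this, assuming for illustration that the database contains the labels Movie and Person:
[ "thisorthat", "thisorthat", "Movie", "thisorthat", "Person" ]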

Related

Invalid Expression exception for JsonPath in RestAssured

We are using RestAssured for API automation in our project. I have a sample response for which I tested a JsonPath expression on https://www.jsonquerytool.com/. My JsonPath expression is $[*]['tags'][?(@.id==0)].
I get the proper output when I try the expression on the JSON query tool, but when I try the same in the code below, I get an invalid expression message:
JsonPath jsonPathEvaluator = response.jsonPath();
ArrayList result = jsonPathEvaluator.get("$[*]['tags'][?(@.id==0)]");
The above code throws an exception.
Can anyone tell me how I can programmatically query the response using JsonPathEvaluator?
P.S. The response is not pasted here because it is very large.
Since your issue is not with any particular query (the expression does not matter), I'm answering using the example from the link you provided.
RestAssured's JsonPath class uses Groovy's GPath syntax, which differs from standard JsonPath in several cases. So if you have JSON like this:
{
  "key": "value",
  "array": [
    {
      "key": 1
    },
    {
      "key": 2,
      "dictionary": {
        "a": "Apple",
        "b": "Butterfly",
        "c": "Cat",
        "d": "Dog"
      }
    },
    {
      "key": 3
    }
  ]
}
And suppose you would use a query like this: $.array[?(@.key==2)].dictionary.a
Then for the RestAssured case your query would look like this: array.findAll{i -> i.key == 2}.dictionary.a
So the complete code example would be:
public static void main(String[] args) {
    JsonPath jsonPath = RestAssured
        .get("http://demo1954881.mockable.io/gath")
        .jsonPath();
    List<String> resp = jsonPath.get("array.findAll{i -> i.key == 2}.dictionary.a");
    System.out.println(resp);
}
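Applied to your original expression, the same translation would look roughly like the sketch below. It is only a guess, since the response wasn't posted; it assumes the response root is a JSON array whose elements each contain a tags array:
// hypothetical GPath translation of $[*]['tags'][?(@.id==0)]
List<Map<String, Object>> result =
        response.jsonPath().getList("tags.flatten().findAll { it.id == 0 }");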

Can't place GraphQL custom type as a Postman variable

Has anyone had luck with placing a GraphQL custom type argument as a Postman or GraphQL variable? I'm kind of spinning in circles right now and hope a fresh pair of eyes can point me in the right direction.
What I'm trying to do is send a mutation request using Postman. The problem is that the method I'm calling takes a custom type as an argument, and placing the content of that variable in a GraphQL variable or Postman variable is giving me a headache. I can't embed pictures yet, so here are the links (they are safe).
Schema
This custom type is a JSON-like structure, consisting of two enums and a set of primitive types (strings, ints...). I can screenshot the entire thing but basically that's it: two enums followed by strings, ints...
Custom type definition
What I've tried so far:
Simply hardcoding the request in Postman works, but I wish to send multiple requests with varying data.
Placing it in a GraphQL variable results in this error message:
{
  "errors": [
    {
      "message": "Bad request - invalid request body.",
      "locations": []
    }
  ],
  "data": null
}
Placing the custom type content as a Postman environment variable works, but I'm getting a syntax error (although the request passes...).
The request body is below. Hardcoding it and using a Postman variable produce the same request body, apart from the syntax error.
query: "mutation {
createApplication(request: {
applicationKind: NEW_ISSUANCE,
documentKind: REGULAR_PASSPORT,
personalData: {
timestamp: null,
firstname: "NAME",
lastname: "LASTNAME",
middlename: "MIDDLENAME",
dateOfBirth: "2011-09-28",
citizenshipCountryCode: "USA",
gender: MALE,
personalNumber: "3344",
placeOfBirth: "CHICAGO",
municipalityOfBirth: "SOUTH",
countryCodeOfBirth: "USA"},
addressData:{
street: "WEST",
municipality: "EAST",
place: "CHICAGO",
country: {
code: "USA",
name: null
},
entrance: "б",
flat: "13",
number: "35"}
})
{
__typename
... on AsyncTaskStatus {
taskID
state
payload {
... on ApplicationUpdated {
applicationID
applicationNumber
__typename
}
__typename
}
__typename
}
... on Error {
...errorData
__typename
}
}
}
fragment errorData on Error {
__typename
code
message
}"
Postman variable with a squiggly line
I'm spinning in circles right now. Has anyone had any luck with Postman requests of this kind?
I can post more info, screenshots...just let me know. I'll be watching this topic closely and provide feedback.
Thank you for your time.
Please add the variable in the GraphQL variables section as:
{
  "request": {{request}}
}
and then refer to it in your query as
$request
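In other words, declare the variable in the mutation itself and supply the object in Postman's GraphQL variables section (either literally or via a {{request}} environment variable as above). A sketch, where the input type name ApplicationRequestInput is only a guess at what your schema calls it, and the selection set and payload are trimmed for brevity:
mutation CreateApplication($request: ApplicationRequestInput!) {
  createApplication(request: $request) {
    __typename
    ... on AsyncTaskStatus { taskID state }
    ... on Error { code message }
  }
}
And in the GraphQL variables section (note that enum values are passed as plain strings here):
{
  "request": {
    "applicationKind": "NEW_ISSUANCE",
    "documentKind": "REGULAR_PASSPORT",
    "personalData": { "firstname": "NAME", "dateOfBirth": "2011-09-28" }
  }
}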

Response zip file with WebFlux

I am new to Spring 5 and reactive programming. My problem is creating an export feature for the database via a REST API.
The user sends a GET request, and the server reads the data and returns it as a zip file. Because the zip file is large, I need to stream the data.
My code as below:
@GetMapping(
    value = "/export",
    produces = ["application/octet-stream"],
    headers = [
        "Content-Disposition: attachment; filename=\"result.zip\"",
        "Content-Type: application/zip"])
fun streamData(): Flux<Resource> = service.export()
I use curl as below:
curl http://localhost/export -H "Accept: application/octet-stream"
But it always returns 406 Not Acceptable.
Can anyone help? Thank you so much.
The headers attribute of the @GetMapping annotation does not contain headers that should be written to the HTTP response; those are mapping headers. This means that your @GetMapping annotation requires the HTTP request to contain the headers you've listed, which is why the request is not mapped to your controller handler.
Also, your handler return type does not look right: Flux<Resource> means that you intend to return 0..* Resource instances and that they should be serialized. In this case, a return type like ResponseEntity<Resource> is probably a better choice, since you'll be able to set response headers on the ResponseEntity and set its body with a Resource.
Is this right? I still feel this solution isn't good because of the blockLast on the last line.
#GetMapping("/vehicle/gpsevent", produces = ["application/octet-stream"])
fun streamToZip(): ResponseEntity<FileSystemResource> {
val zipFile = FileSystemResource("result.zip")
val out = ZipOutputStream(FileOutputStream(zipFile.file))
return ResponseEntity
.ok().cacheControl(CacheControl.noCache())
.header("Content-Type", "application/octet-stream")
.header("Content-Disposition", "attachment; filename=result.zip")
.body(ieService.export()
.doOnNext { print(it.key.vehicleId) }
.doOnNext { it -> out.putNextEntry(ZipEntry(it.key.vehicleId.toString() + ".json")) }
.doOnNext { out.write(it.toJsonString().toByteArray(charset("UTF-8"))) }
.doOnNext { out.flush() }
.doOnNext { out.closeEntry() }
.map { zipFile }
.doOnComplete { out.close() }
.log()
.blockLast()
)
}
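For what it's worth, calling blockLast inside a WebFlux handler is generally discouraged, because it blocks a reactive thread. A rough, untested sketch that keeps the handler reactive by collecting the Flux and doing the blocking zip work on a boundedElastic scheduler (ieService.export(), key.vehicleId and toJsonString() are taken from your snippet; the temp-file handling is simplified):
@GetMapping("/vehicle/gpsevent", produces = ["application/octet-stream"])
fun streamToZip(): Mono<ResponseEntity<FileSystemResource>> =
    ieService.export()
        .collectList()
        // move the blocking file/zip work off the reactive event loop
        .publishOn(Schedulers.boundedElastic())
        .map { events ->
            val zipFile = File.createTempFile("result", ".zip")
            ZipOutputStream(FileOutputStream(zipFile)).use { out ->
                events.forEach { event ->
                    out.putNextEntry(ZipEntry("${event.key.vehicleId}.json"))
                    out.write(event.toJsonString().toByteArray(Charsets.UTF_8))
                    out.closeEntry()
                }
            }
            ResponseEntity.ok()
                .header("Content-Type", "application/octet-stream")
                .header("Content-Disposition", "attachment; filename=result.zip")
                .body(FileSystemResource(zipFile))
        }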

Is it possible to post data to couch db and return data?

For example, I would like to send the user's score to the database, and instead of returning the typical status, id and rev I would like it to return the user's rank. I'm guessing this isn't possible, but figured I would ask.
The response to an HTTP POST/PUT should really only be used to confirm that it succeeded.
I'm also struggling to see how you could get the rank of a user returned by a CouchDB view, unless you retrieve the data for all users and work out the position of your user.
This use case ...
simple structured data that is clearly tabular,
the requirement to respond fast to a query over a numerical column (a method to calculate the rank for a score),
OR the requirement to trigger an update of a score table each time a score is submitted
... very much smells like a classic case where you may want to use a relational DB.
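For comparison, in a relational database the rank could be a single query returned in the same round trip as the score insert (a sketch, assuming a scores table with user_id and score columns):
-- rank = 1 + number of users with a strictly higher score
SELECT COUNT(*) + 1 AS rank FROM scores WHERE score > :new_score;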
If the result can be calculated from the document you are changing with your HTTP request, then you can use an update handler to PUT a change to the document and return that result:
// 'myhandler' update function
function(doc, req) {
  // create a shorthand for json responses
  var json_response = function(obj, code) {
    return {
      headers: { 'Content-Type': 'application/json' }
      , body: JSON.stringify(obj)
      , code: code
    }
  }
  // assume the incoming body is json and parse it
  // needs proper error handling still
  var body = JSON.parse(req.body)
  // doc is the user document we are patching
  // return an error if it isn't there
  if(!doc)
    return [null, json_response({error: 'user document not found'}, 404)]
  // return an error if new_score is missing from body
  if(!body.new_score)
    return [null, json_response({error: 'missing property new_score'}, 400)]
  // now patch the user doc
  doc.score = body.new_score
  // calculate the new rank depending on your own method
  var my_rank = my_rank_function(doc.score, Math.PI, 'bananarama')
  return [doc, json_response({success: true, rank: my_rank}, 200)]
}
Now PUT new data to receive the new rank:
request(
  { method: 'PUT'
  , url: 'http://127.0.0.1:5984/mydb/_design/myddoc/_update/myhandler/myuserdocid'
  , json: {"new_score": 42}
  , headers: { "Content-Type": "application/json" }
  }
  , function(err, response, body) {
      console.log("user's new rank:", JSON.parse(body).rank)
  }
)
This should print something like user's new rank: LEVEL 11 EIGHTIES GIRL GROUP LEADER
nb: I'm not at work so cannot confirm the code works, but you should get the hang of it...

How to make elasticsearch add the timestamp field to every document in all indices?

Elasticsearch experts,
I have been unable to find a simple way to just tell ElasticSearch to insert the _timestamp field for all the documents that are added in all the indices (and all document types).
I see an example for specific types:
http://www.elasticsearch.org/guide/reference/mapping/timestamp-field/
and also see an example for all indices for a specific type (using _all):
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping/
but I am unable to find any documentation on adding it by default for all documents that get added irrespective of the index and type.
Elasticsearch used to support automatically adding timestamps to documents being indexed, but this feature was deprecated in 2.0.0.
From the version 5.5 documentation:
The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on application side.
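So on current versions the simplest approach is to set the timestamp yourself when indexing; a minimal sketch with made-up index and field names:
PUT my-index/_doc/1
{
  "message": "hello",
  "created_at": "2021-05-04T10:15:30Z"
}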
On versions prior to the removal, you could do this by enabling _timestamp when creating your index.
curl -XPOST localhost:9200/test -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "_default_" : {
      "_timestamp" : {
        "enabled" : true,
        "store" : true
      }
    }
  }
}'
That will then automatically create a _timestamp for everything that you put in the index.
Then, after indexing something, you can request the _timestamp field and it will be returned.
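For example, on those pre-2.0 versions you could ask for the stored _timestamp explicitly when fetching a document back (the type and id here are placeholders):
curl -XGET 'localhost:9200/test/sometype/1?fields=_timestamp&pretty'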
Adding another way to get an indexing timestamp; hope this may help someone.
An ingest pipeline can be used to add a timestamp when a document is indexed. Here is a sample example:
PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "set": {
        "field": "_source.indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
Earlier, Elasticsearch only had named pipelines, so the 'pipeline' parameter had to be specified on the endpoint used to write/index documents (Ref: link). This was a bit troublesome, as you would need to make changes to the endpoints on the application side.
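For example, with only a named pipeline, every index request has to reference it explicitly (my-index and the document body below are placeholders):
PUT my-index/_doc/1?pipeline=indexed_at
{
  "message": "hello"
}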
With Elasticsearch version >= 6.5, you can now specify a default pipeline for an index using the index.default_pipeline setting. (Refer to the link for details.)
Here is the command to set the default pipeline:
PUT ms-test/_settings
{
  "index.default_pipeline": "indexed_at"
}
I haven't tried it out yet, as I haven't upgraded to ES 6.5, but the above command should work.
You can make use of default index pipelines, leverage the script processor, and thus emulate the auto_now_add functionality you may know from Django and DEFAULT GETDATE() from SQL.
The process of adding a default yyyy-MM-dd HH:mm:ss date goes like this:
1. Create the pipeline and specify which indices it'll be allowed to run on:
PUT _ingest/pipeline/auto_now_add
{
  "description": "Assigns the current date if not yet present and if the index name is whitelisted",
  "processors": [
    {
      "script": {
        "source": """
          // skip if not whitelisted
          if (![ "myindex",
                 "logs-index",
                 "..."
               ].contains(ctx['_index'])) { return; }
          // don't overwrite if present
          if (ctx['created_at'] != null) { return; }
          ctx['created_at'] = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
        """
      }
    }
  ]
}
Side note: the ingest processor's Painless script context is documented here.
2. Update the default_pipeline setting in all of your indices:
PUT _all/_settings
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
Side note: you can restrict the target indices using the multi-target syntax:
PUT myindex,logs-2021-*/_settings?allow_no_indices=true
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
3. Ingest a document to one of the configured indices:
PUT myindex/_doc/1
{
  "abc": "def"
}
4. Verify that the date string has been added:
GET myindex/_search
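The hit's _source should then contain both the original field and the generated date, roughly like this (the exact timestamp will of course differ):
{
  "abc": "def",
  "created_at": "2021-05-04 10:15:30"
}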
An example for ElasticSearch 6.6.2 in Python 3:
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["localhost"])

timestamp_pipeline_setting = {
    "description": "insert timestamp field for all documents",
    "processors": [
        {
            "set": {
                "field": "ingest_timestamp",
                "value": "{{_ingest.timestamp}}"
            }
        }
    ]
}
es.ingest.put_pipeline("timestamp_pipeline", timestamp_pipeline_setting)

conf = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1,
        "default_pipeline": "timestamp_pipeline"
    },
    "mappings": {
        "articles": {
            "dynamic": "false",
            "_source": {"enabled": "true"},
            "properties": {
                "title": {
                    "type": "text"
                },
                "content": {
                    "type": "text"
                }
            }
        }
    }
}
response = es.indices.create(
    index="articles_index",
    body=conf,
    ignore=400  # ignore 400 already exists code
)
print('\nresponse:', response)

doc = {
    'title': 'automatically adding a timestamp to documents',
    'content': 'prior to version 5 of Elasticsearch, documents had a metadata field called _timestamp. When enabled, this _timestamp was automatically added to every document. It would tell you the exact time a document had been indexed.',
}
res = es.index(index="articles_index", doc_type="articles", id=100001, body=doc)
print(res)

res = es.get(index="articles_index", doc_type="articles", id=100001)
print(res)
For ES 7.x, the example should work after removing the doc_type-related parameters, as document types are no longer supported.
First create the index and the properties of the index, such as fields and data types, and then insert the data using the REST API.
Below is the way to create an index with the field properties. Execute the following in the Kibana console:
PUT /vfq-jenkins
{
  "mappings": {
    "properties": {
      "BUILD_NUMBER": { "type": "double" },
      "BUILD_ID": { "type": "double" },
      "JOB_NAME": { "type": "text" },
      "JOB_STATUS": { "type": "keyword" },
      "time": { "type": "date" }
    }
  }
}
The next step is to insert data into that index:
curl -u elastic:changeme -X POST http://elasticsearch:9200/vfq-jenkins/_doc/?pretty \
  -H 'Content-Type: application/json' -d '{
  "BUILD_NUMBER": "83", "BUILD_ID": "83", "JOB_NAME": "OMS_LOG_ANA", "JOB_STATUS": "SUCCESS",
  "time": "2019-09-08T12:39:00" }'
