RNeo4j Error: 400 Bad Request - neo4j

I am not sure why I am getting the error below, but I suppose it's something I am doing wrong.
First, you can grab my dataset by downloading the file dataset.r from this link and loading it into your session with dget("dataset.r").
In my case, I would do dat = dget("dataset.r").
The code below is what I am using to load the data into Neo4j.
library(RNeo4j)

graph = startGraph("http://localhost:7474/db/data/")
graph$version

# make sure that the graph is clean -- you should back up first!!!
clear(graph, input = FALSE)

## ensure the constraints
addConstraint(graph, "School", "unitid")
addConstraint(graph, "Topic", "topic_id")

## create the query
## BE CAREFUL OF WHITESPACE between KEY:VALUE pairs for parameters!!!
query = "
MERGE (s:School {unitid:{unitid},
                 instnm:{instnm},
                 obereg:{obereg},
                 carnegie:{carnegie},
                 applfeeu:{applfeeu},
                 enrlft:{enrlft},
                 applcn:{applcn},
                 admssn:{admssn},
                 admit_rate:{admit_rate},
                 ape:{ape},
                 sat25:{sat25},
                 sat75:{sat75} })
MERGE (t:Topic {topic_id:{topic_id},
                topic:{topic} })
MERGE (s)-[:HAS_TOPIC {score:{score} }]->(t)
"
for (i in 1:nrow(dat)) {
  ## status
  cat("starting row ", i, "\n")
  ## run the query
  cypher(graph,
         query,
         unitid = dat$unitid[i],
         instnm = dat$instnm[i],
         obereg = dat$obereg[i],
         carnegie = dat$carnegie[i],
         applfeeu = dat$applfeeu[i],
         enrlft = dat$enrlt[i],
         applcn = dat$applcn[i],
         admssn = dat$admssn[i],
         admit_rate = dat$admit_rate[i],
         ape = dat$apps_per_enroll[i],
         sat25 = dat$sat25[i],
         sat75 = dat$sat75[i],
         topic_id = dat$topic_id[i],
         topic = dat$topic[i],
         score = dat$score[i])
} # end for
I can successfully load the first 49 records of my dataframe dat, but it errors out on the 50th row.
This is the error that I receive:
starting row 50
Error: 400 Bad Request
{"message":"Node 1477 already exists with label School and property \"unitid\"=[110680]","exception":"CypherExecutionException","fullname":"org.neo4j.cypher.CypherExecutionException","stacktrace":["org.neo4j.cypher.internal.compiler.v2_1.spi.ExceptionTranslatingQueryContext.org$neo4j$cypher$internal$compiler$v2_1$spi$ExceptionTranslatingQueryContext$$translateException(ExceptionTranslatingQueryContext.scala:154)","org.neo4j.cypher.internal.compiler.v2_1.spi.ExceptionTranslatingQueryContext$ExceptionTranslatingOperations.setProperty(ExceptionTranslatingQueryContext.scala:121)","org.neo4j.cypher.internal.compiler.v2_1.spi.UpdateCountingQueryContext$CountingOps.setProperty(UpdateCountingQueryContext.scala:130)","org.neo4j.cypher.internal.compiler.v2_1.mutation.PropertySetAction.exec(PropertySetAction.scala:51)","org.neo4j.cypher.internal.compiler.v2_1.mutation.MergeNodeAction$$anonfun$exec$1.apply(MergeNodeAction.scala:80)","org.neo4j.cypher.internal.compiler.v2_1
Here is my session info:
> sessionInfo()
R version 3.1.0 (2014-04-10)
Platform: x86_64-apple-darwin13.1.0 (64-bit)
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RNeo4j_1.2.0
loaded via a namespace (and not attached):
[1] RCurl_1.95-4.1 RJSONIO_1.2-0.2 tools_3.1.0
It's also worth noting that I am using Neo4j 2.1.3.
Thanks in advance for any help.

This is an issue with how MERGE works. By setting the score property within the MERGE clause itself here...
MERGE (s)-[:HAS_TOPIC {score:{score} }]->(t)
...MERGE tries to match or create the entire pattern, and when it attempts the create, your uniqueness constraint is violated. Instead, do this:
MERGE (s)-[r:HAS_TOPIC]->(t)
SET r.score = {score}
I was able to import all of your data after making this change.
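For reference, the full query after that edit should look roughly like this (the node MERGE clauses are copied unchanged from the question; only the relationship's score property moves into a SET):

MERGE (s:School {unitid:{unitid},
                 instnm:{instnm},
                 obereg:{obereg},
                 carnegie:{carnegie},
                 applfeeu:{applfeeu},
                 enrlft:{enrlft},
                 applcn:{applcn},
                 admssn:{admssn},
                 admit_rate:{admit_rate},
                 ape:{ape},
                 sat25:{sat25},
                 sat75:{sat75} })
MERGE (t:Topic {topic_id:{topic_id},
                topic:{topic} })
MERGE (s)-[r:HAS_TOPIC]->(t)
SET r.score = {score}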

Related

Using Insert with a large multi-layered table using Lua

I am working on a script for GTA5 and I need to transfer data over to a JS script. To avoid sending multiple arrays to JS, I want a single table; the template for the table appears below.
The issue I'm having at the moment is in the second section, where I receive all vehicles and loop through each one to add it to said vehicleTable. I haven't been able to figure out how table.insert is used with a multi-layered table.
So far I've tried the following
table.insert(vehicleTable,vehicleTable[class][i][vehicleName])
This seems to store an 'object' (a table?), so it does not show up when accessed in the later for loop.
Next,
vehicleTable = vehicleTable + vehicleTable[class][i][vehicleName]
This seemed like it was going nowhere, as I either got an error or nothing happened.
Next,
table.insert(vehicleTable,class)
table.insert(vehicleTable[class],i)
table.insert(vehicleTable[class][i],vehicleName)
This one failed on the second line, and I'm unsure why; it never even reached the later problem I saw, which is that line 3 has no way to specify the "Name" field.
Lastly the current one,
local test = {[class] = {[i]={["Name"]=vehicleName}}}
table.insert(vehicleTable,test)
It works without errors, but ultimately it doesn't file the entry into the existing table; instead it seems to create its own branch, an object within the object.
After about 3 hours of zero progress on this topic, I turn to Stack Overflow for assistance.
local vehicleTable = {
    ["Sports"] = {
        [1] = {["Name"] = "ASS", ["Hash"] = "Asshole2"},
        [2] = {["Name"] = "ASS2", ["Hash"] = "Asshole1"}
    },
    ["Muscle"] = {
        [1] = {["Name"] = "Sedi", ["Hash"] = "Sedina5"}
    },
    ["Compacts"] = {
        [1] = {["Name"] = "MuscleCar", ["Hash"] = "MCar2"}
    },
    ["Sedan"] = {
        [1] = {["Name"] = "Blowthing", ["Hash"] = "Blowthing887"}
    }
}

local vehicles = GetAllVehicleModels();
for i = 1, #vehicles do
    local class = vehicleClasses[GetVehicleClassFromName(vehicles[i])]
    local vehicleName = GetLabelText(GetDisplayNameFromVehicleModel(vehicles[i]))
    print(vehicles[i] .. " " .. class .. " " .. vehicleName)
    local test = {[class] = {[i] = {["Name"] = vehicleName}}}
    table.insert(vehicleTable, test)
end

for k in pairs(vehicleTable) do
    print(k)
    -- for v in pairs(vehicleTable[k]) do
    --     print(v .. " " .. #vehicleTable[k])
    -- end
end
If there is no way to add to a table like this, how would I go about organizing all of this without needing to send a million (hash, name, etc...) requests to JS?
Any recommendations or support would be much appreciated.
Aside from the fact that you do not provide the definitions of several functions and tables used in your code (which would be necessary to give a complete answer without making assumptions), there are many misconceptions about very basic topics in Lua here.
The most prominent is that you don't know how to use table.insert and what it can do. It inserts (appends, by default) a value at a numeric index of a table. Given that your vehicleTable has non-numeric keys, this doesn't make much sense.
You also don't know how to use the + operator; it does not make any sense to add a table and a string.
Most of your code seems to be the result of guesswork and trial and error.
Instead of referring to the Lua manual to learn how to use table.insert and how to index tables properly, you spent 3 hours trying all kinds of variations of your incorrect code.
Assuming a vehicle model is a table like {["Name"] = "MyCar", ["Hash"] = "MyCarHash"}, you can add it to a vehicle class like so:
table.insert(vehicleTable["Sedan"], {["Name"] = "MyCar", ["Hash"] = "MyCarHash"})
This works because vehicleTable.Sedan has numeric indices. After that line it would contain two cars.
Read the manual. Then revisit your code and fix your errors.

How broadcast variables are used in dask parallelization

I have some code that applies a map function to a dask bag. I need a lookup dictionary to apply that function, and it doesn't work with client.scatter.
I don't know if I am doing the right thing, because the workers start but they don't do anything. I have tried different configurations based on different examples, but I can't get it to work. Any support will be appreciated.
I know that in Spark you define a broadcast variable and access its content via variable.value inside the function you want to apply. I don't see the same thing with dask.
import json
from io import TextIOWrapper

import dask.bag as db

# Function to map
def transform_contacts_add_to_historic_sin(data, historic_dict):
    raw_buffer = ''
    line = json.loads(data)
    if line['timestamp'] > historic_dict['timestamp']:
        raw_buffer = raw_buffer + line['vid']
    return raw_buffer

# main program
# historic_dict is a dictionary previously filled, which is the lookup variable for the map function
# file_records will be a list of json.dump strings read from an S3 file
from distributed import Client
client = Client()

historic_dict_scattered = client.scatter(historic_dict, broadcast=True)

file_records = []
raw_data = s3_procedure.read_raw_file(... S3 file.......)
data = TextIOWrapper(raw_data)
for line in data:
    file_records.append(line)

bag_chunk = db.from_sequence(file_records, npartitions=16)
bag_transform = bag_chunk.map(lambda x: transform_contacts_add_to_historic(x), args=[historic_dict_scattered])
bag_transform.compute()
If your dictionary is small you can just include it directly:

def func(partition, d):
    return ...

my_dict = {...}
b = b.map(func, d=my_dict)

If it's large then you might want to wrap it up in Dask delayed first:

my_dict = dask.delayed(my_dict)
b = b.map(func, d=my_dict)

If it's very large then yes, you might want to scatter it first (though I would avoid this if things work out with either of the approaches above):

[my_dict] = client.scatter([my_dict])
b = b.map(func, d=my_dict)

Neo4j - poor performance

I receive a full CSV file with all customers daily and need to insert/update the Neo4j database, so I am using the query below.
I have already created indexes on the hash field (see the sketch after the query).
MERGE (XFEWYX:CUSTOMER_BSELLER {hash:'xyz#hotmail.com'} ) ON CREATE SET XFEWYX.hash = 'xyz#hotmail.com',XFEWYX.name = 'XYZ ',XFEWYX.birthdate = '1975-05-20T00:00:00',XFEWYX.id = '1770852',XFEWYX.nick = 'CLARISSA',XFEWYX.documentNumber = 'XYZ',XFEWYX.email = 'clarissajuridica#hotmail.com' with XFEWYX
MERGE (WHHEKX:EMAIL {hash:'clarissajuridica#hotmail.com'} ) ON CREATE SET WHHEKX.hash = 'clarissajuridica#hotmail.com',WHHEKX.email = 'clarissajuridica#hotmail.com' with XFEWYX,WHHEKX
MERGE (JJKONT:DOCUMENT {hash:'06845078700'} ) ON CREATE SET JJKONT.document = 'XYZ',JJKONT.hash = '06845078700' with XFEWYX,WHHEKX,JJKONT
MERGE (MERUCB:PHONE {hash:'NoneNone'} ) ON CREATE SET MERUCB.areaCode = 'None',MERUCB.hash = 'NoneNone',MERUCB.number = 'None' with XFEWYX,WHHEKX,JJKONT,MERUCB
MERGE (BOORBT:PHONE {hash:'XYZ'} ) ON CREATE SET BOORBT.areaCode = '21',BOORBT.hash = 'XYZ',BOORBT.number = 'XYZ' with XFEWYX,WHHEKX,JJKONT,MERUCB,BOORBT
MERGE (XBLZNF:PHONE {hash:'XYZ'} ) ON CREATE SET XBLZNF.areaCode = '21',XBLZNF.hash = 'XYZ',XBLZNF.number = 'XYZ' with XFEWYX,WHHEKX,JJKONT,MERUCB,BOORBT,XBLZNF
MERGE (XFEWYX)-[:REGISTERED_WITH {optin:'false'}]->(WHHEKX)
MERGE (XFEWYX)-[:DOCUMENT]->(JJKONT)
MERGE (XFEWYX)-[:COMMERCIAL_PHONE]->(MERUCB)
MERGE (XFEWYX)-[:PHONE]->(XBLZNF)
MERGE (XFEWYX)-[:CELL_PHONE]->(BOORBT)
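For reference, the index creation looks along these lines (a sketch only; the labels are inferred from the query above rather than copied from my actual setup):

CREATE INDEX ON :CUSTOMER_BSELLER(hash);
CREATE INDEX ON :EMAIL(hash);
CREATE INDEX ON :DOCUMENT(hash);
CREATE INDEX ON :PHONE(hash);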
Does anyone have another approach for executing this query?
I would try using the PROFILE command. You might also put a LIMIT on your LOAD CSV (I assume you're using LOAD CSV) for testing.
I would also check out this article:
http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
Some of that has been fixed in recent versions of Neo4j, but you have an awful lot of MERGEs there, so you could probably stand to split some of that up and process your CSV file more than once, as sketched below.
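A minimal sketch of what that splitting could look like, assuming LOAD CSV; the file URL and the column names (customer_hash, name, email, email_hash) are placeholders, not taken from your data:

// First pass: customer nodes only; PROFILE and a LIMIT while testing
PROFILE
LOAD CSV WITH HEADERS FROM 'file:///customers.csv' AS row
WITH row LIMIT 1000
MERGE (c:CUSTOMER_BSELLER {hash: row.customer_hash})
  ON CREATE SET c.name = row.name, c.email = row.email;

// Second pass: email nodes and their relationship, nothing else
LOAD CSV WITH HEADERS FROM 'file:///customers.csv' AS row
MATCH (c:CUSTOMER_BSELLER {hash: row.customer_hash})
MERGE (e:EMAIL {hash: row.email_hash})
  ON CREATE SET e.email = row.email
MERGE (c)-[:REGISTERED_WITH]->(e);

Each pass keeps one MERGE pattern per label, which is easier to PROFILE and is the kind of splitting the article above discusses.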

sql.rows() in Groovy is running slow

I'm supporting a Grails web application which shows different visuals for clients using AmCharts. On one of the tabs there are three charts, each of which returns only the top ten rows from the database based on different measures. It takes 4-5 or sometimes even more to finish, while the query itself runs on the DB in under 10 seconds.
The following service method is called to return results:
List fetchTopPages(params, Map querySettings, String orderClause) {
    if (!((params['country'] && params['country'].size() > 0) ||
          (params['brand'] && params['brand'].size() > 0) ||
          (params['url'] && params['url'].size() > 0))) {
        throw new RuntimeException('Filters country or brand or url not selected.')
    }
    Sql sql = new Sql(dataSource)
    sql.withStatement { stmt -> stmt.fetchSize = 100 }
    Map filterParams = acquisitionService.getDateFilters(params, querySettings)
    ParamUtils.addWhereArgs(params, filterParams)
    String query = "This is where the query is"
    ParamUtils.saveQueryInRequest(ParamUtils.prettyPrintQuery(query, filterParams))
    log.debug("engagement pageviews-by-source query: " + ParamUtils.prettyPrintQuery(query, filterParams))
    List rows = sql.rows(query, filterParams)
    rows
}
After some investigation it was clear that the List rows = sql.rows(query, filterParams) line is the one that takes up this load time.
Has anyone experienced this issue before? Why is sql.rows() taking so long when it's only returning 10 rows worth of results, and the query runs super fast on the DB side?
Additional info:
DB: FSL1D
Running the following command on the DB side, java -jar ojdbc5.jar -getversion, returns:
"Oracle 11.2.0.3.0 JDBC 3.0 compiled with JDK5 on Thu_Jul_11_15:41:55_PDT_2013
Default Connection Properties Resource
Wed Dec 16 08:18:32 EST 2015"
Groovy Version: 2.3.7
Grails Version: 2.4.41
JDK: 1.7.0
My setup: Groovy version 2.3.6, JVM 1.8.0_11, and Oracle 12.1.0.2.0 using the ojdbc7.jar driver.
Note the activation of the 10046 trace before the run to allow diagnostics.
import oracle.jdbc.pool.OracleDataSource
def ods = new OracleDataSource();
ods.setURL('url')
ods.setUser('usr')
ods.setPassword('pwd')
def con = ods.getConnection()
def sql = new groovy.sql.Sql(con)
sql.withStatement { stmt -> stmt.fetchSize = 100 }
def SQL_QUERY = """select id, col1 from table1 order by id"""
def offset = 150
def maxRows = 20
// activate trace 10046
con.createStatement().execute "alter session set events '10046 trace name context forever, level 12'"
def t = System.currentTimeMillis()
def rows = sql.rows(SQL_QUERY, offset, maxRows)
println "time1 : ${System.currentTimeMillis()-t} with offset ${offset} and maxRows ${maxRows}"
Examination of the trace shows that the statement is parsed and executed; this means that if there is an ORDER BY clause, all of the data is sorted.
The fetch size is used correctly and no more than the required records are fetched - here 170 = 150 + 20.
With a fetch size of 100 this is done in two steps (note the r parameter - the number of fetched rows).
FETCH #627590664:c=0,e=155,p=0,cr=5,cu=0,mis=0,r=100,dep=0,og=1,plh=1169613780,tim=3898349818398
FETCH #627590664:c=0,e=46,p=0,cr=0,cu=0,mis=0,r=70,dep=0,og=1,plh=1169613780,tim=3898349851458
So basically the only problem I see is that the "skipped" rows are passed over the network to the client (to be ignored there).
With a very high offset this could produce a lot of overhead (and take more time than the same query run interactively, which only produces the first page).
But the best way to identify your problem is simply to enable the 10046 trace and see what is going on. I'm using level 12, which means you also get information about the waits in the DB and the bind variables.

Apache Pig: LIMIT inside FOREACH referencing toplevel field, Scalar has more than one row in the output

This question is similar to one asked two years ago; however, for some reason, it does not work for me. This is actually a combination of two ideas (answered questions), as given in the header. The example below replicates the accepted solution, yet it does not work for me. What is my mistake? I give a complete, self-contained example.
Here is the data:
cat in_detail.csv
grp,val
1,2.1,
1,4.2,
1,6.3
2,6.5
2,1.2
2,4.3
2,3.2
cat in_cnt.csv
grp,cnt
1,2
2,3
output expected (sort order not important):
grp,val
1,2.1,
1,4.2,
2,6.5
2,1.2
2,4.3
Here is the code, with the error message:
detail1 = LOAD '/tmp/sD_mvmd/c0nelha/data/in_detail.csv' using PigStorage(',') as (grp:chararray, num:double);
cnt1 = LOAD '/tmp/sD_mvmd/c0nelha/data/in_cnt.csv' using PigStorage(',') as (grp:chararray, cnt:int);

d_group = GROUP detail1 by (grp);
describe d_group;
-- d_group: {group: chararray,detail1: {(grp: chararray,num: double)}}
describe cnt1;
-- cnt1: {grp: chararray,cnt: int}

detail2 = JOIN d_group by (group), cnt1 by (grp);
describe detail2;
-- detail2: {d_group::group: chararray,d_group::detail1: {(grp: chararray,num: double)},cnt1::grp: chararray,cnt1::cnt: int}

detail3 = FOREACH detail2 {
    mySelection = LIMIT d_group::detail1 detail2.cnt1::cnt;
    GENERATE mySelection;
}

-- Apache Pig version 0.12.1.2.1.5.0-695 (rexported) compiled Aug 27 2014, 23:56:19
-- Backend error: Scalar has more than one row in the output.
-- 1st: (1,{(1,6.3),(1,4.2),(1,2.1)},1,2), 2nd: (2,{(2,3.2),(2,4.3),(2,1.2),(2,6.5)},2,3)
dump detail3;
