Can't recreate a just-deleted node with UniqueNodeFactory - Neo4j

I just created a node with UniqueNodeFactory and its relationship with UniqueRelationshipFactory. I deleted the node with NeoEclipse and then tried to recreate the same node: I get no exception, but the node is not recreated. Does anyone know why this is happening?
public Node getOrCreateNodeWithUniqueFactory(final Index<Node> nodeIndex, final String indexableKey,final String indexableValue) {
UniqueFactory<Node> factory = new UniqueFactory.UniqueNodeFactory( Global.graphDB.getGraphDbService(), nodeIndex.getName())
{
@Override
protected void initialize(Node created, Map<String, Object> properties) {
created.setProperty(indexableKey, properties.get(indexableKey));
}
};
return factory.getOrCreate( indexableKey, indexableValue );
}
public Relationship getOrCreateRelationshipTypeWithUniqueFactory(Index<Relationship> index, String indexableKey, final String indexableValue,
final RelationshipType type, final Node start, final Node end) {
UniqueFactory<Relationship> factory = new UniqueFactory.UniqueRelationshipFactory(index) {
@Override
protected Relationship create(Map<String, Object> properties) {
Relationship r = start.createRelationshipTo(end, type);
return r;
}
};
return factory.getOrCreate(indexableKey, indexableValue);
}

I can't reproduce your issue. What I get is that a new node is created the second time around. It would help to see the full source code. Also try the method getOrCreateWithOutcome on UniqueNodeFactory to see whether the node was actually created or not.
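For reference, here is a minimal sketch of that suggestion, reusing the factory from the question (the method name is just for this example; Global.graphDB and the index are assumed to be the same as above, and the call is assumed to run inside a transaction where your Neo4j version requires one):
public Node getOrCreateAndReport(final Index<Node> nodeIndex, final String indexableKey, final String indexableValue) {
    UniqueFactory<Node> factory = new UniqueFactory.UniqueNodeFactory(Global.graphDB.getGraphDbService(), nodeIndex.getName()) {
        @Override
        protected void initialize(Node created, Map<String, Object> properties) {
            created.setProperty(indexableKey, properties.get(indexableKey));
        }
    };
    // getOrCreateWithOutcome also reports whether the node was looked up or freshly created
    UniqueFactory.UniqueEntity<Node> result = factory.getOrCreateWithOutcome(indexableKey, indexableValue);
    System.out.println("wasCreated = " + result.wasCreated());
    return result.entity();
}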

Related

Saxon s9api integrated extension functions: providing a node

We are trying to pass a node to an integrated extension function. The node itself looks correct, but we can't access the individual elements because an out-of-bounds exception is always thrown.
How can we access the individual elements below the root element?
public ExtensionFunction updateTempNode = new ExtensionFunction() {
public QName getName() {
return new QName("de.dkl.dymoServer.util.ExternalFunctions", "updateTempNode");
}
public SequenceType getResultType() {
return SequenceType.makeSequenceType(
ItemType.BOOLEAN, OccurrenceIndicator.ONE
);
}
public net.sf.saxon.s9api.SequenceType[] getArgumentTypes() {
return new SequenceType[]{
SequenceType.makeSequenceType(
ItemType.STRING, OccurrenceIndicator.ONE),
SequenceType.makeSequenceType(
ItemType.DOCUMENT_NODE, OccurrenceIndicator.ONE)};
}
public XdmValue call(XdmValue[] arguments) {
String sessionId = arguments[0].itemAt(0).getStringValue();
SaplingElement tempNode = TransformationService.tempNodes.get(sessionId);
ItemTypeFactory itemTypeFactory = new ItemTypeFactory(((XdmNode) arguments[1]).getProcessor());
tempNode.withChild(
arguments[1].stream().map(xdmValue -> Saplings.elem(xdmValue.getStringValue()).withText(xdmValue.itemAt(0).getStringValue())).toList()
.toArray(SaplingElement[]::new)
);
System.out.println(tempNode);
return new XdmAtomicValue(true);
}
};
An ArrayIndexOutOfBoundsException is thrown as I try to iterate; the data is expected as a document node.
Wild guess is that you want something like
tempNode = tempNode.withChild(
arguments[1]
.select(Steps.child().then(Steps.child()))
.map(childNode -> Saplings.elem(childNode.getNodeName()).withText(childNode.itemAt(0).getStringValue()))
.collect(Collectors.toList())
.toArray(new SaplingElement[]{})
);
which would populate tempNode with copies of the child nodes of the root element of the document node that is arguments[1]. There might be better ways to do that.
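For what it's worth, here is a self-contained sketch of the same navigation, using a tiny hard-coded document instead of the real extension-function argument (the method name demo is just for this example; it assumes a Saxon version where the Steps and Saplings APIs are available):
import java.io.StringReader;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.SaxonApiException;
import net.sf.saxon.s9api.XdmNode;
import net.sf.saxon.s9api.streams.Steps;
import net.sf.saxon.sapling.SaplingElement;
import net.sf.saxon.sapling.Saplings;

public static void demo() throws SaxonApiException {
    Processor processor = new Processor(false);
    XdmNode doc = processor.newDocumentBuilder().build(
            new StreamSource(new StringReader("<root><a>1</a><b>2</b></root>")));
    SaplingElement temp = Saplings.elem("temp");
    // the child of the document node is the root element; its children are <a> and <b>
    temp = temp.withChild(
            doc.select(Steps.child().then(Steps.child()))
               .map(child -> Saplings.elem(child.getNodeName().getLocalName())
                                     .withText(child.getStringValue()))
               .toArray(SaplingElement[]::new));
    System.out.println(temp);
}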

Google Dataflow: writing multiple lines to BigQuery

I have a simple flow whose aim is to write two lines into one BigQuery table.
I use DynamicDestinations because afterwards I will write to multiple tables; in this example it happens to be the same table.
The problem is that I only have one line in my BigQuery table at the end.
In the stacktrace I see the following error on the second insert:
"
status: {
code: 6
message: "Already Exists: Job sampleprojet3:b9912b9b05794aec8f4292b2ae493612_eeb0082ade6f4a58a14753d1cc92ddbc_00001-0"
}
"
What does it mean?
Is it related to this limitation?
https://github.com/GoogleCloudPlatform/DataflowJavaSDK/issues/550
How can I get this working?
I use Beam SDK 2.0.0; I have also tried 2.1.0 (same result).
The way I launch it:
mvn compile exec:java -Dexec.mainClass=fr.gireve.dataflow.LogsFlowBug -Dexec.args="--runner=DataflowRunner --inputDir=gs://sampleprojet3.appspot.com/ --project=sampleprojet3 --stagingLocation=gs://dataflow-sampleprojet3/tmp" -Pdataflow-runner
Pipeline p = Pipeline.create(options);
final List<String> tableNameTableValue = Arrays.asList("table1:value1", "table1:value2", "table2:value1", "table2:value2");
p.apply(Create.of(tableNameTableValue)).setCoder(StringUtf8Coder.of())
.apply(BigQueryIO.<String>write()
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
.to(new DynamicDestinations<String, KV<String, String>>() {
@Override
public KV<String, String> getDestination(ValueInSingleWindow<String> element) {
final String[] split = element.getValue().split(":");
return KV.of(split[0], split[1]) ;
}
@Override
public Coder<KV<String, String>> getDestinationCoder() {
return KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of());
}
@Override
public TableDestination getTable(KV<String, String> row) {
String tableName = row.getKey();
String tableSpec = "sampleprojet3:testLoadJSON." + tableName;
return new TableDestination(tableSpec, "Table " + tableName);
}
@Override
public TableSchema getSchema(KV<String, String> row) {
List<TableFieldSchema> fields = new ArrayList<>();
fields.add(new TableFieldSchema().setName("myColumn").setType("STRING"));
TableSchema ts = new TableSchema();
ts.setFields(fields);
return ts;
}
})
.withFormatFunction(new SerializableFunction<String, TableRow>() {
public TableRow apply(String row) {
TableRow tr = new TableRow();
tr.set("myColumn", row);
return tr;
}
}));
p.run().waitUntilFinish();
Thanks
DynamicDestinations associates each element with a destination, i.e. where the element should go. Elements are routed to BigQuery tables according to their destinations: one destination corresponds to one BigQuery table with one schema, so the destination should include just enough information to produce a TableDestination and a schema. Elements with the same destination go to the same table; elements with different destinations go to different tables.
Your code snippet uses DynamicDestinations with a destination type that contains both the element and the table, which is unnecessary and violates the constraint above: elements with different destinations end up going to the same table. E.g. KV("table1", "value1") and KV("table1", "value2") are different destinations, but your getTable maps them both to the same table table1.
You need to remove the element from your destination type. That will also lead to simpler code. As a side note, I think you don't need to override getDestinationCoder() - it can be inferred automatically.
Try this:
.to(new DynamicDestinations<String, String>() {
@Override
public String getDestination(ValueInSingleWindow<String> element) {
return element.getValue().split(":")[0];
}
@Override
public TableDestination getTable(String tableName) {
return new TableDestination(
"sampleprojet3:testLoadJSON." + tableName, "Table " + tableName);
}
@Override
public TableSchema getSchema(String tableName) {
List<TableFieldSchema> fields = Arrays.asList(
new TableFieldSchema().setName("myColumn").setType("STRING"));
return new TableSchema().setFields(fields);
}
})
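A possible way to wire this into the original pipeline (a sketch only: destinations is a hypothetical variable holding the DynamicDestinations<String, String> above, and since each element is still the full "table:value" string, the format function here strips the table prefix before writing the value):
p.apply(Create.of(tableNameTableValue)).setCoder(StringUtf8Coder.of())
    .apply(BigQueryIO.<String>write()
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(destinations)
        .withFormatFunction(new SerializableFunction<String, TableRow>() {
            @Override
            public TableRow apply(String element) {
                // "table1:value1" -> row with myColumn = "value1"
                return new TableRow().set("myColumn", element.split(":")[1]);
            }
        }));
p.run().waitUntilFinish();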

DynamicDestinations in Apache Beam

I have a PCollection<String>, say "X", that I need to dump into a BigQuery table.
The table destination and its schema are in a PCollection<TableRow>, say "Y".
How can I accomplish this in the simplest manner?
I tried extracting the table and schema from "Y" and saving them in static global variables (tableName and schema, respectively). But oddly, BigQueryIO.writeTableRows() always sees the variable tableName as null, although it does get the schema. I logged the values of those variables and I can see the values are there for both.
Here is my pipeline code:
static String tableName;
static TableSchema schema;
PCollection<String> read = p.apply("Read from input file",
TextIO.read().from(options.getInputFile()));
PCollection<TableRow> tableRows = p.apply(
BigQueryIO.read().fromQuery(NestedValueProvider.of(
options.getfilename(),
new SerializableFunction<String, String>() {
@Override
public String apply(String filename) {
return "SELECT table,schema FROM `BigqueryTest.configuration` WHERE file='" + filename +"'";
}
})).usingStandardSql().withoutValidation());
final PCollectionView<List<String>> dataView = read.apply(View.asList());
tableRows.apply("Convert data read from file to TableRow",
ParDo.of(new DoFn<TableRow,TableRow>(){
@ProcessElement
public void processElement(ProcessContext c) {
tableName = c.element().get("table").toString();
String[] schemas = c.element().get("schema").toString().split(",");
List<TableFieldSchema> fields = new ArrayList<>();
for(int i=0;i<schemas.length;i++) {
fields.add(new TableFieldSchema()
.setName(schemas[i].split(":")[0]).setType(schemas[i].split(":")[1]));
}
schema = new TableSchema().setFields(fields);
//My code to convert data to TableRow format.
}}).withSideInputs(dataView));
tableRows.apply("write to BigQuery",
BigQueryIO.writeTableRows()
.withSchema(schema)
.to("ProjectID:DatasetID."+tableName)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
Everything works fine; only the BigQueryIO.write operation fails, and I get the error that the TableId is null.
I also tried using a SerializableFunction and returning the value from there, but I still get null.
Here is the code I tried for it:
tableRows.apply("write to BigQuery",
BigQueryIO.writeTableRows()
.withSchema(schema)
.to(new GetTable(tableName))
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
public static class GetTable implements SerializableFunction<String,String> {
String table;
public GetTable(String tableName) {
this.table = tableName;
}
@Override
public String apply(String arg0) {
return "ProjectId:DatasetId."+table;
}
}
I also tried using DynamicDestinations, but I get an error saying the schema is not provided. Honestly, I'm new to the concept of DynamicDestinations and I'm not sure I'm doing it correctly.
Here is the code I tried for it:
tableRows2.apply(BigQueryIO.writeTableRows()
.to(new DynamicDestinations<TableRow, TableRow>() {
private static final long serialVersionUID = 1L;
@Override
public TableDestination getTable(TableRow dest) {
List<TableRow> list = sideInput(bqDataView); //bqDataView contains table and schema
String table = list.get(0).get("table").toString();
String tableSpec = "ProjectId:DatasetId."+table;
String tableDescription = "";
return new TableDestination(tableSpec, tableDescription);
}
public String getSideInputs(PCollectionView<List<TableRow>> bqDataView) {
return null;
}
@Override
public TableSchema getSchema(TableRow destination) {
return schema; //schema is getting added from the global variable
}
@Override
public TableRow getDestination(ValueInSingleWindow<TableRow> element) {
return null;
}
}.getSideInputs(bqDataView)));
Please let me know what I'm doing wrong and which path I should take.
Thank You.
Part of the reason you're having trouble is the two stages of pipeline execution. First the pipeline is constructed on your machine; this is when all of the applications of PTransforms occur. In your first example, this is when the following lines are executed:
BigQueryIO.writeTableRows()
.withSchema(schema)
.to("ProjectID:DatasetID."+tableName)
The code within a ParDo, however, runs when your pipeline executes, and it does so on many machines. So the following code runs much later than pipeline construction:
@ProcessElement
public void processElement(ProcessContext c) {
tableName = c.element().get("table").toString();
...
schema = new TableSchema().setFields(fields);
...
}
This means that neither the tableName nor the schema field will be set when the BigQueryIO sink is created.
Your idea to use DynamicDestinations is correct, but you need to move the code that actually generates the schema and the destination into that class, rather than relying on global variables that aren't available on all of the machines.
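For example, something along these lines (a sketch only, not tested: it assumes the side input bqDataView from the question describes a single table, and that the "schema" column uses the same name:type,name:type format you parse above; the side input is declared via getSideInputs() and read via sideInput() inside the DynamicDestinations):
tableRows.apply("write to BigQuery",
    BigQueryIO.writeTableRows()
        .to(new DynamicDestinations<TableRow, String>() {
            @Override
            public List<PCollectionView<?>> getSideInputs() {
                // make the view available on the workers
                return Collections.singletonList(bqDataView);
            }
            @Override
            public String getDestination(ValueInSingleWindow<TableRow> element) {
                // every row goes to the single table described by the side input
                return sideInput(bqDataView).get(0).get("table").toString();
            }
            @Override
            public TableDestination getTable(String tableName) {
                return new TableDestination("ProjectId:DatasetId." + tableName, "");
            }
            @Override
            public TableSchema getSchema(String tableName) {
                List<TableFieldSchema> fields = new ArrayList<>();
                for (String column : sideInput(bqDataView).get(0).get("schema").toString().split(",")) {
                    String[] nameAndType = column.split(":");
                    fields.add(new TableFieldSchema().setName(nameAndType[0]).setType(nameAndType[1]));
                }
                return new TableSchema().setFields(fields);
            }
        })
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));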

Dozer - How can I make Dozer copy a value from source to target, even if it is already set in the target?

I am using Dozer (5.3.2) to map from a source Person object to a target Person object. The default constructor of the superclass of Person sets a UUID on the object. Because of that, when I map the source to a new Person, the UUID on the new Person is already set, and is thus not copied from the source object. In code (full code further down):
DozerBeanMapper MAPPER = // ... see code further down;
Person source = new Person();
Person target = new Person();
MAPPER.map(source, target);
What I want: After the call to MAPPER.map(...), I want target.getUuid() to equal source.getUuid(). As of now, they are different (because they are set to UUID.randomUUID in the super constructor, see below).
I could solve this by setting
target.setUuid(null);
before calling MAPPER.map(...), but that is not what I want, as this is a simplification of a more complex problem. Is there some way to configure Dozer, or to write custom classes, so that the mapper sets values in the target object even if the value in the target object is not null?
My code is as follows:
public abstract class AbstractEntity
{
private UUID uuid;
public AbstractEntity() {
this(UUID.randomUUID());
}
public AbstractEntity(UUID uuid) {
this.uuid = uuid;
}
public UUID getUuid()
{
return this.uuid;
}
public void setUuid(final UUID uuid)
{
this.uuid = uuid;
}
}
public class Person extends AbstractEntity
{
// Getters and setters...
}
Here is the code that creates and executes the Dozer mapper:
DozerBeanMapper MAPPER = new DozerBeanMapper();
BeanMappingBuilder builder = new BeanMappingBuilder() {
protected void configure() {
mapping(UUID.class, UUID.class, TypeMappingOptions.oneWay(), TypeMappingOptions.mapNull(true),
TypeMappingOptions.beanFactory(UuidBeanFactory.class.getName()));
}
};
MAPPER.addMapping(builder);
Person source = new Person();
Person target = new Person();
MAPPER.map(source, target);
Code for the UuidBeanFactory - I am using this because UUID does not have an empty constructor, which otherwise makes Dozer throw an exception:
public class UuidBeanFactory implements BeanFactory
{
@Override
public Object createBean(Object sourceObject, Class<?> aClass, String s)
{
if (sourceObject == null)
{
return null;
}
UUID source = (UUID) sourceObject;
UUID target = new UUID(source.getMostSignificantBits(), source.getLeastSignificantBits());
return target;
}
}

ItemDescriptionGenerator for Vaadin TreeTable always receives null for the column (propertyId)

I'm using Vaadin's TreeTable and I'm trying to add tooltips for my rows. This is how the documentation says it should be done, but the propertyId is always null, so I can't determine the correct column. And yes, I've run this in the Eclipse debugger as well =)
Code related to this part:
private void init() {
setDataSource();
addGeneratedColumn("title", new TitleColumnGenerator());
addGeneratedColumn("description", new DescriptionGenerator());
setColumnExpandRatios();
setItemDescriptionGenerator(new TooltipGenerator());
}
protected class TooltipGenerator implements ItemDescriptionGenerator{
private static final long serialVersionUID = 1L;
@Override
public String generateDescription(Component source, Object itemId, Object propertyId) {
TaskRow taskRow = (TaskRow)itemId;
if("description".equals(propertyId)){
return taskRow.getDescription();
}else if("title".equals(propertyId)){
return taskRow.getTitle();
}else if("category".equals(propertyId)){
return taskRow.getCategory().toString();
}else if("operation".equals(propertyId)){
return taskRow.getOperation().toString();
}else if("resourcePointer".equals(propertyId)){
return taskRow.getResourcePointer();
}else if("taskState".equals(propertyId)){
return taskRow.getTaskState().toString();
}
return null;
}
}
I have passed the source object as the itemId when adding an item to the tree.
Node node = ...;
Item item = tree.addItem(node);
This uses the object "node" as the id, which then allows me to cast itemId to an instance of Node in the generateDescription method.
public String generateDescription(Component source, Object itemId, Object propertyId) {
if (itemId instanceof Node) {
Node node = (Node) itemId;
...
Maybe not the best solution, but it works for me. Then again, I am adding items directly to the tree rather than using a data container.
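Put together, a minimal sketch of the workaround (assuming a Vaadin 7 TreeTable and a hypothetical Node class with a getDescription() method):
TreeTable treeTable = new TreeTable();
Node node = new Node(); // hypothetical domain object
treeTable.addItem(node); // the Node instance itself becomes the itemId
treeTable.setItemDescriptionGenerator(new ItemDescriptionGenerator() {
    @Override
    public String generateDescription(Component source, Object itemId, Object propertyId) {
        if (itemId instanceof Node) {
            // one tooltip for the whole row, independent of propertyId
            return ((Node) itemId).getDescription();
        }
        return null;
    }
});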
