Virtuoso R2RML rr:IRI generating - mapping

I have a problem with generating rr:termType rr:IRI in Virtuoso. I don't know whether I'm doing it wrong, but I followed the W3C specification.
My mapping looks like this. When I generate triples with a CONSTRUCT statement, I still get a plain string "URL" rather than an IRI => <url> (the OWNER_LINK and BRAND_LINK columns). Is this something Virtuoso doesn't support, or am I coding it the wrong way?
DB.DBA.TTLP
( '
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix gr: <http://purl.org/goodrelations/v1#> .
@prefix s: <http://schema.org/> .
@prefix pod: <http://linked.opendata.cz/ontology/product-open-data.org#> .
<#TriplesMap3>
a rr:TriplesMap ;
rr:logicalTable
[
rr:tableSchema "POD" ;
rr:tableOwner "DBA" ;
rr:tableName "BRAND_OWNER_BSIN"
];
rr:subjectMap
[
rr:template "http://linked.opendata.cz/resource/brand-owner-bsin/{BSIN}" ;
rr:class gr:BusinessEntity ;
rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
];
rr:predicateObjectMap
[
rr:predicate gr:hasBrand ;
rr:objectMap
[
rr:parentTriplesMap <#TriplesMap4> ;
rr:joinCondition
[
rr:child "OWNER_CD" ;
rr:parent "OWNER_CD" ;
]; ]; ];
.
<#TriplesMap4>
a rr:TriplesMap ;
rr:logicalTable
[
rr:tableSchema "POD" ;
rr:tableOwner "DBA" ;
rr:tableName "BRAND_OWNER"
];
rr:subjectMap
[
rr:template "http://linked.opendata.cz/resource/brand-owner/{OWNER_CD}" ;
rr:class gr:BusinessEntity ;
rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
];
rr:predicateObjectMap
[
rr:predicate gr:legalName ;
rr:objectMap
[ rr:column "OWNER_NM" ];
];
rr:predicateObjectMap
[
rr:predicate s:url ;
rr:objectMap
[
rr:termType rr:IRI ;
rr:column {OWNER_LINK} ;
]; ];
rr:predicateObjectMap
[
rr:predicate gr:hasBrand ;
rr:objectMap
[
rr:parentTriplesMap <#TriplesMap3> ;
rr:joinCondition
[
rr:child "OWNER_CD" ;
rr:parent "OWNER_CD" ;
]; ]; ];
.
<#TriplesMap2>
a rr:TriplesMap;
rr:logicalTable
[
rr:tableSchema "POD";
rr:tableOwner "DBA";
rr:tableName "BRAND_TYPE"
];
rr:subjectMap
[
rr:template "http://linked.opendata.cz/resource/brand-type/{BRAND_TYPE_CD}" ;
rr:class gr:BusinessEntityType ;
rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
];
rr:predicateObjectMap
[
rr:predicate gr:name ;
rr:objectMap
[ rr:column "BRAND_TYPE_NM" ];
];
.
<#TriplesMap1>
a rr:TriplesMap;
rr:logicalTable
[
rr:tableSchema "POD" ;
rr:tableOwner "DBA" ;
rr:tableName "BRAND"
];
rr:subjectMap
[
rr:template "http://linked.opendata.cz/resource/brand/{BSIN}" ;
rr:class gr:Brand ;
rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
];
rr:predicateObjectMap
[
rr:predicate pod:bsin ;
rr:objectMap [ rr:column "BSIN" ] ;
];
rr:predicateObjectMap
[
rr:predicate gr:name ;
rr:objectMap [ rr:column "BRAND_NM" ] ;
];
rr:predicateObjectMap
[
rr:predicate s:url ;
rr:objectMap
[
rr:termType rr:IRI ;
rr:column "BRAND_LINK" ;
]; ];
rr:predicateObjectMap
[
rr:predicate gr:BusinessEntityType ;
rr:objectMap
[
rr:parentTriplesMap <#TriplesMap2> ;
rr:joinCondition
[
rr:child "BRAND_TYPE_CD" ;
rr:parent "BRAND_TYPE_CD" ;
]; ]; ];
.
',
'http://product-open-data.org/temp',
'http://product-open-data.org/temp'
);
exec ( 'sparql ' || DB.DBA.R2RML_MAKE_QM_FROM_G ('http://product-open-data.org/temp') );

So I figured out my code was wrong; it should be like this:
rr:predicateObjectMap
[
rr:predicateMap
[
rr:constant s:url
];
rr:objectMap
[
rr:termType rr:IRI ;
rr:template "{BRAND_LINK}" ;
];
];
and it's working now.
Thank you.

To be clear — are you saying the R2RML mapping is loading successfully, but when running a SPARQL CONSTRUCT query, the rr:termType rr:IRI mapping is not being displayed in the result set?
As the docs indicate, only rr:sqlQuery is not currently supported ...
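For anyone verifying the fix: a quick sanity check (a sketch, assuming the graph and predicate from the mapping above) is to ask SPARQL directly whether the generated objects are IRIs:
SPARQL
SELECT ?s ?url (isIRI(?url) AS ?isIri)
FROM <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
WHERE { ?s <http://schema.org/url> ?url }
LIMIT 10;
With the rr:template form above, ?isIri should come back true; with the plain rr:column form the objects stay literals.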

Related

Validating neo4j graph using SHACL, when a relationship can lead to a node belonging to one of two specific types

I have the following graph in neo4j (the below is a Cypher query that creates it).
CREATE (banana:Fruit {name:'Banana'})
CREATE (apple:Fruit {name:'Apple'})
CREATE (fruit_salad:Dish {name:'Fruit Salad'})
CREATE (banana_gun:Weapon {name:'Banana Gun'})
CREATE
(apple)-[:can_be_used_in]->(fruit_salad),
(banana)-[:can_be_used_in]->(fruit_salad),
(banana)-[:can_be_used_in]->(banana_gun)
I am trying to write a SHACL schema for validating this graph.
The tricky part here is that the relationship "can_be_used_in" can lead from Fruit to either Dish or Weapon type nodes.
I don't want to allow for any other relationships from the Fruit type, so I am using "sh:closed true ;".
I suspect what I need can somehow be done via the sh:or or sh:xone blocks.
I attempted this (again, a Cypher query for loading the schema):
call n10s.validation.shacl.import.inline('
@prefix neo4j: <neo4j://graph.schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
# ---- Fruit -------------
neo4j:FruitShape a sh:NodeShape ;
sh:targetClass neo4j:Fruit ;
sh:closed true ;
sh:property [
sh:path neo4j:name ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:datatype xsd:string ;
];
sh:or(
sh:property [
sh:path neo4j:can_be_used_in ;
sh:class neo4j:Weapon ;
]
sh:property [
sh:path neo4j:can_be_used_in ;
sh:class neo4j:Dish ;
]
);
.
','Turtle')
However this gives me the following output:
[{'focusNode': 552,
'nodeType': 'Fruit',
'shapeId': 'neo4j://graph.schema#FruitShape',
'propertyShape': 'http://www.w3.org/ns/shacl#ClosedConstraintComponent',
'offendingValue': '555, 554',
'resultPath': 'can_be_used_in',
'severity': 'http://www.w3.org/ns/shacl#Violation',
'resultMessage': 'Closed type definition does not include this property/relationship'},
{'focusNode': 553,
'nodeType': 'Fruit',
'shapeId': 'neo4j://graph.schema#FruitShape',
'propertyShape': 'http://www.w3.org/ns/shacl#ClosedConstraintComponent',
'offendingValue': '554',
'resultPath': 'can_be_used_in',
'severity': 'http://www.w3.org/ns/shacl#Violation',
'resultMessage': 'Closed type definition does not include this property/relationship'}]
Then, I attempted the following:
call n10s.validation.shacl.import.inline('
@prefix neo4j: <neo4j://graph.schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
# ---- Fruit -------------
neo4j:FruitShape a sh:NodeShape ;
sh:targetClass neo4j:Fruit ;
sh:closed true ;
sh:property [
sh:path neo4j:name ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:datatype xsd:string ;
];
sh:property [
sh:path neo4j:can_be_used_in ;
sh:or(
[
sh:class neo4j:Whatever ;
]
[
sh:class neo4j:ThisDoesntExist ;
]
);
] ;
.
','Turtle')
But this just returns an empty result (suggesting a successful validation), even though the types used in the 'or' statement don't exist.
I also attempted these with sh:xone instead, and that didn't work either.
I am at a loss here: is this a SHACL/neo4j integration issue, or am I using it wrong?
Below is my full example in Python, in case it helps:
from py2neo import Graph
graph = Graph(password="test_database")
graph.delete_all()
example_graph_query = """
CREATE (banana:Fruit {name:'Banana'})
CREATE (apple:Fruit {name:'Apple'})
CREATE (fruit_salad:Dish {name:'Fruit Salad'})
CREATE (banana_gun:Weapon {name:'Banana Gun'})
CREATE
(apple)-[:can_be_used_in]->(fruit_salad),
(banana)-[:can_be_used_in]->(fruit_salad),
(banana)-[:can_be_used_in]->(banana_gun)
"""
graph.run(example_graph_query)
shacl_schema="""
call n10s.validation.shacl.import.inline('
@prefix neo4j: <neo4j://graph.schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
# ---- Fruit -------------
neo4j:FruitShape a sh:NodeShape ;
sh:targetClass neo4j:Fruit ;
sh:closed true ;
sh:property [
sh:path neo4j:name ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:datatype xsd:string ;
];
sh:or(
sh:property [
sh:path neo4j:can_be_used_in ;
sh:class neo4j:Weapon ;
]
sh:property [
sh:path neo4j:can_be_used_in ;
sh:class neo4j:Dish ;
]
);
.
','Turtle')
"""
shacl_schema_ver_2="""
call n10s.validation.shacl.import.inline('
@prefix neo4j: <neo4j://graph.schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
# ---- Fruit -------------
neo4j:FruitShape a sh:NodeShape ;
sh:targetClass neo4j:Fruit ;
sh:closed true ;
sh:property [
sh:path neo4j:name ;
sh:maxCount 1 ;
sh:minCount 1 ;
sh:datatype xsd:string ;
];
sh:property [
sh:path neo4j:can_be_used_in ;
sh:or(
[
sh:class neo4j:Whatever ;
]
[
sh:class neo4j:ThisDoesntExist ;
]
);
] ;
.
','Turtle')
"""
data = graph.run(shacl_schema).data()
validation_query="""
call n10s.validation.shacl.validate()
"""
result = graph.run(validation_query)
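One way to narrow down whether this is an n10s limitation (a sketch, not a confirmed diagnosis): run the same shape over an RDF export of the graph with a standalone SHACL engine such as pySHACL and compare the reports. The file names below are hypothetical; the shape is the sh:property/sh:or variant from above.
# pip install pyshacl rdflib
from pyshacl import validate
from rdflib import Graph

shapes = Graph().parse("fruit_shapes.ttl", format="turtle")  # the FruitShape above
data = Graph().parse("fruit_data.ttl", format="turtle")      # RDF export of the neo4j graph

# validate() returns (conforms, results_graph, results_text)
conforms, _, results_text = validate(data_graph=data, shacl_graph=shapes)
print(conforms)
print(results_text)
If pySHACL flags the non-existent classes while n10s stays silent, that points at the integration rather than the shape.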

How to variabilize sshTransfer in a jenkinsfile?

I'm currently using a simple pipeline in order to delete logs older than 7 days on my servers.
I have 4 servers where the logs are in the same place, and thus 4 times the same sshTransfer command.
Is there any way to variabilize those 4 redundant sshTransfer blocks?
Here is my code for more clarity :)
stage('Cleanup logs') {
steps {
script {
echo "1. Connection to server 1"
sshPublisher (
publishers:[
sshPublisherDesc(
configName: server1,
transfers:[
sshTransfer(
execCommand:"find /var/opt/application/log/* -mtime +6 -type f -delete",
execTimeout: 20000,
)
]
)
]
)
echo "2. Connection to server 2"
sshPublisher (
publishers:[
sshPublisherDesc(
configName: server2,
transfers:[
sshTransfer(
execCommand:"find /var/opt/application/log/* -mtime +6 -type f -delete",
execTimeout: 20000,
)
]
)
]
)
echo "3. Connection to server 3"
sshPublisher (
publishers:[
sshPublisherDesc(
configName: server3,
transfers:[
sshTransfer(
execCommand:"find /var/opt/application/log/* -mtime +6 -type f -delete",
execTimeout: 20000,
)
]
)
]
)
echo "4. Connection to server 4"
sshPublisher (
publishers:[
sshPublisherDesc(
configName: server4,
transfers:[
sshTransfer(
execCommand:"find /var/opt/application/log/* -mtime +6 -type f -delete",
execTimeout: 20000,
)
]
)
]
)
}
}
}
Thank you!
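Not a definitive answer, but the usual pattern (a sketch, assuming server1..server4 already hold the Publish over SSH config names used above) is to extract the repeated block into a Groovy helper and loop over the servers:
def cleanupLogs(String serverConfig) {
    // one sshPublisher call, parameterized by the SSH config name
    sshPublisher(
        publishers: [
            sshPublisherDesc(
                configName: serverConfig,
                transfers: [
                    sshTransfer(
                        execCommand: 'find /var/opt/application/log/* -mtime +6 -type f -delete',
                        execTimeout: 20000
                    )
                ]
            )
        ]
    )
}
stage('Cleanup logs') {
    steps {
        script {
            [server1, server2, server3, server4].eachWithIndex { srv, i ->
                echo "${i + 1}. Connection to server ${i + 1}"
                cleanupLogs(srv)
            }
        }
    }
}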

Jena TDB2 Assembler load data from file

Hi guys, I am trying to load data with the TDB2 assembler:
@prefix cq: <http://www.example.co.uk/hya> .
@prefix tdb: <http://jena.apache.org/2016/tdb#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix text: <http://jena.apache.org/text#> .
@prefix lm: <http://jena.hpl.hp.com/2004/08/location-mapping#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
[] ja:loadClass "org.apache.jena.tdb2.TDB2" .
tdb:DatasetTDB2 rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB2 rdfs:subClassOf ja:Model .
<#dataset> rdf:type tdb:DatasetTDB2 ;
tdb:location "DB" ;
tdb:unionDefaultGraph true ;
.
<#data1> rdf:type tdb:GraphTDB ;
tdb:dataset <#dataset> ;
tdb:graphName <http://www.example.co.uk/hya/qdata> ;
ja:content [ja:externalContent <file:////Volumes/data/project/src/test/resources/metadata.ttl>;];
.
which is not working when I try to assemble the dataset:
Dataset dataset = TDB2Factory.assembleDataset("/Volumes/data/project/src/main/resources/tdb/tdb-assembler.ttl");
if(dataset ==null) {
log.debug("failed");
}else {
1. Model model = dataset.getUnionModel();
2. Model model = dataset.getNamedModel("<dataset>");
3. Model model = dataset.getDefaultModel();
dataset.begin(ReadWrite.WRITE);
model.write(System.out, Lang.TTL.getName());
}
I have tried to print the model in 3 different ways, without success.
Can someone suggest the best way to address this issue, and point me to any reference documentation around TDB2?
Ideally I'd like to configure it later with the following:
:indexed-dataset
rdf:type text:TextDataset ;
text:dataset <#dataset> ;
text:index <#indexLuceneText> ;
.
# Text index description
<#indexLuceneText> a text:TextIndexLucene ;
text:directory <file:TDB/LUCENE> ;
text:entityMap <#entMap> ;
text:storeValues true ;
text:analyzer [ a text:StandardAnalyzer ] ;
text:queryAnalyzer [ a text:KeywordAnalyzer ] ;
text:queryParser text:AnalyzingQueryParser ;
text:multilingualSupport true ;
.
<#entMap> a text:EntityMap ;
According to the assembler schema,
"every model can have some content specified by a Content object."
ja:content a rdf:Property
; rdfs:label "Assembler.content"
; rdfs:comment
"""specifies that the subject Loadable is to be loaded with
all the contents specified by the object Content.
"""
; rdfs:domain ja:Loadable
; rdfs:range ja:Content
.
Persistent databases (TDB1, TDB2) don't process ja:content because that is a directive to load data each time into a memory model.
Instead, load the data with a TDB bulkloader beforehand.
For the text index, load the index with the textindexer command.
https://jena.apache.org/documentation/query/text-query.html
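A sketch of the bulk-load step, assuming Jena's command-line tools are on the PATH and the locations from the assembler above (the fuseki-server.jar used for textindexer is a placeholder; see the linked docs):
tdb2.tdbloader --loc DB /Volumes/data/project/src/test/resources/metadata.ttl
java -cp fuseki-server.jar jena.textindexer --desc=tdb-assembler.ttl
After loading, wrap reads in a transaction before printing the model, e.g.:
Dataset dataset = TDB2Factory.assembleDataset("/Volumes/data/project/src/main/resources/tdb/tdb-assembler.ttl");
dataset.begin(ReadWrite.READ);
try {
    Model model = dataset.getUnionModel();
    model.write(System.out, Lang.TTL.getName());
} finally {
    dataset.end();
}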

pwm-backlight driver not being probed in u-boot

I'm trying to get PWM working on a custom am33xx board (same BeagleBone Black target). For some reason I don't see the pwm-backlight driver being probed, and thus no PWM output, as confirmed on my scope. Here are my relevant source files:
dts snippet:
/dts-v1/;
#include "am33xx.dtsi"
#include <dt-bindings/interrupt-controller/irq.h>
/ {
model = "test";
compatible = "ti,am33xx";
chosen {
stdout-path = &uart0;
};
backlight: backlight {
status = "okay";
compatible = "pwm-backlight";
pwms = <&ehrpwm1 0 10000 0>;
brightness-levels = <0 10 20 30 40 50 60 70 80 90 99>;
default-brightness-level = <6>;
};
};
&am33xx_pinmux {
ehrpwm1_pins: pinmux-ehrpwm1-pins {
pinctrl-single,pins = <
AM33XX_IOPAD(0x848, PIN_OUTPUT_PULLDOWN | MUX_MODE6) /* gpmc_a2.ehrpwm1a */
>;
};
};
&ehrpwm1 {
u-boot,dm-spl;
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&ehrpwm1_pins>;
};
defconfig
CONFIG_DM=y
CONFIG_CMD_DM=y
CONFIG_DM_VIDEO=y
CONFIG_DM_PWM=y
CONFIG_BACKLIGHT_PWM=y
pwm-backlight driver info
config BACKLIGHT_PWM
bool "Generic PWM based Backlight Driver"
depends on DM_VIDEO && DM_PWM
default y
help
If you have a LCD backlight adjustable by PWM, say Y to enable
this driver.
This driver can be used with "simple-panel" and
it understands the standard device tree
(leds/backlight/pwm-backlight.txt)
(linux version)
https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/leds/backlight/pwm-backlight.txt
and when I interrupt U-Boot and run dm tree, you can see that it's not probed. Why?
=> dm tree
Class Index Probed Driver Name
-----------------------------------------------------------
root 0 [ + ] root_driver root_driver
simple_bus 0 [ + ] generic_simple_bus |-- ocp
simple_bus 1 [ ] generic_simple_bus | |-- l4_wkup@44c00000
simple_bus 2 [ ] generic_simple_bus | | |-- prcm@200000
simple_bus 3 [ ] generic_simple_bus | | `-- scm@210000
syscon 0 [ ] syscon | | `-- scm_conf@0
gpio 0 [ ] gpio_omap | |-- gpio@44e07000
gpio 1 [ ] gpio_omap | |-- gpio@4804c000
gpio 2 [ ] gpio_omap | |-- gpio@481ac000
gpio 3 [ ] gpio_omap | |-- gpio@481ae000
serial 0 [ + ] omap_serial | |-- serial@44e09000
mmc 0 [ + ] omap_hsmmc | |-- mmc@481d8000
timer 0 [ + ] omap_timer | |-- timer@48040000
timer 1 [ ] omap_timer | |-- timer@48042000
timer 2 [ ] omap_timer | |-- timer@48044000
timer 3 [ ] omap_timer | |-- timer@48046000
timer 4 [ ] omap_timer | |-- timer@48048000
timer 5 [ ] omap_timer | |-- timer@4804a000
misc 0 [ + ] ti-musb-wrapper | `-- usb@47400000
usb 0 [ + ] ti-musb-peripheral | `-- usb@47401000
eth 0 [ + ] usb_ether | `-- usb_ether
backlight 0 [ ] pwm_backlight `-- backlight
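A hedged note rather than a confirmed diagnosis: in U-Boot's driver model, [ ] in dm tree means the device is bound but never probed, and devices are only probed on demand, typically when a panel/video driver looks up its backlight; nothing in the tree above references the backlight node. A minimal sketch of board code that would force the probe, assuming the standard backlight uclass API:
#include <common.h>
#include <dm.h>
#include <backlight.h>

/* Force-probe the first backlight device and switch it on. */
static int probe_backlight(void)
{
	struct udevice *dev;
	int ret;

	/* uclass_get_device() probes the device as a side effect */
	ret = uclass_get_device(UCLASS_PANEL_BACKLIGHT, 0, &dev);
	if (ret)
		return ret;

	return backlight_enable(dev);
}
If the forced probe fails, its return code should point at what is missing (e.g. the PWM device not being available).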

Failed to deserialize exception response from stream - Grails ElasticSearch

I have a simple Grails app, which I am trying to set up to use ElasticSearch.
I have a single-node ElasticSearch instance running on EC2, which is running happily enough. (For reference, I just followed the steps here: http://www.elasticsearch.org/tutorials/elasticsearch-on-ec2/, but using 0.90.7 and the cloud-aws plugin version 1.15.0.)
I am using the Grails ElasticSearch GORM plugin (http://grails.org/plugin/elasticsearch-gorm) (master branch) and I'm connecting to ES using the transport client mode (elasticSearch.client.mode = 'transport').
Here's where it gets really odd...
The first time I boot up my app, it will happily index my Domain data on ES, I can query, etc, no problems.
If I then restart my Grails app, it won't launch at all. I get:
Message: Error creating bean with name 'searchableClassMappingConfigurator': Invocation of init method failed; nested exception is org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
Line | Method
->> 262 | run in java.util.concurrent.FutureTask
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 724 | run in java.lang.Thread
Caused by TransportSerializationException: Failed to deserialize exception response from stream
->> 169 | handlerResponseError in org.elasticsearch.transport.netty.MessageChannelHandler
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 123 | messageReceived in ''
| 70 | handleUpstream in org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler
| 564 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline
| 791 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext
| 296 | fireMessageReceived in org.elasticsearch.common.netty.channel.Channels
| 462 | unfoldAndFireMessageReceived in org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder
| 443 | callDecode in ''
| 310 | messageReceived in ''
| 70 | handleUpstream in org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler
| 564 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline
| 559 | sendUpstream in ''
| 268 | fireMessageReceived in org.elasticsearch.common.netty.channel.Channels
| 255 | fireMessageReceived in ''
| 88 | read . . in org.elasticsearch.common.netty.channel.socket.nio.NioWorker
| 108 | process in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker
| 318 | run . . . in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector
| 89 | run in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker
| 178 | run . . . in org.elasticsearch.common.netty.channel.socket.nio.NioWorker
| 108 | run in org.elasticsearch.common.netty.util.ThreadRenamingRunnable
| 42 | run . . . in org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 724 | run in java.lang.Thread
Caused by StreamCorruptedException: unexpected end of block data
->> 1370 | readObject0 in java.io.ObjectInputStream
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 1989 | defaultReadFields in ''
| 499 | defaultReadObject in ''
| 914 | readObject in java.lang.Throwable
| 1017 | invokeReadObject in java.io.ObjectStreamClass
| 1891 | readSerialData in java.io.ObjectInputStream
| 1796 | readOrdinaryObject in ''
| 1348 | readObject0 in ''
| 1989 | defaultReadFields in ''
| 499 | defaultReadObject in ''
| 914 | readObject in java.lang.Throwable
| 1017 | invokeReadObject in java.io.ObjectStreamClass
| 1891 | readSerialData in java.io.ObjectInputStream
| 1796 | readOrdinaryObject in ''
| 1348 | readObject0 in ''
| 370 | readObject in ''
| 167 | handlerResponseError in org.elasticsearch.transport.netty.MessageChannelHandler
| 123 | messageReceived in ''
| 70 | handleUpstream in org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler
| 564 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline
| 791 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext
| 296 | fireMessageReceived in org.elasticsearch.common.netty.channel.Channels
| 462 | unfoldAndFireMessageReceived in org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder
| 443 | callDecode in ''
| 310 | messageReceived in ''
| 70 | handleUpstream in org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler
| 564 | sendUpstream in org.elasticsearch.common.netty.channel.DefaultChannelPipeline
| 559 | sendUpstream in ''
| 268 | fireMessageReceived in org.elasticsearch.common.netty.channel.Channels
| 255 | fireMessageReceived in ''
| 88 | read . . in org.elasticsearch.common.netty.channel.socket.nio.NioWorker
| 108 | process in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker
| 318 | run . . . in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector
| 89 | run in org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker
| 178 | run . . . in org.elasticsearch.common.netty.channel.socket.nio.NioWorker
| 108 | run in org.elasticsearch.common.netty.util.ThreadRenamingRunnable
| 42 | run . . . in org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 724 | run in java.lang.Thread
This happens until I change the ElasticSearch host details; i.e., I can't boot my app at all, with the original host details, ever again.
Both the ES node and my Grails app are using ElasticSearch 0.90.7; my config for the ES plugin looks like so:
elasticSearch.client.mode = 'transport'
elasticSearch.client.hosts = [[host:'<my EC2 DNS>', port:9300]]
elasticSearch.datastoreImpl = 'mongoDatastore'
elasticSearch.client.transport.sniff = true
The only domain object I am marking as 'searchable' is mapped with MongoDB, and looks like so:
class CompletedApplicationFormSearchEntry {
static searchable = true
Long formId
Long jobId
Long employerId
Long jobseekerId
Date applicationDate
static mapWith = "mongo"
static constraints = {
}
}
If I remove the searchable attribute from the domain class and then relaunch the app, it launches fine, so I assume there's something going on in the bootstrapping process when the domain object is detected as being searchable; but of course, it only causes an issue once the app's been restarted.
There are a handful of threads kicking about where people are seeing similar issues where they have nodes running different ES versions, different JVM versions, etc. But in this case, I only have one node!
I am absolutely tearing my hair out over this; I just can't work out what on earth's going wrong. I've tried different plugin versions, different ElasticSearch versions, a 32-bit EC2 instance, a 64-bit EC2 instance... no luck!
Not with the Grails ES plugin, but I fixed a similar issue by pinning the JVM version in the IDE (used by the ES client) to match the one the ES master was running on.
STEP 0: Make sure the API and the actual ES versions are identical.
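As a quick sketch for that check (assuming the EC2 host from the question): the root endpoint reports the server version, which you can compare against the ES jar on the client's classpath.
$ curl -XGET "http://<my EC2 DNS>:9200/"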
STEP 1: Check the JVM versions used by the ES nodes.
In the following case, I had two different JVM versions, as shown by the "jvm" JSON key of both ES nodes.
$ curl -XGET "http://localhost:9200/_nodes?jvm=true&pretty=true"
{
"cluster_name" : "elasticsearch",
"nodes" : {
"A6PDUvlWSN-zN2GKRxrSHA" : {
"name" : "Madeline Joyce",
"transport_address" : "inet[/192.168.1.4:9301]",
"host" : "prayagupd",
"ip" : "127.0.1.1",
"version" : "1.3.2",
"build" : "dee175d",
"http_address" : "inet[/192.168.1.4:9201]",
"settings" : {
"path" : {
"logs" : "/usr/local/elasticsearch-1.3.2/logs",
"home" : "/usr/local/elasticsearch-1.3.2"
},
"cluster" : {
"name" : "elasticsearch"
},
"http" : {
"port" : "9201"
},
"transport" : {
"tcp" : {
"port" : "9301"
}
},
"foreground" : "yes",
"name" : "Madeline Joyce"
},
"os" : {
"refresh_interval_in_millis" : 1000,
"available_processors" : 4,
"cpu" : {
"vendor" : "Intel",
"model" : "Core(TM) i5 CPU M 480 # 2.67GHz",
"mhz" : 2667,
"total_cores" : 4,
"total_sockets" : 4,
"cores_per_socket" : 16,
"cache_size_in_bytes" : 3072
},
"mem" : {
"total_in_bytes" : 3803283456
},
"swap" : {
"total_in_bytes" : 5998899200
}
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 9036,
"max_file_descriptors" : 4096,
"mlockall" : false
},
"jvm" : {
"pid" : 9036,
"version" : "1.7.0_65",
"vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
"vm_version" : "24.65-b04",
"vm_vendor" : "Oracle Corporation",
"start_time_in_millis" : 1421578674811,
"mem" : {
"heap_init_in_bytes" : 268435456,
"heap_max_in_bytes" : 1038876672,
"non_heap_init_in_bytes" : 24313856,
"non_heap_max_in_bytes" : 136314880,
"direct_max_in_bytes" : 1038876672
},
"gc_collectors" : [ "ParNew", "ConcurrentMarkSweep" ],
"memory_pools" : [ "Code Cache", "Par Eden Space", "Par Survivor Space", "CMS Old Gen", "CMS Perm Gen" ]
},
"thread_pool" : {
"generic" : {
"type" : "cached",
"keep_alive" : "30s",
"queue_size" : -1
},
"index" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "200"
},
"snapshot_data" : {
"type" : "scaling",
"min" : 1,
"max" : 5,
"keep_alive" : "5m",
"queue_size" : -1
},
"bench" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"get" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"snapshot" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"merge" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"suggest" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"bulk" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "50"
},
"optimize" : {
"type" : "fixed",
"min" : 1,
"max" : 1,
"queue_size" : -1
},
"warmer" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"flush" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"search" : {
"type" : "fixed",
"min" : 12,
"max" : 12,
"queue_size" : "1k"
},
"percolate" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"management" : {
"type" : "scaling",
"min" : 1,
"max" : 5,
"keep_alive" : "5m",
"queue_size" : -1
},
"refresh" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
}
},
"network" : {
"refresh_interval_in_millis" : 5000,
"primary_interface" : {
"address" : "192.168.1.4",
"name" : "eth0",
"mac_address" : "20:6A:8A:2A:24:E6"
}
},
"transport" : {
"bound_address" : "inet[/0:0:0:0:0:0:0:0%0:9301]",
"publish_address" : "inet[/192.168.1.4:9301]"
},
"http" : {
"bound_address" : "inet[/0:0:0:0:0:0:0:0%0:9201]",
"publish_address" : "inet[/192.168.1.4:9201]",
"max_content_length_in_bytes" : 104857600
},
"plugins" : [ ]
},
"TWNkkYYZSWe8NnrOPU57mQ" : {
"name" : "Scarlet Spider",
"transport_address" : "inet[/192.168.1.4:9300]",
"host" : "prayagupd",
"ip" : "127.0.1.1",
"version" : "1.3.2",
"build" : "dee175d",
"http_address" : "inet[/192.168.1.4:9200]",
"attributes" : {
"client" : "true",
"data" : "false"
},
"settings" : {
"path" : {
"data" : "/var/lib/elasticsearch",
"work" : "/tmp/elasticsearch",
"conf" : "/etc/elasticsearch",
"logs" : "/var/log/elasticsearch"
},
"cluster" : {
"name" : "elasticsearch"
},
"node" : {
"client" : "true"
},
"name" : "Scarlet Spider"
},
"os" : {
"refresh_interval_in_millis" : 1000,
"available_processors" : 4
},
"process" : {
"refresh_interval_in_millis" : 1000,
"id" : 11028,
"max_file_descriptors" : 4096,
"mlockall" : false
},
"jvm" : {
"pid" : 11028,
"version" : "1.7.0_05",
"vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
"vm_version" : "23.1-b03",
"vm_vendor" : "Oracle Corporation",
"start_time_in_millis" : 1421580829189,
"mem" : {
"heap_init_in_bytes" : 59426304,
"heap_max_in_bytes" : 846331904,
"non_heap_init_in_bytes" : 24313856,
"non_heap_max_in_bytes" : 136314880,
"direct_max_in_bytes" : 846331904
},
"gc_collectors" : [ "PS Scavenge", "PS MarkSweep" ],
"memory_pools" : [ "Code Cache", "PS Eden Space", "PS Survivor Space", "PS Old Gen", "PS Perm Gen" ]
},
"thread_pool" : {
"generic" : {
"type" : "cached",
"keep_alive" : "30s",
"queue_size" : -1
},
"index" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "200"
},
"snapshot_data" : {
"type" : "scaling",
"min" : 1,
"max" : 5,
"keep_alive" : "5m",
"queue_size" : -1
},
"bench" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"get" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"snapshot" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"merge" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"suggest" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"bulk" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "50"
},
"optimize" : {
"type" : "fixed",
"min" : 1,
"max" : 1,
"queue_size" : -1
},
"warmer" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"flush" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
},
"search" : {
"type" : "fixed",
"min" : 12,
"max" : 12,
"queue_size" : "1k"
},
"percolate" : {
"type" : "fixed",
"min" : 4,
"max" : 4,
"queue_size" : "1k"
},
"management" : {
"type" : "scaling",
"min" : 1,
"max" : 5,
"keep_alive" : "5m",
"queue_size" : -1
},
"refresh" : {
"type" : "scaling",
"min" : 1,
"max" : 2,
"keep_alive" : "5m",
"queue_size" : -1
}
},
"network" : {
"refresh_interval_in_millis" : 5000
},
"transport" : {
"bound_address" : "inet[/0:0:0:0:0:0:0:0:9300]",
"publish_address" : "inet[/192.168.1.4:9300]"
},
"http" : {
"bound_address" : "inet[/0:0:0:0:0:0:0:0:9200]",
"publish_address" : "inet[/192.168.1.4:9200]",
"max_content_length_in_bytes" : 104857600
},
"plugins" : [ ]
}
}
}
STEP 2: Update the JVM version in the IDE (the following shows it for IntelliJ).
Update to the required JVM version, and add it to the project.
STEP 3: Run both ES nodes; the TransportSerializationException problem should be fixed:
$ curl -XGET "http://localhost:9200/_nodes?jvm=true&pretty=true"
{
"cluster_name": "elasticsearch",
"nodes": {
"GeRZFRiDSje8zLM_m90WRw": {
"name" : "Dougboy",
"jvm": {
"pid": 15223,
"version": "1.7.0_65",
"vm_name": "Java HotSpot(TM) 64-Bit Server VM",
"vm_version": "24.65-b04",
"vm_vendor": "Oracle Corporation",
"start_time_in_millis": 1421586819876,
"mem": {
"heap_init_in_bytes": 59426304,
"heap_max_in_bytes": 846200832,
"non_heap_init_in_bytes": 24576000,
"non_heap_max_in_bytes": 136314880,
"direct_max_in_bytes": 846200832
}
}
},
"A6PDUvlWSN-zN2GKRxrSHA": {
"name": "Madeline Joyce",
"jvm": {
"pid": 9036,
"version": "1.7.0_65",
"vm_name": "Java HotSpot(TM) 64-Bit Server VM",
"vm_version": "24.65-b04",
"vm_vendor": "Oracle Corporation",
"start_time_in_millis": 1421578674811,
"mem": {
"heap_init_in_bytes": 268435456,
"heap_max_in_bytes": 1038876672,
"non_heap_init_in_bytes": 24313856,
"non_heap_max_in_bytes": 136314880,
"direct_max_in_bytes": 1038876672
}
}
}
}
}
Reference
Java Client TransportSerializationException #3835, Oct 6, 2013
Looks like it was an issue with the plugin: ElasticSearch 0.90.7 threw an exception which wasn't caught by the plugin.
The pull request here: https://github.com/mstein/elasticsearch-grails-plugin/pull/74 has a fix and includes ES 0.90.7.
