Sybase IQ 16: java.sql.SQLException: JZ0C0: Connection is already closed

I am using Sybase IQ version 16.0 with the SQL Anywhere JDBC 4.0 driver (com.sybase.jdbc4.jdbc.SybDriver), as mentioned in the Sybase support documentation. I have written a simple Java program, shown below.
import com.sybase.jdbc4.jdbc.SybDriver;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
public class TestClass{
public static void main(String args[]) throws Exception {
try {
String table_name;
Class.forName("com.sybase.jdbc4.jdbc.SybDriver");
Connection con = DriverManager.getConnection("jdbc:sybase:Tds:10.13.93.30:2638?ServiceName=xyz", "username", "password");
System.out.println("Connection established......");
//gets all table names from system table
String query = "SELECT TABLE_NAME, USEr_NAMe FROM SYS.SYSTABLE T INNER JOIN SYSUSER U ON T.CREATOR = U.USER_ID WHERE USER_NAME = 'perf_TS2_Schema_1'";
PreparedStatement stmt = con.prepareStatement(query);
ResultSet rs = stmt.executeQuery();
ResultSetMetaData md = rs.getMetaData();
while (rs.next()) {
table_name = (String) rs.getObject(md.getColumnName(1));
System.out.println(table_name);
PreparedStatement stmt2 = con.prepareStatement("Select * from perf_TS2_Schema_1." + table_name);
ResultSet rs2 = stmt2.executeQuery();
// do something with the result set, then release it before moving on to the next table
rs2.close();
stmt2.close();
}
if (con != null)
con.close();
}
catch(SQLException e){
System.out.println(e);
}
}
}
The connection gets terminated from Sybase IQ's side after processing around 740-780 tables (which takes roughly 15 minutes), with the error below:
java.sql.SQLException: JZ0C0: Connection is already closed
I tried modifying the server settings below, as mentioned on the support website (a rough sketch of how they are applied follows the list).
MAX_IQ_THREADS_PER_CONNECTION : 5000
DEDICATED_TASK : ON
MAX_STATEMENT_COUNT: 2147483647 (Max value of INT)
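For reference, here is a minimal sketch of how I applied these options over the same JDBC connection, assuming they are database options that can be changed with SET OPTION (the exact scope of each option is described in the IQ documentation):
// Rough sketch: option names and values are taken from the list above.
// Assumption: each one is settable as a PUBLIC database option; verify
// the scope for your IQ version before relying on this.
try (Statement opt = con.createStatement()) {
    opt.execute("SET OPTION PUBLIC.MAX_IQ_THREADS_PER_CONNECTION = 5000");
    opt.execute("SET OPTION PUBLIC.DEDICATED_TASK = 'ON'");
    opt.execute("SET OPTION PUBLIC.MAX_STATEMENT_COUNT = 2147483647");
}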
But still no luck. Any help with this would be highly appreciated.

Related

Deprecation Errors with Kafka Consumer for twitter streaming

I've been working on streaming Twitter feed data through Kafka.
I'm following the sample from the link below:
http://www.hahaskills.com/tutorials/kafka/Twitter_doc.html
I'm able to use the producer code and it works fine: I can get the Twitter feed and send it to the Kafka producer.
I'm not able to use the consumer code, since it throws deprecation errors for many APIs.
Here is the Consumer code:
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
//import kafka.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
//import kafka.consumer.KafkaStream;
//import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
//import org.apache.kafka.clients.producer.KafkaProducer;
public class KafkaConsumer {
private final ConsumerConnector consumer;
private final String topic;
public KafkaConsumer(String zookeeper, String groupId, String topic) {
Properties props = new Properties();
props.put("zookeeper.connect", zookeeper);
props.put("group.id", groupId);
props.put("zookeeper.session.timeout.ms", "500");
props.put("zookeeper.sync.time.ms", "250");
props.put("auto.commit.interval.ms", "1000");
consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
this.topic = topic;
}
public void testConsumer() {
System.out.println("Test Con called");
Map<String, Integer> topicCount = new HashMap<>();
topicCount.put(topic, 1);
Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
System.out.println("For");
for (final KafkaStream stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
System.out.println("Size"+it.length());
while (it.hasNext()) {
System.out.println("Stream");
System.out.println("Message from Single Topic: " + new String(it.next().message()));
}
}
if (consumer != null) {
consumer.shutdown();
}
}
public static void main(String[] args) {
System.out.println("Started");
String topic="twittertopic";
KafkaConsumer simpleTWConsumer = new KafkaConsumer("localhost:XXXX", "testgroup", topic);
simpleTWConsumer.testConsumer();
System.out.println("End");
}
}
It throws errors: ConsumerConnector, ConsumerIterator, and KafkaStream are deprecated.
ConsumerConfig is not visible.
Is there a fixed version of this sample code (a Kafka consumer for Twitter)?
The tutorial you are following is very old and uses the old Scala Kafka clients, which have been deprecated; see http://kafka.apache.org/documentation/#legacyapis
The classes that have been deprecated are:
kafka.consumer.* and kafka.javaapi.consumer: use the newer Java consumer under org.apache.kafka.clients.consumer.* instead
kafka.producer.* and kafka.javaapi.producer: use the newer Java producer under org.apache.kafka.clients.producer.* instead
Apart from using deprecated classes, your code was mostly correct; I only had to fix a few imports. See below for a fixed version. Using it, I was able to consume messages I was producing to a topic called twittertopic.
package example;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
public class MyConsumer {
private final ConsumerConnector consumer;
private final String topic;
public MyConsumer(String zookeeper, String groupId, String topic) {
Properties props = new Properties();
props.put("zookeeper.connect", zookeeper);
props.put("group.id", groupId);
props.put("zookeeper.session.timeout.ms", "500");
props.put("zookeeper.sync.time.ms", "250");
props.put("auto.commit.interval.ms", "1000");
consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
this.topic = topic;
}
public void testConsumer() {
Map<String, Integer> topicCount = new HashMap<>();
topicCount.put(topic, 1);
Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
for (final KafkaStream stream : streams) {
ConsumerIterator<byte[], byte[]> it = stream.iterator();
while (it.hasNext()) {
System.out.println("Message from Single Topic: " + new String(it.next().message()));
}
}
if (consumer != null) {
consumer.shutdown();
}
}
public static void main(String[] args) {
System.out.println("Started");
String topic = "twittertopic";
MyConsumer simpleTWConsumer = new MyConsumer("localhost:2181", "testgroup", topic);
simpleTWConsumer.testConsumer();
System.out.println("End");
}
}
While the code above can be used, the next major Kafka release is likely to remove classes that are currently deprecated, so you should not write new logic using them.
Instead, you should get started with the Java clients; you can use the examples provided on GitHub: https://github.com/apache/kafka/tree/trunk/examples/src/main/java/kafka/examples
Using the new Java consumer, your logic would look like this:
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
public class MyConsumer {
static final String TOPIC = "twittertopic";
static final String GROUP = "testgroup";
public static void main(String[] args) {
System.out.println("Started");
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", GROUP);
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);) {
consumer.subscribe(Arrays.asList(TOPIC));
for (int i = 0; i < 1000; i++) {
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1L)); // poll(Duration) requires kafka-clients 2.0+; older clients take a long timeout
System.out.println("Size: " + records.count());
for (ConsumerRecord<String, String> record : records) {
System.out.println("Received a message: " + record.key() + " " + record.value());
}
}
}
System.out.println("End");
}
}
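For completeness, the producer side with the new Java client looks roughly like the sketch below; the broker address, topic name, key, and message value are illustrative assumptions:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
public class MyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; point this at your own cluster
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources closes (and flushes) the producer on exit
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single record to the topic consumed by the example above
            producer.send(new ProducerRecord<>("twittertopic", "key", "hello from the new producer API"));
        }
    }
}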

The import org.apache.lucene.queryparser cannot be resolved

I am using Lucene 6.6 and am having difficulty importing lucene.queryparser; I checked the Lucene documentation and it no longer seems to exist.
I am using the code below. Is there any alternative to queryparser in Lucene 6?
import java.io.IOException;
import java.text.ParseException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
public class HelloLucene {
public static void main(String[] args) throws IOException, ParseException {
// 0. Specify the analyzer for tokenizing text.
// The same analyzer should be used for indexing and searching
StandardAnalyzer analyzer = new StandardAnalyzer();
// 1. create the index
Directory index = new RAMDirectory();
IndexWriterConfig config = new IndexWriterConfig(analyzer);
IndexWriter w = new IndexWriter(index, config);
addDoc(w, "Lucene in Action", "193398817");
addDoc(w, "Lucene for Dummies", "55320055Z");
addDoc(w, "Managing Gigabytes", "55063554A");
addDoc(w, "The Art of Computer Science", "9900333X");
w.close();
// 2. query
String querystr = args.length > 0 ? args[0] : "lucene";
// the "title" arg specifies the default field to use
// when no field is explicitly specified in the query.
Query q = null;
try {
q = new QueryParser(Version.LUCENE_6_6_0, "title", analyzer).parse(querystr);
} catch (org.apache.lucene.queryparser.classic.ParseException e) {
e.printStackTrace();
}
// 3. search
int hitsPerPage = 10;
IndexReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);
searcher.search(q, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
// 4. display results
System.out.println("Found " + hits.length + " hits.");
for (int i = 0; i < hits.length; ++i) {
int docId = hits[i].doc;
Document d = searcher.doc(docId);
System.out.println((i + 1) + ". " + d.get("isbn") + "\t" + d.get("title"));
}
// reader can only be closed when there
// is no need to access the documents any more.
reader.close();
}
private static void addDoc(IndexWriter w, String title, String isbn) throws IOException {
Document doc = new Document();
doc.add(new TextField("title", title, Field.Store.YES));
// use a string field for isbn because we don't want it tokenized
doc.add(new StringField("isbn", isbn, Field.Store.YES));
w.addDocument(doc);
}
}
Thanks!
The problem is solved.
Initially, only lucene-core-6.6.0 was added to the build path, but lucene-queryparser-6.6.0 is a separate jar file that needs to be added as well.
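Assuming a Maven build, the extra dependency would look roughly like this (the same jar can also be downloaded and added to the build path by hand):
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>6.6.0</version>
</dependency>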

throwing org.atmosphere.cpr.AtmosphereResourceImpl: Exception during suspend() operation java.lang.NullPointerException

I am new to Atmosphere WebSockets. I am working on a private chat feature and am using the Dropwizard framework.
Below is my code:
service.java:
@Override
public void run(ServiceConfiguration config, Environment environment) throws Exception {
AtmosphereServlet servlet = new AtmosphereServlet();
servlet.framework().addInitParameter("com.sun.jersey.config.property.packages", "dk.cooldev.chatroom.resources.websocket");
servlet.framework().addInitParameter(ApplicationConfig.WEBSOCKET_CONTENT_TYPE, "application/json");
servlet.framework().addInitParameter(ApplicationConfig.WEBSOCKET_SUPPORT, "true");
ServletRegistration.Dynamic servletHolder = environment.servlets().addServlet("Chat", servlet);
servletHolder.addMapping("/chat/*");
}
ChatRoom.java:
package resource;
import org.atmosphere.config.service.DeliverTo;
import org.atmosphere.config.service.Disconnect;
import org.atmosphere.config.service.ManagedService;
import org.atmosphere.config.service.Message;
import org.atmosphere.config.service.PathParam;
import org.atmosphere.config.service.Ready;
import org.atmosphere.cpr.AtmosphereResource;
import org.atmosphere.cpr.AtmosphereResourceEvent;
import org.atmosphere.cpr.AtmosphereResourceFactory;
import org.atmosphere.cpr.Broadcaster;
import org.atmosphere.cpr.BroadcasterFactory;
import org.atmosphere.cpr.MetaBroadcaster;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import representation.UserMessage;
import utility.ChatProtocol;
import utility.JacksonEncoder;
import utility.ProtocolDecoder;
import utility.UserDecoder;
import javax.inject.Inject;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;
@ManagedService(path = "/chat/{room: [a-zA-Z][a-zA-Z_0-9]*}")
public class ChatRoom {
private final Logger logger = LoggerFactory.getLogger(ChatRoom.class);
private final ConcurrentHashMap<String, String> users = new ConcurrentHashMap<String, String>();
private final static String CHAT = "/chat/";
@PathParam("room")
private String chatroomName;
@Inject
private BroadcasterFactory factory;
@Inject
private AtmosphereResourceFactory resourceFactory;
@Inject
private MetaBroadcaster metaBroadcaster;
/**
* Invoked when the connection as been fully established and suspended, e.g ready for receiving messages.
*
* @param r
*/
@Ready(encoders = {JacksonEncoder.class})
@DeliverTo(DeliverTo.DELIVER_TO.ALL)
public ChatProtocol onReady(final AtmosphereResource r) {
r.suspend();
logger.info("Browser {} connected.", r.uuid());
return new ChatProtocol(users.keySet(), getRooms(factory.lookupAll()));
}
private static Collection<String> getRooms(Collection<Broadcaster> broadcasters) {
Collection<String> result = new ArrayList<String>();
for (Broadcaster broadcaster : broadcasters) {
if (!("/*".equals(broadcaster.getID()))) {
// if no room is specified, use ''
String[] p = broadcaster.getID().split("/");
result.add(p.length > 2 ? p[2] : "");
}
};
return result;
}
/**
* Invoked when the client disconnect or when an unexpected closing of the underlying connection happens.
*
* @param event
*/
@Disconnect
public void onDisconnect(AtmosphereResourceEvent event) {
if (event.isCancelled()) {
// We didn't get notified, so we remove the user.
users.values().remove(event.getResource().uuid());
logger.info("Browser {} unexpectedly disconnected", event.getResource().uuid());
} else if (event.isClosedByClient()) {
logger.info("Browser {} closed the connection", event.getResource().uuid());
}
}
/**
* Simple annotated method that demonstrates how {@link org.atmosphere.config.managed.Encoder} and {@link org.atmosphere.config.managed.Decoder}
* can be used.
*
* @param message an instance of {@link ChatProtocol}
* @return
* @throws IOException
*/
@Message(encoders = {JacksonEncoder.class}, decoders = {ProtocolDecoder.class})
public ChatProtocol onMessage(ChatProtocol message) throws IOException {
if (!users.containsKey(message.getAuthor())) {
users.put(message.getAuthor(), message.getUuid());
return new ChatProtocol(message.getAuthor(), " entered room " + chatroomName, users.keySet(), getRooms(factory.lookupAll()));
}
if (message.getMessage().contains("disconnecting")) {
users.remove(message.getAuthor());
return new ChatProtocol(message.getAuthor(), " disconnected from room " + chatroomName, users.keySet(), getRooms(factory.lookupAll()));
}
message.setUsers(users.keySet());
logger.info("{} just send {}", message.getAuthor(), message.getMessage());
return new ChatProtocol(message.getAuthor(), message.getMessage(), users.keySet(), getRooms(factory.lookupAll()));
}
@Message(decoders = {UserDecoder.class})
public void onPrivateMessage(UserMessage user) throws IOException {
String userUUID = users.get(user.getUser());
if (userUUID != null) {
// Retrieve the original AtmosphereResource
AtmosphereResource r = resourceFactory.find(userUUID);
if (r != null) {
ChatProtocol m = new ChatProtocol(user.getUser(), " sent you a private message: " + user.getMessage().split(":")[1], users.keySet(), getRooms(factory.lookupAll()));
if (!user.getUser().equalsIgnoreCase("all")) {
factory.lookup(CHAT + chatroomName).broadcast(m, r);
}
}
} else {
ChatProtocol m = new ChatProtocol(user.getUser(), " sent a message to all chatroom: " + user.getMessage().split(":")[1], users.keySet(), getRooms(factory.lookupAll()));
metaBroadcaster.broadcastTo("/*", m);
}
}
}
Now, when I run my application, it throws:
resource.ChatRoom: Browser 72196f81-3425-4137-8f37-c5aa6b134534 connected.
WARN: org.atmosphere.cpr.AtmosphereResourceImpl: Exception during suspend() operation java.lang.NullPointerException
Please help me resolve this exception.
If you are getting that warning, then your socket has probably been removed from the broadcast listeners list. The message delivered to the broadcaster, which calls outgoingBroadcast(Object message), is null, and the null case isn't handled. In Dropwizard that message becomes null for the onReady method. A quick fix is to extend the broadcaster, override that method, and handle the null case.
I hope this helps. I was using RedisBroadcaster, had the same issue, and fixed it by handling the null case.
public void outgoingBroadcast(Object message) {
// Marshal the message outside of the sync block.
if (message == null) {
return;
}
String contents = message.toString();
// ... rest of the original outgoingBroadcast implementation unchanged
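If it helps, here is a fuller sketch of that quick fix as a subclass; the class name is hypothetical, and it assumes Atmosphere 2.x (where broadcasters are created through a no-argument constructor) and the Redis broadcaster from the atmosphere-redis module:
import org.atmosphere.plugin.redis.RedisBroadcaster;
// Hypothetical subclass: ignore null messages before delegating to the normal
// Redis broadcast path. Adjust the constructor/initialization for your version.
public class NullSafeRedisBroadcaster extends RedisBroadcaster {
    @Override
    public void outgoingBroadcast(Object message) {
        // Null messages (e.g. the one produced for onReady under Dropwizard)
        // are simply skipped instead of triggering the NullPointerException.
        if (message == null) {
            return;
        }
        super.outgoingBroadcast(message);
    }
}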
I was just setting up the chat samples myself and hit the same issue. In my case, an injected field was not available.
In your case, I would say it is one of
@Inject
private BroadcasterFactory factory;
@Inject
private AtmosphereResourceFactory resourceFactory;
@Inject
private MetaBroadcaster metaBroadcaster;
that is not wired in correctly. I have not used Dropwizard before, so I can't advise on how to make sure these are configured correctly.
This type of issue was easy enough to see once I configured Atmosphere for trace logging, e.g. for Logback add the following line:
<logger name="org.atmosphere" level="TRACE" />
Thanks
Ryan

Neo4J 2.0.2 + Gremlin plugin: the console doesn't run

I'm experimenting with Neo4j and the Gremlin plugin. Unfortunately, I get the error below when I try to use the Gremlin web console (the console doesn't run the Gremlin language):
SEVERE: The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
java.lang.NoClassDefFoundError: org/neo4j/server/logging/Logger
at org.neo4j.server.webadmin.console.GremlinSession.<clinit>(GremlinSession.java:42)
at org.neo4j.server.webadmin.console.GremlinSessionCreator.newSession(GremlinSessionCreator.java:35)
......
Any suggestions?
Many thanks!
The Neo4j 2.0.2 server changed its Logger class, so I modified the GremlinSession.java code to use the java.util.logging.Logger class instead of org.neo4j.server.logging.Logger. Now the Gremlin web console works fine.
Here is the new GremlinSession.java code.
/**
* Copyright (c) 2002-2014 "Neo Technology,"
* Network Engine for Objects in Lund AB [http://neotechnology.com]
*
* This file is part of Neo4j.
*
* Neo4j is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package org.neo4j.server.webadmin.console;
import com.tinkerpop.blueprints.TransactionalGraph;
import com.tinkerpop.blueprints.impls.neo4j2.Neo4j2Graph;
import groovy.lang.Binding;
import groovy.lang.GroovyRuntimeException;
import org.codehaus.groovy.tools.shell.IO;
import org.neo4j.graphdb.Transaction;
import org.neo4j.helpers.Pair;
import org.neo4j.server.database.Database;
import java.util.logging.Logger;
import java.util.logging.Level;
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class GremlinSession implements ScriptSession {
private static final String INIT_FUNCTION = "init()";
private static final Logger log = Logger.getLogger(GremlinSession.class.getName());
private final Database database;
private final IO io;
private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
private final List<String> initialBindings;
protected GremlinWebConsole scriptEngine;
public GremlinSession(Database database) {
this.database = database;
PrintStream out = new PrintStream(new BufferedOutputStream(baos));
io = new IO(System.in, out, out);
Map<String, Object> bindings = new HashMap<String, Object>();
bindings.put("g", getGremlinWrappedGraph());
bindings.put("out", out);
initialBindings = new ArrayList<String>(bindings.keySet());
try {
scriptEngine = new GremlinWebConsole(new Binding(bindings), io);
} catch (final Exception failure) {
scriptEngine = new GremlinWebConsole() {
@Override
public void execute(String script) {
io.out.println("Could not start Groovy during Gremlin initialization, reason:");
failure.printStackTrace(io.out);
}
};
}
}
/**
* Take some gremlin script, evaluate it in the context of this gremlin
* session, and return the result.
*
* @param script
* @return the return string of the evaluation result, or the exception
* message.
*/
@Override
public Pair<String, String> evaluate(String script) {
String result = null;
try (Transaction tx = database.getGraph().beginTx()) {
if (script.equals(INIT_FUNCTION)) {
result = init();
} else {
try {
scriptEngine.execute(script);
result = baos.toString();
} finally {
resetIO();
}
}
tx.success();
} catch (GroovyRuntimeException ex) {
log.log(Level.SEVERE, ex.toString());
result = ex.getMessage();
}
return Pair.of(result, null);
}
private String init() {
StringBuilder out = new StringBuilder();
out.append("\n");
out.append(" \\,,,/\n");
out.append(" (o o)\n");
out.append("-----oOOo-(_)-oOOo-----\n");
out.append("\n");
out.append("Available variables:\n");
for (String variable : initialBindings) {
out.append(" " + variable + "\t= ");
out.append(evaluate(variable));
}
out.append("\n");
return out.toString();
}
private void resetIO() {
baos.reset();
}
private TransactionalGraph getGremlinWrappedGraph() {
Neo4j2Graph neo4jGraph = null;
try {
neo4jGraph = new Neo4j2Graph(database.getGraph());
} catch (Exception e) {
throw new RuntimeException(e);
}
return neo4jGraph;
}
}

Batch cypher queries generated by RestCypherQueryEngine

I am trying to batch together a few Cypher queries with the REST API (using the Java bindings library) so that only one call is made over the wire. But it seems not to respect the batching on the client side and gives this error:
java.lang.RuntimeException: Error reading as JSON ''
at org.neo4j.rest.graphdb.util.JsonHelper.readJson(JsonHelper.java:57)
at org.neo4j.rest.graphdb.util.JsonHelper.jsonToSingleValue(JsonHelper.java:62)
at org.neo4j.rest.graphdb.RequestResult.toEntity(RequestResult.java:114)
at org.neo4j.rest.graphdb.RequestResult.toMap(RequestResult.java:123)
at org.neo4j.rest.graphdb.batch.RecordingRestRequest.toMap(RecordingRestRequest.java:138)
at org.neo4j.rest.graphdb.ExecutingRestAPI.query(ExecutingRestAPI.java:489)
at org.neo4j.rest.graphdb.ExecutingRestAPI.query(ExecutingRestAPI.java:509)
at org.neo4j.rest.graphdb.RestAPIFacade.query(RestAPIFacade.java:233)
at org.neo4j.rest.graphdb.query.RestCypherQueryEngine.query(RestCypherQueryEngine.java:50)
...
Caused by: java.io.EOFException: No content to map to Object due to end of input
at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2766)
at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2709)
at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1854)
at org.neo4j.rest.graphdb.util.JsonHelper.readJson(JsonHelper.java:55)
... 41 more
This is how I am trying to batch them:
graphDatabaseService.getRestAPI().executeBatch(new BatchCallback<Void>() {
@Override
public Void recordBatch(RestAPI batchRestApi) {
String query = "CREATE accounts=({userId:{userId}})-[r:OWNS]->({facebookId:{facebookId}})";
graphDatabaseService.getQueryEngine().query(query, map("userId", 1, "facebookId", "1"));
graphDatabaseService.getQueryEngine().query(query, map("userId", 2, "facebookId", "2"));
graphDatabaseService.getQueryEngine().query(query, map("userId", 3, "facebookId", "3"));
return null;
}
});
I am using Neo4j version 1.9 and the corresponding client library. Should this be possible?
Here is JUnit sample code that works for your batch. No Cypher string template is used here, only native methods on the RestAPI object:
public static final DynamicRelationshipType OWNS = DynamicRelationshipType.withName("OWNS");
@Autowired
private SpringRestGraphDatabase graphDatabaseService;
@Test
public void batchTest()
{
Assert.assertNotNull(this.graphDatabaseService);
this.graphDatabaseService.getRestAPI().executeBatch(new BatchCallback<Void>()
{
@Override
public Void recordBatch(RestAPI batchRestApi)
{
for (int counter = 1; counter <= 3; counter++)
{
RestNode userId = batchRestApi.createNode(map("userId", Integer.valueOf(counter)));
RestNode facebookId = batchRestApi.createNode(map("facebookId", Integer.valueOf(counter).toString()));
batchRestApi.createRelationship(userId, facebookId, OWNS, map());
}
return null;
}
});
}
