MonoDroid speed - xamarin.android

I'm thinking of moving to MonoDroid. The problem is that the trial version only lets me test my code in the emulator, and everything runs slowly there. My question (before I pay $400): does the compiled code run fast enough when deployed to an actual device?

Performance, after app startup (~3s), is very good on a Nexus One, and is nothing like trying to run on the emulator.

Performance on device is good for me too. Sometimes even faster than Dalvik. But yes, there is a 2-3 second lag at startup.
(This "answer" is intended to assure folks who want to try MonoDroid that it works for more than one person :) )

I am seeing far more than a 2-3 second lag on startup. That is, the time from when I tap an application (and the log shows ActivityManager starting my Activity) to when OnCreate is first called.
I see 5 seconds or more on my HTC Legend with Android 2.2 (about 2 years old). For example:
2011-11-26 11:54:37.782 I 97/ActivityManager: Displayed activity
com.xxx.android/.SplashActivity: 5309 ms (total 5309 ms)
or the full log:
2011-11-26 11:54:32.372 I 97/ActivityManager: Starting activity: Intent {
act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER]
flg=0x10200000 cmp=com.xxx.android/.SplashActivity }
2011-11-26 11:54:32.492 I 97/ActivityManager: Start proc com.xxx.android
for activity com.xxx.android/.SplashActivity: pid=23858 uid=10055
gids={1015, 3003}
2011-11-26 11:54:32.492 I 23858/pthread: ## thread 23858 is creating
thread #dalvik/vm/Thread.c:1795
2011-11-26 11:54:32.502 I 23858/pthread: ## thread 23859 is created success
2011-11-26 11:54:32.522 I 23858/pthread: ## thread 23858 is creating
thread #dalvik/vm/Thread.c:1795
2011-11-26 11:54:32.522 I 23858/pthread: ## thread 23860 is created success
2011-11-26 11:54:32.592 I 23858/pthread: ## thread 23858 is creating
thread #dalvik/vm/Thread.c:1795
2011-11-26 11:54:32.592 I 23858/pthread: ## thread 23861 is created success
2011-11-26 11:54:32.602 I 23858/pthread: ## thread 23858 is creating
thread #frameworks/base/libs/utils/Threads.cpp:139
2011-11-26 11:54:32.622 I 23858/pthread: ## thread 23862 is created success
2011-11-26 11:54:32.632 I 23858/pthread: ## thread 23862 is creating
thread #frameworks/base/libs/utils/Threads.cpp:139
2011-11-26 11:54:32.642 I 23858/pthread: ## thread 23863 is created success
2011-11-26 11:54:32.712 I 73/pthread: ## thread 23864 is created success
2011-11-26 11:54:32.712 I 73/pthread: ## thread 23865 is created success
2011-11-26 11:54:32.792 I 23858/ActivityThread: Publishing provider
com.xxx.android.__mono_init__: mono.MonoRuntimeProvider
2011-11-26 11:54:32.842 D 23858/dalvikvm: Trying to load lib
/data/data/com.xxx.android/lib/libmonodroid.so 0x44e02348
2011-11-26 11:54:32.872 D 23858/dalvikvm: Added shared lib
/data/data/com.xxx.android/lib/libmonodroid.so 0x44e02348
2011-11-26 11:54:33.332 I 23858/pthread: ## thread 23866 is created success
2011-11-26 11:54:33.552 D 183/BT HS/HF: gsmAsuToSignal=6
2011-11-26 11:54:34.042 2 97/GpsLocationProvider:
ServiceState.STATE_IN_SERVICE
2011-11-26 11:54:34.042 D 97/ConnectivityService: getMobileDataEnabled
returning true
2011-11-26 11:54:34.052 D 97/TelephonyRegistry: notifyDataConnection()
state=2isDataConnectivityPossible()true, reason=null
2011-11-26 11:54:34.052 D 97/TelephonyRegistry:
broadcastDataConnectionStateChanged()
state=CONNECTEDtypes=default,dun,supl, interfaceName=rmnet0
2011-11-26 11:54:34.072 D 97/NetworkLocationProvider:
onDataConnectionStateChanged 3
2011-11-26 11:54:34.092 D 97/ConnectivityService: getMobileDataEnabled
returning true
2011-11-26 11:54:34.122 D 97/MobileDataStateTracker: replacing old
mInterfaceName (rmnet0) with rmnet0 for hipri
2011-11-26 11:54:34.122 D 97/MobileDataStateTracker: replacing old
mInterfaceName (rmnet0) with rmnet0 for supl
2011-11-26 11:54:34.132 D 97/MobileDataStateTracker: replacing old
mInterfaceName (rmnet0) with rmnet0 for dun
2011-11-26 11:54:34.222 2 97/AlarmManager: Adding Alarm{4521c788 type 2
com.google.android.apps.maps} Dec 15 09:35:32 am
2011-11-26 11:54:34.362 I 97/LSState:
EventReceiver:android.intent.action.NOTIFICATION_UPDATE
2011-11-26 11:54:34.822 D 23858/dalvikvm: GC_FOR_MALLOC freed 11754
objects / 463408 bytes in 67ms
2011-11-26 11:54:35.042 D 23858/dalvikvm: GC_FOR_MALLOC freed 10024
objects / 469712 bytes in 62ms
2011-11-26 11:54:36.372 I 97/LSState:
EventReceiver:android.intent.action.NOTIFICATION_UPDATE
2011-11-26 11:54:37.462 I 23858/pthread: ## thread 23867 is created success
2011-11-26 11:54:37.782 I 97/ActivityManager: Displayed activity
com.xxx.android/.SplashActivity: 5309 ms (total 5309 ms)

The slowness is due to the Android emulator - running on an actual device is fine for MonoDroid.

Related

Why does Thread.sleep() trigger the subscription to Flux.interval()?

If, in a main() method, I execute this
Flux.just(1, 2)
    .log()
    .subscribe();
I get this in the console:
[ INFO] (main) | onSubscribe([Synchronous Fuseable] FluxArray.ArraySubscription)
[ INFO] (main) | request(unbounded)
[ INFO] (main) | onNext(1)
[ INFO] (main) | onNext(2)
[ INFO] (main) | onComplete()
If instead of just() I use the interval() method:
Flux.interval(Duration.ofMillis(100))
    .take(2)
    .log()
    .subscribe();
the elements are not logged, unless I add Thread.sleep() which gives me:
[ INFO] (main) onSubscribe(FluxTake.TakeSubscriber)
[ INFO] (main) request(unbounded)
[ INFO] (parallel-1) onNext(0)
[ INFO] (parallel-1) onNext(1)
[ INFO] (parallel-1) onComplete()
The question is: why do I have to pause a thread to actually trigger the subscription?
You need to wait on the main thread and let the execution complete. Your main thread terminates before the next element is generated; the first element is only generated after 100 ms, so you need to wait on/block the main thread. Try this:
CountDownLatch latch = new CountDownLatch(1);
Flux.interval(Duration.ofMillis(100))
    .take(2)
    .doOnComplete(latch::countDown)
    .log()
    .subscribe();
latch.await(); // wait for doOnComplete
CountDownLatch:
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
Flux.interval(..) emits items on a parallel scheduler, freeing up your calling thread. This is why your program exits. What you should do is:
Flux.interval(Duration.of(duration))
    .take(n)
    .doOnNext(this::logElement)
    .blockLast();
This will block the calling thread until doOnNext() fires for the last time (which should be the second item from the upstream).
Flux.interval(Duration) produces a Flux that is infinite and emits regular ticks from a clock.
So your first example has no concurrency, everything happens on the same thread. The subscription and all events must complete before the method and process ends.
By using Flux.interval you have added concurrency and asynchronous behaviour to your example. Interval is a regular clock, like a metronome. You subscribe, but the main method then completes immediately, and the process will usually finish before any work (onNext) is done.
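Putting the two answers together, here is a minimal, self-contained sketch of both behaviours (assuming only reactor-core on the classpath; the class name is illustrative):
import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import reactor.core.publisher.Flux;

public class IntervalDemo {
    public static void main(String[] args) throws InterruptedException {
        // Synchronous: just() emits on the main thread, so subscribe()
        // does not return until onComplete has fired.
        Flux.just(1, 2)
            .log()
            .subscribe();

        // Asynchronous: interval() emits on a parallel Scheduler, so the
        // main thread must be kept alive until the ticks complete.
        CountDownLatch latch = new CountDownLatch(1);
        Flux.interval(Duration.ofMillis(100))
            .take(2)
            .log()
            .doOnComplete(latch::countDown)
            .subscribe();
        latch.await(); // returns once both ticks have been emitted
    }
}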

What is (k)ill for in the iex break menu?

I access the Break Menu of iex 1.8.2 by pressing Ctrl+C. It looks like this:
BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
(v)ersion (k)ill (D)b-tables (d)istribution
At first I assumed kill would be similar to abort (i.e., just end the session), but no. Instead, pressing k dumps process information and offers more options:
iex(1)>
BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
(v)ersion (k)ill (D)b-tables (d)istribution
k
Process Information
--------------------------------------------------
=proc:<0.105.0>
State: Waiting
Spawned as: erlang:apply/2
Spawned by: <0.75.0>
Message queue length: 0
Number of heap fragments: 1
Heap fragment data: 5
Link list: [{to,<0.64.0>,#Ref<0.720592203.270008322.27074>}]
Reductions: 4202
Stack+heap: 233
OldHeap: 0
Heap unused: 177
OldHeap unused: 0
BinVHeap: 1
OldBinVHeap: 0
BinVHeap unused: 46421
OldBinVHeap unused: 46422
Memory: 2804
Stack dump:
Program counter: 0x000000001f8230e0 (io:execute_request/2 + 200)
CP: 0x0000000000000000 (invalid)
arity = 0
0x000000001ddcee08 Return addr 0x000000001f8a4ba0 ('Elixir.IEx.Server':io_get/3 + 96)
y(0) #Ref<0.720592203.270008322.27074>
y(1) {false,{get_line,unicode,<<"iex(1)> ">>}}
y(2) <0.64.0>
0x000000001ddcee28 Return addr 0x000000001d53ecf8 (<terminate process normally>)
y(0) <0.105.0>
y(1) <0.75.0>
Internal State: ACT_PRIO_NORMAL | USR_PRIO_NORMAL | PRQ_PRIO_NORMAL
(k)ill (n)ext (r)eturn:
If I press k again, I get another process dump. Pressing n also gives me a process dump, and I think it's the same as pressing k. The final option, r, does different things depending on what I've done previously. If I've only pressed k or n a few times, it just ignores the input and I have to press Enter twice; iex interprets the second Enter as it normally would and returns nil.
(k)ill (n)ext (r)eturn:
r
nil
If I've pressed k and n a bunch of times, it either does this:
(k)ill (n)ext (r)eturn:
r
** (EXIT from #PID<0.104.0>) shell process exited with reason: killed
Interactive Elixir (1.8.2) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)>
09:39:57.929 [info] Application iex exited: killed
or this:
(k)ill (n)ext (r)eturn:
r
09:46:20.268 [info] Application iex exited: killed
09:46:20.269 [info] Application elixir exited: killed
09:46:20.274 [error] GenServer IEx.Pry terminating
** (stop) killed
Last message: {:EXIT, #PID<0.88.0>, :killed}
State: 1
or this:
(k)ill (n)ext (r)eturn:
r
Logger - error: {removed_failing_handler,'Elixir.Logger'}
Logger - error: {removed_failing_handler,'Elixir.Logger'}
Logger - error: {removed_failing_handler,'Elixir.Logger'}
I am unsure how it decides which of those messages should be displayed.
I'm really curious what (k)ill and its sub-options do, and I look forward to learning about them. Any direction is appreciated, thanks!
Looking at the source code:
case 'k':
    process_killer();
and
switch(j) {
case 'k':
    ASSERT(erts_init_process_id != ERTS_INVALID_PID);
    /* Send a 'kill' exit signal from init process */
    erts_proc_sig_send_exit(NULL, erts_init_process_id,
                            rp->common.id, am_kill, NIL,
                            0);
    /* no break: falls through to 'n', advancing to the next process */
case 'n': br = 1; break;
case 'r': return;
default: return;
}
k seems to be for enumerating and killing individual processes by sending them a kill signal. The output differs because it depends on how each process handles the signal.
The kill command goes through all running processes, and for each of them displays a bunch of information and asks you whether to:
kill it and go to the next process (k)
go to the next process without killing this one (n), or
stop killing processes and go back to the shell (r).
It might be tricky to identify the process you want to kill. One thing you can look at is the Dictionary line, which for most long-running processes has an $initial_call entry telling you which module contains the code that this process is running. For example:
Dictionary: [{'$ancestors',[<0.70.0>]},{iex_evaluator,ack},{'$initial_call',{'Elixir.IEx.Evaluator',init,4}}]
The different messages are displayed depending on which process(es) you killed. For example, it seems like Elixir.IEx.Evaluator is the process running the Elixir shell, which gives you the shell process exited with reason: killed error message.
A way of looking at this is that it shows the fault tolerance of an Elixir application: even if a process somewhere within the system has an error (in this case caused by explicitly killing the process), the supervisors try to restart the process in question and keep the entire system running.
Actually, I've never used this way of killing processes in a running system. If you know the process id ("pid") of the process you want to kill, you can type something like this into the shell:
Process.exit(pid("0.10.0"), :kill)
without having to step through the list of processes.
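As a rough sketch of that shell-based approach (the pid value is illustrative; Process.list/0, Process.info/2 and the pid/1 IEx helper are all standard):
# In an iex session: list processes with their dictionaries, whose
# $initial_call entry identifies the code each process is running.
Process.list()
|> Enum.map(fn p -> {p, Process.info(p, :dictionary)} end)
|> Enum.each(&IO.inspect/1)

# Then kill the one you picked; pid/1 builds a pid from its printed form.
Process.exit(pid("0.105.0"), :kill)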

Puma sleeps an important thread on boot of rails application

I am running Rails 3 with Ruby 2.3.3 on Puma with PostgreSQL. I have an initializer/twitter.rb file that starts a thread on boot with a streaming API for Twitter. When I use rails server to start my application, the Twitter streaming works and I can reach my website as normal. (If I do not put the streaming on a different thread, the streaming works but I cannot view my application in the browser, since the thread is blocked by the Twitter stream.) But when I use puma -C config/puma.rb to start my application, I get the following message telling me that my thread was found on startup and was put to sleep. How can I tell Puma to let me run this thread in the background on startup?
initializer/twitter.rb
### START TWITTER THREAD ### if production
if Rails.env.production?
  puts 'Starting Twitter Stream...'
  Thread.start {
    twitter_stream.user do |object|
      case object
      when Twitter::Tweet
        handle_tweet(object)
      when Twitter::DirectMessage
        handle_direct_message(object)
      when Twitter::Streaming::Event
        puts "Received Event: #{object.to_yaml}"
      when Twitter::Streaming::FriendList
        puts "Received FriendList: #{object.to_yaml}"
      when Twitter::Streaming::DeletedTweet
        puts "Deleted Tweet: #{object.to_yaml}"
      when Twitter::Streaming::StallWarning
        puts "Stall Warning: #{object.to_yaml}"
      else
        puts "It's something else: #{object.to_yaml}"
      end
    end
  }
end
config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Valid on Rails up to 4.1: the initializer method of setting `pool` size
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['RAILS_MAX_THREADS'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
Message on startup
2017-04-19T23:52:47.076636+00:00 app[web.1]: Connecting to database specified by DATABASE_URL
2017-04-19T23:52:47.115595+00:00 app[web.1]: Starting Twitter Stream...
2017-04-19T23:52:47.229203+00:00 app[web.1]: Received FriendList: --- !ruby/array:Twitter::Streaming::FriendList []
2017-04-19T23:52:47.865735+00:00 app[web.1]: [4] * Listening on tcp://0.0.0.0:13734
2017-04-19T23:52:47.865830+00:00 app[web.1]: [4] ! WARNING: Detected 1 Thread(s) started in app boot:
2017-04-19T23:52:47.865870+00:00 app[web.1]: [4] ! #<Thread:0x007f4df8bf6240#/app/config/initializers/twitter.rb:135 sleep> - /app/vendor/ruby-2.3.3/lib/ruby/2.3.0/openssl/buffering.rb:125:in `sysread'
2017-04-19T23:52:47.875056+00:00 app[web.1]: [4] - Worker 0 (pid: 7) booted, phase: 0
2017-04-19T23:52:47.865919+00:00 app[web.1]: [4] Use Ctrl-C to stop
2017-04-19T23:52:47.882759+00:00 app[web.1]: [4] - Worker 1 (pid: 11) booted, phase: 0
2017-04-19T23:52:48.148831+00:00 heroku[web.1]: State changed from starting to up
Thanks in advance for the help. I have looked at several other posts mentioning WARNING: Detected 1 Thread(s) started in app boot but the answers say to ignore the warning if the thread is not important. In my case, the thread is very important and I need this thread to not sleep.
From your code I think you have a bigger issue on your hands than a sleeping thread... one that I suspect stems from the fact that some things are misnamed and others are just not often considered when relying on a web framework.
In the world of servers, "workers" refer to forked processes that perform server related tasks, often accepting new connections and handling web requests.
BUT - fork doesn't duplicate threads! - the new process (the worker) starts with only one single thread, which is a copy of the thread that called fork.
This is because processes don't share memory (normally). Whatever global data you have in a process is private to that process (i.e., if you save connected websocket clients in an array, that array is different for each "worker").
This can't be helped, it's part of how the OS and fork are designed.
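A minimal Ruby sketch of this behaviour (nothing Puma-specific, just Thread and fork):
# The parent has a background thread, but fork copies only the
# calling thread, so the child starts with a single thread.
t = Thread.new { sleep }
puts "parent threads: #{Thread.list.size}" # => 2

pid = fork do
  puts "child threads:  #{Thread.list.size}" # => 1 (t was not copied)
end
Process.wait(pid)
t.kill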
So, the warning is not something you can circumvent - it's an indication of a design flaw in the app(!).
For example, in your current design (assuming the thread wasn't sleeping), the handle_tweet method will only be called for the original server process and it won't be called for any worker process.
If you're using pub/sub, you only need one twitter_stream connection for the whole app (no matter how many servers or workers your application has) - perhaps a dedicated twitter_stream process (or background app) would be better than a thread.
But if you're implementing handle_tweet in a process-specific way - i.e., by sending a message to every connected client saved in an array - you need to make sure every "worker" initiates a twitter_stream thread(!).
When I wrote Iodine (a different server from Puma), I handled these use cases using the Iodine.run method, which defers tasks for later. The "saved" task is performed only after the workers are initialized and the event loop starts running, so it runs in each process (allowing you to start a new thread in each process).
i.e.
Iodine.run do
  Thread.start do
    twitter_stream.user do |object|
      # ...
    end
  end
end
I assume Puma has a similar solution. From what I understand of the Puma Clustered-Mode documentation, adding the following block to your config/puma.rb might help:
# config/puma.rb
on_worker_boot do
  Thread.start do
    twitter_stream.user do |object|
      # ...
    end
  end
end
Good luck!
EDIT: relating to the comment about twitter_stream using ActiveRecord
From the comments I gather that the twitter_stream callbacks store data in the database as well as handle "push" events or notices.
Although these two concerns are connected, they are very different from each other.
For example, the twitter_stream callbacks should only store data in the database once. Even if your application grows to a billion users, you will only need to save the data in the database once.
This means that the twitter_stream callbacks should have their own dedicated process that runs only once, possibly separate from the main application.
At first, and as long as you limit your application to a single instance (only one server/application running), you might use fork together with the initializer/twitter.rb script, i.e.:
### START TWITTER PROCESS ### if production
if Rails.env.production?
  puts 'Starting Twitter Stream...'
  Process.fork do
    twitter_stream.user do |object|
      # ...
    end
  end
end
On the other hand, notifications should be addressed to a specific user on a specific connection owned by a specific process.
Hence, notifications should be a separate concern from the twitter_stream database update, and they should run in the background of every process, using the on_worker_boot (or Iodine.run) approach described above.
To achieve this, you might have on_worker_boot start a background thread that will listen to a pub/sub service such as Redis, while the twitter_stream callbacks "publish" updates to the pub/sub service.
This would allow each process to review the update and check if any of the connections it "owns" belongs to a client that should be notified of the update.
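As a rough sketch of that split, assuming the redis gem (the 'tweets' channel name and the notify_connected_clients helper are illustrative):
require 'redis'
require 'json'

# In the single, dedicated twitter_stream process: publish each tweet once.
publisher = Redis.new
twitter_stream.user do |object|
  publisher.publish('tweets', object.to_json) if object.is_a?(Twitter::Tweet)
end

# In each Puma worker (config/puma.rb): subscribe on a background thread
# and notify only the connections this process owns.
on_worker_boot do
  Thread.start do
    Redis.new.subscribe('tweets') do |on|
      on.message do |_channel, payload|
        notify_connected_clients(JSON.parse(payload)) # hypothetical helper
      end
    end
  end
end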
The way I'm reading your question, this doesn't look like an issue. A sleeping thread is different from a dead thread. Sleeping just means the thread is waiting idle, not consuming any CPU. If everything else is hooked up properly, then as soon as the Twitter API detects an event, it should wake the thread up, run whatever handler you've defined, and then go right back to sleep. Sleeping isn't "running in the background," but it is "waiting for something to happen (e.g. someone tweets #me) so I can run in the background."
A quick example to demonstrate this:
2.4.0 :001 > t = Thread.new { TCPServer.new(1234).accept ; puts "Got a connection! Dying..." }
=> #<Thread:0x007fa3941fed90#(irb):1 sleep>
2.4.0 :002 > t
=> #<Thread:0x007fa3941fed90#(irb):1 sleep>
2.4.0 :003 > t
=> #<Thread:0x007fa3941fed90#(irb):1 sleep>
2.4.0 :004 > TCPSocket.new 'localhost', 1234
=> #<TCPSocket:fd 35>
2.4.0 :005 > Got a connection! Dying...
t
=> #<Thread:0x007fa3941fed90#(irb):1 dead>
Sleeping just means "waiting for action."
Puma is a thread-based server, and is very particular about spinning threads up in its boot process, hence the warning about a thread started at app boot.
For what it's worth though, it's kind of weird to have a thread listening for updates from an API like that inside a webserver. Maybe you should look into having a worker handle Twitter events using something like Resque, as in the sketch below? Or maybe ActionCable is relevant to your use case?
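For reference, a rough Resque sketch of that idea (the queue name and job class are illustrative; the stream-consuming process would enqueue jobs instead of handling tweets inline):
require 'resque'
require 'json'

# Jobs run in separate Resque worker processes, so no long-lived
# thread needs to be started during Puma's boot.
class TweetJob
  @queue = :twitter

  def self.perform(payload)
    # handler from the question (here given a parsed hash, not a Twitter::Tweet)
    handle_tweet(JSON.parse(payload))
  end
end

# In the (single) stream-consuming process:
# Resque.enqueue(TweetJob, object.to_json) if object.is_a?(Twitter::Tweet)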

IPython.parallel client is hanging while waiting for result of map_async

I am running 7 worker processes on a single machine with 4 cores. I may have made a poor choice with this loop while waiting for the result of map_async:
while not result.ready():
    time.sleep(10)
    for out in result.stdout:
        print out
rec_file_list = result.get()
result.stdout keeps growing with all the printed output from the 7 processes running, and it caused the console that initiated the map to hang. The activity monitor on my MacBook Pro shows the 7 processes are still running, and the terminal running the Controller is still active. What are my options here? Is there any way to acquire the result once the processes have completed?
I found an answer:
Remote introspection of ASyncResult objects is possible from another client as long as a 'database backend' has been enabled by the controller with:
ipcontroller --dictdb # or --mongodb or --sqlitedb
Then, it is possible to create a new client instance and retrieve the results with:
client.get_result(task_id)
where the task_ids can be retrieved with:
client.hub_history()
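Putting those pieces together, a rough sketch of recovering results from a fresh client (IPython 2.x-era IPython.parallel API, matching the code elsewhere in this question):
from IPython.parallel import Client

client = Client()                    # connects to the running controller
for task_id in client.hub_history(): # msg_ids recorded by the hub
    ar = client.get_result(task_id)  # AsyncResult for that msg_id
    if ar.ready():
        print ar.get()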
Also, a simple way to avoid the unbounded output growth I encountered is to periodically print just the last few lines from each engine's stdout history, and to flush the buffer, like this:
from IPython.display import clear_output
import sys
import time

while not result.ready():
    clear_output()
    for stdout in result.stdout:
        if stdout:
            lines = stdout.split('\n')
            for line in lines[-4:-1]:
                if line:
                    print line
    sys.stdout.flush()
    time.sleep(30)

Neo4j JDBCCypherExecutor blocks thread

I am using Neo4j 2.1.5 and JDBCCypherExecutor to post my Cypher queries.
Often the Cypher executor thread gets stuck, making the app unusable after some time.
The only option at that point is to restart the Spark webapp.
Has anyone encountered this problem?
The jstack output for one of the blocked threads is:
"qtp1639509299-63" prio=10 tid=0x00007fe454001000 nid=0x1e0e waiting on condition [0x00007fe564fea000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000586cf6e88> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at org.apache.http.impl.conn.tsccm.WaitingThread.await(WaitingThread.java:158)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:403)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:300)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:224)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:391)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
at org.restlet.ext.httpclient.internal.HttpMethodCall.sendRequest(HttpMethodCall.java:336)
at org.restlet.engine.adapter.ClientAdapter.commit(ClientAdapter.java:114)
at org.restlet.engine.adapter.HttpClientHelper.handle(HttpClientHelper.java:112)
at org.restlet.Client.handle(Client.java:180)
at org.restlet.routing.Filter.doHandle(Filter.java:159)
at org.restlet.routing.Filter.handle(Filter.java:206)
at org.restlet.resource.ClientResource.handle(ClientResource.java:1136)
at org.restlet.resource.ClientResource.handleOutbound(ClientResource.java:1225)
at org.restlet.resource.ClientResource.handle(ClientResource.java:1068)
at org.restlet.resource.ClientResource.handle(ClientResource.java:1044)
at org.restlet.resource.ClientResource.post(ClientResource.java:1453)
at org.neo4j.jdbc.internal.rest.TransactionalQueryExecutor.post(TransactionalQueryExecutor.java:98)
at org.neo4j.jdbc.internal.rest.TransactionalQueryExecutor.commit(TransactionalQueryExecutor.java:133)
at org.neo4j.jdbc.internal.rest.TransactionalQueryExecutor.executeQueries(TransactionalQueryExecutor.java:204)
at org.neo4j.jdbc.internal.rest.TransactionalQueryExecutor.executeQuery(TransactionalQueryExecutor.java:214)
at org.neo4j.jdbc.internal.Neo4jConnection.executeQuery(Neo4jConnection.java:370)
at org.neo4j.jdbc.internal.Neo4jPreparedStatement.executeQuery(Neo4jPreparedStatement.java:48)
at com.zahoor.graph.executor.JdbcCypherExecutor.query(JdbcCypherExecutor.java:28)
Separate threads might be the issue here, as the JDBC driver keeps the transaction in a thread local, so if you spawn new threads it creates new transaction objects and new connections (if you don't reuse the same Connection instance).
And the default pooling of HttpClient is (afaik) 10 parallel connections.
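A rough sketch of one mitigation (the JDBC URL and class name are illustrative): have each thread reuse a single Connection instead of opening a new one per query, so the driver's underlying HttpClient pool is not exhausted:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CypherExecutor {
    // One Connection per thread, reused across queries, so spawned
    // threads stop allocating fresh connections from the HTTP pool.
    private static final ThreadLocal<Connection> CONNECTION =
        ThreadLocal.withInitial(() -> {
            try {
                return DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        });

    public static ResultSet query(String cypher) throws SQLException {
        // Statements are created per call; the Connection itself is shared.
        return CONNECTION.get().prepareStatement(cypher).executeQuery();
    }
}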
