Can't connect to Neo4j server locally on Ubuntu VPS

Recently I've reinstalled my VPS and have a fresh install of Neo4j on it.
I'm using PuTTY to connect from my machine, tunneling port 7474 as I've done in the past. I'm new to Neo4j 3.2 and am getting this error when I try to connect to the server in the Neo4j browser:
N/A: WebSocket connection failure. Due to security constraints in your
web browser, the reason for the failure is not available to this Neo4j
Driver.
After trying a lot of suggestions from loosely related topics, I ended up allowing remote connections and discovered that when I access the browser remotely, e.g. http://my_vps_ip:7474/browser/, I have no issues at all.
This is the output of neo4j status:
● neo4j.service - Neo4j Graph Database
Loaded: loaded (/lib/systemd/system/neo4j.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2017-05-12 04:47:11 CEST; 2h 1min ago
Main PID: 17040 (java)
Tasks: 38
Memory: 272.1M
CPU: 1min 6.731s
CGroup: /system.slice/neo4j.service
└─17040 /usr/bin/java -cp /var/lib/neo4j/plugins:/etc/neo4j:/usr/share/neo4j/lib/*:/var/lib/neo4j/plugins/* -server -XX:
May 12 04:47:11 vps276997 neo4j[17040]: import: /var/lib/neo4j/import
May 12 04:47:11 vps276997 neo4j[17040]: data: /var/lib/neo4j/data
May 12 04:47:11 vps276997 neo4j[17040]: certificates: /var/lib/neo4j/certificates
May 12 04:47:11 vps276997 neo4j[17040]: run: /var/run/neo4j
May 12 04:47:11 vps276997 neo4j[17040]: Starting Neo4j.
May 12 04:47:12 vps276997 neo4j[17040]: 2017-05-12 02:47:12.417+0000 INFO ======== Neo4j 3.2.0 ========
May 12 04:47:12 vps276997 neo4j[17040]: 2017-05-12 02:47:12.844+0000 INFO Starting...
May 12 04:47:13 vps276997 neo4j[17040]: 2017-05-12 02:47:13.950+0000 INFO Bolt enabled on 0.0.0.0:7687.
May 12 04:47:18 vps276997 neo4j[17040]: 2017-05-12 02:47:18.196+0000 INFO Started.
May 12 04:47:20 vps276997 neo4j[17040]: 2017-05-12 02:47:20.274+0000 INFO Remote interface available at http://localhost:7474/
Any ideas why this might be happening?

Please ensure that public access to port 7687 is enabled in your neo4j.conf file. In the latest version, it should be these two lines in neo4j.conf:
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=0.0.0.0:7687
That is because Neo4j's Bolt protocol uses port 7687: the browser UI is served over 7474, but the queries themselves go over a Bolt WebSocket connection to 7687, which is why the page loads while the connection fails.
Also ensure that port 7687 on your instance is exposed to the public. If you are using AWS EC2, set the rule's protocol to TCP, because Bolt runs over TCP.
If you are using Docker or Kubernetes, also ensure that you expose all ports (7474, 7473, and 7687 by default) in your containers or Kubernetes service.
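Alternatively, if you want to keep everything behind SSH instead of exposing the ports publicly, forward both the HTTP and Bolt ports. With plain ssh that would be something like the following (the PuTTY equivalent is two tunnel entries, one for 7474 and one for 7687; user and host are placeholders):
ssh -L 7474:localhost:7474 -L 7687:localhost:7687 your_user@my_vps_ip
The browser at http://localhost:7474 can then reach Bolt on localhost:7687 through the second tunnel.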

There is a Neo4j knowledge base article about this exact issue.
Quote:
This error can be resolved by editing the file
$NEO4J_HOME/conf/neo4j.conf and uncommenting:
# To have Bolt accept non-local connections, uncomment this line:
dbms.connector.bolt.address=0.0.0.0:7687
(Note: on Neo4j 3.1 and later the setting is named dbms.connector.bolt.listen_address, as in the answer above; the address form is from older 3.0.x configs.)

Related

UWSGI Works Within Network But Not Over Domain

I have an RPi running NGINX and uWSGI, serving a web page and an API via uWSGI.
The web page works fine, both locally and from the web.
The API works locally, but not via the web. My guess is it's either the router or the NGINX configuration.
I am using Cloudflare for the DNS, and all appears fine there.
I can GET/POST locally using Postman, but not via the web address. I would greatly appreciate any ideas on where to look.
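For reference, the relevant NGINX block looks roughly like this (simplified and illustrative rather than my exact config; the :9090 upstream matches the uWSGI HTTP router in the log below):
server {
    listen 80;
    server_name xxx.xxx;

    # static web page served directly by NGINX
    location / {
        root /var/www/xxx.xxx/public;
    }

    # API proxied to the uWSGI HTTP router bound on :9090
    location /api/ {
        proxy_pass http://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}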
Output from uwsgi is:
*** Starting uWSGI 2.0.20 (32bit) on [Sat May 14 12:35:08 2022] ***
compiled with version: 8.3.0 on 06 October 2021 05:59:48
os: Linux-5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022
nodename: xxx
machine: armv7l
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /var/www/xxx.xxx/public
detected binary path: /home/pi/.local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 12393
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 3176)
uwsgi socket 0 bound to TCP address 127.0.0.1:34881 (port auto-assigned) fd 3
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0xd5c950
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64408 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
<<<<<<<<<<<<<<<< Loaded script >>>>>>>>>>>>>>>>
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xd5c950 pid: 3175 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 3175, cores: 1)

XGBoost model failed due to Closing connection _sid_af1c at exit

We use an XGBoost model for regression prediction, with grid search for hyperparameter tuning.
We run this model on a 90GB H2O cluster. The process had been running for over 1.2 years, but suddenly it stopped with "Closing connection _sid_af1c at exit".
The training data set is 800,000 rows; because of this error we decreased it to 500,000, but the same error occurred.
ntrees - 300, 400
depth - 8, 10
variables - 382
I have attached the H2O memory log and our application error log. Could you please help us fix this issue?
----------------------------------------H2o Log [Start]----------------------
We start H2O as a 2-node cluster, but the H2O log below was created on one node.
INFO water.default: ----- H2O started -----
INFO water.default: Build git branch: master
INFO water.default: Build git hash: 0588cccd72a7dc1274a83c30c4ae4161b92d9911
INFO water.default: Build git describe: jenkins-master-5236-4-g0588ccc
INFO water.default: Build project version: 3.33.0.5237
INFO water.default: Build age: 1 year, 3 months and 17 days
INFO water.default: Built by: 'jenkins'
INFO water.default: Built on: '2020-10-27 19:21:29'
WARN water.default:
WARN water.default: *** Your H2O version is too old! Please download the latest version from http://h2o.ai/download/ ***
WARN water.default:
INFO water.default: Found H2O Core extensions: [XGBoost, KrbStandalone]
INFO water.default: Processed H2O arguments: [-flatfile, /usr/local/h2o/flatfile.txt, -port, 54321]
INFO water.default: Java availableProcessors: 20
INFO water.default: Java heap totalMemory: 962.5 MB
INFO water.default: Java heap maxMemory: 42.67 GB
INFO water.default: Java version: Java 1.8.0_262 (from Oracle Corporation)
INFO water.default: JVM launch parameters: [-Xmx48g]
INFO water.default: JVM process id: 83043@masterb.xxxxx.com
INFO water.default: OS version: Linux 3.10.0-1127.10.1.el7.x86_64 (amd64)
INFO water.default: Machine physical memory: 62.74 GB
INFO water.default: Machine locale: en_US
INFO water.default: X-h2o-cluster-id: 1644769990156
INFO water.default: User name: 'root'
INFO water.default: IPv6 stack selected: false
INFO water.default: Possible IP Address: ens192 (ens192), xxxxxxxxxxxxxxxxxxxx
INFO water.default: Possible IP Address: ens192 (ens192), xxxxxxxxxxx
INFO water.default: Possible IP Address: lo (lo), 0:0:0:0:0:0:0:1%lo
INFO water.default: Possible IP Address: lo (lo), 127.0.0.1
INFO water.default: H2O node running in unencrypted mode.
INFO water.default: Internal communication uses port: 54322
INFO water.default: Listening for HTTP and REST traffic on http://xxxxxxxxxxxx:54321/
INFO water.default: H2O cloud name: 'root' on /xxxxxxxxxxxx:54321, discovery address /xxxxxxxxxxxx:57653
INFO water.default: If you have trouble connecting, try SSH tunneling from your local machine (e.g., via port 55555):
INFO water.default: 1. Open a terminal and run 'ssh -L 55555:localhost:54321 root@xxxxxxxxxxxx'
INFO water.default: 2. Point your browser to http://localhost:55555
INFO water.default: Log dir: '/tmp/h2o-root/h2ologs'
INFO water.default: Cur dir: '/usr/local/h2o/h2o-3.33.0.5237'
INFO water.default: Subsystem for distributed import from HTTP/HTTPS successfully initialized
INFO water.default: HDFS subsystem successfully initialized
INFO water.default: S3 subsystem successfully initialized
INFO water.default: GCS subsystem successfully initialized
INFO water.default: Flow dir: '/root/h2oflows'
INFO water.default: Cloud of size 1 formed [/xxxxxxxxxxxx:54321]
INFO water.default: Registered parsers: [GUESS, ARFF, XLS, SVMLight, AVRO, PARQUET, CSV]
INFO water.default: XGBoost extension initialized
INFO water.default: KrbStandalone extension initialized
INFO water.default: Registered 2 core extensions in: 2632ms
INFO water.default: Registered H2O core extensions: [XGBoost, KrbStandalone]
INFO hex.tree.xgboost.XGBoostExtension: Found XGBoost backend with library: xgboost4j_gpu
INFO hex.tree.xgboost.XGBoostExtension: XGBoost supported backends: [WITH_GPU, WITH_OMP]
INFO water.default: Registered: 217 REST APIs in: 353ms
INFO water.default: Registered REST API extensions: [Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4]
INFO water.default: Registered: 291 schemas in 112ms
INFO water.default: H2O started in 4612ms
INFO water.default:
INFO water.default: Open H2O Flow in your web browser: http://xxxxxxxxxxxx:54321
INFO water.default:
INFO water.default: Cloud of size 2 formed [mastera.xxxxxxxxxxxx.com/xxxxxxxxxxxx:54321, masterb.xxxxxxxxxxxx.com/xxxxxxxxxxxx:54321]
INFO water.default: Locking cloud to new members, because water.rapids.Session$1
INFO hex.tree.xgboost.task.XGBoostUpdater: Initial Booster created, size=448
ERROR water.default: Got IO error when sending a batch of bytes:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:51)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:468)
at water.H2ONode$SmallMessagesSendThread.sendBuffer(H2ONode.java:605)
at water.H2ONode$SmallMessagesSendThread.run(H2ONode.java:588)
----------------------------------------H2o Log [End]--------------------------------
----------------------------------------Application Log [Start]----------------------
Checking whether there is an H2O instance running at http://localhost:54321 . connected.
Warning: Your H2O cluster version is too old (1 year, 3 months and 17 days)! Please download and install the latest version from http://h2o.ai/download/
-------------------------- ------------------------------------------------------------------
H2O_cluster_uptime: 19 mins 49 secs
H2O_cluster_timezone: Asia/Colombo
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.33.0.5237
H2O_cluster_version_age: 1 year, 3 months and 17 days !!!
H2O_cluster_name: root
H2O_cluster_total_nodes: 2
H2O_cluster_free_memory: 84.1 Gb
H2O_cluster_total_cores: 40
H2O_cluster_allowed_cores: 40
H2O_cluster_status: locked, healthy
H2O_connection_url: http://localhost:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.0 final
-------------------------- ------------------------------------------------------------------
release memory here...
Checking whether there is an H2O instance running at http://localhost:54321 . connected.
Warning: Your H2O cluster version is too old (1 year, 3 months and 17 days)! Please download and install the latest version from http://h2o.ai/download/
-------------------------- ------------------------------------------------------------------
H2O_cluster_uptime: 19 mins 49 secs
H2O_cluster_timezone: Asia/Colombo
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.33.0.5237
H2O_cluster_version_age: 1 year, 3 months and 17 days !!!
H2O_cluster_name: root
H2O_cluster_total_nodes: 2
H2O_cluster_free_memory: 84.1 Gb
H2O_cluster_total_cores: 40
H2O_cluster_allowed_cores: 40
H2O_cluster_status: locked, healthy
H2O_connection_url: http://localhost:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.0 final
-------------------------- ------------------------------------------------------------------
Parse progress: |█████████████████████████████████████████████████████████| 100%
xgboost Grid Build progress: |████████Closing connection _sid_af1c at exit
H2O session _sid_af1c was not closed properly.
Closing connection _sid_9313 at exit
H2O session _sid_9313 was not closed properly.
----------------------------------------Application Log [End]----------------------
This typically means one of the nodes crashed. It can happen for many different reasons - memory is the most common one.
I see your machine has about 64GB of physical memory and H2O is getting 48GB of that. XGBoost runs in native memory, not in the JVM heap. For XGBoost we recommend splitting the physical memory 50-50 between H2O and XGBoost.
You are running a development version of H2O (3.33) - I suggest upgrading to the latest stable release.
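For example, on each of your 64GB nodes, a launch line along these lines (based on the arguments visible in your log; the exact -Xmx value is a starting point, not a hard rule) would leave roughly half of the physical memory for XGBoost's native allocations:
java -Xmx30g -jar h2o.jar -flatfile /usr/local/h2o/flatfile.txt -port 54321
The key point is that the JVM heap plus XGBoost's off-heap usage must fit within physical memory, otherwise the OS can kill the node mid-training.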

Apache NiFi is not starting up

I am trying to start Apache NiFi version 1.2.0 on a Windows 8 machine. It used to start properly, but after I restarted the system NiFi does not start at all. When I check the status, I keep getting "Apache NiFi not running".
Below are the logs from the nifi.bootstrap.log file:
2017-07-05 15:41:57,105 WARN [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the
owner can read pid file E:\softwares\nifi-1.2.0\bin\..\run\nifi.pid; this
may allows others to have access to the key needed to communicate with NiFi.
Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,142 WARN [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the
owner can read status file E:\softwares\nifi-1.2.0\bin\..\run\nifi.status;
this may allows others to have access to the key needed to communicate with
NiFi. Permissions should be changed so that only the owner can read this
file
2017-07-05 15:41:57,168 INFO [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for
Bootstrap requests on port 50765
2017-07-05 15:43:12,077 ERROR [NiFi logging handler] org.apache.nifi.StdErr
Failed to start web server: Unable to start Flow Controller.
2017-07-05 15:43:12,078 ERROR [NiFi logging handler] org.apache.nifi.StdErr
Shutting down...
2017-07-05 15:43:14,501 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi
never started. Will not restart NiFi
Stack trace from nifi.app.log:
2017-07-05 15:43:12,077 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:452)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:419)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:695)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead
at org.apache.nifi.repository.schema.SchemaRecordReader.readRecord(SchemaRecordReader.java:65)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeRecord(SchemaRepositoryRecordSerde.java:115)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:109)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:46)
at org.wali.MinimalLockingWriteAheadLog$Partition.recoverNextTransaction(MinimalLockingWriteAheadLog.java:1096)
at org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:459)
at org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.loadFlowFiles(WriteAheadFlowFileRepository.java:381)
at org.apache.nifi.controller.FlowController.initializeFlow(FlowController.java:712)
at org.apache.nifi.controller.StandardFlowService.initializeController(StandardFlowService.java:953)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:534)
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:72)
... 28 common frames omitted
Thanks in advance
After Googling this error ("Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead") I found that it indicates a partial write to the repositories.
Here are a couple of things you can check/try to bring your dataflow back online:
Check that your disks are not full.
Did you launch NiFi with the same user? Did you run it with administrator privileges?
You can back up/move your repositories and try to start NiFi with empty repositories; you will still have your dataflows, but any file that was being processed when you shut down will be gone. See the sketch below.
Could you please try that?
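Something like this, assuming the default repository locations from nifi.properties (move rather than delete, so you can roll back):
cd E:\softwares\nifi-1.2.0
move flowfile_repository flowfile_repository.bak
move content_repository content_repository.bak
move provenance_repository provenance_repository.bak
NiFi will recreate empty repositories on the next start. Your stack trace points at the write-ahead FlowFile repository, so flowfile_repository is the most likely culprit.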
I think the issue is an incompatible Java version; use Java 8.
If you haven't set JAVA_HOME, set it in your environment variables with a path like "C:\Program Files\Java\jdk1.8".
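For example, from a command prompt (the exact JDK folder name depends on the update you have installed):
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_131"
Open a new command prompt afterwards so the change is picked up, then start NiFi again.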
There is a Jira issue about NiFi failing to run with Java 9, and it is not resolved yet:
https://issues.apache.org/jira/browse/NIFI-4419

Strange behavior of neo4j-service on Debian 8.1

I have installed Neo4j on Debian 8.1 following these instructions: http://debian.neo4j.org/
Now if, as root, I start Neo4j with neo4j-service like this:
service neo4j-service start
sometimes it works correctly, but most of the time neo4j-service will time out. The interesting fact is that Neo4j is indeed started: I can go to the browser and run some queries. But neo4j-service tells me that it failed:
root@ns***:~# service neo4j-service start
Job for neo4j-service.service failed. See 'systemctl status neo4j-service.service' and 'journalctl -xn' for details.
root@ns***:~# systemctl status neo4j-service.service
● neo4j-service.service - LSB: Neo4j Graph Database server
Loaded: loaded (/etc/init.d/neo4j-service)
Active: failed (Result: timeout) since Fri 2015-10-16 19:03:08 CEST; 6min ago
Process: 24556 ExecStop=/etc/init.d/neo4j-service stop (code=exited, status=0/SUCCESS)
Process: 29730 ExecStart=/etc/init.d/neo4j-service start (code=killed, signal=TERM)
Oct 16 18:58:08 ns***.ip-91-***-***.eu neo4j-service[29730]: WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Oct 16 18:58:08 ns***.ip-91-***-***.eu neo4j-service[29730]: Starting Neo4j Server...WARNING: not changing user
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: neo4j-service.service start operation timed out. Terminating.
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: Failed to start LSB: Neo4j Graph Database server.
Oct 16 19:03:08 ns***.ip-91-***-***.eu systemd[1]: Unit neo4j-service.service entered failed state.
And sometimes it tells me that the service started correctly, but then fails to stop it.
Most of the time, I have to kill the process myself to "reset everything" correctly.
Do you know why this is happening ?
Are you aware of any issues with the neo4j-service on Debian 8.1 ?
This approach to running Neo4j is deprecated; you should use the neo4j command instead.
Or you can write your own service wrapper; for that I suggest using http://supervisord.org/ (see the sketch below).
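A minimal supervisord program section could look like this (the binary path depends on how you installed Neo4j; neo4j console keeps the process in the foreground, which is what supervisord expects):
[program:neo4j]
command=/usr/share/neo4j/bin/neo4j console
user=neo4j
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/neo4j.log
With this in place, supervisord also handles the restart-on-crash case, so you no longer need to kill the process by hand to reset things.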

WSO2: message broker, startup takes long (v 2.2.0)

We have installed WSO2 Message Broker v2.2.0 on SUSE 64-bit OS, single core. We have configured master-datasources.xml to point to an Oracle database. The startup of the MB takes minutes, especially around these log lines:
TID: [0] [MB] [2014-06-11 15:57:53,039] INFO {org.apache.cassandra.thrift.ThriftServer} - Listening for thrift clients... {org.apache.cassandra.thrift.ThriftServer}
TID: [0] [MB] [2014-06-11 15:57:53,219] INFO {org.apache.cassandra.service.GCInspector} - GC for MarkSweepCompact: 407 ms for 1 collections, 60663688 used; max is 1037959168 {org.apache.cassandra.service.GCInspector}
TID: [0] [MB] [2014-06-11 15:58:39,137] WARN {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Waiting for required OSGi services: org.wso2.carbon.server.admin.common.IServerAdmin,org.wso2.carbon.throttling.agent.ThrottlingAgent, {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [MB] [2014-06-11 15:59:39,136] WARN {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Waiting for required OSGi services: org.wso2.carbon.server.admin.common.IServerAdmin,org.wso2.carbon.throttling.agent.ThrottlingAgent, {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
TID: [0] [MB] [2014-06-11 16:00:39,136] WARN {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Waiting for required OSGi services: org.wso2.carbon.server.admin.common.IServerAdmin,org.wso2.carbon.throttling.agent.ThrottlingAgent, {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent}
Is there a reason for this?
With WSO2 MB 2.2.0 we get these kinds of errors when the ZooKeeper/Cassandra server does not start properly. Ideally, if clustering is enabled, the ZooKeeper server (internal or external) should be started properly before MB starts.
Furthermore, if you are trying to run an MB cluster on a single machine and want to run two ZooKeeper nodes there, you will most probably end up with these OSGi-level errors. Please follow the blog post at http://indikasampath.blogspot.com/2014/05/wso2-message-broker-cluster-setup-in.html for configuration details on setting up a WSO2 Message Broker cluster on a single machine.
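A quick way to verify that ZooKeeper is actually up before starting the broker is its ruok four-letter command (assuming the default client port 2181):
echo ruok | nc localhost 2181
A healthy ZooKeeper node replies with imok; no reply means the broker will hang waiting for it, much like the startup delay above.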
