My Kylin metadata is corrupt, so I removed all metadata and reinstalled Kylin on the same server.
I tried running:
$KYLIN_HOME/bin/sample.sh
It completed without any errors.
So I tried to create a simple cube with one fact table and two dimension tables.
But my cube build failed at its first step, with this error:
java.lang.NullPointerException
at org.apache.kylin.source.hive.CreateFlatHiveTableStep.getCubeSpecificConfig(CreateFlatHiveTableStep.java:100)
at org.apache.kylin.source.hive.CreateFlatHiveTableStep.doWork(CreateFlatHiveTableStep.java:105)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I had the same problem and fixed it.
The reason is that ZooKeeper still had a /kylin directory from the old installation. After I removed it, the cube built successfully:
1. Use zkCli.sh to connect to ZooKeeper.
2. Run rmr /kylin.
3. Restart Kylin.
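The three steps above can be sketched as a small shell session. This is a sketch, not a definitive recipe: the quorum address localhost:2181 and the zkCli.sh location are assumptions for a typical setup, it requires a live ZooKeeper, and on ZooKeeper 3.5+ the rmr command is replaced by deleteall:

```shell
# Remove the stale /kylin znode left behind by the old installation.
# Host/port are assumptions; use your own quorum address.
$ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181 <<'EOF'
ls /
rmr /kylin
quit
EOF

# Then restart Kylin so it recreates its ZooKeeper state from scratch.
$KYLIN_HOME/bin/kylin.sh stop
$KYLIN_HOME/bin/kylin.sh start
```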
I'm running a database on Neo4j v3.5.11 CE via Docker volume on AWS. I want to upgrade to 4.4.9, so I created a tar of ./graph.db and brought it back to my dev box. I extracted to /var/lib/neo4j/data/databases. I mounted it to a neo4j v3.5.11 container and it starts fine. I can see all the data via localhost:7474.
Next I try mounting to neo4j v4.0.0 via:
docker run -d -p 7474:7474 -p 7687:7687 -v /var/lib/neo4j/data:/var/lib/neo4j/data -v /var/lib/neo4j/plugins:/plugins -v /var/lib/neo4j/logs:/var/log/neo4j -e NEO4J_AUTH=none -e NEO4J_dbms_allow__upgrade=true --name neo4j neo4j:4.0.0
Neo4j fails: "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j." This is odd because the store was upgraded from 3.5.5 and has been running on 3.5.11; it was never touched by a newer version.
docker logs neo4j-apoc
Fetching versions.json for Plugin 'apoc' from https://neo4j-contrib.github.io/neo4j-apoc-procedures/versions.json
Installing Plugin 'apoc' from https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/4.0.0.7/apoc-4.0.0.7-all.jar to /plugins/apoc.jar
Applying default values for plugin apoc to neo4j.conf
Skipping dbms.security.procedures.unrestricted for plugin apoc because it is already set
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2022-09-10 14:18:32.888+0000 WARN Unrecognized setting. No declared setting with name: apoc.export.file.enabled
2022-09-10 14:18:32.892+0000 WARN Unrecognized setting. No declared setting with name: apoc.import.file.enabled
2022-09-10 14:18:32.893+0000 WARN Unrecognized setting. No declared setting with name: apoc.import.file.use_neo4j_config
2022-09-10 14:18:32.921+0000 INFO ======== Neo4j 4.0.0 ========
2022-09-10 14:18:32.934+0000 INFO Starting...
2022-09-10 14:18:48.713+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#123d7057' was successfully initialized, but failed to start. Please see the attached cause exception "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j.". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#123d7057' was successfully initialized, but failed to start. Please see the attached cause exception "Transaction logs contains entries with prefix 2, and the highest supported prefix is 1. This indicates that the log files originates from a newer version of neo4j.".
I tried a couple of things:
1.) Deleting the transaction logs (sudo rm graph.db/neostore.transaction.db.*). It throws the exact same transaction-log error, even though there are no transaction logs left in the directory.
2.) A database recovery, by adding this to the run command: -e NEO4J_unsupported_dbms_tx__log_fail__on__corrupted__log__files=false. This fails with "Unknown store version 'SF4.3.0'":
2022-09-10 15:39:48.458+0000 INFO Starting...
2022-09-10 15:40:34.529+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'".
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabaseService#2a39aa2b' was successfully initialized, but failed to start. Please see the attached cause exception "Unknown store version 'SF4.3.0'".
Any ideas appreciated! Thanks!
Deleting transaction logs is never a good idea. What you want to do instead is set the configuration option:
dbms.allow_upgrade=true
Then it should work; the docs state that you can upgrade from the latest 3.5 release directly to Neo4j 4.0.0.
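In the official Docker image, configuration settings are passed as environment variables: dots become single underscores and literal underscores are doubled, so dbms.allow_upgrade=true becomes NEO4J_dbms_allow__upgrade=true. A sketch of the run command with the flag set (image tag and host paths are taken from the question; note the official image expects the store under the /data mount, not /var/lib/neo4j/data):

```shell
# Run the 4.0.0 image with the store-upgrade flag enabled.
# NEO4J_dbms_allow__upgrade maps to dbms.allow_upgrade=true inside the container.
docker run -d \
  -p 7474:7474 -p 7687:7687 \
  -v /var/lib/neo4j/data:/data \
  -e NEO4J_AUTH=none \
  -e NEO4J_dbms_allow__upgrade=true \
  --name neo4j neo4j:4.0.0
```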
I'm trying to install Apache Kylin in Ubuntu 16.04.
I installed:
hadoop 3.1.2 in pseudo distributed mode (fs.default.name: hdfs://localhost:9000)
apache hive 3.1.2 and db derby 10.14.2.0 (config hive use db derby)
hbase 1.4.10 in pseudo distributed mode (using hdfs://localhost:9000/hbase)
but when I run:
hbase shell
hbase(main):001:0> list
I get this error:
ERROR: Can't get master address from ZooKeeper; znode data == null
and when I run:
ssh localhost
kylin.sh start
I get this error:
2019-09-27 09:26:41,029 INFO [main] client.ZooKeeperRegistry:107 : ClusterId read in ZooKeeper is null
Exception in thread "main" java.lang.IllegalArgumentException: Failed to find metadata store by url: kylin_metadata#hbase
at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:99)
at org.apache.kylin.common.persistence.ResourceStore.getStore(ResourceStore.java:111)
at org.apache.kylin.rest.service.AclTableMigrationTool.checkIfNeedMigrate(AclTableMigrationTool.java:99)
at org.apache.kylin.tool.AclTableMigrationCLI.main(AclTableMigrationCLI.java:43)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.kylin.common.persistence.ResourceStore.createResourceStore(ResourceStore.java:92)
... 3 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:372)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:275)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:310)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:639)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:409)
at org.apache.kylin.storage.hbase.HBaseConnection.tableExists(HBaseConnection.java:281)
at org.apache.kylin.storage.hbase.HBaseConnection.createHTableIfNeeded(HBaseConnection.java:306)
at org.apache.kylin.storage.hbase.HBaseResourceStore.createHTableIfNeeded(HBaseResourceStore.java:114)
at org.apache.kylin.storage.hbase.HBaseResourceStore.<init>(HBaseResourceStore.java:88)
... 8 more
From the error, it is obvious that your HBase is not running properly; please make sure HBase is healthy first.
Hadoop has a long history and it is complex, so we recommend you use a well-tested Hadoop distribution such as CDH or HDP, rather than a custom Hadoop environment.
If you are doing a PoC and want to learn Kylin quickly, please use the Docker image: https://hub.docker.com/r/apachekylin/apache-kylin-standalone. If you want to use Kylin in a more formal Hadoop environment, please use a CDH 5.x or HDP 2.x distribution.
If you have more questions, please contact the Kylin community via the user mailing list.
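Before retrying Kylin, you can confirm that HBase is actually up. A sketch of a quick health check, assuming a default pseudo-distributed setup with ZooKeeper listening on localhost:2181 and live daemons to probe:

```shell
# The HBase daemons should appear in the JVM process list.
jps | grep -E 'HMaster|HRegionServer'

# ZooKeeper should answer the four-letter "ruok" probe with "imok".
echo ruok | nc localhost 2181

# HBase ships its own ZooKeeper CLI; once HMaster has registered,
# the /hbase znode should exist and list a master entry.
hbase zkcli ls /hbase
```

If the /hbase znode is missing or empty, that matches the "znode data == null" error from the hbase shell, and the master has not registered with ZooKeeper.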
I am using a Jenkins master on a Windows VM and trying to spin up an agent in a Linux container. Kindly help me resolve this.
Even some ideas or guidance would be a great help.
hudson.remoting.ChannelBuilder withJarCacheOrDefault
WARNING: Could not create jar cache. Running without cache.
java.io.IOException: Failed to initialize the default JAR Cache location
Caused by: java.nio.file.AccessDeniedException: /home/jenkins/? \
Caused by: java.lang.IllegalArgumentException: Root directory not writable: ?/.jenkins/cache/jars
This looks like a permission issue.
Does the Windows VM have write permission inside the Linux container?
Caused by: java.nio.file.AccessDeniedException: /home/jenkins/? \
Caused by: java.lang.IllegalArgumentException: Root directory not writable: ?/.jenkins/cache/jars
Did you verify in the Linux container that the path above exists and is writable?
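You can reproduce the check that fails in the stack trace directly in the container. A minimal sketch, assuming the default cache location of ~/.jenkins/cache/jars (adjust the path to your agent's home):

```shell
# The Remoting JAR cache root must exist and be writable by the agent user.
CACHE_DIR="${HOME}/.jenkins/cache/jars"

# Create the cache root if it is missing, then test writability.
mkdir -p "$CACHE_DIR"
if [ -w "$CACHE_DIR" ]; then
  echo "writable: $CACHE_DIR"
else
  echo "NOT writable: $CACHE_DIR" >&2
fi
```

If the directory turns out not to be writable, fixing ownership of the agent's home (e.g. chown -R to the agent user) is the usual remedy.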
I am having an issue switching between Neo4j Enterprise and Community editions. Since I was unable to do a GraphML import, I switched to Enterprise, where I can import GraphML databases. Once I am done and try to open the database created in the Enterprise edition with the Community edition, it gives this error:
org.neo4j.server.database.LifeCycleManagingDatabase was succesfully initialized but failed to start
Is it possible to open a DB created with the Enterprise edition in Community? What am I doing wrong here?
Here is the error I get when opening the DB from Java:
Exception in thread "main" java.lang.RuntimeException: Error starting org.neo4j.kernel.EmbeddedGraphDatabase, D:\roshni\graph.db
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:314)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:59)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newDatabase(GraphDatabaseFactory.java:107)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:94)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:176)
at org.neo4j.graphdb.factory.GraphDatabaseFactory.newEmbeddedDatabase(GraphDatabaseFactory.java:66)
at Testing.main(Testing.java:15)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.state.DataSourceManager#f1cb476' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:499)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:108)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:309)
... 6 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.NeoStoreDataSource#2ad13d80' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:499)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:108)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:117)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:493)
... 8 more
Caused by: org.neo4j.kernel.impl.storemigration.StoreUpgrader$UpgradingStoreVersionNotFoundException: 'neostore.nodestore.db' does not contain a store version, please ensure that the original database was shut down in a clean state.
at org.neo4j.kernel.impl.storemigration.UpgradableDatabase.checkUpgradeable(UpgradableDatabase.java:86)
at org.neo4j.kernel.impl.storemigration.StoreMigrator.needsMigration(StoreMigrator.java:158)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.getParticipantsEagerToMigrate(StoreUpgrader.java:259)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.migrateIfNeeded(StoreUpgrader.java:134)
at org.neo4j.kernel.NeoStoreDataSource.upgradeStore(NeoStoreDataSource.java:532)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:493)
... 11 more
It's better to have the same version of Neo4j Community and Enterprise.
If your Enterprise version is older than your Community version, I suggest changing the following property to upgrade the datastore:
conf/neo4j.properties
allow_store_upgrade=true
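A minimal sketch of the relevant fragment of conf/neo4j.properties (the property name shown is the Neo4j 2.x-era setting; newer versions rename it):

```properties
# conf/neo4j.properties
# Allow the store files to be migrated to this version's format on startup.
allow_store_upgrade=true
```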
In addition to what @MicTech said, you cannot downgrade a datastore; Neo4j only supports upgrades. So when moving from Community to Enterprise, the Enterprise variant needs to be the same version or a newer one.
Before doing a store upgrade, it's crucial to do a clean shutdown with the old version.
As per their documentation, on Ubuntu and Debian you can do an upgrade as follows (for Neo4j 2.3.1).
The Neo4j Debian repository can be used on Debian or Ubuntu.
To use the repository follow these steps:
wget -O - https://debian.neo4j.org/neotechnology.gpg.key | sudo apt-key add -
echo 'deb http://debian.neo4j.org/repo stable/' > /tmp/neo4j.list
sudo mv /tmp/neo4j.list /etc/apt/sources.list.d
sudo apt-get update
Installing Neo4j
To install the latest Neo4j Community Edition:
sudo apt-get install neo4j
To install the latest Neo4j Enterprise Edition:
sudo apt-get install neo4j-enterprise
The installation process will guide you through the upgrade.
I am getting mad with this problem and I have no idea how to solve it.
We are trying to trigger Jenkins builds from hooks on a Windows Central repository. This is actually working on an old Jenkins server (LTS 1.580.1).
The way we did it before was calling Jenkins CLI with the SSH private key stored on a file.
Here is the weird thing:
C:\Users\Username\jenkins>java -jar jenkins-cli.jar -s http://hostname:8080 -i ci.key list-jobs
hudson.security.AccessDeniedException2: jenkins_ci is missing the Overall/Read permission
at hudson.security.ACL.checkPermission(ACL.java:58)
at hudson.model.Node.checkPermission(Node.java:417)
at hudson.cli.CLICommand.main(CLICommand.java:236)
at hudson.cli.CliManagerImpl.main(CliManagerImpl.java:92)
at sun.reflect.GeneratedMethodAccessor345.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:320)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:295)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:254)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:324)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at hudson.cli.CliManagerImpl$1.call(CliManagerImpl.java:63)
at hudson.remoting.CallableDecoratorAdapter.call(CallableDecoratorAdapter.java:18)
at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The jenkins_ci user is an Active Directory service account which has mostly worked with everything. In the Jenkins security matrix I have the same permissions as this service account.
When I use my own SSH key and run exactly the same command, it works like a charm.
If I run who-am-i, it says "jenkins_ci", BUT if I change the Anonymous permissions, then jenkins_ci starts to work.
It seems that Jenkins is not reading the defined user permissions and is using the Anonymous ones instead.
Any ideas how to make it work? Is this one a bug that I should report to Jenkins or am I missing anything?
Thanks!
OK, after hours and hours of working on it, I had a "happy idea" and it worked.
Our Jenkins is authenticating against Active Directory using LDAP.
Somehow, the user created by Jenkins (and its user folder) was "jenkins_ci" (lowercase), while our Active Directory account is "JENKINS_CI" (uppercase).
It seems that Jenkins security is case-sensitive somehow.
I stopped Jenkins, removed the user folder on the host, and started Jenkins again.
The new folder is now called JENKINS_CI, and the CLI is working.
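The recovery described above can be sketched as a few commands. These are assumptions, not part of the original answer: a JENKINS_HOME of /var/lib/jenkins, a systemd unit named jenkins, and user records stored under users/:

```shell
# Stop Jenkins before touching its home directory.
sudo systemctl stop jenkins

# Move the stale lowercase user record aside rather than deleting it,
# so it can be restored if something goes wrong.
sudo mv /var/lib/jenkins/users/jenkins_ci /tmp/jenkins_ci.bak

sudo systemctl start jenkins
# Log in once as JENKINS_CI; Jenkins recreates the folder with the AD casing.
```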
java -jar jenkins-cli.jar -s http://server get-job myjob > myjob.xml
I am able to run the above command by following this link:
https://wiki.jenkins-ci.org/display/JENKINS/Disable+security