launch cassandra-cli error - ant

I get the following errors when I try to run cassandra-cli.
manuzhang#manuzhang-U24E:~/git/cassandra-trunk$ bin/cassandra-cli -h localhost -p 9160
Column Family assumptions read from /home/manuzhang/.cassandra-cli/assumptions.json
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version Unknown
Exception in thread "main" java.lang.AssertionError
at org.apache.cassandra.cli.CliClient.loadHelp(CliClient.java:178)
at org.apache.cassandra.cli.CliClient.getHelp(CliClient.java:171)
at org.apache.cassandra.cli.CliClient.printBanner(CliClient.java:197)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:312)
That line is:
final InputStream is = CliClient.class.getClassLoader().getResourceAsStream("org/apache/cassandra/cli/CliHelp.yaml");
assert is != null;
The file is actually located in $CASSANDRA_HOME/src/resources/org/apache/cassandra/cli.
I have run it successfully several times before.

Well, this was solved by running an ant build in the terminal.
I think it's because I'm building from source and modify the code from time to time,
but just adding a few lines of comments doesn't reproduce the problem.
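Roughly, the steps that worked were the following (just a sketch; the jar name under build/ is a guess and may differ in your checkout):
cd ~/git/cassandra-trunk
ant build
# optional sanity check: confirm CliHelp.yaml got packaged onto the classpath
jar tf build/apache-cassandra-*.jar | grep CliHelp.yaml
bin/cassandra-cli -h localhost -p 9160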

Related

Error when running near indexer localnet, fail to generate config.json

So I'm trying to run the indexer on localnet following the official tutorial https://docs.near.org/docs/tutorials/near-indexer
However, when I run cargo run -- init to generate the localnet JSON config, I get this error:
Finished dev [unoptimized + debuginfo] target(s) in 17.62s
Running `target/debug/example-indexer init`
thread 'main' panicked at 'Failed to deserialize config: Error("expected value", line: 1, column: 1)', /home/francois/.cargo/git/checkouts/nearcore-5bf7818cf2261fd0/a44be20/nearcore/src/config.rs:499:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
At some point it seems the JSON is not created, or not created properly. The function crashing at config.rs line 499 is:
impl From<&str> for Config {
    fn from(content: &str) -> Self {
        serde_json::from_str(content).expect("Failed to deserialize config")
    }
}
It's quite difficult to debug since cargo run -- init is calling into inner NEAR functions (also I'm new to Rust).
The config.json file is created, but it seems the permissions are not set properly by the script; the content of config.json is:
"<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message> ... "
If anyone from the community has encountered this problem or has a hint, it would be great! Thanks a lot!
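For anyone who wants to see the serde error in isolation (outside cargo run -- init), a tiny standalone Rust check like the one below should reproduce it; this is only a sketch, the config path is an assumption and serde_json needs to be a dependency in Cargo.toml:
fn main() {
    // Assumed location of the generated config; adjust to your own .near directory.
    let home = std::env::var("HOME").expect("HOME not set");
    let path = format!("{}/.near/config.json", home);
    let content = std::fs::read_to_string(&path).expect("failed to read config.json");
    // Parse as generic JSON to surface the same serde error without panicking.
    match serde_json::from_str::<serde_json::Value>(&content) {
        Ok(_) => println!("{} parsed fine", path),
        Err(e) => println!("failed to deserialize {}: {}", path, e),
    }
}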
In the tutorial you referenced, it mentions a similar error, and suggests the following:
Open your config.json located in the .near folder in the root of your home directory. ( ~/.near/config.json )
In this file, locate: "tracked_shards": [] and change the value to [0].
Save the file and try running your indexer again.
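For reference, after that edit the relevant line in ~/.near/config.json should read (everything else unchanged):
"tracked_shards": [0]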
So I had the wrong config: download_config was not set correctly, so the config was being fetched remotely (hence the AccessDenied response above).
It should be download_config: false for the localnet use case.

pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null

I am trying to run Hadoop using the Docker setup provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the services, and I am able to access the web UI and browse the file system at localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
from pyhdfs import HdfsClient

hdfs_client = HdfsClient(hosts = 'localhost:9870')
# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'
# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
    hdfs_client.mkdirs(output_hdfs_path)
hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen another post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a leading forward slash to the paths:
import pyhdfs

fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'
if not fs.exists(output_hdfs_path):
    fs.mkdirs(output_hdfs_path)
fs.create(output_hdfs_path + '/data.json', data = 'This is test.')

# check that it's present
list(fs.walk(output_hdfs_path))
# [('/path/to/test/dir', [], ['data.json'])]

CraftCMS exception on first install (HTTP 503 – ServiceUnavailableHttpException)

I'm trying to install CraftCMS for the first time, and appear to have gone through all the steps on the installation guide - https://docs.craftcms.com/v3/installation.html#step-1-download-craft - yet I'm getting an Exception.
HTTP 503 – Service Unavailable – craft\web\ServiceUnavailableHttpException
Here is the line (509 in /var/www/craft/vendor/craftcms/cms/src/web/Application.php) that's throwing the exception:
// Should they be accessing the installer?
if (!$isInstalled) {
    if (!$isCpRequest) {
        throw new ServiceUnavailableHttpException();
    }
Below is the call stack:
craft\web\ServiceUnavailableHttpException in /var/www/craft/vendor/craftcms/cms/src/web/Application.php:509
Stack trace:
#0 /var/www/craft/vendor/craftcms/cms/src/web/Application.php(184): craft\web\Application->_processInstallRequest(Object(craft\web\Request))
#1 /var/www/craft/vendor/yiisoft/yii2/base/Application.php(386): craft\web\Application->handleRequest(Object(craft\web\Request))
#2 /var/www/craft/web/index.php(21): yii\base\Application->run()
#3 {main}
I'm using v3.0.24 as far as I can see:
- Installing craftcms/cms (3.0.24): Downloading (100%)
As I haven't even got started with the CMS, I don't really know what more info to give - or where to go from here. The .env file has been copied, there really is no more instruction to do anything. Any ideas?
UPDATE
I've identified that this section (in /vendor/yiisoft/yii2/db/mysql/Schema.php) is returning an empty array:
protected function findTableNames($schema = '')
{
    $sql = 'SHOW TABLES';
    if ($schema !== '') {
        $sql .= ' FROM ' . $this->quoteSimpleTableName($schema);
    }
    return $this->db->createCommand($sql)->queryColumn();
}
The tables have been set up; I can see them in the MySQL console. My .env DB config settings seem totally fine too.
Try the following steps to install Craft 3 from the terminal.
Create a virtual host that points to the web directory of the project, then run:
composer create-project craftcms/craft
./craft setup/security-key
./craft setup
After completing the above steps, grant write permissions on the storage, config, web, modules, and templates folders.
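A rough sketch of that permissions step, assuming the web server runs as www-data and you are in the project root (user/group and folder names may differ in your setup):
sudo chown -R www-data:www-data storage config web modules templates
sudo chmod -R 775 storage config web modules templates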
Admin URL: http:///index.php/admin
For those creating a fresh install using Craft CMS Nitro and its nitro create command, don't forget to run the Setup Wizard as a final step, as described in Step 6: Run the Setup Wizard, from the Craft Docs.
This will populate the database with Craft's tables and whatnot, and should address the 503 error.

What does this Neo4j batch loader error number mean

I've been using the Neo4j batch loader for a while now and tonight started running into issues building my graph from a fresh database export. Running it yields the following:
> java -server -Xmx4G -jar ~/Dev/github.com/jexp/batch-import/target/batch-import-jar-with-dependencies.jar ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Usage: Importer data/dir nodes.csv relationships.csv [node_index node-index-name fulltext|exact nodes_index.csv rel_index rel-index-name fulltext|exact rels_index.csv ....]
Using: Importer ./graph.db nodes.csv rels.csv node_index entities exact entities_idx.csv
Using Existing Configuration File
........................
Importing 2412268 Nodes took 4 seconds
.....................
Total import time: 9 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=2412269
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:917)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:471)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:136)
at org.neo4j.batchimport.Importer.doImport(Importer.java:214)
at org.neo4j.batchimport.Importer.main(Importer.java:78)
I was able to successfully run the batch loader for the nodes.csv and rels.csv that are included in its own repository, so I'm thinking that the issue is somewhere in my rels.csv file. However, it's a pretty big file and I would like to know what id=2412269 means, as it seems like the best starting point for diagnosing the failure.
Any ideas?
This means that in the rels.csv file you are trying to create a relationship for a node referenced by id = 2412269, but no such node was created from your nodes.csv file.
After working through the issue with the author of the importer, it turned out that I had single, unescaped quotes in my nodes.csv file, so the rels.csv record was pointing to a node that could not be created from nodes.csv. Unfortunately, the error reported on the console was not exactly the error causing the issue.
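If you need to hunt for the same kind of problem, two quick checks (hypothetical commands, not part of the importer) are to compare the number of rows in nodes.csv against the node id in the exception, and to look for stray single quotes:
# node rows actually present (subtract one if there's a header line)
wc -l nodes.csv
# first few lines containing a single quote, with line numbers
grep -n "'" nodes.csv | head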

Flume NullPointerExceptions on checkpoint

I've set up a file-to-file source/sink, just as a test of basic Flume functionality.
I'm currently using the "exec" source, with the command being "tail -F mytmpfile".
In my script, I continuously echo "....." >> mytmpfile, so that the tail command constitutes a stream.
However, I've started seeing the following exception in the flume logs:
java.lang.IllegalStateException: Channel closed [channel=c1]. Due to java.lang.NullPointerException: null
at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:353)
at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:183)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.NullPointerException
at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:895)
at org.apache.flume.channel.file.Log.replay(Log.java:406)
at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:303)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:236)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
... 1 more
Any thoughts on where this NullPointerException is coming from? It appears from scanning the code that it may be related to a missing folder or directory, but I can't find the exact line on the GitHub branches.
This is using apache-flume-1.3.1.23-...
In the past I've had problems with file channels, and they've normally boiled down to two problems:
1) If you're running multiple agents on the same box, make sure you configure them to have separate dataDirs and checkpointDir (see the sketch after this list).
2) On Linux boxes, check that your tmpfs isn't near its capacity. If it's getting full, Flume will complain. Try stopping the Flume agent, unmounting tmpfs, enlarging it, remounting it, and restarting the agent.
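For point 1, a minimal sketch of what separate directories look like for two agents on one box (agent/channel names and paths here are just examples):
# each agent's file channel gets its own checkpoint and data directories
agent1.channels.c1.type = file
agent1.channels.c1.checkpointDir = /var/lib/flume/agent1/checkpoint
agent1.channels.c1.dataDirs = /var/lib/flume/agent1/data

agent2.channels.c1.type = file
agent2.channels.c1.checkpointDir = /var/lib/flume/agent2/checkpoint
agent2.channels.c1.dataDirs = /var/lib/flume/agent2/data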
