Error when running near indexer on localnet, fails to generate config.json

So I'm trying to run the indexer on localnet following the official tutorial: https://docs.near.org/docs/tutorials/near-indexer
However, when I run cargo run -- init to generate the localnet JSON config, I get this error:
Finished dev [unoptimized + debuginfo] target(s) in 17.62s
Running `target/debug/example-indexer init`
thread 'main' panicked at 'Failed to deserialize config: Error("expected value", line: 1, column: 1)', /home/francois/.cargo/git/checkouts/nearcore-5bf7818cf2261fd0/a44be20/nearcore/src/config.rs:499:39
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
At some point it seems the JSON is not created, or not created properly. The function crashing at config.rs line 499 is:
impl From<&str> for Config {
    fn from(content: &str) -> Self {
        serde_json::from_str(content).expect("Failed to deserialize config")
    }
}
It's quite difficult to debug since cargo run -- init uses internal nearcore functions (also, I'm new to Rust).
The config.json file is created, but it seems the permissions are not set properly by the script; the content of config.json is:
"<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message> ... "
If anyone from the community has encountered this problem or has a hint, it would be great! Thanks a lot!

The tutorial you referenced mentions a similar error and suggests the following:
Open your config.json located in the .near folder in the root of your home directory. ( ~/.near/config.json )
In this file, locate: "tracked_shards": [] and change the value to [0].
Save the file and try running your indexer again.
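For reference, after that change the relevant entry in ~/.near/config.json looks like this (all other fields omitted):
{
    "tracked_shards": [0]
}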

So I had the wrong config, with download_config: true.
It should be download_config: false for localnet use.

Related

Knife bootstrap failing through Jenkins execute shell

I'm trying to perform a "knife bootstrap" command through the Jenkins web UI execute shell, but I keep getting an error message.
This is the knife bootstrap command I'm using:
"knife bootstrap [the node's IP] --ssh-user ec2-user --sudo --identity-file "[my key to the node]" --node-name My123 --run-list 'role[role1]' "
and this is the error message:
" ERROR: Errno::ENOENT: No such file or directory # rb_sysopen - /etc/chef/validation.pem "
When I run the knife bootstrap command directly through the CLI, it works fine.
Any idea why it's not working from the Jenkins execute shell?
It is because the validation.pem file is missing; /etc/chef/validation.pem is the default path for the validation key. You can either set the path in your chef-repo/.chef/knife.rb file or place the key at the default location /etc/chef/validation.pem.
You can also regenerate the validation key from the web UI and replace the existing one; this should resolve your issue.
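If you go the knife.rb route, the relevant settings look roughly like this (the validator name and paths below are placeholders; also make sure the user Jenkins runs as can actually read the key):
    # chef-repo/.chef/knife.rb -- example values, adjust to your setup
    validation_client_name 'myorg-validator'
    validation_key         '/var/lib/jenkins/chef-repo/.chef/myorg-validator.pem'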

symfony/yaml backed symfony/config not parsing environment variables

I have recreated a simple example in this tiny github repo. I am attempting to use symfony/dependency-injection to configure monolog/monolog to write logs to php://stderr. I am using a yaml file called services.yml to configure dependency injection.
This all works fine if my yml file looks like this:
parameters:
    log.file: 'php://stderr'
    log.level: 'DEBUG'
services:
    stream_handler:
        class: \Monolog\Handler\StreamHandler
        arguments:
            - '%log.file%'
            - '%log.level%'
    log:
        class: \Monolog\Logger
        arguments: [ 'default', ['@stream_handler'] ]
However, my goal is to read the path of the log file and the log level from environment variables, $APP_LOG and LOG_LEVEL respectively. According to the Symfony documentation on external parameters, the correct way to do that in the services.yml file is like this:
parameters:
    log.file: '%env(APP_LOG)%'
    log.level: '%env(LOGGING_LEVEL)%'
In my sample app I verified PHP can read these environment variables with the following:
echo "Hello World!\n\n";
echo 'APP_LOG=' . (getenv('APP_LOG') ?? '__NULL__') . "\n";
echo 'LOG_LEVEL=' . (getenv('LOG_LEVEL') ?? '__NULL__') . "\n";
Which writes the following to the browser when I use my original services.yml with hard-coded values:
Hello World!
APP_LOG=php://stderr
LOG_LEVEL=debug
However, if I use the %env(VAR_NAME)% syntax in services.yml, I get the following error:
Fatal error: Uncaught UnexpectedValueException: The stream or file "env_PATH_a61e1e48db268605210ee2286597d6fb" could not be opened: failed to open stream: Permission denied in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php:107 Stack trace: #0 /var/www/vendor/monolog/monolog/src/Monolog/Handler/AbstractProcessingHandler.php(37): Monolog\Handler\StreamHandler->write(Array) #1 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(337): Monolog\Handler\AbstractProcessingHandler->handle(Array) #2 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(532): Monolog\Logger->addRecord(100, 'Initialized dep...', Array) #3 /var/www/html/index.php(17): Monolog\Logger->debug('Initialized dep...') #4 {main} thrown in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 107
What am I doing wrong?
OK, you need a few things here. First of all, you need version 3.3 of Symfony, which is still in beta; 3.2 was the released version when I encountered this. Second, you need to "compile" the environment variables.
Edit your composer.json with the following values and run composer update. You might need to update other dependencies. You can substitute ^3.3 with dev-master.
"symfony/config": "^3.3",
"symfony/console": "^3.3",
"symfony/dependency-injection": "^3.3",
"symfony/yaml": "^3.3",
You will likely have to do this for symfony/__WHATEVER__ if you have other symfony components.
Now, in your code, after you load your YAML configuration into your dependency container, you compile it.
So after your lines here (perhaps in bin/console):
$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . DIRECTORY_SEPARATOR . '..'));
$loader->load('services.yml');
Do this:
$container->compile(true);
Your IDE's intellisense might tell you compile() takes no parameters. That's OK: compile() grabs its argument indirectly via func_get_arg().
public function compile(/*$resolveEnvPlaceholders = false*/)
{
    if (1 <= func_num_args()) {
        $resolveEnvPlaceholders = func_get_arg(0);
    } else {
        . . .
    }
References
Github issue where this was discussed
Pull request to add compile(true)
Using this command after loading your services.yaml file should help:
$containerBuilder->compile(true);
Your file also gets validated by the configuration checks that this method performs. The parameter is $resolveEnvPlaceholders, which makes environment variables accessible to the YAML services configuration.
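As a quick sanity check, here is a minimal sketch tying the pieces together (it assumes the 'log' service from the question's services.yml and the Composer autoloader at its usual location):
    <?php
    // minimal sketch: load services.yml, compile with env-placeholder resolution,
    // then pull the logger out of the container
    use Symfony\Component\Config\FileLocator;
    use Symfony\Component\DependencyInjection\ContainerBuilder;
    use Symfony\Component\DependencyInjection\Loader\YamlFileLoader;

    require __DIR__ . '/vendor/autoload.php';

    $container = new ContainerBuilder();
    $loader = new YamlFileLoader($container, new FileLocator(__DIR__));
    $loader->load('services.yml');
    $container->compile(true);            // true => resolve %env(...)% placeholders

    $logger = $container->get('log');     // \Monolog\Logger wired in services.yml
    $logger->debug('Logging to ' . getenv('APP_LOG'));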

How do I configure SaltStack to transfer a file (or install a package) for the first time?

I am running two instances of RedHat. I have SaltMaster installed on one machine and SaltMinion installed on another. I am using a free version of Salt. I want to test SaltStack to do a basic configuration management task. If it can transfer a file from SaltMaster to SaltMinion, that would be great. If it can install Apache web server on SaltMinion, that would be great. Either task will help me learn. My learning goal is semi-flexible.
I can use salt '*' test.ping. The response is True. I tried this command: salt '*' state.apply
I got this error:
> hostname.fqdn:
> Data failed to compile:
> ----------
> No matching salt environment for environment 'qa' found
> ----------
> No matching sls found for 'qa1' in env 'qa'
> ----------
> No matching sls found for 'base1' in env 'base'
> ----------
> No matching salt environment for environment 'dev' found
> ----------
> Specified SLS base1 in saltenv dev is not available on the salt master or through a configured fileserver
I modified the /etc/salt/master file. I uncommented these lines:
fileserver_backend:
  - git
  - roots
I tried this command again: salt '*' state.apply
I received this error:
> [ERROR ] Error parsing configuration file: /etc/salt/master -
> expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in
> "<string>", line 547, column 1:
> fileserver_backend:
> ^
I have been following these directions here:
https://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
I created a webserver.sls file.
I inserted these lines as the content:
apache:            # ID declaration
  pkg:             # state declaration
    - installed    # function declaration
I do not see how the three lines in the directions above would be enough to configure SaltStack to work. Where would the apache installation media need to be? Where would the transfer happen from? Am I supposed to download the media to SaltMaster? I would assume so. But where would I put it? I have a satellite server for yum commands to work.
Alternatively, how do I get SaltStack to transfer a file from SaltMaster to SaltMinion?
The first error ([...]No matching sls found for 'qa1' in env 'qa'[...]) indicates that you have configured a lot of different environments (file_roots), which are not present on your master's filesystem. Your approach to solve this goes in the correct direction, but leads to this error:
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^
You should no longer be able to test.ping your minion, since the salt master probably is not running anymore. To solve it, just read the error message: it tells you which part of your salt master configuration file salt is unhappy with.
fileserver_backend configures which types of backend should be available. You should check the file_roots configuration to actually define which roots are available; roots refer to the salt state folders on your filesystem.
A very simple config might look like this:
file_roots:
  base:
    - /srv/salt
It assumes that /srv/salt is the root of your state tree, which effectively means that your webserver.sls should be located in this folder.
Your webserver.sls looks promising: it should install apache2 on a minion when you apply it.
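To apply it, you can target the state directly with salt '*' state.apply webserver, or wire it into a minimal top file so that a bare state.apply works (a sketch, assuming the /srv/salt root from above):
    # /srv/salt/top.sls -- apply webserver.sls to every minion
    base:
      '*':
        - webserver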
Managing configuration files on the master and transferring them to the minions is something salt can easily achieve. A simple state might look like:
/etc/myawesomeconfigurationfile.conf:
  file.managed:
    - source: salt://myawesomefile  # refers to /srv/salt/myawesomefile
    - user: root
    - group: root
    - mode: 640
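For a quick one-off transfer without writing a state (an alternative, not part of the original answer), Salt's cp execution module can also push a file from the master's file roots to a minion:
    salt '*' cp.get_file salt://myawesomefile /tmp/myawesomefile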
You also asked about media files that you want to manage. If you are talking about application-related data, it is not a good idea to use salt to move it around. IMO other approaches like NFS, GlusterFS or anything else that decouples user content from your application would be a better approach.

Yeoman - How to extract zipped files in generator?

I want to build a Yeoman generator that needs to unzip a file.
From their documentation, it seems this process is done using this.registerTransformStream(...). It says it accepts any gulp plugin, so I tried gulp-unzip (link).
Here's my code:
// index.js
...
writing: function() {
    var source = this.templatePath('zip'); // the folder where the zipped file is
    var destination = this.destinationRoot();
    this.fs.copy(source, destination);
    this.registerTransformStream(unzip());
}
...
The result seems promising at first: it lists all the files, but then I get an Error: write after end.
Here's the dump:
create license.txt
create readme.html
create config.php
...
...
events.js:141
throw er; // Unhandled 'error' event
^
Error: write after end
at writeAfterEnd (C:\Users\myname\Documents\project\generator-test\node_modules\gulp-unzip\node_modules\readable-stream\lib\_stream_writable.js:144:12)
at Transform.Writable.write (C:\Users\myname\Documents\project\generator-test\node_modules\gulp-unzip\node_modules\readable-stream\lib\_stream_writable.js:192:5)
at DestroyableTransform.ondata (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:531:20)
at emitOne (events.js:77:13)
at DestroyableTransform.emit (events.js:169:7)
at readableAddChunk (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:198:18)
at DestroyableTransform.Readable.push (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_readable.js:157:10)
at DestroyableTransform.Transform.push (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_transform.js:123:32)
at DestroyableTransform._transform (C:\Users\myname\Documents\project\generator-test\node_modules\mem-fs-editor\lib\actions\commit.js:34:12)
at DestroyableTransform.Transform._read (C:\Users\myname\Documents\project\generator-test\node_modules\through2\node_modules\readable-stream\lib\_stream_transform.js:159:10)
The destination folder is empty after this. It seems the stream tried to write the unzipped files but failed.
Has anyone solved this problem before? Or is there an alternative way using just the built-in fs?
Thanks a lot!
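One possible workaround (not from the original thread, and untested against this generator) is to skip the transform stream and unzip directly in the writing step with a standalone library such as adm-zip; a sketch, with the archive file name below as a placeholder:
    // index.js (sketch, same fragment style as above)
    var AdmZip = require('adm-zip'); // assumes adm-zip is installed as a dependency
    ...
    writing: function() {
        var source = this.templatePath('zip/archive.zip'); // hypothetical archive file
        var destination = this.destinationRoot();
        // extract the archive into the destination, overwriting existing files
        new AdmZip(source).extractAllTo(destination, true);
    }
    ...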

launch cassandra-cli error

I get the following errors when I try to run cassandra-cli.
manuzhang#manuzhang-U24E:~/git/cassandra-trunk$ bin/cassandra-cli -h localhost -p 9160
Column Family assumptions read from /home/manuzhang/.cassandra-cli/assumptions.json
Connected to: "Test Cluster" on localhost/9160
Welcome to Cassandra CLI version Unknown
Exception in thread "main" java.lang.AssertionError
at org.apache.cassandra.cli.CliClient.loadHelp(CliClient.java:178)
at org.apache.cassandra.cli.CliClient.getHelp(CliClient.java:171)
at org.apache.cassandra.cli.CliClient.printBanner(CliClient.java:197)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:312)
That line is:
final InputStream is = CliClient.class.getClassLoader().getResourceAsStream("org/apache/cassandra/cli/CliHelp.yaml");
assert is != null;
The file is actually located in $CASSANDRA_HOME/src/resources/org/apache/cassandra/cli.
I have run it successfully several times before.
Well, solved by running ant build in the terminal.
I think it's because I'm building from source and modify some code from time to time, but just adding several lines of comments cannot reproduce the problem.
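In other words, rebuilding the source checkout before launching the CLI (same command as in the question) resolves it:
    cd $CASSANDRA_HOME
    ant build
    bin/cassandra-cli -h localhost -p 9160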
